shap

Safe · ⚙️ External commands · 🌐 Network access

Explain model predictions with SHAP

Also available from: davila7

Machine learning models often act as black boxes. SHAP provides a unified framework to explain any model prediction by computing feature contributions using Shapley values from game theory. Use this skill to visualize feature importance, debug model behavior, and implement explainable AI.

Supports: Claude Code (CC)
🥉 72 Bronze
1. Download the skill ZIP
2. Upload in Claude — go to Settings → Capabilities → Skills → Upload skill
3. Toggle on and start using

Test it

Using "shap". Explain my XGBoost model's predictions for customer churn

Expected outcome:

  • Top drivers of churn: Contract Type (0.25), Monthly Charges (0.18), Tenure (0.15)
  • Customers without annual contracts are 25% more likely to churn
  • Higher monthly charges increase churn probability when tenure is low
  • Loyal customers (36+ months) have 60% lower churn risk

Using "shap". Why was this loan application denied?

Expected outcome:

  • Key factors reducing approval probability: High debt-to-income ratio (-0.22), Short employment history (-0.15), Low credit utilization ratio (-0.12)
  • Positive factors: Stable income (+0.08), Good payment history (+0.05)
  • Missing factors: No recent credit inquiries helped slightly (+0.03)

Security Audit

Safe
v4 • 1/17/2026

Documentation-only skill with no executable code. All 295 static findings are false positives: markdown code blocks flagged as shell backticks, 'SHAP' acronym flagged as SHA cryptographic algorithm, GitHub URL path component flagged as C2, and standard documentation URLs. This is a pure reference guide for the SHAP machine learning library.

  • Files scanned: 6
  • Lines analyzed: 4,006
  • Findings: 2
  • Total audits: 4

Audited by: claude

Quality Score

  • Architecture: 45
  • Maintainability: 100
  • Content: 87
  • Community: 30
  • Security: 100
  • Spec Compliance: 87

What You Can Build

Debug model behavior

Identify why models make errors by examining SHAP values for misclassified samples and detecting data leakage.

Add explanations to APIs

Integrate SHAP explanations into production prediction services with feature attribution for each prediction.

Validate model fairness

Check for biased predictions across demographic groups by comparing SHAP values for protected attributes.

Try These Prompts

Basic SHAP explanation
Generate a SHAP beeswarm plot to show feature importance for my XGBoost model trained on my tabular dataset. Show the top 15 most important features with their value distributions.
Individual prediction
Explain why my model predicted a specific outcome for a single sample. Generate a waterfall plot showing how each feature contributed to the prediction.
Model comparison
Compare feature importance between my Random Forest and XGBoost models using SHAP. Show which features are ranked differently between the two models.
Fairness analysis
Analyze my model for potential bias across gender and age groups. Compare SHAP values for the protected attributes and identify if any proxies exist.

Best Practices

  • Use TreeExplainer for tree-based models instead of KernelExplainer for faster computation
  • Start with global plots (beeswarm, bar) to understand overall patterns before examining individual predictions
  • Select 50-1000 representative background samples for accurate SHAP value computation

Avoid

  • Using KernelExplainer for tree models when TreeExplainer is available (much slower)
  • Interpreting SHAP values as causal relationships without domain validation
  • Ignoring model output type (log-odds vs probability) when interpreting SHAP values

Frequently Asked Questions

What is SHAP?
SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explain model predictions by computing each feature's contribution using Shapley values.
Which explainer should I use?
Use TreeExplainer for XGBoost/LightGBM, DeepExplainer for neural networks, LinearExplainer for linear models, and KernelExplainer only as a last resort.
Why are my SHAP values slow to compute?
KernelExplainer is slow. Switch to a specialized explainer for your model type or sample your data to 100-1000 rows for visualization.
What do positive and negative SHAP values mean?
Positive SHAP values push predictions higher; negative values push predictions lower. The magnitude shows the strength of each feature's impact.
Can SHAP detect data leakage?
Yes. Unexpectedly high feature importance on certain features may indicate data leakage or problematic features that should be removed.
Does SHAP work with all model types?
Yes. SHAP works with any model through KernelExplainer, but specialized explainers (Tree, Deep, Linear) are much faster and recommended.

Developer Details

File structure