
llm-evaluation

Safe

Evaluate LLM Applications with Comprehensive Metrics

Also available from: wshobson

Measuring LLM performance is complex and error-prone. This skill provides systematic evaluation frameworks combining automated metrics, human judgment, and statistical testing to validate AI application quality.

Supports: Claude Code (CC), Codex
🥉 74 Bronze
1. Download the skill ZIP
2. Import into Claude
   Go to Settings → Capabilities → Skills → Import a skill
3. Activate and start using

Test it

Using "llm-evaluation": Evaluate a summarization model using ROUGE metrics

Expected result:

ROUGE-1: 0.72, ROUGE-2: 0.58, ROUGE-L: 0.65 - Strong performance on unigram overlap with moderate bigram coherence
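Scores like those above can be reproduced with the open-source rouge-score package; this is a minimal sketch, and the reference and candidate texts are placeholders rather than part of the skill.

```python
# Sketch: computing ROUGE-1/2/L with the rouge-score package (pip install rouge-score).
from rouge_score import rouge_scorer

reference = "The committee approved the budget after a two-hour debate."   # gold summary (placeholder)
candidate = "The budget was approved by the committee following debate."   # model output (placeholder)

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for name, result in scores.items():
    # Each result carries precision, recall, and F1; F1 is what is usually reported.
    print(f"{name}: F1={result.fmeasure:.2f} (P={result.precision:.2f}, R={result.recall:.2f})")
```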

Using "llm-evaluation": Compare two responses using LLM-as-Judge

Expected result:

Winner: Response B (confidence: 8/10). Response B provides more accurate citations and better structured arguments, though both answers address the core question adequately.
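A pairwise judgment like the one above can be produced with a simple judge prompt. The sketch below uses the Anthropic Python SDK; the prompt wording, model name, and scoring scale are illustrative assumptions, not the skill's actual implementation.

```python
# Sketch of a pairwise LLM-as-Judge comparison with the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def judge(question: str, response_a: str, response_b: str) -> str:
    prompt = (
        f"Question:\n{question}\n\n"
        f"Response A:\n{response_a}\n\n"
        f"Response B:\n{response_b}\n\n"
        "Which response answers the question more accurately and completely? "
        "Reply with the winner (A or B), a confidence from 1-10, and a one-sentence rationale."
    )
    message = client.messages.create(
        model="claude-sonnet-4-5",  # assumed model id; substitute whichever judge model you use
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```

In practice the judge's verdict should itself be spot-checked against human labels, as the FAQ below notes.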

Using "llm-evaluation": Analyze A/B test results for statistical significance

Expected result:

Variant B shows 12 percent improvement over A with p-value 0.03. Result is statistically significant at alpha=0.05 with medium effect size (Cohen's d=0.54).
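Significance and effect-size numbers like these can be computed with SciPy and NumPy; the score arrays below are placeholders for per-example evaluation scores.

```python
# Sketch: Welch's t-test plus Cohen's d for two sets of per-example scores.
import numpy as np
from scipy import stats

scores_a = np.array([0.61, 0.58, 0.64, 0.60, 0.57, 0.63])  # placeholder scores, variant A
scores_b = np.array([0.68, 0.66, 0.71, 0.65, 0.69, 0.70])  # placeholder scores, variant B

t_stat, p_value = stats.ttest_ind(scores_b, scores_a, equal_var=False)

# Cohen's d using the pooled standard deviation.
pooled_sd = np.sqrt((scores_a.var(ddof=1) + scores_b.var(ddof=1)) / 2)
cohens_d = (scores_b.mean() - scores_a.mean()) / pooled_sd

print(f"p-value={p_value:.3f}, Cohen's d={cohens_d:.2f}")
print("Significant at alpha=0.05" if p_value < 0.05 else "Not significant at alpha=0.05")
```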

Security audit

Safe
v1 • 2/25/2026

This skill is documentation-only, containing Python code examples for LLM evaluation. All static analysis findings are false positives: Python code blocks were misidentified as Ruby/shell commands, and dictionary keys were incorrectly flagged as cryptographic operations. No executable code or security risks were detected.

Files analyzed: 1
Lines analyzed: 486
Findings: 0
Total audits: 1

No security issues found
Audited by: claude

Quality score

Architecture: 38
Maintainability: 100
Content: 87
Community: 50
Security: 100
Spec compliance: 91

What you can build

ML Engineer Validating Model Changes

Run comprehensive evaluation suites before deploying prompt or model updates to catch performance regressions early.

Product Team Comparing AI Vendors

Benchmark multiple LLM providers on domain-specific tasks to make data-driven vendor selection decisions.

Research Team Publishing Results

Generate statistically rigorous evaluation results with proper metrics and significance testing for academic publications.

Try these prompts

Basic Metric Selection
I need to evaluate an LLM that generates customer support responses. What metrics should I use and how do I implement them?
Build Evaluation Suite
Create an evaluation suite for my RAG application that measures accuracy, groundedness, and retrieval quality. Include both automated and human evaluation components.
A/B Test Analysis
I have evaluation scores from two prompt variants: Variant A [scores] and Variant B [scores]. Determine if the difference is statistically significant and calculate effect size.
Production Evaluation Pipeline
Design a CI/CD integration that runs regression detection on every model update, alerts on performance drops above 5 percent, and generates comparison reports against baseline.
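For the last prompt, the core of a regression gate might look like the sketch below; the file names and the metric dictionary layout are illustrative assumptions, not a prescribed format.

```python
# Sketch of a CI regression gate: compare current metrics against a stored baseline
# and fail the build if any metric drops by more than 5 percent.
import json
import sys

THRESHOLD = 0.05  # maximum tolerated relative drop per metric

def check_regression(baseline_path: str, current_path: str) -> int:
    with open(baseline_path) as f:
        baseline = json.load(f)   # e.g. {"rouge_l": 0.65, "groundedness": 0.82}
    with open(current_path) as f:
        current = json.load(f)

    failures = []
    for metric, base_value in baseline.items():
        new_value = current.get(metric)
        if new_value is None:
            failures.append(f"{metric}: missing from current run")
            continue
        drop = (base_value - new_value) / base_value
        if drop > THRESHOLD:
            failures.append(f"{metric}: {base_value:.3f} -> {new_value:.3f} ({drop:.1%} drop)")

    if failures:
        print("Regression detected:\n" + "\n".join(failures))
        return 1
    print("No regressions above threshold.")
    return 0

if __name__ == "__main__":
    sys.exit(check_regression("baseline_metrics.json", "current_metrics.json"))
```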

Best practices

  • Use multiple complementary metrics rather than optimizing for a single score
  • Always establish baseline performance before measuring improvements
  • Combine automated metrics with human evaluation for comprehensive assessment

Avoid

  • Drawing conclusions from evaluation on too few test examples
  • Using evaluation metrics that do not align with business objectives
  • Testing on data that overlaps with training data (data contamination)

Frequently asked questions

What is the minimum sample size for reliable LLM evaluation?
For statistical significance testing, aim for at least 100 evaluation examples. For high-stakes decisions, 500-1000 examples provide more reliable results with narrower confidence intervals.
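To see why, a quick back-of-the-envelope check with the normal approximation shows how the 95% confidence-interval half-width of a pass-rate metric shrinks with sample size; the 0.7 pass rate below is an arbitrary example.

```python
# Sketch: 95% CI half-width for a pass-rate metric at different sample sizes.
import math

def ci_half_width(pass_rate: float, n: int, z: float = 1.96) -> float:
    # Normal-approximation margin of error for a proportion.
    return z * math.sqrt(pass_rate * (1 - pass_rate) / n)

for n in (50, 100, 500, 1000):
    print(f"n={n:>4}: ±{ci_half_width(0.7, n):.3f}")
# n=  50: ±0.127
# n= 100: ±0.090
# n= 500: ±0.040
# n=1000: ±0.028
```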
How do I choose between automated metrics and human evaluation?
Use automated metrics for fast iteration and regression detection. Add human evaluation for final validation, especially when assessing subjective qualities like helpfulness, safety, or nuanced correctness.
Can LLM-as-Judge replace human evaluators entirely?
LLM-as-Judge works well for routine quality checks and scales efficiently, but human evaluation remains essential for complex judgments, safety assessment, and validating the judge model itself.
How often should I re-run evaluations on my LLM application?
Run evaluations on every code or prompt change as part of CI/CD. For production monitoring, run daily or weekly evaluations on fresh samples to detect drift or performance degradation.
What should I do when metrics disagree with each other?
Metric disagreement often reveals trade-offs. Investigate which metric aligns best with your actual goals through error analysis, and consider using a weighted composite score reflecting business priorities.
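A weighted composite can be as simple as the following sketch; the metric names and weights are placeholders chosen for illustration and should reflect your own priorities.

```python
# Sketch: weighted composite score over several per-metric averages.
def composite_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    # Weights are assumed to sum to 1.
    return sum(weights[name] * metrics[name] for name in weights)

metrics = {"accuracy": 0.82, "groundedness": 0.74, "latency_score": 0.90}
weights = {"accuracy": 0.5, "groundedness": 0.3, "latency_score": 0.2}
print(f"Composite: {composite_score(metrics, weights):.3f}")  # 0.812
```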
How do I evaluate multi-turn conversations?
Use conversation-level metrics like task completion rate and user satisfaction alongside turn-level metrics. Consider coherence across turns and whether the model maintains context appropriately throughout the dialogue.
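A minimal sketch of combining conversation-level and turn-level views, assuming each conversation has already been labeled with a completion judgment (from a rubric, human, or judge model) and per-turn scores:

```python
# Sketch: task completion rate alongside a mean turn-level score.
conversations = [
    {"completed": True,  "turn_scores": [0.8, 0.9, 0.7]},   # placeholder data
    {"completed": False, "turn_scores": [0.6, 0.5]},
    {"completed": True,  "turn_scores": [0.9, 0.85]},
]

completion_rate = sum(c["completed"] for c in conversations) / len(conversations)
all_turns = [s for c in conversations for s in c["turn_scores"]]
turn_avg = sum(all_turns) / len(all_turns)

print(f"Task completion rate: {completion_rate:.2f}, mean turn score: {turn_avg:.2f}")
```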

Developer details

File structure

📄 SKILL.md