Error Detective
Detect and diagnose errors across your systems
Production errors are hard to trace across distributed systems. This skill analyzes logs, correlates errors, and identifies root causes to accelerate debugging.
Download the skill ZIP
Import into Claude
Go to Settings → Capabilities → Skills → Import a skill
Enable it and start using it
Test
Using "Error Detective": Log file with repeated NullPointerException errors
Expected result:
Extracted 47 NullPointerException occurrences between 14:32-14:45 UTC. Peak frequency: 12 errors/minute at 14:38. All errors originate from UserService.getUser() method. Correlation: errors began 2 minutes after deployment v2.3.1.
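The kind of frequency analysis shown in this result can be approximated with a short script. This is a minimal sketch, assuming a hypothetical log format where each line starts with an ISO-8601 timestamp; the sample lines and the `npe_frequency` helper are illustrative, not part of the skill.

```python
import re
from collections import Counter

# Hypothetical log format: "<ISO timestamp> <LEVEL> <source> <message>"
LOG_LINES = [
    "2024-05-01T14:38:01Z ERROR UserService.getUser NullPointerException: user id was null",
    "2024-05-01T14:38:02Z INFO  HealthCheck ok",
    "2024-05-01T14:39:10Z ERROR UserService.getUser NullPointerException: user id was null",
]

# Capture the timestamp truncated to the minute, for per-minute bucketing.
TIMESTAMP = re.compile(r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2})")

def npe_frequency(lines):
    """Count NullPointerException occurrences per minute."""
    minutes = Counter()
    for line in lines:
        if "NullPointerException" not in line:
            continue
        m = TIMESTAMP.match(line)
        if m:
            minutes[m.group(1)] += 1
    return minutes

counts = npe_frequency(LOG_LINES)
total = sum(counts.values())
peak_minute, peak = counts.most_common(1)[0]
print(f"{total} occurrences, peak {peak}/min at {peak_minute}")
```

In practice you would adapt the regex to your own log format and feed lines from a file rather than an in-memory list.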
Using "Error Detective": Stack trace from payment service timeout
Expected result:
Root cause: Database connection pool exhaustion. Evidence: Timeout at ConnectionPool.getConnection (line 142), preceded by 200+ pending requests. Fix: Increase pool size from 10 to 50 connections and add circuit breaker.
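The circuit breaker recommended in this fix is a small, self-contained pattern. Here is a minimal sketch of one in Python; the class name, thresholds, and behavior are illustrative assumptions, not the skill's prescribed implementation.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `max_failures` consecutive
    failures, fail fast for `reset_after` seconds, then allow a retry."""

    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```

Wrapping `ConnectionPool.getConnection` calls in such a breaker lets the service fail fast during pool exhaustion instead of queuing hundreds of pending requests.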
Security audit
Safe. This is a prompt-only skill with no executable code, network access, or filesystem operations. Static analysis scanned 0 files and detected no security patterns. The skill provides guidance for log analysis and error investigation without any attack vectors.
What you can build
Production Incident Investigation
Analyze error logs from a production outage to identify the root cause and timeline of failures across microservices.
Debugging Intermittent Failures
Correlate sporadic errors across application logs to find patterns and triggering conditions that cause intermittent bugs.
Post-Mortem Analysis
Review historical error data after an incident to understand failure chains and recommend prevention strategies.
Try these prompts
Analyze this log excerpt and extract all error messages with their timestamps. Group similar errors and identify the most frequent error type.
Examine this stack trace and identify the root cause of the failure. Explain which code path triggered the error and suggest fixes.
I have logs from three microservices during an outage. Correlate errors by timestamp and identify which service failed first and caused the cascade.
Generate Elasticsearch and Splunk queries to detect this specific error pattern in production. Include alerting thresholds for error rate spikes.
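The cross-service correlation asked for in the third prompt can be sketched offline as well. This example assumes hypothetical `(timestamp, service, message)` error records; the data and field layout are illustrative.

```python
# Hypothetical error records pulled from three microservices' logs.
errors = [
    ("14:32:05", "payment", "timeout calling user-service"),
    ("14:32:01", "user",    "connection pool exhausted"),
    ("14:32:09", "gateway", "502 from payment-service"),
]

# Sort by timestamp and keep the earliest error per service: the service
# whose first error is oldest is the likely origin of the cascade.
first_error = {}
for ts, service, msg in sorted(errors):
    first_error.setdefault(service, (ts, msg))

origin = min(first_error.items(), key=lambda kv: kv[1][0])
print(f"Likely origin: {origin[0]} at {origin[1][0]}: {origin[1][1]}")
```

Real logs need clock-skew handling and correlation IDs; string timestamps only sort correctly here because they share one format and one day.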
Best practices
- Always include timestamps and correlation IDs when providing log samples for analysis
- Share logs from all affected services to enable accurate cross-system correlation
- Provide context about recent deployments or configuration changes that may relate to errors
Avoid
- Do not share sensitive data like API keys, passwords, or personal information in logs
- Avoid analyzing isolated error messages without surrounding log context
- Do not assume the first visible error is the root cause; trace backward through the failure chain