context-degradation
Diagnose context degradation problems
Also available from: muratcankoylan, sickn33, ChakshuGautam
AI agents degrade as their context grows. This skill provides patterns for identifying, measuring, and fixing degradation before it affects results. Use it to build resilient systems that handle long conversations reliably.
Download the skill ZIP
Upload it in Claude
Go to Settings → Capabilities → Skills → Upload skill
Enable it and start using it
Test it
Using "context-degradation". My agent produces poor results after about 50 conversation turns
Expected results:
- Context degradation likely caused by: lost-in-middle effect or context poisoning
- Immediate mitigation: reorganize context so critical info is at the start or end
- Medium-term: implement compaction triggers before reaching 60K tokens
- Long-term: consider partitioning context across sub-agents or sessions
- Monitor health score: token utilization, degradation score, poisoning indicators
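A health-score monitor along these lines could be sketched as follows. This is a minimal, hypothetical example: the field names, equal weighting, the 5-error cap, and the 0.6 compaction threshold are all illustrative assumptions, not part of the skill's actual API.

```python
# Hypothetical context health monitor: combines token utilization and
# poisoning indicators into a single degradation score.
from dataclasses import dataclass

@dataclass
class ContextHealth:
    tokens_used: int           # tokens currently in the context window
    token_budget: int          # model's context window size
    poisoning_indicators: int  # count of detected errors/hallucinations

    @property
    def utilization(self) -> float:
        return self.tokens_used / self.token_budget

    def degradation_score(self) -> float:
        """0.0 = healthy, 1.0 = severely degraded (illustrative formula)."""
        # Weight utilization and poisoning equally; cap each at 1.0.
        poison = min(self.poisoning_indicators / 5, 1.0)
        return 0.5 * min(self.utilization, 1.0) + 0.5 * poison

    def needs_compaction(self) -> bool:
        return self.degradation_score() > 0.6

health = ContextHealth(tokens_used=70_000, token_budget=128_000,
                       poisoning_indicators=4)
print(round(health.degradation_score(), 3))  # 0.673
```

In practice the score would feed the compaction and partitioning triggers mentioned above, rather than being printed.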
Using "context-degradation". I need to understand why my model ignores requirements I put in the middle of my prompt
Expected results:
- This is the classic lost-in-middle phenomenon - attention follows a U-curve
- Beginning and end tokens receive high attention, middle tokens get degraded attention
- Move critical requirements to start (after system prompt) or end of context
- Use explicit section markers like [CURRENT TASK] and [KEY FINDINGS]
- Consider splitting long contexts into multiple focused sessions
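One way to sketch the reorganization described above: assemble the context so critical blocks sit at the start and end, with explicit section markers. The block names and the `reorganize` helper are hypothetical, chosen only to illustrate the U-curve placement idea.

```python
# Hypothetical sketch: reorder context blocks so critical requirements sit
# at the start and end of the prompt, where attention is highest.
def reorganize(blocks: dict[str, str]) -> str:
    """Assemble context with critical sections at the high-attention edges."""
    order_start = ["system", "current_task"]      # high-attention beginning
    order_end = ["key_findings", "requirements"]  # high-attention end
    middle = [k for k in blocks if k not in order_start + order_end]
    parts = []
    for key in order_start + middle + order_end:
        if key in blocks:
            parts.append(f"[{key.upper()}]\n{blocks[key]}")
    return "\n\n".join(parts)

context = reorganize({
    "system": "You are a code reviewer.",
    "history": "…earlier conversation…",
    "requirements": "Must use Python 3.12.",
    "current_task": "Review the attached diff.",
    "key_findings": "The diff touches auth logic.",
})
print(context.splitlines()[0])  # [SYSTEM]
```

Low-priority material such as conversation history lands in the middle, where degraded attention does the least damage.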
Using "context-degradation". How do I prevent my agent from accumulating errors over time
Expected results:
- Implement hallucination tracking to detect when errors enter context
- Use explicit versioning for retrieved documents to prevent outdated info
- Set compaction triggers when error indicators exceed threshold (3-5 errors)
- Periodically truncate context to remove poisoned elements
- Design isolation patterns so errors in one section don't spread to others
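The compaction trigger described above could be sketched as follows. This is an assumed design, not the skill's implementation: `summarize` is a stub standing in for a real model-backed summarization call, and the threshold of 3 errors is illustrative.

```python
# Hypothetical compaction trigger: once tracked errors cross a threshold,
# drop the flagged (poisoned) messages and summarize the older history.
ERROR_THRESHOLD = 3  # compact after 3-5 detected errors (illustrative)

def summarize(messages: list[str]) -> str:
    # Stub: a real system would call a model to compress these messages.
    return f"<summary of {len(messages)} messages>"

def maybe_compact(messages: list[str], flagged: set[int]) -> list[str]:
    """Remove poisoned messages; compact history if errors exceed threshold."""
    if len(flagged) < ERROR_THRESHOLD:
        return messages
    clean = [m for i, m in enumerate(messages) if i not in flagged]
    # Keep the most recent messages verbatim; summarize the older ones.
    return [summarize(clean[:-2])] + clean[-2:]

history = ["m0", "m1 (wrong)", "m2", "m3 (wrong)", "m4 (wrong)", "m5", "m6"]
print(maybe_compact(history, flagged={1, 3, 4}))
# ['<summary of 2 messages>', 'm5', 'm6']
```

Removing flagged messages before summarizing is what keeps poisoned content from surviving compaction in condensed form.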
Security audit
Safe. Pure documentation and analysis utility skill. Contains only educational content about context degradation patterns in AI systems. No network calls, no file writes, no command execution. All code examples are for demonstration purposes only with safe, simulated functions.
Risk factors
⚙️ External commands (18)
🌐 Network access (1)
📁 Filesystem access (1)
Quality score
What you can build
Design resilient systems
Build agent architectures that handle large contexts without performance degradation, using proven patterns.
Debug agent failures
Diagnose why agents produce erroneous output in long conversations and apply targeted fixes.
Understand attention patterns
Learn how attention mechanisms cause predictable degradation patterns and design better mitigations.
Try these prompts
My agent is producing poor results after many conversation turns. Analyze what context degradation patterns might be causing this and how to fix them.
The critical requirements I provided in the middle of context are being ignored by the model. This is the lost-in-middle phenomenon. Help me reorganize my context structure to prevent this.
My agent seems stuck on incorrect assumptions from earlier in the conversation. This looks like context poisoning. How do I detect and recover from this?
I am designing a system that must handle 100K+ tokens of context reliably. What architectural patterns should I use to prevent context degradation?
Best practices
- Place critical information (goals, constraints, requirements) at the beginning or end of the context, where attention is highest
- Monitor context health continuously and trigger compaction or partitioning before degradation becomes severe
- Use explicit versioning for retrieved documents to prevent context conflicts from outdated information
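The versioning practice above could look like the following sketch, where each retrieved document carries an explicit version and a newer retrieval supersedes the stale copy instead of both coexisting in context. The `RetrievedDoc` shape and `merge_retrievals` helper are hypothetical.

```python
# Hypothetical explicit versioning for retrieved documents: keep only the
# newest version of each document before it enters the context.
from dataclasses import dataclass

@dataclass
class RetrievedDoc:
    doc_id: str
    version: int
    text: str

def merge_retrievals(docs: list[RetrievedDoc]) -> list[RetrievedDoc]:
    """Keep only the newest version of each document."""
    latest: dict[str, RetrievedDoc] = {}
    for doc in docs:
        if doc.doc_id not in latest or doc.version > latest[doc.doc_id].version:
            latest[doc.doc_id] = doc
    return list(latest.values())

docs = merge_retrievals([
    RetrievedDoc("api-guide", 1, "old rate limits"),
    RetrievedDoc("api-guide", 3, "new rate limits"),
    RetrievedDoc("faq", 2, "unchanged"),
])
print([(d.doc_id, d.version) for d in docs])  # [('api-guide', 3), ('faq', 2)]
```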
Avoid
- Assuming a larger context window solves everything, without architectural safeguards
- Dumping all retrieved documents into context without relevance filtering
- Continuing sessions indefinitely without truncating or partitioning accumulated context