prompt-engineering-patterns

Secure ⚙️ External commands 📁 Filesystem access 🌐 Network access

Master prompt engineering for better AI results

Large language models produce inconsistent results when given poorly crafted prompts. This skill provides field-tested patterns and templates for chain-of-thought reasoning, few-shot learning, and systematic prompt optimization to improve output quality.

Supported: Claude Code (CC)
🥈 81 Silver
1. Download the skill ZIP
2. Upload it in Claude: go to Settings → Capabilities → Skills → Upload skill
3. Open it and start using

Test it

Using "prompt-engineering-patterns". Write a prompt to summarize customer feedback

Expected result:

  • Start with a system role: You are a professional analyst.
  • Add specific constraints: Summarize in 3 bullet points.
  • Include examples: show input-output pairs for feedback categories.
  • Define the format: use a consistent structure for each summary.

Security audit

Secure
v4 • 1/17/2026

This is a documentation-focused skill containing markdown guides and a local Python utility script for prompt optimization. The 228 static findings are false positives triggered by documentation patterns: backticks in Python code examples misinterpreted as shell commands, cryptographic terminology (SHA, MD5) mentioned in text, and references to API keys and file paths. The skill makes no network calls, has no sensitive filesystem access, and does not execute external commands. The optimize-prompt.py script uses a mock LLM client for local testing only.

10 files scanned
2,919 lines analyzed
3 findings
4 total audits

Risk factors

⚙️ External commands (169)
assets/prompt-template-library.md:6-12 assets/prompt-template-library.md:12-15 assets/prompt-template-library.md:15-23 assets/prompt-template-library.md:23-26 assets/prompt-template-library.md:26-33 assets/prompt-template-library.md:33-38 assets/prompt-template-library.md:38-50 assets/prompt-template-library.md:50-53 assets/prompt-template-library.md:53-68 assets/prompt-template-library.md:68-73 assets/prompt-template-library.md:73-84 assets/prompt-template-library.md:84-87 assets/prompt-template-library.md:87-101 assets/prompt-template-library.md:101-104 assets/prompt-template-library.md:104-113 assets/prompt-template-library.md:113-118 assets/prompt-template-library.md:118-125 assets/prompt-template-library.md:125-128 assets/prompt-template-library.md:128-137 assets/prompt-template-library.md:137-140 assets/prompt-template-library.md:140-147 assets/prompt-template-library.md:147-152 assets/prompt-template-library.md:152-163 assets/prompt-template-library.md:163-166 assets/prompt-template-library.md:166-183 assets/prompt-template-library.md:183-188 assets/prompt-template-library.md:188-197 assets/prompt-template-library.md:197-200 assets/prompt-template-library.md:200-207 assets/prompt-template-library.md:207-212 assets/prompt-template-library.md:212-221 assets/prompt-template-library.md:221-224 assets/prompt-template-library.md:224-234 assets/prompt-template-library.md:234-237 assets/prompt-template-library.md:237-244 references/chain-of-thought.md:12-29 references/chain-of-thought.md:29-34 references/chain-of-thought.md:34-53 references/chain-of-thought.md:53-58 references/chain-of-thought.md:58-83 references/chain-of-thought.md:83-90 references/chain-of-thought.md:90-125 references/chain-of-thought.md:125-130 references/chain-of-thought.md:130-176 references/chain-of-thought.md:176-181 references/chain-of-thought.md:181-218 references/chain-of-thought.md:218-223 references/chain-of-thought.md:223-248 references/chain-of-thought.md:248-251 
references/chain-of-thought.md:251-278 references/chain-of-thought.md:278-281 references/chain-of-thought.md:281-303 references/chain-of-thought.md:303-308 references/chain-of-thought.md:308-328 references/chain-of-thought.md:328-331 references/chain-of-thought.md:331-345 references/chain-of-thought.md:345-349 references/chain-of-thought.md:349-359 references/few-shot-learning.md:12-27 references/few-shot-learning.md:27-34 references/few-shot-learning.md:34-56 references/few-shot-learning.md:56-63 references/few-shot-learning.md:63-73 references/few-shot-learning.md:73-80 references/few-shot-learning.md:80-94 references/few-shot-learning.md:94-103 references/few-shot-learning.md:103-121 references/few-shot-learning.md:121-126 references/few-shot-learning.md:126-138 references/few-shot-learning.md:138-143 references/few-shot-learning.md:143-154 references/few-shot-learning.md:154-161 references/few-shot-learning.md:161-166 references/few-shot-learning.md:166-169 references/few-shot-learning.md:169-195 references/few-shot-learning.md:195-200 references/few-shot-learning.md:200-214 references/few-shot-learning.md:214-219 references/few-shot-learning.md:219-228 references/few-shot-learning.md:228-231 references/few-shot-learning.md:231-240 references/few-shot-learning.md:240-243 references/few-shot-learning.md:243-252 references/few-shot-learning.md:252-257 references/few-shot-learning.md:257-266 references/few-shot-learning.md:266-269 references/few-shot-learning.md:269-293 references/few-shot-learning.md:293-300 references/few-shot-learning.md:300-334 references/few-shot-learning.md:334-339 references/few-shot-learning.md:339-354 references/prompt-optimization.md:6-26 references/prompt-optimization.md:26-29 references/prompt-optimization.md:29-31 references/prompt-optimization.md:31-33 references/prompt-optimization.md:33-64 references/prompt-optimization.md:64-67 references/prompt-optimization.md:67-114 references/prompt-optimization.md:114-119 
references/prompt-optimization.md:119-144 references/prompt-optimization.md:144-147 references/prompt-optimization.md:147-167 references/prompt-optimization.md:167-170 references/prompt-optimization.md:170-192 references/prompt-optimization.md:192-197 references/prompt-optimization.md:197-230 references/prompt-optimization.md:230-233 references/prompt-optimization.md:233-272 references/prompt-optimization.md:272-277 references/prompt-optimization.md:277-324 references/prompt-optimization.md:324-329 references/prompt-optimization.md:329-368 references/prompt-optimization.md:368-384 references/prompt-optimization.md:384-387 references/prompt-optimization.md:387-390 references/prompt-optimization.md:390-393 references/prompt-optimization.md:393-396 references/prompt-optimization.md:396-399 references/prompt-optimization.md:399-402 references/prompt-optimization.md:402-405 references/prompt-templates.md:6-30 references/prompt-templates.md:30-33 references/prompt-templates.md:33-84 references/prompt-templates.md:84-87 references/prompt-templates.md:87-131 references/prompt-templates.md:131-136 references/prompt-templates.md:136-153 references/prompt-templates.md:153-156 references/prompt-templates.md:156-171 references/prompt-templates.md:171-174 references/prompt-templates.md:174-198 references/prompt-templates.md:198-201 references/prompt-templates.md:201-217 references/prompt-templates.md:217-222 references/prompt-templates.md:222-251 references/prompt-templates.md:251-254 references/prompt-templates.md:254-294 references/prompt-templates.md:294-297 references/prompt-templates.md:297-321 references/prompt-templates.md:321-326 references/prompt-templates.md:326-349 references/prompt-templates.md:349-352 references/prompt-templates.md:352-393 references/prompt-templates.md:393-409 references/prompt-templates.md:409-432 references/prompt-templates.md:432-435 references/prompt-templates.md:435-462 references/system-prompts.md:9-11 references/system-prompts.md:11-14 
references/system-prompts.md:14-34 references/system-prompts.md:34-39 references/system-prompts.md:39-59 references/system-prompts.md:59-62 references/system-prompts.md:62-85 references/system-prompts.md:85-88 references/system-prompts.md:88-110 references/system-prompts.md:110-115 references/system-prompts.md:115-136 references/system-prompts.md:136-139 references/system-prompts.md:139-149 references/system-prompts.md:149-170 references/system-prompts.md:170-189 SKILL.md:59-82 SKILL.md:82-102 SKILL.md:102-104 SKILL.md:104-134 SKILL.md:134-144 SKILL.md:144-147 SKILL.md:147-158
📁 Filesystem access (3)
🌐 Network access (1)
Auditor: claude

Quality scores

Architecture: 82
Maintainability: 100
Content: 83
Community: 30
Security: 100
Spec compliance: 91

What you can build

Optimize production prompts

Systematically test and improve prompts for production LLM applications using an A/B testing framework.
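The A/B loop described above can be sketched in a few lines of Python. Here `call_llm` and `score_output` are hypothetical stand-ins for a real LLM client and evaluation metric, not part of this skill's API:

```python
# Minimal sketch of a prompt A/B test. `call_llm` and `score_output` are
# hypothetical stand-ins for a real LLM client and an evaluation metric.
def call_llm(prompt: str) -> str:
    return "stub response"  # swap in a real client call

def score_output(output: str) -> float:
    return float(bool(output.strip()))  # swap in a real metric (accuracy, rubric, ...)

def ab_test(variant_a: str, variant_b: str, inputs: list[str]) -> dict:
    """Run both prompt variants over the same inputs and return mean scores."""
    scores = {"A": [], "B": []}
    for text in inputs:
        for name, template in (("A", variant_a), ("B", variant_b)):
            output = call_llm(template.format(text=text))
            scores[name].append(score_output(output))
    return {name: sum(vals) / len(vals) for name, vals in scores.items()}

results = ab_test(
    "Summarize: {text}",
    "Summarize in 3 bullet points: {text}",
    ["The checkout flow is confusing.", "Love the new dashboard."],
)
```

Running both variants over the same inputs keeps the comparison fair; with a real client you would also want multiple samples per input to average out decoding noise.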

Build a template library

Create reusable prompt templates with variable interpolation for consistent content generation.
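A reusable template with variable interpolation might look like the sketch below; the PromptTemplate classes shipped with the skill may differ in detail, so treat this as an illustration:

```python
import string

class PromptTemplate:
    """Sketch of a reusable prompt template with {name} interpolation."""

    def __init__(self, template: str):
        self.template = template
        # Collect the required {name} placeholders from the template string.
        self.variables = {
            field for _, field, _, _ in string.Formatter().parse(template)
            if field
        }

    def render(self, **values) -> str:
        missing = self.variables - values.keys()
        if missing:
            raise ValueError(f"missing variables: {sorted(missing)}")
        return self.template.format(**values)

summary = PromptTemplate(
    "You are a {role}. Summarize the following in {n} bullet points:\n{text}"
)
prompt = summary.render(role="professional analyst", n=3, text="Customer feedback...")
```

Validating placeholders up front turns a silent formatting bug into an immediate error, which matters when templates are filled from user or pipeline data.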

Apply advanced techniques

Implement chain-of-thought and self-consistency patterns for complex reasoning tasks.

Try these prompts

Simple classification
Classify this text into one of the following categories: positive, negative, neutral.

Text: {text}

Category:

Few-shot extraction
Extract information as JSON.

Example:
Text: Apple CEO Tim Cook announced the new iPhone.
Output: {"persons":["Tim Cook"],"organizations":["Apple"],"products":["iPhone"]}

Text: {text}

Output:
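Few-shot prompts like the extraction example above can be assembled programmatically from example pairs, which keeps the demonstration format consistent as you add examples. This is a sketch with illustrative data, not the skill's own utility:

```python
import json

# Illustrative (text, expected-output) pairs for the few-shot block.
EXAMPLES = [
    (
        "Apple CEO Tim Cook announced the new iPhone.",
        {"persons": ["Tim Cook"], "organizations": ["Apple"], "products": ["iPhone"]},
    ),
]

def build_extraction_prompt(text: str) -> str:
    """Assemble a few-shot JSON-extraction prompt from EXAMPLES."""
    lines = ["Extract information as JSON.", "", "Examples:"]
    for example_text, example_output in EXAMPLES:
        lines.append(f"Text: {example_text}")
        lines.append(f"Output: {json.dumps(example_output)}")
        lines.append("")  # blank line between examples
    lines.append(f"Text: {text}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_extraction_prompt("Sundar Pichai leads Google.")
```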
Chain of thought
Solve this problem step by step.

Problem: {problem}

Step 1: Identify what we know
Step 2: Determine the approach
Step 3: Calculate
Step 4: Verify

Answer:
Self-consistency
Solve this problem using three different methods. Then identify the answer that appears most frequently.

Problem: {problem}

Method 1:
Result:

Method 2:
Result:

Method 3:
Result:

Final answer (most common):
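The self-consistency pattern above reduces to sampling several answers and taking a majority vote. In this sketch, `sample_answer` is a hypothetical stand-in for calling your model at nonzero temperature:

```python
from collections import Counter

def sample_answer(problem: str, seed: int) -> str:
    """Stand-in for one sampled model answer; replace with a real LLM call."""
    return "42"

def self_consistency(problem: str, n_samples: int = 3) -> str:
    """Sample several reasoning paths and return the most frequent final answer."""
    answers = [sample_answer(problem, seed=i) for i in range(n_samples)]
    most_common_answer, _count = Counter(answers).most_common(1)[0]
    return most_common_answer

final = self_consistency("What is 6 * 7?")
```

In practice the answers must be normalized (whitespace, casing, units) before voting, or identical results will be counted as different.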

Best practices

  • Be specific about format, length, and style requirements rather than relying on implied instructions
  • Use few-shot examples to demonstrate the exact output format you need, especially for structured data
  • Test prompts on edge cases and diverse inputs before deploying to production

Avoid

  • Overloading prompts with too many examples, causing token limits to reduce space for actual input
  • Using vague instructions like 'be helpful' or 'be accurate' that different models interpret differently
  • Skipping verification steps for factual or logical outputs that require validation

Frequently asked questions

Which LLMs work with these patterns?
Patterns work with Claude, GPT-4, Claude Code, and most instruction-tuned models. Some techniques like chain-of-thought work best on reasoning-capable models.
What is the optimal number of few-shot examples?
Most tasks perform well with 3 to 5 examples. More examples can dilute focus and consume token budget. Test different counts for your specific use case.
How do I integrate with my existing codebase?
The skill provides template systems and Python utilities. Adapt the PromptTemplate classes to your LLM client. The optimize-prompt.py script shows a testing workflow.
Is my data sent anywhere?
No. This skill runs locally. The reference materials and utility scripts operate entirely on your machine. No external network calls are made by any component.
Why do my prompts work differently across models?
Models have different training and capabilities. Test and adjust templates per model. Chain-of-thought works better on reasoning models. Some models need more explicit format instructions.
How does this compare to other prompting skills?
This skill focuses on production-ready patterns with systematic optimization workflows. It covers template systems, A/B testing, and evaluation metrics for real-world deployment.