
prompt-engineering-patterns

Secure

Apply Prompt Engineering Patterns

Also available from: wshobson

Improve LLM outputs with proven prompt engineering techniques. This skill provides patterns for chain-of-thought reasoning, few-shot learning, and template systems that make AI interactions more reliable and controllable.

Supports: Claude Code (CC), Codex
📊 Score: 86 · Sufficient
1. Download the skill ZIP
2. Upload it in Claude: go to Settings → Capabilities → Skills → Upload skill
3. Open it and start using it

Test it

Using "prompt-engineering-patterns": Design a prompt that helps users write professional emails

Expected result:

A structured email writing template with role definition, tone guidelines, and format sections that produces consistent, professional email outputs.

Using "prompt-engineering-patterns": How can I improve code review quality with AI?

Expected result:

A few-shot prompt providing code review examples with common bug patterns, security considerations, and best practices that guides the AI to provide thorough, constructive feedback.

Security audit

Secure
v1 • 2/24/2026

All 216 static findings are false positives. The flagged files are markdown documentation (.md) and example JSON files containing educational content about prompt engineering techniques. The scanner incorrectly interprets backticks in markdown code blocks as shell commands, text references to cryptographic terms as weak crypto implementations, and tutorial references to system commands as reconnaissance. This is a documentation skill with no executable security issues.

Files scanned: 9
Lines analyzed: 2,696
Findings: 4
Total audits: 1
Low-risk issues (4)
False Positive: Ruby/Shell Backtick Detection in Documentation
Static scanner flagged 170 instances of 'Ruby/shell backtick execution' in markdown files. These are false positives - the backticks are markdown code block delimiters showing Python/code examples in documentation, not actual shell commands. Files affected: SKILL.md, references/*.md, assets/*.md
False Positive: Weak Cryptographic Algorithm References
Static scanner flagged 39 instances of 'Weak cryptographic algorithm' in documentation files. These are false positives - the files contain educational content explaining prompt engineering patterns, with text examples mentioning cryptographic concepts in context of AI safety, not actual crypto implementations.
False Positive: System/Network Reconnaissance in Tutorials
Static scanner flagged 'System reconnaissance' and 'Network reconnaissance' patterns in markdown documentation. These are false positives - the files contain educational tutorials that reference system commands and networking concepts as part of prompt engineering examples, not actual reconnaissance tools.
False Positive: Filesystem Path Traversal in Documentation
Static scanner flagged 'Path traversal sequence' in references/prompt-optimization.md and scripts/optimize-prompt.py. The markdown file contains text explaining path handling concepts in prompts. The Python script is a utility for prompt optimization with legitimate file operations.
Auditor: claude

Quality score

Architecture: 82
Maintainability: 100
Content: 87
Community: 50
Security: 97
Spec compliance: 100

What you can build

Build Reliable AI Products

Design production-ready prompt systems with consistent output formats and error handling patterns for AI-powered applications.

Improve Code Generation

Apply structured prompting techniques to get better code completion and generation results from Claude or Codex.

Create AI Training Materials

Develop comprehensive prompt libraries and templates for team-wide AI adoption and best practices.

Try these prompts

Chain-of-Thought Reasoning
Solve this problem step by step:

Problem: {problem}

Think through each step carefully:
1. [First step]
2. [Second step]
3. [Third step]

Final answer:
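
Filling this template programmatically keeps the reasoning scaffold identical across problems. A minimal sketch, assuming nothing beyond the template above (the concrete step labels and the helper name are illustrative, not part of the skill):

```python
# Hypothetical helper that fills the chain-of-thought template;
# the numbered step labels are placeholders to tailor per task.
COT_TEMPLATE = """Solve this problem step by step:

Problem: {problem}

Think through each step carefully:
1. Restate what is being asked
2. List the known quantities
3. Work toward the result

Final answer:"""

def build_cot_prompt(problem: str) -> str:
    return COT_TEMPLATE.format(problem=problem)
```
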

Few-Shot Classification
Classify the following input into one of these categories: {categories}

Examples:
{examples}

Now classify this:
Input: {input}

Category:
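
One way to assemble this template from labeled examples; the function name and the `(text, label)` pair format are assumptions for illustration:

```python
def build_few_shot_prompt(categories, examples, new_input):
    """Fill the few-shot classification template.

    `examples` is a list of (text, label) pairs drawn from your task.
    """
    example_block = "\n".join(
        f"Input: {text}\nCategory: {label}\n" for text, label in examples
    )
    return (
        "Classify the following input into one of these categories: "
        + ", ".join(categories)
        + "\n\nExamples:\n" + example_block
        + "\nNow classify this:\nInput: " + new_input + "\n\nCategory:"
    )
```
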

System Prompt Template
You are {role}. Your task is {task}.

Guidelines:
- {guideline1}
- {guideline2}
- {guideline3}

Output format:
{output_format}
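
The same template lends itself to a small builder that takes the role, task, and guidelines as structured inputs. A sketch (names are illustrative):

```python
def build_system_prompt(role, task, guidelines, output_format):
    """Fill the system-prompt template above from structured parts."""
    lines = [f"You are {role}. Your task is {task}.", "", "Guidelines:"]
    lines += [f"- {g}" for g in guidelines]
    lines += ["", "Output format:", output_format]
    return "\n".join(lines)
```
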

Iterative Refinement
Initial request: {request}

Current draft: {draft}

Feedback to address:
{feedback}

Please revise the draft based on this feedback:
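
Driving this template in a loop is the usual pattern: generate a draft, collect feedback, and re-prompt until the reviewer is satisfied. A sketch, where `call_model` stands in for whatever LLM client you use and `get_feedback` for your review step (both are assumptions, not part of the skill):

```python
def build_refinement_prompt(request: str, draft: str, feedback: str) -> str:
    return (
        f"Initial request: {request}\n\n"
        f"Current draft: {draft}\n\n"
        f"Feedback to address:\n{feedback}\n\n"
        f"Please revise the draft based on this feedback:"
    )

def refine(request: str, call_model, get_feedback, max_rounds: int = 3) -> str:
    draft = call_model(request)
    for _ in range(max_rounds):
        feedback = get_feedback(draft)
        if not feedback:  # stop once the reviewer has no further notes
            break
        draft = call_model(build_refinement_prompt(request, draft, feedback))
    return draft
```
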

Best practices

  • Start with clear role definitions in system prompts to establish AI behavior boundaries
  • Use specific, diverse examples in few-shot prompts that cover edge cases
  • Test prompts across multiple query variations to ensure consistent outputs
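
The third practice can be automated with a tiny harness: run several phrasings of the same query and check every reply against one output contract. `call_model` below is a stand-in for your LLM client, and the regex contract is just one possible check:

```python
import re

def find_inconsistent(call_model, variations, pattern):
    """Return the query variations whose model output breaks the contract."""
    contract = re.compile(pattern)
    return [q for q in variations if not contract.fullmatch(call_model(q))]
```
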

Avoid

  • Using vague instructions like 'be helpful' without specific behavioral guidelines
  • Overloading prompts with too many examples that exceed context window limits
  • Assuming prompts work identically across different LLM models without testing

Frequently asked questions

What is chain-of-thought prompting?
Chain-of-thought prompting encourages LLMs to show their reasoning step-by-step, improving accuracy on complex math, logic, and reasoning tasks.
How many examples should I include in few-shot learning?
Most tasks work well with 2-5 examples. Balance example diversity with context window limits. Too many examples can reduce response quality.
Can I use these patterns with Claude Code?
Yes, these patterns work with Claude, Claude Code, Codex, and most modern LLMs. Claude Code can execute the prompt templates directly.
How do I debug prompts that produce inconsistent outputs?
Add explicit output format constraints, include more diverse examples, and use step-by-step reasoning to make the AI process more transparent.
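
One way to apply that answer: demand a machine-checkable format and validate every reply, retrying on violations so inconsistency surfaces as a concrete error. A sketch with `call_model` as a placeholder client:

```python
import json

FORMAT_RULE = 'Respond with JSON only: {"category": "<label>"}'

def classify_with_validation(call_model, query, retries=2):
    """Ask for JSON output and retry until the reply parses correctly."""
    prompt = f"{FORMAT_RULE}\n\n{query}"
    for _ in range(retries + 1):
        reply = call_model(prompt)
        try:
            data = json.loads(reply)
            if isinstance(data, dict) and "category" in data:
                return data["category"]
        except json.JSONDecodeError:
            pass  # malformed output - ask again
    raise ValueError("model never produced valid JSON")
```
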
What is the difference between system prompts and user prompts?
System prompts define the AI assistant's role and behavior at the start of a conversation. User prompts contain the specific task or question for each interaction.
How do I handle prompt injection risks?
Use clear instruction boundaries, validate user inputs, and avoid concatenating untrusted content directly into prompts. Structure prompts with distinct sections for instructions and user data.