prompt-engineering-patterns
Apply Prompt Engineering Patterns
๋ํ ๋ค์์์ ์ฌ์ฉํ ์ ์์ต๋๋ค: wshobson
Improve LLM outputs with proven prompt engineering techniques. This skill provides patterns for chain-of-thought reasoning, few-shot learning, and template systems that make AI interactions more reliable and controllable.
์คํฌ ZIP ๋ค์ด๋ก๋
Claude์์ ์ ๋ก๋
์ค์ โ ๊ธฐ๋ฅ โ ์คํฌ โ ์คํฌ ์ ๋ก๋๋ก ์ด๋
ํ ๊ธ์ ์ผ๊ณ ์ฌ์ฉ ์์
ํ ์คํธํด ๋ณด๊ธฐ
"prompt-engineering-patterns" ์ฌ์ฉ ์ค์ ๋๋ค. Design a prompt that helps users write professional emails
์์ ๊ฒฐ๊ณผ:
A structured email writing template with role definition, tone guidelines, and format sections that produces consistent, professional email outputs.
"prompt-engineering-patterns" ์ฌ์ฉ ์ค์ ๋๋ค. How can I improve code review quality with AI?
์์ ๊ฒฐ๊ณผ:
A few-shot prompt providing code review examples with common bug patterns, security considerations, and best practices that guides the AI to provide thorough, constructive feedback.
๋ณด์ ๊ฐ์ฌ
์์ All 216 static findings are false positives. The flagged files are markdown documentation (.md) and example JSON files containing educational content about prompt engineering techniques. The scanner incorrectly interprets backticks in markdown code blocks as shell commands, text references to cryptographic terms as weak crypto implementations, and tutorial references to system commands as reconnaissance. This is a documentation skill with no executable security issues.
๋ฎ์ ์ํ ๋ฌธ์ (4)
ํ์ง ์ ์
๋ง๋ค ์ ์๋ ๊ฒ
Build Reliable AI Products
Design production-ready prompt systems with consistent output formats and error handling patterns for AI-powered applications.
Improve Code Generation
Apply structured prompting techniques to get better code completion and generation results from Claude or Codex.
Create AI Training Materials
Develop comprehensive prompt libraries and templates for team-wide AI adoption and best practices.
์ด ํ๋กฌํํธ๋ฅผ ์ฌ์ฉํด ๋ณด์ธ์
Solve this problem step by step:
Problem: {problem}
Think through each step carefully:
1. [First step]
2. [Second step]
3. [Third step]
Final answer:Classify the following input into one of these categories: {categories}
Examples:
{examples}
Now classify this:
Input: {input}
Category:You are {role}. Your task is {task}.
Guidelines:
- {guideline1}
- {guideline2}
- {guideline3}
Output format:
{output_format}Initial request: {request}
Current draft: {draft}
Feedback to address:
{feedback}
Please revise the draft based on this feedback:๋ชจ๋ฒ ์ฌ๋ก
- Start with clear role definitions in system prompts to establish AI behavior boundaries
- Use specific, diverse examples in few-shot prompts that cover edge cases
- Test prompts across multiple query variations to ensure consistent outputs
ํผํ๊ธฐ
- Using vague instructions like 'be helpful' without specific behavioral guidelines
- Overloading prompts with too many examples that exceed context window limits
- Assuming prompts work identically across different LLM models without testing
์์ฃผ ๋ฌป๋ ์ง๋ฌธ
What is chain-of-thought prompting?
How many examples should I include in few-shot learning?
Can I use these patterns with Claude Code?
How do I debug prompts that produce inconsistent outputs?
What is the difference between system prompts and user prompts?
How do I handle prompt injection risks?
๊ฐ๋ฐ์ ์ธ๋ถ ์ ๋ณด
์์ฑ์
sickn33๋ผ์ด์ ์ค
MIT
๋ฆฌํฌ์งํ ๋ฆฌ
https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/prompt-engineering-patterns์ฐธ์กฐ
main
ํ์ผ ๊ตฌ์กฐ
๐ assets/
๐ prompt-template-library.md
๐ references/
๐ chain-of-thought.md
๐ few-shot-learning.md
๐ prompt-templates.md
๐ system-prompts.md
๐ scripts/
๐ optimize-prompt.py
๐ SKILL.md