
prompt-engineering-patterns

Safe • ⚙️ External commands • 📁 Filesystem access • 🌐 Network access

Master Prompt Engineering for Better AI Results

LLMs produce inconsistent results with poorly crafted prompts. This skill provides battle-tested patterns and templates for chain-of-thought reasoning, few-shot learning, and systematic prompt optimization to improve output quality.

Supports: Claude Code (CC), Codex
🥈 81 Silver
1. Download the skill ZIP
2. Upload in Claude: go to Settings → Capabilities → Skills → Upload skill
3. Toggle on and start using

Test it

Using "prompt-engineering-patterns". Write a prompt to summarize customer feedback

Expected outcome:

  • Start with the system role: You are a professional analyst.
  • Add specific constraints: Summarize in 3 bullet points.
  • Include examples: Show input-output pairs for feedback categories.
  • Define format: Use consistent structure for each summary.

Security Audit

Safe
v4 • 1/17/2026

This is a documentation-focused skill containing markdown guides and a local Python utility script for prompt optimization. The 228 static findings are false positives triggered by documentation patterns: backticks in Python code examples misinterpreted as shell commands, cryptographic terminology (SHA, MD5) mentioned in text, and references to API keys and file paths. The skill makes no network calls, has no sensitive filesystem access, and does not execute external commands. The optimize-prompt.py script uses a mock LLM client for local testing only.

10 files scanned • 2,919 lines analyzed • 3 findings • 4 total audits

Risk Factors

⚙️ External commands (169)
assets/prompt-template-library.md:6-12 assets/prompt-template-library.md:12-15 assets/prompt-template-library.md:15-23 assets/prompt-template-library.md:23-26 assets/prompt-template-library.md:26-33 assets/prompt-template-library.md:33-38 assets/prompt-template-library.md:38-50 assets/prompt-template-library.md:50-53 assets/prompt-template-library.md:53-68 assets/prompt-template-library.md:68-73 assets/prompt-template-library.md:73-84 assets/prompt-template-library.md:84-87 assets/prompt-template-library.md:87-101 assets/prompt-template-library.md:101-104 assets/prompt-template-library.md:104-113 assets/prompt-template-library.md:113-118 assets/prompt-template-library.md:118-125 assets/prompt-template-library.md:125-128 assets/prompt-template-library.md:128-137 assets/prompt-template-library.md:137-140 assets/prompt-template-library.md:140-147 assets/prompt-template-library.md:147-152 assets/prompt-template-library.md:152-163 assets/prompt-template-library.md:163-166 assets/prompt-template-library.md:166-183 assets/prompt-template-library.md:183-188 assets/prompt-template-library.md:188-197 assets/prompt-template-library.md:197-200 assets/prompt-template-library.md:200-207 assets/prompt-template-library.md:207-212 assets/prompt-template-library.md:212-221 assets/prompt-template-library.md:221-224 assets/prompt-template-library.md:224-234 assets/prompt-template-library.md:234-237 assets/prompt-template-library.md:237-244 references/chain-of-thought.md:12-29 references/chain-of-thought.md:29-34 references/chain-of-thought.md:34-53 references/chain-of-thought.md:53-58 references/chain-of-thought.md:58-83 references/chain-of-thought.md:83-90 references/chain-of-thought.md:90-125 references/chain-of-thought.md:125-130 references/chain-of-thought.md:130-176 references/chain-of-thought.md:176-181 references/chain-of-thought.md:181-218 references/chain-of-thought.md:218-223 references/chain-of-thought.md:223-248 references/chain-of-thought.md:248-251 
references/chain-of-thought.md:251-278 references/chain-of-thought.md:278-281 references/chain-of-thought.md:281-303 references/chain-of-thought.md:303-308 references/chain-of-thought.md:308-328 references/chain-of-thought.md:328-331 references/chain-of-thought.md:331-345 references/chain-of-thought.md:345-349 references/chain-of-thought.md:349-359 references/few-shot-learning.md:12-27 references/few-shot-learning.md:27-34 references/few-shot-learning.md:34-56 references/few-shot-learning.md:56-63 references/few-shot-learning.md:63-73 references/few-shot-learning.md:73-80 references/few-shot-learning.md:80-94 references/few-shot-learning.md:94-103 references/few-shot-learning.md:103-121 references/few-shot-learning.md:121-126 references/few-shot-learning.md:126-138 references/few-shot-learning.md:138-143 references/few-shot-learning.md:143-154 references/few-shot-learning.md:154-161 references/few-shot-learning.md:161-166 references/few-shot-learning.md:166-169 references/few-shot-learning.md:169-195 references/few-shot-learning.md:195-200 references/few-shot-learning.md:200-214 references/few-shot-learning.md:214-219 references/few-shot-learning.md:219-228 references/few-shot-learning.md:228-231 references/few-shot-learning.md:231-240 references/few-shot-learning.md:240-243 references/few-shot-learning.md:243-252 references/few-shot-learning.md:252-257 references/few-shot-learning.md:257-266 references/few-shot-learning.md:266-269 references/few-shot-learning.md:269-293 references/few-shot-learning.md:293-300 references/few-shot-learning.md:300-334 references/few-shot-learning.md:334-339 references/few-shot-learning.md:339-354 references/prompt-optimization.md:6-26 references/prompt-optimization.md:26-29 references/prompt-optimization.md:29-31 references/prompt-optimization.md:31-33 references/prompt-optimization.md:33-64 references/prompt-optimization.md:64-67 references/prompt-optimization.md:67-114 references/prompt-optimization.md:114-119 
references/prompt-optimization.md:119-144 references/prompt-optimization.md:144-147 references/prompt-optimization.md:147-167 references/prompt-optimization.md:167-170 references/prompt-optimization.md:170-192 references/prompt-optimization.md:192-197 references/prompt-optimization.md:197-230 references/prompt-optimization.md:230-233 references/prompt-optimization.md:233-272 references/prompt-optimization.md:272-277 references/prompt-optimization.md:277-324 references/prompt-optimization.md:324-329 references/prompt-optimization.md:329-368 references/prompt-optimization.md:368-384 references/prompt-optimization.md:384-387 references/prompt-optimization.md:387-390 references/prompt-optimization.md:390-393 references/prompt-optimization.md:393-396 references/prompt-optimization.md:396-399 references/prompt-optimization.md:399-402 references/prompt-optimization.md:402-405 references/prompt-templates.md:6-30 references/prompt-templates.md:30-33 references/prompt-templates.md:33-84 references/prompt-templates.md:84-87 references/prompt-templates.md:87-131 references/prompt-templates.md:131-136 references/prompt-templates.md:136-153 references/prompt-templates.md:153-156 references/prompt-templates.md:156-171 references/prompt-templates.md:171-174 references/prompt-templates.md:174-198 references/prompt-templates.md:198-201 references/prompt-templates.md:201-217 references/prompt-templates.md:217-222 references/prompt-templates.md:222-251 references/prompt-templates.md:251-254 references/prompt-templates.md:254-294 references/prompt-templates.md:294-297 references/prompt-templates.md:297-321 references/prompt-templates.md:321-326 references/prompt-templates.md:326-349 references/prompt-templates.md:349-352 references/prompt-templates.md:352-393 references/prompt-templates.md:393-409 references/prompt-templates.md:409-432 references/prompt-templates.md:432-435 references/prompt-templates.md:435-462 references/system-prompts.md:9-11 references/system-prompts.md:11-14 
references/system-prompts.md:14-34 references/system-prompts.md:34-39 references/system-prompts.md:39-59 references/system-prompts.md:59-62 references/system-prompts.md:62-85 references/system-prompts.md:85-88 references/system-prompts.md:88-110 references/system-prompts.md:110-115 references/system-prompts.md:115-136 references/system-prompts.md:136-139 references/system-prompts.md:139-149 references/system-prompts.md:149-170 references/system-prompts.md:170-189 SKILL.md:59-82 SKILL.md:82-102 SKILL.md:102-104 SKILL.md:104-134 SKILL.md:134-144 SKILL.md:144-147 SKILL.md:147-158
📁 Filesystem access (3)
🌐 Network access (1)
Audited by: claude

Quality Score

  • Architecture: 82
  • Maintainability: 100
  • Content: 83
  • Community: 30
  • Security: 100
  • Spec Compliance: 91

What You Can Build

Optimize Production Prompts

Systematically test and refine prompts for production LLM applications using A/B testing frameworks.
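An A/B test of two prompt variants can be sketched as below. The function names and the mock model are illustrative only (a real run would call your LLM client where `mock_model` stands in); the skill's actual testing workflow in optimize-prompt.py may be structured differently.

```python
def evaluate(prompt_template, test_cases, model):
    """Return accuracy of one prompt variant over labeled test cases."""
    correct = 0
    for text, expected in test_cases:
        output = model(prompt_template.format(text=text))
        if output.strip().lower() == expected.lower():
            correct += 1
    return correct / len(test_cases)

def ab_test(variant_a, variant_b, test_cases, model):
    """Score two prompt variants and return the winner with both accuracies."""
    score_a = evaluate(variant_a, test_cases, model)
    score_b = evaluate(variant_b, test_cases, model)
    winner = "A" if score_a >= score_b else "B"
    return winner, score_a, score_b

# Mock model: always answers "Positive" (stand-in for a real LLM call)
mock_model = lambda prompt: "Positive"

cases = [("Great product!", "Positive"), ("Terrible.", "Negative")]
variant_a = "Classify the sentiment: {text}\nAnswer:"
variant_b = "Is this Positive, Negative, or Neutral? {text}\nAnswer:"

winner, sa, sb = ab_test(variant_a, variant_b, cases, mock_model)
print(winner, sa, sb)
```

In practice the test set should cover the edge cases noted under Best Practices, and each variant should be run multiple times if the model samples stochastically.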

Build Template Libraries

Create reusable prompt templates with variable interpolation for consistent content generation.
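A minimal template with named variable interpolation might look like the following, built on the standard library's `string.Template`. The `PromptTemplate` class here is a sketch, not the skill's actual class, which may expose a different interface.

```python
import string

class PromptTemplate:
    """Minimal reusable prompt template with named variable interpolation."""

    def __init__(self, template: str):
        self.template = string.Template(template)

    def variables(self):
        # Collect names referenced as $name or ${name} in the template text
        return sorted({m.group("named") or m.group("braced")
                       for m in self.template.pattern.finditer(self.template.template)
                       if m.group("named") or m.group("braced")})

    def render(self, **kwargs) -> str:
        # substitute() raises KeyError if a variable is missing, which
        # catches incomplete fills before the prompt reaches the model
        return self.template.substitute(**kwargs)

summary = PromptTemplate(
    "You are a professional analyst.\n"
    "Summarize the following feedback in 3 bullet points.\n\n"
    "Feedback: ${feedback}\n\nSummary:"
)
print(summary.variables())  # ['feedback']
print(summary.render(feedback="Shipping was slow but support was great."))
```

Listing `variables()` up front makes it easy to validate that callers supply every slot before sending the prompt.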

Apply Advanced Techniques

Implement chain-of-thought and self-consistency patterns for complex reasoning tasks.
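The self-consistency pattern can be sketched as sampling several reasoning paths and keeping the most frequent final answer. The prompt wording and the mock model below are illustrative; a real implementation would parse the final answer out of each full chain-of-thought completion.

```python
import itertools
from collections import Counter

def self_consistency(problem, model, n_samples=5):
    """Sample several completions and return the most common answer
    together with the fraction of samples that agreed on it."""
    answers = [
        model(f"Solve step by step, then give the final answer.\n\nProblem: {problem}")
        for _ in range(n_samples)
    ]
    most_common, count = Counter(answers).most_common(1)[0]
    return most_common, count / n_samples

# Mock model cycling through varied answers (stand-in for sampled LLM runs)
fake_answers = itertools.cycle(["42", "42", "41", "42", "40"])
mock_model = lambda prompt: next(fake_answers)

answer, agreement = self_consistency("6 * 7", mock_model, n_samples=5)
print(answer, agreement)  # 42 0.6
```

A low agreement score is itself a useful signal: it flags problems where the model's reasoning is unstable and a human check may be warranted.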

Try These Prompts

Simple Classification

Classify this text into one of these categories: Positive, Negative, Neutral.

Text: {text}

Category:

Few-Shot Extraction

Extract information in JSON format.

Example:
Text: Apple CEO Tim Cook announced a new iPhone.
Output: {"persons":["Tim Cook"],"organizations":["Apple"],"products":["iPhone"]}

Text: {text}

Output:

Chain-of-Thought

Solve this step by step.

Problem: {problem}

Step 1: Identify what we know
Step 2: Determine the approach
Step 3: Calculate
Step 4: Verify

Answer:

Self-Consistency

Solve this problem three different ways. Then identify which answer appears most frequently.

Problem: {problem}

Approach 1:
Result:

Approach 2:
Result:

Approach 3:
Result:

Final Answer (most common):
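The `{text}` and `{problem}` placeholders in the templates above map directly to Python format fields, so any of them can be filled with `str.format`. The sample feedback text is invented for illustration.

```python
# The Simple Classification template from above, as a Python string
classification = (
    "Classify this text into one of these categories: "
    "Positive, Negative, Neutral.\n\n"
    "Text: {text}\n\n"
    "Category:"
)

# Fill the {text} slot before sending the prompt to a model
prompt = classification.format(text="The battery life exceeded my expectations.")
print(prompt)
```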

Best Practices

  • Be specific about format, length, and style requirements rather than relying on implied instructions
  • Use few-shot examples to demonstrate the exact output format you need, especially for structured data
  • Test prompts on edge cases and diverse inputs before deploying to production

Avoid

  • Overloading prompts with so many examples that they crowd out the token budget left for the actual input
  • Using vague instructions like 'be helpful' or 'be accurate' that different models interpret differently
  • Skipping verification steps for factual or logical outputs that require validation

Frequently Asked Questions

Which LLMs work with these patterns?
Patterns work with Claude, GPT-4, Claude Code, and most instruction-tuned models. Some techniques like chain-of-thought work best on reasoning-capable models.
What is the optimal number of few-shot examples?
Most tasks perform well with 3 to 5 examples. More examples can dilute focus and consume token budget. Test different counts for your specific use case.
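One way to stay inside a token budget is to add examples greedily until an estimated limit is reached. The sketch below uses a rough 4-characters-per-token heuristic as a stand-in; a real implementation would use your model's actual tokenizer.

```python
def select_examples(examples, max_tokens, est_tokens=lambda s: len(s) // 4):
    """Greedily keep few-shot examples until the estimated token budget is hit.

    est_tokens defaults to a crude chars/4 heuristic; swap in a real
    tokenizer count for production use.
    """
    chosen, used = [], 0
    for ex in examples:
        cost = est_tokens(ex)
        if used + cost > max_tokens:
            break
        chosen.append(ex)
        used += cost
    return chosen

# Ten uniform dummy examples; a 30-token budget admits only the first few
examples = [f"Text: sample {i}\nOutput: label {i}" for i in range(10)]
subset = select_examples(examples, max_tokens=30)
print(len(subset))
```

Ranking examples by relevance to the incoming input before this cut-off is a common refinement, so the budget is spent on the most informative demonstrations.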
How do I integrate with my existing codebase?
The skill provides template systems and Python utilities. Adapt the PromptTemplate classes to your LLM client. The optimize-prompt.py script shows a testing workflow.
Is my data sent anywhere?
No. This skill runs locally. The reference materials and utility scripts operate entirely on your machine. No external network calls are made by any component.
Why do my prompts work differently across models?
Models have different training and capabilities. Test and adjust templates per model. Chain-of-thought works better on reasoning models. Some models need more explicit format instructions.
How does this compare to other prompting skills?
This skill focuses on production-ready patterns with systematic optimization workflows. It covers template systems, A/B testing, and evaluation metrics for real-world deployment.