
llm-application-dev-prompt-optimize

Safe

Optimize LLM Prompts with Advanced Engineering Techniques

Transform basic instructions into production-ready prompts that improve accuracy by 40% and reduce costs by 50-80%. This skill provides expert guidance on chain-of-thought reasoning, constitutional AI patterns, and model-specific optimization for Claude, GPT, and Gemini.

Supports: Claude Code (CC), Codex
đŸ„‰ 74 Bronze

1. Download the skill ZIP
2. Import into Claude: go to Settings → Capabilities → Skills → Import a skill
3. Enable and start using
Test

Using "llm-application-dev-prompt-optimize". Optimize this prompt: 'Answer customer questions about refunds'

Expected result:

Optimized prompt with role definition, diagnostic framework, solution delivery structure, verification steps, constraints, and JSON output format. Includes chain-of-thought reasoning section and self-review checklist for quality assurance.

Using "llm-application-dev-prompt-optimize". Make this prompt better for data analysis: 'Analyze the sales data'

Expected result:

Comprehensive analysis framework with five phases: data validation, trend analysis with statistical significance testing, segment analysis across multiple dimensions, insights template with confidence scoring, and prioritized recommendations in YAML format.

Security audit

Safe
v1 ‱ 2/25/2026

Static analysis detected 62 potential security issues in code examples within documentation files. All findings are false positives - the detected patterns (Ruby backticks, MD5 references, reconnaissance commands) appear exclusively within markdown code blocks that demonstrate prompt engineering techniques. The skill contains no executable code, performs no file operations, network requests, or command execution. It is a documentation-only skill providing guidance on prompt optimization best practices.

Files analyzed: 2
Lines analyzed: 632
Findings: 0
Total audits: 1

No security issues found
Audited by: claude

Quality score

Architecture: 38
Maintainability: 100
Content: 87
Community: 50
Security: 100
Spec compliance: 91

What you can build

Customer Support Prompt Optimization

Transform a basic customer support prompt into a structured, empathetic response system with diagnostic reasoning frameworks, escalation paths, and quality constraints for consistent, professional support interactions.

Data Analysis Prompt Enhancement

Upgrade a simple data analysis request into a comprehensive analytics framework with phase-based validation, statistical significance testing, segment analysis, and executive reporting in structured YAML format.

Code Generation Safety Improvements

Enhance code generation prompts with security-first design thinking, input validation requirements, SOLID principles, and self-review checklists to prevent injection vulnerabilities and ensure production-ready code.

Try these prompts

Basic Chain-of-Thought Prompt
Analyze this step by step:

1. Identify the core problem
2. Break down into smaller components
3. Reason through each component carefully
4. Synthesize findings
5. Provide final answer with confidence level

Input: {your_input}

Few-Shot Learning Template
Example 1:
Input: {simple_case}
Output: {correct_output}

Example 2:
Input: {edge_case}
Output: {correct_output}

Example 3:
Input: {error_case}
Wrong: {incorrect_output}
Correct: {correct_output}

Now apply to: {actual_input}

Constitutional AI with Self-Critique
{task_instructions}

Review your response against these principles:
1. ACCURACY: Verify all claims, flag uncertainties
2. SAFETY: Check for harm, bias, ethical issues
3. QUALITY: Ensure clarity, consistency, completeness

Initial Response: [Generate]
Self-Review: [Evaluate against principles]
Final Response: [Refined based on review]

Model-Optimized Structure (Claude)
<context>
{background_information}
</context>

<task>
{clear_objective_with_constraints}
</task>

<thinking>
1. Understanding requirements...
2. Identifying components...
3. Planning approach...
</thinking>

<output_format>
{xml_structured_response_specification}
</output_format>

Best practices

  • Always define clear role, context, task, and output format in your prompts
  • Use chain-of-thought reasoning for complex multi-step problems to improve accuracy
  • Include 3-5 diverse examples covering typical, edge, and error cases for few-shot learning
  • Implement self-critique loops with constitutional principles for safety-critical applications

Avoid

  • Avoid vague instructions like 'analyze this' without specifying framework, output format, or success criteria
  • Do not use single examples without edge case coverage - this leads to brittle prompt behavior
  • Never deploy prompts without testing against adversarial inputs, out-of-scope queries, and edge cases
  • Avoid model-agnostic prompts - optimize structure for specific LLMs (Claude prefers XML tags, GPT prefers ## headers)

Frequently asked questions

What is chain-of-thought prompting and when should I use it?
Chain-of-thought (CoT) prompting breaks complex problems into step-by-step reasoning. Use it for tasks requiring logical deduction, multi-step calculations, or systematic analysis. CoT can improve reasoning accuracy by 25-40% for math, logic, and analytical tasks.
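As a minimal sketch, the five-step scaffold from the Basic Chain-of-Thought template above can be built programmatically before each request; the function and constant names here are illustrative, not part of the skill:

```python
# The step wording mirrors the Basic Chain-of-Thought Prompt template on this page.
COT_STEPS = [
    "Identify the core problem",
    "Break down into smaller components",
    "Reason through each component carefully",
    "Synthesize findings",
    "Provide final answer with confidence level",
]

def build_cot_prompt(user_input: str) -> str:
    """Wrap raw input in the step-by-step reasoning scaffold."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(COT_STEPS, 1))
    return f"Analyze this step by step:\n\n{numbered}\n\nInput: {user_input}"

print(build_cot_prompt("Why did Q3 churn increase?"))
```

Centralizing the scaffold like this keeps the reasoning steps consistent across calls and makes them easy to version and A/B test.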
How many examples should I include in few-shot learning?
Include 3-5 carefully selected examples covering: simple cases (expected behavior), edge cases (boundary conditions), and error cases (what to avoid). Quality matters more than quantity - each example should teach a distinct aspect of the desired behavior.
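A small helper can assemble the few-shot structure shown in the template above from a list of examples; the dictionary shape (`input`, `output`, optional `wrong` for error cases) is an assumption for illustration:

```python
def build_few_shot_prompt(examples: list[dict], actual_input: str) -> str:
    """Assemble a few-shot prompt covering simple, edge, and error cases.

    Each example dict needs 'input' and 'output'; error cases may also
    carry 'wrong' to show the mistake alongside the correction.
    """
    blocks = []
    for i, ex in enumerate(examples, 1):
        block = f"Example {i}:\nInput: {ex['input']}\n"
        if "wrong" in ex:  # error case: contrast the wrong and correct outputs
            block += f"Wrong: {ex['wrong']}\nCorrect: {ex['output']}"
        else:
            block += f"Output: {ex['output']}"
        blocks.append(block)
    return "\n\n".join(blocks) + f"\n\nNow apply to: {actual_input}"
```

Keeping examples as data rather than hard-coded text makes it easy to swap in a distinct example for each behavior you want to teach.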
What is constitutional AI and why does it matter?
Constitutional AI embeds principles (accuracy, safety, quality) into prompts with self-critique loops. The model generates an initial response, evaluates it against principles, and refines accordingly. This reduces harmful outputs by 40% and is essential for production applications.
Should I optimize prompts differently for Claude vs GPT?
Yes. Claude performs best with XML tags (<context>, <task>, <output_format>) and explicit reasoning sections. GPT-4/5 prefers ## headers and JSON structures. Gemini responds well to **bold** section markers with explicit constraints. Always tailor syntax to the target model.
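One way to keep a single prompt definition while emitting each model family's preferred syntax is a small dispatcher; the function name and the three renderings below are a sketch based on the guidance above, not an official API, and exact preferences may vary by model version:

```python
def format_for_model(model: str, context: str, task: str) -> str:
    """Render the same prompt content in a model family's preferred syntax."""
    if model == "claude":    # Claude: XML tags
        return f"<context>\n{context}\n</context>\n\n<task>\n{task}\n</task>"
    if model == "gpt":       # GPT: markdown ## headers
        return f"## Context\n{context}\n\n## Task\n{task}"
    if model == "gemini":    # Gemini: bold section markers
        return f"**Context**\n{context}\n\n**Task**\n{task}"
    raise ValueError(f"unknown model family: {model}")
```

Separating content from formatting lets you evaluate the same prompt across providers without maintaining three divergent copies.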
How do I measure if my prompt optimization is working?
Use LLM-as-judge evaluation with 20 test cases (10 typical, 5 edge, 3 adversarial, 2 out-of-scope). Rate on task completion, accuracy, reasoning, format adherence, and safety. Track success rate, token efficiency, and cost over time. A/B test against the original prompt for 48 hours.
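A sketch of the aggregation step, assuming each judged test case is recorded as a dict with a `category` (typical, edge, adversarial, out-of-scope) and a boolean `passed` verdict; this shape is hypothetical, not prescribed by the skill:

```python
from collections import Counter

def summarize_eval(results: list[dict]) -> dict:
    """Compute overall and per-category success rates for a prompt variant."""
    total = len(results)
    passed = sum(r["passed"] for r in results)
    by_cat = Counter()    # cases per category
    cat_pass = Counter()  # passes per category
    for r in results:
        by_cat[r["category"]] += 1
        cat_pass[r["category"]] += r["passed"]
    return {
        "success_rate": passed / total,
        "per_category": {c: cat_pass[c] / n for c, n in by_cat.items()},
    }
```

Per-category breakdowns matter because an optimized prompt can raise the typical-case rate while silently regressing on adversarial or out-of-scope inputs.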
What are common prompt optimization mistakes to avoid?
Common mistakes: omitting output format specifications (causing inconsistent responses), using ambiguous language (leading to varied interpretations), skipping edge case examples (creating brittle behavior), and deploying without testing against adversarial inputs (risking safety issues).

Developer details

File structure

📁 resources/
  📄 implementation-playbook.md
📄 SKILL.md