
senior-prompt-engineer

Safe ⚡ Contains scripts 📁 Filesystem access

Master LLM Prompt Engineering

Also available from: alirezarezvani

Creating effective prompts for large language models requires deep expertise in patterns, frameworks, and optimization techniques. This skill provides production-ready prompt engineering strategies for Claude, GPT-4, and other LLMs, including structured outputs, chain-of-thought reasoning, and agentic system design.

Supports: Claude Code (CC), Codex
🥈 77 Silver
1. Download the skill ZIP

2. Upload it to Claude

   Go to Settings → Capabilities → Skills → Upload skill

3. Enable it and get started

Try it

Using "senior-prompt-engineer": Create a prompt that helps developers write clean, documented Python code with type hints

Expected result:

  • Define clear role and expertise level for the AI
  • Specify output structure and format requirements
  • Include example code with inline comments
  • Add validation criteria for successful outputs
  • Provide guidelines for handling edge cases
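The checklist above can be sketched as a small prompt builder. This is an illustrative example only; the section wording and the `build_prompt` helper are assumptions, not the skill's actual templates.

```python
# Illustrative sketch of the role / format / validation structure described
# above. All section text here is invented for demonstration.
ROLE = "You are a senior Python engineer who writes clean, fully type-hinted code."

FORMAT_RULES = """Respond with:
1. A short plan as a bullet list.
2. A single Python code block with type hints, docstrings, and inline comments.
3. A 'Validation' section listing how to verify the output."""

EDGE_CASES = "State assumptions explicitly and handle empty or invalid inputs."

def build_prompt(task: str) -> str:
    """Assemble the role, format, and edge-case sections into one prompt."""
    return "\n\n".join([ROLE, FORMAT_RULES, EDGE_CASES, f"Task: {task}"])

prompt = build_prompt("Write a function that parses ISO-8601 dates.")
```

Keeping each concern in its own named section makes it easy to swap, say, the output format without touching the role definition.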

Using "senior-prompt-engineer": Design a few-shot prompt for sentiment analysis of customer reviews

Expected result:

  • Provide 3-5 labeled examples showing positive, negative, and neutral reviews
  • Include brief explanations for each example classification
  • Add format instructions for consistent response structure
  • Specify handling for ambiguous cases
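A minimal few-shot prompt along these lines might be assembled as follows. The example reviews, labels, and the `few_shot_prompt` helper are invented for illustration:

```python
# Invented labelled examples: (review text, label, one-line explanation).
EXAMPLES = [
    ("The delivery was fast and the product works perfectly.", "positive",
     "Praises both speed and quality."),
    ("Broke after two days and support never answered.", "negative",
     "Reports a defect and unresponsive support."),
    ("It does what it says, nothing more.", "neutral",
     "Neither praise nor complaint."),
]

def few_shot_prompt(review: str) -> str:
    """Build a few-shot classification prompt ending at the slot to fill."""
    lines = ["Classify each review as positive, negative, or neutral.",
             'Respond with the label only. If the sentiment is mixed or '
             'unclear, answer "neutral".']
    for text, label, why in EXAMPLES:
        lines.append(f'Review: "{text}"\nLabel: {label}  ({why})')
    lines.append(f'Review: "{review}"\nLabel:')
    return "\n\n".join(lines)
```

Ending the prompt at `Label:` steers the model toward completing with a single label in the same format as the examples.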

Using "senior-prompt-engineer": Create a chain-of-thought prompt for mathematical problem solving

Expected result:

  • Break down the problem into explicit steps
  • Show intermediate calculations and reasoning
  • Verify each step before proceeding to the next
  • Present the final answer with supporting work shown
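A template following these steps could be sketched as below; the exact wording is an illustrative assumption, not the skill's own template:

```python
# Illustrative chain-of-thought template mirroring the expected-result steps.
COT_TEMPLATE = """Solve the problem below step by step.
Step 1: Restate the problem and list the known quantities.
Step 2: Show every intermediate calculation on its own line.
Step 3: Verify each result before moving to the next step.
Finally, write "Answer:" followed by the result and the key supporting steps.

Problem: {problem}"""

def cot_prompt(problem: str) -> str:
    """Fill the chain-of-thought template with a concrete problem."""
    return COT_TEMPLATE.format(problem=problem)
```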

Security audit

Safe
v5 • 1/17/2026

Documentation-focused skill containing reference guides and template Python scripts for prompt optimization, RAG evaluation, and agent orchestration. All scripts are skeleton implementations with standard file I/O only. No network calls, credential access, or shell command execution detected. All 68 static findings are false positives caused by scanner misinterpretation of documentation keywords and markdown code fence syntax.

  • Files scanned: 8
  • Lines analyzed: 1,042
  • Findings: 2
  • Total audits: 5

Audited by: claude

Quality rating

  • Architecture: 68
  • Maintainability: 100
  • Content: 87
  • Community: 23
  • Security: 100
  • Spec compliance: 91

What you can build

Build Production AI Systems

Design robust prompting pipelines for production AI applications with evaluation frameworks and monitoring strategies.

Define AI Product Requirements

Create clear prompt specifications and success metrics for AI-powered features and user experiences.

Optimize LLM Performance

Apply systematic evaluation techniques to improve model outputs and reduce costs through prompt optimization.

Try these prompts

Chain-of-Thought
Solve this problem step by step. First, identify the key components. Second, analyze each component. Third, reason through the relationships. Finally, derive the conclusion with supporting evidence.
Few-Shot Examples
Complete the following tasks based on these examples: Example 1: [input] -> [output]. Example 2: [input] -> [output]. Now complete: [new input] -> ?
Structured Output
Provide your response as a valid JSON object with these exact fields: { "summary": "brief summary", "confidence": 0.0-1.0, "reasoning": "explanation" }. Do not include any other text.
System Persona
You are an expert [role] with [years] years of experience. You always: 1) Start with clear understanding, 2) Provide structured responses, 3) Include practical examples, 4) Ask clarifying questions when needed.
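A structured-output prompt like the one above is only as good as the validation on the consuming side. A minimal sketch in Python, assuming the reply should match the JSON schema requested in the Structured Output prompt:

```python
import json

REQUIRED_FIELDS = ("summary", "confidence", "reasoning")

def parse_structured_reply(raw: str) -> dict:
    """Validate a model reply against the requested JSON schema."""
    data = json.loads(raw)  # raises json.JSONDecodeError (a ValueError) on non-JSON text
    for field in REQUIRED_FIELDS:
        if field not in data:
            raise ValueError(f"missing field: {field}")
    if not 0.0 <= float(data["confidence"]) <= 1.0:
        raise ValueError("confidence out of range")
    return data

reply = '{"summary": "ok", "confidence": 0.9, "reasoning": "matches schema"}'
parsed = parse_structured_reply(reply)
```

Failing loudly on a malformed reply lets a pipeline retry the request instead of propagating bad data downstream.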

Best practices

  • Start with clear role definition and context setting for the AI model
  • Use specific output formats like JSON or structured lists to reduce variability
  • Iterate systematically by testing prompts and measuring results against metrics
  • Include error handling and fallback instructions for edge cases
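The "iterate systematically" point can be sketched as a tiny evaluation loop. Here `call_model` is a hypothetical stub standing in for a real LLM API call, and the dataset is invented:

```python
def call_model(prompt: str, text: str) -> str:
    """Hypothetical stub: replace with a real LLM API call."""
    return "positive"

def accuracy(prompt: str, dataset: list[tuple[str, str]]) -> float:
    """Fraction of labelled examples the prompt classifies correctly."""
    hits = sum(call_model(prompt, text) == label for text, label in dataset)
    return hits / len(dataset)

# Small labelled set; in practice use a held-out evaluation set.
dataset = [
    ("Great product, works perfectly!", "positive"),
    ("Stopped working after a week.", "negative"),
]

# Score prompt variants on the same data before picking one.
scores = {name: accuracy(prompt, dataset)
          for name, prompt in [("v1", "Classify: ..."), ("v2", "You are ...")]}
```

Measuring every variant against the same metric turns prompt tweaking from guesswork into a comparison you can act on.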

Avoid

  • Avoid vague instructions that leave interpretation to the model
  • Do not overload prompts with too many constraints simultaneously
  • Avoid assuming the AI has context you have not explicitly provided
  • Do not use prompts that try to circumvent safety guidelines

Frequently asked questions

Which LLMs work with these prompts?
These patterns work with Claude, GPT-4, Gemini, and most modern instruction-tuned LLMs. Some adjustments may be needed for optimal results.
What are token limits for prompts?
Most models support 4K-128K context tokens. Keep system prompts under 2K tokens to leave room for user content and responses.
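To stay inside such a budget while drafting, a crude character-based estimate is often enough; exact counts require the provider's own tokenizer, and the 4-characters-per-token heuristic below is a rough rule of thumb for English text, not an official formula:

```python
def rough_token_count(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English text.
    Use the model provider's tokenizer for exact counts."""
    return max(1, len(text) // 4)

system_prompt = "You are a helpful assistant who answers concisely."
rough_token_count(system_prompt)  # a few dozen tokens, well under a 2K budget
```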
Can I combine multiple patterns?
Yes. Chain-of-thought works well with structured outputs. Few-shot examples enhance role-based prompts. Start simple and add complexity gradually.
How do I protect sensitive data?
Never include personal information, credentials, or proprietary code in prompts. Use placeholder values and document data requirements separately.
Why does my prompt work differently over time?
Model updates can change behavior. Version your prompts, test regularly, and update prompts when model versions change.
How is this different from basic prompting?
This skill covers production patterns including evaluation, optimization, and system design. Basic prompting provides single-turn responses only.