llm-app-patterns
Build Production LLM Applications
Building LLM applications requires navigating complex architectural decisions. This skill provides battle-tested patterns for RAG pipelines, agent systems, and production operations.
Download the skill ZIP
Upload it to Claude
Go to Settings → Capabilities → Skills → Upload skill
Enable it and get started
Test it
Using "llm-app-patterns". User asks: What is the company's refund policy?
Expected result:
- Retrieves relevant policy documents from vector database
- Generates answer grounded in retrieved context with source citations
- Returns response with confidence score and document references
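The retrieval-then-grounding flow above can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: the in-memory corpus and bag-of-words cosine scorer stand in for a real vector database, and the assembled prompt would be sent to an LLM rather than returned directly.

```python
# Toy RAG flow: retrieve top documents, build a grounded prompt, keep citations.
import math
import re
from collections import Counter

# Stand-in corpus; a real system would query a vector database instead.
DOCS = {
    "policy-12": "Our refund policy: refunds are available within 30 days of purchase.",
    "policy-07": "Shipping takes 5-7 business days for standard orders.",
}

def tokenize(text: str) -> list[str]:
    return re.findall(r"\w+", text.lower())

def score(query: str, doc: str) -> float:
    """Cosine similarity over bag-of-words vectors (toy retriever)."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    dot = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    ranked = sorted(DOCS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str, hits: list[tuple[str, str]]) -> str:
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in hits)
    return (
        "Answer the user's question based ONLY on the following context.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

hits = retrieve("What is the refund policy?")
prompt = build_prompt("What is the refund policy?", hits)
citations = [doc_id for doc_id, _ in hits]  # returned alongside the answer
```

The document IDs carried through `citations` are what allows the final answer to include source references.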
Using "llm-app-patterns". User asks: Plan a research project on climate change impacts
Expected result:
- Creates plan with steps: gather data, analyze trends, identify sources, draft report
- Executes each step sequentially with tool calls
- Synthesizes findings into comprehensive research outline
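The plan-then-execute behavior above can be sketched as a short loop. The `plan` and `execute` functions here are hard-coded stand-ins for LLM and tool calls, shown only to illustrate the control flow.

```python
# Plan-and-Execute sketch: a planner produces steps, an executor runs each
# step in order, accumulating results for a final synthesis.
def plan(task: str) -> list[str]:
    # Stand-in for an LLM planning call.
    return ["gather data", "analyze trends", "identify sources", "draft report"]

def execute(step: str, context: list[str]) -> str:
    # Stand-in for tool dispatch (search, calculator, ...); real executors
    # would use `context` (earlier results) to inform each step.
    return f"result of '{step}'"

def run(task: str) -> list[str]:
    results: list[str] = []
    for step in plan(task):
        results.append(execute(step, results))
    return results  # would be synthesized into an outline by a final LLM call

outline = run("Research climate change impacts")
```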
Security audit
This skill is a documentation file containing educational content about LLM application patterns. All static analysis findings are false positives caused by Markdown formatting. The flagged backticks are code block delimiters and ASCII art borders, not shell command execution. URLs are documentation references, not active network calls. Code examples such as hashlib.sha256 are illustrative and use secure algorithms. No executable code or security risks were detected.
Quality assessment
What you can build
RAG Knowledge Base
Build a question-answering system grounded in your documentation using hybrid search and contextual compression.
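One common way to combine semantic and keyword results, as described above, is Reciprocal Rank Fusion (RRF), which merges two rankings without having to normalize their score scales. The rankings below are toy inputs; in practice they would come from an embedding index and a BM25 keyword index.

```python
# Reciprocal Rank Fusion: each document's fused score is the sum of
# 1 / (k + rank) across the input rankings (k dampens top-rank dominance).
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic_hits = ["doc-a", "doc-b", "doc-c"]  # e.g. from a vector index
keyword_hits = ["doc-b", "doc-d", "doc-a"]   # e.g. from BM25
fused = rrf([semantic_hits, keyword_hits])
```

Documents that rank well in both lists (here `doc-b`) surface to the top of the fused ranking.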
Agent Task Automation
Create multi-step agents that can search, calculate, and synthesize information using the ReAct or Plan-and-Execute patterns.
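The ReAct pattern mentioned above alternates model "Thought/Action" turns with runtime "Observation" feedback. A minimal control loop might look like the sketch below; the scripted replies stand in for real LLM calls, and the tool registry uses `eval` purely as a toy calculator.

```python
# ReAct loop sketch: parse Action lines from the model, run the named tool,
# append the Observation, repeat until a Final Answer appears.
import re

TOOLS = {"calculator": lambda expr: str(eval(expr))}  # toy tool; never eval untrusted input

SCRIPTED_REPLIES = iter([
    "Thought: I need to compute this.\nAction: calculator(2 + 3)",
    "Thought: I have enough information to answer\nFinal Answer: 5",
])

def model(prompt: str) -> str:
    # Stand-in for an LLM call that continues the transcript.
    return next(SCRIPTED_REPLIES)

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = model(transcript)
        transcript += "\n" + reply
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+)\((.*)\)", reply)
        if match:
            observation = TOOLS[match.group(1)](match.group(2))
            transcript += f"\nObservation: {observation}"
    return "(no answer within step budget)"

answer = react("What is 2 + 3?")
```

The `max_steps` budget is important in practice: it stops a confused agent from looping on tool calls indefinitely.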
LLM Production Monitoring
Implement observability for LLM applications with metrics tracking, distributed tracing, and evaluation frameworks.
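A first step toward the observability described above is wrapping every LLM call to record latency and token counts. In this sketch, `fake_llm` and the whitespace-based token proxy are stand-ins; a real setup would use the provider's reported usage and export metrics to a tracing backend.

```python
# Instrumented LLM call: record latency and rough token counts per request.
import time

metrics: list[dict] = []  # stand-in for a metrics/tracing exporter

def instrumented_call(llm, prompt: str) -> str:
    start = time.perf_counter()
    response = llm(prompt)
    metrics.append({
        "latency_s": time.perf_counter() - start,
        "prompt_tokens": len(prompt.split()),       # crude proxy; use real usage data
        "completion_tokens": len(response.split()),
    })
    return response

def fake_llm(prompt: str) -> str:
    return "grounded answer with citations"

out = instrumented_call(fake_llm, "What is the refund policy?")
```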
Try these prompts
Answer the user's question based ONLY on the following context. If the context doesn't contain enough information, say you don't have enough information.
Context:
{context}
Question: {question}
Answer:

You are an AI assistant that can use tools to answer questions.
Available tools:
{tools_description}
Use this format:
Thought: [your reasoning about what to do next]
Action: [tool_name(arguments)]
Observation: [tool result]
... (repeat as needed)
Thought: I have enough information to answer
Final Answer: [your response]
Question: {question}

Input: {example1_input}
Output: {example1_output}
Input: {example2_input}
Output: {example2_output}
Input: {user_input}
Output:

Step 1 (Research): Research the topic: {input}
Step 2 (Analyze): Analyze these findings: {research}
Step 3 (Summarize): Summarize this analysis in 3 bullet points: {analysis}

Best practices
- Use hybrid search combining semantic and keyword matching for better retrieval accuracy
- Implement caching for deterministic prompts to reduce latency and costs
- Track key metrics like latency, token usage, and user satisfaction for continuous improvement
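The caching practice above can be sketched with a simple hash-keyed memo. This assumes deterministic calls (temperature 0); `call_llm` is a stand-in for a real provider client.

```python
# Cache deterministic LLM calls keyed by a hash of (model, prompt).
import hashlib

cache: dict[str, str] = {}
calls = 0  # counts real (uncached) LLM invocations

def call_llm(prompt: str) -> str:
    global calls
    calls += 1
    return f"answer to: {prompt}"  # stand-in for a provider API call

def cached_call(model: str, prompt: str) -> str:
    key = hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()
    if key not in cache:
        cache[key] = call_llm(prompt)
    return cache[key]

first = cached_call("example-model", "define RAG")
second = cached_call("example-model", "define RAG")  # served from cache
```

The second identical request never reaches the provider, which is where the latency and cost savings come from.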
Avoid
- Using fixed-size chunking without considering document structure, which breaks context
- Skipping evaluation and monitoring, making it impossible to detect quality degradation
- Not implementing fallback strategies when primary LLM providers experience outages
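The fallback point above can be sketched as a provider chain: try each client in order and move on when one fails. The provider functions here simulate an outage; real code would wrap actual API clients and add timeouts and logging.

```python
# Provider fallback: iterate over providers, returning the first success.
def primary(prompt: str) -> str:
    raise TimeoutError("primary provider outage")  # simulated outage

def secondary(prompt: str) -> str:
    return f"fallback answer to: {prompt}"

def call_with_fallback(prompt: str, providers) -> str:
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as error:
            last_error = error  # log and try the next provider
    raise RuntimeError("all providers failed") from last_error

result = call_with_fallback("summarize this", [primary, secondary])
```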