llm-app-patterns
Build Production LLM Applications
Building LLM applications requires navigating complex architectural decisions. This skill provides battle-tested patterns for RAG pipelines, agent systems, and production operations.
Download the skill ZIP
Import into Claude
Go to Settings → Capabilities → Skills → Import a skill
Enable it and start using it
Test
Using "llm-app-patterns". User asks: What is the company's refund policy?
Expected result:
- Retrieves relevant policy documents from vector database
- Generates answer grounded in retrieved context with source citations
- Returns response with confidence score and document references
Using "llm-app-patterns". User asks: Plan a research project on climate change impacts
Expected result:
- Creates plan with steps: gather data, analyze trends, identify sources, draft report
- Executes each step sequentially with tool calls
- Synthesizes findings into comprehensive research outline
Security audit
Safe. This skill is a documentation file containing educational content about LLM application patterns. All static analysis findings are false positives caused by markdown formatting. The backticks flagged are code block delimiters and ASCII art borders, not shell command execution. URLs are documentation references, not active network calls. Code examples like hashlib.sha256 are illustrative and use secure algorithms. No executable code or security risks detected.
What you can build
RAG Knowledge Base
Build a question-answering system grounded in your documentation using hybrid search and contextual compression.
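A minimal sketch of the hybrid-search retrieval step, using a toy two-document corpus and a word-overlap stand-in for embedding similarity (a real system would use an embedding model and a vector store; `DOCS`, `alpha`, and both scoring functions here are illustrative assumptions):

```python
from collections import Counter

# Toy corpus standing in for an indexed documentation set.
DOCS = {
    "refunds.md": "Refunds are issued within 14 days of purchase on request.",
    "shipping.md": "Orders ship within 2 business days via standard carriers.",
}

def keyword_score(query: str, text: str) -> float:
    """Keyword half: fraction of query terms that appear in the document."""
    terms = query.lower().split()
    words = Counter(text.lower().split())
    return sum(1 for t in terms if words[t]) / len(terms)

def semantic_score(query: str, text: str) -> float:
    """Stand-in for embedding similarity: Jaccard overlap of vocabularies."""
    q, d = set(query.lower().split()), set(text.lower().split())
    return len(q & d) / len(q | d)

def hybrid_search(query: str, alpha: float = 0.5) -> list[str]:
    """Blend the two scores and return document names ranked best-first."""
    scored = [
        (alpha * semantic_score(query, t) + (1 - alpha) * keyword_score(query, t), name)
        for name, t in DOCS.items()
    ]
    return [name for _, name in sorted(scored, reverse=True)]

print(hybrid_search("refunds on purchase"))  # refunds.md ranks first
```

The `alpha` weight controls the semantic/keyword balance; tuning it per corpus is a common knob in hybrid retrieval.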
Agent Task Automation
Create multi-step agents that can search, calculate, and synthesize information using the ReAct or Plan-and-Execute patterns.
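One Thought → Action → Observation turn of the ReAct loop can be sketched like this. The `search` and `calculate` tools are hypothetical stand-ins, and the steps are scripted rather than model-generated, purely to show the loop's shape:

```python
# Hypothetical tools; a real agent would wrap search APIs, calculators, etc.
def search(query: str) -> str:
    return f"top result for {query!r}"

def calculate(expr: str) -> str:
    # Restricted eval for simple arithmetic only; a real tool would parse safely.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"search": search, "calculate": calculate}

def react_step(thought: str, action: str, argument: str) -> str:
    """Run one Thought -> Action -> Observation turn and return the observation."""
    observation = TOOLS[action](argument)
    print(f"Thought: {thought}")
    print(f"Action: {action}({argument!r})")
    print(f"Observation: {observation}")
    return observation

# In a real agent, the model emits each Thought/Action; here they are scripted.
react_step("I need background first", "search", "climate change impacts")
result = react_step("Now compute the figure", "calculate", "2 + 2")
print(f"Final Answer: based on the observations, the result is {result}")
```

The loop repeats until the model emits a Final Answer instead of another Action, matching the prompt format shown below.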
LLM Production Monitoring
Implement observability for LLM applications with metrics tracking, distributed tracing, and evaluation frameworks.
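A minimal sketch of the metrics-tracking piece, assuming an in-memory collector (a production setup would export to a metrics backend; the `LLMMetrics` class and its fields are illustrative):

```python
from statistics import mean

class LLMMetrics:
    """Collects per-call latency and token usage for later aggregation."""

    def __init__(self):
        self.records: list[tuple[float, int]] = []

    def record(self, latency_s: float, prompt_tokens: int, completion_tokens: int):
        self.records.append((latency_s, prompt_tokens + completion_tokens))

    def summary(self) -> dict:
        latencies = sorted(r[0] for r in self.records)
        p95 = latencies[int(0.95 * (len(latencies) - 1))]  # nearest-rank p95
        return {
            "calls": len(self.records),
            "mean_latency_s": mean(r[0] for r in self.records),
            "p95_latency_s": p95,
            "total_tokens": sum(r[1] for r in self.records),
        }

metrics = LLMMetrics()
for latency in (0.4, 0.6, 1.2):
    metrics.record(latency, prompt_tokens=120, completion_tokens=80)
print(metrics.summary())
```

Tracking p95 latency rather than only the mean surfaces tail slowness, which is where LLM providers usually degrade first.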
Try these prompts
Answer the user's question based ONLY on the following context. If the context doesn't contain enough information, say you don't have enough information.
Context:
{context}
Question: {question}
Answer:

You are an AI assistant that can use tools to answer questions.
Available tools:
{tools_description}
Use this format:
Thought: [your reasoning about what to do next]
Action: [tool_name(arguments)]
Observation: [tool result]
... (repeat as needed)
Thought: I have enough information to answer
Final Answer: [your response]
Question: {question}

Input: {example1_input}
Output: {example1_output}
Input: {example2_input}
Output: {example2_output}
Input: {user_input}
Output:

Step 1 (Research): Research the topic: {input}
Step 2 (Analyze): Analyze these findings: {research}
Step 3 (Summarize): Summarize this analysis in 3 bullet points: {analysis}

Best practices
- Use hybrid search combining semantic and keyword matching for better retrieval accuracy
- Implement caching for deterministic prompts to reduce latency and costs
- Track key metrics like latency, token usage, and user satisfaction for continuous improvement
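The caching practice above can be sketched with a content-addressed in-memory cache (a hashed key, as in the `hashlib.sha256` example mentioned in the audit note). `fake_llm`, `cached_complete`, and the model name are stand-ins for a real provider client:

```python
import hashlib

def cache_key(model: str, prompt: str) -> str:
    """Deterministic key: the same model + prompt always maps to the same entry."""
    return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

CACHE: dict[str, str] = {}
CALLS = 0

def fake_llm(prompt: str) -> str:
    """Stand-in for a real provider call; counts invocations for the demo."""
    global CALLS
    CALLS += 1
    return prompt.upper()

def cached_complete(model: str, prompt: str) -> str:
    key = cache_key(model, prompt)
    if key not in CACHE:          # only call the provider on a cache miss
        CACHE[key] = fake_llm(prompt)
    return CACHE[key]

cached_complete("demo-model", "hello")
cached_complete("demo-model", "hello")  # second call is served from cache
print(CALLS)  # -> 1
```

Note this only pays off for deterministic prompts (temperature 0 or fixed templates); cached answers for sampled generations would silently freeze variability.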
Avoid
- Using fixed-size chunking without considering document structure, which breaks context
- Skipping evaluation and monitoring, making it impossible to detect quality degradation
- Not implementing fallback strategies when primary LLM providers experience outages
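The last pitfall can be avoided with an ordered-fallback wrapper. This is a minimal sketch: `primary` and `secondary` are hypothetical provider clients, and a production version would catch provider-specific exceptions and add retries with backoff:

```python
# Hypothetical providers; each stands in for a real API client call.
def primary(prompt: str) -> str:
    raise TimeoutError("primary provider is down")

def secondary(prompt: str) -> str:
    return f"secondary answered: {prompt}"

def complete_with_fallback(prompt: str, providers) -> str:
    """Try providers in order; fall through to the next on failure."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # production code should catch narrower errors
            errors.append(f"{provider.__name__}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

print(complete_with_fallback("ping", [primary, secondary]))
```

Keeping the provider list as data makes it easy to reorder or extend when an outage hits.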