machine-learning-ops-ml-pipeline
Build Production ML Pipelines with Multi-Agent MLOps
Machine learning teams struggle to productionize models with proper infrastructure and monitoring. This skill orchestrates specialized AI agents to design complete ML pipelines from data engineering through deployment and continuous monitoring.
Download the skill ZIP
Import into Claude
Go to Settings → Capabilities → Skills → Import a skill
Activate and start using it
Try it
Using "machine-learning-ops-ml-pipeline". Build an ML pipeline for fraud detection with real-time inference requirements
Expected result:
- Data Pipeline: Kafka streaming ingestion with schema validation, feature computation using Flink, Redis for low-latency feature serving
- Model Training: XGBoost with time-series cross-validation, hyperparameter tuning via Optuna, MLflow experiment tracking
- Serving: KServe on Kubernetes with GPU acceleration, <50ms p99 latency SLO, automatic model warm-up
- Monitoring: Real-time drift detection using PSI, automated retraining triggers, Grafana dashboards for fraud metrics
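The monitoring component above relies on PSI (Population Stability Index) to detect feature drift between the training distribution and live traffic. A minimal sketch of that computation, using only NumPy (bin count and thresholds are the conventional defaults, not values prescribed by this skill):

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (training) sample and a live sample.

    Rule of thumb: PSI < 0.1 is stable, 0.1-0.25 warrants investigation,
    > 0.25 signals significant drift (a common retraining trigger).
    """
    # Bin edges come from the baseline distribution's quantiles
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values

    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)

    # Proportions, floored to avoid log(0) on empty bins
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)

    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # stands in for a training feature
drifted = rng.normal(1.0, 1.0, 10_000)    # mean shift simulates drift
psi_stable = population_stability_index(baseline, baseline[:5000])
psi_drift = population_stability_index(baseline, drifted)
```

In production this check typically runs on a schedule per feature, with `psi_drift > 0.25` firing the automated retraining trigger mentioned above.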
Using "machine-learning-ops-ml-pipeline". Set up experiment tracking and model registry for computer vision team
Expected result:
- Experiment Tracking: MLflow with custom logging for images, metrics, and model architectures. Integration with PyTorch Lightning callbacks.
- Model Registry: Staging workflow with automated validation tests, model comparison tools, and one-click deployment to staging environment.
- Collaboration: Shared dashboards for team visibility, experiment tagging for reproducibility, artifact lineage tracking.
Security audit
Safe. This skill is documentation-only, containing markdown content and prompt templates for ML pipeline orchestration. All static-analysis findings were false positives: the backticks at line 27 are markdown formatting for file references, the 'blocker' matches are not cryptographic code, and the 'reconnaissance' match is a configuration option label. No executable code, network calls, file operations, or secret handling detected.
Quality score
Ce que vous pouvez construire
Data Science Team Lead
Orchestrate end-to-end ML pipeline development for a new churn prediction model, coordinating data engineers for feature pipelines, data scientists for model experiments, and MLOps engineers for production deployment on Kubernetes.
ML Platform Engineer
Design and implement standardized MLOps infrastructure with experiment tracking, model registry, automated retraining pipelines, and comprehensive monitoring for multiple ML teams across the organization.
Startup CTO
Build production-ready ML serving infrastructure from scratch with cost optimization, auto-scaling, blue-green deployments, and observability to support a new recommendation engine feature.
Try these prompts
Design a basic ML pipeline for [use case] with data ingestion, model training, and API serving. Focus on essential components and provide a high-level architecture diagram.
Design a feature store for [domain] using Feast. Include data source connections, feature definitions, transformation logic, online/offline store configuration, and integration with training and serving pipelines.
Create a complete deployment strategy for a [model type] model on Kubernetes. Include KServe configuration, HPA autoscaling rules, Istio traffic management for canary releases, monitoring metrics, and rollback procedures.
Architect an enterprise MLOps platform for [organization size]. Cover multi-team experiment tracking with MLflow, centralized feature store, model registry with approval workflows, automated retraining triggers, drift detection, and cost allocation across teams.
Best practices
- Version control all pipeline artifacts including data schemas, model configurations, and infrastructure code using Git and DVC for reproducibility
- Implement comprehensive validation gates at each pipeline stage: data quality checks, model performance thresholds, and infrastructure health checks before promotion
- Design for observability from the start with structured logging, distributed tracing, and business metrics correlation to enable rapid incident response
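The validation-gate practice above can be made concrete with a small promotion check that a CI/CD pipeline runs before moving a model between stages. This is an illustrative sketch, not part of the skill itself; the metric names and thresholds are hypothetical examples:

```python
from dataclasses import dataclass

@dataclass
class GateResult:
    name: str
    passed: bool
    detail: str

def run_gates(metrics: dict, baseline: dict, min_rows: int = 10_000) -> list[GateResult]:
    """Evaluate data-quality and performance gates for a candidate model."""
    return [
        GateResult("data_volume",
                   metrics["n_rows"] >= min_rows,
                   f"{metrics['n_rows']} rows (min {min_rows})"),
        GateResult("null_rate",
                   metrics["null_rate"] <= 0.01,
                   f"null rate {metrics['null_rate']:.3f} (max 0.01)"),
        GateResult("auc_no_regression",
                   metrics["auc"] >= baseline["auc"] - 0.005,
                   f"AUC {metrics['auc']:.3f} vs baseline {baseline['auc']:.3f}"),
    ]

candidate = {"n_rows": 50_000, "null_rate": 0.002, "auc": 0.91}
baseline = {"auc": 0.90}
results = run_gates(candidate, baseline)
promote = all(g.passed for g in results)  # only promote if every gate passes
```

Failing gates should block promotion and surface their `detail` strings in the pipeline logs, so an engineer can see exactly which check stopped the release.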
Avoid
- Training-serving skew: Using different feature computation logic in training versus production, leading to unexpected model performance degradation
- Manual model deployment: Relying on manual processes for model promotion instead of automated CI/CD pipelines with validation gates
- Monitoring blind spots: Only tracking system metrics without monitoring data drift, concept drift, or business KPI correlation
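The usual remedy for the training-serving skew pitfall above is to define feature logic exactly once and import that same module in both the training job and the online service. A minimal sketch (feature names and transactions are invented for illustration):

```python
import math

def compute_features(txn: dict) -> dict:
    """Single source of truth for feature computation.

    Both the batch training pipeline and the real-time serving path call
    this function, so the two can never silently diverge.
    """
    return {
        "log_amount": math.log1p(txn["amount"]),
        "is_night": 1 if txn["hour"] < 6 or txn["hour"] >= 22 else 0,
        "amount_per_item": txn["amount"] / max(txn["n_items"], 1),
    }

# Training pipeline: applied over historical transactions
train_rows = [{"amount": 120.0, "hour": 23, "n_items": 3}]
train_features = [compute_features(r) for r in train_rows]

# Serving path: the exact same function on a live request
live_features = compute_features({"amount": 120.0, "hour": 23, "n_items": 3})
assert live_features == train_features[0]  # identical logic, identical output
```

Feature stores such as Feast institutionalize this pattern by registering the transformation once and materializing it to both the offline (training) and online (serving) stores.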