mlops-engineer
Build Production ML Pipelines
Building and maintaining ML pipelines requires expertise in multiple tools and cloud platforms. This skill provides expert guidance on implementing end-to-end MLOps workflows with experiment tracking, model versioning, and automated deployment.
Download the skill ZIP
Upload it in Claude
Go to Settings → Capabilities → Skills → Upload skill
Enable it and start using it
Test it
Using "mlops-engineer". Design a basic ML pipeline using Kubeflow
Expected result:
- Pipeline Structure: Data Ingestion → Preprocessing → Training → Evaluation → Model Registration
- Each component runs as a Docker container with appropriate resource limits
- MLflow integrated for tracking metrics and artifacts across all stages
- Pipeline parameters defined for data paths, model hyperparameters, and thresholds
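The stage sequence above can be sketched as plain Python functions; in an actual Kubeflow pipeline each function would become a containerized `kfp` component with its own resource limits. The linear threshold "model" and the 0.8 accuracy gate below are illustrative assumptions, not part of the skill's output.

```python
# Toy sketch of the five-stage flow: ingest -> preprocess -> train ->
# evaluate -> register. Function boundaries mirror pipeline components.
def ingest(raw):                      # Data Ingestion
    return list(raw)

def preprocess(rows):                 # Preprocessing: scale features to [0, 1]
    hi = max(x for x, _ in rows) or 1
    return [(x / hi, y) for x, y in rows]

def evaluate(model, rows):            # Evaluation: fraction classified correctly
    return sum(model(x) == y for x, y in rows) / len(rows)

def train(rows):                      # Training: pick the best decision threshold
    best = max((t for t, _ in rows),
               key=lambda t: evaluate(lambda x: x >= t, rows))
    return lambda x: x >= best

def register(model, accuracy, gate=0.8):   # Model Registration with a quality gate
    if accuracy < gate:
        raise ValueError(f"accuracy {accuracy:.2f} below gate {gate}")
    return {"model": model, "accuracy": accuracy}

def run_pipeline(raw):
    data = preprocess(ingest(raw))
    model = train(data)
    return register(model, evaluate(model, data))
```

The registration gate is where a real pipeline would also log metrics and artifacts to MLflow before promoting the model.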
Using "mlops-engineer". How do I set up model versioning with MLflow?
Expected result:
- 1. Register model in MLflow Model Registry with unique version
- 2. Add model metadata including description and training parameters
- 3. Configure stage transitions: None → Staging → Production
- 4. Implement approval workflow for production promotions
- 5. Set up webhooks for notifications on model updates
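The stage flow in steps 3–4 can be sketched as a small state machine. The allowed-transition map and the approval gate below encode an assumed team policy, not MLflow defaults (MLflow itself permits any transition; in real code you would call `MlflowClient.transition_model_version_stage`, or registry aliases in newer MLflow versions).

```python
# Assumed policy: which stage moves are legal, with a mandatory
# approval gate before any promotion to Production.
ALLOWED = {
    "None": {"Staging", "Archived"},
    "Staging": {"Production", "Archived", "None"},
    "Production": {"Staging", "Archived"},
}

def transition_stage(current: str, target: str, approved: bool = False) -> str:
    """Return the new stage, enforcing the policy above."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    if target == "Production" and not approved:
        raise PermissionError("promotion to Production requires approval")
    return target
```

Step 5's webhooks would fire after a successful `transition_stage` call, notifying downstream consumers of the new stage.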
Security audit
Safe. Prompt-only skill with no executable code. Static analysis scanned 0 files and detected 0 security issues. The skill provides MLOps guidance through text-based instructions only. No network requests, file system access, or external commands. Risk score: 0/100.
Quality score
What you can build
Design ML Platform Architecture
Create a comprehensive MLOps platform design for an enterprise needing to deploy models at scale with experiment tracking and model versioning.
Implement Automated Retraining Pipeline
Build an automated pipeline that retrains models when data drift is detected, with approval workflows and rollback capabilities.
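One common drift signal that could gate such a retraining trigger is the Population Stability Index (PSI). A self-contained sketch follows; the 0.2 alert threshold is a widely used rule of thumb, and the fixed equal-width binning is a simplification of what a production drift detector would do.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0            # guard against a constant feature
    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)
    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def should_retrain(reference, live, threshold=0.2):
    """Trigger retraining when drift exceeds the alert threshold."""
    return psi(reference, live) > threshold
```

In the retraining pipeline described above, a `should_retrain` check would kick off the approval workflow rather than retraining unconditionally.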
Configure Multi-Cloud ML Infrastructure
Set up ML infrastructure across AWS, Azure, and GCP with consistent tooling and disaster recovery capabilities.
Try these prompts
Design a basic ML pipeline using Kubeflow that includes data preprocessing, model training, and model evaluation stages. Include configuration for experiment tracking with MLflow.
Set up MLflow experiment tracking for a PyTorch training project. Include configuration for metrics logging, artifact storage, and model versioning.
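As a rough illustration of what that prompt asks for, here is a toy file-backed tracker standing in for MLflow's run API. `FileTracker` is hypothetical; real code would call `mlflow.start_run()` and `mlflow.log_metric()`, with PyTorch supplying the loss values.

```python
import json
import pathlib

# Hypothetical stand-in for an MLflow run: metrics accumulate in memory
# and are flushed to metrics.json, mimicking log_metric plus artifact
# storage in a local directory.
class FileTracker:
    def __init__(self, run_dir):
        self.run_dir = pathlib.Path(run_dir)
        self.run_dir.mkdir(parents=True, exist_ok=True)
        self.metrics = []

    def log_metric(self, key, value, step):
        self.metrics.append({"key": key, "value": value, "step": step})

    def end_run(self):
        path = self.run_dir / "metrics.json"
        path.write_text(json.dumps(self.metrics, indent=2))
        return path
```

In a PyTorch training loop you would call `tracker.log_metric("loss", loss.item(), step=epoch)` once per epoch, then `end_run()` when training finishes.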
Design a production ML deployment architecture on AWS SageMaker with auto-scaling, monitoring, and blue-green deployment capabilities. Include cost optimization strategies.
Design a complete MLOps platform architecture including: feature store, experiment tracking, CI/CD pipeline, model registry, monitoring, and drift detection. Specify tools and integration points for AWS or GCP.
Best practices
- Start with experiment tracking before building complex pipelines to understand data and model behavior
- Implement monitoring and alerting from the initial deployment, not as an afterthought
- Use infrastructure as code (Terraform, CloudFormation) for reproducible ML environments
Avoid
- Deploying models without automated testing or validation gates
- Skipping data quality checks before model training, which leads to poor model performance
- Using single cloud provider without considering vendor lock-in for critical ML systems