
mlops-engineer

Safe

Build Production ML Pipelines

Building and maintaining ML pipelines requires expertise in multiple tools and cloud platforms. This skill provides expert guidance on implementing end-to-end MLOps workflows with experiment tracking, model versioning, and automated deployment.

Supports: Claude Code (CC)
🥉 74 Bronze
1. Download the skill ZIP

2. Upload to Claude — go to Settings → Capabilities → Skills → Upload skill

3. Enable it and start using it

Test

Using "mlops-engineer". Design a basic ML pipeline using Kubeflow

Expected result:

  • Pipeline Structure: Data Ingestion → Preprocessing → Training → Evaluation → Model Registration
  • Each component runs as a Docker container with appropriate resource limits
  • MLflow integrated for tracking metrics and artifacts across all stages
  • Pipeline parameters defined for data paths, model hyperparameters, and thresholds
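The stage layout above can be sketched as plain Python functions chained in order. This is a minimal illustration only — in a real Kubeflow pipeline each step would be a containerized component built with the `kfp` SDK, and all function names and the source path here are hypothetical:

```python
# Toy sketch of the pipeline stages (ingest -> preprocess -> train ->
# evaluate -> register). Illustrative only; not Kubeflow component code.

def ingest(source):
    # Stand-in for reading raw records from a data source.
    return [{"x": i, "y": i % 2} for i in range(10)]

def preprocess(records):
    # Scale the feature column into [0, 1).
    return [{"x": r["x"] / 10.0, "y": r["y"]} for r in records]

def train(dataset):
    # Stand-in "model": the mean of the feature column.
    return sum(r["x"] for r in dataset) / len(dataset)

def evaluate(model, dataset, threshold=0.0):
    # Toy evaluation gate: the model must beat a threshold.
    return model > threshold

def register(model):
    # In practice this step would call the MLflow Model Registry.
    return {"model": model, "version": 1}

def run_pipeline(source="raw-data"):
    data = preprocess(ingest(source))
    model = train(data)
    if evaluate(model, data):
        return register(model)
    return None
```

The point of the sketch is the shape: each stage takes only the previous stage's output, which is what lets the real pipeline run each step as an isolated container with its own resource limits.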

Using "mlops-engineer". How do I set up model versioning with MLflow?

Expected result:

  1. Register the model in the MLflow Model Registry with a unique version
  2. Add model metadata including description and training parameters
  3. Configure stage transitions: None → Staging → Production
  4. Implement an approval workflow for production promotions
  5. Set up webhooks for notifications on model updates
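The stage-transition and approval steps above can be sketched as a tiny state machine. This is an illustration of the flow, not MLflow code — the real registry exposes transitions through its client API (e.g. `MlflowClient`), and the model name here is made up:

```python
# Illustrative state machine for registry stage transitions with an
# approval gate, mirroring None -> Staging -> Production. Not MLflow API.

ALLOWED = {
    "None": {"Staging"},
    "Staging": {"Production", "None"},
    "Production": {"None"},  # archiving / rollback
}

class ModelVersion:
    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.stage = "None"
        self.approved = False

    def approve(self):
        # Hypothetical human sign-off required before production.
        self.approved = True

    def transition(self, target):
        if target not in ALLOWED[self.stage]:
            raise ValueError(f"{self.stage} -> {target} not allowed")
        if target == "Production" and not self.approved:
            raise PermissionError("production promotion needs approval")
        self.stage = target
        return self.stage

mv = ModelVersion("churn-model", 3)
mv.transition("Staging")
mv.approve()
mv.transition("Production")
```

Encoding the allowed transitions as data (the `ALLOWED` map) is what makes it easy to bolt on webhooks: fire a notification whenever `transition` succeeds.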

Security Audit

Safe
v1 • 2/25/2026

Prompt-only skill with no executable code. Static analysis scanned 0 files and detected 0 security issues. The skill provides MLOps guidance through text-based instructions only. No network requests, file system access, or external commands. Risk score: 0/100.

0 Files analyzed
0 Lines analyzed
0 Findings
1 Total audits
No security issues found
Audited by: claude

Quality Score

Architecture: 38
Maintainability: 100
Content: 87
Community: 50
Security: 100
Spec compliance: 91

What You Can Build

Design ML Platform Architecture

Create a comprehensive MLOps platform design for an enterprise needing to deploy models at scale with experiment tracking and model versioning.

Implement Automated Retraining Pipeline

Build an automated pipeline that retrains models when data drift is detected, with approval workflows and rollback capabilities.

Configure Multi-Cloud ML Infrastructure

Set up ML infrastructure across AWS, Azure, and GCP with consistent tooling and disaster recovery capabilities.

Try These Prompts

Basic ML Pipeline Setup
Design a basic ML pipeline using Kubeflow that includes data preprocessing, model training, and model evaluation stages. Include configuration for experiment tracking with MLflow.
Experiment Tracking Setup
Set up MLflow experiment tracking for a PyTorch training project. Include configuration for metrics logging, artifact storage, and model versioning.
Production Deployment Architecture
Design a production ML deployment architecture on AWS SageMaker with auto-scaling, monitoring, and blue-green deployment capabilities. Include cost optimization strategies.
Complete MLOps Platform
Design a complete MLOps platform architecture including: feature store, experiment tracking, CI/CD pipeline, model registry, monitoring, and drift detection. Specify tools and integration points for AWS or GCP.

Best Practices

  • Start with experiment tracking before building complex pipelines to understand data and model behavior
  • Implement monitoring and alerting from the initial deployment, not as an afterthought
  • Use infrastructure as code (Terraform, CloudFormation) for reproducible ML environments

Avoid

  • Deploying models without automated testing or validation gates
  • Skipping data quality checks before model training, which leads to poor model performance
  • Using single cloud provider without considering vendor lock-in for critical ML systems

Frequently Asked Questions

What is MLOps?
MLOps (Machine Learning Operations) is the practice of deploying and maintaining ML models in production reliably and efficiently. It combines ML, DevOps, and data engineering to automate the ML lifecycle.
Which experiment tracking tool should I use?
MLflow is open-source and integrates widely. Weights & Biases offers excellent visualization. Choose based on team size, budget, and required features. Many teams start with MLflow and upgrade as needs grow.
How do I handle model updates in production?
Use canary deployments or blue-green strategies. Monitor new model performance closely. Implement automatic rollback if metrics degrade. Always maintain the previous working model version for quick recovery.
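The rollback rule in that answer can be sketched as a small deployment helper. This is a pure-Python illustration under made-up names — a real system would wire the same logic to its serving platform and monitoring stack:

```python
# Toy blue-green deployment with automatic rollback when the live
# model's metric degrades past a tolerance. Names are illustrative.

class Deployment:
    def __init__(self, model_id, baseline_metric):
        self.live = model_id
        self.previous = None
        self.baseline = baseline_metric

    def deploy(self, model_id):
        # Keep the previous working version for quick recovery.
        self.previous = self.live
        self.live = model_id

    def observe(self, metric, tolerance=0.02):
        # Roll back if the live model degrades beyond tolerance.
        if metric < self.baseline - tolerance and self.previous:
            self.live, self.previous = self.previous, None
            return "rolled-back"
        self.baseline = max(self.baseline, metric)
        return "ok"

d = Deployment("model-v1", baseline_metric=0.90)
d.deploy("model-v2")
status = d.observe(0.85)  # metric degraded, so revert to model-v1
```

The key design choice mirrors the FAQ advice: `deploy` never discards the prior version, so rollback is a pointer swap rather than a redeploy.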
What is a feature store?
A feature store is a centralized repository for storing, managing, and serving ML features. It ensures consistency between training and inference, enables feature sharing across teams, and handles feature computation for both batch and real-time use cases.
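The training/serving consistency idea can be made concrete with a toy in-memory sketch: both paths read features through one interface, so they cannot drift apart. Real feature stores (e.g. Feast) additionally handle persistence, freshness, and point-in-time correctness; the entity and feature names here are invented:

```python
# Toy in-memory feature store. Training and inference both fetch
# through the same read() interface. Illustrative only.

from collections import defaultdict

class FeatureStore:
    def __init__(self):
        self._features = defaultdict(dict)

    def write(self, entity_id, name, value):
        self._features[entity_id][name] = value

    def read(self, entity_id, names):
        # Missing features come back as None rather than raising.
        row = self._features[entity_id]
        return [row.get(n) for n in names]

store = FeatureStore()
store.write("user-42", "avg_order_value", 37.5)
store.write("user-42", "orders_30d", 4)

# Both the training job and the serving endpoint would call this:
features = store.read("user-42", ["avg_order_value", "orders_30d"])
```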
How do I monitor ML models in production?
Monitor three key areas: data quality (distribution shifts), model performance (accuracy, latency), and business metrics. Use tools like Prometheus, Grafana, or cloud-native monitoring. Set up alerts for drift detection and performance degradation.
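A minimal check along the "data quality" axis can be sketched as a mean-shift alert. This is a deliberately crude illustration — production drift detection would use proper statistical tests (e.g. Kolmogorov–Smirnov) over sliding windows, via dedicated monitoring tooling:

```python
# Toy drift alert: flag a feature when its live mean moves too far
# from the training-time mean, relative to the training spread.
# Illustrative only; not a substitute for a real statistical test.

from statistics import mean, stdev

def drifted(train_values, live_values, k=3.0):
    mu, sigma = mean(train_values), stdev(train_values)
    # Compare the live mean against a k-sigma band around mu,
    # tightened by the live sample size (z-test-style heuristic).
    return abs(mean(live_values) - mu) > k * sigma / len(live_values) ** 0.5

train = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
stable = [10.1, 9.9, 10.4, 10.0]     # close to training distribution
shifted = [14.0, 15.2, 14.6, 15.0]   # clearly shifted upward
```

In a monitoring stack, a check like this would run on a schedule and push a gauge or alert into Prometheus/Grafana when it trips.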
Can this skill help with on-premise ML deployment?
Yes. The skill covers Kubernetes-based deployments, Kubeflow, and container orchestration that work equally well on-premise or in cloud environments. It also addresses hybrid cloud architectures.

Developer Details

File structure

📄 SKILL.md