pytorch-lightning
Build neural networks with PyTorch Lightning
Also available from: davila7
This skill helps you organize PyTorch code into reusable LightningModules. It provides templates and documentation for configuring multi-GPU training, implementing data pipelines, and setting up experiment tracking with popular tools like W&B and TensorBoard.
Download the skill ZIP
Upload in Claude
Go to Settings → Capabilities → Skills → Upload skill
Toggle on and start using
Test it
Using "pytorch-lightning". Create a simple CNN LightningModule for image classification
Expected outcome:
- A LightningModule class with __init__, training_step, validation_step, and configure_optimizers
- Example CNN architecture using torch.nn layers
- Training loop that returns loss and logs metrics with self.log()
- Optimizer configuration with Adam and learning rate scheduler
Using "pytorch-lightning". Configure Trainer for GPU training with checkpointing
Expected outcome:
- Trainer configuration with accelerator='gpu', devices=2
- ModelCheckpoint callback to save best model based on validation loss
- EarlyStopping callback to halt training when metrics plateau
- Progress bar and logger configuration
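A Trainer configuration matching that outcome could be sketched as follows. It assumes a machine with two GPUs; the `max_epochs` and `patience` values are illustrative:

```python
import lightning as L
from lightning.pytorch.callbacks import ModelCheckpoint, EarlyStopping

# Save the single best checkpoint, ranked by validation loss
checkpoint_cb = ModelCheckpoint(
    monitor="val_loss",
    mode="min",
    save_top_k=1,
    filename="best-{epoch}-{val_loss:.3f}",
)

# Stop early once val_loss has not improved for 5 validation epochs
early_stop_cb = EarlyStopping(monitor="val_loss", mode="min", patience=5)

trainer = L.Trainer(
    accelerator="gpu",
    devices=2,
    max_epochs=50,
    callbacks=[checkpoint_cb, early_stop_cb],
    enable_progress_bar=True,
)
# trainer.fit(model, datamodule=dm)
```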
Security Audit
Safe
All 843 static findings are false positives. The 'Ruby/shell backtick execution' alerts are markdown code blocks, the 'weak cryptographic algorithm' alerts flag normal text like 'DDP/FSDP', and 'eval()' refers to PyTorch's model.eval() method. This is legitimate deep learning documentation with no malicious code.
Risk Factors
⚙️ External commands (4)
⚡ Contains scripts (2)
🌐 Network access (2)
What You Can Build
Organize research experiments
Structure PyTorch code into reusable LightningModules for cleaner experimentation and faster iteration.
Scale training to multiple GPUs
Configure distributed training across clusters with DDP, FSDP, or DeepSpeed for large model training.
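Switching strategies is a one-argument change on the Trainer. A minimal sketch, assuming four GPUs on one node:

```python
import lightning as L

# Data-parallel replicas, one process per GPU
trainer = L.Trainer(accelerator="gpu", devices=4, strategy="ddp")

# For models too large for one GPU, shard parameters instead:
# trainer = L.Trainer(accelerator="gpu", devices=4, strategy="fsdp")
# DeepSpeed stages are selected similarly, e.g. strategy="deepspeed_stage_2"
```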
Track experiments automatically
Integrate with W&B, TensorBoard, or MLflow to log metrics, hyperparameters, and model checkpoints.
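Loggers are passed to the Trainer the same way. A sketch assuming `wandb` is installed and you are logged in; the project and directory names are illustrative:

```python
import lightning as L
from lightning.pytorch.loggers import WandbLogger, TensorBoardLogger

# W&B: streams metrics to the hosted dashboard, optionally uploads checkpoints
wandb_logger = WandbLogger(project="my-experiments", log_model=True)

# TensorBoard: writes event files locally under save_dir/name
tb_logger = TensorBoardLogger(save_dir="lightning_logs", name="cnn")

# A Trainer accepts one logger or a list of several
trainer = L.Trainer(logger=[wandb_logger, tb_logger], max_epochs=10)
```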
Try These Prompts
Show me how to create a LightningModule for an image classifier with training_step, validation_step, and configure_optimizers methods.
How do I configure a Trainer for multi-GPU training using DDP strategy with 4 GPUs on a single node?
Create a LightningDataModule for loading image data with custom transforms for training, validation, and test sets.
Set up Weights & Biases logging with WandbLogger in PyTorch Lightning to track training metrics and hyperparameters.
Best Practices
- Use self.device instead of .cuda() for device-agnostic code that works on GPU and CPU
- Call self.save_hyperparameters() in __init__() to save configuration for reproducibility
- Use self.log() with sync_dist=True when logging metrics in distributed training
Avoid
- Do not manually call loss.backward() or optimizer.step() - let the Trainer handle optimization
- Avoid mixing research code (model architecture, loss computation) with engineering code (device management, checkpointing)
- Do not use .cuda() directly - use self.to(device) or rely on Lightning's automatic device placement
Frequently Asked Questions
How do I install PyTorch Lightning?
What is the difference between DDP, FSDP, and DeepSpeed?
How do I debug my model quickly?
Can I use this skill for inference only?
How do I resume training from a checkpoint?
What loggers are supported?
Developer Details
Author
K-Dense-AI
License
Apache-2.0 license
Repository
https://github.com/K-Dense-AI/claude-scientific-skills/tree/main/scientific-skills/pytorch-lightning
Ref
main
File structure