hugging-face-cli
Manage Hugging Face Hub from the Terminal
Working with AI models and datasets on Hugging Face Hub requires multiple tools and manual steps. This skill streamlines the workflow by providing direct CLI access to download, upload, and manage ML resources through unified commands.
Download the skill ZIP
Upload it to Claude
Go to Settings → Capabilities → Skills → Upload skill
Enable it and start using it
Test it
Use "hugging-face-cli". Download a model to a local directory
Expected result:
Model meta-llama/Llama-3.2-1B-Instruct downloaded successfully to ./models directory. Total size: 2.1 GB across 15 files.
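The download above can be sketched as a pair of commands. This assumes `huggingface_hub` is installed (`pip install -U huggingface_hub`, which provides the `hf` entry point); note that meta-llama repositories are gated, so you must accept the license on the model page and authenticate before downloading.

```shell
# Sketch: requires `pip install -U huggingface_hub` and Hub access.
# meta-llama repos are gated: accept the license on the model page first.
hf auth login    # or export HF_TOKEN=<your token>
hf download meta-llama/Llama-3.2-1B-Instruct --local-dir ./models
```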
Use "hugging-face-cli". List cached repositories
Expected result:
Cached repositories: gpt2 (1.2 GB), bert-base-uncased (440 MB), t5-base (890 MB). Total cache usage: 2.53 GB.
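A listing like the one above can be reproduced locally with the cache subcommands, which read `~/.cache/huggingface` without touching the network. The longer `huggingface-cli` spelling is used here; newer releases also ship the shorter `hf` entry point.

```shell
# Sketch: inspect and trim the local Hub cache (no network access needed).
huggingface-cli scan-cache        # table of cached repos with sizes
huggingface-cli scan-cache -v     # per-revision detail
huggingface-cli delete-cache      # interactive picker to free space
```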
Security audit
Safe. Static analysis detected 76 patterns in the documentation content, but all are false positives. The skill file is Markdown documentation showing usage examples for the official Hugging Face hf CLI, not executable code. No actual security risks exist: the external-command patterns are CLI documentation examples, the network references are URLs in documentation, and the cryptographic warnings do not match any actual crypto implementation.
Risk factors
⚙️ External commands (3)
🌐 Network access (1)
Quality score
What you can build
ML Engineer Model Deployment
Download pre-trained models from Hugging Face Hub for local deployment and inference serving.
Researcher Dataset Management
Upload experimental datasets to private repositories and share with collaborators through versioned releases.
Developer Cache Optimization
Manage local model cache to optimize storage and quickly access frequently used models for development.
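The dataset-sharing workflow above might look like the following. The repo name and local path are hypothetical, and the flag spellings reflect the current hf CLI as best understood; verify them with `hf repo create --help` and `hf upload --help`.

```shell
# Sketch: hypothetical repo name (my-username/experiments) and local path.
# Create a private dataset repo, then push a versioned snapshot into it.
hf repo create my-username/experiments --repo-type dataset --private
hf upload my-username/experiments ./data . \
    --repo-type dataset \
    --commit-message "v1: initial experimental release"
```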
Try these prompts
Download the model meta-llama/Llama-3.2-1B-Instruct to my local models directory.
Upload my trained model from the ./output folder to my Hugging Face repository with the commit message 'Initial model release'.
Find datasets related to text classification with high download counts and show me their details.
Check my current cache usage, remove unused models, then run a GPU job to process my dataset using the specified image and command.
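The last prompt chains cache cleanup with a GPU job. A rough sketch under the `hf jobs` interface follows; the image, command, and flavor name are placeholders, and real hardware flavors and their prices should be checked against `hf jobs run --help` before launching anything billable.

```shell
# Sketch: free cache space, then launch a compute job on Hub infrastructure.
# Placeholders: the image, the command, and the a10g-small flavor name.
huggingface-cli delete-cache
hf jobs run --flavor a10g-small ubuntu:22.04 echo "processing dataset"
```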
Best practices
- Always use the --quiet flag when you only need the download path for scripting purposes
- Create private repositories for sensitive models and datasets before uploading proprietary content
- Use commit messages that clearly describe changes when uploading model updates
- Regularly prune detached cache revisions to free up disk space
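The first practice above can be sketched as a one-liner: with `--quiet`, `hf download` prints only the local snapshot path, so it slots directly into a shell variable. `gpt2` is a small public model used here purely as an example.

```shell
# Sketch: capture the local path of a downloaded model for later scripting.
MODEL_PATH=$(hf download gpt2 --quiet)
echo "Model files live in: $MODEL_PATH"
```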
Avoid
- Do not upload files containing API keys, credentials, or sensitive configuration data to public repositories
- Avoid downloading entire large models without checking available disk space first
- Do not share your HF_TOKEN in command history or commit messages
- Avoid launching compute jobs without first checking the cost of the hardware tier selected with the --flavor option
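The disk-space caveat above can be automated before any large download. This sketch uses only standard POSIX tools; the 5 GB threshold is an arbitrary example, and the commented-out download line is illustrative.

```shell
# Sketch: abort early when free space is below a chosen threshold.
need_gb=5                                     # example threshold, adjust per model
avail_kb=$(df -k . | awk 'NR==2 {print $4}')  # free space in the current dir, in KB
avail_gb=$((avail_kb / 1024 / 1024))
if [ "$avail_gb" -lt "$need_gb" ]; then
    echo "Only ${avail_gb} GB free, need ${need_gb} GB; aborting" >&2
    exit 1
fi
echo "${avail_gb} GB free, proceeding"
# hf download meta-llama/Llama-3.2-1B-Instruct --local-dir ./models
```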