
infsh-cli

Low Risk · ⚙️ External commands · 🌐 Network access · 🔑 Env variables · 📁 Filesystem access

Run 150+ AI Models with Simple CLI Commands

Also available from: tool-belt, skillssh

Access cloud-based AI inference without managing GPU infrastructure. Execute image generation, video creation, LLM calls, and more through a unified command-line interface.

Supports: Claude Code (CC), Codex
📊 Quality score: 69 (Adequate)
1. Download the skill ZIP
2. Upload in Claude: go to Settings → Capabilities → Skills → Upload skill
3. Toggle on and start using

Test it

Using "infsh-cli". infsh app run falai/flux-dev-lora --input '{"prompt": "a cat astronaut"}'

Expected outcome:

Task completed with generated image URL: https://cloud.inference.sh/generated/image-123.png

Using "infsh-cli". infsh app run tavily/search-assistant --input '{"query": "latest AI developments"}'

Expected outcome:

Search results returned with 5 relevant articles, summaries, and source URLs
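The two test prompts above can be combined into a small script. This is a sketch using only commands and app slugs shown on this page; the `command -v` guard keeps it safe to run on machines where the CLI is not yet installed.

```shell
# Run the two documented examples back to back (app slugs from this page).
INPUT='{"prompt": "a cat astronaut"}'
if command -v infsh >/dev/null 2>&1; then
  infsh app run falai/flux-dev-lora --input "$INPUT"
  infsh app run tavily/search-assistant --input '{"query": "latest AI developments"}'
else
  echo "infsh not installed; see the FAQ for the installer" >&2
fi
echo "$INPUT"
```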

Security Audit

Low Risk
v1 • 3/15/2026

Static analysis flagged 203 patterns across 5 files (596 lines), but all findings are false positives. The detected 'external_commands' are markdown documentation showing CLI usage examples, not actual code execution. URLs are legitimate service endpoints. The curl|sh installation pattern is standard for CLI tools. Network access to inference.sh APIs is the intended functionality. Environment variable usage for API keys follows security best practices.

  • Files scanned: 5
  • Lines analyzed: 596
  • Findings: 8
  • Total audits: 1
Medium Risk Issues (1)
External Command Execution via CLI
The skill uses the infsh CLI tool which executes shell commands. This is inherent to the skill's purpose of running AI inference via command line. Users should verify the CLI source and understand it will make network requests to inference.sh APIs.
Low Risk Issues (3)
Network Requests to External APIs
The skill makes HTTP requests to inference.sh and related domains (cli.inference.sh, dist.inference.sh, cloud.inference.sh). This is expected behavior for a cloud AI inference client.
API Key Storage
The skill stores authentication credentials locally after login and supports INFSH_API_KEY environment variable. This is standard practice for CLI authentication.
Local File Access
The CLI supports uploading local files by accepting file paths in input parameters. Users should only provide files they intend to upload to the inference service.

Risk Factors

⚙️ External commands (2)
🌐 Network access (2)
🔑 Env variables (1)
📁 Filesystem access (1)
Audited by: claude

Quality Score

  • Architecture: 45
  • Maintainability: 100
  • Content: 87
  • Community: 38
  • Security: 79
  • Spec Compliance: 83

What You Can Build

Developer Prototyping AI Features

Quickly test different AI models for image generation, video creation, or text processing without setting up local GPU infrastructure

Content Creator Asset Generation

Generate marketing images, social media videos, and 3D models on-demand through simple CLI commands

Automation Engineer Workflow Integration

Integrate AI capabilities into CI/CD pipelines and automation scripts using environment variable authentication
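A minimal sketch of the CI/CD use case, assuming `INFSH_API_KEY` is supplied as a CI secret. The app slug and `--no-wait` flag come from the examples on this page; everything is guarded so the script degrades gracefully when the CLI or key is absent.

```shell
# CI step sketch: non-interactive auth via INFSH_API_KEY (no browser login).
INPUT='{"query": "latest AI developments"}'
if [ -n "${INFSH_API_KEY:-}" ] && command -v infsh >/dev/null 2>&1; then
  # --no-wait returns immediately so the pipeline step is not blocked.
  infsh app run tavily/search-assistant --input "$INPUT" --no-wait
else
  echo "infsh or INFSH_API_KEY missing; skipping inference call" >&2
fi
```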

Try These Prompts

Basic Image Generation
Generate an image using FLUX with the prompt: 'a futuristic city skyline at night with flying cars'
Video Creation from Text
Create a 5-second video using Veo 3.1 with the prompt: 'ocean waves crashing on rocky shore at sunset'
LLM Integration Test
Call Claude via OpenRouter to explain quantum computing in simple terms, then format the response as markdown
Batch Image Processing Pipeline
Set up a script that reads image paths from a JSON file, runs each through the Topaz upscaler, and saves the output URLs to a results file
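The batch pipeline prompt above can be sketched as a short script. Assumptions: `jq` is installed, `images.json` has the shape `{"images": [...]}`, and `some-app/upscaler` is a HYPOTHETICAL slug standing in for the Topaz upscaler; look up the real one before running.

```shell
# Batch sketch: read paths from JSON, run each through an upscaler,
# collect output in results.txt.
printf '%s' '{"images": ["a.png", "b.png"]}' > images.json
: > results.txt
jq -r '.images[]' images.json | while IFS= read -r path; do
  if command -v infsh >/dev/null 2>&1; then
    # "some-app/upscaler" is a placeholder slug, not a documented app.
    infsh app run some-app/upscaler --input '{"image": "'"$path"'"}' >> results.txt
  else
    echo "skipped $path" >> results.txt
  fi
done
```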

Best Practices

  • Store API keys in environment variables (INFSH_API_KEY) rather than hardcoding in scripts
  • Use --no-wait flag for long-running tasks to avoid blocking your terminal
  • Generate sample input files with 'infsh app sample' to understand required input schema
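The first practice above can be checked mechanically: read the key from the environment and fall back to interactive login only when it is missing. The variable name `INFSH_API_KEY` is from this page; the `AUTH` variable is illustrative.

```shell
# Decide the auth mode without ever embedding a key in the script text.
if [ -n "${INFSH_API_KEY:-}" ]; then
  AUTH=env      # key provided by the environment (CI secret, shell profile)
else
  AUTH=login    # fall back to browser auth via `infsh login`
fi
echo "auth mode: $AUTH"
```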

Avoid

  • Do not commit API keys or authentication tokens to version control
  • Avoid running unverified CLI binaries from untrusted sources
  • Do not upload sensitive files containing personal or confidential information

Frequently Asked Questions

How do I install the inference.sh CLI?
Run 'curl -fsSL https://cli.inference.sh | sh' then authenticate with 'infsh login'. The installer detects your OS and downloads the correct binary.
What AI models are available through inference.sh?
Over 150 models including FLUX for images, Veo for video, Claude and Gemini for text, Tavily for search, and specialized models for 3D, audio, and Twitter automation.
How does billing work?
inference.sh uses a credit-based system. Each model run consumes credits based on compute time and resources. Check your account dashboard for current pricing.
Can I use this in CI/CD pipelines?
Yes. Set the INFSH_API_KEY environment variable in your CI environment and run CLI commands non-interactively. No browser login required.
How do I upload local files for processing?
Provide a local file path in the input JSON. The CLI automatically uploads files before sending to the API. Supports absolute paths, relative paths, and home directory shortcuts.
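Per the answer above, a local path goes straight into the input JSON and the CLI handles the upload. Sketch below; the `image` field name and `some-app/upscaler` slug are assumptions, so check the real schema with `infsh app sample` first.

```shell
# Build an input payload that points at a local file (CLI uploads it).
IMG="$HOME/photos/cat.png"          # relative paths and ~ also work per the FAQ
INPUT='{"image": "'"$IMG"'"}'
echo "$INPUT"
if command -v infsh >/dev/null 2>&1; then
  # Placeholder slug; substitute a real app before running.
  infsh app run some-app/upscaler --input "$INPUT"
fi
```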
What happens if a task fails?
The CLI returns an error message with the failure reason. Common errors include invalid input schema, app not found, or quota exceeded. Check 'infsh task get' for detailed status.
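The failure path described above can be scripted by checking the exit status and pointing the operator at `infsh task get`. A sketch under the assumption that a non-zero exit code signals failure; the `<task-id>` placeholder is left as-is because the real id comes from the run output.

```shell
# Wrap a run and branch on its exit status.
run_task() {
  if command -v infsh >/dev/null 2>&1; then
    infsh app run "$1" --input "$2"
  else
    return 1   # simulate a failure when the CLI is unavailable
  fi
}
if run_task falai/flux-dev-lora '{"prompt": "a cat astronaut"}'; then
  STATUS=ok
else
  STATUS=failed
  echo "task failed; inspect with: infsh task get <task-id>" >&2
fi
echo "$STATUS"
```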

Developer Details