
deep-research

Safe

Execute autonomous AI-powered research tasks

Also available from: 199-biotechnologies, lumacoder, 21pounder

Manual research is time-consuming and often incomplete. This skill automates multi-step research using Google Gemini Deep Research Agent to deliver comprehensive, cited reports in minutes.

Supports: Claude Code (CC)
📊 Quality: 71 (Adequate)
1. Download the skill ZIP
2. Upload in Claude: go to Settings → Capabilities → Skills → Upload skill
3. Toggle on and start using

Test it

Using "deep-research". Research the history of Kubernetes

Expected outcome:

A comprehensive markdown report covering Kubernetes origins at Google, open source release in 2014, CNCF adoption, major version releases, enterprise adoption timeline, and current ecosystem state with citations.

Using "deep-research". Compare Python web frameworks with structured output

Expected outcome:

Structured report with Executive Summary, Comparison Table (features, performance, learning curve), individual framework analysis, and Recommendations based on use case scenarios.

Security Audit

Safe
v1 • 2/24/2026

The static analyzer detected pattern matches for external commands, network access, and environment variables. All findings are FALSE POSITIVES - the SKILL.md file is purely documentation showing command-line usage examples for a legitimate Python research tool. No executable code or malicious patterns exist. The skill uses Google Gemini API legitimately for research tasks.

- Files scanned: 1
- Lines analyzed: 115
- Findings: 0
- Total audits: 1
No security issues found
Audited by: claude

Quality Score

- Architecture: 38
- Maintainability: 100
- Content: 87
- Community: 33
- Security: 100
- Spec Compliance: 91

What You Can Build

Market Analysis Reports

Generate comprehensive market research including size, trends, key players, and growth opportunities for business planning and investment decisions.

Technical Literature Reviews

Automate the process of reviewing academic papers, technical documentation, and research to synthesize findings into coherent summaries.

Competitive Intelligence

Research competitors, compare products and technologies, and generate comparison tables for informed decision-making.

Try These Prompts

Basic Research Query
Research the history and evolution of Kubernetes from its origins to present day, including key milestones and major adopters.
Comparative Analysis
Compare Python web frameworks (Django, Flask, FastAPI) across performance, ease of use, ecosystem, and best use cases. Include a comparison table.
Market Research with Streaming
Analyze the electric vehicle battery market including key manufacturers, technology trends, supply chain dynamics, and 5-year projections. Stream progress updates.
Multi-Session Research
Continue the previous research on AI chip market. Elaborate on point 2 regarding NVIDIA's competitive positioning and emerging competitors.

Best Practices

  • Set the GEMINI_API_KEY environment variable before running research tasks
  • Use the --stream flag for long-running research to monitor progress in real-time
  • Specify structured output format with --format for programmatic consumption of results
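As a rough sketch of these practices, assuming a hypothetical `deep_research.py` entry point (the skill's actual script name is not documented on this page), a session might look like the following; the key value is a placeholder and the research invocations are shown only as comments:

```shell
# Set the API key before any research run (placeholder value shown).
export GEMINI_API_KEY="YOUR_KEY_HERE"

# Fail fast if the key is missing, rather than partway through a task.
: "${GEMINI_API_KEY:?set GEMINI_API_KEY first}"

# Hypothetical invocations using the flags documented above --
# substitute the skill's real entry point:
#   python deep_research.py "Research the history of Kubernetes" --stream
#   python deep_research.py "Compare Python web frameworks" --format json
echo "API key configured: ${GEMINI_API_KEY:+yes}"
```

The `:?` parameter expansion aborts with a clear message if the variable is unset, which surfaces a missing key immediately instead of after minutes of research time.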

Avoid

  • Running research without estimating time and cost - always inform users of 2-10 minute duration and $2-5 cost
  • Using --no-wait without implementing a follow-up status check mechanism
  • Expecting instant results - deep research is inherently a multi-step process that takes time

Frequently Asked Questions

How do I get a Gemini API key?
Visit Google AI Studio at aistudio.google.com and create an API key. Free tier available with usage limits.
How long does a research task take?
Typically 2-10 minutes depending on query complexity. Simple queries complete faster while comprehensive market analysis takes longer.
What is the cost per research task?
Costs range from $2-5 per task based on complexity. Token usage averages 250k-900k input and 60k-80k output tokens.
Can I continue research from a previous session?
Yes, use the --continue flag with the previous interaction_id to build on earlier research with full context.
What output formats are supported?
Default is human-readable markdown. Use --json for structured data or --raw for unprocessed API responses.
How do I monitor long-running research tasks?
Use --stream for real-time progress updates or --status with the interaction_id to poll for current status.
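The `--status` polling pattern described above can be sketched as a small loop. The entry point name and interaction id here are hypothetical placeholders, and the real status call is left commented so the sketch stays self-contained:

```shell
# Id returned by a prior --no-wait run (placeholder, not a real id).
INTERACTION_ID="example-interaction-id"

status="running"   # stub; real usage would query the tool, e.g.:
                   #   status=$(python deep_research.py --status "$INTERACTION_ID")
for attempt in 1 2 3; do
  # Re-query here in real usage, then stop once the task reports done.
  [ "$status" = "done" ] && break
  sleep 0          # a real poller would back off, e.g. sleep 30
done
echo "last status: $status"
```

Pairing `--no-wait` with a loop like this satisfies the "Avoid" note above: the task is launched without blocking, but a follow-up status check still exists.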

Developer Details

File structure

📄 SKILL.md