
deep-research

๋‚ฎ์€ ์œ„ํ—˜ ๐ŸŒ ๋„คํŠธ์›Œํฌ ์ ‘๊ทผ๐Ÿ“ ํŒŒ์ผ ์‹œ์Šคํ…œ ์•ก์„ธ์Šค

Conduct comprehensive deep research

๋˜ํ•œ ๋‹ค์Œ์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: sickn33,lumacoder,199-biotechnologies,21pounder,21pounder

Need thorough, citation-backed research across multiple sources? This skill delivers verified, professional-grade reports through an 8-phase methodology with source credibility scoring and automatic citation validation. Stop spending hours searching and synthesizingโ€”get comprehensive analysis ready for decisions.

์ง€์›: Claude Codex Code(CC)
๐Ÿ“Š 71 ์ ์ ˆํ•จ
1

์Šคํ‚ฌ ZIP ๋‹ค์šด๋กœ๋“œ

2

Claude์—์„œ ์—…๋กœ๋“œ

์„ค์ • โ†’ ๊ธฐ๋Šฅ โ†’ ์Šคํ‚ฌ โ†’ ์Šคํ‚ฌ ์—…๋กœ๋“œ๋กœ ์ด๋™

3

ํ† ๊ธ€์„ ์ผœ๊ณ  ์‚ฌ์šฉ ์‹œ์ž‘

Try it out

Using "deep-research": Use deep research to analyze quantum computing state of the art in 2025

์˜ˆ์ƒ ๊ฒฐ๊ณผ:

  • Executive Summary with 3-5 key findings (under 250 words)
  • Research scope, methodology, and assumptions documented
  • 4-8 detailed findings with 3+ source citations each
  • Synthesis of patterns and novel insights
  • Limitations, counterevidence, and areas of uncertainty
  • Actionable recommendations with timelines
  • Complete bibliography with all 15-30 sources verified
  • Methodology appendix with credibility scores

๋ณด์•ˆ ๊ฐ์‚ฌ

๋‚ฎ์€ ์œ„ํ—˜
v3 โ€ข 1/10/2026

This is a legitimate research skill with minimal risk. All Python scripts use standard library only. Network access is limited to DOI/URL verification for citation validation - a core feature. File system access is restricted to user-specific directories. No code execution, credential access, or data exfiltration patterns detected.
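The audit's claim that network access is limited to DOI/URL verification can be pictured with a minimal, standard-library-only sketch. The function names below are illustrative assumptions, not the skill's actual API from verify_citations.py:

```python
import urllib.request
import urllib.error

def doi_to_url(doi: str) -> str:
    """Turn a bare DOI like '10.1000/xyz123' into its resolver URL."""
    return f"https://doi.org/{doi}"

def is_reachable(url: str, timeout: float = 10.0) -> bool:
    """HEAD-request the URL; any HTTP response (even a 403 from a
    paywall) shows the citation target resolves somewhere."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "citation-check/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # server answered with an error code: link exists
    except (urllib.error.URLError, OSError):
        return False  # DNS failure, refused connection, or timeout
```

Only the URL itself crosses the network here, which matches the audit's note that no research data is exfiltrated.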

Files scanned: 11
Lines analyzed: 2,942
Findings: 2
Total audits: 3

์œ„ํ—˜ ์š”์ธ

๐ŸŒ ๋„คํŠธ์›Œํฌ ์ ‘๊ทผ (1)
๐Ÿ“ ํŒŒ์ผ ์‹œ์Šคํ…œ ์•ก์„ธ์Šค (1)

ํ’ˆ์งˆ ์ ์ˆ˜

59
์•„ํ‚คํ…์ฒ˜
100
์œ ์ง€๋ณด์ˆ˜์„ฑ
81
์ฝ˜ํ…์ธ 
26
์ปค๋ฎค๋‹ˆํ‹ฐ
90
๋ณด์•ˆ
78
์‚ฌ์–‘ ์ค€์ˆ˜

What you can build

Technology evaluation

Compare frameworks, tools, or platforms with verified sources and structured analysis

Market research

Analyze industry trends, funding patterns, or competitive landscapes

Academic investigation

Synthesize scientific literature with citation tracking and credibility assessment
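Credibility assessment of this kind is often approximated by weighting the source's host domain. As a hedged sketch only: the tier weights below are invented for illustration, since the skill's real scoring rubric is not published on this page:

```python
from urllib.parse import urlparse

# Hypothetical tier weights; the skill's actual rubric may differ.
DOMAIN_TIERS = {
    ".gov": 0.95,
    ".edu": 0.90,
    "doi.org": 0.95,
    "arxiv.org": 0.80,
    "wikipedia.org": 0.60,
}
DEFAULT_SCORE = 0.50

def credibility_score(url: str) -> float:
    """Score a source by its host domain; unknown hosts get a neutral default."""
    host = urlparse(url).netloc.lower()
    for suffix, score in DOMAIN_TIERS.items():
        if host.endswith(suffix):
            return score
    return DEFAULT_SCORE
```

A scheme like this lets a report rank its bibliography and flag findings that rest only on low-credibility sources.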

Try these prompts

Quick overview
Use deep research in quick mode to explore [topic] and identify key sources
Standard analysis
Use deep research to analyze [specific question] with 15+ verified sources
Deep investigation
Use deep research in deep mode to compare [A vs B] with full triangulation and critique
UltraDeep report
Use deep research in ultradeep mode to generate a comprehensive report on [topic] covering all major aspects

Best practices

  • Specify your research mode (quick/standard/deep/ultradeep) based on time available and depth needed
  • Frame questions clearly with contextโ€”specific questions yield better results
  • Review Phase 1 (Scope) output to ensure research is aligned with your needs
  • Use citation numbers in reports to trace claims back to source material

What to avoid

  • Using this skill for simple lookups that WebSearch can answer in one query
  • Asking for debugging helpโ€”this skill focuses on research, not code fixes
  • Expecting real-time updates on breaking newsโ€”research relies on indexed sources
  • Using for time-sensitive queries without specifying urgency in the research scope

์ž์ฃผ ๋ฌป๋Š” ์งˆ๋ฌธ

What modes are available and when should I use each?
Quick (2-5 min) for exploration, Standard (5-10 min) for most questions, Deep (10-20 min) for important decisions, UltraDeep (20-45 min) for comprehensive reports.
How many sources does a typical report include?
Quick mode: 10+ sources, Standard: 15-30, Deep: 25-40, UltraDeep: 30-50+. All sources are scored for credibility.
Can I use this skill with other Claude tools?
Yes. Use WebSearch and WebFetch first, then invoke deep research for synthesis. The skill integrates with Claude Code workflow.
Is my data safe when using citation verification?
Citation verification only checks DOI/URL accessibilityโ€”it does not send your research data externally. Only minimal metadata is exchanged.
Why did my report fail validation?
Common issues: missing bibliography entries, placeholders like TODO, broken internal links, or citations without matching sources. Fix and regenerate.
How does this compare to Claude Desktop research?
This skill adds 8-phase methodology, source credibility scoring, citation validation, multiple output formats, and unlimited-length reports via auto-continuation.
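The validation failures listed in the FAQ (placeholders, citations without matching bibliography entries) can be sketched as a simple linting pass. This is a hypothetical illustration, not the actual logic of validate_report.py, and both regexes are assumptions:

```python
import re

# Assumed conventions: placeholders are bare TODO/TBD/FIXME tokens,
# and citations appear as bracketed numbers like [3].
PLACEHOLDER = re.compile(r"\b(TODO|TBD|FIXME)\b")
CITATION = re.compile(r"\[(\d+)\]")

def find_validation_issues(report_body: str, bibliography_ids: set) -> list:
    """Return human-readable reasons a report draft would fail validation."""
    issues = []
    if PLACEHOLDER.search(report_body):
        issues.append("placeholder text (TODO/TBD/FIXME) left in report")
    cited = {int(n) for n in CITATION.findall(report_body)}
    orphans = sorted(cited - set(bibliography_ids))
    if orphans:
        issues.append(f"citations without bibliography entries: {orphans}")
    return issues
```

Running a check like this before regenerating makes the "fix and regenerate" loop from the FAQ concrete: each returned string names one blocker to clear.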

Developer details

Author

199-biotechnologies

License

MIT

Ref

main

ํŒŒ์ผ ๊ตฌ์กฐ

๐Ÿ“ reference/

๐Ÿ“„ methodology.md

๐Ÿ“ scripts/

๐Ÿ“„ citation_manager.py

๐Ÿ“„ md_to_html.py

๐Ÿ“„ research_engine.py

๐Ÿ“„ source_evaluator.py

๐Ÿ“„ validate_report.py

๐Ÿ“„ verify_citations.py

๐Ÿ“„ verify_html.py

๐Ÿ“ templates/

๐Ÿ“„ mckinsey_report_template.html

๐Ÿ“„ report_template.md

๐Ÿ“ tests/

๐Ÿ“ fixtures/

๐Ÿ“„ invalid_report.md

๐Ÿ“„ valid_report.md

๐Ÿ“„ ARCHITECTURE_REVIEW.md

๐Ÿ“„ AUTONOMY_VERIFICATION.md

๐Ÿ“„ COMPETITIVE_ANALYSIS.md

๐Ÿ“„ CONTEXT_OPTIMIZATION.md

๐Ÿ“„ QUICK_START.md

๐Ÿ“„ README.md

๐Ÿ“„ requirements.txt

๐Ÿ“„ SKILL.md

๐Ÿ“„ WORD_PRECISION_AUDIT.md