
Safe

Product Manager Toolkit

๋˜ํ•œ ๋‹ค์Œ์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: alirezarezvani,davila7

A comprehensive toolkit for product managers to prioritize features using RICE scoring, analyze customer interviews, and create professional PRD documents. Streamlines product discovery and planning workflows.

์ง€์›: Claude Codex Code(CC)
๐Ÿฅ‰ 73 ๋ธŒ๋ก ์ฆˆ
1. Download the skill ZIP

2. Upload it in Claude

   Go to Settings → Capabilities → Skills → Upload skill

3. Turn the toggle on and start using it

Try It Out

Using "product-manager-toolkit". Feature list: Login improvement (reach 5000, impact high, confidence 80%, effort 3), Search redesign (reach 10000, impact medium, confidence 90%, effort 5), Notification system (reach 2000, impact massive, confidence 60%, effort 8)

์˜ˆ์ƒ ๊ฒฐ๊ณผ:

RICE Scores:
1. Search Redesign: Score 900 (Reach 10000 × Impact 0.5 × Confidence 0.9 ÷ Effort 5)
2. Login Improvement: Score 667 (Reach 5000 × Impact 0.5 × Confidence 0.8 ÷ Effort 3)
3. Notification System: Score 150 (Reach 2000 × Impact 1.0 × Confidence 0.6 ÷ Effort 8)

Recommendation: Focus on Search Redesign and Login Improvement for Q1.
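The arithmetic behind this expected output can be sketched in a few lines of Python. This is only an illustration of the RICE formula, not the bundled rice_prioritizer.py; the numeric impact weights (medium = 0.5, massive = 1.0) are taken from the example above.

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = Reach x Impact x Confidence / Effort."""
    return reach * impact * confidence / effort

# Features from the example prompt, with numeric impact weights
# as shown in the expected output (medium = 0.5, massive = 1.0).
features = [
    ("Search redesign", 10000, 0.5, 0.9, 5),
    ("Login improvement", 5000, 0.5, 0.8, 3),
    ("Notification system", 2000, 1.0, 0.6, 8),
]

# Rank by descending score, as the toolkit's output does.
ranked = sorted(
    ((name, rice_score(r, i, c, e)) for name, r, i, c, e in features),
    key=lambda item: item[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: {score:.0f}")
```

Note that the weight assigned to each impact label is a team convention; any consistent scale works as long as it is applied to every feature.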

Using "product-manager-toolkit". Customer said: 'The current dashboard is so slow, it takes forever to load. I really need real-time notifications when orders come in. The search function is useless - I can never find what I need. But I love the new reporting feature!'

์˜ˆ์ƒ ๊ฒฐ๊ณผ:

Pain Points (Severity: High):
- Slow dashboard loading
- Poor search functionality

Feature Requests (Priority: High):
- Real-time notifications for orders

Sentiment: Mixed (positive for reporting, negative for performance)

Key Quote: 'The current dashboard is so slow, it takes forever to load'
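A minimal sketch of how such an analysis could work, assuming a simple keyword-matching approach; the bundled customer_interview_analyzer.py likely uses richer heuristics, and the keyword lists here are illustrative:

```python
import re

# Illustrative keyword lists -- assumptions for this sketch only.
PAIN_WORDS = {"slow", "useless", "forever", "broken", "confusing"}
REQUEST_WORDS = {"need", "want", "wish"}
POSITIVE_WORDS = {"love", "great", "like"}

def analyze(transcript):
    """Split the transcript into sentences and bucket them by keyword."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript)
    pains, requests, positives = [], [], []
    for sentence in sentences:
        low = sentence.lower()
        if any(w in low for w in PAIN_WORDS):
            pains.append(sentence.strip())
        if any(w in low for w in REQUEST_WORDS):
            requests.append(sentence.strip())
        if any(w in low for w in POSITIVE_WORDS):
            positives.append(sentence.strip())
    if pains and positives:
        sentiment = "mixed"
    elif pains:
        sentiment = "negative"
    else:
        sentiment = "positive" if positives else "neutral"
    return {"pain_points": pains, "feature_requests": requests, "sentiment": sentiment}

result = analyze(
    "The current dashboard is so slow, it takes forever to load. "
    "I really need real-time notifications when orders come in. "
    "The search function is useless - I can never find what I need. "
    "But I love the new reporting feature!"
)
print(result["sentiment"])  # mixed
```

On the example quote, the sketch flags the dashboard and search sentences as pain points, the notification sentence as a request, and reports mixed sentiment, matching the expected result above.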

๋ณด์•ˆ ๊ฐ์‚ฌ

์•ˆ์ „
v1 โ€ข 2/24/2026

All 71 static findings evaluated as false positives. The skill contains legitimate Python CLI scripts for product management workflows. Documentation shows example commands in markdown code blocks (not actual execution). Market sizing references (TAM/SAM/SOM) were incorrectly flagged as Windows SAM. Generic terms triggered weak crypto flags. File operations are intentional CLI functionality for reading user-provided transcript files.

  • 4 files scanned
  • 1,414 lines analyzed
  • 6 findings
  • 1 audit in total

High-Risk Issues (6)

False Positive: External Commands Flagged
Static scanner flagged markdown code blocks showing 'python scripts/...' as shell backtick execution. These are documentation examples, not actual command execution. Located in SKILL.md lines showing example CLI usage.

False Positive: Weak Cryptographic Algorithm
Scanner incorrectly flagged common words like 'Security', 'Design', 'Principles', 'High-Level' as weak crypto algorithms. No cryptographic code exists in this skill.

False Positive: Windows SAM Database
Line 27 of prd_templates.md contains 'TAM, SAM, SOM', which refers to market sizing (Total Addressable Market, Serviceable Available Market, Serviceable Obtainable Market), not Windows Security Account Manager.

False Positive: HTTP Client Library / Network Access
Scanner flagged 'requests.append()' as HTTP library usage. This is a Python list variable named 'requests' storing analysis results, not the HTTP requests library.

False Positive: Certificate/Key Files
Scanner flagged 'keys = prioritized[0].keys()' as certificate keys. This is simply extracting dictionary keys for CSV output formatting.

Legitimate: File Input Operations
Scripts accept file paths from command line arguments to read user-provided transcript files. This is expected CLI tool behavior.
๊ฐ์‚ฌ์ž: claude

ํ’ˆ์งˆ ์ ์ˆ˜

64
์•„ํ‚คํ…์ฒ˜
100
์œ ์ง€๋ณด์ˆ˜์„ฑ
87
์ฝ˜ํ…์ธ 
50
์ปค๋ฎค๋‹ˆํ‹ฐ
55
๋ณด์•ˆ
100
์‚ฌ์–‘ ์ค€์ˆ˜

What You Can Build

Feature Prioritization Workshop

Run a feature prioritization session using RICE scoring. Generate a CSV of features with scores, then use the script to rank and plan roadmap capacity.

Customer Interview Synthesis

After customer interviews, paste transcript into a text file and run the analyzer to extract actionable insights, sentiment, and feature requests.

PRD Documentation Creation

Use PRD templates to document new feature requirements. Choose from standard, one-page, or feature brief formats based on scope.

Try These Prompts

Prioritize My Feature List
Use the RICE prioritization method to help me score these features. For each feature, I need to estimate: Reach (users per quarter), Impact (massive/high/medium/low/minimal), Confidence (high/medium/low), and Effort (person-months). Here's my feature list: [paste features]

Analyze Customer Interview
I have a customer interview transcript. Please analyze it and extract: 1) Key pain points with severity, 2) Explicit feature requests, 3) Sentiment (positive/negative/neutral), 4) Jobs to be done, 5) Key quotes. Here's the transcript: [paste transcript]

Draft PRD from Discovery
Help me create a Product Requirements Document for a new feature. I have these inputs: Problem statement [describe], Target users [define], Success metrics [list], Timeline [specify]. Please structure this as a standard PRD using the template format.

Capacity Planning for Quarter
I have 20 prioritized features with RICE scores and effort estimates. Our team has capacity for 15 person-months this quarter. Which features should we commit to? Show me the portfolio breakdown of quick wins vs big bets.
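The capacity-planning prompt can be approximated with a greedy pass: commit features in descending RICE order while they still fit the person-month budget. This is a simplification I'm assuming for illustration, not necessarily the toolkit's portfolio logic, and the feature data is made up:

```python
# Greedy capacity planning: take highest-scoring features first
# while the remaining person-month budget allows.
def plan_quarter(features, capacity):
    """features: list of (name, rice_score, effort_in_person_months)."""
    committed, used = [], 0
    for name, score, effort in sorted(features, key=lambda f: f[1], reverse=True):
        if used + effort <= capacity:
            committed.append(name)
            used += effort
    return committed, used

features = [
    ("Search redesign", 900, 5),
    ("Login improvement", 667, 3),
    ("Notification system", 150, 8),
]
chosen, used = plan_quarter(features, capacity=15)
print(chosen, used)  # ['Search redesign', 'Login improvement'] 8
```

Greedy selection by score is not guaranteed to maximize total value under a tight budget; a knapsack-style search would, at the cost of more code.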

Best Practices

  • Gather cross-functional input when scoring RICE components to improve accuracy
  • Use customer interview analysis to validate pain points before prioritizing solutions
  • Start with one-page PRD for small features, use standard PRD for complex initiatives
  • Review and update RICE scores quarterly as confidence and reach data improves

What to Avoid

  • Do not skip confidence scoring - it prevents overconfidence in uncertain estimates
  • Do not use RICE for all decisions - combine with strategic fit and technical dependencies
  • Do not write PRDs without user research - validate problems first
  • Do not ignore effort estimates from engineering - use realistic team velocity

์ž์ฃผ ๋ฌป๋Š” ์งˆ๋ฌธ

What is RICE prioritization?
RICE is a scoring framework: Reach (users affected) × Impact (effect on user) × Confidence (certainty of estimates) ÷ Effort (resources needed). Higher scores indicate higher priority.
Do I need to install any dependencies?
The scripts require Python 3.6 or higher. No external packages needed - they use only standard library modules.
Can I use this with Claude Code or Codex?
Yes. The toolkit works as a CLI companion. You can use Claude to draft features and PRDs, then run the Python scripts for numerical scoring and analysis.
How do I estimate Reach for a feature?
Reach = number of users who will benefit per quarter. For existing features, use analytics data. For new features, estimate based on user research and market size.
What file formats does the interview analyzer accept?
The script reads plain text (.txt) files. Paste your interview transcript into a text file and pass it as a command line argument.
Can I export results to JSON or CSV?
Yes. The rice_prioritizer.py supports --output json or csv flags. Default output is to console for easy copying.

Developer Details

Author

sickn33

License

MIT

Reference

main

ํŒŒ์ผ ๊ตฌ์กฐ

๐Ÿ“ references/

๐Ÿ“„ prd_templates.md

๐Ÿ“ scripts/

๐Ÿ“„ customer_interview_analyzer.py

๐Ÿ“„ rice_prioritizer.py

๐Ÿ“„ SKILL.md