performance-testing-review-multi-agent-review
Orchestrate multi-agent code reviews with AI
Traditional code reviews miss critical issues by examining code from a single perspective. This tool coordinates multiple specialized AI agents to provide comprehensive analysis across security, architecture, performance, and quality dimensions.
Download the skill ZIP
Upload it to Claude
Go to Settings → Features → Skills → Upload skill
Turn the toggle on and start using it
Try it out
"performance-testing-review-multi-agent-review" ์ฌ์ฉ ์ค์ ๋๋ค. Review src/api/auth.py for security issues
์์ ๊ฒฐ๊ณผ:
Security audit identified 2 critical issues: 1) Missing input validation on user credentials allowing injection attacks, 2) Weak password hashing using MD5. Recommended actions: Implement parameterized queries and upgrade to bcrypt.
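The two fixes recommended above can be sketched in Python. This is an illustrative sketch rather than the skill's own code: the `users` table and its columns are hypothetical, and the standard library's PBKDF2 stands in for bcrypt, which is a third-party package.

```python
import hashlib
import os
import sqlite3
from typing import Optional, Tuple

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver escapes the bound value, which blocks
    # SQL injection. Never splice user input into the SQL string itself.
    cur = conn.execute("SELECT id, pw_hash FROM users WHERE name = ?", (username,))
    return cur.fetchone()

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    # Slow, salted key derivation; bcrypt or argon2 are equivalent upgrades
    # over unsalted MD5.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return digest, salt
```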
"performance-testing-review-multi-agent-review" ์ฌ์ฉ ์ค์ ๋๋ค. Analyze performance of src/database/queries.py
์์ ๊ฒฐ๊ณผ:
Performance analysis found 3 optimization opportunities: 1) N+1 query pattern in user lookup, 2) Missing database indexes on frequently queried columns, 3) Inefficient loop processing large datasets. Estimated improvement: 60% reduction in query time.
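The N+1 pattern called out above, and its batched fix, look roughly like the following sketch (an in-memory SQLite `users` table stands in for the real schema):

```python
import sqlite3

def load_emails_n_plus_one(conn, user_ids):
    # Anti-pattern: one query per id means N+1 round trips to the database.
    return [conn.execute("SELECT email FROM users WHERE id = ?", (uid,)).fetchone()[0]
            for uid in user_ids]

def load_emails_batched(conn, user_ids):
    # Fix: a single query with an IN clause (or a JOIN) fetches every row at once.
    placeholders = ",".join("?" * len(user_ids))
    rows = conn.execute(
        f"SELECT id, email FROM users WHERE id IN ({placeholders})",
        list(user_ids),
    ).fetchall()
    by_id = dict(rows)
    return [by_id[uid] for uid in user_ids]
```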
Security audit
Safe: All 27 static analysis findings are false positives. The SKILL.md file is documentation containing Python pseudo-code examples in markdown blocks, not executable code. No actual command execution, cryptographic operations, or system reconnaissance patterns exist. Safe for publication.
Quality score
What you can build
Web Application Security Review
Automatically route web application code through security auditors and web architecture specialists for comprehensive vulnerability assessment.
Microservices Architecture Validation
Validate microservices designs through sequential review phases covering design patterns, implementation quality, and deployment readiness.
Performance-Critical Code Analysis
Identify performance bottlenecks by routing performance-sensitive code through specialized performance analyst agents.
Try these prompts
Review the code at [file path] using the multi-agent review system. Focus on identifying any critical issues and provide a summary of findings.
Perform a security audit on [repository URL] using the security-auditor agent. Identify potential vulnerabilities and provide remediation recommendations.
Conduct a comprehensive architecture review of [project path] using sequential agents: architect-reviewer for design patterns, code-quality-reviewer for implementation, and devops-validator for deployment considerations.
Configure a custom review with parallel agents [security-auditor, performance-analyst] and sequential agent [architecture-reviewer] for [code snippet]. Weight security at 0.4, performance at 0.4, and architecture at 0.2.
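The weighted configuration described in the last prompt could be represented along these lines; the field names and scoring helper are hypothetical, since the skill defines its own schema:

```python
# Hypothetical configuration mirroring the prompt above: two agents run in
# parallel, one runs sequentially, and findings are weighted per dimension.
review_config = {
    "parallel_agents": ["security-auditor", "performance-analyst"],
    "sequential_agents": ["architecture-reviewer"],
    "weights": {"security": 0.4, "performance": 0.4, "architecture": 0.2},
}

def overall_score(per_agent_scores: dict, weights: dict) -> float:
    # Weighted average of per-dimension scores; weights should sum to 1.0.
    return sum(per_agent_scores[dim] * w for dim, w in weights.items())
```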
Best practices
- Define clear review objectives before selecting agent types
- Use parallel execution for independent review dimensions to reduce turnaround time
- Review consolidated reports and escalate conflicting recommendations for human judgment
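The parallel-execution advice above can be sketched with a thread pool; the stub agents passed in here are placeholders for real review calls:

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel_reviews(code: str, agents: dict) -> dict:
    # Independent review dimensions run concurrently, so total wall time
    # approaches that of the slowest agent rather than the sum of all agents.
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = {name: pool.submit(agent, code) for name, agent in agents.items()}
        return {name: future.result() for name, future in futures.items()}
```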
Avoid
- Running all agent types on every review without considering code characteristics
- Ignoring agent confidence scores when prioritizing findings
- Skipping sequential dependency phases when agent outputs build on each other
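Confidence-aware prioritization (the second pitfall above) can be as simple as a sort over consolidated findings; the `severity` and `confidence` field names are assumptions for illustration:

```python
def prioritize(findings: list) -> list:
    # Order findings by severity first, then by the agent's confidence,
    # so low-confidence noise does not crowd out well-supported issues.
    severity_rank = {"critical": 3, "high": 2, "medium": 1, "low": 0}
    return sorted(
        findings,
        key=lambda f: (severity_rank[f["severity"]], f["confidence"]),
        reverse=True,
    )
```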
Frequently asked questions
What types of code can this tool review?
How does the multi-agent system resolve conflicting recommendations?
Can I customize which agents participate in a review?
How long does a typical review take?
Does this tool integrate with existing CI/CD pipelines?
What happens if an agent times out or fails?
Developer details
Author
sickn33
License
MIT
Repository
https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/performance-testing-review-multi-agent-review
Ref
main
File structure
SKILL.md