4-step-program
Implement AI Agent Quality Control Workflow
Stop accepting mediocre code reviews from AI agents. This workflow enforces a 4-step loop that guarantees 10/10 quality and 100% issue coverage before any code is presented. Agents must fix, review, iterate, and present with mandatory GitHub PR posting.
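In spirit, the enforced loop can be sketched as below. This is a hypothetical illustration only: the `fix` and `review` stubs stand in for the real agent actions and the Skill(code-reviewer) step, and the 10/10 scoring rule is an assumption for the sketch.

```python
# Hypothetical sketch of the 4-step loop: fix -> review -> iterate -> present.
# The fix/review functions are stand-ins for the real agent steps.

def review(code: str) -> tuple[int, list[str]]:
    """Stub reviewer: returns a score out of 10 plus the issues found."""
    issues = [line for line in code.splitlines() if "TODO" in line]
    return 10 - len(issues), issues

def fix(code: str, issues: list[str]) -> str:
    """Stub fixer: resolves each flagged issue."""
    for issue in issues:
        code = code.replace(issue, issue.replace("TODO", "DONE"))
    return code

def quality_loop(code: str, max_iterations: int = 5) -> str:
    """Iterate fix -> review until the review scores 10/10, then present."""
    for _ in range(max_iterations):
        score, issues = review(code)
        if score == 10 and not issues:
            return code  # present: this is where the review is posted to the PR
        code = fix(code, issues)
    raise RuntimeError("Could not reach 10/10 within the iteration budget")
```

The key property is that presenting is unreachable until the reviewer reports zero issues.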
1. Download the skill ZIP
2. Upload in Claude: go to Settings → Capabilities → Skills → Upload skill
3. Toggle the skill on and start using it
Test it
Using "4-step-program". Fix the database connection pooling issue (Issue #78)
Expected outcome:
- Created branch fix/db-pool-issue
- Implemented connection pool with max 20 connections
- Added retry logic with exponential backoff
- Updated configuration to handle pool exhaustion
- All tests passing
- Review posted to PR #156
- 100% issue coverage verified
- Code review score: 10/10
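As a rough illustration of what such a fix might look like, here is a bounded pool with backoff retry. All names and limits below are hypothetical and not taken from the actual PR.

```python
import queue
import time

class ConnectionPool:
    """Bounded pool: at most max_size connections, blocking checkout."""

    def __init__(self, connect, max_size=20):
        self._connect = connect              # factory for new connections
        self._pool = queue.Queue(maxsize=max_size)
        for _ in range(max_size):
            self._pool.put(None)             # lazy slots, filled on first use

    def acquire(self, timeout=None):
        # Blocks when exhausted; raises queue.Empty if timeout expires.
        slot = self._pool.get(timeout=timeout)
        return slot if slot is not None else self._connect()

    def release(self, conn):
        self._pool.put(conn)

def with_retry(fn, attempts=4, base_delay=0.1):
    """Retry fn with exponential backoff: base_delay, 2x, 4x, ..."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Bounding the pool with a blocking queue is what turns exhaustion into a timeout you can handle in configuration rather than an unbounded connection leak.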
Using "4-step-program". Implement user profile editing feature (Issue #123)
Expected outcome:
- Backend API endpoint created at /api/users/:id
- Frontend form with client-side validation added
- Database migration for profile table completed
- Unit tests with 100% coverage passing
- PR #289 opened and linked to issue
- Code review score: 10/10 with zero suggestions
- All issue requirements fully implemented
Using "4-step-program". Refactor the authentication module (Issue #200)
Expected outcome:
- Extracted auth logic to separate service module
- Added OAuth2 provider support for Google and GitHub
- Implemented rate limiting on auth endpoints
- Updated all dependent services to use new module
- Backward compatibility maintained
- Review posted to PR #412
- All 4 requirements met with 100% coverage
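The rate limiting called out above could take many forms; a common one is a token bucket. The sketch below is an assumption for illustration, not the code from PR #412.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills `rate` tokens/second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

An auth endpoint would call `allow()` per client key (e.g. per IP or account) and return 429 when it denies.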
Security Audit
Safe
This skill contains only workflow documentation with no executable code. The static scanner produced false positives by misinterpreting Markdown syntax and documentation text: the patterns it flagged as cryptographic algorithms, backtick execution, and reconnaissance are legitimate documentation formatting and GitHub CLI workflow instructions.
Risk Factors
🌐 Network access (10)
⚙️ External commands (59)
Quality Score
What You Can Build
Enforce Code Quality Standards
Ensure AI agents deliver production-ready code with mandatory reviews and complete issue coverage before human review.
Automate PR Quality Gates
Coordinate AI contributors through a rigorous quality loop that guarantees 10/10 reviews and full requirement implementation.
Standardize Agent Workflows
Implement consistent quality control across all AI agent tasks with documented workflows and clear acceptance criteria.
Try These Prompts
Fix the authentication bug in src/auth.ts (Issue #45). Requirements: 1) Session expires after 24h, 2) Refresh token works, 3) Error on dual expiration, 4) Redirect on failure. Success: ALL 4 implemented, tests pass. After reviewing with Skill(code-reviewer), POST review to GitHub and report with PR link.
Implement user profile editing (Issue #123). Requirements from issue: Backend API endpoint, Frontend form with validation, Database migration, Unit tests for all components. Success: ALL requirements implemented, 100% test coverage. Create PR, review with Skill(code-reviewer), POST review to GitHub.
Refactor payment service (Issue #89). Requirements: Extract payment logic to separate module, Add retry mechanism for failed payments, Implement circuit breaker pattern, Update all dependent services. Success: Zero breaking changes, all tests pass, performance improved. Review and POST to GitHub PR.
Handle authentication system overhaul (Issues #200-205). Requirements: Migrate to JWT tokens, Add OAuth providers, Implement rate limiting, Update all auth checks, Add audit logging. Success: All 5 issues closed, backward compatibility maintained. Create comprehensive PR with Skill(code-reviewer) review posted to GitHub.
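The circuit-breaker pattern named in the payment-service prompt can be sketched as follows. The thresholds and timeout values are hypothetical placeholders.

```python
import time

class CircuitBreaker:
    """Open the circuit after `failure_threshold` consecutive failures;
    reject calls until `reset_timeout` seconds pass, then allow a trial call."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

The point of the breaker is to fail fast while the downstream payment provider is unhealthy instead of piling retries onto it.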
Best Practices
- Always include ALL requirements from the issue in your delegation prompt
- Verify agent posted review to GitHub before accepting completion
- Use line-by-line requirement verification table for final coverage check
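One way to mechanize the verification-table practice above is a helper that renders the table and refuses partial coverage. The function name and table format are illustrative assumptions, not part of the skill itself.

```python
def coverage_table(requirements: dict[str, bool]) -> str:
    """Render a requirement -> status table; raise if any requirement is unmet."""
    rows = [f"| {req} | {'✅' if done else '❌'} |" for req, done in requirements.items()]
    table = "\n".join(["| Requirement | Status |", "| --- | --- |", *rows])
    if not all(requirements.values()):
        raise AssertionError("Partial coverage, do not present:\n" + table)
    return table
```

Posting the rendered table in the PR gives the human reviewer a line-by-line audit trail.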
Avoid
- Never accept "mostly done" work or partial issue coverage
- Do not skip the review step even if code looks correct
- Avoid presenting without clickable PR and issue URLs