
verification-before-completion

Low Risk

Verify Before Claiming Completion

Also available from: Dimon94, Cycleaddict, Cygnusfear, DYAI2025, davila7, CodingCossack, ANcpLua, obra

This skill prevents AI agents from making false completion claims by requiring actual verification evidence before reporting success. It ensures tests pass, builds succeed, and work is genuinely complete.

Supports: Claude, Codex, Claude Code (CC)
📊 Score: 69 (Adequate)
1. Download the skill ZIP
2. Upload in Claude: go to Settings → Capabilities → Skills → Upload skill
3. Toggle on and start using

Test it

Using "verification-before-completion". The tests should pass now

Expected outcome:

Running test suite...

Results: 42 tests, 42 passed, 0 failed
Exit code: 0

All tests pass.

Using "verification-before-completion". The build completed successfully

Expected outcome:

Running build command...

> npm run build
Exit code: 0
Build completed successfully.

Output length: 2.3MB

Using "verification-before-completion". The bug is fixed

Expected outcome:

Running reproduction test...

Original failing input: {id: 123}
Expected: Error thrown
Actual: Error thrown

Bug is confirmed fixed.

Security Audit

Low Risk
v1 • 2/24/2026

This is a meta-skill providing verification guidelines. The static scanner flagged 13 external_commands patterns and 1 weak crypto pattern, but evaluation shows these are FALSE POSITIVES: the skill contains instructions for AI agents to run verification commands (tests, linters, builds) as text examples - not actual executable code. No cryptographic code exists. System/network reconnaissance flags refer to standard DevOps terms (VCS, git diff). This skill promotes honest verification practices and poses no security risk.

Files scanned: 1
Lines analyzed: 140
Findings: 3
Total audits: 1

High Risk Issues (1)

Weak Cryptographic Algorithm Flag
Static scanner flagged line 3 for weak crypto algorithm. Investigation shows this is a FALSE POSITIVE - the line contains only description text about verification workflows with no cryptographic code.
Medium Risk Issues (1)
System Reconnaissance Flag
Static scanner flagged lines 71 and 111 for 'system reconnaissance'. These references are to standard DevOps concepts like 'Agent delegation' and 'VCS diff' - not actual system scanning.
Low Risk Issues (1)
Network Reconnaissance Flag
Static scanner flagged line 130 for 'network reconnaissance'. This refers to 'communication' about completion status, not network scanning.
Audited by: claude

Quality Score

Architecture: 38
Maintainability: 100
Content: 87
Community: 50
Security: 73
Spec Compliance: 91

What You Can Build

Test Verification Before Commit

Before committing code, require the AI to run test commands and show actual pass/fail counts from fresh execution.
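A minimal shell sketch of such a gate. The `verify` helper and the `TEST_CMD` variable are hypothetical placeholders, not part of the skill itself; point `TEST_CMD` at your project's real test command (e.g. `npm test` or `pytest`):

```shell
#!/bin/sh
# Hypothetical pre-commit gate: run the test command fresh and report
# the exit code before allowing any completion claim.
# TEST_CMD is a placeholder -- set it to your real suite.
TEST_CMD="${TEST_CMD:-true}"

verify() {
    $TEST_CMD                       # fresh execution, never a cached result
    status=$?
    echo "Exit code: $status"
    if [ "$status" -eq 0 ]; then
        echo "Verified: tests passed on fresh execution."
    else
        echo "Verification failed: do not claim completion." >&2
    fi
    return $status
}

verify
```

The point of echoing the exit code is that the claim ("tests passed") is tied to evidence from this run, matching the skill's requirement of fresh execution.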

Build Verification Before PR

Prevent premature PR creation by requiring build command output showing exit code 0 before claiming success.
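The same idea as a sketch for builds. The `gate_pr` function name is made up for illustration; pass your real build command as its arguments:

```shell
#!/bin/sh
# Hypothetical PR gate: run the build command passed as arguments and
# only report success on exit code 0.
gate_pr() {
    "$@"                            # e.g. gate_pr npm run build
    code=$?
    echo "Build exit code: $code"
    if [ "$code" -ne 0 ]; then
        echo "Build failed -- do not open the PR yet." >&2
        return $code
    fi
    echo "Build completed successfully. Safe to open the PR."
}
```

A nonzero exit propagates out of the function, so a CI step or hook using it fails instead of letting a premature success claim through.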

Regression Test Validation

Ensure regression tests follow red-green pattern: pass after fix, fail without fix, pass after restore.
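One way to script the first two legs of that pattern. The `red_green` helper is illustrative only: it takes two commands that run the regression test against the fixed and the unfixed code, and the final "pass after restore" step is simply rerunning the first command once the fix is reapplied:

```shell
#!/bin/sh
# Hypothetical red-green check for a regression test.
# $1: command running the test against the FIXED code (must pass)
# $2: command running the test against the UNFIXED code (must fail)
red_green() {
    with_fix=$1
    without_fix=$2

    if ! $with_fix; then
        echo "Test fails with the fix applied -- the fix is incomplete." >&2
        return 1
    fi
    if $without_fix; then
        echo "Test also passes without the fix -- it does not cover the bug." >&2
        return 1
    fi
    echo "Red-green verified: passes with fix, fails without."
}
```

A test that passes in both states proves nothing about the bug, which is exactly the false-confidence case this skill guards against.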

Try These Prompts

Basic Verification Request
Before claiming this work is complete, run the verification command and show me the actual output. Do not claim success without evidence.
Test Verification Template
Run the full test suite and report: total tests, passed, failed, and the exact error messages for any failures. Show the exit code.
Build Verification Template
Run the build command and show the exit code. Report any compilation errors or warnings from the output.
Requirements Checklist Template
Review the original requirements. For each requirement, state: met/unmet/gap. Cite specific evidence from the code or test output.

Best Practices

  • Always run fresh verification commands - never trust cached or previous results
  • Read the full output including exit codes before making any claims
  • Express actual status with evidence, not confidence or assumptions

Avoid

  • Claiming completion based on agent success reports without independent verification
  • Using words like 'should', 'probably', 'seems to' before verification
  • Relying on partial checks like linter passing when full build verification is needed

Frequently Asked Questions

Does this skill run tests for me?
No. This skill instructs the AI to run verification commands. You must configure the actual test and build commands for your project.
Can I use this with any testing framework?
Yes. The skill is framework-agnostic. It provides guidelines for verification regardless of your test tooling.
What if verification takes a long time?
The skill emphasizes that skipping verification to save time creates false confidence. The cost of verification is less than the cost of rework from false completion.
Does this work with Claude Code and Codex?
Yes. This skill works with Claude, Codex, and Claude Code. It guides the AI's behavior regardless of the specific tool.
How is this different from just running tests?
Running tests is insufficient. This skill requires: fresh execution, full output review, exit code checking, and evidence-based claims. Partial or cached results do not qualify.
Can this prevent all false completion claims?
No. The skill provides guidelines but cannot enforce compliance. It relies on the AI following instructions. Human oversight remains important.

Developer Details

File structure

📄 SKILL.md