azure-ai-contentsafety-java
Build content moderation apps with Azure AI
Moderate harmful content automatically in your applications. The Azure AI Content Safety SDK for Java detects hate speech, violence, sexual content, and self-harm, with configurable severity thresholds.
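As a minimal sketch of the kind of code this skill helps generate, a text analysis call with the SDK's ContentSafetyClient might look like this (assuming the endpoint and key live in environment variables; the variable names here are illustrative):

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextCategoriesAnalysis;
import com.azure.core.credential.KeyCredential;

public class ModerationQuickstart {
    public static void main(String[] args) {
        // Endpoint and key from environment variables (standard Azure SDK pattern)
        ContentSafetyClient client = new ContentSafetyClientBuilder()
            .credential(new KeyCredential(System.getenv("CONTENT_SAFETY_KEY")))
            .endpoint(System.getenv("CONTENT_SAFETY_ENDPOINT"))
            .buildClient();

        AnalyzeTextResult result = client.analyzeText(
            new AnalyzeTextOptions("Text to moderate goes here"));

        // Each category (Hate, SelfHarm, Sexual, Violence) is returned with a severity score
        for (TextCategoriesAnalysis analysis : result.getCategoriesAnalysis()) {
            System.out.println(analysis.getCategory() + ": severity " + analysis.getSeverity());
        }
    }
}
```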
Download the skill ZIP
Upload it to Claude
Go to Settings → Features → Skills → Upload skill
Turn it on to start using it
Test it
Using "azure-ai-contentsafety-java": Analyze user comment for hate speech and violence
Expected result:
Hate: Severity 2 (low confidence), Violence: Severity 0 (not detected). Content passes moderation threshold.
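A sketch of how that check might restrict analysis to the two requested categories, assuming a client built as in the quickstart above; the pass/fail threshold of 4 is illustrative:

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextCategory;
import java.util.Arrays;

class CommentCheck {
    // Returns true when the comment stays below the moderation threshold
    static boolean passesModeration(ContentSafetyClient client, String comment) {
        AnalyzeTextOptions options = new AnalyzeTextOptions(comment);
        // Request only the categories this check cares about
        options.setCategories(Arrays.asList(TextCategory.HATE, TextCategory.VIOLENCE));

        AnalyzeTextResult result = client.analyzeText(options);
        int threshold = 4; // illustrative policy threshold
        return result.getCategoriesAnalysis().stream()
            .allMatch(a -> a.getSeverity() == null || a.getSeverity() < threshold);
    }
}
```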
Using "azure-ai-contentsafety-java": Check image for adult content
Expected result:
Sexual category detected at Severity 6. Recommendation: Block content per policy threshold of 4.
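A sketch of the corresponding image check; here the image comes from a local file (a URL fetch would be analogous), and the severity-4 block threshold mirrors the expected result above:

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.models.AnalyzeImageOptions;
import com.azure.ai.contentsafety.models.AnalyzeImageResult;
import com.azure.ai.contentsafety.models.ContentSafetyImageData;
import com.azure.ai.contentsafety.models.ImageCategoriesAnalysis;
import com.azure.core.util.BinaryData;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

class ImageCheck {
    static boolean shouldBlock(ContentSafetyClient client, String imagePath) throws IOException {
        // The image is uploaded as raw bytes; the SDK handles encoding
        ContentSafetyImageData image = new ContentSafetyImageData();
        image.setContent(BinaryData.fromBytes(Files.readAllBytes(Paths.get(imagePath))));

        AnalyzeImageResult result = client.analyzeImage(new AnalyzeImageOptions(image));

        // Block when any category reaches the policy threshold of 4
        for (ImageCategoriesAnalysis analysis : result.getCategoriesAnalysis()) {
            if (analysis.getSeverity() != null && analysis.getSeverity() >= 4) {
                return true;
            }
        }
        return false;
    }
}
```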
Using "azure-ai-contentsafety-java": Validate message against custom blocklist
Expected result:
Blocklist hit detected: Item ID abc123 matched term variant. Content blocked per haltOnBlocklistHit setting.
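A sketch of the blocklist check, assuming the blocklist already exists (a creation sketch appears after the prompt list below); setBlocklistNames, setHaltOnBlocklistHit, and getBlocklistsMatch follow the SDK's analyze-text models:

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextBlocklistMatch;
import java.util.Arrays;

class BlocklistCheck {
    static void validate(ContentSafetyClient client, String message, String blocklistName) {
        AnalyzeTextOptions options = new AnalyzeTextOptions(message);
        options.setBlocklistNames(Arrays.asList(blocklistName));
        // Skip harm-category analysis as soon as a blocklist term matches
        options.setHaltOnBlocklistHit(true);

        AnalyzeTextResult result = client.analyzeText(options);
        if (result.getBlocklistsMatch() != null) {
            for (TextBlocklistMatch match : result.getBlocklistsMatch()) {
                System.out.println("Blocked: item " + match.getBlocklistItemId()
                    + " matched \"" + match.getBlocklistItemText() + "\"");
            }
        }
    }
}
```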
Security audit
Safe. The static analyzer produced false positives by misinterpreting Markdown documentation: all 33 external_commands findings are Java code examples inside Markdown fences, not shell execution; the network findings are example URLs used in configuration; and the environment-variable access is the standard Azure SDK pattern. No actual security issues were detected.
Risk factors
🔑 Environment variables (1)
🌐 Network access (2)
What you can build
Social Platform Moderation
Automatically screen user-generated posts and comments for harmful content before publication. Configure severity thresholds to match community guidelines.
Customer Support Filtering
Detect abusive messages in support tickets and chat systems. Route flagged content to human reviewers while allowing legitimate messages through.
Marketplace Content Review
Scan product listings, reviews, and images for policy violations. Maintain custom blocklists for brand-specific prohibited terms.
Try these prompts
Analyze this text for harmful content using Azure AI Content Safety: [paste text]. Return severity scores for each category.
Analyze [text] for the hate and violence categories only, using the eight-level severity output. Stop if a blocklist hit occurs.
Check the image at [URL] for harmful content. Report categories with severity >= 4. Format results as a moderation decision.
Create a blocklist named [name] with description [desc]. Add these terms: [terms]. Then analyze [text] against this blocklist with halt on hit enabled.
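For the last prompt, a sketch of creating a blocklist and adding terms with the SDK's separate BlocklistClient; the name, description, and terms are placeholders, and the create call uses the protocol-style WithResponse method as in the SDK samples:

```java
import com.azure.ai.contentsafety.BlocklistClient;
import com.azure.ai.contentsafety.BlocklistClientBuilder;
import com.azure.ai.contentsafety.models.AddOrUpdateTextBlocklistItemsOptions;
import com.azure.ai.contentsafety.models.TextBlocklistItem;
import com.azure.core.credential.KeyCredential;
import com.azure.core.http.rest.RequestOptions;
import com.azure.core.util.BinaryData;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class BlocklistSetup {
    public static void main(String[] args) {
        BlocklistClient blocklistClient = new BlocklistClientBuilder()
            .credential(new KeyCredential(System.getenv("CONTENT_SAFETY_KEY")))
            .endpoint(System.getenv("CONTENT_SAFETY_ENDPOINT"))
            .buildClient();

        String blocklistName = "BrandTerms"; // placeholder name
        Map<String, String> body = new HashMap<>();
        body.put("description", "Brand-specific prohibited terms"); // placeholder description
        blocklistClient.createOrUpdateTextBlocklistWithResponse(
            blocklistName, BinaryData.fromObject(body), new RequestOptions());

        // Add terms; changes can take several minutes to propagate
        blocklistClient.addOrUpdateBlocklistItems(blocklistName,
            new AddOrUpdateTextBlocklistItemsOptions(Arrays.asList(
                new TextBlocklistItem("term1"), new TextBlocklistItem("term2"))));
    }
}
```

Remember that new entries can take around five minutes to become effective (see the best practices below).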
Best practices
- Read Azure SDK documentation for authentication and client configuration before implementation
- Set severity thresholds based on your risk tolerance - severity 4+ is typical for strict moderation
- Allow 5 minutes for blocklist changes to propagate before testing new entries
- Cache analysis results for repeated content to reduce API calls and latency (see the caching sketch below)
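A minimal caching sketch for the last point; the unbounded ConcurrentHashMap is illustrative only, and production code would bound or expire entries:

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ModerationCache {
    private final ContentSafetyClient client;
    private final Map<String, AnalyzeTextResult> cache = new ConcurrentHashMap<>();

    ModerationCache(ContentSafetyClient client) {
        this.client = client;
    }

    AnalyzeTextResult analyze(String text) {
        // computeIfAbsent only calls the service on a cache miss
        return cache.computeIfAbsent(text,
            t -> client.analyzeText(new AnalyzeTextOptions(t)));
    }
}
```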
Avoid
- Hardcoding credentials instead of using environment variables or DefaultAzureCredential
- Requesting all harm categories when only specific ones are needed - increases latency and cost
- Blocking content at severity 0 - severity 0 means the category was not detected, so blocking it turns safe and borderline content into false positives
- Synchronous API calls in high-throughput scenarios - use async patterns for batch processing (see the async sketch below)
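A sketch of the async pattern from the last point; Azure SDK builders expose buildAsyncClient(), and the async client returns Reactor types, so a batch can be fanned out with flatMap:

```java
import com.azure.ai.contentsafety.ContentSafetyAsyncClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.core.credential.KeyCredential;
import reactor.core.publisher.Flux;
import java.util.List;

public class BatchModeration {
    public static void main(String[] args) {
        ContentSafetyAsyncClient asyncClient = new ContentSafetyClientBuilder()
            .credential(new KeyCredential(System.getenv("CONTENT_SAFETY_KEY")))
            .endpoint(System.getenv("CONTENT_SAFETY_ENDPOINT"))
            .buildAsyncClient();

        List<String> batch = List.of("first comment", "second comment");

        // flatMap issues the requests concurrently instead of serially
        Flux.fromIterable(batch)
            .flatMap(text -> asyncClient.analyzeText(new AnalyzeTextOptions(text)))
            .doOnNext(result -> result.getCategoriesAnalysis().forEach(a ->
                System.out.println(a.getCategory() + ": severity " + a.getSeverity())))
            .blockLast(); // block only so the demo's main thread waits for completion
    }
}
```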
Frequently asked questions
What Azure subscription do I need for Content Safety?
How do I authenticate the SDK?
Can I customize harm detection thresholds?
How quickly do blocklist updates take effect?
Does the SDK support async operations?
What image formats are supported?
Developer details
Author
sickn33
License
MIT
Repository
https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/azure-ai-contentsafety-java
Ref
main
File structure
📄 SKILL.md