azure-ai-contentsafety-java

Safe · 🔑 Environment variables · 🌐 Network access

Build content moderation apps with Azure AI

Moderate harmful content automatically in your applications. Azure AI Content Safety SDK for Java detects hate speech, violence, sexual content, and self-harm with configurable severity thresholds.
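A minimal text-moderation sketch using the SDK's `ContentSafetyClient`; the environment variable names `CONTENT_SAFETY_ENDPOINT` and `CONTENT_SAFETY_KEY` follow the convention used in Azure samples and should be adjusted to your setup:

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextCategoriesAnalysis;
import com.azure.core.credential.KeyCredential;

public class ModerateTextExample {
    public static void main(String[] args) {
        // Endpoint and key are read from environment variables, the standard Azure SDK pattern.
        ContentSafetyClient client = new ContentSafetyClientBuilder()
                .endpoint(System.getenv("CONTENT_SAFETY_ENDPOINT"))
                .credential(new KeyCredential(System.getenv("CONTENT_SAFETY_KEY")))
                .buildClient();

        AnalyzeTextResult result = client.analyzeText(
                new AnalyzeTextOptions("Example user comment to screen"));

        // Each harm category (Hate, SelfHarm, Sexual, Violence) comes back with a numeric severity.
        for (TextCategoriesAnalysis analysis : result.getCategoriesAnalysis()) {
            System.out.println(analysis.getCategory() + " severity: " + analysis.getSeverity());
        }
    }
}
```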

์ง€์›: Claude Codex Code(CC)
๐Ÿฅ‰ 74 ๋ธŒ๋ก ์ฆˆ
1. Download the skill ZIP
2. Upload in Claude: go to Settings → Capabilities → Skills → Upload skill
3. Turn the toggle on and start using

Try it out

Using "azure-ai-contentsafety-java". Analyze user comment for hate speech and violence

Expected result:

Hate: Severity 2 (low confidence), Violence: Severity 0 (not detected). Content passes moderation threshold.

Using "azure-ai-contentsafety-java". Check image for adult content

Expected result:

Sexual category detected at Severity 6. Recommendation: Block content per policy threshold of 4.

Using "azure-ai-contentsafety-java". Validate message against custom blocklist

Expected result:

Blocklist hit detected: Item ID abc123 matched term variant. Content blocked per haltOnBlocklistHit setting.

๋ณด์•ˆ ๊ฐ์‚ฌ

์•ˆ์ „
v1 โ€ข 2/24/2026

Static analyzer produced false positives due to misinterpreting Markdown documentation. All 33 external_commands findings are Java code examples in Markdown fences, not shell execution. Network findings are example URLs for configuration. Environment variable access is standard Azure SDK pattern. No actual security issues detected.

Files scanned: 1
Lines analyzed: 288
Findings: 2
Total audits: 1

Risk factors

🔑 Environment variables (1)
🌐 Network access (2)

Auditor: claude

ํ’ˆ์งˆ ์ ์ˆ˜

38
์•„ํ‚คํ…์ฒ˜
100
์œ ์ง€๋ณด์ˆ˜์„ฑ
87
์ฝ˜ํ…์ธ 
50
์ปค๋ฎค๋‹ˆํ‹ฐ
100
๋ณด์•ˆ
91
์‚ฌ์–‘ ์ค€์ˆ˜

What you can build

Social Platform Moderation

Automatically screen user-generated posts and comments for harmful content before publication. Configure severity thresholds to match community guidelines.

Customer Support Filtering

Detect abusive messages in support tickets and chat systems. Route flagged content to human reviewers while allowing legitimate messages through.

Marketplace Content Review

Scan product listings, reviews, and images for policy violations. Maintain custom blocklists for brand-specific prohibited terms.

Try these prompts

Basic Text Analysis
Analyze this text for harmful content using Azure AI Content Safety: [paste text]. Return severity scores for each category.
Configured Category Analysis
Analyze [text] for hate and violence categories only. Use 8 severity levels output. Stop if blocklist hit occurs.
Image Moderation from URL
Check the image at [URL] for harmful content. Report categories with severity >= 4. Format results as a moderation decision.
Blocklist Management Workflow
Create a blocklist named [name] with description [desc]. Add these terms: [terms]. Then analyze [text] against this blocklist with halt on hit enabled.

Best practices

  • Read Azure SDK documentation for authentication and client configuration before implementation
  • Set severity thresholds based on your risk tolerance - severity 4+ is typical for strict moderation
  • Allow 5 minutes for blocklist changes to propagate before testing new entries
  • Cache analysis results for repeated content to reduce API calls and latency

Avoid

  • Hardcoding credentials instead of using environment variables or DefaultAzureCredential
  • Requesting all harm categories when only specific ones are needed - increases latency and cost
  • Blocking severity 0 content - this creates false positives for borderline cases
  • Synchronous API calls in high-throughput scenarios - use async patterns for batch processing

์ž์ฃผ ๋ฌป๋Š” ์งˆ๋ฌธ

What Azure subscription do I need for Content Safety?
You need an Azure subscription with a Content Safety resource provisioned. Free tier includes 5,000 transactions per month for testing.
How do I authenticate the SDK?
Use either API key authentication with KeyCredential or Azure AD authentication with DefaultAzureCredential for production environments.
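A sketch of both authentication options; the Entra ID path assumes the `azure-identity` dependency is on the classpath, and the environment variable names are conventions, not requirements:

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.core.credential.KeyCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;

public class AuthExamples {
    public static void main(String[] args) {
        String endpoint = System.getenv("CONTENT_SAFETY_ENDPOINT");

        // Option 1: API key authentication, simplest for development.
        ContentSafetyClient keyClient = new ContentSafetyClientBuilder()
                .endpoint(endpoint)
                .credential(new KeyCredential(System.getenv("CONTENT_SAFETY_KEY")))
                .buildClient();

        // Option 2: Azure AD / Entra ID, recommended for production; DefaultAzureCredential
        // tries managed identity, environment credentials, CLI login, and so on in order.
        ContentSafetyClient aadClient = new ContentSafetyClientBuilder()
                .endpoint(endpoint)
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();
    }
}
```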
Can I customize harm detection thresholds?
Yes, set outputType to 8 severity levels and apply your own threshold filtering. The SDK returns numeric severity scores you can evaluate.
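A sketch of requesting the fine-grained scale and applying your own threshold, assuming the SDK's `AnalyzeTextOutputType` option; the threshold value and helper names are illustrative:

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextOutputType;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextCategoriesAnalysis;

public class ThresholdFilterExample {
    static final int BLOCK_THRESHOLD = 4; // your own policy threshold

    static AnalyzeTextResult analyze(ContentSafetyClient client, String text) {
        AnalyzeTextOptions options = new AnalyzeTextOptions(text);
        // Request the fine-grained 0-7 scale instead of the default 0/2/4/6 levels.
        options.setOutputType(AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS);
        return client.analyzeText(options);
    }

    /** Apply the policy threshold to the numeric severity scores the SDK returns. */
    static boolean exceedsThreshold(AnalyzeTextResult result) {
        for (TextCategoriesAnalysis a : result.getCategoriesAnalysis()) {
            if (a.getSeverity() != null && a.getSeverity() >= BLOCK_THRESHOLD) {
                return true;
            }
        }
        return false;
    }
}
```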
How quickly do blocklist updates take effect?
Blocklist changes typically propagate within 5 minutes. Plan accordingly when adding newly discovered problematic terms.
Does the SDK support async operations?
Yes, the Azure SDK supports asynchronous patterns. Use async client methods for non-blocking operations in high-throughput applications.
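A sketch of the async path via `buildAsyncClient()`; the async client's methods return Reactor `Mono`/`Flux` types, so nothing executes until subscription:

```java
import com.azure.ai.contentsafety.ContentSafetyAsyncClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.core.credential.KeyCredential;

public class AsyncExample {
    public static void main(String[] args) throws InterruptedException {
        ContentSafetyAsyncClient asyncClient = new ContentSafetyClientBuilder()
                .endpoint(System.getenv("CONTENT_SAFETY_ENDPOINT"))
                .credential(new KeyCredential(System.getenv("CONTENT_SAFETY_KEY")))
                .buildAsyncClient();

        // The request is only sent when the Mono is subscribed; the calling thread is not blocked.
        asyncClient.analyzeText(new AnalyzeTextOptions("comment to screen"))
                .subscribe(result -> result.getCategoriesAnalysis()
                        .forEach(a -> System.out.println(a.getCategory() + ": " + a.getSeverity())));

        Thread.sleep(3000); // demo only: keep the JVM alive long enough for the async call to finish
    }
}
```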
What image formats are supported?
The API accepts common image formats including PNG, JPEG, and BMP. Images can be provided as binary data or accessible blob URLs.
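A sketch of image analysis with a local file supplied as binary content; the file name is a placeholder, and a blob URL can be set on the image data instead:

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.models.AnalyzeImageOptions;
import com.azure.ai.contentsafety.models.AnalyzeImageResult;
import com.azure.ai.contentsafety.models.ContentSafetyImageData;
import com.azure.ai.contentsafety.models.ImageCategoriesAnalysis;
import com.azure.core.credential.KeyCredential;
import com.azure.core.util.BinaryData;

import java.nio.file.Paths;

public class ModerateImageExample {
    public static void main(String[] args) {
        ContentSafetyClient client = new ContentSafetyClientBuilder()
                .endpoint(System.getenv("CONTENT_SAFETY_ENDPOINT"))
                .credential(new KeyCredential(System.getenv("CONTENT_SAFETY_KEY")))
                .buildClient();

        // Load a local PNG/JPEG/BMP as binary content; "sample.png" is a placeholder path.
        ContentSafetyImageData image = new ContentSafetyImageData();
        image.setContent(BinaryData.fromFile(Paths.get("sample.png")));

        AnalyzeImageResult result = client.analyzeImage(new AnalyzeImageOptions(image));
        for (ImageCategoriesAnalysis analysis : result.getCategoriesAnalysis()) {
            System.out.println(analysis.getCategory() + " severity: " + analysis.getSeverity());
        }
    }
}
```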

๊ฐœ๋ฐœ์ž ์„ธ๋ถ€ ์ •๋ณด

์ž‘์„ฑ์ž

sickn33

๋ผ์ด์„ ์Šค

MIT

์ฐธ์กฐ

main

ํŒŒ์ผ ๊ตฌ์กฐ

๐Ÿ“„ SKILL.md