Azure Event Hubs SDK for Rust
Stream Events with Azure Event Hubs in Rust
Building event-driven applications in Rust requires reliable event ingestion. This skill provides official Azure Event Hubs SDK patterns for producers and consumers.
Download the skill ZIP
Upload it in Claude
Go to Settings → Capabilities → Skills → Upload skill
Turn it on and start using it
Try it out
Using "Azure Event Hubs SDK for Rust": Send a batch of sensor readings to Event Hubs
Expected result:
Successfully sent batch with 50 events to partition 2. Batch size: 4096 bytes. Sequence numbers: 1001-1050
Using "Azure Event Hubs SDK for Rust": Receive events from partition 0 with checkpointing
Expected result:
Received 25 events from partition 0. Last sequence number: 2847. Checkpoint saved to blob storage. Processing time: 120ms
Security Audit
Safe. This skill contains documentation-only content for the Azure Event Hubs Rust SDK. No executable code, scripts, or dangerous patterns were detected. Static analysis scanned 0 files and found 0 risk factors. The skill provides guidance on using Azure's official Rust client library for event streaming.
Quality Score
What You Can Build
Real-time Data Ingestion Pipeline
Build scalable event producers to ingest streaming data from IoT devices or application logs into Azure Event Hubs for downstream processing
Event-Driven Microservices
Implement consumer clients that process events from specific partitions, enabling parallel consumption across microservice instances
Telemetry Collection System
Deploy Rust-based event collectors that batch and send telemetry data efficiently with proper error handling and retry logic
Try These Prompts
Create a Rust function that uses the Azure Event Hubs SDK to send a single event with a JSON payload to the configured event hub
Write Rust code that creates an event batch, adds multiple events with metadata, checks batch capacity, and sends the batch to Azure Event Hubs
Implement a Rust consumer that opens receivers for all partitions, receives events from each partition concurrently, and prints event metadata
Build a Rust application that uses ConsumerClient with Blob checkpoint store to track processing progress and enable failover recovery for distributed consumption
Best Practices
- Reuse ProducerClient and ConsumerClient instances instead of creating new ones for each operation
- Use batch sends instead of individual events to improve throughput and reduce latency
- Implement checkpointing with Blob storage when running distributed consumers for reliable recovery
Avoid
- Creating new client instances for every event instead of reusing connections
- Sending events individually in a loop without batching when throughput matters
- Ignoring batch capacity limits, which can cause event loss or send failures