Azure Monitor Ingestion SDK for Java
Client library for sending custom logs to Azure Monitor using the Logs Ingestion API via Data Collection Rules.
Installation
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-monitor-ingestion</artifactId>
<version>1.2.11</version>
</dependency>
Or use the Azure SDK BOM (replace {bom_version} with the latest azure-sdk-bom release):
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-sdk-bom</artifactId>
<version>{bom_version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-monitor-ingestion</artifactId>
</dependency>
</dependencies>
Prerequisites
- Data Collection Endpoint (DCE)
- Data Collection Rule (DCR)
- Log Analytics workspace
- Target table (custom or built-in: CommonSecurityLog, SecurityEvents, Syslog, WindowsEvents)
Environment Variables
DATA_COLLECTION_ENDPOINT=https://<dce-name>.<region>.ingest.monitor.azure.com
DATA_COLLECTION_RULE_ID=dcr-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
STREAM_NAME=Custom-MyTable_CL
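These can be read at startup and passed to the client builder; a minimal sketch using the variable names above:
// Read ingestion settings from the environment (names as defined above)
String endpoint = System.getenv("DATA_COLLECTION_ENDPOINT");
String dcrId = System.getenv("DATA_COLLECTION_RULE_ID");
String streamName = System.getenv("STREAM_NAME");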
Client Creation
Synchronous Client
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.monitor.ingestion.LogsIngestionClient;
import com.azure.monitor.ingestion.LogsIngestionClientBuilder;
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();
LogsIngestionClient client = new LogsIngestionClientBuilder()
.endpoint("<data-collection-endpoint>")
.credential(credential)
.buildClient();
Asynchronous Client
import com.azure.monitor.ingestion.LogsIngestionAsyncClient;
LogsIngestionAsyncClient asyncClient = new LogsIngestionClientBuilder()
.endpoint("<data-collection-endpoint>")
.credential(new DefaultAzureCredentialBuilder().build())
.buildAsyncClient();
Key Concepts
| Concept | Description |
|---|---|
| Data Collection Endpoint (DCE) | Ingestion endpoint URL for your region |
| Data Collection Rule (DCR) | Defines data transformation and routing to tables |
| Stream Name | Target stream in the DCR (e.g., Custom-MyTable_CL) |
| Log Analytics Workspace | Destination for ingested logs |
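Putting these concepts together end to end, a minimal sketch (the environment variables above and the MyLogEntry model defined later are assumed):
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.monitor.ingestion.LogsIngestionClient;
import com.azure.monitor.ingestion.LogsIngestionClientBuilder;
import java.util.List;
// The DCE endpoint says where to send data; the DCR ID and stream name select the route
LogsIngestionClient client = new LogsIngestionClientBuilder()
    .endpoint(System.getenv("DATA_COLLECTION_ENDPOINT"))
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();
// The DCR transforms each entry and routes it to the Log Analytics table
client.upload(System.getenv("DATA_COLLECTION_RULE_ID"),
    System.getenv("STREAM_NAME"),
    List.of(new MyLogEntry("2024-01-15T10:30:00Z", "INFO", "Hello from the sketch")));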
Core Operations
Upload Custom Logs
import java.util.List;
import java.util.ArrayList;
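// Uses the MyLogEntry model defined in the Log Entry Model Example below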
List<Object> logs = new ArrayList<>();
logs.add(new MyLogEntry("2024-01-15T10:30:00Z", "INFO", "Application started"));
logs.add(new MyLogEntry("2024-01-15T10:30:05Z", "DEBUG", "Processing request"));
client.upload("<data-collection-rule-id>", "<stream-name>", logs);
System.out.println("Logs uploaded successfully");
Upload with Concurrency
For large log collections, the client splits the logs into batches and can upload them in parallel; setMaxConcurrency controls the degree of parallelism:
import com.azure.monitor.ingestion.models.LogsUploadOptions;
import com.azure.core.util.Context;
List<Object> logs = getLargeLogs(); // Large collection
LogsUploadOptions options = new LogsUploadOptions()
.setMaxConcurrency(3);
client.upload("<data-collection-rule-id>", "<stream-name>", logs, options, Context.NONE);
Upload with Error Handling
Handle partial upload failures gracefully:
LogsUploadOptions options = new LogsUploadOptions()
.setLogsUploadErrorConsumer(uploadError -> {
System.err.println("Upload error: " + uploadError.getResponseException().getMessage());
System.err.println("Failed logs count: " + uploadError.getFailedLogs().size());
// Option 1: Log and continue
// Option 2: Throw to abort remaining uploads
// throw uploadError.getResponseException();
});
client.upload("<data-collection-rule-id>", "<stream-name>", logs, options, Context.NONE);
Async Upload with Reactor
import reactor.core.publisher.Mono;
List<Object> logs = getLogs();
asyncClient.upload("<data-collection-rule-id>", "<stream-name>", logs)
.doOnSuccess(v -> System.out.println("Upload completed"))
.doOnError(e -> System.err.println("Upload failed: " + e.getMessage()))
.subscribe();
Log Entry Model Example
public class MyLogEntry {
private String timeGenerated;
private String level;
private String message;
public MyLogEntry(String timeGenerated, String level, String message) {
this.timeGenerated = timeGenerated;
this.level = level;
this.message = message;
}
// Getters required for JSON serialization
public String getTimeGenerated() { return timeGenerated; }
public String getLevel() { return level; }
public String getMessage() { return message; }
}
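The client serializes each entry to JSON, so a plain Map works in place of a POJO; a sketch, assuming your stream's columns are TimeGenerated, Level, and Message:
import java.util.LinkedHashMap;
import java.util.Map;
// Keys must match the column names the DCR stream expects (assumed here)
Map<String, Object> entry = new LinkedHashMap<>();
entry.put("TimeGenerated", "2024-01-15T10:30:00Z");
entry.put("Level", "INFO");
entry.put("Message", "Application started");
// Pass as part of the List<Object> given to client.upload(...)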
Error Handling
import com.azure.core.exception.HttpResponseException;
try {
client.upload(ruleId, streamName, logs);
} catch (HttpResponseException e) {
System.err.println("HTTP Status: " + e.getResponse().getStatusCode());
System.err.println("Error: " + e.getMessage());
if (e.getResponse().getStatusCode() == 403) {
System.err.println("Check DCR permissions and managed identity");
} else if (e.getResponse().getStatusCode() == 404) {
System.err.println("Verify DCE endpoint and DCR ID");
}
}
Best Practices
- Batch logs: Upload in batches rather than one at a time
- Use concurrency: Set maxConcurrency for large uploads
- Handle partial failures: Use the error consumer to capture failed entries (see the sketch after this list)
- Match DCR schema: Log entry fields must match the DCR transformation's expectations
- Include TimeGenerated: Most tables require a timestamp field
- Reuse the client: Create it once and reuse it throughout the application
- Use async for high throughput: Prefer LogsIngestionAsyncClient for reactive patterns
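For the partial-failure item above, one pattern is to collect failed entries in the error consumer and retry them once. A sketch, not an SDK-prescribed flow; client, ruleId, streamName, and logs are assumed from the earlier examples:
import com.azure.core.util.Context;
import com.azure.monitor.ingestion.models.LogsUploadOptions;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
// Synchronized list: the consumer may run concurrently when maxConcurrency > 1
List<Object> failed = Collections.synchronizedList(new ArrayList<>());
LogsUploadOptions options = new LogsUploadOptions()
    .setMaxConcurrency(3)
    .setLogsUploadErrorConsumer(error -> failed.addAll(error.getFailedLogs()));
client.upload(ruleId, streamName, logs, options, Context.NONE);
if (!failed.isEmpty()) {
    // Single retry pass; send persistent failures to a dead-letter store rather than looping
    client.upload(ruleId, streamName, new ArrayList<>(failed));
}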
Querying Uploaded Logs
Use azure-monitor-query to query ingested logs:
// See azure-monitor-query skill for LogsQueryClient usage
String query = "MyTable_CL | where TimeGenerated > ago(1h) | limit 10";
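A minimal sketch with azure-monitor-query (the workspace ID placeholder is an assumption; see that skill for details):
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.monitor.query.LogsQueryClient;
import com.azure.monitor.query.LogsQueryClientBuilder;
import com.azure.monitor.query.models.LogsQueryResult;
import com.azure.monitor.query.models.QueryTimeInterval;
import java.time.Duration;
LogsQueryClient queryClient = new LogsQueryClientBuilder()
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();
// Query the workspace the DCR routes to; custom tables carry the _CL suffix
LogsQueryResult result = queryClient.queryWorkspace(
    "<workspace-id>",
    "MyTable_CL | where TimeGenerated > ago(1h) | limit 10",
    new QueryTimeInterval(Duration.ofHours(1)));
result.getTable().getRows().forEach(row -> System.out.println(row.getRow()));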