log0 Event API 1.0.0

AsyncAPI specification for all Kafka event contracts in the log0 platform.

log0 is a multi-tenant log intelligence and incident management platform. Logs enter at the ingestion-gateway and flow through a linear pipeline: ingestion → normalization → clustering → incident management → notification.

Every message is keyed by tenantId to guarantee per-tenant ordering within Kafka partitions. Each stage has a corresponding dead-letter queue (DLQ) topic so no event is silently lost on processing failure.
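The per-tenant ordering guarantee follows from how Kafka maps a record key to a partition: the key is hashed deterministically, so every event with the same tenantId lands in the same partition, and Kafka preserves order within a partition. The sketch below illustrates that property only; Kafka's Java client actually uses murmur2, and MD5 stands in here purely for a deterministic, dependency-free hash.

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Deterministically map a record key to a partition.

    Kafka's default partitioner hashes the key bytes with murmur2; MD5 is
    substituted here only to illustrate the property that matters: the
    same key always maps to the same partition.
    """
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All events for one tenant land in one partition, so consumers see
# that tenant's events in the order they were produced.
p1 = partition_for("tenant-acme", 12)
p2 = partition_for("tenant-acme", 12)
assert p1 == p2
```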

Servers

  • kafkalocal: kafka://localhost:9092

    Local Kafka broker started via docker-compose in docker/kafka/

    Security:
    security.protocol: PLAINTEXT

Operations

  • SEND raw-logs

    Receives every accepted log event from the ingestion-gateway. Kafka key = tenantId (ensures per-tenant ordering within partitions). On producer failure the event is routed to raw-logs-dlq instead.

    Publish a raw log event after validation

    Called by the ingestion-gateway after a POST /api/v1/logs request passes header and payload validation. The event is sent asynchronously; HTTP 202 is returned to the client immediately.

    Operation ID: ingestion-gateway/publishRawLog

    Available only on servers: kafkalocal

    Accepts the following message:

    Raw Log Event

    A validated log event as received by the ingestion-gateway

    Message ID: RawLogEvent
    Payload: object

    Published to raw-logs by the ingestion-gateway immediately after a POST /api/v1/logs request passes validation. Carries the original log payload enriched with platform metadata (eventId, receivedAt).

    Examples
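A RawLogEvent might look like the sketch below. Only eventId, receivedAt, and tenantId are named in the description above; the remaining field names (payload, level, message) are illustrative guesses, since the schema body is not shown here.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical RawLogEvent payload; field names beyond eventId,
# receivedAt, and tenantId are illustrative only.
raw_log_event = {
    "eventId": str(uuid.uuid4()),                        # platform-assigned id
    "receivedAt": datetime.now(timezone.utc).isoformat(),
    "tenantId": "tenant-acme",                           # also the Kafka key
    "payload": {                                         # original log body
        "level": "ERROR",
        "message": "connection refused",
    },
}

record_key = raw_log_event["tenantId"]   # keys by tenant for ordering
record_value = json.dumps(raw_log_event).encode("utf-8")
```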

  • SEND raw-logs-dlq

    Dead-letter queue for the raw-logs pipeline. Receives events that could not be delivered or processed downstream. Kafka key = eventId of the original failed event. Multiple services publish here: the ingestion-gateway (producer failure) and the normalization-service, clustering-service, and incident-service (consumer processing failures).

    Route failed raw log events to the DLQ

    Published by the ingestion-gateway when the Kafka send to raw-logs fails.

    Operation ID: ingestion-gateway/publishRawLogDlq

    Available only on servers: kafkalocal

    Accepts the following message:

    Dead-Letter Queue Event

    Wraps any failed event with error context for replay or alerting

    Message ID: DlqEvent
    Payload: object

    Universal dead-letter queue envelope. Published to raw-logs-dlq by any service that catches a processing error. Preserves the original event and error context for replay or alerting without losing data.

    Examples
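The envelope idea can be sketched as below. The DlqEvent schema body is not shown above, so the field names (failedAt, sourceService, errorType, errorMessage, originalEvent) are illustrative; the point being demonstrated is that the failed event and its error context travel together so nothing is lost.

```python
from datetime import datetime, timezone

def wrap_for_dlq(original_event: dict, error: Exception, source_service: str) -> dict:
    """Wrap a failed event in a DLQ envelope (illustrative field names)."""
    return {
        "failedAt": datetime.now(timezone.utc).isoformat(),
        "sourceService": source_service,
        "errorType": type(error).__name__,
        "errorMessage": str(error),
        "originalEvent": original_event,   # preserved verbatim for replay
    }

envelope = wrap_for_dlq({"eventId": "e-123"}, ValueError("bad timestamp"),
                        "ingestion-gateway")
# Kafka key for the DLQ record = eventId of the original failed event
dlq_key = envelope["originalEvent"]["eventId"]
```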

  • RECEIVE raw-logs

    Receives every accepted log event from the ingestion-gateway. Kafka key = tenantId (ensures per-tenant ordering within partitions). On producer failure the event is routed to raw-logs-dlq instead.

    Consume raw log events for normalization

    Manual offset acknowledgment: the offset is committed only after the normalized event is successfully published to normalized-logs. On failure the event is forwarded to raw-logs-dlq and the offset is still committed to prevent partition stalls.

    Operation ID: normalization-service/consumeRawLog

    Available only on servers: kafkalocal

    Accepts the following message:

    Raw Log Event

    A validated log event as received by the ingestion-gateway

    Message ID: RawLogEvent
    Payload: object

    Published to raw-logs by the ingestion-gateway immediately after a POST /api/v1/logs request passes validation. Carries the original log payload enriched with platform metadata (eventId, receivedAt).

    Examples
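The manual-acknowledgment contract above can be reduced to a small control-flow sketch. The Kafka client is replaced by stub callables (publish_normalized, publish_dlq, commit_offset are hypothetical names) so the invariant is the focus: the offset is committed exactly once per record, and only after the outcome is durable on one of the two topics.

```python
def handle_record(record, publish_normalized, publish_dlq, commit_offset):
    """Process one consumed record with manual offset acknowledgment.

    Success path: normalized event published, then offset committed.
    Failure path: event forwarded to the DLQ, then the offset is still
    committed so the partition does not stall on a poison message.
    """
    try:
        publish_normalized(record)
    except Exception as err:
        publish_dlq(record, err)
    commit_offset(record)
```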

  • SEND normalized-logs

    Carries fully normalized and fingerprinted log events produced by the normalization-service. Kafka key = tenantId. On consumer failure in the clustering-service the event is routed to raw-logs-dlq.

    Publish a normalized and fingerprinted log event

    Operation ID: normalization-service/publishNormalizedLog

    Available only on servers: kafkalocal

    Accepts the following message:

    Normalized Log Event

    A log event after normalization and SHA-256 fingerprinting

    Message ID: NormalizedLogEvent
    Payload: object

    Published to normalized-logs by the normalization-service after cleaning the raw event and generating a deterministic SHA-256 fingerprint. The fingerprint is the deduplication key used by the clustering-service.

    Examples
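A deterministic SHA-256 fingerprint might be derived as below. Which fields feed the hash is not specified above; hashing the level together with the normalized message template is an assumption for illustration. What matters is that identical normalized content always yields the same digest, which is what makes it usable as a deduplication key.

```python
import hashlib

def fingerprint(level: str, normalized_message: str) -> str:
    """SHA-256 fingerprint over normalized fields (field choice assumed)."""
    material = f"{level}|{normalized_message}".encode("utf-8")
    return hashlib.sha256(material).hexdigest()

# Same normalized content -> same 64-hex-char fingerprint.
fp = fingerprint("ERROR", "connection refused to host <IP>")
```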

  • SEND raw-logs-dlq

    Dead-letter queue for the raw-logs pipeline. Receives events that could not be delivered or processed downstream. Kafka key = eventId of the original failed event. Multiple services publish here: the ingestion-gateway (producer failure) and the normalization-service, clustering-service, and incident-service (consumer processing failures).

    Route normalization failures to the DLQ

    Operation ID: normalization-service/publishNormalizedLogDlq

    Available only on servers: kafkalocal

    Accepts the following message:

    Dead-Letter Queue Event

    Wraps any failed event with error context for replay or alerting

    Message ID: DlqEvent
    Payload: object

    Universal dead-letter queue envelope. Published to raw-logs-dlq by any service that catches a processing error. Preserves the original event and error context for replay or alerting without losing data.

    Examples

  • RECEIVE normalized-logs

    Carries fully normalized and fingerprinted log events produced by the normalization-service. Kafka key = tenantId. On consumer failure in the clustering-service the event is routed to raw-logs-dlq.

    Consume normalized log events for fingerprint-based clustering

    Groups events by (tenantId + fingerprint) within a configurable tumbling window (default 5 minutes). When occurrence count reaches the configured threshold (default 10) an IncidentEvent is published.

    Operation ID: clustering-service/consumeNormalizedLog

    Available only on servers: kafkalocal

    Accepts the following message:

    Normalized Log Event

    A log event after normalization and SHA-256 fingerprinting

    Message ID: NormalizedLogEvent
    Payload: object

    Published to normalized-logs by the normalization-service after cleaning the raw event and generating a deterministic SHA-256 fingerprint. The fingerprint is the deduplication key used by the clustering-service.

    Examples
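The tumbling-window grouping above can be sketched as a per-window counter keyed by (tenantId, fingerprint). The window length and threshold mirror the stated defaults (5 minutes, 10 occurrences); the class itself is an illustrative sketch, not the clustering-service implementation, and it emits an incident exactly once per window, when the count first reaches the threshold.

```python
from collections import defaultdict

class TumblingWindowClusterer:
    """Count (tenantId, fingerprint) occurrences per tumbling window."""

    def __init__(self, window_seconds=300, threshold=10):
        self.window_seconds = window_seconds
        self.threshold = threshold
        self.counts = defaultdict(int)   # (window_start, tenant, fp) -> count

    def observe(self, tenant_id, fingerprint, event_time):
        # Tumbling windows: each event belongs to exactly one fixed window.
        window_start = int(event_time) // self.window_seconds * self.window_seconds
        key = (window_start, tenant_id, fingerprint)
        self.counts[key] += 1
        # Fire exactly once, when the count first hits the threshold.
        if self.counts[key] == self.threshold:
            return {"tenantId": tenant_id, "fingerprint": fingerprint,
                    "occurrenceCount": self.counts[key]}
        return None
```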

  • SEND incident-events

    Published by the clustering-service when a fingerprint's occurrence count crosses the configured threshold within a 5-minute tumbling window. Kafka key = tenantId. Consumed by the incident-service to create or update incidents in PostgreSQL.

    Publish an incident event when the occurrence threshold is crossed

    Operation ID: clustering-service/publishIncidentEvent

    Available only on servers: kafkalocal

    Accepts the following message:

    Incident Event

    Signals that a fingerprint's occurrence count crossed the threshold

    Message ID: IncidentEvent
    Payload: object

    Published to incident-events by the clustering-service when a (tenantId + fingerprint) combination exceeds the occurrence threshold within a tumbling window. Triggers incident creation or update in PostgreSQL.

    Examples

  • SEND raw-logs-dlq

    Dead-letter queue for the raw-logs pipeline. Receives events that could not be delivered or processed downstream. Kafka key = eventId of the original failed event. Multiple services publish here: the ingestion-gateway (producer failure) and the normalization-service, clustering-service, and incident-service (consumer processing failures).

    Route clustering failures to the DLQ

    Operation ID: clustering-service/publishClusteringDlq

    Available only on servers: kafkalocal

    Accepts the following message:

    Dead-Letter Queue Event

    Wraps any failed event with error context for replay or alerting

    Message ID: DlqEvent
    Payload: object

    Universal dead-letter queue envelope. Published to raw-logs-dlq by any service that catches a processing error. Preserves the original event and error context for replay or alerting without losing data.

    Examples

  • RECEIVE incident-events

    Published by the clustering-service when a fingerprint's occurrence count crosses the configured threshold within a 5-minute tumbling window. Kafka key = tenantId. Consumed by the incident-service to create or update incidents in PostgreSQL.

    Consume incident events to create or update incidents in PostgreSQL

    Deduplicates by (tenantId + fingerprint) where status != RESOLVED. On new incident: triggers async AI summarization and publishes a NotificationEvent. On update: increments occurrence count only.

    Operation ID: incident-service/consumeIncidentEvent

    Available only on servers: kafkalocal

    Accepts the following message:

    Incident Event

    Signals that a fingerprint's occurrence count crossed the threshold

    Message ID: IncidentEvent
    Payload: object

    Published to incident-events by the clustering-service when a (tenantId + fingerprint) combination exceeds the occurrence threshold within a tumbling window. Triggers incident creation or update in PostgreSQL.

    Examples
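The deduplication rule above, one open incident per (tenantId, fingerprint), can be sketched as an upsert. An in-memory dict stands in for the PostgreSQL table, and field names beyond tenantId, fingerprint, and status are illustrative; the AI summarization and NotificationEvent side effects are omitted.

```python
incidents = {}   # (tenantId, fingerprint) -> incident row (stand-in for PostgreSQL)

def upsert_incident(tenant_id, fingerprint, occurrence_count):
    """Create a new incident, or bump the count of an existing open one.

    Dedup rule: a duplicate only matches an incident whose status is not
    RESOLVED; once resolved, the same fingerprint opens a fresh incident.
    """
    key = (tenant_id, fingerprint)
    existing = incidents.get(key)
    if existing is not None and existing["status"] != "RESOLVED":
        existing["occurrenceCount"] += occurrence_count   # update path
        return "updated"
    incidents[key] = {"status": "OPEN", "occurrenceCount": occurrence_count}
    return "created"
```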

  • SEND notification-events

    Published by the incident-service on incident lifecycle transitions (CREATED, ASSIGNED, RESOLVED). Kafka key = tenantId. Consumed by the notification-service to send Slack alerts.

    Publish a notification event on incident lifecycle transitions

    Operation ID: incident-service/publishNotificationEvent

    Available only on servers: kafkalocal

    Accepts the following message:

    Notification Event

    Signals an incident lifecycle transition requiring a Slack alert

    Message ID: NotificationEvent
    Payload: object

    Published to notification-events by the incident-service on incident lifecycle transitions. The notification-service consumes this to send formatted Slack Block Kit alerts.

    Examples

  • SEND raw-logs-dlq

    Dead-letter queue for the raw-logs pipeline. Receives events that could not be delivered or processed downstream. Kafka key = eventId of the original failed event. Multiple services publish here: the ingestion-gateway (producer failure) and the normalization-service, clustering-service, and incident-service (consumer processing failures).

    Route incident processing failures to the DLQ

    Operation ID: incident-service/publishIncidentDlq

    Available only on servers: kafkalocal

    Accepts the following message:

    Dead-Letter Queue Event

    Wraps any failed event with error context for replay or alerting

    Message ID: DlqEvent
    Payload: object

    Universal dead-letter queue envelope. Published to raw-logs-dlq by any service that catches a processing error. Preserves the original event and error context for replay or alerting without losing data.

    Examples

  • RECEIVE notification-events

    Published by the incident-service on incident lifecycle transitions (CREATED, ASSIGNED, RESOLVED). Kafka key = tenantId. Consumed by the notification-service to send Slack alerts.

    Consume notification events and send Slack alerts

    Posts formatted Slack Block Kit messages to the configured channel. On failure routes to notification-events-dlq (internal DLQ topic).

    Operation ID: notification-service/consumeNotificationEvent

    Available only on servers: kafkalocal

    Accepts the following message:

    Notification Event

    Signals an incident lifecycle transition requiring a Slack alert

    Message ID: NotificationEvent
    Payload: object

    Published to notification-events by the incident-service on incident lifecycle transitions. The notification-service consumes this to send formatted Slack Block Kit alerts.

    Examples
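A minimal Slack Block Kit payload for an incident alert might be built as below. "header" and "section" are real Block Kit block types (a header requires plain_text, a section accepts mrkdwn), but the exact layout the notification-service uses is not specified above, so this shape is only illustrative.

```python
def build_slack_blocks(incident_title: str, status: str) -> dict:
    """Build a minimal Block Kit payload for an incident alert (layout assumed)."""
    return {
        "blocks": [
            # Header blocks only accept plain_text.
            {"type": "header",
             "text": {"type": "plain_text", "text": f"Incident {status}"}},
            # Section blocks may carry mrkdwn-formatted body text.
            {"type": "section",
             "text": {"type": "mrkdwn", "text": incident_title}},
        ]
    }

payload = build_slack_blocks("connection refused to host <IP>", "CREATED")
```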

Messages

  • #1 Raw Log Event

    A validated log event as received by the ingestion-gateway

    Message ID: RawLogEvent
    Payload: object

    Published to raw-logs by the ingestion-gateway immediately after a POST /api/v1/logs request passes validation. Carries the original log payload enriched with platform metadata (eventId, receivedAt).

  • #2 Normalized Log Event

    A log event after normalization and SHA-256 fingerprinting

    Message ID: NormalizedLogEvent
    Payload: object

    Published to normalized-logs by the normalization-service after cleaning the raw event and generating a deterministic SHA-256 fingerprint. The fingerprint is the deduplication key used by the clustering-service.

  • #3 Incident Event

    Signals that a fingerprint's occurrence count crossed the threshold

    Message ID: IncidentEvent
    Payload: object

    Published to incident-events by the clustering-service when a (tenantId + fingerprint) combination exceeds the occurrence threshold within a tumbling window. Triggers incident creation or update in PostgreSQL.

  • #4 Notification Event

    Signals an incident lifecycle transition requiring a Slack alert

    Message ID: NotificationEvent
    Payload: object

    Published to notification-events by the incident-service on incident lifecycle transitions. The notification-service consumes this to send formatted Slack Block Kit alerts.

  • #5 Dead-Letter Queue Event

    Wraps any failed event with error context for replay or alerting

    Message ID: DlqEvent
    Payload: object

    Universal dead-letter queue envelope. Published to raw-logs-dlq by any service that catches a processing error. Preserves the original event and error context for replay or alerting without losing data.

Schemas

  • RawLogEvent (object)

    Published to raw-logs by the ingestion-gateway immediately after a POST /api/v1/logs request passes validation. Carries the original log payload enriched with platform metadata (eventId, receivedAt).

  • NormalizedLogEvent (object)

    Published to normalized-logs by the normalization-service after cleaning the raw event and generating a deterministic SHA-256 fingerprint. The fingerprint is the deduplication key used by the clustering-service.

  • IncidentEvent (object)

    Published to incident-events by the clustering-service when a (tenantId + fingerprint) combination exceeds the occurrence threshold within a tumbling window. Triggers incident creation or update in PostgreSQL.

  • NotificationEvent (object)

    Published to notification-events by the incident-service on incident lifecycle transitions. The notification-service consumes this to send formatted Slack Block Kit alerts.

  • DlqEvent (object)

    Universal dead-letter queue envelope. Published to raw-logs-dlq by any service that catches a processing error. Preserves the original event and error context for replay or alerting without losing data.