Event Stream

Event Stream is a real-time event ingestion service that receives events over HTTP and makes them available in Apache Iceberg tables in near real time.


Overview

Event Stream provides a simple way to ingest events into your data lakehouse without managing complex infrastructure like Kafka. You send HTTP requests with JSON data, and the service handles writing to Iceberg tables on your behalf.

Key benefits:

  • Simple HTTP API - no client libraries required
  • Batch support for high throughput
  • Data available in Iceberg tables in near real-time
  • No need to manage message queues or streaming infrastructure

How It Works

  1. Create an Event Stream - Deploy an ingestion service via the IOMETE console
  2. Create an Iceberg table - Define your table schema with the fields you want to capture
  3. Send events - POST JSON data to your Event Stream endpoint
  4. Query your data - Events appear in your Iceberg table for analysis

Prerequisites

Before using Event Stream, you need a Personal Access Token (PAT) for authentication. Generate one from the IOMETE console under your account settings.


Creating an Event Stream

  1. Navigate to Event Streams in the IOMETE console
  2. Click New Event Stream button in the top right corner

The Event Streams list page displays all your existing streams with their status, namespace, and replica health at a glance.

Event Streams list | IOMETE

General Settings

In the create form, configure the basic settings for your Event Stream.

Event Stream create form | IOMETE
| Setting | Description |
| --- | --- |
| Name | A unique name for your Event Stream (e.g., sales-event-stream) |
| Resource Bundle | Select who can access this resource |
| Namespace | Kubernetes namespace for deployment |
| Resources | Number of replicas and CPU/memory allocation per replica |
| Volume | Persistent storage for the Event Stream. If no volume is attached, all data is lost permanently when the container is restarted or redeployed. Not recommended for production without a volume. |
Warning: When no volume is selected, a warning message appears: "No Volume is currently attached to this Event Stream. If the container is restarted or redeployed, all data will be lost permanently. This is not recommended for production."

Event Stream without volume warning | IOMETE

Resource Tags (Optional)

Add custom key/value tags to categorize and organize your Event Stream resources. Tags help with filtering and resource management across your organization.

Resource tags | IOMETE

Review & Create

Before creating, review all your settings in a summary view. Verify the configuration is correct, then click Create to deploy your Event Stream.

Review and create | IOMETE

Deployment Status

After clicking Create, the Event Stream begins deploying. The status shows Starting, along with a replicas-ready count, while pods are being provisioned.

Event Stream starting | IOMETE

Once at least one replica is running, the status changes to Active and displays the replicas-ready count. Your Event Stream is now ready to receive events.

Event Stream active | IOMETE

Managing Event Streams

View Status and Logs

The Event Stream details page provides four tabs for monitoring:

  • Details - Overview of configuration, status, and resource allocation
  • Connect - Endpoint URL and code snippets for integration
  • Logs - Real-time container logs for debugging and monitoring
  • Kubernetes events - Deployment events and pod lifecycle information

The Logs tab shows real-time container output, useful for debugging ingestion issues or monitoring service health.

Event Stream logs | IOMETE

The Kubernetes events tab displays pod lifecycle events including volume claims and pod creation.

Kubernetes events | IOMETE

Configure

Click the Configure button to modify your Event Stream settings. The edit form allows you to change the resource bundle, namespace, resources, and volume configuration. After making changes, click Review & Save to apply them.

Configure Event Stream | IOMETE

Scale

To adjust the number of replicas, select Scale from the three-dot menu. A modal dialog appears where you can set the desired replica count using the slider or input field, then click Scale to apply.

Scale Event Stream | IOMETE

Terminate and Start

Click Terminate to stop the Event Stream. The status changes to Stopped and the service stops accepting events. The Start button appears to restart the service when needed.

Terminated Event Stream | IOMETE

Delete

To permanently remove an Event Stream, select Delete from the three-dot menu. This action cannot be undone.

Scale and delete options | IOMETE

Creating an Iceberg Table

Create a table to store your events. The table schema should match the JSON fields you plan to send.

CREATE TABLE analytics.events.sales (
  event_id STRING,
  customer_id STRING,
  product_id STRING,
  quantity INT,
  price DOUBLE
);

Sending Events

Once your Event Stream is active, you can send events using any HTTP client.

Endpoint URL

Navigate to the Connect tab to find your endpoint URL and ready-to-use code snippets in multiple programming languages (cURL, Java, Kotlin, C#, JavaScript, Python, Go, Ruby).

Connect tab | IOMETE

Request Format

| Component | Value |
| --- | --- |
| Method | POST |
| Content-Type | application/json |
| Headers | token - your Personal Access Token; table - target table in format catalog.database.table |
| Body | JSON array of events |

Request Limits

  • Maximum request body size: 5 MB
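To stay under the 5 MB limit on the client side, one approach is to split large event lists by serialized size before sending. A minimal sketch in Python (the helper name and sizing heuristic are illustrative, not part of the service API):

```python
import json

MAX_BODY_BYTES = 5 * 1024 * 1024  # service limit: 5 MB per request body

def split_into_requests(events, max_bytes=MAX_BODY_BYTES):
    """Split a list of event dicts into chunks whose JSON-serialized
    request body stays under max_bytes each."""
    chunks, current, current_size = [], [], 2  # 2 bytes for the "[]" wrapper
    for event in events:
        size = len(json.dumps(event).encode("utf-8")) + 2  # +2 for ", " separator
        if current and current_size + size > max_bytes:
            chunks.append(current)
            current, current_size = [], 2
        current.append(event)
        current_size += size
    if current:
        chunks.append(current)
    return chunks
```

Each returned chunk can then be POSTed as its own request body. Note that a single event larger than the limit still produces an oversized chunk; such events need to be slimmed down at the source.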

Single Event

[{"event_id": "e001", "customer_id": "c123", "product_id": "p456", "quantity": 2, "price": 29.99}]

Batch Events

For better throughput, send multiple events in a single request:

[
  {"event_id": "e001", "customer_id": "c123", "product_id": "p456", "quantity": 2, "price": 29.99},
  {"event_id": "e002", "customer_id": "c124", "product_id": "p789", "quantity": 1, "price": 49.99},
  {"event_id": "e003", "customer_id": "c125", "product_id": "p456", "quantity": 5, "price": 29.99}
]

Code Examples

cURL

curl -X POST "https://<your-domain>/data-plane/<namespace>/event-stream/<stream-name>/ingest" \
  -H "Content-Type: application/json" \
  -H "token: <your-access-token>" \
  -H "table: analytics.events.sales" \
  -d '[{"event_id": "e001", "customer_id": "c123", "product_id": "p456", "quantity": 2, "price": 29.99}]'

Python

import requests

endpoint = "https://<your-domain>/data-plane/<namespace>/event-stream/<stream-name>/ingest"
headers = {
    "Content-Type": "application/json",
    "token": "<your-access-token>",
    "table": "analytics.events.sales",
}

events = [
    {"event_id": "e001", "customer_id": "c123", "product_id": "p456", "quantity": 2, "price": 29.99},
    {"event_id": "e002", "customer_id": "c124", "product_id": "p789", "quantity": 1, "price": 49.99},
]

response = requests.post(endpoint, json=events, headers=headers)
print(response.status_code)

Best Practices

Use Batch Requests

Instead of sending events one at a time, collect events on the client side and send them in batches. This significantly improves throughput.

# Recommended: batch events before sending
batch = []
batch_size = 100

for event in events:
    batch.append(event)
    if len(batch) >= batch_size:
        send_to_event_stream(batch)
        batch = []

# Send any remaining events
if batch:
    send_to_event_stream(batch)

Match JSON Fields to Table Schema

Ensure your JSON field names match your Iceberg table column names. Mismatched fields will be ignored.

| Iceberg Column | JSON Field |
| --- | --- |
| event_id | "event_id": "value" |
| customer_id | "customer_id": "value" |

Store Flexible Data with String Columns

For complex or dynamic event fields (such as nested objects or fields with varying structures), define the Iceberg column as a string type. The Event Stream will automatically cast these values to strings, allowing you to store flexible data without strict schema requirements.
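For example, a sketch assuming the sales table has an extra details column declared as STRING (this column is hypothetical, not part of the schema shown earlier): serializing the nested object yourself keeps the stored format predictable, even though the service would also cast it.

```python
import json

# Hypothetical event with a nested "details" object; the target
# Iceberg column for it is declared as STRING.
raw_event = {
    "event_id": "e001",
    "details": {"channel": "web", "tags": ["promo", "repeat"]},
}

# Serialize the nested object to a JSON string before sending.
event = {**raw_event, "details": json.dumps(raw_event["details"])}
```

Downstream, the string can be parsed back with json.loads in Python, or with from_json / get_json_object in Spark SQL queries over the table.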

Use Persistent Volume for Production

When creating an Event Stream for production workloads, always attach a persistent volume. Without a volume, in-flight events may be lost if the service restarts.

Handle Errors Gracefully

Implement retry logic in your client for transient failures:

import time
import requests

def send_with_retry(events, max_retries=3):
    # endpoint and headers as defined in the Python example above
    for attempt in range(max_retries):
        response = requests.post(endpoint, json=events, headers=headers)
        if response.status_code == 200:
            return response
        time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s...
    raise Exception("Failed after retries")

Troubleshooting

HTTP Status Codes

| Status | Error Code | Cause | Solution |
| --- | --- | --- | --- |
| 200 | OK | Success | Event(s) ingested successfully |
| 400 | BAD_REQUEST | Missing table header | Add table header with format catalog.database.table |
| 400 | BAD_REQUEST | Invalid table name format | Use format catalog.database.table (three parts separated by dots) |
| 400 | BAD_REQUEST | Invalid JSON | Verify JSON syntax is valid |
| 400 | BAD_REQUEST | Expected JSON array | Wrap events in an array: [{...}] even for single events |
| 400 | BAD_REQUEST | Schema mismatch | Ensure JSON field types match table column types |
| 401 | UNAUTHORIZED | Missing token header | Add token header with your Personal Access Token |
| 401 | UNAUTHORIZED | Invalid or expired token | Generate a new Personal Access Token |
| 403 | FORBIDDEN | No access to Event Stream | Request access to the Event Stream service |
| 403 | FORBIDDEN | No INSERT permission | Request INSERT permission on the target table |
| 404 | NOT_FOUND | Table not found | Verify the table exists in the specified catalog and database |
| 429 | TOO_MANY_REQUESTS | Server overloaded | Reduce request rate or increase replicas |
| 500 | INTERNAL_SERVER_ERROR | Server error | Check Event Stream logs for details |
| 503 | SERVICE_UNAVAILABLE | Service draining | Wait for service to restart or check deployment status |
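The table above suggests a simple client-side policy: retry transient statuses (429, 500, 503) with backoff, but surface other errors immediately since retrying cannot fix a bad request or missing permission. A minimal sketch (the function name and retry policy are illustrative; `send` stands in for whatever performs the actual POST):

```python
import time

RETRYABLE = {429, 500, 503}  # overloaded, server error, draining

def post_with_policy(send, events, max_retries=3, base_delay=1.0):
    """Call send(events) -> HTTP status code. Retries transient
    failures with exponential backoff; raises immediately on
    non-retryable errors such as 400/401/403/404."""
    for attempt in range(max_retries):
        status = send(events)
        if status == 200:
            return status
        if status not in RETRYABLE:
            raise RuntimeError(f"non-retryable error: HTTP {status}")
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s...
    raise RuntimeError(f"gave up after {max_retries} attempts (HTTP {status})")
```

Injecting `send` as a parameter also makes the policy easy to test without a live endpoint.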

Common Issues

| Issue | Solution |
| --- | --- |
| Events not appearing in table | Check logs in the Event Stream details page; verify the table name is correct |
| Service not starting | Review Kubernetes events for resource issues (CPU/memory limits) |
| Slow ingestion | Increase batch size; add more replicas; attach a persistent volume |
| Connection refused | Verify the Event Stream status is Active with ready replicas (e.g., 2/2) |