SQL Federation
SQL Acceleration

Write to Apache Iceberg Tables with SQL in Spice

Wyatt Wenzel
Product Marketing Lead

With the release of Spice v1.8, developers can now write directly to Apache Iceberg tables and catalogs using standard SQL INSERT INTO statements.

This feature extends Spice’s SQL federation capabilities beyond reads, enabling data ingestion, transformation, and pipeline workloads to write results back into Iceberg directly from the same runtime used for queries and acceleration.

What sets Spice apart from other query engines is its broader, application-focused feature set designed for modern data and AI workloads. Spice brings together federation, hybrid search, embedded LLM inference, and now native writes in one unified runtime - enabling teams to build complete, end-to-end workflows without the management overhead and performance concessions of using multiple systems. 

Iceberg write support is available in preview, with append-only operations and schema validation for secure and predictable data management.

From Read-Only Federation to Full Data Workflows

Data teams are standardizing on open table formats like Apache Iceberg to unify analytical and operational data across systems; Iceberg offers a consistent way to store, version, and manage data across different engines and clouds, helping teams avoid vendor lock-in while maintaining strong governance and interoperability.

Supporting Iceberg writes natively inside Spice means development teams can:

  • Write directly to Iceberg without ETL: Insert data from SQL queries straight into Iceberg tables.
  • Simplify ingestion paths: Load transformed or federated data into Iceberg without separate tools.
  • Enforce governance: Maintain schema validation and secure access through read_write permissions.

Paired with Spice’s built-in performance acceleration and federation, these write capabilities make it easier to use Iceberg not just as a storage solution, but as a queryable data layer for both operational and AI workloads.
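Because writes go through the same runtime as reads, applications can submit INSERT statements programmatically over Spice's HTTP interface rather than shelling out to the CLI. Here is a minimal Python sketch, assuming the runtime's default HTTP port (8090, as shown in the startup logs later in this post) and a /v1/sql endpoint that accepts a plain-text SQL body and returns JSON; confirm the exact endpoint path and response shape against the Spice HTTP API documentation:

```python
import json
import urllib.request

# Default Spice runtime HTTP address; the /v1/sql endpoint path is an
# assumption -- verify it against the Spice HTTP API docs for your version.
SPICE_SQL_URL = "http://localhost:8090/v1/sql"

def build_request(sql: str) -> urllib.request.Request:
    """Build a POST request carrying the SQL statement as a plain-text body."""
    return urllib.request.Request(
        SPICE_SQL_URL,
        data=sql.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )

def run_sql(sql: str):
    """Execute a statement against a running Spice instance, parsing the JSON response."""
    with urllib.request.urlopen(build_request(sql)) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # Requires a running Spice instance with a read_write Iceberg dataset.
    run_sql("INSERT INTO iceberg_table (id, name, amount) VALUES (3, 'Ada', 42.0)")
```

The same pattern works for SELECTs, so an ingestion job and its verification query can share one small client.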

How It Works

Spice supports INSERT INTO statements on Iceberg tables and catalogs explicitly marked as read_write.

Example Spicepod configuration:

catalogs:
  - from: iceberg:https://glue.ap-northeast-3.amazonaws.com/iceberg/v1/catalogs/111111/namespaces
    name: ice
    access: read_write

datasets:
  - from: iceberg:https://iceberg-catalog-host.com/v1/namespaces/my_namespace/tables/my_table
    name: iceberg_table
    access: read_write

And here are some example SQL statements:

-- Insert from another table
INSERT INTO iceberg_table
SELECT * FROM existing_table;

-- Insert with values
INSERT INTO iceberg_table (id, name, amount)
VALUES (1, 'John', 100.0), (2, 'Jane', 200.0);

-- Insert into catalog table
INSERT INTO ice.sales.transactions
VALUES (1001, '2025-01-15', 299.99, 'completed');

Support for updates, deletes, and merges will be added in future releases.
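When INSERT statements are generated from application data rather than written by hand, string values need to be quoted and any embedded single quotes escaped. A small, hypothetical helper (not part of Spice) that renders Python rows into the VALUES form shown above:

```python
def sql_literal(value) -> str:
    """Render a Python value as a SQL literal for a VALUES clause."""
    if value is None:
        return "NULL"
    # Check bool before int/float: bool is a subclass of int in Python.
    if isinstance(value, bool):
        return "TRUE" if value else "FALSE"
    if isinstance(value, (int, float)):
        return str(value)
    # Escape embedded single quotes by doubling them.
    return "'" + str(value).replace("'", "''") + "'"

def build_insert(table: str, columns: list[str], rows: list[tuple]) -> str:
    """Build a multi-row INSERT INTO statement."""
    cols = ", ".join(columns)
    values = ", ".join(
        "(" + ", ".join(sql_literal(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} ({cols}) VALUES {values};"

stmt = build_insert(
    "iceberg_table",
    ["id", "name", "amount"],
    [(1, "John", 100.0), (2, "Jane", 200.0)],
)
# stmt == "INSERT INTO iceberg_table (id, name, amount) VALUES (1, 'John', 100.0), (2, 'Jane', 200.0);"
```

For untrusted input, prefer a driver with parameter binding over string construction; this sketch just illustrates the statement shape.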

Now, let’s walk through an end-to-end workflow demonstrating how to execute Iceberg writes in Spice.

Write to Iceberg Tables with Spice Cookbook

Prerequisites: 

  • Access to an Iceberg catalog, or Docker to run an Iceberg catalog locally.
  • Spice is installed (see the Getting Started documentation).

Step 1: Create a new directory and initialize a Spicepod

mkdir iceberg-catalog-recipe
cd iceberg-catalog-recipe
spice init

Step 2. Run the Docker container for the Iceberg catalog

In a separate terminal, clone the cookbook repository and run the Docker container for the Iceberg catalog.

git clone https://github.com/spiceai/cookbook.git
cd cookbook/catalogs/iceberg
docker compose up -d

Step 3. Add the Iceberg Catalog Connector to your Spicepod

catalogs:
  - from: iceberg:http://localhost:8181/v1/namespaces
    # access: read_write
    name: ice
    params:
      iceberg_s3_endpoint: http://localhost:9000
      iceberg_s3_access_key_id: admin
      iceberg_s3_secret_access_key: password
      iceberg_s3_region: us-east-1

Step 4. Run Spice

spice run
2025/01/27 11:08:36 INFO Checking for latest Spice runtime release...
2025/01/27 11:08:37 INFO Spice.ai runtime starting...
2025-01-27T19:08:37.494155Z  INFO runtime::init::dataset: No datasets were configured. If this is unexpected, check the Spicepod configuration.
2025-01-27T19:08:37.494905Z  INFO runtime::init::catalog: Registering catalog 'ice' for iceberg
2025-01-27T19:08:37.499162Z  INFO runtime::metrics_server: Spice Runtime Metrics listening on 127.0.0.1:9090
2025-01-27T19:08:37.499174Z  INFO runtime::flight: Spice Runtime Flight listening on 127.0.0.1:50051
2025-01-27T19:08:37.500689Z  INFO runtime::http: Spice Runtime HTTP listening on 127.0.0.1:8090
2025-01-27T19:08:37.503376Z  INFO runtime::opentelemetry: Spice Runtime OpenTelemetry listening on 127.0.0.1:50052
2025-01-27T19:08:37.696469Z  INFO runtime::init::results_cache: Initialized results cache; max size: 128.00 MiB, item ttl: 1s
2025-01-27T19:08:37.697178Z  INFO runtime::init::catalog: Registered catalog 'ice' with 1 schema and 8 tables

Step 5. Query the Iceberg catalog

spice sql
sql> show tables;
+---------------+--------------+--------------+------------+
| table_catalog | table_schema | table_name   | table_type |
+---------------+--------------+--------------+------------+
| ice           | tpch_sf1     | lineitem     | BASE TABLE |
| ice           | tpch_sf1     | nation       | BASE TABLE |
| ice           | tpch_sf1     | orders       | BASE TABLE |
| ice           | tpch_sf1     | supplier     | BASE TABLE |
| ice           | tpch_sf1     | customer     | BASE TABLE |
| ice           | tpch_sf1     | partsupp     | BASE TABLE |
| ice           | tpch_sf1     | region       | BASE TABLE |
| ice           | tpch_sf1     | part         | BASE TABLE |
| spice         | runtime      | task_history | BASE TABLE |
| spice         | runtime      | metrics      | BASE TABLE |
+---------------+--------------+--------------+------------+

Run the Pricing Summary Report query (Q1). More information about TPC-H and its queries can be found in the official TPC Benchmark H Standard Specification.

select
  l_returnflag,
  l_linestatus,
  sum(l_quantity) as sum_qty,
  sum(l_extendedprice) as sum_base_price,
  sum(l_extendedprice * (1 - l_discount)) as sum_disc_price,
  sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,
  avg(l_quantity) as avg_qty,
  avg(l_extendedprice) as avg_price,
  avg(l_discount) as avg_disc,
  count(*) as count_order
from
  ice.tpch_sf1.lineitem
where
  l_shipdate <= date '1998-12-01' - interval '110' day
group by
  l_returnflag,
  l_linestatus
order by
  l_returnflag,
  l_linestatus
;

Output:

+--------------+--------------+-------------+-----------------+-------------------+---------------------+-----------+--------------+----------+-------------+
| l_returnflag | l_linestatus | sum_qty     | sum_base_price  | sum_disc_price    | sum_charge          | avg_qty   | avg_price    | avg_disc | count_order |
+--------------+--------------+-------------+-----------------+-------------------+---------------------+-----------+--------------+----------+-------------+
| A            | F            | 37734107.00 | 56586554400.73  | 53758257134.8700  | 55909065222.827692  | 25.522005 | 38273.129734 | 0.049985 | 1478493     |
| N            | F            | 991417.00   | 1487504710.38   | 1413082168.0541   | 1469649223.194375   | 25.516471 | 38284.467760 | 0.050093 | 38854       |
| N            | O            | 73416597.00 | 110112303006.41 | 104608220776.3836 | 108796375788.183317 | 25.502437 | 38249.282778 | 0.049996 | 2878807     |
| R            | F            | 37719753.00 | 56568041380.90  | 53741292684.6040  | 55889619119.831932  | 25.505793 | 38250.854626 | 0.050009 | 1478870     |
+--------------+--------------+-------------+-----------------+-------------------+---------------------+-----------+--------------+----------+-------------+

Time: 0.186233833 seconds. 4 rows.

Step 6. Write to Iceberg tables

To enable write operations to Iceberg tables, uncomment the access: read_write configuration and restart Spice.

6.1. Update the Spicepod configuration

Edit the spicepod.yaml file to uncomment the access line:

catalogs:
  - from: iceberg:http://localhost:8181/v1/namespaces
    access: read_write  # Uncomment this line
    name: ice
    params:
      iceberg_s3_endpoint: http://localhost:9000
      iceberg_s3_access_key_id: admin
      iceberg_s3_secret_access_key: password
      iceberg_s3_region: us-east-1

6.2. Restart Spice

Stop the current Spice instance (Ctrl+C) and restart it:

spice run

6.3. Insert data into Iceberg tables

Now you can write data to the Iceberg tables using SQL INSERT statements:

spice sql

Example: Insert a new region into the region table:

INSERT INTO ice.tpch_sf1.region (r_regionkey, r_name, r_comment) 
VALUES (5, 'ANTARCTICA', 'A cold and remote region');
+-------+
| count |
+-------+
| 1     |
+-------+

Example: Insert a new nation into the nation table:

INSERT INTO ice.tpch_sf1.nation (n_nationkey, n_name, n_regionkey, n_comment) 
VALUES (25, 'PENGUINIA', 5, 'A vibrant home for brave penguins in Antarctica');
+-------+
| count |
+-------+
| 1     |
+-------+

Verify the inserts by querying the tables:

SELECT * FROM ice.tpch_sf1.region WHERE r_regionkey = 5;
SELECT * FROM ice.tpch_sf1.nation WHERE n_nationkey = 25;

Step 7. View the Iceberg tables in MinIO

Navigate to http://localhost:9001 and log in with the username admin and password password. Open the iceberg bucket to see the created Iceberg tables.

Step 8. Clean up

docker compose down --volumes --rmi local

Next steps with Iceberg writes in Spice

Iceberg write support is available in preview. See the Iceberg connector docs for configuration details and try the Iceberg Catalog Connector recipe to get started. 

Feedback is welcome as we round out support for Iceberg writes in upcoming releases! 

Work with Spice AI

Interested in working with Spice AI or looking to learn a little more about the work we do? We are always looking for our next big challenge. Book an introductory call via our Calendly. Take a deeper look at our enterprise offerings by visiting Spice.ai.

