Data Warehouse Sync (beta)
For teams that would like to access indexed subgraph data for analytics, we can ETL snapshots of your entities into your data warehouse (BigQuery, Snowflake, etc.).
This allows you to define your data model once and use Satsuma for both product & analytics.
We will provide the same entities that exist in your subgraph schema. Every entity in your subgraph will be turned into a relational table.
Every table row represents the state of a subgraph entity over a specific block/time range. Any time an entity changes, we capture a new row. Every snapshot row has the following columns:
block_start_inclusive (float)
block_end_exclusive (float)
block_start_time_inclusive (timestamp)
block_end_time_exclusive (timestamp)
This allows you to answer complex historical queries about how subgraph entities are changing over time.
Example: syndicate_mainnet.syndicate_dao
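For instance, a point-in-time lookup can filter on these columns. The sketch below is illustrative only: it uses the syndicate_mainnet.syndicate_dao example above, a placeholder for the shared project name, and assumes the current (open-ended) snapshot row has a NULL block_end_exclusive, which may differ from the actual sync behavior.

```sql
-- Illustrative sketch: each entity's state as of block 15000000.
-- Assumes open-ended (current) rows have a NULL block_end_exclusive.
SELECT *
FROM `<INSERT project here>.syndicate_mainnet.syndicate_dao`
WHERE block_start_inclusive <= 15000000
  AND (block_end_exclusive IS NULL OR block_end_exclusive > 15000000)
```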
Data in your warehouse will be updated every hour.
The easiest way to provide BigQuery access is with data sharing. We’ll host the data in our BigQuery warehouse, but you’ll have full access to query and join the data with other sources.
Provide us your email address, service account email address, or Google group email address.
We'll notify you when the datasets have been shared.
Set up access to the dataset in BigQuery:
Go to https://console.cloud.google.com/bigquery and make sure you create or select a project in the top left.
Click “+ Add Data” → “Pin a project” → “Enter project name”.
Enter the value <INSERT project here> for the Project Name.
Run a query (with the fully qualified name) to make sure it’s working properly:
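For example (illustrative; replace <INSERT project here> with the shared project name and syndicate_mainnet.syndicate_dao with one of your own dataset and table names):

```sql
-- Quick sanity check against the shared dataset, fully qualified as project.dataset.table.
SELECT *
FROM `<INSERT project here>.syndicate_mainnet.syndicate_dao`
LIMIT 10
```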
You’re all set! 🎉