
Snowflake Data Share

Get access to your Amplitude events through Snowflake's Data Share product.

Amplitude supports Snowflake's Data Share integration to give customers access to the event data that lives within Amplitude. The integration supports sharing a Raw Events table and a Merged IDs table.

Info

Amplitude's Snowflake Data Share Export is a paid add-on to your Amplitude contract.

Limits

Snowflake supports data sharing only within the same region and cloud provider. Amplitude's Snowflake instance runs in the US West (Oregon) region on Amazon Web Services. To enable cross-region or cross-cloud data sharing, contact your Amplitude Account Manager or reach out to Amplitude Support.

Amplitude supports one Snowflake Data Share destination per project for each data type (events and merged user tables). You can set up multiple destinations across your organization. Destinations in different projects don't need to connect to the same Snowflake account. For example, production projects can connect to your production Snowflake instance, staging projects to your staging instance, and development projects to your sandbox instance.

EU availability

Snowflake Data Share isn't available for Amplitude customers in the EU region.

Set up a recurring data export to Snowflake with Data Share

To set up a recurring export of your Amplitude data to Snowflake, follow these steps:

Note

You need admin or manager privileges in Amplitude, as well as a Snowflake role that allows you to enable resources in Snowflake.

  1. In Amplitude Data, click Catalog and select the Destinations tab.

  2. In the Warehouse Destinations section, click Snowflake Data Share.

  3. Under Access Data via Snowflake Data Share, enter the following information:

    • Account Name: Your Snowflake account name. It's the first part of your Snowflake URL, between https:// and .snowflakecomputing.com. For example, if your Snowflake URL is https://amplitude.snowflakecomputing.com, enter amplitude.
    • Org Name: This is the name of your Snowflake organization.
  4. Choose which data to include in this export: Raw events every 5 minutes, Merged IDs every hour, or both. For events, you can also specify filtering conditions to only export events that meet certain criteria.

    Note

    The interval for each option is measured from the time Amplitude ingests the data.

  5. Click Next, enter a name for this Snowflake export, and click Finish.

When setup is complete, Amplitude sends all future events to Snowflake through the Data Share.
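
After Amplitude provisions the share, a Snowflake admin on your side typically accepts it and exposes it as a local database. The following is a minimal sketch of the consumer-side commands, with hypothetical names for the provider account, share, database, and role (run SHOW SHARES to find the real values):

```sql
-- List inbound shares visible to your Snowflake account
SHOW SHARES;

-- Create a read-only local database from the Amplitude share.
-- <provider_account>.<share_name> is a placeholder; copy the real
-- value from the SHOW SHARES output.
CREATE DATABASE AMPLITUDE_SHARE FROM SHARE <provider_account>.<share_name>;

-- Let an existing role (ANALYST here is hypothetical) query the shared data
GRANT IMPORTED PRIVILEGES ON DATABASE AMPLITUDE_SHARE TO ROLE ANALYST;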

Backfill data

After the share is set up between Amplitude and your Snowflake account, Amplitude loads only data from that point forward. To backfill historical data from before the connection, specify this in your request when setting up the share.

Warning

Contact your Amplitude Account Manager for pricing.

Remove Data Share from Amplitude

To remove the Amplitude data set made available through the Data Share, reach out to your Account Manager at Amplitude or submit a support request.

Snowflake export format

Schema Name              Description
DB_{ORG_ID}              Database
SCHEMA_{PROJECT_ID}      Schema
EVENTS_{PROJECT_ID}      Events table
MERGE_IDS_{PROJECT_ID}   Merged User table
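
Once the share is mounted, you query the tables by fully qualified name. A hedged example, assuming the consumer database was created as AMPLITUDE_SHARE (see the sketch above) and a hypothetical project ID of 12345:

```sql
-- 12345 is a placeholder project ID; substitute your own.
SELECT COUNT(*) AS events_yesterday
FROM AMPLITUDE_SHARE.SCHEMA_12345.EVENTS_12345
WHERE TO_DATE(EVENT_TIME) = CURRENT_DATE - 1;
```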

Event table

Event table schema

The Event table schema includes the following columns:

  • adid
  • amplitude_event_type
  • amplitude_id
  • app
  • city
  • client_event_time
  • client_upload_time
  • country
  • data
  • device_brand
  • device_carrier
  • device_family
  • device_id
  • device_manufacturer
  • device_model
  • device_type
  • dma
  • event_id
  • event_properties
  • event_time
  • event_type
  • followed_an_identify
  • group_properties
  • groups
  • idfa
  • ip_address
  • is_attribution_event
  • language
  • library
  • location_lat
  • location_lng
  • os_name
  • os_version
  • paying
  • platform
  • processed_time
  • region
  • sample_rate
  • server_upload_time
  • session_id
  • start_version
  • user_id
  • user_properties
  • uuid
  • version_name
  • amplitude_attribution_ids
  • server_received_time
  • global_user_properties
  • partner_id
  • plan
  • source_id
  • data_type

For more information, see the Event Table Schema section of the Snowflake Export documentation.
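
As a sketch of working with these columns: the property columns (event_properties, user_properties, group_properties) hold semi-structured JSON, which Snowflake exposes through VARIANT path syntax. The event name 'sign_up' and property plan_type below are hypothetical, and the VARIANT typing is an assumption to confirm against the Snowflake Export schema documentation:

```sql
-- Daily unique users for one event, broken out by a custom event property.
-- 'sign_up' and plan_type are hypothetical names; 12345 is a placeholder
-- project ID. Assumes EVENT_PROPERTIES is a VARIANT (JSON) column.
SELECT
    TO_DATE(EVENT_TIME)                AS event_date,
    EVENT_PROPERTIES:plan_type::STRING AS plan_type,
    COUNT(DISTINCT AMPLITUDE_ID)       AS unique_users
FROM AMPLITUDE_SHARE.SCHEMA_12345.EVENTS_12345
WHERE EVENT_TYPE = 'sign_up'
  AND TO_DATE(EVENT_TIME) >= DATEADD('day', -7, CURRENT_DATE)
GROUP BY 1, 2
ORDER BY 1, 2;
```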

Event table clustering

The exported events table uses the following clustering keys (in order):

  1. TO_DATE(EVENT_TIME)
  2. TO_DATE(SERVER_UPLOAD_TIME)
  3. EVENT_TYPE
  4. AMPLITUDE_ID

This clustering optimizes query performance for time-based queries. Data Share provides read-only access to an Amplitude-owned table, so you can't modify the clustering keys. If you need custom clustering for different query patterns, use Snowflake Export instead for full table ownership and control.
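
In practice, queries that filter on event date, upload date, event type, or Amplitude ID let Snowflake prune micro-partitions. A sketch, using the same placeholder names as above:

```sql
-- Predicates on TO_DATE(EVENT_TIME) and EVENT_TYPE line up with the
-- table's clustering keys, so Snowflake can skip unrelated partitions.
SELECT EVENT_TYPE, COUNT(*) AS event_count
FROM AMPLITUDE_SHARE.SCHEMA_12345.EVENTS_12345
WHERE TO_DATE(EVENT_TIME) BETWEEN '2024-07-01' AND '2024-07-07'
  AND EVENT_TYPE = 'sign_up'   -- hypothetical event name
GROUP BY EVENT_TYPE;
```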

Merged User table

Merged User table schema

The Merged User table schema contains the following:

  • amplitude_id
  • merge_event_time
  • merge_server_time
  • merged_amplitude_id

For more information, see the Merged User table schema section of the Snowflake Export documentation.

Merged User table clustering

Amplitude clusters the merged IDs table by DATE_TRUNC('HOUR', MERGE_SERVER_TIME). This optimizes queries that filter by when user merges occurred. Data Share provides read-only access to an Amplitude-owned table, so you can't modify the clustering keys. For custom clustering to optimize different query patterns, use Snowflake Export instead, which gives you full ownership and control over the table.
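
A common use of this table is resolving events to a canonical user. The sketch below assumes merged_amplitude_id records an ID that was merged into amplitude_id; confirm the merge direction against the Snowflake Export schema documentation before relying on it:

```sql
-- Map each event's amplitude_id to its canonical merged ID, falling back
-- to the original ID when no merge exists. Placeholder names as above.
SELECT
    COALESCE(m.AMPLITUDE_ID, e.AMPLITUDE_ID) AS canonical_amplitude_id,
    COUNT(*)                                 AS event_count
FROM AMPLITUDE_SHARE.SCHEMA_12345.EVENTS_12345 AS e
LEFT JOIN AMPLITUDE_SHARE.SCHEMA_12345.MERGE_IDS_12345 AS m
    ON e.AMPLITUDE_ID = m.MERGED_AMPLITUDE_ID
WHERE TO_DATE(e.EVENT_TIME) = '2024-07-01'
GROUP BY 1;
```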
