Google Cloud Storage

Amplitude can export event data and merged user data to your Google Cloud Storage (GCS) account, and Google Cloud's bucket policies let you manage and programmatically access the exported data in your bucket. Using the Amplitude UI, you can set up recurring syncs as often as once per hour.

Create a GCS service account and set permissions

If you haven't already, create a service account for Amplitude within the Google Cloud console. This allows Amplitude to export your data to your Google Cloud project.

After you create the service account, generate and download a service account key file; you upload this file to Amplitude later in the setup flow. Make sure you export the key in JSON format.

Add this service account as a member to the bucket you'd like to export data to, and give it the Storage Admin role (roles/storage.admin) so Amplitude has the permissions it needs to export data to your bucket.
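
If you prefer to apply the grant programmatically rather than through the console, the following is a minimal sketch using the google-cloud-storage Python client; the bucket name and service account email are hypothetical placeholders.

# A sketch of granting Storage Admin on the export bucket, assuming the
# google-cloud-storage Python client. The bucket name and service
# account email below are hypothetical.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-amplitude-exports")

# Fetch the bucket's IAM policy, append a binding for the Amplitude
# service account, and write the policy back.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.admin",
    "members": {"serviceAccount:amplitude-export@my-project.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)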

You can also create your own role, if you prefer.

Keep in mind that the export process requires, at a minimum, the following permissions (a verification sketch follows the list):

  • storage.buckets.get
  • storage.objects.get
  • storage.objects.create
  • storage.objects.delete
  • storage.objects.list
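
If you create a custom role, you can sanity-check that the downloaded key actually carries these permissions before uploading it to Amplitude. The sketch below assumes the google-cloud-storage Python client; the key file path and bucket name are hypothetical.

# A sketch that verifies the minimum permissions listed above, assuming
# the google-cloud-storage Python client authenticated as the Amplitude
# service account (key path and bucket name are hypothetical).
from google.cloud import storage

REQUIRED = [
    "storage.buckets.get",
    "storage.objects.get",
    "storage.objects.create",
    "storage.objects.delete",
    "storage.objects.list",
]

client = storage.Client.from_service_account_json("amplitude-key.json")
bucket = client.bucket("my-amplitude-exports")

# test_iam_permissions returns the subset of the requested permissions
# that the caller actually holds on the bucket.
granted = bucket.test_iam_permissions(REQUIRED)
missing = set(REQUIRED) - set(granted)
print("Missing permissions:", missing or "none")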

Set up a recurring data export to GCS

To set up a recurring export of your Amplitude data to GCS, follow these steps:

Note

You need admin privileges in Amplitude, as well as a role that allows you to enable resources in GCS.

  1. In Amplitude Data, click Catalog and select the Destinations tab.
  2. In the Warehouse Destination section, click Google Cloud Storage.
  3. On the Getting Started tab, select the data you'd like to export. You can Export events ingested today and moving forward, Export all merged Amplitude IDs, or both. For events, you can also specify filtering conditions to only export events that meet certain criteria.

Note

You can export these two different data types to separate buckets. Complete the setup flow twice: once for each data type.

  4. Review the Event table and Merge IDs table schemas and click Next.
  5. In the Google Cloud Credentials For Amplitude section, upload the service account key file. This file must be in JSON format.
  6. After the service account key is uploaded, fill out the Google Cloud bucket details in the Google Cloud Bucket Details section.
  7. Click Next. Amplitude attempts a test upload to verify that the credentials work. If the upload succeeds, click Finish to complete the GCS destination configuration and activation.

All future events and merged users are automatically sent to GCS. Amplitude exports files to your GCS account every hour.
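
To confirm that hourly files are arriving, you can list the bucket by prefix. Here is a minimal sketch, assuming the google-cloud-storage Python client; the bucket name and project ID are hypothetical, and the file-naming scheme is described under Exported data format below.

# A sketch that lists one day's exported files for a project, assuming
# the google-cloud-storage Python client (bucket name and project ID
# are hypothetical).
from google.cloud import storage

client = storage.Client()

# Exported file names start with projectID_yyyy-MM-dd, so a prefix
# listing returns every hourly partition for that day.
for blob in client.list_blobs("my-amplitude-exports", prefix="187520_2020-01-25"):
    print(blob.name, blob.size)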

Run a manual export

You can backfill historical data to GCS by manually exporting data.

  1. Go to the Google Cloud Storage export connection page created in the section above.
  2. Go to the Backfills tab.
  3. Select the desired date range.
  4. Click Start Backfill.

If the backfill range overlaps with the range of previously exported data, Amplitude de-duplicates the overlapping data.

Exported data format

Raw event file and data format

Data is exported hourly as gzip-compressed JSON files, partitioned by the hour, with one or more files per hour. Each file contains one event JSON object per line.

File names have the following syntax, where the time represents when the data was uploaded to Amplitude servers, in UTC (that is, server_upload_time):

projectID_yyyy-MM-dd_H#partitionInteger.json.gz

For example, the first partition of data uploaded to project 187520 on January 25, 2020, between 5 PM and 6 PM UTC, is in the file:

187520_2020-01-25_17#1.json.gz
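
To read one of these files, download it, decompress it with gzip, and parse each line as a JSON object. Here is a minimal sketch, assuming the google-cloud-storage Python client and a hypothetical bucket name.

# A sketch that downloads and parses one hourly partition, assuming the
# google-cloud-storage Python client (bucket name is hypothetical).
import gzip
import json
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-amplitude-exports")

# Download the gzip-compressed file for one hourly partition.
raw = bucket.blob("187520_2020-01-25_17#1.json.gz").download_as_bytes()

# Each decompressed line is one event JSON object (schema below).
for line in gzip.decompress(raw).decode("utf-8").splitlines():
    event = json.loads(line)
    print(event["event_type"], event["event_time"])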

Here is the exported data JSON object schema:

{
  "server_received_time": UTC ISO-8601 timestamp,
  "app": int,
  "device_carrier": string,
  "$schema": int,
  "city": string,
  "user_id": string,
  "uuid": UUID,
  "event_time": UTC ISO-8601 timestamp,
  "platform": string,
  "os_version": string,
  "amplitude_id": long,
  "processed_time": UTC ISO-8601 timestamp,
  "version_name": string,
  "ip_address": string,
  "paying": boolean,
  "dma": string,
  "group_properties": dict,
  "user_properties": dict,
  "client_upload_time": UTC ISO-8601 timestamp,
  "$insert_id": string,
  "event_type": string,
  "library": string,
  "amplitude_attribution_ids": string,
  "device_type": string,
  "device_manufacturer": string,
  "start_version": string,
  "location_lng": float,
  "server_upload_time": UTC ISO-8601 timestamp,
  "event_id": int,
  "location_lat": float,
  "os_name": string,
  "amplitude_event_type": string,
  "device_brand": string,
  "groups": dict,
  "event_properties": dict,
  "data": dict,
  "device_id": string,
  "language": string,
  "device_model": string,
  "country": string,
  "region": string,
  "is_attribution_event": boolean,
  "adid": string,
  "session_id": long,
  "device_family": string,
  "sample_rate": null,
  "idfa": string,
  "client_event_time": UTC ISO-8601 timestamp
}

Merged Amplitude IDs file and data format

Data is exported hourly as gzip-compressed JSON files. Each file contains one merged Amplitude ID JSON object per line.

File names have the following syntax, where the time represents when the data was uploaded to Amplitude servers, in UTC (that is, server_upload_time):

-OrgID_yyyy-MM-dd_H.json.gz

For example, data uploaded to the organization with ID 189524 on January 25, 2020, between 5 PM and 6 PM UTC, is in the file:

-189524_2020-01-25_17.json.gz

Merged ID JSON objects have the following schema:

{
  "scope": int,
  "merge_time": long,
  "merge_server_time": long,
  "amplitude_id": long,
  "merged_amplitude_id": long
}
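
A common way to use these records is to fold them into a lookup table that resolves any Amplitude ID to its surviving ID. The sketch below assumes a locally downloaded file (the path is hypothetical) and reads each record as mapping amplitude_id to the merged_amplitude_id it was merged into; verify that direction against your own data.

# A sketch that builds an ID-resolution table from a merged-ID file,
# assuming a locally downloaded file (path is hypothetical) and that
# each record maps amplitude_id to the merged_amplitude_id it was
# merged into.
import gzip
import json

merge_map = {}

with gzip.open("-189524_2020-01-25_17.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        merge_map[record["amplitude_id"]] = record["merged_amplitude_id"]

def resolve(amplitude_id):
    # Follow merge links until reaching an ID with no further merges.
    while amplitude_id in merge_map:
        amplitude_id = merge_map[amplitude_id]
    return amplitude_id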