This article helps you:
See your experimental results
Understand and interpret those results
You’ve designed your experiment, rolled it out to your users, and given them enough time to interact with your new variants. Now it’s time to check if your hypothesis was correct.
In the Analysis card, you’ll be able to tell at a glance whether your experiment has yielded statistically significant results, as well as what those results actually are. Amplitude Experiment takes the information you gave it during the design and rollout phases and plugs it in for you automatically, so there’s no repetition of effort. It breaks the results out by variant, and provides you with a convenient, detailed tabular breakdown.
Amplitude doesn't generate p-values or confidence intervals for experiments using binary metrics (for example, unique conversions) until each variant has 100 users and 25 conversions. Experiments using non-binary metrics need only to reach 100 users per variant.
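These thresholds can be expressed as a simple check. The helper below is a hypothetical illustration of the rule described above, not part of any Amplitude SDK:

```python
def has_enough_data(users, conversions=None, binary=True):
    """Illustrative check of the minimum-data thresholds described above:
    100 users per variant, plus 25 conversions for binary metrics.
    Hypothetical helper; not an Amplitude API."""
    if binary:
        return users >= 100 and (conversions or 0) >= 25
    return users >= 100

has_enough_data(150, 30)            # binary metric, enough data: True
has_enough_data(150, 10)            # too few conversions: False
has_enough_data(150, binary=False)  # non-binary metric: True
```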
On the Filter card, set criteria that updates the analysis on the page. Filter your experiment results with the following:
The date filter defaults to your experiment's start and end date. Adjust the range to scope experiment results to those specific dates.
The segment filter enables you to select predefined segments, or create one ad-hoc. Predefined segments include:
The Exclude users who variant jumped and Exclude testers and variant jumpers segments are available on experiment types other than multi-armed bandit.
These segments update in real-time.
Click +Create Segment to open the Segment builder, where you can define a new segment on the fly. Segments you create in one experiment are available across all other experiments, and appear in the All Saved Segments category.
Filter your experiment results based on user properties. For example, create a filter that excludes users from a specific country or geographic region, or users that have a specific account type on your platform.
When you expand a category, or click Guide, the Data Quality Guide opens in a side panel where you can address or dismiss issues.
The Summary card describes your experiment's hypothesis and lets you know if it's reached statistical significance.
The Summary card displays a badge labeled Significant if the experiment reached statistical significance, and a badge labeled Not Significant if it didn't. This card can display several badges at once.
At the top of the Analysis card is an overview that explains how your experiment performed, broken down by metric and variant. Below that, a collection of experiment results charts, which you can analyze by metric, displays information about:
For more information, review Dig deeper into experimentation data with Experiment Results.
Click Open in Chart to open a copy of the Experiment Results in a new chart.
If needed, adjust the experiment’s confidence level. The default is 95%. You can also choose between a sequential test and a T-test.
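To build intuition for what a fixed-horizon test at a 95% confidence level does, here is a minimal sketch using a large-sample two-proportion z-test on hypothetical conversion counts. This is only an approximation for illustration; it is not Amplitude's actual computation, and a sequential test behaves differently:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.
    Large-sample approximation, for illustration only."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: control 120/1000 vs. treatment 165/1000
z, p = two_proportion_z_test(120, 1000, 165, 1000)
significant = p < 0.05  # 95% confidence level
```

Lowering the confidence level (say, to 90%) loosens the `p < 0.05` threshold to `p < 0.10`, making significance easier to reach at the cost of more false positives.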
Oftentimes, you run an experiment and want to know if the experiment affected different users differently. In other words, whether there are heterogeneous treatment effects or not. One way to do this is to filter for Platform = iOS, then Platform = Android, then Platform = Web. Grouping results by Platform achieves the same result, with fewer clicks. For more information, review Group-bys: How Amplitude prunes and orders chart results.
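Conceptually, a group-by computes the same per-platform breakdown you would get from three separate filters in one pass. The sketch below illustrates the idea on hypothetical exported result rows; the data and field names are made up for the example:

```python
from collections import defaultdict

# Hypothetical exported rows: (platform, converted)
rows = [
    ("iOS", True), ("iOS", False),
    ("Android", True), ("Android", True),
    ("Web", False), ("Web", True),
]

totals = defaultdict(lambda: [0, 0])  # platform -> [conversions, users]
for platform, converted in rows:
    totals[platform][0] += int(converted)
    totals[platform][1] += 1

# Conversion rate per platform, computed in a single grouped pass
rates = {p: conv / n for p, (conv, n) in totals.items()}
# e.g. {'iOS': 0.5, 'Android': 1.0, 'Web': 0.5}
```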
The Diagnostics card provides information about how your experiment is delivering. It shows charts about:
For more control, open any of these charts in the chart builder.
You can receive notifications about your experiments, sent either to a dedicated Slack channel or to a unique webhook. Go to Integrate Slack and then Experiment Notifications to set up these notification alerts.
You can set up a notification for the following events:
Amplitude Experiment sends a notification to the editors of the experiment.
It’s important to remember that no experiment is a failure. Even if you didn’t get the results you were hoping for, you can still learn something from the process—even if your test didn’t reach stat sig. Use your results as a springboard to asking hard questions about the changes you made, the outcomes you saw, what your customers expect from your product, and how you can deliver that.
In general, the next step should be deciding whether to conduct another experiment that supports your hypothesis to gather more evidence, or to go ahead and implement the variant that delivered the best results. You can also export your experiment to the Experiment Analysis in Amplitude Analytics and conduct a deeper dive, where you can segment your users and hopefully generate more useful insights.
April 30th, 2024