This guide builds on the feature flags and analytics guides and shows you how to:
Create an experiment
Analyze its results
Ship a variant
Prerequisites
Create an experiment
Go to the Flags view in the dashboard and select a flag from the left sidebar.

Scroll to the bottom of the flag's targeting and click + Experiment. In the dropdown, select New experiment.

Enter a name for your experiment and click Create.

Click Insert, toggle the Test variant on, and click Save.

Now the flag will be enabled for 50% of users who enter the experiment and disabled for the other 50%.
Note that since the experiment is in the Default block, any targeting rules above it still apply. Users who match those rules exit the flag logic before they can enter the experiment — this ensures they don't contaminate your experiment data with incorrect exposures.
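Under the hood, a 50/50 split like this is typically made "sticky" by deterministically hashing each user's ID, so the same user always lands in the same group across sessions. A minimal sketch of the idea (illustrative only, not Hypertune's actual implementation):

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into Control or Test (50/50)."""
    # Hash the user ID together with the experiment name so the same user
    # gets an independent assignment in each experiment they enter.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "Test" if bucket < 50 else "Control"
```

Because the assignment is a pure function of the user ID, no per-user state needs to be stored to keep exposures consistent.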
Hypertune experiments are inserted within feature flags so all rollout logic lives in one place. If you managed experiments in a separate flag, you'd need to:
Call both the main flag and the experiment flag from your code
Ensure the experiment flag is only called if the main flag is enabled
This adds complexity and risks logging incorrect exposures.
Hypertune also provides granular, real-time evaluation counts on your flag targeting, so you can visually confirm that users are matching rules, entering experiments, and being assigned variants correctly — all in a single view.

Analyze experiment results with a funnel
Go to the Analytics view in the dashboard and click the + button in the top-right of the sidebar. Select Funnel, enter a name, and click Create.

Click + Add funnel step, choose Exposure, select your experiment in the dropdown, and click Add.

You'll see the total number of users exposed to the Control and Test groups during the selected time range.

Click the + button to the right of the first step, choose Event, select your conversion event in the dropdown, and click Add.

Now you'll see, for each experiment group:
Total number of users exposed during the selected time range
Number of those users who completed the conversion event during the selected time range
Conversion rate
Uplift vs. the Control group, with a confidence interval
Statistical significance
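To make these numbers concrete, here is a self-contained sketch of the standard frequentist math behind a two-group funnel: conversion rates, relative uplift, a 95% confidence interval on the absolute difference, and one- and two-sided p-values from a two-proportion z-test. (Hypertune additionally applies the sequential-testing and multiple-comparison adjustments described below, which this sketch omits.)

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def funnel_stats(control_exposed: int, control_converted: int,
                 test_exposed: int, test_converted: int) -> dict:
    """Conversion rates, uplift, 95% CI, and p-values for Test vs Control."""
    p_c = control_converted / control_exposed
    p_t = test_converted / test_exposed
    diff = p_t - p_c
    # Standard error of the difference between two independent proportions.
    se = math.sqrt(p_c * (1 - p_c) / control_exposed
                   + p_t * (1 - p_t) / test_exposed)
    z = diff / se
    return {
        "control_rate": p_c,
        "test_rate": p_t,
        "uplift": diff / p_c,  # relative uplift vs Control
        "ci_95": (diff - 1.96 * se, diff + 1.96 * se),  # on the absolute difference
        "p_one_sided": 1 - normal_cdf(z),  # tests for an uplift only
        "p_two_sided": 2 * (1 - normal_cdf(abs(z))),  # also detects a drop
    }
```

For example, with 1,000 users per group and 100 vs 130 conversions, this reports a 30% relative uplift with a two-sided p-value below 0.05.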

Click Save so you can revisit this funnel and share it with your team.
Customize analysis
In the top bar, you can choose a confidence level for the frequentist analysis, or choose a Bayesian analysis which shows you the probability of each variant being the best.
Both methods automatically apply adjustments for:
Sequential testing — avoids the peeking problem, enabling you to view results at any time and ship a variant if it has a significant result
Multiple comparisons — adjusts for the family-wise error rate, enabling you to compare more than two variants in a multi-arm test to determine an overall winner
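For intuition on the Bayesian view, the "probability of each variant being the best" can be estimated by drawing samples from a Beta posterior per group and counting how often each group wins — a standard approach, sketched below. This is illustrative only, not Hypertune's exact computation, and it omits the adjustments described above.

```python
import random

def prob_best(groups: dict, samples: int = 20_000, seed: int = 0) -> dict:
    """Estimate P(group has the highest true conversion rate).

    `groups` maps group name -> (users exposed, users converted).
    With a uniform Beta(1, 1) prior, each group's posterior is
    Beta(1 + conversions, 1 + non-conversions).
    """
    rng = random.Random(seed)
    wins = dict.fromkeys(groups, 0)
    for _ in range(samples):
        # Draw one plausible conversion rate per group from its posterior.
        draws = {
            name: rng.betavariate(1 + converted, 1 + exposed - converted)
            for name, (exposed, converted) in groups.items()
        }
        wins[max(draws, key=draws.get)] += 1
    return {name: count / samples for name, count in wins.items()}
```

The returned probabilities sum to 1 across groups, which makes them easy to read directly: a variant at 0.95 is very likely the true winner.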

Often you only care about whether an experiment group is better than the Control group with statistical significance, i.e. a conversion uplift. However, if you also want to test for a conversion drop, you can switch from a One-sided analysis to a Two-sided analysis.

By default, the funnel shows results using data from the last 30 days, but you can change the time range.

Analyze experiment results with an impact analysis
Go to the Analytics view in the dashboard and click the + button in the top-right of the sidebar. Select Impact analysis, enter a name, and click Create.

Click + Experiment, select your experiment from the dropdown, and click Add.

Click + Metric, select your conversion event in the dropdown, and click Add.

Repeat for any other events you want to see the impact on, and optionally add filters to define more specific metrics.
For each experiment group, you'll see:
Total number of users exposed during the selected time range
For each conversion event:
Number of those users who completed the conversion event during the selected time range
Conversion rate
Uplift vs. the Control group, with a confidence interval
Statistical significance
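The per-metric computation is the same rate-and-uplift math as in a funnel, applied once per event. A rough sketch (the data shapes here are illustrative, not an API):

```python
def impact_analysis(exposed: dict, conversions_by_metric: dict) -> dict:
    """Conversion rate and relative uplift vs Control, per metric per group.

    `exposed` maps group name -> users exposed; `conversions_by_metric` maps
    metric name -> {group name: users who completed that event}.
    """
    results = {}
    for metric, conversions in conversions_by_metric.items():
        rates = {group: conversions[group] / exposed[group] for group in exposed}
        control_rate = rates["Control"]
        results[metric] = {
            group: {"rate": rate, "uplift": (rate - control_rate) / control_rate}
            for group, rate in rates.items()
        }
    return results
```

Laying the metrics out side by side like this makes it easy to spot an experiment that improves one metric while hurting another.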

This lets you see the impact of your experiment on multiple metrics in a single view.
Ship a variant
Once you're confident in the results, go to the Flags view and select the flag with the experiment. Scroll to the Experiment expression in the flag targeting. Click the options button (⋯) next to the variant you want to ship and select Ship variant.

The Experiment expression will be replaced with that variant. Click Save.

Next steps
Create an A/B/n test with more than two variants.
Run a multivariate test to explore combinations across features.
Set up an AI loop to automatically learn and shift traffic to the best variant for each unique user.
Add more steps to your funnel or configure each step with filters, breakdowns, segments, derived fields, and aggregations.