Guide

This guide builds on the feature flags and analytics guides and shows you how to:

  • Create an experiment

  • Analyze its results

  • Ship a variant

Prerequisites

Create an experiment

Go to the Flags view in the dashboard and select a flag from the left sidebar.

Scroll to the bottom of the flag's targeting and click + Experiment. In the dropdown, select New experiment.

Enter a name for your experiment and click Create.

Click Insert, toggle the Test variant on, and click Save.

Now the flag will be enabled for 50% of users who enter the experiment and disabled for the other 50%.

Note that since the experiment is in the Default block, any targeting rules above it still apply. Users who match those rules exit the flag logic before they can enter the experiment — this ensures they don't contaminate your experiment data with incorrect exposures.
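Splits like this are typically implemented with deterministic hashing, so a given user always lands in the same group on every evaluation. A minimal sketch of the general technique — not Hypertune's actual implementation; `assign_variant` is a hypothetical helper:

```python
import hashlib

def assign_variant(user_id: str, experiment_id: str) -> str:
    """Deterministically bucket a user into Control or Test (50/50).

    Hashing the experiment ID together with the user ID keeps the
    assignment stable for a user within one experiment while remaining
    independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "test" if bucket < 0.5 else "control"
```

Because the hash is deterministic, re-evaluating the flag never flips a user between groups mid-experiment.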

Hypertune experiments are inserted within feature flags so all rollout logic lives in one place. If you managed experiments in a separate flag, you'd need to:

  • Call both the main flag and the experiment flag in your code

  • Ensure the experiment flag is only called if the main flag is enabled

This adds complexity and risks logging incorrect exposures.

Hypertune also provides granular, real-time evaluation counts on your flag targeting, so you can visually confirm that users are matching rules, entering experiments, and being assigned variants correctly — all in a single view.

Analyze experiment results

Go to the Analytics view in the dashboard and click the + button in the top-right of the sidebar. Select Funnel, enter a name, and click Create.

Click + Add funnel step, choose Exposure, select your experiment in the dropdown, and click Add.

Now you'll see the total number of users exposed to the Control and Test groups.

Click the + button to the right of the first step, choose Event, select your conversion event in the dropdown, and click Add.

Now you'll see, for each experiment group:

  • Total number of users exposed

  • Number of users who completed the conversion event

  • Conversion rate

  • Uplift vs. the Control group, with a confidence interval

  • Statistical significance
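These metrics can all be derived from the raw exposure and conversion counts. A rough sketch of the underlying arithmetic, using a standard pooled two-proportion z-test — Hypertune's exact methodology may differ:

```python
from math import sqrt, erf

def conversion_stats(control_exposed: int, control_conversions: int,
                     test_exposed: int, test_conversions: int):
    """Compute conversion rates, uplift, and a one-sided p-value
    for 'Test converts better than Control'."""
    p_c = control_conversions / control_exposed  # Control conversion rate
    p_t = test_conversions / test_exposed        # Test conversion rate
    uplift = (p_t - p_c) / p_c                   # relative uplift vs. Control

    # Pooled standard error for the difference in proportions
    p_pool = (control_conversions + test_conversions) / (control_exposed + test_exposed)
    se = sqrt(p_pool * (1 - p_pool) * (1 / control_exposed + 1 / test_exposed))
    z = (p_t - p_c) / se

    # One-sided p-value via the standard normal CDF
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return p_c, p_t, uplift, z, p_value
```

For example, 100 conversions out of 1,000 Control exposures vs. 130 out of 1,000 Test exposures gives a 30% uplift that is significant at the 95% confidence level.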

Click Save so you can revisit this funnel and share it with your team.

Customize analysis

In the top bar, you can switch the confidence level, or view Bayesian probabilities of each group being better than the Control group.

Typically you only care about whether an experiment group is better than the Control group with statistical significance, i.e. a conversion uplift. However, if you also want to test for a conversion drop, you can switch from a One-sided analysis to a Two-sided analysis.
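The difference between the two analyses comes down to how the p-value is computed from the same test statistic. A minimal illustration with standard formulas:

```python
from math import sqrt, erf

def p_values(z: float) -> tuple[float, float]:
    """One-sided p-value tests only for an uplift; two-sided also
    detects a drop, at the cost of needing stronger evidence."""
    one_sided = 1 - 0.5 * (1 + erf(z / sqrt(2)))   # P(uplift this large by chance)
    two_sided = 2 * min(one_sided, 1 - one_sided)  # P(difference this large, either direction)
    return one_sided, two_sided
```

A result that is just significant two-sided at 95% (z ≈ 1.96) is comfortably significant one-sided, which is why one-sided is the default when you only care about uplift.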

By default, the funnel shows results using data from the last 30 days, but you can change the time range.

Ship a variant

Once you're confident in the results, go to the Flags view and select the flag with the experiment. Scroll to the Experiment expression in the flag targeting. Click the options button (⋯) next to the variant you want to ship and select Ship variant.

The Experiment expression will be replaced with that variant. Click Save.

Next steps

  • Create an A/B/n test with more than two variants.

  • Run a multivariate test to explore combinations across features.

  • Set up an AI loop to automatically learn and shift traffic to the best variant for each unique user.

  • Enhance your funnel with filters, breakdowns, segments, derived fields, and aggregations.
