# Guide

This guide builds on the feature flags and analytics guides and shows you how to:

* Create an experiment
* Analyze its results
* Ship a variant

## Prerequisites

* [Set up Hypertune](https://docs.hypertune.com/getting-started/set-up-hypertune)
* [Feature flags guide](https://docs.hypertune.com/feature-flags/guide)
* [Analytics guide](https://docs.hypertune.com/analytics/guide)

## Create an experiment

Go to the **Flags** view in the dashboard and select a flag from the left sidebar.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FPjqjg1XLLCtQEAV1Vlxk%2Flocalhost_3000_projects_6715_main_draft_logic_setup%3D0%26selected_field_path%3Droot%26selected_split%3DR2DEaro0ebRNYp89vl0ES%20(6).png?alt=media&#x26;token=96b3c1b9-9f19-4d0c-ae40-48ce8b44dbe1" alt=""><figcaption></figcaption></figure>

Scroll to the bottom of the flag's targeting and click **+ Experiment**. In the dropdown, select **New experiment**.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FoczMCYwINUEVetckTF17%2Flocalhost_3000_projects_6715_main_draft_logic_setup%3D0%26selected_field_path%3Droot%26selected_split%3DR2DEaro0ebRNYp89vl0ES%20(7).png?alt=media&#x26;token=7a13907a-6948-402e-a00f-a4d5bbc76db1" alt=""><figcaption></figcaption></figure>

Enter a name for your experiment and click **Create**.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FdInC5oKrWqWg35HnQLbC%2Flocalhost_3000_projects_6715_main_draft_logic_setup%3D0%26selected_field_path%3Droot%253EanotherFlag%26selected_split%3DTfFtM3pWjRj3vycOFCRK2.png?alt=media&#x26;token=478e5a11-7b64-46cd-9ae9-391ef33bf48c" alt=""><figcaption></figcaption></figure>

Click **Insert**, toggle the **Test** variant on, and click **Save**.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FmcXF2j2OAEVs7bIDKTmB%2Flocalhost_3000_projects_6715_main_draft_logic_setup%3D0%26selected_field_path%3Droot%253EanotherFlag%26selected_split%3DTfFtM3pWjRj3vycOFCRK2%20(4).png?alt=media&#x26;token=ac43c173-76cf-488a-8279-0ec52a00e442" alt=""><figcaption></figcaption></figure>

Now the flag will be enabled for 50% of users who enter the experiment and disabled for the other 50%.
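Variant assignment like this is deterministic: hashing a stable user ID keeps each user in the same group on every evaluation and across sessions. Hypertune's actual assignment algorithm is internal to its SDK, so the sketch below only illustrates the general hash-based bucketing technique, with made-up helper names:

```typescript
// Illustrative only: not the Hypertune SDK. This shows deterministic
// hash-based bucketing, the standard way to give each user a stable variant.

// FNV-1a hash over a string (hypothetical helper, not a Hypertune API).
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0; // force unsigned 32-bit
}

// Assign a user to "control" or "test" with a stable 50/50 split.
// Hashing the experiment ID together with the user ID means different
// experiments bucket the same user independently.
function assignVariant(
  experimentId: string,
  userId: string
): "control" | "test" {
  const bucket = fnv1a(`${experimentId}:${userId}`) % 100;
  return bucket < 50 ? "test" : "control";
}

console.log(assignVariant("exp_1", "user_42"));
```

Because the assignment depends only on the inputs, calling it twice for the same user always yields the same variant.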

Note that since the experiment is in the **Default** block, any targeting rules above it still apply. Users who match those rules exit the flag logic before they can enter the experiment — this ensures they don't contaminate your experiment data with incorrect exposures.

Hypertune experiments are inserted within feature flags so all rollout logic lives in one place. If you managed experiments in a separate flag, you'd need to:

* Call both the main flag and the experiment flag from your code
* Ensure the experiment flag is only called if the main flag is enabled

This adds complexity and risks logging incorrect exposures.
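To make that risk concrete, here is a sketch of the separate-flag approach using hypothetical helper names (not the Hypertune SDK). Correct exposure logging depends on every call site remembering the guard:

```typescript
// Hypothetical helpers, not the Hypertune SDK: this sketches why a separate
// experiment flag makes correct exposure logging the caller's responsibility.

const exposures: string[] = [];

function logExposure(userId: string) {
  exposures.push(userId);
}

// Separate-flag approach: if a call site forgets the guard below, users who
// never see the feature still get logged as exposed, contaminating results.
function evaluateSeparateFlags(
  userId: string,
  mainFlag: boolean,
  experimentFlag: boolean
): boolean {
  if (!mainFlag) return false; // removing this guard would log bogus exposures
  logExposure(userId);
  return experimentFlag;
}
```

With the experiment nested inside the flag, there is no guard to forget: a single evaluation returns the final decision and the exposure is logged only when the experiment is actually reached.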

Hypertune also provides granular, real-time evaluation counts on your flag targeting, so you can visually confirm that users are matching rules, entering experiments, and being assigned variants correctly — all in a single view.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FLA7SxXe2ulA0d0MSMbTW%2Flocalhost_3000_projects_6715_main_draft_logic_setup%3D0%26selected_field_path%3Droot%253EanotherFlag%26selected_split%3DVmA8aYk24rPZCNPD9iuVD.png?alt=media&#x26;token=67e0f4f8-16f7-4c0f-83af-1c4f47ad2a31" alt=""><figcaption></figcaption></figure>

## Analyze experiment results with a funnel

Go to the **Analytics** view in the dashboard and click the **+** button in the top-right of the sidebar. Select **Funnel**, enter a name, and click **Create**.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FGOrSpF0e4sCdo385kMtu%2Flocalhost_3000_projects_6715_main_draft_analytics_setup%3D0%26selected_field_path%3Droot%253E--view--all%26selected_split%3DVmA8aYk24rPZCNPD9iuVD%26analytics_from%3D2025-09-08T23%253A00%253A00.000Z%26analytics_to%3D2025-10-08T22%253A59%253A59.999Z%26analytics_view_id%3D.png?alt=media&#x26;token=ed8b429f-11d7-499a-af4a-bb528121a7ff" alt=""><figcaption></figcaption></figure>

Click **+ Add funnel step**, choose **Exposure**, select your experiment in the dropdown, and click **Add**.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FTLkPUgl1UU7Zjvrcj3Kz%2Flocalhost_3000_projects_6715_main_draft_analytics_setup%3D0%26selected_field_path%3Droot%253E--view--all%26selected_split%3DVmA8aYk24rPZCNPD9iuVD%26analytics_from%3D2025-09-08T23%253A00%253A00.000Z%26analytics_to%3D2025-10-08T22%253A59%253A59.999Z%26analytics_view_id%3D47.png?alt=media&#x26;token=e8997a2e-acf6-425d-891e-fb647277c7b6" alt=""><figcaption></figcaption></figure>

You'll see the total number of users exposed to the Control and Test groups during the selected time range.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2Frvotiu6SeRNW9hjv1E9j%2Flocalhost_3000_projects_6715_main_draft_analytics_setup%3D0%26selected_field_path%3Droot%253E--view--all%26selected_split%3DVmA8aYk24rPZCNPD9iuVD%26analytics_from%3D2025-09-08T23%253A00%253A00.000Z%26analytics_to%3D2025-10-08T22%253A59%253A59.999Z%26analytics_view_i%20(1).png?alt=media&#x26;token=d13d6d6e-1d10-4891-9fbf-499df31d2d98" alt=""><figcaption></figcaption></figure>

Click the **+** button to the right of the first step, choose **Event**, select your conversion event in the dropdown, and click **Add**.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FitR9IAIt5TfIN8fKk7p7%2Flocalhost_3000_projects_6715_main_draft_analytics_setup%3D0%26selected_field_path%3Droot%253E--view--all%26selected_split%3DVmA8aYk24rPZCNPD9iuVD%26analytics_from%3D2025-09-08T23%253A00%253A00.000Z%26analytics_to%3D2025-10-08T22%253A59%253A59.999Z%26analytics_view_i%20(2).png?alt=media&#x26;token=a2807086-e784-4f20-a411-a370be850d0a" alt=""><figcaption></figcaption></figure>

Now you'll see, for each experiment group:

* Total number of users exposed during the selected time range
* Number of those users who completed the conversion event during the selected time range
* Conversion rate
* Uplift vs. the Control group, with a confidence interval
* Statistical significance

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FfMf7XgYyLsenwrUa06Ll%2Flocalhost_3000_projects_6715_main_draft_analytics_setup%3D0%26selected_field_path%3Droot%253E--view--all%26selected_split%3DVmA8aYk24rPZCNPD9iuVD%26analytics_from%3D2025-09-08T23%253A00%253A00.000Z%26analytics_to%3D2025-10-08T22%253A59%253A59.999Z%26analytics_view_i%20(3).png?alt=media&#x26;token=51c65d89-2748-4054-a724-e3c0ed09db67" alt=""><figcaption></figcaption></figure>
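If you want to sanity-check the first three numbers yourself, the unadjusted math is straightforward. This sketch shows the conversion rate, relative uplift, and the classic two-proportion z-statistic; the significance Hypertune displays additionally applies the corrections covered under Customize analysis, which this example deliberately omits:

```typescript
// A minimal, unadjusted sketch of the funnel math. Hypertune's displayed
// results also apply sequential-testing and multiple-comparison corrections.

type Group = { exposed: number; converted: number };

function conversionRate(g: Group): number {
  return g.converted / g.exposed;
}

// Relative uplift of test over control, e.g. 0.25 means +25%.
function uplift(control: Group, test: Group): number {
  return conversionRate(test) / conversionRate(control) - 1;
}

// Two-proportion z-statistic using the pooled standard error.
function zStatistic(control: Group, test: Group): number {
  const p1 = conversionRate(control);
  const p2 = conversionRate(test);
  const pooled =
    (control.converted + test.converted) / (control.exposed + test.exposed);
  const se = Math.sqrt(
    pooled * (1 - pooled) * (1 / control.exposed + 1 / test.exposed)
  );
  return (p2 - p1) / se;
}

const control = { exposed: 1000, converted: 100 }; // 10% conversion
const test = { exposed: 1000, converted: 125 };    // 12.5% conversion
console.log(uplift(control, test));    // ≈ 0.25, i.e. +25%
console.log(zStatistic(control, test));
```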

Click **Save** so you can revisit this funnel and share it with your team.

### Customize analysis

In the top bar, you can choose a confidence level for the frequentist analysis, or choose a Bayesian analysis which shows you the probability of each variant being the best.

Both methods automatically apply adjustments for:

* **Sequential testing** — avoids the **peeking problem**, enabling you to view results at any time and ship a variant if it has a significant result
* **Multiple comparisons** — adjusts for the **family-wise error rate**, enabling you to compare more than two variants (in a multi-arm test), to determine an overall winner

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2Fe5lwGVVqrYWeqX6iKReE%2Flocalhost_3000_projects_6715_main_draft_analytics_setup%3D0%26selected_field_path%3Droot%253E--view--all%26selected_split%3DVmA8aYk24rPZCNPD9iuVD%26analytics_from%3D2025-09-08T23%253A00%253A00.000Z%26analytics_to%3D2025-10-08T22%253A59%253A59.999Z%26analytics_view_i%20(4).png?alt=media&#x26;token=3d5ae7a2-4b05-4c45-96a5-9d7232d47319" alt=""><figcaption></figcaption></figure>
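Hypertune's exact correction methods are internal, but the idea behind family-wise error control can be illustrated with the classic Bonferroni adjustment (shown here purely as an illustration, not necessarily the method Hypertune uses): with k comparisons against Control, each individual test must clear a stricter threshold so the overall chance of a false positive stays at your chosen alpha.

```typescript
// Bonferroni correction, for illustration only: not necessarily the
// adjustment Hypertune applies internally.
function bonferroniThreshold(alpha: number, numComparisons: number): number {
  return alpha / numComparisons;
}

// A three-arm test (two variants, each compared against Control) at 95%
// confidence: each comparison must be significant at the 0.025 level.
const perTestAlpha = bonferroniThreshold(0.05, 2);
console.log(perTestAlpha);
```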

Often you only care about whether an experiment group shows a statistically significant conversion uplift over the Control group. However, if you also want to test for a conversion drop, you can switch from a One-sided analysis to a Two-sided analysis.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FjyKxLHxQayG1cIaLwwd2%2Flocalhost_3000_projects_6715_main_draft_analytics_setup%3D0%26selected_field_path%3Droot%253E--view--all%26selected_split%3DVmA8aYk24rPZCNPD9iuVD%26analytics_from%3D2025-09-08T23%253A00%253A00.000Z%26analytics_to%3D2025-10-08T22%253A59%253A59.999Z%26analytics_view_i%20(5).png?alt=media&#x26;token=9b2d6dbb-105d-499d-8bc1-d38c2c7ca1a1" alt=""><figcaption></figcaption></figure>
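The difference between the two is easy to see in the underlying math (a sketch of the standard z-test, not Hypertune's implementation): for the same z-statistic, a two-sided test roughly doubles the one-sided p-value because it also counts the possibility of a drop of the same size.

```typescript
// Sketch of one-sided vs two-sided p-values from a z-statistic; not
// Hypertune's implementation.

// Standard normal CDF via the Abramowitz–Stegun approximation (accurate
// to about 7 decimal places).
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp((-z * z) / 2);
  const tail =
    d *
    t *
    (0.3193815 +
      t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - tail : tail;
}

// One-sided: probability of an uplift at least this large by chance.
function oneSidedP(z: number): number {
  return 1 - normalCdf(z);
}

// Two-sided: also counts a drop at least this large.
function twoSidedP(z: number): number {
  return 2 * (1 - normalCdf(Math.abs(z)));
}

console.log(oneSidedP(1.96)); // ≈ 0.025
console.log(twoSidedP(1.96)); // ≈ 0.05
```

In other words, the same result that clears a 95% bar in a two-sided analysis clears a 97.5% bar in a one-sided one, which is why a one-sided analysis reaches significance sooner when you only care about uplifts.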

By default, the funnel shows results using data from the last 30 days, but you can change the time range.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FnOlaM8aCE6UGhNxalsus%2Flocalhost_3000_projects_6715_main_draft_analytics_setup%3D0%26selected_field_path%3Droot%253E--view--all%26selected_split%3DVmA8aYk24rPZCNPD9iuVD%26analytics_from%3D2025-09-08T23%253A00%253A00.000Z%26analytics_to%3D2025-10-08T22%253A59%253A59.999Z%26analytics_view_i%20(7).png?alt=media&#x26;token=a29935c1-eb4e-41cf-ac5d-87d8d4490d67" alt=""><figcaption></figcaption></figure>

## Analyze experiment results with an impact analysis

Go to the **Analytics** view in the dashboard and click the **+** button in the top-right of the sidebar. Select **Impact analysis**, enter a name, and click **Create**.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FbXO0PvBIRGQXOI6tdKXY%2Flocalhost_3000_projects_2496_main_draft_analytics%20(1).png?alt=media&#x26;token=4afd9125-201e-4e16-a92a-0b0ad9b37c6f" alt=""><figcaption></figcaption></figure>

Click **+ Experiment**, select your experiment from the dropdown, and click **Add**.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FbAW3EsA4ilzYGeck0Cxw%2Flocalhost_3000_projects_2496_main_draft_analytics%20(2).png?alt=media&#x26;token=9cf1d5a4-7a76-4a5b-be66-46959c59e431" alt=""><figcaption></figcaption></figure>

Click **+ Metric**, select your conversion event in the dropdown, and click **Add**.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2Ff1aZ2diHLphrDajIrvfj%2Flocalhost_3000_projects_2496_main_draft_analytics%20(3).png?alt=media&#x26;token=39ac832f-66e2-477e-bfdf-cfdd76ce4cd1" alt=""><figcaption></figcaption></figure>

Repeat for any other events you want to see the impact on, and optionally add filters to define more specific metrics.

For each experiment group, you'll see:

* Total number of users exposed during the selected time range
* For each conversion event:
  * Number of those users who completed the conversion event during the selected time range
  * Conversion rate
  * Uplift vs. the Control group, with a confidence interval
  * Statistical significance

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2Fr8vxeBIiHV2GVrudpJmE%2Flocalhost_3000_projects_2496_main_draft_analytics%20(6).png?alt=media&#x26;token=542b183d-1d55-441f-85d2-582db356a67f" alt=""><figcaption></figcaption></figure>

This lets you see the impact of your experiment on multiple metrics in a single view.

## Ship a variant

Once you're confident in the results, go to the **Flags** view and select the flag with the experiment. Scroll to the Experiment expression in the flag targeting. Click the options button (⋯) next to the variant you want to ship and select **Ship variant**.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2F0jtJE1fa2qdVuJ4rj1Ss%2Flocalhost_3000_projects_6715_main_draft_logic_setup%3D0%26selected_field_path%3Droot%253EanotherFlag%26selected_split%3DVmA8aYk24rPZCNPD9iuVD%26analytics_from%3D2025-09-08T23%253A00%253A00.000Z%26analytics_to%3D2025-10-08T22%253A59%253A59.999Z%26analytics_view_id%3D476%26se.png?alt=media&#x26;token=93fb4a22-a044-478c-9dcb-1468027c9434" alt=""><figcaption></figcaption></figure>

The Experiment expression will be replaced with that variant. Click **Save**.

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FZK5OYsZJE9i7tNdqc2bC%2Flocalhost_3000_projects_6715_main_draft_logic_setup%3D0%26selected_field_path%3Droot%253EanotherFlag%26selected_split%3DVmA8aYk24rPZCNPD9iuVD%26analytics_from%3D2025-09-08T23%253A00%253A00.000Z%26analytics_to%3D2025-10-08T22%253A59%253A59.999Z%26analytics_view_id%3D47%20(1).png?alt=media&#x26;token=12b2f456-833b-4860-89c5-2c642a5b5d2b" alt=""><figcaption></figcaption></figure>

## Next steps

* Create an [A/B/n test](https://docs.hypertune.com/concepts/a-b-n-tests) with more than two variants.
* Run a [multivariate test](https://docs.hypertune.com/concepts/multivariate-tests) to explore combinations across features.
* Set up an [AI loop](https://docs.hypertune.com/concepts/ai-loops) to automatically learn and shift traffic to the best variant for each unique user.
* Add more steps to your [funnel](https://docs.hypertune.com/concepts/funnels) or configure each step with filters, breakdowns, segments, derived fields, and aggregations.
