Experiments
An experiment is the underlying abstraction that powers A/B/n tests, multivariate tests, progressive rollouts, and AI loops.
You can create experiments in the Hypertune UI and then embed them anywhere in your flag targeting logic.
This means you can have a single feature flag with rules to enable the feature for specific users, e.g. employees, QA, and beta users, plus a final default rule to A/B test the feature on everyone else. Consolidating all of your rollout logic in a single flag helps you avoid rollout and experiment exposure errors that could invalidate your experiment results, and lets you quickly update rollout logic from one place without code deployments.
It also means you can reuse a single A/B test across the logic of different feature flags to roll out and A/B test related features in sync.
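The pattern above, specific-audience rules first and a final default A/B test rule, can be sketched as plain code. This is an illustrative sketch only; the function and field names are assumptions, not the Hypertune API, and the 50/50 split stands in for a real experiment assignment.

```typescript
// Hypothetical sketch of a single flag that consolidates rollout rules
// with a final default A/B test. Names are illustrative, not Hypertune's API.
type User = { id: string; isEmployee: boolean; isBetaUser: boolean };

function abTestArm(unitId: string): "control" | "treatment" {
  // Placeholder deterministic 50/50 split based on a simple string hash.
  let hash = 0;
  for (const ch of unitId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 2 === 0 ? "control" : "treatment";
}

function newFeatureEnabled(user: User): boolean {
  if (user.isEmployee) return true; // rule: employees always see the feature
  if (user.isBetaUser) return true; // rule: beta users always see the feature
  // Final default rule: A/B test everyone else.
  return abTestArm(user.id) === "treatment";
}
```

Because the rules are evaluated top to bottom, updating the rollout means editing this one flag rather than hunting for assignment logic scattered across the codebase.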
Structure
Each experiment has a:
Type
Test — for A/B/n tests, multivariate tests, and progressive rollouts
AI loop — for AI loops
Name
Set of dimensions
Payload event type
Goal function — for AI loops only
Each dimension has a:
Name
Set of arms
Each arm has a:
Name
Traffic percentage — except for AI loops
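The structure listed above can be summarised as a set of types. This is a hypothetical sketch for orientation; the field names are assumptions, not Hypertune's actual schema.

```typescript
// Illustrative types mirroring the experiment structure described above.
// Field names are assumptions, not the Hypertune schema.
type Arm = {
  name: string;
  trafficPercentage?: number; // omitted for AI loops
};

type Dimension = {
  name: string;
  arms: Arm[];
};

type Experiment = {
  type: "Test" | "AI loop";
  name: string;
  dimensions: Dimension[];
  payloadEventType?: string;
  goalFunction?: string; // AI loops only
};
```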
Using experiments in flag targeting logic
Once created, you can embed experiments anywhere in your flag targeting logic with an Experiment expression.
If the experiment has more than one dimension, you can select the one that is relevant to the flag. Then for each arm in the dimension, you can set the flag values, or nest more flag targeting logic.
You can also set the Unit ID for the experiment, typically context.user.id. A hash of the Unit ID is used to determine the arm for Test experiments, so the same user will always end up in the same arm. For AI loop experiments, the best arm for a user may change over time.
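Hash-based assignment like this can be sketched as hashing the Unit ID into a bucket in [0, 100) and walking the arms' cumulative traffic percentages. The hash and bucketing below are a minimal illustration; Hypertune's actual algorithm is not specified here.

```typescript
// Minimal sketch of deterministic arm assignment for a Test experiment.
// The FNV-1a-style hash and the bucketing scheme are illustrative assumptions.
type Arm = { name: string; trafficPercentage: number };

function hashToPercent(unitId: string): number {
  let h = 2166136261;
  for (const ch of unitId) {
    h ^= ch.charCodeAt(0);
    h = Math.imul(h, 16777619);
  }
  return (h >>> 0) % 100; // bucket in [0, 100)
}

function assignArm(unitId: string, arms: Arm[]): string {
  const bucket = hashToPercent(unitId);
  let cumulative = 0;
  for (const arm of arms) {
    cumulative += arm.trafficPercentage;
    if (bucket < cumulative) return arm.name;
  }
  return arms[arms.length - 1].name; // guard against rounding gaps
}
```

Because assignment depends only on the Unit ID and the arm percentages, re-evaluating the flag for the same user always yields the same arm.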
If the experiment has a payload event type, you can also set the value for the payload, or nest more logic for it.
When you evaluate a flag that uses an experiment, an exposure will be logged with the Unit ID, the arm the unit was assigned for each dimension, and the payload event.
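An exposure record of that shape might look as follows. The field names here are assumptions for illustration, not Hypertune's actual event schema.

```typescript
// Illustrative shape of a logged exposure, per the description above.
// Field names are assumptions, not Hypertune's event schema.
type Exposure = {
  unitId: string;
  armAssignments: Record<string, string>; // dimension name -> assigned arm
  payload?: unknown; // present if the experiment has a payload event type
};

function serializeExposure(exposure: Exposure): string {
  return JSON.stringify(exposure);
}
```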
Viewing experiment results
To view the results of your experiment, ensure you've set up the event types you want to track, then build a funnel to compare conversion rates across different arms.