Guide
This guide builds on the SDK quickstart and shows you how to model, manage, and experiment on your AI configuration using Hypertune.
You'll learn how to:
Create custom object types to model your AI configuration
Create flags that use those custom object types
Access those flags to retrieve AI configuration in your code
Update and target your AI configuration
Run experiments on your AI configuration
Prerequisites
Complete the SDK quickstart so you have a generated Hypertune client set up in your project.
Create custom object types to model your AI configuration
Go to the Schema view in the dashboard. Click the + button in the top-right of the sidebar. Select Object, enter a name, and click Create.

By default, the new object type has no fields.
Click + Add to add a new field. Enter a name, set its type, and click Create.

Repeat for each field you want to add. You can switch to the code view to make this easier. Then, click Save.
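For example, the code view for an object type backing the email assistant used later in this guide might look roughly like the sketch below. The field names mirror that example, but the exact schema syntax shown here is an approximation, not output copied from the dashboard:

```graphql
# Illustrative sketch of an AI configuration object type.
# Field names match the email assistant example later in this guide.
type EmailAssistantAIConfig {
  model: String!
  system: String!
  # Arguments on a field become variables you can reference in the dashboard.
  prompt(email: String!, tone: String!): String!
  maxOutputTokens: Int!
  temperature: Float!
  presencePenalty: Float!
  frequencyPenalty: Float!
  maxRetries: Int!
}
```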

Note how the example above has `email` and `tone` arguments on the `prompt` flag. They are passed when calling the `prompt` flag in your code, and can be referenced as variables when writing the prompt in the Hypertune dashboard, enabling you to create a prompt template.
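Conceptually, a prompt template with arguments behaves like a function of those arguments. A plain TypeScript sketch of that idea (illustrative only — Hypertune evaluates the template from your flag logic, not from code like this):

```typescript
// Illustrative sketch: a prompt template is conceptually a function of its arguments.
// The template text below matches the fallback prompt used later in this guide.
function promptTemplate({ email, tone }: { email: string; tone: string }): string {
  return `Write a reply to the following email:\n\n${email}\n\nThe tone should be ${tone} and the response should address all points mentioned.`
}

console.log(promptTemplate({ email: 'Can we move our call to 3pm?', tone: 'friendly' }))
```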
Create flags for your AI configuration
Go to the Flags view in the dashboard. Click the + button in the top-right of the sidebar and select Flag.

Enter a name, set its type to the one you created earlier, and click Create.

Enter the initial configuration, then click Save.

Access flags to retrieve AI configuration
Regenerate the client:

```shell
npx hypertune
```
Then use the generated methods to access your flags:
```typescript
import { waitUntil } from '@vercel/functions'
import { generateText } from 'ai'
import getHypertune from '@/lib/getHypertune'

export async function POST(req: Request) {
  const hypertune = await getHypertune({ isRouteHandler: true })
  const aiConfig = hypertune.emailAssistantAIConfig()

  const { email, tone }: { email: string; tone: string } = await req.json()

  const { text } = await generateText({
    model: aiConfig.model({ fallback: 'openai/gpt-4.1' }),
    system: aiConfig.system({
      fallback: `You are a professional assistant that drafts clear, polite, and concise email replies for a busy executive.`,
    }),
    prompt: aiConfig.prompt({
      args: { email, tone },
      fallback: `Write a reply to the following email:\n\n${email}\n\nThe tone should be ${tone} and the response should address all points mentioned.`,
    }),
    maxOutputTokens: aiConfig.maxOutputTokens({ fallback: 400 }),
    temperature: aiConfig.temperature({ fallback: 0.5 }),
    presencePenalty: aiConfig.presencePenalty({ fallback: 0.1 }),
    frequencyPenalty: aiConfig.frequencyPenalty({ fallback: 0.3 }),
    maxRetries: aiConfig.maxRetries({ fallback: 5 }),
  })

  waitUntil(hypertune.flushLogs())

  return Response.json({ text })
}
```
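Each generated getter takes a fallback that is used when the flag can't be evaluated, so your route keeps working even if configuration is unavailable. A simplified sketch of that behavior (not the actual generated Hypertune code):

```typescript
// Simplified sketch of fallback semantics — not the generated Hypertune code.
function evaluateWithFallback<T>(flagValue: T | undefined, fallback: T): T {
  // Use the evaluated flag value when available; otherwise fall back.
  return flagValue !== undefined ? flagValue : fallback
}

console.log(evaluateWithFallback<number>(undefined, 400)) // → 400
console.log(evaluateWithFallback(0.7, 0.5)) // → 0.7
```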
Update your AI configuration
Go to the Flags view in the dashboard, and select the flag with your AI configuration from the left sidebar.

Make your changes, then open the Diff view to review them. Click Save.

Target your AI configuration
Go to the Flags view in the dashboard, and select the flag with your AI configuration from the left sidebar. Click the arrow (>) to view each subfield in the sidebar, and select the field you want to add a targeting rule to.

Click + Rule, set your condition, provide alternate configuration for that condition, and click Save.
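A targeting rule evaluates a condition against context (for example, the current user) and serves alternate configuration when it matches. The sketch below shows the concept in plain TypeScript; the condition, model names, and context shape are hypothetical, and Hypertune evaluates rules from your flag logic, not from code like this:

```typescript
// Illustrative sketch of a targeting rule — not Hypertune's rule engine.
// The context shape and model IDs here are hypothetical examples.
type Context = { user: { email: string } }

function selectModel(context: Context): string {
  // Rule: internal users get an alternate model; everyone else gets the default.
  if (context.user.email.endsWith('@example.com')) {
    return 'openai/gpt-4.1-mini'
  }
  return 'openai/gpt-4.1'
}

console.log(selectModel({ user: { email: 'sam@example.com' } })) // → openai/gpt-4.1-mini
```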

Experiment on your AI configuration
Go to the Flags view in the dashboard, and select the flag with your AI configuration from the left sidebar. Click the arrow (>) to view each subfield in the sidebar, and select the field you want to experiment on.

Click + Experiment. In the dropdown, select New experiment.

Enter a name for your experiment and click Create.

Click Insert and update the Test variant of your content.

Review your changes in the Diff view and click Save.

Once you've analyzed your experiment results and decided on a winning variant, go to the Flags view and select your flag. Click the options button (⋯) next to the variant you want to ship and select Ship variant, then click Save.

Next steps
Run a multivariate test to find the best combination of model choice, prompts, and settings to optimize key metrics like cost, latency, or user satisfaction.
Set up an AI loop to automatically optimize AI configuration for each unique user, improving key metrics.
Extend your schema to support more complex AI configuration, e.g. prompt chains.