# Overview

AI configuration lets you build, ship, and optimize AI features faster.

## Problem

Without AI configuration, engineers hardcode key parameters: model choice, prompts, and generation settings such as temperature and max tokens. This approach has several limitations:

* **Code coupling:** Updating model choice, prompts, and settings requires code changes and redeployments.
* **Limited iteration:** Domain experts can't live-test or refine prompts without engineering support.
* **No independent versioning:** Model choice, prompts, and settings aren't versioned separately from the codebase, making it hard to track or roll back changes.
* **No runtime flexibility:** You can't dynamically adjust model choice, prompts, or settings based on environment, user segment, or other conditions.
* **No experimentation:** There's no way to run experiments on model choice, prompts, and settings to improve metrics like cost, latency, or user satisfaction.
* **No automation:** There's no way to automatically optimize model choice, prompts, and settings.
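For contrast, a hardcoded version of the email assistant might look like the sketch below. This is a hypothetical "before" example: the constant names are illustrative, and the prompt template mirrors the fallback used later on this page.

```typescript
// Hypothetical hardcoded setup: every parameter is baked into the bundle,
// so changing any of them requires a code change and a redeploy.
const MODEL = 'openai/gpt-4.1'
const TEMPERATURE = 0.5
const MAX_OUTPUT_TOKENS = 400

// The prompt template is frozen in code too, so domain experts
// can't refine it without engineering support.
function buildPrompt(email: string, tone: string): string {
  return `Write a reply to the following email:\n\n${email}\n\nThe tone should be ${tone} and the response should address all points mentioned.`
}
```
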

## Solution

Define your AI configuration in Hypertune as flags instead of hardcoded values:

```graphql
type Root {
  emailAssistantAIConfig: EmailAssistantAIConfig!
}

type EmailAssistantAIConfig {
  model: String!
  system: String!
  prompt(email: String!, tone: String!): String!
  maxOutputTokens: Int!
  temperature: Float!
  presencePenalty: Float!
  frequencyPenalty: Float!
  maxRetries: Int!
}
```

Then reference it in your code:

{% code title="app/api/completion/route.ts" %}

```typescript
import { waitUntil } from '@vercel/functions'
import { generateText } from 'ai'
import getHypertune from '@/lib/getHypertune'

export async function POST(req: Request) {
  const hypertune = await getHypertune({ isRouteHandler: true })

  const aiConfig = hypertune.emailAssistantAIConfig()

  const { email, tone }: { email: string; tone: string } =
    await req.json()

  const { text } = await generateText({
    model: aiConfig.model({ fallback: 'openai/gpt-4.1' }),
    system: aiConfig.system({
      fallback: `You are a professional assistant that drafts clear, polite, and concise email replies for a busy executive.`,
    }),
    prompt: aiConfig.prompt({
      args: { email, tone },
      fallback: `Write a reply to the following email:\n\n${email}\n\nThe tone should be ${tone} and the response should address all points mentioned.`,
    }),
    maxOutputTokens: aiConfig.maxOutputTokens({ fallback: 400 }),
    temperature: aiConfig.temperature({ fallback: 0.5 }),
    presencePenalty: aiConfig.presencePenalty({ fallback: 0.1 }),
    frequencyPenalty: aiConfig.frequencyPenalty({
      fallback: 0.3,
    }),
    maxRetries: aiConfig.maxRetries({ fallback: 5 }),
  })

  waitUntil(hypertune.flushLogs())

  return Response.json({ text })
}
```

{% endcode %}
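The route can then be called from the client like any other API route. Here is a minimal sketch; the helper name is hypothetical, while the payload shape matches the handler's `req.json()` call and the response shape matches its `Response.json({ text })`.

```typescript
// Hypothetical client helper: builds the fetch options
// for a POST to the route handler above.
function buildCompletionRequest(email: string, tone: string) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email, tone }),
  }
}

// Usage (in a component or client script):
// const res = await fetch('/api/completion', buildCompletionRequest(email, 'friendly'))
// const { text } = await res.json()
```
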

This enables engineers and non-technical collaborators to tweak prompts and settings instantly from the Hypertune dashboard without any code changes or redeploys:

<figure><img src="https://2048905609-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FWa3rQLiu4JZhBRkiyoKz%2Fuploads%2FycLmRu9Yoy2cVrfKhOBl%2Flocalhost_3000_projects_3357_main_draft_logic_setup%3D0%26selected_schema_type%3D%257B%2522type%2522%253A%2522object%2522%252C%2522name%2522%253A%2522EmailAssistantAIConfig%2522%252C%2522selectedChildName%2522%253Anull%257D%26selected_field_path%3Droot%253EemailAssistantAIConfig%20(1).png?alt=media&#x26;token=ce92b18e-0871-4d0e-834e-4d9a699eba9b" alt=""><figcaption></figcaption></figure>

## Benefits

* **Instant updates:** Change model choice, prompts, and settings instantly at runtime without code changes or redeploys.
* **Collaborative iteration:** Domain experts can live-test and refine prompts without relying on engineering.
* **Independent versioning:** Track, compare, and roll back AI configuration changes easily.
* **Dynamic behavior:** Adjust configuration by environment, user, or other conditions.
* **Experimentation:** Run experiments on model choice, prompts, and settings to optimize cost, latency, or engagement.
* **Automated optimization:** Automatically optimize model choice, prompts, and settings.

## ROI

These benefits help teams:

* **Ship faster:** Launch and iterate on AI features without engineering bottlenecks.
* **Improve quality:** Continuously refine prompts and parameters for better results.
* **Drive outcomes:** Optimize AI behavior for metrics like conversion, retention, and revenue.
