Cut your LLM costs.

One API endpoint replaces 12 provider integrations. Intelligent routing saves 30–60% on every request.

Trusted by engineering teams shipping with AI

Model        Direct    Trovald   Savings
Opus 4.6     $15.00    $9.00     -40%
GPT-4.5      $75.00    $37.50    -50%
Gemini 3.1   $1.25     $0.75     -40%
Grok         $3.00     $1.65     -45%

Works with all major LLM providers


Everything you need to optimize LLM spend

Drop-in proxy with intelligent routing, billing, and reliability built in.

Smart Routing

Classifies each request and routes to the most cost-effective model that meets your quality threshold. Budget, balanced, or premium.

Universal API

Send OpenAI, Anthropic, or Google-format requests. Trovald translates between all three—swap providers without changing code.
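To illustrate the kind of translation described above, here is a minimal sketch of mapping an Anthropic-style Messages payload onto the OpenAI chat-completions shape. The field names are the public ones from each API, but the helper itself is a hypothetical illustration, not Trovald's actual translation layer:

```python
def anthropic_to_openai(payload: dict) -> dict:
    """Map an Anthropic Messages-style body to OpenAI chat format.
    Illustrative sketch only -- not Trovald's implementation."""
    messages = []
    # Anthropic carries the system prompt as a top-level field;
    # OpenAI expects it as the first message in the list.
    if "system" in payload:
        messages.append({"role": "system", "content": payload["system"]})
    messages.extend(payload["messages"])
    return {
        "model": payload["model"],
        "messages": messages,
        "max_tokens": payload.get("max_tokens", 1024),
    }

req = anthropic_to_openai({
    "model": "claude-sonnet",
    "system": "You are terse.",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hi"}],
})
```

Because the proxy performs this mapping server-side, the same request can be served by any backend provider.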

Transparent Billing

Prepaid wallet with full transaction history. See exactly what you spend per request, per model, per provider. No surprises.

Circuit Breaker

Automatic failover when a provider goes down. Your requests keep flowing while we route around the problem.
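The pattern behind this failover can be sketched in a few lines: count consecutive failures per provider and skip any provider that crosses a threshold. The threshold value and provider names below are made-up placeholders, and this is a toy model of the pattern, not Trovald's code:

```python
class CircuitBreaker:
    """Toy circuit breaker: after `threshold` consecutive failures,
    a provider is marked unhealthy and skipped during routing."""

    def __init__(self, providers, threshold=3):
        self.providers = providers
        self.threshold = threshold
        self.failures = {p: 0 for p in providers}

    def record(self, provider, ok):
        # A success resets the counter; a failure increments it.
        self.failures[provider] = 0 if ok else self.failures[provider] + 1

    def route(self):
        healthy = [p for p in self.providers
                   if self.failures[p] < self.threshold]
        if not healthy:
            raise RuntimeError("no healthy providers")
        return healthy[0]

cb = CircuitBreaker(["openai", "anthropic"])
for _ in range(3):
    cb.record("openai", ok=False)  # openai starts erroring
```

After three consecutive failures, `route()` skips the unhealthy provider and returns the next healthy one.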

Shadow Mode

Test routing decisions without affecting production. See projected savings before you commit to any changes.

12 Providers

OpenAI, Anthropic, Google, Mistral, DeepSeek, Groq, xAI, Cohere, Together, Fireworks, Perplexity, and OpenRouter—all from one key.

How It Works

Go from direct provider calls to optimized routing in minutes.

1. Point your app

Swap your provider base URL for Trovald and use your tv_ API key. No SDK changes.

2. We classify

Each request is analyzed—chat, code, summarization—to determine the right quality tier.

3. We route

The request goes to the cheapest provider that meets quality requirements for that task.

4. You save

Same results, lower bill. Track every request and dollar saved in your dashboard.
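The classify-then-route loop in steps 2 and 3 can be sketched as follows. The task heuristics, model names, and per-token prices are made-up placeholders, not Trovald's real tables:

```python
# Hypothetical catalog: (model, quality_tier, price) -- placeholder values.
MODELS = [
    ("small-model",  1, 0.50),
    ("medium-model", 2, 3.00),
    ("large-model",  3, 15.00),
]

def classify(prompt: str) -> int:
    """Step 2: map a request to a required quality tier (toy heuristic)."""
    if "```" in prompt or "def " in prompt:
        return 3   # code generation -> highest tier
    if len(prompt) > 500:
        return 2   # long-form work -> mid tier
    return 1       # short chat -> budget tier

def route(prompt: str) -> str:
    """Step 3: cheapest model whose tier meets the requirement."""
    tier = classify(prompt)
    eligible = [m for m in MODELS if m[1] >= tier]
    return min(eligible, key=lambda m: m[2])[0]
```

A short chat message routes to the cheapest tier, while a code-generation prompt is forced up to the highest one.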

Calculate your savings

See what intelligent routing could save your team.

Example: estimated monthly savings of $1,750 (that's $21,000 per year).
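The figures above are consistent with, for example, a $5,000/month direct spend at a 35% savings rate. Those inputs are assumptions for illustration; the calculator uses your own numbers:

```python
def estimated_savings(monthly_spend: float, savings_pct: float):
    """Monthly and annual savings from routing a given direct spend."""
    monthly = monthly_spend * savings_pct / 100
    return monthly, monthly * 12

# Assumed inputs: $5,000/month direct spend, 35% savings rate.
monthly, annual = estimated_savings(5_000, 35)
# -> 1750.0 per month, 21000.0 per year
```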

Frequently Asked Questions

What is Trovald?

Trovald is an intelligent LLM proxy that sits between your application and AI providers. It analyzes each request and routes it to the most cost-effective model that meets your quality requirements, saving you 30–60% on LLM costs with no code changes beyond your API key and base URL.

How does routing decide which model to use?

Each request is classified by task type (chat, code generation, summarization, etc.) and complexity. Trovald then selects the cheapest provider and model that can handle that task at the required quality level. You can control the quality threshold with budget, balanced, or premium modes.

Is my data secure?

Yes. Trovald acts as a transparent proxy—your requests are forwarded directly to the provider. We log metadata (model, tokens, cost) for billing, but request and response content is never stored or used for training. All traffic is encrypted in transit.

How hard is it to integrate?

Change two lines of code: swap your provider API key for a Trovald tv_ key, and point your base URL to Trovald. Works with any OpenAI-compatible SDK in any language. Most teams integrate in under 2 minutes.

What happens when a provider goes down?

Trovald includes a built-in circuit breaker. When a provider experiences errors or high latency, requests are automatically rerouted to a healthy alternative. Your application stays online even when individual providers don't.

How does pricing work?

Trovald uses a prepaid wallet model with transparent per-request billing. There's a small markup on provider costs, but intelligent routing means your total spend is still significantly lower than going direct. A free tier is available to get started.

Integrate in 2 minutes

Change two lines. That's it.

JavaScript

// Before
const client = new OpenAI({
  apiKey: "sk-...",
});

// After — just change key & URL
const client = new OpenAI({
  apiKey: "tv_...",
  baseURL: "https://api.trovald.com/v1",
});

Python

# Before
client = OpenAI(api_key="sk-...")

# After — just change key & URL
client = OpenAI(
    api_key="tv_...",
    base_url="https://api.trovald.com/v1",
)

cURL

# Before
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer sk-..." \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[...]}'

# After — just change URL & key
curl https://api.trovald.com/v1/chat/completions \
  -H "Authorization: Bearer tv_..." \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[...]}'

API Documentation

Everything you need to integrate Trovald into your stack

How It Works

Trovald is a drop-in proxy that sits between your app and LLM providers. Point your existing SDK at our endpoint, and we handle the rest:

  • Intelligent routing across 12+ providers
  • Automatic failover and circuit breaking
  • Cost tracking and wallet management
  • Zero code changes beyond base URL

Authentication

Generate an API key from your dashboard with the tv_ prefix. Pass it as a Bearer token in the Authorization header:

  • Authorization: Bearer tv_...
  • Keys are scoped to your organization
  • Revoke keys instantly from the dashboard
  • All requests deducted from your wallet

Supported Providers

Use any model from any supported provider through a single unified API. We support OpenAI-compatible format for all providers:

  • OpenAI, Anthropic, Google Gemini
  • Mistral, DeepSeek, Groq, xAI
  • Together, Fireworks, Perplexity
  • OpenRouter, Cohere, and more

Routing Modes

Control how Trovald routes your requests. Configure from your dashboard settings:

  • Budget – maximize savings, lower-tier models
  • Default – balanced savings and quality
  • Premium – highest quality, minimal routing
  • Shadow – compare routes without switching

API Endpoints

POST /v1/chat/completions – OpenAI-compatible chat (streaming supported)
POST /v1/messages – Anthropic-compatible messages
GET /health – Service health check
GET /health/providers – Provider status and latency

Start saving on LLM costs today

Get Started Free

No credit card required. Free tier available.