Proxies for Mistral AI and La Plateforme
Mistral is the largest EU-based frontier model provider. Evaluating Mistral models from EU origins (FR, DE, NL) gives the authentic regional signal that US-cloud eval can't reproduce.
Updated 23 April 2026
Why Mistral specifically benefits from proxy routing
Mistral's La Plateforme API is one of the EU's primary frontier AI surfaces. Three angles make proxy routing specifically useful:
- EU-regional policy eval on an EU-deployed provider. Mistral's content policy reflects EU and French regulatory context more directly than US providers'. Measuring the delta between US-cloud and FR/DE-residential origins is the point of the eval.
- Mistral's Paris-region inference infrastructure. Mistral routes EU traffic to EU-deployed POPs. In-region eval measures the actual deployed surface; out-of-region eval measures a cross-border-routing fallback.
- Multilingual eval with Mistral's European focus. French and German in particular: Mistral's training data and evaluation work are weighted toward European languages. Proxy routing that anchors eval to each target language's primary country gives the authentic signal.
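The language-anchoring idea above can be sketched as a small lookup. The `LANGUAGE_ANCHORS` mapping and the `anchor_headers` helper are illustrative assumptions, not a SquadProxy API; the `X-Squad-*` header names mirror the configuration shown below.

```python
# Hypothetical language -> anchor-country map for multilingual Mistral eval.
# The mapping itself is an assumption for illustration.
LANGUAGE_ANCHORS = {
    "fr": "FR",  # French prompts exit from French residential IPs
    "de": "DE",
    "nl": "NL",
    "en": "GB",
}

def anchor_headers(lang: str) -> dict:
    """Build the X-Squad-* routing headers that pin an eval request
    to the target language's primary country."""
    country = LANGUAGE_ANCHORS.get(lang, "US")  # US as out-of-region baseline
    return {
        "X-Squad-Class": "residential",
        "X-Squad-Country": country,
        "X-Squad-Session": "per-request",
    }
```

Unmapped languages fall back to a US origin, which doubles as the out-of-region baseline for delta measurement.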
Recommended configuration
```python
import os

import httpx

MISTRAL_API_KEY = os.environ["MISTRAL_API_KEY"]
PROXY = "http://USER:PASS@gateway.squadproxy.com:7777"

def eval_mistral(prompt: str, country: str, model: str = "mistral-large-latest"):
    return httpx.post(
        "https://api.mistral.ai/v1/chat/completions",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        headers={
            "Authorization": f"Bearer {MISTRAL_API_KEY}",
            "X-Squad-Class": "residential",
            "X-Squad-Country": country,
            "X-Squad-Session": "per-request",
        },
        proxy=PROXY,  # httpx >= 0.26; older versions use proxies=PROXY
        timeout=120,
    ).json()
```
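The point of routing eval traffic this way is the delta between origins. One minimal way to score it, assuming standard Chat Completions response payloads and a deliberately naive string-matching refusal heuristic (the marker list is an assumption you would tune per eval):

```python
# Naive refusal heuristic -- a placeholder marker list, not a real classifier.
REFUSAL_MARKERS = ("i can't", "i cannot", "je ne peux pas")

def is_refusal(response: dict) -> bool:
    """Check a Chat Completions payload for refusal phrasing."""
    text = response["choices"][0]["message"]["content"].lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[dict]) -> float:
    """Fraction of responses flagged as refusals."""
    return sum(is_refusal(r) for r in responses) / len(responses)

def origin_delta(us_responses: list[dict], eu_responses: list[dict]) -> float:
    """Positive delta means the EU origin refuses more often than the US one."""
    return refusal_rate(eu_responses) - refusal_rate(us_responses)
```

Feed it the `.json()` payloads collected from each origin and the delta is one number per prompt set.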
Mistral eval countries that matter
- FR: Mistral is Paris-based. FR-residentials give the authentic home-region eval origin.
- DE: largest EU market, GDPR-stringent, distinct from FR on regulatory surface.
- NL: Amsterdam-routed EU traffic anchor.
- GB: post-Brexit, EU-adjacent English-language comparison point.
- US: baseline for measuring EU-vs-US delta.
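Running one prompt from every country above is a natural fan-out. A sketch using a thread pool, where `eval_fn` is any callable with the `(prompt, country)` shape of the `eval_mistral` function in the configuration above:

```python
from concurrent.futures import ThreadPoolExecutor

# The five exit countries discussed above.
EVAL_COUNTRIES = ["FR", "DE", "NL", "GB", "US"]

def fan_out(eval_fn, prompt: str, countries=EVAL_COUNTRIES) -> dict:
    """Send one prompt from every exit country in parallel and
    collect the results keyed by country code."""
    with ThreadPoolExecutor(max_workers=len(countries)) as pool:
        futures = {c: pool.submit(eval_fn, prompt, c) for c in countries}
        return {c: f.result() for c, f in futures.items()}
```

Each request carries its own `X-Squad-Country` header, so the five calls exit from five different origins concurrently.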
Mistral-specific notes
- Codestral — code-focused variant. Eval from developer-typical origins (US metros, EU tech hubs) for realistic deployed conditions.
- Mistral Embed — embeddings API. Origin doesn't affect embedding output, so datacenter is the right class.
- Fine-tuning workloads — training-data upload uses a similar API surface. Datacenter is appropriate.
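The workload-to-class notes above amount to a small routing policy. A sketch, where the workload names and the residential choice for Codestral are assumptions based on the notes rather than fixed product behavior:

```python
# Illustrative workload -> exit-class policy derived from the notes above.
EXIT_CLASS = {
    "chat-eval": "residential",   # regional policy eval needs real user origins
    "codestral": "residential",   # developer-typical metros (assumed class)
    "embeddings": "datacenter",   # bulk, origin-insensitive
    "fine-tuning": "datacenter",  # training-data upload
}

def routing_headers(workload: str, country: str) -> dict:
    """Pick the exit class for a workload and build the routing headers."""
    return {
        "X-Squad-Class": EXIT_CLASS[workload],
        "X-Squad-Country": country,
        "X-Squad-Session": "per-request",
    }
```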
Plans that fit
See pricing. Mistral eval typically fits into the Solo or Team plans; lab coordination around EU-deployed safety evaluation uses the Lab plan.
Related
- LLM evaluation use case
- France country page — primary Mistral-anchor country
- ChatGPT API landing
- Claude API landing
Pricing — plans sized for Mistral workloads
Every plan includes access to all 5 exit classes across our 10 focus countries — quotas vary by plan. The size you need scales with your eval cadence and concurrency.
Solo
For individual researchers running evaluation scripts and prototype RAG pipelines.
$149/month
or $1,430/year (save 20%)
50 GB residential · unlimited datacenter · 200 concurrent sessions
- ✓Access to all 5 exit classes · 10 focus countries
- ✓50 GB residential · unlimited datacenter
- ✓5 static ISP IPs · 5 GB 4G mobile
- ✓1 seat · 200 concurrent sessions
- ✓Python + Node SDK + REST API
- ✓Per-request metering (not time-based)
- ✓Email support (24h response, business days)
- ✓Overage: $3/GB residential · $6/GB mobile
Best for
- Solo researchers
- Evaluation scripts
- Prototype RAG
Team
Most popular. For AI startups and mid-size labs splitting capacity between training and evaluation.
$699/month
or $6,710/year (save 20%)
500 GB residential · unlimited datacenter · 1,000 concurrent sessions
- ✓Access to all 5 exit classes · 10 focus countries
- ✓500 GB residential · unlimited datacenter
- ✓25 static ISP IPs · 25 GB 4G mobile
- ✓10 seats ($29/mo per extra seat) · 1,000 concurrent sessions
- ✓City-level geo-routing + ASN targeting
- ✓99.9% uptime SLA
- ✓Priority Slack support (4h response, business hours)
- ✓Python + Node SDK + REST API + webhooks
- ✓Overage: $3/GB residential · $6/GB mobile
Best for
- AI startups
- Mid-size labs
- Model eval teams
Lab
For academic labs, eval consortia, and frontier model companies running sustained workloads.
$2,999/month
or $28,790/year (save 20%)
2 TB residential · unlimited DC · 50 GB 4G + 20 GB 5G · 3,000 concurrent sessions
- ✓Access to all 5 exit classes · 10 countries on 4 continents
- ✓2 TB residential · unlimited datacenter
- ✓100 static ISP IPs · 50 GB 4G + 20 GB 5G mobile
- ✓50 seats ($19/mo per extra seat) · 3,000 concurrent sessions
- ✓Dedicated gateway lane (bypasses shared-pool queues on us-east-1 + eu-west-1)
- ✓99.95% uptime SLA
- ✓Dedicated Slack channel (1h response, business hours)
- ✓Custom BGP prefix on request (additional fees apply)
- ✓Overage: $2.50/GB residential · $5/GB mobile
Best for
- Academic labs
- Large eval consortia
- Frontier model companies
Enterprise
Custom contracts with dedicated infrastructure, volume pricing, and research-grade SLAs.
Custom pricing
Custom (from 5 TB/mo residential) · unlimited concurrent sessions
- ✓Volume pricing from 5 TB/mo residential
- ✓Dedicated BGP prefix + ASN announcement
- ✓Unlimited concurrent sessions · unlimited seats
- ✓99.99% uptime SLA with financial credits
- ✓Named Technical Account Manager + 24/7 on-call paging
- ✓Custom AUP, DPA, on-site deployment option
- ✓Research / academic discount (30–50% off Team or Lab)
- ✓Annual contract · wire, ACH, USDC/USDT/BTC settlement
Best for
- Frontier labs
- Eval consortia
- Enterprise AI
All plans include 14-day refund, single endpoint with regional failover, HTTP(S) + SOCKS5 on every exit class, access to all 5 exit classes and all 10 focus countries, and Python + Node SDKs. Concurrent sessions = simultaneous TCP sessions through the gateway. Overage warnings fire at 80% and 100%; traffic continues only if overage billing is enabled on your account.
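Since overage billing is opt-in, it's worth estimating the monthly bill before enabling it. A rough estimator built from the published numbers above, assuming 1 TB = 1,000 GB for the Lab quota:

```python
# Residential quotas and overage rates copied from the plan table above.
PLANS = {
    "solo": {"base": 149, "residential_gb": 50, "overage_per_gb": 3.0},
    "team": {"base": 699, "residential_gb": 500, "overage_per_gb": 3.0},
    "lab":  {"base": 2999, "residential_gb": 2000, "overage_per_gb": 2.5},
}

def monthly_cost(plan: str, residential_gb_used: float) -> float:
    """Base price plus residential overage; datacenter is unlimited
    on every plan, so only residential usage drives overage."""
    p = PLANS[plan]
    overage_gb = max(0.0, residential_gb_used - p["residential_gb"])
    return p["base"] + overage_gb * p["overage_per_gb"]
```

For example, a Solo plan at 60 GB of residential traffic lands at $149 plus 10 GB of overage at $3/GB.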
Other API landings
Routing traffic for a different AI API?
For ChatGPT
Proxies for ChatGPT and the OpenAI Chat API
Regional evaluation of ChatGPT and the OpenAI Chat Completions API across 10 countries, with header-based exit-class routing and session continuity for multi-turn agent evaluation.
For Claude
Proxies for Claude and the Anthropic API
Regional Claude evaluation across 10 countries with header-based exit routing, session continuity for multi-turn agent benchmarks, and concurrency that handles eval fleets.
For Gemini
Proxies for Gemini and the Google AI API
Regional Gemini evaluation across 10 countries, with header-based exit-class routing and the concurrency headroom to run continuous eval fleets.
For HuggingFace Inference
Proxies for the HuggingFace Inference API and Endpoints
HF hosts inference for thousands of open-source models. Routing eval workloads through the HF inference surface with sensible rate distribution and regional anchoring keeps the eval consistent and within HF's rate budget.
For OpenAI
Proxies for the full OpenAI API surface (Chat, Embeddings, DALL-E, Realtime)
Chat, Embeddings, DALL-E, Realtime, Assistants — all covered by the same header-based gateway routing. Residential for regional eval, ISP for multi-turn Assistants, datacenter for bulk Embeddings.
Start routing Mistral traffic through SquadProxy
Real ASNs, real edge capacity, and an engineer who answers your Slack the first time.