Proxies for ChatGPT Operator: browser-agent configuration that works
ChatGPT Operator runs browser-based task execution for end users. Operator-style agents (Operator itself, open-source clones, custom GPTs with browsing) all share a proxy configuration shape. A working reference.
· Nathan Brecher · 4 min read
ChatGPT Operator, launched in early 2025 and matured through 2026, is OpenAI's production browser agent. Operator reads the target page's DOM and accessibility tree (rather than the screenshots Claude Computer Use relies on) and generates browser actions at the element level. The agent class — "Operator-style browser agents" — also covers custom GPTs with the browsing tool, LangGraph agents driving Playwright, and a long tail of open-source Operator variants.
This post describes the proxy configuration pattern that works for Operator-style agents in production.
Operator vs. Computer Use: the proxy-layer difference
Both are browser agents. Both need long sessions and residential-grade authenticity. The practical differences for proxy routing:
- DOM-level actions are faster than pixel-level ones, so an Operator session executes actions more quickly than a Computer Use session on equivalent tasks. Total session bandwidth is lower (no screenshot-heavy round trips), and the timing profile sits slightly closer to traditional browser automation.
- Authentication handling differs. Operator intercepts credential entry with its own authentication-prompt handling; Computer Use leaves it entirely to your runtime. This changes little for proxy configuration, but it does mean Operator sessions can persist authentication state across tasks in ways your runtime would otherwise have to manage explicitly.
- Anti-bot signature is slightly different. DOM-inspection traffic patterns differ from screenshot-traffic patterns. Some anti-bot systems classify them differently.
Recommended proxy configuration for Operator
Same basic shape as the Computer Use post: ISP exit class, session-stable across the task lifetime, country matching the task's expected origin.
# Custom GPT with browsing, or LangGraph Operator-style agent
# Your runtime takes a proxy URL; we provide it.
PROXY_URL = "http://USER:PASS@gateway.squadproxy.com:7777"

def proxy_config_for_task(task_id: str, country: str = "us"):
    return {
        "server": PROXY_URL,
        "headers": {
            "X-Squad-Class": "isp",
            "X-Squad-Country": country,
            "X-Squad-Session": f"operator-{task_id}",
        },
    }
# Playwright example
from playwright.async_api import async_playwright

async def run_operator_task(task_id: str, prompt: str, country: str = "us"):
    async with async_playwright() as p:
        proxy = proxy_config_for_task(task_id, country)
        browser = await p.chromium.launch(
            proxy={"server": proxy["server"]},
        )
        context = await browser.new_context(
            extra_http_headers=proxy["headers"],
        )
        # Your Operator-style agent loop runs here
        ...
Operator-specific considerations
Credential handling
Operator-style agents routinely encounter authentication pages. The pattern that works: pre-authenticate in the runtime before handing off to the agent. The agent then operates with session state already established, and the proxy layer maintains the session continuity that keeps that authentication state valid.
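A minimal sketch of that pre-authentication handoff, assuming a Playwright `browser` object. The login URL, selectors, and environment-variable credential names are placeholders, and `session_headers` simply repeats the sticky X-Squad-Session header shape from the config above:

```python
import os

# Sticky-session headers, same shape as the proxy config above.
def session_headers(task_id: str, country: str = "us") -> dict:
    return {
        "X-Squad-Class": "isp",
        "X-Squad-Country": country,
        "X-Squad-Session": f"operator-{task_id}",
    }

async def preauthenticated_context(browser, task_id: str, country: str = "us"):
    # One context per task, carrying the task-scoped session headers.
    context = await browser.new_context(
        extra_http_headers=session_headers(task_id, country)
    )
    page = await context.new_page()
    await page.goto("https://example.com/login")          # placeholder URL
    await page.fill("#username", os.environ["SVC_USER"])  # placeholder selectors;
    await page.fill("#password", os.environ["SVC_PASS"])  # load creds from a secret store
    await page.click("button[type=submit]")
    await page.wait_for_load_state("networkidle")
    # Hand `context` to the agent: the sticky session keeps the exit IP
    # stable, so the cookies set here stay valid for the rest of the task.
    return context
```

The point of the sketch is the ordering: credentials are entered by the runtime, and the agent only ever sees an already-authenticated context.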
Tab / window management
Operator-style agents sometimes open multiple tabs or windows. Each tab shares the parent context's proxy settings. If your runtime instead creates a new browser context per tab, make sure the proxy is applied consistently and that every tab within the agent task shares the same session ID via X-Squad-Session.
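For the context-per-tab case, the header-reuse can be sketched as follows. `open_tab` is a hypothetical helper around Playwright's `browser.new_context()`; the only point is that one header dict is built per task and reused for every tab:

```python
# Sketch: reuse one task-scoped header dict for every context the task
# opens, so all tabs exit through the same sticky proxy session.
async def open_tab(browser, headers: dict):
    context = await browser.new_context(extra_http_headers=headers)
    return await context.new_page()

# Usage: build the headers once per task, reuse for every tab:
# headers = proxy_config_for_task("task-42")["headers"]
# page_a = await open_tab(browser, headers)
# page_b = await open_tab(browser, headers)  # same X-Squad-Session, same exit IP
```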
Rate behaviour
Operator's DOM-action pace is faster than Computer Use. On rate-sensitive targets, this sometimes triggers faster throttling. The fix is typically not more proxies — it's slowing the agent down with explicit wait/delay instructions.
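A minimal pacing wrapper, assuming an asyncio-based agent loop. The 0.8–2.5 s jitter window is an assumed starting point to tune per target, not a measured threshold:

```python
import asyncio
import random

# Sketch: insert a jittered delay before each agent action instead of
# rotating proxies, bringing the action rate closer to human pace.
async def paced(action, min_delay: float = 0.8, max_delay: float = 2.5):
    await asyncio.sleep(random.uniform(min_delay, max_delay))
    return await action

# Usage inside the agent loop:
# await paced(page.click("#checkout"))
```

Wrapping individual actions (rather than sleeping once per page) is deliberate: throttling systems typically key on inter-action timing, not page-load timing.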
OpenAI's own infrastructure
Operator tasks that call back to OpenAI's APIs for intermediate reasoning use the OpenAI API layer, which is a separate path from the browser-action layer. Proxy the browser actions; use OpenAI's own regional routing for the reasoning calls unless you're also testing regional reasoning behaviour (in which case route the reasoning API through our ChatGPT API guide).
Operator against geoblocked targets
A common use case: Operator tasks that operate on behalf of users in regions where the target website geoblocks cloud IPs. Default Operator configuration (without proxy) uses OpenAI's cloud infrastructure and fails on geoblocked targets.
Adding an ISP proxy in the target country unblocks this reliably. For EU-targeted tasks, route through DE ISP or FR ISP. For UK-targeted tasks, GB ISP. For JP, JP ISP.
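A small helper to make that routing explicit, mirroring the pairings above (DE/FR for EU targets, GB for UK, JP for Japan). The region keys are illustrative; use whatever region labels your task system carries:

```python
# Sketch: map a task's target region to the proxy exit country.
REGION_TO_COUNTRY = {
    "eu-de": "de",
    "eu-fr": "fr",
    "uk": "gb",
    "jp": "jp",
}

def country_for_task(region: str, default: str = "us") -> str:
    return REGION_TO_COUNTRY.get(region, default)

# Usage: proxy_config_for_task(task_id, country=country_for_task("uk"))
```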
Cost shape at scale
Operator sessions at scale run cheaper than Computer Use on proxy bandwidth (no screenshot round trips). A typical Operator task uses ~5-15 MB of proxy bandwidth. At 1000 tasks/day, that's 5-15 GB/day of ISP bandwidth — well within the Team plan's envelope.
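The bandwidth arithmetic above, as a one-line sketch (per-task MB times tasks per day, converted to GB/day):

```python
# Sketch: daily proxy bandwidth from task volume and per-task usage.
def daily_proxy_gb(tasks_per_day: int, mb_per_task: float) -> float:
    return tasks_per_day * mb_per_task / 1000  # MB -> GB

# 1000 tasks/day at 5-15 MB per task:
# daily_proxy_gb(1000, 5)   -> 5.0 GB/day
# daily_proxy_gb(1000, 15)  -> 15.0 GB/day
```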
The OpenAI inference cost dwarfs the proxy cost for Operator at scale. See pricing for the proxy side.