GPT-5-Codex: The Complete Guide (Setup, Best Practices, and Why It Matters)
Picture a glimpse into the near future: four Codex agents running in parallel, each building software I’ve assigned. There’s a new model in town—GPT‑5‑Codex—and it’s changing how and where we code.
TL;DR
GPT‑5‑Codex is OpenAI’s newest coding model, tuned for “agentic” software work in Codex (terminal/IDE + cloud). It pairs with you for quick edits and can also run independently on long tasks (refactors, test‑driven bug fixes, code review). It’s available inside Codex today (default for cloud tasks & code review), not yet as a standalone API model. Install the CLI or IDE extension, connect GitHub for cloud, add an AGENTS.md to steer the agent, and keep internet access sandboxed/allow‑listed. (OpenAI)
What is GPT‑5‑Codex?
GPT‑5‑Codex is a variant of GPT‑5 optimized specifically for coding inside Codex. It was trained on real engineering workflows (building features, debugging, large‑scale refactors, and PR reviews) and is designed to both pair interactively and execute longer tasks on its own. In OpenAI’s announcement, it’s now the default model for cloud tasks and code review in Codex; you can also opt into it locally via the CLI/IDE. (OpenAI)
OpenAI also published a system‑card addendum outlining additional safety training and product‑level mitigations (sandboxing, configurable network access). (OpenAI)
Availability note: GPT‑5‑Codex isn’t a public API model yet; you won’t see gpt-5-codex in the API model list or ChatGPT model picker. It’s used inside Codex (web/CLI/IDE) and enabled by your ChatGPT plan. (OpenAI Help Center)
Why it’s a big deal (in practice)
Dynamic reasoning time: snappy for small edits, thinks longer for complex work; OpenAI reports hours‑long autonomous runs on large tasks. (OpenAI)
PR‑grade code review: navigates dependencies, runs tests to validate fixes, and posts structured comments in GitHub. (OpenAI)
Front‑end aware: accepts screenshots/images as input in the cloud to inspect and iterate on UI. (OpenAI)
Everywhere you code: terminal (CLI), IDE (VS Code/Cursor), web/cloud, GitHub, and even the ChatGPT iOS app. (OpenAI)
Safer by design: sandboxed environments, approval modes, and network allow‑lists to reduce prompt‑injection/exfiltration risks. (OpenAI Developers)
Pricing & availability
Codex (which uses GPT‑5‑Codex for cloud tasks and code review) is included in ChatGPT Plus, Pro, Business, Edu, and Enterprise. Local usage can also be billed via API key at standard API rates if you prefer usage‑based billing; see Codex pricing & limits and your ChatGPT plan details.
To select the model and reasoning effort in the CLI, use the model switcher:
/model gpt-5-codex medium # try "high" for complex tasks
(You can also choose older API models with --model, but GPT‑5‑Codex is recommended inside Codex.) (OpenAI Developers)
Approval mode (safety)
The default is Auto: Codex can edit files and run commands inside the working directory without asking; approvals are required for anything outside it or for network access.
Use Read‑only for planning and chat; switch to Full access only when you explicitly want Codex to run with network access. (OpenAI Developers)
First prompts to try
# Understand a repo
codex "Give me a high-level tour of this codebase; map key modules and list top 5 risks."
# Fix a failing test (paste logs)
codex "Use the stack trace below to find and fix the bug. Run tests until green."
# Multi-file refactor
codex "Thread a request_id through auth → handlers → logger; update tests and PR message."
(Attach images with -i if useful: codex -i screenshot.png "Explain this UI bug".) (OpenAI Developers)
Install the Codex IDE extension for VS Code/Cursor/Windsurf from the marketplace, sign in with your ChatGPT account, then choose GPT‑5‑Codex and set your reasoning level. Windows is supported experimentally (best via WSL). (OpenAI Developers)
Helpful tip: put Codex in the right sidebar and use the model/effort switcher directly under the chat input. (OpenAI Developers)
Start a cloud task from web/IDE/iOS, or even from GitHub comments. (OpenAI Developers)
Configure the environment: setup scripts, toolchain versions, secrets, and internet policy. (OpenAI Developers)
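As a sketch, a cloud setup script (assuming a Node/pnpm project; adapt the commands to your own toolchain) might pin the toolchain and pre‑install dependencies so tasks start fast:

```shell
#!/usr/bin/env bash
# Hypothetical Codex cloud setup script (project-specific; not an official template).
set -euo pipefail

corepack enable                    # assumes a Node base image with corepack available
pnpm install --frozen-lockfile     # reproducible installs from the lockfile
pnpm build                         # warm the build cache before the agent starts
```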
Example prompts for cloud:
Ask mode: “Document the request flow and output a Mermaid diagram.”
Code mode: “There’s a memory‑safety vuln in <pkg>—find and fix; add tests; prepare a PR.” (OpenAI Developers)
Network access is off by default; enable carefully with an allow‑list (e.g., package registries) and restrict HTTP methods to GET/HEAD/OPTIONS where possible. (OpenAI Developers)
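As an illustration (hypothetical values; the actual settings live in the Codex environment configuration UI), a conservative policy might look like:

```
Internet access:  On
Allowed domains:  registry.npmjs.org, pypi.org, proxy.golang.org
HTTP methods:     GET, HEAD, OPTIONS
```

Limiting verbs to read‑only methods means that even if a prompt‑injection attempt reaches the agent, it cannot easily POST data out of the sandbox.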
Built‑in code review (GitHub)
Enable Code review in Codex settings for your repository, then mention @codex review on a PR to trigger a review. Codex analyzes the diff and codebase, runs tests to validate behavior, and posts comments in the PR. (OpenAI Developers)
Steerability with AGENTS.md (your agent’s playbook)
Add an AGENTS.md at the repo root to encode build commands, test/lint rules, coding conventions, and PR expectations. Codex (and other agents) will read this and follow your house style and workflows—think of it as a README for agents. (Agents)
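A minimal AGENTS.md (contents are illustrative; the pnpm commands are assumptions to swap for your own build tooling) might look like:

```markdown
# AGENTS.md

## Build & test
- Install: pnpm install --frozen-lockfile
- Lint: pnpm lint
- Test: pnpm test (must be green before any PR)

## Conventions
- TypeScript strict mode; no `any` without a justifying comment.
- Keep public API signatures stable unless the task says otherwise.

## Pull requests
- Conventional-commit titles (feat:, fix:, refactor:).
- Summarize changes, risks, and test coverage in the PR description.
```

Keep it short and imperative: the agent follows concrete commands and rules far more reliably than long prose.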
Feature from spec: “Implement <feature> across <files/modules>. Follow AGENTS.md. Write tests that cover <cases>. Keep public API stable. When finished, summarize changes and open a PR.”
Refactor at scale: “Refactor <pattern> across the repo. Preserve behavior. Update docs and tests. Run pnpm lint && pnpm test until green. Provide a migration note.”
Bug fix from stack trace: “Use this stack trace to reproduce and fix the bug. Add a regression test and explain the root cause. Paste git diff in the chat before opening a PR.”
Safety & controls you should actually use
Approval modes (CLI/IDE): keep Auto or Read‑only unless you need Full access; review proposed edits. (OpenAI Developers)
Cloud sandbox: configure images, setup scripts, secrets, and maintenance scripts; cache containers for speed. (OpenAI Developers)
Internet access: off by default; allow‑list domains and restrict HTTP verbs; beware prompt‑injection in untrusted content. (OpenAI Developers)
Model safeguards: see the system‑card addendum for GPT‑5‑Codex. (OpenAI)
Limitations & gotchas
Not an API model (yet): you can’t call gpt-5-codex directly in the API; use Codex clients. (OpenAI Help Center)
You won’t see it in ChatGPT’s model picker—it powers Codex experiences behind the scenes. (OpenAI Help Center)
Windows: supported via WSL for the best experience today. (OpenAI Developers)
Always review outputs: agentic code is powerful but not infallible—treat Codex as an additional reviewer, not a replacement. (OpenAI)
Copy‑paste workflows
A) Local pairing (CLI)
# 1) Install & sign in
npm i -g @openai/codex && codex
# 2) Switch model & effort
/model gpt-5-codex high
# 3) Drive a refactor
"Refactor auth middleware to thread request_id; update tests; run until green."
# 4) Review & commit
"Summarize changes, risks, and test coverage. Prepare a conventional-commit message."
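Conventional‑commit messages follow the shape type(scope): subject. A quick shell check (a hypothetical helper, not part of the Codex CLI) lets you validate a message before committing:

```shell
# Hypothetical helper (not part of the Codex CLI): check that a commit
# message's first line follows the Conventional Commits format
# "type(scope)!: subject".
is_conventional() {
  printf '%s' "$1" | grep -Eq '^(feat|fix|refactor|docs|test|chore|perf|build|ci)(\([a-z0-9-]+\))?!?: .+'
}

if is_conventional "refactor(auth): thread request_id through middleware"; then
  echo "conventional"
fi
```

You could wire this into a git commit-msg hook so both human and agent commits stay consistent.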
B) Cloud delegation Configure the environment (setup script, toolchain versions, secrets) and leave internet access off to start.
Delegate: “Add pagination to /users, update API docs, and include unit/integration tests. Open a PR.” (OpenAI Developers)
C) Code review (GitHub) Comment on a PR: @codex review. Optional variants: @codex review for security vulnerabilities or …for outdated dependencies. (OpenAI Developers)
Is GPT‑5‑Codex available via the API? Not yet (as of 09/16/25). Use Codex (web/CLI/IDE). The Help Center calls out that gpt-5-codex isn’t currently an API model. (OpenAI Help Center)
Where do I turn it on? In Codex: it’s the default in cloud & code review, and you can select it locally in CLI/IDE via the model switcher. (OpenAI)
Is it safe for sensitive repos? Codex runs in a sandbox by default; keep internet access off or tightly allow‑listed and review changes. See the safety docs and system‑card addendum. (OpenAI Developers)