OpenAI no longer exposes raw reasoning traces

OpenAI used to expose them via the /chat/completions API, but the newer Responses API doesn’t expose chain-of-thought (CoT). OpenAI is pushing Responses hard on the strength of its stateful features, but as a developer I still find it easier to manage state myself with the completions API.
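Managing state yourself with the completions API just means keeping the message list on the client and resending the full history each turn. Here is a minimal sketch of that pattern; the model name is illustrative and the actual HTTP call is elided, since only the state handling is the point:

```python
# Client-side conversation state for the /chat/completions API:
# the full history lives in our process, not on OpenAI's servers.

class Conversation:
    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def user_turn(self, text: str) -> list[dict]:
        """Append the user message and return the payload to send."""
        self.messages.append({"role": "user", "content": text})
        return self.messages

    def record_reply(self, text: str) -> None:
        """Append the assistant reply so the next turn includes it."""
        self.messages.append({"role": "assistant", "content": text})

convo = Conversation("You are a terse assistant.")
payload = {"model": "gpt-4o", "messages": convo.user_turn("Hi!")}
# reply = client.chat.completions.create(**payload)  # actual API call here
convo.record_reply("Hello!")
print(len(convo.messages))  # system + user + assistant
```

Because the client owns the transcript, you can log it, edit it, truncate it, or replay it for debugging, which is exactly the control the stateful Responses API takes away.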

It looks deliberate: since GPT-5, OpenAI hasn’t exposed raw reasoning traces. The reasons are unclear; safety, security, or doubts about the reliability of the traces are all plausible explanations. Whatever the rationale, the traces are now a tightly guarded secret.

Google’s Gemini has moved in the same direction: hiding raw reasoning in Gemini 2.5 Pro provoked backlash from developers who relied on CoT to build and debug agents.

Right now, among the major foundation-model providers, Anthropic appears to be the only one still offering raw reasoning traces via API.
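With Anthropic’s Messages API, extended thinking is enabled per request (via a `thinking` parameter with a token budget), and the reasoning comes back as content blocks of type `"thinking"` alongside the final text. A small sketch of pulling the trace out of a response; the sample response body is hand-written for illustration:

```python
# Extract raw reasoning from an Anthropic Messages API response.
# With extended thinking enabled, response content is a list of
# blocks, and the reasoning arrives in blocks of type "thinking".

def extract_thinking(content_blocks: list[dict]) -> str:
    """Concatenate the text of all 'thinking' blocks in order."""
    return "\n".join(
        block["thinking"]
        for block in content_blocks
        if block["type"] == "thinking"
    )

# Hand-written sample mirroring the documented block shapes.
sample_content = [
    {"type": "thinking", "thinking": "User wants X; verify Y first."},
    {"type": "text", "text": "Here is the answer."},
]
print(extract_thinking(sample_content))
```

In an agent loop, dumping that string next to each tool call is often the fastest way to see why the model chose the action it did.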

As someone who builds agents, I find that losing raw CoT makes debugging much harder, and it pushes me to rely more on Anthropic models.