This page answers the container-pricing question directly. It keeps the current container rate, the runtime trigger, and realistic workload patterns in one source-linked view so code interpreter cost does not disappear behind the model row.
Cost anatomy
If code execution happens in the path, the container line should be estimated alongside tokens rather than treated as a hidden part of the model row.
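A minimal sketch of that two-line estimate, assuming the $0.03-per-container rate this page uses; the token prices below are placeholder assumptions for illustration, not quoted figures:

```python
# Keep the container line and the model row as separate, visible meters.
# CONTAINER_RATE matches the rate used in the workload table on this page;
# the token prices are placeholder assumptions, not published figures.

CONTAINER_RATE = 0.03              # $ per container (per 20-minute window)
INPUT_PRICE = 2.50 / 1_000_000     # assumed $ per input token (placeholder)
OUTPUT_PRICE = 10.00 / 1_000_000   # assumed $ per output token (placeholder)

def monthly_estimate(containers: int, input_tokens: int, output_tokens: int) -> dict:
    """Return the model row and the container row as separate lines."""
    model_line = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
    container_line = containers * CONTAINER_RATE
    return {
        "model": round(model_line, 2),
        "containers": round(container_line, 2),
        "total": round(model_line + container_line, 2),
    }

# 400 sessions, each averaging 5k input / 1k output tokens:
print(monthly_estimate(400, 400 * 5_000, 400 * 1_000))
```

Keeping the two lines in one dictionary makes it harder for the container meter to vanish into the model row when the estimate is reviewed.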
Workload examples
These examples isolate the runtime line so the container meter can be seen before model tokens or retrieval costs are layered back in.
| Scenario | Workload | Runtime meter | Model overlap | Third meter | Decision read |
|---|---|---|---|---|---|
| Low-frequency analyst sessions | 20 code-interpreter sessions during the month, one container each. | 20 x $0.03 per container per 20 minutes = $0.60 | Model tokens still matter, but runtime is small enough that the container line stays visible and manageable. | No extra hosted-tool assumption in this example. This is the cleanest runtime-only read. | At low frequency, container pricing is a visible but small add-on meter. |
| Sustained multi-session operations | 400 code-interpreter turns during the month, one container each. | 400 x $0.03 per container per 20 minutes = $12.00 | Runtime still looks modest, but it now repeats often enough that the container line should be priced explicitly beside the model row. | If file search or web search also sits in the path, those meters stack separately. | Once the workflow runs all day, runtime stops being background noise and becomes a repeatable operations cost. |
| Container-heavy automation | 25,000 code-execution jobs during the month, one container per job. | 25,000 x $0.03 per container per 20 minutes = $750.00 | Model tokens now sit underneath a large runtime line, so changing the model alone no longer solves the budget question. | Additional file search or web search usage would stack on top of the runtime bill rather than replacing it. | At automation scale, container count becomes the budget question before headline model pricing does. |
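The runtime-meter column above is plain multiplication, so it can be re-run for any session count; the $0.03 rate is the one the table already assumes:

```python
CONTAINER_RATE = 0.03  # $ per container per 20 minutes, as used in the table above

def runtime_meter(containers: int) -> float:
    """Container runtime line only -- model tokens and other tools are separate meters."""
    return round(containers * CONTAINER_RATE, 2)

# Re-derive the three table scenarios:
for label, sessions in [("analyst", 20), ("sustained", 400), ("automation", 25_000)]:
    print(f"{label}: {sessions} containers -> ${runtime_meter(sessions):.2f}")
```

Running this reproduces the $0.60, $12.00, and $750.00 rows, which makes it easy to interpolate for a workload that sits between the examples.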
Control levers
Code interpreter cost control comes from how often runtime is invoked and whether it belongs in the path at all.
If a turn only needs structured text generation, keep it off the code interpreter path. Containers should be reserved for genuine execution, analysis, or file-processing steps.
A model swap can still help, but only after the container line has been isolated. Otherwise teams over-credit the cheaper model row for savings it cannot produce.
The real cost jump comes from repeated container starts across sessions or jobs. Start by measuring how often runtime is invoked before tuning prompts or token payloads.
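A minimal sketch of that measurement step, assuming a list of turn records with a hypothetical `used_code_interpreter` flag (the field name is illustrative, not an API guarantee):

```python
from collections import Counter

def container_start_report(turns: list[dict]) -> dict:
    """Count how often runtime was actually invoked before tuning anything else."""
    by_kind = Counter(
        "runtime" if t.get("used_code_interpreter") else "text-only" for t in turns
    )
    total = len(turns)
    starts = by_kind["runtime"]
    return {
        "turns": total,
        "container_starts": starts,
        "runtime_share": round(starts / total, 2) if total else 0.0,
    }

turns = [
    {"used_code_interpreter": True},   # genuine execution step
    {"used_code_interpreter": False},  # structured text only -- keep off the runtime path
    {"used_code_interpreter": False},
    {"used_code_interpreter": True},
]
print(container_start_report(turns))
```

A high `runtime_share` on turns that only needed structured text is the clearest signal that routing, not prompt tuning, is the lever to pull first.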
Decision signals
Use these signals to decide whether code interpreter is just an add-on or already the main cost risk in the workflow.
If code execution is rare, container pricing is a visible but modest add-on. The model row and other tools still drive most of the estimate.
Frequent analysis turns or automation jobs make container count the real budget driver long before model-token differences fully show up.
If the workflow still starts the same number of containers, a cheaper model row may only trim a small part of the total bill.
Official sources
This page keeps the source set narrow so the cost brief can stay auditable instead of drifting into guesswork.
Source of record for the current code interpreter container price and the effective billing unit.
Documents the hosted tool path and runtime behavior that turn code interpreter usage into a container-cost question instead of a pure model-token question.
Continue the site
Use the groups below to move laterally through the decision, not back out into another doc hunt.
Related pages
Stay in the same decision neighborhood instead of backing out to search.
Model pricing, hosted-tool costs, and fit constraints that materially change the operating estimate.
Tool-cost brief for file search pricing across storage, tool calls, and model-token exposure.
Tool-cost brief for web search pricing across standard and preview search paths.
Compare pages
Open the pages that turn this topic into a side-by-side decision.
Replacement pages
Use the likely substitutes, migration targets, or fallback choices as the next click.
Source category pages
Trace the source families behind this page instead of opening random docs in isolation.