Provide a project directory and a prompt, then watch run_codex.sh stream its
output directly in the browser. The server relays stdout and stderr to the page in real time via server-side streaming.
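The relay described above can be sketched as a generator that launches the script and yields its combined output line by line; a web layer would forward each line to the browser. This is a minimal Python sketch under assumed plumbing, not the app's actual implementation, and the `stream_process` name and demo command are hypothetical.

```python
import subprocess


def stream_process(cmd):
    """Launch cmd and yield its combined stdout/stderr line by line.

    Hypothetical sketch of server-side streaming: stderr is merged into
    stdout so both streams reach the client in the order they are
    produced. A web handler would forward each yielded line to the
    browser as a chunked response or server-sent event.
    """
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # merge stderr into the same stream
        text=True,
        bufsize=1,                 # line-buffered so lines arrive promptly
    )
    for line in proc.stdout:
        yield line
    proc.wait()


if __name__ == "__main__":
    # Demo with a short shell command standing in for run_codex.sh.
    for line in stream_process(["sh", "-c", "echo hello; echo err >&2"]):
        print(line, end="")
```

In a real handler the generator would be wrapped by whatever streaming mechanism the framework provides (chunked HTTP responses or server-sent events).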
Set the default model used when no explicit selection is provided. Enter the full provider and model identifier
(for example, openrouter/openai/gpt-5-mini).
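An identifier of this shape can be split into its provider prefix and model name by treating the first path segment as the provider. The helper below is a hypothetical illustration of that convention, not code from this project.

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a full identifier such as 'openrouter/openai/gpt-5-mini'
    into (provider, model).

    Hypothetical helper: assumes the provider is the first slash-separated
    segment and everything after it names the model.
    """
    provider, _, model = model_id.partition("/")
    if not provider or not model:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, model
```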