diff --git a/registry/coder-labs/modules/codex/README.md b/registry/coder-labs/modules/codex/README.md index 16786d023..62f33c47b 100644 --- a/registry/coder-labs/modules/codex/README.md +++ b/registry/coder-labs/modules/codex/README.md @@ -1,148 +1,103 @@ --- display_name: Codex CLI icon: ../../../../.icons/openai.svg -description: Run Codex CLI in your workspace with AgentAPI integration +description: Install and configure the Codex CLI in your workspace. verified: true -tags: [agent, codex, ai, openai, tasks, aibridge] +tags: [agent, codex, ai, openai, ai-gateway] --- # Codex CLI -Run Codex CLI in your workspace to access OpenAI's models through the Codex interface, with custom pre/post install scripts. This module integrates with [AgentAPI](https://github.com/coder/agentapi) for Coder Tasks compatibility. +Install and configure the [Codex CLI](https://github.com/openai/codex) in your workspace. Starting Codex is left to the caller (template command, IDE launcher, or a custom `coder_script`). ```tf module "codex" { source = "registry.coder.com/coder-labs/codex/coder" - version = "4.3.1" - agent_id = coder_agent.example.id + version = "5.0.0" + agent_id = coder_agent.main.id openai_api_key = var.openai_api_key - workdir = "/home/coder/project" } ``` -## Prerequisites - -- OpenAI API key for Codex access +> [!WARNING] +> If upgrading from v4.x.x of this module: v5 is a major refactor that drops support for [Coder Tasks](https://coder.com/docs/ai-coder/tasks) and [Boundary](https://coder.com/docs/ai-coder/agent-firewall). v5 also assumes npm is pre-installed; it no longer bootstraps Node.js. Keep using v4.x.x if you depend on them. ## Examples -### Run standalone +### Standalone mode with a launcher app ```tf -module "codex" { - count = data.coder_workspace.me.start_count - source = "registry.coder.com/coder-labs/codex/coder" - version = "4.3.1" - agent_id = coder_agent.example.id - openai_api_key = "..." 
- workdir = "/home/coder/project" - report_tasks = false +locals { + codex_workdir = "/home/coder/project" } -``` -### Usage with AI Bridge - -[AI Bridge](https://coder.com/docs/ai-coder/ai-bridge) is a Premium Coder feature that provides centralized LLM proxy management. To use AI Bridge, set `enable_aibridge = true`. Requires Coder version 2.30+ - -For tasks integration with AI Bridge, add `enable_aibridge = true` to the [Usage with Tasks](#usage-with-tasks) example below. - -#### Standalone usage with AI Bridge - -```tf module "codex" { - source = "registry.coder.com/coder-labs/codex/coder" - version = "4.3.1" - agent_id = coder_agent.example.id - workdir = "/home/coder/project" - enable_aibridge = true + source = "registry.coder.com/coder-labs/codex/coder" + version = "5.0.0" + agent_id = coder_agent.main.id + workdir = local.codex_workdir + openai_api_key = var.openai_api_key } -``` - -When `enable_aibridge = true`, the module: - -- Configures Codex to use the aibridge model_provider with `base_url` pointing to `${data.coder_workspace.me.access_url}/api/v2/aibridge/openai/v1` and `env_key` pointing to the workspace owner's session token -```toml -model_provider = "aibridge" - -[model_providers.aibridge] -name = "AI Bridge" -base_url = "https://example.coder.com/api/v2/aibridge/openai/v1" -env_key = "CODER_AIBRIDGE_SESSION_TOKEN" -wire_api = "responses" +resource "coder_app" "codex" { + agent_id = coder_agent.main.id + slug = "codex" + display_name = "Codex" + icon = "/icon/openai.svg" + open_in = "slim-window" + command = <<-EOT + #!/bin/bash + set -e + cd ${local.codex_workdir} + codex + EOT +} ``` -This allows Codex to route API requests through Coder's AI Bridge instead of directly to OpenAI's API. -Template build will fail if `openai_api_key` is provided alongside `enable_aibridge = true`. +### Usage with AI Gateway -### Usage with Tasks - -This example shows how to configure Codex with Coder tasks. 
+[AI Gateway](https://coder.com/docs/ai-coder/ai-gateway) is a Premium Coder feature that provides centralized LLM proxy management. Requires Coder >= 2.30.0. ```tf -resource "coder_ai_task" "task" { - count = data.coder_workspace.me.start_count - app_id = module.codex.task_app_id -} - -data "coder_task" "me" {} - module "codex" { - source = "registry.coder.com/coder-labs/codex/coder" - version = "4.3.1" - agent_id = coder_agent.example.id - openai_api_key = "..." - ai_prompt = data.coder_task.me.prompt - workdir = "/home/coder/project" - - # Optional: route through AI Bridge (Premium feature) - # enable_aibridge = true + source = "registry.coder.com/coder-labs/codex/coder" + version = "5.0.0" + agent_id = coder_agent.main.id + workdir = "/home/coder/project" + enable_ai_gateway = true } ``` -### Usage with Agent Boundaries - -This example shows how to configure the Codex module to run the agent behind a process-level boundary that restricts its network access. +When `enable_ai_gateway = true`, the module configures Codex to use the `aibridge` model provider in `config.toml` with the workspace owner's session token for authentication. -By default, when `enable_boundary = true`, the module uses `coder boundary` subcommand (provided by Coder) without requiring any installation. - -```tf -module "codex" { - source = "registry.coder.com/coder-labs/codex/coder" - version = "4.3.1" - agent_id = coder_agent.main.id - openai_api_key = var.openai_api_key - workdir = "/home/coder/project" - enable_boundary = true -} -``` +> [!CAUTION] +> `enable_ai_gateway = true` is mutually exclusive with `openai_api_key`. Setting both fails at plan time. > [!NOTE] -> For developers: The module also supports installing boundary from a release version (`use_boundary_directly = true`) or compiling from source (`compile_boundary_from_source = true`). These are escape hatches for development and testing purposes. 
+> If you provide a custom `base_config_toml`, the module writes it verbatim and does not inject `model_provider = "aibridge"` automatically. Add it to your config yourself: +> +> ```toml +> model_provider = "aibridge" +> ``` ### Advanced Configuration -This example shows additional configuration options for custom models, MCP servers, and base configuration. - ```tf module "codex" { source = "registry.coder.com/coder-labs/codex/coder" - version = "4.3.1" - agent_id = coder_agent.example.id - openai_api_key = "..." + version = "5.0.0" + agent_id = coder_agent.main.id workdir = "/home/coder/project" + openai_api_key = var.openai_api_key - codex_version = "0.1.0" # Pin to a specific version - codex_model = "gpt-4o" # Custom model + codex_version = "0.1.0" - # Override default configuration base_config_toml = <<-EOT sandbox_mode = "danger-full-access" approval_policy = "never" preferred_auth_method = "apikey" EOT - # Add extra MCP servers additional_mcp_servers = <<-EOT [mcp_servers.GitHub] command = "npx" @@ -152,61 +107,48 @@ module "codex" { } ``` -> [!WARNING] -> This module configures Codex with a `workspace-write` sandbox that allows AI tasks to read/write files in the specified workdir. While the sandbox provides security boundaries, Codex can still modify files within the workspace. Use this module _only_ in trusted environments and be aware of the security implications. - -## How it Works - -- **Install**: The module installs Codex CLI and sets up the environment -- **System Prompt**: If `codex_system_prompt` is set, writes the prompt to `AGENTS.md` in the `~/.codex/` directory -- **Start**: Launches Codex CLI in the specified directory, wrapped by AgentAPI -- **Configuration**: Sets `OPENAI_API_KEY` environment variable and passes `--model` flag to Codex CLI (if variables provided) -- **Session Continuity**: When `continue = true` (default), the module automatically tracks task sessions in `~/.codex-module/.codex-task-session`. 
On workspace restart, it resumes the existing session with full conversation history. Set `continue = false` to always start fresh sessions. - -## State Persistence +### Serialize a downstream `coder_script` after the install pipeline -AgentAPI can save and restore its conversation state to disk across workspace restarts. This complements `continue` (which resumes the Codex CLI session) by also preserving the AgentAPI-level context. Enabled by default, requires agentapi >= v0.12.0 (older versions skip it with a warning). - -To disable: +The module exposes the `scripts` output: an ordered list of `coder exp sync` names for the scripts this module creates (pre_install, install, post_install). Scripts that were not configured are absent. ```tf module "codex" { - # ... other config - enable_state_persistence = false + source = "registry.coder.com/coder-labs/codex/coder" + version = "5.0.0" + agent_id = coder_agent.main.id + openai_api_key = var.openai_api_key +} + +resource "coder_script" "post_codex" { + agent_id = coder_agent.main.id + display_name = "Run after Codex install" + run_on_start = true + script = <<-EOT + #!/bin/bash + set -euo pipefail + trap 'coder exp sync complete post-codex' EXIT + coder exp sync want post-codex ${join(" ", module.codex.scripts)} + coder exp sync start post-codex + + codex --version + EOT } ``` ## Configuration -### Default Configuration - -When no custom `base_config_toml` is provided, the module uses these secure defaults: - -```toml -sandbox_mode = "workspace-write" -approval_policy = "never" -preferred_auth_method = "apikey" - -[sandbox_workspace_write] -network_access = true -``` - -> [!NOTE] -> If no custom configuration is provided, the module uses secure defaults. The Coder MCP server is always included automatically. For containerized workspaces (Docker/Kubernetes), you may need `sandbox_mode = "danger-full-access"` to avoid permission issues. 
For advanced options, see [Codex config docs](https://github.com/openai/codex/blob/main/codex-rs/config.md). +When no custom `base_config_toml` is provided, the module uses a minimal default with `preferred_auth_method = "apikey"`. For advanced options, see [Codex config docs](https://github.com/openai/codex/blob/main/codex-rs/config.md). ## Troubleshooting -- Check installation and startup logs in `~/.codex-module/` -- Ensure your OpenAI API key has access to the specified model +Check the log files in `~/.coder-modules/coder-labs/codex/logs/` for detailed information. -> [!IMPORTANT] -> To use tasks with Codex CLI, ensure you have the `openai_api_key` variable set. [Tasks Template Example](https://registry.coder.com/templates/coder-labs/tasks-docker). -> The module automatically configures Codex with your API key and model preferences. -> workdir is a required variable for the module to function correctly. +```bash +cat ~/.coder-modules/coder-labs/codex/logs/install.log +cat ~/.coder-modules/coder-labs/codex/logs/pre_install.log +cat ~/.coder-modules/coder-labs/codex/logs/post_install.log +``` ## References - [Codex CLI Documentation](https://github.com/openai/codex) -- [AgentAPI Documentation](https://github.com/coder/agentapi) -- [Coder AI Agents Guide](https://coder.com/docs/tutorials/ai-agents) -- [AI Bridge](https://coder.com/docs/ai-coder/ai-bridge) diff --git a/registry/coder-labs/modules/codex/main.test.ts b/registry/coder-labs/modules/codex/main.test.ts index 13055867f..750a9ecd7 100644 --- a/registry/coder-labs/modules/codex/main.test.ts +++ b/registry/coder-labs/modules/codex/main.test.ts @@ -6,15 +6,67 @@ import { beforeAll, expect, } from "bun:test"; -import { execContainer, readFileContainer, runTerraformInit } from "~test"; import { - loadTestFile, + execContainer, + readFileContainer, + removeContainer, + runContainer, + runTerraformApply, + runTerraformInit, + TerraformState, +} from "~test"; +import { + extractCoderEnvVars, writeExecutable, - 
setup as setupUtil,
-  execModuleScript,
-  expectAgentAPIStarted,
 } from "../../../coder/modules/agentapi/test-util";
-import dedent from "dedent";
+import path from "path";
+
+interface ModuleScripts {
+  pre_install?: string;
+  install: string;
+  post_install?: string;
+}
+
+const SCRIPT_SUFFIXES = [
+  "Pre-Install Script",
+  "Install Script",
+  "Post-Install Script",
+] as const;
+
+const collectScripts = (state: TerraformState): ModuleScripts => {
+  const byDisplayName: Record<string, string> = {};
+  for (const resource of state.resources) {
+    if (resource.type !== "coder_script") continue;
+    for (const instance of resource.instances) {
+      const attrs = instance.attributes as Record<string, unknown>;
+      const displayName = attrs.display_name as string | undefined;
+      const script = attrs.script as string | undefined;
+      if (displayName && script) {
+        byDisplayName[displayName] = script;
+      }
+    }
+  }
+  const scripts: Partial<ModuleScripts> = {};
+  for (const suffix of SCRIPT_SUFFIXES) {
+    const key = `Codex: ${suffix}`;
+    if (!(key in byDisplayName)) continue;
+    switch (suffix) {
+      case "Pre-Install Script":
+        scripts.pre_install = byDisplayName[key];
+        break;
+      case "Install Script":
+        scripts.install = byDisplayName[key];
+        break;
+      case "Post-Install Script":
+        scripts.post_install = byDisplayName[key];
+        break;
+    }
+  }
+  if (!scripts.install) {
+    throw new Error("install script not found in terraform state");
+  }
+  return scripts as ModuleScripts;
+};
 
 let cleanupFunctions: (() => Promise<void>)[] = [];
 const registerCleanup = (cleanup: () => Promise<void>) => {
@@ -33,36 +85,90 @@ afterEach(async () => {
 });
 
 interface SetupProps {
-  skipAgentAPIMock?: boolean;
   skipCodexMock?: boolean;
   moduleVariables?: Record<string, string>;
-  agentapiMockScript?: string;
 }
 
-const setup = async (props?: SetupProps): Promise<{ id: string }> => {
+const setup = async (
+  props?: SetupProps,
+): Promise<{
+  id: string;
+  coderEnvVars: Record<string, string>;
+  scripts: ModuleScripts;
+}> => {
   const projectDir = "/home/coder/project";
-  const { id } = await setupUtil({
-    
moduleDir: import.meta.dir,
-    moduleVariables: {
-      install_codex: props?.skipCodexMock ? "true" : "false",
-      install_agentapi: props?.skipAgentAPIMock ? "true" : "false",
-      codex_model: "gpt-4-turbo",
-      workdir: "/home/coder",
-      ...props?.moduleVariables,
-    },
-    registerCleanup,
-    projectDir,
-    skipAgentAPIMock: props?.skipAgentAPIMock,
-    agentapiMockScript: props?.agentapiMockScript,
+  const moduleDir = path.resolve(import.meta.dir);
+  const state = await runTerraformApply(moduleDir, {
+    agent_id: "foo",
+    workdir: projectDir,
+    install_codex: "false",
+    ...props?.moduleVariables,
+  });
+  const scripts = collectScripts(state);
+  const coderEnvVars = extractCoderEnvVars(state);
+
+  const id = await runContainer("codercom/enterprise-node:latest");
+  registerCleanup(async () => {
+    if (process.env["DEBUG"] === "true" || process.env["DEBUG"] === "1") {
+      console.log(`Not removing container ${id} in debug mode`);
+      return;
+    }
+    await removeContainer(id);
+  });
+
+  await execContainer(id, ["bash", "-c", `mkdir -p '${projectDir}'`]);
+  await writeExecutable({
+    containerId: id,
+    filePath: "/usr/bin/coder",
+    content: "#!/bin/bash\nexit 0\n",
   });
   if (!props?.skipCodexMock) {
     await writeExecutable({
       containerId: id,
       filePath: "/usr/bin/codex",
-      content: await loadTestFile(import.meta.dir, "codex-mock.sh"),
+      content: await Bun.file(
+        path.join(moduleDir, "testdata", "codex-mock.sh"),
+      ).text(),
     });
   }
-  return { id };
+  return { id, coderEnvVars, scripts };
+};
+
+const runScripts = async (
+  id: string,
+  scripts: ModuleScripts,
+  env?: Record<string, string>,
+) => {
+  const entries = env ? Object.entries(env) : [];
+  const envArgs =
+    entries.length > 0
+      ? 
entries + .map( + ([key, value]) => `export ${key}="${value.replace(/"/g, '\\"')}"`, + ) + .join(" && ") + " && " + : ""; + const ordered: [string, string | undefined][] = [ + ["pre_install", scripts.pre_install], + ["install", scripts.install], + ["post_install", scripts.post_install], + ]; + for (const [name, script] of ordered) { + if (!script) continue; + const target = `/tmp/coder-utils-${name}.sh`; + await writeExecutable({ + containerId: id, + filePath: target, + content: script, + }); + const resp = await execContainer(id, ["bash", "-c", `${envArgs}${target}`]); + if (resp.exitCode !== 0) { + console.log(`script ${name} failed:`); + console.log(resp.stdout); + console.log(resp.stderr); + throw new Error(`coder-utils ${name} script exited ${resp.exitCode}`); + } + } }; setDefaultTimeout(60 * 1000); @@ -73,444 +179,231 @@ describe("codex", async () => { }); test("happy-path", async () => { - const { id } = await setup(); - await execModuleScript(id); - await expectAgentAPIStarted(id); + const { id, scripts } = await setup(); + await runScripts(id, scripts); + const installLog = await readFileContainer( + id, + "/home/coder/.coder-modules/coder-labs/codex/logs/install.log", + ); + expect(installLog).toContain("Skipping Codex installation"); }); test("install-codex-version", async () => { - const version_to_install = "0.10.0"; - const { id } = await setup({ + const version = "0.10.0"; + const { id, coderEnvVars, scripts } = await setup({ skipCodexMock: true, moduleVariables: { install_codex: "true", - codex_version: version_to_install, + codex_version: version, }, }); - await execModuleScript(id); - const resp = await execContainer(id, [ - "bash", - "-c", - `cat /home/coder/.codex-module/install.log`, - ]); - expect(resp.stdout).toContain(version_to_install); + await runScripts(id, scripts, coderEnvVars); + const installLog = await readFileContainer( + id, + "/home/coder/.coder-modules/coder-labs/codex/logs/install.log", + ); + 
expect(installLog).toContain(version); }); - test("check-latest-codex-version-works", async () => { - const { id } = await setup({ - skipCodexMock: true, - skipAgentAPIMock: true, + test("openai-api-key", async () => { + const apiKey = "test-api-key-123"; + const { coderEnvVars } = await setup({ moduleVariables: { - install_codex: "true", + openai_api_key: apiKey, }, }); - await execModuleScript(id); - await expectAgentAPIStarted(id); + expect(coderEnvVars["OPENAI_API_KEY"]).toBe(apiKey); }); test("base-config-toml", async () => { - const baseConfig = dedent` - sandbox_mode = "danger-full-access" - approval_policy = "never" - preferred_auth_method = "apikey" - - [custom_section] - new_feature = true - `.trim(); - const { id } = await setup({ + const baseConfig = [ + 'sandbox_mode = "danger-full-access"', + 'approval_policy = "never"', + 'preferred_auth_method = "apikey"', + "", + "[custom_section]", + "new_feature = true", + ].join("\n"); + const { id, scripts } = await setup({ moduleVariables: { base_config_toml: baseConfig, }, }); - await execModuleScript(id); + await runScripts(id, scripts); const resp = await readFileContainer(id, "/home/coder/.codex/config.toml"); expect(resp).toContain('sandbox_mode = "danger-full-access"'); expect(resp).toContain('preferred_auth_method = "apikey"'); expect(resp).toContain("[custom_section]"); - expect(resp).toContain("[mcp_servers.Coder]"); }); - test("codex-api-key", async () => { - const apiKey = "test-api-key-123"; - const { id } = await setup({ + test("additional-mcp-servers", async () => { + const additional = [ + "[mcp_servers.GitHub]", + 'command = "npx"', + 'args = ["-y", "@modelcontextprotocol/server-github"]', + 'type = "stdio"', + 'description = "GitHub integration"', + ].join("\n"); + const { id, scripts } = await setup({ moduleVariables: { - openai_api_key: apiKey, + additional_mcp_servers: additional, }, }); - await execModuleScript(id); + await runScripts(id, scripts); + const resp = await 
readFileContainer(id, "/home/coder/.codex/config.toml"); + expect(resp).toContain("[mcp_servers.GitHub]"); + expect(resp).toContain("GitHub integration"); + }); - const resp = await readFileContainer( - id, - "/home/coder/.codex-module/agentapi-start.log", - ); - expect(resp).toContain("OpenAI API Key: Provided"); + test("minimal-default-config", async () => { + const { id, scripts } = await setup(); + await runScripts(id, scripts); + const resp = await readFileContainer(id, "/home/coder/.codex/config.toml"); + expect(resp).toContain('preferred_auth_method = "apikey"'); }); test("pre-post-install-scripts", async () => { - const { id } = await setup({ + const { id, scripts } = await setup({ moduleVariables: { - pre_install_script: "#!/bin/bash\necho 'pre-install-script'", - post_install_script: "#!/bin/bash\necho 'post-install-script'", + pre_install_script: "#!/bin/bash\necho 'codex-pre-install-script'", + post_install_script: "#!/bin/bash\necho 'codex-post-install-script'", }, }); - await execModuleScript(id); + await runScripts(id, scripts); + const preInstallLog = await readFileContainer( id, - "/home/coder/.codex-module/pre_install.log", + "/home/coder/.coder-modules/coder-labs/codex/logs/pre_install.log", ); - expect(preInstallLog).toContain("pre-install-script"); + expect(preInstallLog).toContain("codex-pre-install-script"); + const postInstallLog = await readFileContainer( id, - "/home/coder/.codex-module/post_install.log", + "/home/coder/.coder-modules/coder-labs/codex/logs/post_install.log", ); - expect(postInstallLog).toContain("post-install-script"); + expect(postInstallLog).toContain("codex-post-install-script"); }); test("workdir-variable", async () => { - const workdir = "/tmp/codex-test-workdir"; - const { id } = await setup({ - skipCodexMock: false, + const workdir = "/home/coder/codex-test-folder"; + const { id, scripts } = await setup({ moduleVariables: { workdir, }, }); - await execModuleScript(id); - const resp = await readFileContainer( + await 
runScripts(id, scripts); + const installLog = await readFileContainer( id, - "/home/coder/.codex-module/install.log", + "/home/coder/.coder-modules/coder-labs/codex/logs/install.log", ); - expect(resp).toContain(workdir); - }); - - test("additional-mcp-servers", async () => { - const additional = dedent` - [mcp_servers.GitHub] - command = "npx" - args = ["-y", "@modelcontextprotocol/server-github"] - type = "stdio" - description = "GitHub integration" - - [mcp_servers.FileSystem] - command = "npx" - args = ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"] - type = "stdio" - description = "File system access" - `.trim(); - const { id } = await setup({ - moduleVariables: { - additional_mcp_servers: additional, - }, - }); - await execModuleScript(id); - const resp = await readFileContainer(id, "/home/coder/.codex/config.toml"); - expect(resp).toContain("[mcp_servers.GitHub]"); - expect(resp).toContain("[mcp_servers.FileSystem]"); - expect(resp).toContain("[mcp_servers.Coder]"); - expect(resp).toContain("GitHub integration"); - }); - - test("full-custom-config", async () => { - const baseConfig = dedent` - sandbox_mode = "read-only" - approval_policy = "untrusted" - preferred_auth_method = "chatgpt" - custom_setting = "test-value" - - [advanced_settings] - timeout = 30000 - debug = true - logging_level = "verbose" - `.trim(); - - const additionalMCP = dedent` - [mcp_servers.CustomTool] - command = "/usr/local/bin/custom-tool" - args = ["--serve", "--port", "8080"] - type = "stdio" - description = "Custom development tool" - - [mcp_servers.DatabaseMCP] - command = "python" - args = ["-m", "database_mcp_server"] - type = "stdio" - description = "Database query interface" - `.trim(); - - const { id } = await setup({ - moduleVariables: { - base_config_toml: baseConfig, - additional_mcp_servers: additionalMCP, - }, - }); - await execModuleScript(id); - const resp = await readFileContainer(id, "/home/coder/.codex/config.toml"); - - // Check base config - 
expect(resp).toContain('sandbox_mode = "read-only"'); - expect(resp).toContain('preferred_auth_method = "chatgpt"'); - expect(resp).toContain('custom_setting = "test-value"'); - expect(resp).toContain("[advanced_settings]"); - expect(resp).toContain('logging_level = "verbose"'); - - // Check MCP servers - expect(resp).toContain("[mcp_servers.Coder]"); - expect(resp).toContain("[mcp_servers.CustomTool]"); - expect(resp).toContain("[mcp_servers.DatabaseMCP]"); - expect(resp).toContain("Custom development tool"); - expect(resp).toContain("Database query interface"); - }); - - test("minimal-default-config", async () => { - const { id } = await setup({ - moduleVariables: { - // No base_config_toml or additional_mcp_servers - should use defaults - }, - }); - await execModuleScript(id); - const resp = await readFileContainer(id, "/home/coder/.codex/config.toml"); - - // Check default base config - expect(resp).toContain('sandbox_mode = "workspace-write"'); - expect(resp).toContain('approval_policy = "never"'); - expect(resp).toContain("[sandbox_workspace_write]"); - expect(resp).toContain("network_access = true"); - - // Check only Coder MCP server is present - expect(resp).toContain("[mcp_servers.Coder]"); - expect(resp).toContain("Report ALL tasks and statuses"); - - // Ensure no additional MCP servers - const mcpServerCount = (resp.match(/\[mcp_servers\./g) || []).length; - expect(mcpServerCount).toBe(1); + expect(installLog).toContain(workdir); }); - test("codex-system-prompt", async () => { - const prompt = "This is a system prompt for Codex."; - const { id } = await setup({ + test("codex-with-ai-gateway", async () => { + const { id, coderEnvVars, scripts } = await setup({ moduleVariables: { - codex_system_prompt: prompt, - }, - }); - await execModuleScript(id); - const resp = await readFileContainer(id, "/home/coder/.codex/AGENTS.md"); - expect(resp).toContain(prompt); - }); - - test("codex-system-prompt-skip-append-if-exists", async () => { - const prompt_1 = "This 
is a system prompt for Codex."; - const prompt_2 = "This is a system prompt for Goose."; - const prompt_3 = dedent` - This is a system prompt for Codex. - This is a system prompt for Gemini. - `.trim(); - const pre_install_script = dedent` - #!/bin/bash - mkdir -p /home/coder/.codex - echo -e "${prompt_3}" >> /home/coder/.codex/AGENTS.md - `.trim(); - - const { id } = await setup({ - moduleVariables: { - pre_install_script, - codex_system_prompt: prompt_2, - }, - }); - await execModuleScript(id); - const resp = await readFileContainer(id, "/home/coder/.codex/AGENTS.md"); - expect(resp).toContain(prompt_1); - expect(resp).toContain(prompt_2); - - // Re-run with a prompt that already exists, it should not append again - const { id: id_2 } = await setup({ - moduleVariables: { - pre_install_script, - codex_system_prompt: prompt_1, + enable_ai_gateway: "true", + model_reasoning_effort: "none", }, }); - await execModuleScript(id_2); - const resp_2 = await readFileContainer( - id_2, - "/home/coder/.codex/AGENTS.md", + await runScripts(id, scripts, coderEnvVars); + const configToml = await readFileContainer( + id, + "/home/coder/.codex/config.toml", ); - expect(resp_2).toContain(prompt_1); - const count = (resp_2.match(new RegExp(prompt_1, "g")) || []).length; - expect(count).toBe(1); - }); - - test("codex-ai-task-prompt", async () => { - const prompt = "This is a system prompt for Codex."; - const { id } = await setup({ - moduleVariables: { - ai_prompt: prompt, - }, - }); - await execModuleScript(id); - const resp = await execContainer(id, [ - "bash", - "-c", - `cat /home/coder/.codex-module/agentapi-start.log`, - ]); - expect(resp.stdout).toContain(prompt); + expect(configToml).toContain('model_provider = "aibridge"'); + expect(configToml).toContain('model_reasoning_effort = "none"'); + expect(configToml).toContain("[model_providers.aibridge]"); }); - test("start-without-prompt", async () => { - const { id } = await setup({ + test("workdir-trusted-project", async () => { 
+ const workdir = "/home/coder/trusted-project"; + const { id, scripts } = await setup({ moduleVariables: { - codex_system_prompt: "", // Explicitly disable system prompt + workdir, }, }); - await execModuleScript(id); - const prompt = await execContainer(id, [ - "ls", - "-l", - "/home/coder/.codex/AGENTS.md", - ]); - expect(prompt.exitCode).not.toBe(0); - expect(prompt.stderr).toContain("No such file or directory"); + await runScripts(id, scripts); + const configToml = await readFileContainer( + id, + "/home/coder/.codex/config.toml", + ); + expect(configToml).toContain(`[projects."${workdir}"]`); + expect(configToml).toContain('trust_level = "trusted"'); }); - test("codex-continue-capture-new-session", async () => { - const { id } = await setup({ + test("no-workdir-no-project-section", async () => { + const { id, scripts } = await setup({ moduleVariables: { - continue: "true", - ai_prompt: "test task", + workdir: "", }, }); - - const workdir = "/home/coder"; - const expectedSessionId = "019a1234-5678-9abc-def0-123456789012"; - const sessionsDir = "/home/coder/.codex/sessions"; - const sessionFile = `${sessionsDir}/${expectedSessionId}.jsonl`; - - await execContainer(id, ["mkdir", "-p", sessionsDir]); - await execContainer(id, [ - "bash", - "-c", - `echo '{"id":"${expectedSessionId}","cwd":"${workdir}","created":"2024-10-24T20:00:00Z","model":"gpt-4-turbo"}' > ${sessionFile}`, - ]); - - await execModuleScript(id); - - await expectAgentAPIStarted(id); - - const trackingFile = "/home/coder/.codex-module/.codex-task-session"; - const maxAttempts = 30; - let trackingFileContents = ""; - for (let attempt = 0; attempt < maxAttempts; attempt++) { - const result = await execContainer(id, [ - "bash", - "-c", - `cat ${trackingFile} 2>/dev/null || echo ""`, - ]); - if (result.stdout.trim().length > 0) { - trackingFileContents = result.stdout; - break; - } - await new Promise((resolve) => setTimeout(resolve, 500)); - } - - 
expect(trackingFileContents).toContain(`${workdir}|${expectedSessionId}`); - - const startLog = await readFileContainer( + await runScripts(id, scripts); + const configToml = await readFileContainer( id, - "/home/coder/.codex-module/agentapi-start.log", + "/home/coder/.codex/config.toml", ); - expect(startLog).toContain("Capturing new session ID"); - expect(startLog).toContain("Session tracked"); - expect(startLog).toContain(expectedSessionId); + expect(configToml).not.toContain("[projects."); }); - test("codex-continue-resume-existing-session", async () => { - const { id } = await setup({ + test("ai-gateway-with-custom-base-config", async () => { + const baseConfig = [ + 'sandbox_mode = "danger-full-access"', + 'model_provider = "aibridge"', + ].join("\n"); + const { id, coderEnvVars, scripts } = await setup({ moduleVariables: { - continue: "true", - ai_prompt: "test prompt", + enable_ai_gateway: "true", + base_config_toml: baseConfig, }, }); - - const workdir = "/home/coder"; - const mockSessionId = "019a1234-5678-9abc-def0-123456789012"; - const trackingFile = "/home/coder/.codex-module/.codex-task-session"; - - await execContainer(id, ["mkdir", "-p", "/home/coder/.codex-module"]); - await execContainer(id, [ - "bash", - "-c", - `echo "${workdir}|${mockSessionId}" > ${trackingFile}`, - ]); - - await execModuleScript(id); - - const startLog = await execContainer(id, [ - "bash", - "-c", - "cat /home/coder/.codex-module/agentapi-start.log", - ]); - expect(startLog.stdout).toContain("Found existing task session"); - expect(startLog.stdout).toContain(mockSessionId); - expect(startLog.stdout).toContain("Resuming existing session"); - expect(startLog.stdout).toContain( - `Starting Codex with arguments: --model gpt-4-turbo resume ${mockSessionId}`, + await runScripts(id, scripts, coderEnvVars); + const configToml = await readFileContainer( + id, + "/home/coder/.codex/config.toml", ); - expect(startLog.stdout).not.toContain("test prompt"); + 
expect(configToml).toContain('model_provider = "aibridge"'); + expect(configToml).toContain("[model_providers.aibridge]"); }); - test("codex-with-aibridge", async () => { - const { id } = await setup({ + test("ai-gateway-custom-config-no-duplicate-provider", async () => { + const baseConfig = [ + 'model_provider = "aibridge"', + "", + "[model_providers.aibridge]", + 'name = "Custom AI Bridge"', + 'base_url = "https://custom.example.com"', + 'env_key = "CODER_AIBRIDGE_SESSION_TOKEN"', + 'wire_api = "responses"', + ].join("\n"); + const { id, coderEnvVars, scripts } = await setup({ moduleVariables: { - enable_aibridge: "true", - model_reasoning_effort: "none", + enable_ai_gateway: "true", + base_config_toml: baseConfig, }, }); - - await execModuleScript(id); + await runScripts(id, scripts, coderEnvVars); const configToml = await readFileContainer( id, "/home/coder/.codex/config.toml", ); - expect(configToml).toContain('model_provider = "aibridge"'); + const matches = configToml.match(/\[model_providers\.aibridge\]/g) || []; + expect(matches.length).toBe(1); + expect(configToml).toContain("Custom AI Bridge"); }); - test("boundary-enabled", async () => { - const { id } = await setup({ + test("install-codex-latest", async () => { + const { id, coderEnvVars, scripts } = await setup({ + skipCodexMock: true, moduleVariables: { - enable_boundary: "true", - boundary_config_path: "/tmp/test-boundary.yaml", + install_codex: "true", }, }); - // Write boundary config - await execContainer(id, [ - "bash", - "-c", - `cat > /tmp/test-boundary.yaml <<'EOF' -jail_type: landjail -proxy_port: 8087 -log_level: warn -allowlist: - - "domain=api.openai.com" -EOF`, - ]); - // Add mock coder binary for boundary setup - await writeExecutable({ - containerId: id, - filePath: "/usr/bin/coder", - content: `#!/bin/bash -if [ "$1" = "boundary" ]; then - if [ "$2" = "--help" ]; then - echo "boundary help" - exit 0 - fi - shift; shift; exec "$@" -fi -echo "mock coder"`, - }); - await 
execModuleScript(id); - await expectAgentAPIStarted(id); - // Verify boundary wrapper was used in start script - const startLog = await readFileContainer( + await runScripts(id, scripts, coderEnvVars); + const installLog = await readFileContainer( id, - "/home/coder/.codex-module/agentapi-start.log", + "/home/coder/.coder-modules/coder-labs/codex/logs/install.log", ); - expect(startLog).toContain("boundary"); + expect(installLog).toContain("Installed Codex CLI"); }); }); diff --git a/registry/coder-labs/modules/codex/main.tf b/registry/coder-labs/modules/codex/main.tf index b5f71cb3c..c4a0a3e3d 100644 --- a/registry/coder-labs/modules/codex/main.tf +++ b/registry/coder-labs/modules/codex/main.tf @@ -18,18 +18,6 @@ data "coder_workspace" "me" {} data "coder_workspace_owner" "me" {} -variable "order" { - type = number - description = "The order determines the position of app in the UI presentation. The lowest order is shown first and apps with equal order are sorted by name (ascending order)." - default = null -} - -variable "group" { - type = string - description = "The name of a group that this app belongs to." - default = null -} - variable "icon" { type = string description = "The icon to use for the app." @@ -38,58 +26,20 @@ variable "icon" { variable "workdir" { type = string - description = "The folder to run Codex in." -} - -variable "report_tasks" { - type = bool - description = "Whether to enable task reporting to Coder UI via AgentAPI" - default = true -} - -variable "subdomain" { - type = bool - description = "Whether to use a subdomain for AgentAPI." - default = false -} - -variable "cli_app" { - type = bool - description = "Whether to create a CLI app for Codex" - default = false -} - -variable "web_app_display_name" { - type = string - description = "Display name for the web app" - default = "Codex" + description = "Optional project directory. When set, the module pre-creates it if missing and adds it as a trusted project in Codex config.toml." 
+ default = null } -variable "cli_app_display_name" { +variable "pre_install_script" { type = string - description = "Display name for the CLI app" - default = "Codex CLI" -} - -variable "enable_aibridge" { - type = bool - description = "Use AI Bridge for Codex. https://coder.com/docs/ai-coder/ai-bridge" - default = false - - validation { - condition = !(var.enable_aibridge && length(var.openai_api_key) > 0) - error_message = "openai_api_key cannot be provided when enable_aibridge is true. AI Bridge automatically authenticates the client using Coder credentials." - } + description = "Custom script to run before installing Codex." + default = null } -variable "model_reasoning_effort" { +variable "post_install_script" { type = string - description = "The reasoning effort for the model. One of: none, low, medium, high. https://platform.openai.com/docs/guides/latest-model#lower-reasoning-effort" - default = "" - validation { - condition = contains(["", "none", "minimal", "low", "medium", "high", "xhigh"], var.model_reasoning_effort) - error_message = "model_reasoning_effort must be one of: none, low, medium, high." - } + description = "Custom script to run after installing Codex." + default = null } variable "install_codex" { @@ -100,133 +50,82 @@ variable "install_codex" { variable "codex_version" { type = string - description = "The version of Codex to install." - default = "" # empty string means the latest available version -} - -variable "base_config_toml" { - type = string - description = "Complete base TOML configuration for Codex (without mcp_servers section). If empty, uses minimal default configuration with workspace-write sandbox mode and never approval policy. For advanced options, see https://github.com/openai/codex/blob/main/codex-rs/config.md" - default = "" -} - -variable "additional_mcp_servers" { - type = string - description = "Additional MCP servers configuration in TOML format. 
These will be merged with the required Coder MCP server in the [mcp_servers] section."
+ description = "The version of Codex to install. Empty string installs the latest available version."
 default = ""
 }
 
 variable "openai_api_key" {
 type = string
- description = "OpenAI API key for Codex CLI"
+ description = "OpenAI API key for Codex CLI."
+ sensitive = true
 default = ""
 }
 
-variable "install_agentapi" {
- type = bool
- description = "Whether to install AgentAPI."
- default = true
-}
-
-variable "agentapi_version" {
+variable "base_config_toml" {
 type = string
- description = "The version of AgentAPI to install."
- default = "v0.12.1"
-}
+ description = <<-EOT
+ Complete base TOML configuration for Codex (without mcp_servers section).
+ When empty, the module generates a minimal default:
 
-variable "codex_model" {
- type = string
- description = "The model for Codex to use. Defaults to gpt-5.3-codex."
- default = "gpt-5.4"
-}
+ preferred_auth_method = "apikey"
+ # model_provider = "aibridge" (sets the default model provider when enable_ai_gateway = true)
+ # model_reasoning_effort = "" (set when model_reasoning_effort is non-empty)
 
-variable "pre_install_script" {
- type = string
- description = "Custom script to run before installing Codex."
- default = null
-}
+ [projects."<workdir>"] (when workdir is set)
+ trust_level = "trusted"
 
-variable "post_install_script" {
- type = string
- description = "Custom script to run after installing Codex."
- default = null
-}
-
-variable "ai_prompt" {
- type = string
- description = "Initial task prompt for Codex CLI when launched via Tasks"
+ When non-empty, the value is written verbatim as the base of config.toml;
+ additional_mcp_servers and AI Gateway sections are still appended after it.
+ EOT
 default = ""
 }
 
-variable "continue" {
- type = bool
- description = "Automatically continue existing sessions on workspace restart. When true, resumes existing conversation if found, otherwise runs prompt or starts new session. 
When false, always starts fresh (ignores existing sessions)." - default = true -} - -variable "enable_state_persistence" { - type = bool - description = "Enable AgentAPI conversation state persistence across restarts." - default = true -} - -variable "codex_system_prompt" { - type = string - description = "System instructions written to AGENTS.md in the ~/.codex directory" - default = "You are a helpful coding assistant. Start every response with `Codex says:`" -} - -variable "enable_boundary" { - type = bool - description = "Enable coder boundary for network filtering." - default = false -} - -variable "boundary_config_path" { +variable "additional_mcp_servers" { type = string - description = "Path to boundary config.yaml inside the workspace. If provided, exposed as BOUNDARY_CONFIG env var." + description = "Additional MCP servers configuration in TOML format." default = "" } -variable "boundary_version" { +variable "model_reasoning_effort" { type = string - description = "Boundary version. When use_boundary_directly is true, a release version should be provided or 'latest' for the latest release." - default = "latest" + description = "The reasoning effort for the model. One of: none, minimal, low, medium, high, xhigh. See https://platform.openai.com/docs/guides/latest-model#lower-reasoning-effort" + default = "" + validation { + condition = contains(["", "none", "minimal", "low", "medium", "high", "xhigh"], var.model_reasoning_effort) + error_message = "model_reasoning_effort must be one of: none, minimal, low, medium, high, xhigh." + } } -variable "compile_boundary_from_source" { +variable "enable_ai_gateway" { type = bool - description = "Whether to compile boundary from source instead of using the official install script." + description = "Use AI Gateway for Codex. 
https://coder.com/docs/ai-coder/ai-gateway" default = false -} -variable "use_boundary_directly" { - type = bool - description = "Whether to use boundary binary directly instead of coder boundary subcommand." - default = false + validation { + condition = !(var.enable_ai_gateway && length(var.openai_api_key) > 0) + error_message = "openai_api_key cannot be provided when enable_ai_gateway is true. AI Gateway automatically authenticates the client using Coder credentials." + } } resource "coder_env" "openai_api_key" { + count = var.openai_api_key != "" ? 1 : 0 agent_id = var.agent_id name = "OPENAI_API_KEY" value = var.openai_api_key } -resource "coder_env" "coder_aibridge_session_token" { - count = var.enable_aibridge ? 1 : 0 +# Authenticates the client against Coder's AI Gateway using the workspace +# owner's session token. Referenced by config.toml model_providers.aibridge. +resource "coder_env" "ai_gateway_session_token" { + count = var.enable_ai_gateway ? 1 : 0 agent_id = var.agent_id name = "CODER_AIBRIDGE_SESSION_TOKEN" value = data.coder_workspace_owner.me.session_token } locals { - workdir = trimsuffix(var.workdir, "/") - app_slug = "codex" - install_script = file("${path.module}/scripts/install.sh") - start_script = file("${path.module}/scripts/start.sh") - module_dir_name = ".codex-module" - latest_codex_model = "gpt-5.4" - aibridge_config = <<-EOF + workdir = var.workdir != null ? 
trimsuffix(var.workdir, "/") : "" + aibridge_config = <<-EOF [model_providers.aibridge] name = "AI Bridge" base_url = "${data.coder_workspace.me.access_url}/api/v2/aibridge/openai/v1" @@ -234,76 +133,33 @@ locals { wire_api = "responses" EOF -} - -module "agentapi" { - source = "registry.coder.com/coder/agentapi/coder" - version = "2.3.0" - - agent_id = var.agent_id - folder = local.workdir - web_app_slug = local.app_slug - web_app_order = var.order - web_app_group = var.group - web_app_icon = var.icon - web_app_display_name = var.web_app_display_name - cli_app = var.cli_app - cli_app_slug = var.cli_app ? "${local.app_slug}-cli" : null - cli_app_display_name = var.cli_app ? var.cli_app_display_name : null - module_dir_name = local.module_dir_name - install_agentapi = var.install_agentapi - agentapi_subdomain = var.subdomain - agentapi_version = var.agentapi_version - enable_state_persistence = var.enable_state_persistence - pre_install_script = var.pre_install_script - post_install_script = var.post_install_script - enable_boundary = var.enable_boundary - boundary_config_path = var.boundary_config_path - boundary_version = var.boundary_version - compile_boundary_from_source = var.compile_boundary_from_source - use_boundary_directly = var.use_boundary_directly - start_script = <<-EOT - #!/bin/bash - set -o errexit - set -o pipefail - - echo -n '${base64encode(local.start_script)}' | base64 -d > /tmp/start.sh - chmod +x /tmp/start.sh - ARG_OPENAI_API_KEY='${var.openai_api_key}' \ - ARG_REPORT_TASKS='${var.report_tasks}' \ - ARG_CODEX_MODEL='${var.codex_model}' \ - ARG_CODEX_START_DIRECTORY='${local.workdir}' \ - ARG_CODEX_TASK_PROMPT='${base64encode(var.ai_prompt)}' \ - ARG_CONTINUE='${var.continue}' \ - ARG_ENABLE_AIBRIDGE='${var.enable_aibridge}' \ - /tmp/start.sh - EOT - - install_script = <<-EOT - #!/bin/bash - set -o errexit - set -o pipefail - - echo -n '${base64encode(local.install_script)}' | base64 -d > /tmp/install.sh - chmod +x /tmp/install.sh - 
ARG_OPENAI_API_KEY='${var.openai_api_key}' \ - ARG_REPORT_TASKS='${var.report_tasks}' \ - ARG_CODEX_MODEL='${var.codex_model}' \ - ARG_LATEST_CODEX_MODEL='${local.latest_codex_model}' \ - ARG_INSTALL='${var.install_codex}' \ - ARG_CODEX_VERSION='${var.codex_version}' \ - ARG_BASE_CONFIG_TOML='${base64encode(var.base_config_toml)}' \ - ARG_ENABLE_AIBRIDGE='${var.enable_aibridge}' \ - ARG_AIBRIDGE_CONFIG='${base64encode(var.enable_aibridge ? local.aibridge_config : "")}' \ - ARG_ADDITIONAL_MCP_SERVERS='${base64encode(var.additional_mcp_servers)}' \ - ARG_CODER_MCP_APP_STATUS_SLUG='${local.app_slug}' \ - ARG_CODEX_START_DIRECTORY='${local.workdir}' \ - ARG_MODEL_REASONING_EFFORT='${var.model_reasoning_effort}' \ - ARG_CODEX_INSTRUCTION_PROMPT='${base64encode(var.codex_system_prompt)}' \ - /tmp/install.sh - EOT -} - -output "task_app_id" { - value = module.agentapi.task_app_id + install_script = templatefile("${path.module}/scripts/install.sh.tftpl", { + ARG_INSTALL = tostring(var.install_codex) + ARG_CODEX_VERSION = var.codex_version != "" ? base64encode(var.codex_version) : "" + ARG_WORKDIR = local.workdir != "" ? base64encode(local.workdir) : "" + ARG_BASE_CONFIG_TOML = var.base_config_toml != "" ? base64encode(var.base_config_toml) : "" + ARG_ADDITIONAL_MCP_SERVERS = var.additional_mcp_servers != "" ? base64encode(var.additional_mcp_servers) : "" + ARG_ENABLE_AI_GATEWAY = tostring(var.enable_ai_gateway) + ARG_AIBRIDGE_CONFIG = var.enable_ai_gateway ? 
base64encode(local.aibridge_config) : "" + ARG_MODEL_REASONING_EFFORT = var.model_reasoning_effort + }) + module_dir_name = ".coder-modules/coder-labs/codex" +} + +module "coder_utils" { + source = "registry.coder.com/coder/coder-utils/coder" + version = "0.0.1" + + agent_id = var.agent_id + module_directory = "$HOME/${local.module_dir_name}" + display_name_prefix = "Codex" + icon = var.icon + pre_install_script = var.pre_install_script + post_install_script = var.post_install_script + install_script = local.install_script +} + +output "scripts" { + description = "Ordered list of coder exp sync names for the coder_script resources this module creates, in run order (pre_install, install, post_install). Scripts that were not configured are absent from the list." + value = module.coder_utils.scripts } diff --git a/registry/coder-labs/modules/codex/main.tftest.hcl b/registry/coder-labs/modules/codex/main.tftest.hcl index 1237df5de..986823365 100644 --- a/registry/coder-labs/modules/codex/main.tftest.hcl +++ b/registry/coder-labs/modules/codex/main.tftest.hcl @@ -2,14 +2,8 @@ run "test_codex_basic" { command = plan variables { - agent_id = "test-agent" - workdir = "/home/coder" - openai_api_key = "test-key" - } - - assert { - condition = var.agent_id == "test-agent" - error_message = "Agent ID should be set correctly" + agent_id = "test-agent" + workdir = "/home/coder" } assert { @@ -21,167 +15,186 @@ run "test_codex_basic" { condition = var.install_codex == true error_message = "install_codex should default to true" } +} - assert { - condition = var.install_agentapi == true - error_message = "install_agentapi should default to true" - } +run "test_codex_with_api_key" { + command = plan - assert { - condition = var.report_tasks == true - error_message = "report_tasks should default to true" + variables { + agent_id = "test-agent" + workdir = "/home/coder" + openai_api_key = "test-key" } assert { - condition = var.continue == true - error_message = "continue should 
default to true" + condition = coder_env.openai_api_key[0].value == "test-key" + error_message = "OpenAI API key should be set correctly" } } -run "test_enable_state_persistence_default" { +run "test_codex_custom_options" { command = plan variables { - agent_id = "test-agent" - workdir = "/home/coder" - openai_api_key = "test-key" + agent_id = "test-agent" + workdir = "/home/coder/project" + icon = "/icon/custom.svg" + codex_version = "0.1.0" } assert { - condition = var.enable_state_persistence == true - error_message = "enable_state_persistence should default to true" + condition = var.icon == "/icon/custom.svg" + error_message = "Icon should be set to custom icon" } } -run "test_disable_state_persistence" { +run "test_ai_gateway_enabled" { command = plan variables { - agent_id = "test-agent" - workdir = "/home/coder" - openai_api_key = "test-key" - enable_state_persistence = false + agent_id = "test-agent" + workdir = "/home/coder" + enable_ai_gateway = true + } + + override_data { + target = data.coder_workspace_owner.me + values = { + session_token = "mock-session-token" + } + } + + assert { + condition = var.enable_ai_gateway == true + error_message = "AI Gateway should be enabled" + } + + assert { + condition = coder_env.ai_gateway_session_token[0].name == "CODER_AIBRIDGE_SESSION_TOKEN" + error_message = "CODER_AIBRIDGE_SESSION_TOKEN should be set" } assert { - condition = var.enable_state_persistence == false - error_message = "enable_state_persistence should be false when explicitly disabled" + condition = coder_env.ai_gateway_session_token[0].value == data.coder_workspace_owner.me.session_token + error_message = "Session token should use workspace owner's token" + } + + assert { + condition = length(coder_env.openai_api_key) == 0 + error_message = "OPENAI_API_KEY should not be created when ai_gateway is enabled" } } -run "test_codex_with_aibridge" { +run "test_ai_gateway_validation_with_api_key" { command = plan variables { - agent_id = "test-agent" - 
workdir = "/home/coder" - enable_aibridge = true + agent_id = "test-agent" + workdir = "/home/coder" + enable_ai_gateway = true + openai_api_key = "test-key" } - assert { - condition = var.enable_aibridge == true - error_message = "enable_aibridge should be set to true" - } + expect_failures = [ + var.enable_ai_gateway, + ] } -run "test_aibridge_disabled_with_api_key" { +run "test_ai_gateway_disabled_with_api_key" { command = plan variables { - agent_id = "test-agent" - workdir = "/home/coder" - openai_api_key = "test-key" - enable_aibridge = false + agent_id = "test-agent" + workdir = "/home/coder" + enable_ai_gateway = false + openai_api_key = "test-key-xyz" } assert { - condition = var.enable_aibridge == false - error_message = "enable_aibridge should be false" + condition = coder_env.openai_api_key[0].value == "test-key-xyz" + error_message = "OPENAI_API_KEY should use the provided API key" } assert { - condition = coder_env.openai_api_key.value == "test-key" - error_message = "OpenAI API key should be set correctly" + condition = length(coder_env.ai_gateway_session_token) == 0 + error_message = "Session token should not be set when ai_gateway is disabled" } } -run "test_custom_options" { +run "test_no_api_key_no_env" { command = plan variables { - agent_id = "test-agent" - workdir = "/home/coder/project" - openai_api_key = "test-key" - order = 5 - group = "ai-tools" - icon = "/icon/custom.svg" - web_app_display_name = "Custom Codex" - cli_app = true - cli_app_display_name = "Codex Terminal" - subdomain = true - report_tasks = false - continue = false - codex_model = "gpt-4o" - codex_version = "0.1.0" - agentapi_version = "v0.12.0" + agent_id = "test-agent" + workdir = "/home/coder" } assert { - condition = var.order == 5 - error_message = "Order should be set to 5" + condition = length(coder_env.openai_api_key) == 0 + error_message = "OPENAI_API_KEY should not be created when no API key is provided" } +} - assert { - condition = var.group == "ai-tools" - 
error_message = "Group should be set to 'ai-tools'" +run "test_codex_with_scripts" { + command = plan + + variables { + agent_id = "test-agent" + workdir = "/home/coder" + pre_install_script = "echo 'Pre-install script'" + post_install_script = "echo 'Post-install script'" } assert { - condition = var.icon == "/icon/custom.svg" - error_message = "Icon should be set to custom icon" + condition = var.pre_install_script == "echo 'Pre-install script'" + error_message = "Pre-install script should be set correctly" } assert { - condition = var.cli_app == true - error_message = "cli_app should be enabled" + condition = var.post_install_script == "echo 'Post-install script'" + error_message = "Post-install script should be set correctly" } +} - assert { - condition = var.subdomain == true - error_message = "subdomain should be enabled" +run "test_script_outputs_install_only" { + command = plan + + variables { + agent_id = "test-agent" + workdir = "/home/coder" } assert { - condition = var.report_tasks == false - error_message = "report_tasks should be disabled" + condition = length(output.scripts) == 1 && output.scripts[0] == "coder-labs-codex-install_script" + error_message = "scripts output should list only the install script when pre/post are not configured" } +} - assert { - condition = var.continue == false - error_message = "continue should be disabled" +run "test_script_outputs_with_pre_and_post" { + command = plan + + variables { + agent_id = "test-agent" + workdir = "/home/coder" + pre_install_script = "echo pre" + post_install_script = "echo post" } assert { - condition = var.codex_model == "gpt-4o" - error_message = "codex_model should be set to 'gpt-4o'" + condition = output.scripts == ["coder-labs-codex-pre_install_script", "coder-labs-codex-install_script", "coder-labs-codex-post_install_script"] + error_message = "scripts output should list pre_install, install, post_install in run order" } } -run "test_no_api_key_no_aibridge" { +run "test_workdir_optional" 
{ command = plan variables { agent_id = "test-agent" - workdir = "/home/coder" - } - - assert { - condition = var.openai_api_key == "" - error_message = "openai_api_key should be empty when not provided" } assert { - condition = var.enable_aibridge == false - error_message = "enable_aibridge should default to false" + condition = var.workdir == null + error_message = "workdir should default to null when omitted" } } diff --git a/registry/coder-labs/modules/codex/scripts/install.sh b/registry/coder-labs/modules/codex/scripts/install.sh deleted file mode 100644 index 9a191a024..000000000 --- a/registry/coder-labs/modules/codex/scripts/install.sh +++ /dev/null @@ -1,228 +0,0 @@ -#!/bin/bash -source "$HOME"/.bashrc - -BOLD='\033[0;1m' - -command_exists() { - command -v "$1" > /dev/null 2>&1 -} -set -o errexit -set -o pipefail -set -o nounset - -ARG_BASE_CONFIG_TOML=$(echo -n "$ARG_BASE_CONFIG_TOML" | base64 -d) -ARG_ADDITIONAL_MCP_SERVERS=$(echo -n "$ARG_ADDITIONAL_MCP_SERVERS" | base64 -d) -ARG_CODEX_INSTRUCTION_PROMPT=$(echo -n "$ARG_CODEX_INSTRUCTION_PROMPT" | base64 -d) -ARG_ENABLE_AIBRIDGE=${ARG_ENABLE_AIBRIDGE:-false} -ARG_AIBRIDGE_CONFIG=$(echo -n "$ARG_AIBRIDGE_CONFIG" | base64 -d) - -echo "=== Codex Module Configuration ===" -printf "Install Codex: %s\n" "$ARG_INSTALL" -printf "Codex Version: %s\n" "$ARG_CODEX_VERSION" -printf "App Slug: %s\n" "$ARG_CODER_MCP_APP_STATUS_SLUG" -printf "Codex Model: %s\n" "${ARG_CODEX_MODEL:-"Default"}" -printf "Latest Codex Model: %s\n" "${ARG_LATEST_CODEX_MODEL}" -printf "Start Directory: %s\n" "$ARG_CODEX_START_DIRECTORY" -printf "Has Base Config: %s\n" "$([ -n "$ARG_BASE_CONFIG_TOML" ] && echo "Yes" || echo "No")" -printf "Has Additional MCP: %s\n" "$([ -n "$ARG_ADDITIONAL_MCP_SERVERS" ] && echo "Yes" || echo "No")" -printf "Has System Prompt: %s\n" "$([ -n "$ARG_CODEX_INSTRUCTION_PROMPT" ] && echo "Yes" || echo "No")" -printf "OpenAI API Key: %s\n" "$([ -n "$ARG_OPENAI_API_KEY" ] && echo "Provided" || echo "Not provided")" 
-printf "Report Tasks: %s\n" "$ARG_REPORT_TASKS" -printf "Enable Coder AI Bridge: %s\n" "$ARG_ENABLE_AIBRIDGE" -echo "======================================" - -set +o nounset - -function install_node() { - if ! command_exists npm; then - printf "npm not found, checking for Node.js installation...\n" - if ! command_exists node; then - printf "Node.js not found, installing Node.js via NVM...\n" - export NVM_DIR="$HOME/.nvm" - if [ ! -d "$NVM_DIR" ]; then - mkdir -p "$NVM_DIR" - curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash - [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" - else - [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" - fi - - nvm install --lts - nvm use --lts - nvm alias default node - - printf "Node.js installed: %s\n" "$(node --version)" - printf "npm installed: %s\n" "$(npm --version)" - else - printf "Node.js is installed but npm is not available. Please install npm manually.\n" - exit 1 - fi - fi -} - -function install_codex() { - if [ "${ARG_INSTALL}" = "true" ]; then - install_node - - if ! command_exists nvm; then - printf "which node: %s\n" "$(which node)" - printf "which npm: %s\n" "$(which npm)" - - mkdir -p "$HOME"/.npm-global - - npm config set prefix "$HOME/.npm-global" - - export PATH="$HOME/.npm-global/bin:$PATH" - - if ! grep -q "export PATH=$HOME/.npm-global/bin:\$PATH" ~/.bashrc; then - echo "export PATH=$HOME/.npm-global/bin:\$PATH" >> ~/.bashrc - fi - fi - - printf "%s Installing Codex CLI\n" "${BOLD}" - - if [ -n "$ARG_CODEX_VERSION" ]; then - npm install -g "@openai/codex@$ARG_CODEX_VERSION" - else - npm install -g "@openai/codex" - fi - printf "%s Successfully installed Codex CLI. 
Version: %s\n" "${BOLD}" "$(codex --version)" - fi -} - -write_minimal_default_config() { - local config_path="$1" - - ARG_OPTIONAL_TOP_LEVEL_CONFIG="" - - if [[ "${ARG_ENABLE_AIBRIDGE}" = "true" ]]; then - ARG_OPTIONAL_TOP_LEVEL_CONFIG='model_provider = "aibridge"' - fi - - if [[ "${ARG_MODEL_REASONING_EFFORT}" != "" ]]; then - ARG_OPTIONAL_TOP_LEVEL_CONFIG+=$'\n'"model_reasoning_effort = \"${ARG_MODEL_REASONING_EFFORT}\"" - fi - - cat << EOF > "$config_path" -# Minimal Default Codex Configuration -sandbox_mode = "workspace-write" -approval_policy = "never" -preferred_auth_method = "apikey" -${ARG_OPTIONAL_TOP_LEVEL_CONFIG} - -[sandbox_workspace_write] -network_access = true - -[notice.model_migrations] -"${ARG_CODEX_MODEL}" = "${ARG_LATEST_CODEX_MODEL}" - -[projects."${ARG_CODEX_START_DIRECTORY}"] -trust_level = "trusted" - -EOF -} - -append_mcp_servers_section() { - local config_path="$1" - - if [ "${ARG_REPORT_TASKS}" == "false" ]; then - ARG_CODER_MCP_APP_STATUS_SLUG="" - CODER_MCP_AI_AGENTAPI_URL="" - else - CODER_MCP_AI_AGENTAPI_URL="http://localhost:3284" - fi - - cat << EOF >> "$config_path" - -# MCP Servers Configuration -[mcp_servers.Coder] -command = "coder" -args = ["exp", "mcp", "server"] -env = { "CODER_MCP_APP_STATUS_SLUG" = "${ARG_CODER_MCP_APP_STATUS_SLUG}", "CODER_MCP_AI_AGENTAPI_URL" = "${CODER_MCP_AI_AGENTAPI_URL}" , "CODER_AGENT_URL" = "${CODER_AGENT_URL}", "CODER_AGENT_TOKEN" = "${CODER_AGENT_TOKEN}", "CODER_MCP_ALLOWED_TOOLS" = "coder_report_task" } -description = "Report ALL tasks and statuses (in progress, done, failed) you are working on." 
-type = "stdio" - -EOF - - if [ -n "$ARG_ADDITIONAL_MCP_SERVERS" ]; then - printf "Adding additional MCP servers\n" - echo "$ARG_ADDITIONAL_MCP_SERVERS" >> "$config_path" - fi -} - -append_aibridge_config_section() { - local config_path="$1" - - if [ -n "$ARG_AIBRIDGE_CONFIG" ]; then - printf "Adding AI Bridge configuration\n" - echo -e "\n# AI Bridge Configuration\n$ARG_AIBRIDGE_CONFIG" >> "$config_path" - fi -} - -function populate_config_toml() { - CONFIG_PATH="$HOME/.codex/config.toml" - mkdir -p "$(dirname "$CONFIG_PATH")" - - if [ -n "$ARG_BASE_CONFIG_TOML" ]; then - printf "Using provided base configuration\n" - echo "$ARG_BASE_CONFIG_TOML" > "$CONFIG_PATH" - else - printf "Using minimal default configuration\n" - write_minimal_default_config "$CONFIG_PATH" - fi - - append_mcp_servers_section "$CONFIG_PATH" - - if [ "$ARG_ENABLE_AIBRIDGE" = "true" ]; then - printf "AI Bridge is enabled\n" - append_aibridge_config_section "$CONFIG_PATH" - fi -} - -function add_instruction_prompt_if_exists() { - if [ -n "${ARG_CODEX_INSTRUCTION_PROMPT:-}" ]; then - AGENTS_PATH="$HOME/.codex/AGENTS.md" - printf "Creating AGENTS.md in .codex directory: %s\\n" "${AGENTS_PATH}" - - mkdir -p "$HOME/.codex" - - if [ -f "${AGENTS_PATH}" ] && grep -Fq "${ARG_CODEX_INSTRUCTION_PROMPT}" "${AGENTS_PATH}"; then - printf "AGENTS.md already contains the instruction prompt. Skipping append.\n" - else - printf "Appending instruction prompt to AGENTS.md in .codex directory\n" - echo -e "\n${ARG_CODEX_INSTRUCTION_PROMPT}" >> "${AGENTS_PATH}" - fi - - if [ ! 
-d "${ARG_CODEX_START_DIRECTORY}" ]; then - printf "Creating start directory '%s'\\n" "${ARG_CODEX_START_DIRECTORY}" - mkdir -p "${ARG_CODEX_START_DIRECTORY}" || { - printf "Error: Could not create directory '%s'.\\n" "${ARG_CODEX_START_DIRECTORY}" - exit 1 - } - fi - else - printf "AGENTS.md instruction prompt is not set.\n" - fi -} - -function add_auth_json() { - AUTH_JSON_PATH="$HOME/.codex/auth.json" - mkdir -p "$(dirname "$AUTH_JSON_PATH")" - AUTH_JSON=$( - cat << EOF -{ - "OPENAI_API_KEY": "${ARG_OPENAI_API_KEY}" -} -EOF - ) - echo "$AUTH_JSON" > "$AUTH_JSON_PATH" -} - -install_codex -codex --version -populate_config_toml -add_instruction_prompt_if_exists - -if [ "$ARG_ENABLE_AIBRIDGE" = "false" ]; then - add_auth_json -fi diff --git a/registry/coder-labs/modules/codex/scripts/install.sh.tftpl b/registry/coder-labs/modules/codex/scripts/install.sh.tftpl new file mode 100644 index 000000000..e76707a93 --- /dev/null +++ b/registry/coder-labs/modules/codex/scripts/install.sh.tftpl @@ -0,0 +1,121 @@ +#!/bin/bash + +set -euo pipefail + +BOLD='\033[0;1m' + +command_exists() { + command -v "$1" > /dev/null 2>&1 +} + +ARG_INSTALL='${ARG_INSTALL}' +ARG_CODEX_VERSION=$(echo -n '${ARG_CODEX_VERSION}' | base64 -d) +ARG_WORKDIR=$(echo -n '${ARG_WORKDIR}' | base64 -d) +ARG_BASE_CONFIG_TOML=$(echo -n '${ARG_BASE_CONFIG_TOML}' | base64 -d) +ARG_ADDITIONAL_MCP_SERVERS=$(echo -n '${ARG_ADDITIONAL_MCP_SERVERS}' | base64 -d) +ARG_ENABLE_AI_GATEWAY='${ARG_ENABLE_AI_GATEWAY}' +ARG_AIBRIDGE_CONFIG=$(echo -n '${ARG_AIBRIDGE_CONFIG}' | base64 -d) +ARG_MODEL_REASONING_EFFORT='${ARG_MODEL_REASONING_EFFORT}' + +echo "--------------------------------" +printf "codex_version: %s\n" "$${ARG_CODEX_VERSION}" +printf "workdir: %s\n" "$${ARG_WORKDIR}" +printf "enable_ai_gateway: %s\n" "$${ARG_ENABLE_AI_GATEWAY}" +echo "--------------------------------" + +function install_codex() { + if [ "$${ARG_INSTALL}" != "true" ]; then + echo "Skipping Codex installation as per configuration." 
+ return + fi + + if [ -s "$HOME/.nvm/nvm.sh" ]; then + export NVM_DIR="$HOME/.nvm" + . "$NVM_DIR/nvm.sh" + fi + + if ! command_exists npm; then + echo "Error: npm is required to install Codex. Install Node.js/npm first or set install_codex = false." + exit 1 + fi + + if ! command_exists nvm; then + mkdir -p "$HOME/.npm-global" + npm config set prefix "$HOME/.npm-global" + export PATH="$HOME/.npm-global/bin:$PATH" + fi + + printf "%s Installing Codex CLI\n" "$${BOLD}" + + if [ -n "$${ARG_CODEX_VERSION}" ]; then + npm install -g "@openai/codex@$${ARG_CODEX_VERSION}" + else + npm install -g "@openai/codex" + fi + printf "%s Installed Codex CLI: %s\n" "$${BOLD}" "$(codex --version)" +} + +function write_minimal_default_config() { + local config_path="$1" + local optional_config="" + + if [ "$${ARG_ENABLE_AI_GATEWAY}" = "true" ]; then + optional_config='model_provider = "aibridge"' + fi + + if [ -n "$${ARG_MODEL_REASONING_EFFORT}" ]; then + optional_config+=$'\n'"model_reasoning_effort = \"$${ARG_MODEL_REASONING_EFFORT}\"" + fi + + cat << EOF > "$${config_path}" +preferred_auth_method = "apikey" +$${optional_config} + +EOF + + if [ -n "$${ARG_WORKDIR}" ]; then + cat << EOF >> "$${config_path}" +[projects."$${ARG_WORKDIR}"] +trust_level = "trusted" + +EOF + fi +} + +function populate_config_toml() { + local config_path="$HOME/.codex/config.toml" + mkdir -p "$(dirname "$${config_path}")" + + if [ -n "$${ARG_BASE_CONFIG_TOML}" ]; then + printf "Using provided base configuration\n" + echo "$${ARG_BASE_CONFIG_TOML}" > "$${config_path}" + else + printf "Using minimal default configuration\n" + write_minimal_default_config "$${config_path}" + fi + + if [ -n "$${ARG_ADDITIONAL_MCP_SERVERS}" ]; then + printf "Adding additional MCP servers\n" + echo "$${ARG_ADDITIONAL_MCP_SERVERS}" >> "$${config_path}" + fi + + if [ "$${ARG_ENABLE_AI_GATEWAY}" = "true" ] && [ -n "$${ARG_AIBRIDGE_CONFIG}" ]; then + if ! 
grep -q '\[model_providers\.aibridge\]' "$${config_path}" 2>/dev/null; then + printf "Adding AI Gateway configuration\n" + echo -e "\n$${ARG_AIBRIDGE_CONFIG}" >> "$${config_path}" + else + printf "AI Gateway provider already defined in config, skipping append\n" + fi + fi +} + +function setup_workdir() { + if [ -n "$${ARG_WORKDIR}" ] && [ ! -d "$${ARG_WORKDIR}" ]; then + echo "Creating workdir: $${ARG_WORKDIR}" + mkdir -p "$${ARG_WORKDIR}" + fi +} + +install_codex +populate_config_toml +setup_workdir diff --git a/registry/coder-labs/modules/codex/scripts/start.sh b/registry/coder-labs/modules/codex/scripts/start.sh deleted file mode 100644 index bac0cb459..000000000 --- a/registry/coder-labs/modules/codex/scripts/start.sh +++ /dev/null @@ -1,229 +0,0 @@ -#!/bin/bash - -source "$HOME"/.bashrc -set -o errexit -set -o pipefail - -command_exists() { - command -v "$1" > /dev/null 2>&1 -} - -if [ -f "$HOME/.nvm/nvm.sh" ]; then - source "$HOME"/.nvm/nvm.sh -else - export PATH="$HOME/.npm-global/bin:$PATH" -fi - -printf "Version: %s\n" "$(codex --version)" -set -o nounset -ARG_CODEX_TASK_PROMPT=$(echo -n "$ARG_CODEX_TASK_PROMPT" | base64 -d) -ARG_CONTINUE=${ARG_CONTINUE:-true} -ARG_ENABLE_AIBRIDGE=${ARG_ENABLE_AIBRIDGE:-false} - -echo "=== Codex Launch Configuration ===" -printf "OpenAI API Key: %s\n" "$([ -n "$ARG_OPENAI_API_KEY" ] && echo "Provided" || echo "Not provided")" -printf "Codex Model: %s\n" "${ARG_CODEX_MODEL:-"Default"}" -printf "Start Directory: %s\n" "$ARG_CODEX_START_DIRECTORY" -printf "Has Task Prompt: %s\n" "$([ -n "$ARG_CODEX_TASK_PROMPT" ] && echo "Yes" || echo "No")" -printf "Report Tasks: %s\n" "$ARG_REPORT_TASKS" -printf "Continue Sessions: %s\n" "$ARG_CONTINUE" -printf "Enable Coder AI Bridge: %s\n" "$ARG_ENABLE_AIBRIDGE" -echo "======================================" -set +o nounset - -SESSION_TRACKING_FILE="$HOME/.codex-module/.codex-task-session" - -find_session_for_directory() { - local target_dir="$1" - - if [ ! 
-f "$SESSION_TRACKING_FILE" ]; then - return 1 - fi - - local session_id - session_id=$(grep "^$target_dir|" "$SESSION_TRACKING_FILE" | cut -d'|' -f2 | head -1) - - if [ -n "$session_id" ]; then - echo "$session_id" - return 0 - fi - - return 1 -} - -store_session_mapping() { - local dir="$1" - local session_id="$2" - - mkdir -p "$(dirname "$SESSION_TRACKING_FILE")" - - if [ -f "$SESSION_TRACKING_FILE" ]; then - grep -v "^$dir|" "$SESSION_TRACKING_FILE" > "$SESSION_TRACKING_FILE.tmp" 2> /dev/null || true - mv "$SESSION_TRACKING_FILE.tmp" "$SESSION_TRACKING_FILE" - fi - - echo "$dir|$session_id" >> "$SESSION_TRACKING_FILE" -} - -find_recent_session_file() { - local target_dir="$1" - local sessions_dir="$HOME/.codex/sessions" - - if [ ! -d "$sessions_dir" ]; then - return 1 - fi - - local latest_file="" - local latest_time=0 - - while IFS= read -r session_file; do - local file_time - file_time=$(stat -c %Y "$session_file" 2> /dev/null || stat -f %m "$session_file" 2> /dev/null || echo "0") - local first_line - first_line=$(head -n 1 "$session_file" 2> /dev/null) - local session_cwd - session_cwd=$(echo "$first_line" | grep -o '"cwd":"[^"]*"' | cut -d'"' -f4) - - if [ "$session_cwd" = "$target_dir" ] && [ "$file_time" -gt "$latest_time" ]; then - latest_file="$session_file" - latest_time="$file_time" - fi - done < <(find "$sessions_dir" -type f -name "*.jsonl" 2> /dev/null) - - if [ -n "$latest_file" ]; then - local first_line - first_line=$(head -n 1 "$latest_file") - local session_id - session_id=$(echo "$first_line" | grep -o '"id":"[^"]*"' | cut -d'"' -f4) - if [ -n "$session_id" ]; then - echo "$session_id" - return 0 - fi - fi - - return 1 -} - -wait_for_session_file() { - local target_dir="$1" - local max_attempts=20 - local attempt=0 - - while [ $attempt -lt $max_attempts ]; do - local session_id - session_id=$(find_recent_session_file "$target_dir" 2> /dev/null || echo "") - if [ -n "$session_id" ]; then - echo "$session_id" - return 0 - fi - sleep 0.5 - 
attempt=$((attempt + 1)) - done - - return 1 -} - -validate_codex_installation() { - if command_exists codex; then - printf "Codex is installed\n" - else - printf "Error: Codex is not installed. Please enable install_codex or install it manually\n" - exit 1 - fi -} - -setup_workdir() { - if [ -d "${ARG_CODEX_START_DIRECTORY}" ]; then - printf "Directory '%s' exists. Changing to it.\\n" "${ARG_CODEX_START_DIRECTORY}" - cd "${ARG_CODEX_START_DIRECTORY}" || { - printf "Error: Could not change to directory '%s'.\\n" "${ARG_CODEX_START_DIRECTORY}" - exit 1 - } - else - printf "Directory '%s' does not exist. Creating and changing to it.\\n" "${ARG_CODEX_START_DIRECTORY}" - mkdir -p "${ARG_CODEX_START_DIRECTORY}" || { - printf "Error: Could not create directory '%s'.\\n" "${ARG_CODEX_START_DIRECTORY}" - exit 1 - } - cd "${ARG_CODEX_START_DIRECTORY}" || { - printf "Error: Could not change to directory '%s'.\\n" "${ARG_CODEX_START_DIRECTORY}" - exit 1 - } - fi -} - -build_codex_args() { - CODEX_ARGS=() - - if [[ -n "${ARG_CODEX_MODEL}" ]]; then - CODEX_ARGS+=("--model" "${ARG_CODEX_MODEL}") - fi - - if [ "$ARG_CONTINUE" = "true" ]; then - existing_session=$(find_session_for_directory "$ARG_CODEX_START_DIRECTORY" 2> /dev/null || echo "") - - if [ -n "$existing_session" ]; then - printf "Found existing task session for this directory: %s\n" "$existing_session" - printf "Resuming existing session...\n" - CODEX_ARGS+=("resume" "$existing_session") - else - printf "No existing task session found for this directory\n" - printf "Starting new task session...\n" - - if [ -n "$ARG_CODEX_TASK_PROMPT" ]; then - if [ "${ARG_REPORT_TASKS}" == "true" ]; then - PROMPT="Complete the task at hand in one go. Every step of the way, report your progress using coder_report_task tool with proper summary and statuses. 
Your task at hand: $ARG_CODEX_TASK_PROMPT" - else - PROMPT="Your task at hand: $ARG_CODEX_TASK_PROMPT" - fi - CODEX_ARGS+=("$PROMPT") - fi - fi - else - printf "Continue disabled, starting fresh session\n" - - if [ -n "$ARG_CODEX_TASK_PROMPT" ]; then - if [ "${ARG_REPORT_TASKS}" == "true" ]; then - PROMPT="Complete the task at hand in one go. Every step of the way, report your progress using Coder.coder_report_task tool with proper summary and statuses. Your task at hand: $ARG_CODEX_TASK_PROMPT" - else - PROMPT="Your task at hand: $ARG_CODEX_TASK_PROMPT" - fi - CODEX_ARGS+=("$PROMPT") - fi - fi -} - -capture_session_id() { - if [ "$ARG_CONTINUE" = "true" ] && [ -z "$existing_session" ]; then - printf "Capturing new session ID...\n" - new_session=$(wait_for_session_file "$ARG_CODEX_START_DIRECTORY" || echo "") - - if [ -n "$new_session" ]; then - store_session_mapping "$ARG_CODEX_START_DIRECTORY" "$new_session" - printf "✓ Session tracked: %s\n" "$new_session" - printf "This session will be automatically resumed on next restart\n" - else - printf "⚠ Could not capture session ID after 10s timeout\n" - fi - fi -} - -start_codex() { - printf "Starting Codex with arguments: %s\n" "${CODEX_ARGS[*]}" - # AGENTAPI_BOUNDARY_PREFIX is set by the agentapi module's main.sh when - # enable_boundary=true. It points to a wrapper script that runs the command - # through coder boundary, sandboxing only the agent process. 
- if [ -n "${AGENTAPI_BOUNDARY_PREFIX:-}" ]; then - printf "Starting with coder boundary enabled\n" - agentapi server --type codex --term-width 67 --term-height 1190 -- \ - "${AGENTAPI_BOUNDARY_PREFIX}" codex "${CODEX_ARGS[@]}" & - else - agentapi server --type codex --term-width 67 --term-height 1190 -- codex "${CODEX_ARGS[@]}" & - fi - capture_session_id -} - -validate_codex_installation -setup_workdir -build_codex_args -start_codex diff --git a/registry/coder-labs/modules/codex/testdata/codex-mock.sh b/registry/coder-labs/modules/codex/testdata/codex-mock.sh index fe8f3806c..a73b70b5c 100644 --- a/registry/coder-labs/modules/codex/testdata/codex-mock.sh +++ b/registry/coder-labs/modules/codex/testdata/codex-mock.sh @@ -1,38 +1,9 @@ #!/bin/bash -# Handle --version flag if [[ "$1" == "--version" ]]; then - echo "HELLO: $(bash -c env)" echo "codex version v1.0.0" exit 0 fi -set -e - -SESSION_ID="" -IS_RESUME=false - -while [[ $# -gt 0 ]]; do - case $1 in - resume) - IS_RESUME=true - SESSION_ID="$2" - shift 2 - ;; - *) - shift - ;; - esac -done - -if [ "$IS_RESUME" = false ]; then - SESSION_ID="019a1234-5678-9abc-def0-123456789012" - echo "Created new session: $SESSION_ID" -else - echo "Resuming session: $SESSION_ID" -fi - -while true; do - echo "$(date) - codex-mock (session: $SESSION_ID)" - sleep 15 -done +echo "codex invoked with: $*" +exit 0