feat: refactor MCP, fix Codex errors, reorganize AI agents documentation #2711
andrii-harbour wants to merge 4 commits into main
Conversation
andrii-harbour
commented
Apr 2, 2026
- Removed outdated GDPval benchmark command from evals section in AGENTS.md.
- Updated the structure of the docs.json file to categorize AI agents under "MCP" and "Agents" groups, adding new pages for MCP and skills.
- Introduced new documentation files for best practices, debugging, eval results, integrations, and skills, providing comprehensive guidance on using SuperDoc tools with LLMs.
- Added detailed instructions on how to use the MCP server and its debugging features, enhancing the overall documentation for better user experience.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b0e5686817
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
  for (const relPath of tempFiles) {
    // Skip manually maintained files that live alongside generated artifacts
-   if (relPath === '__init__.py' || relPath === 'system-prompt.md') continue;
+   if (relPath === '__init__.py' || relPath.startsWith('prompt-templates/')) continue;
Normalize prompt-template skip check for Windows paths
collectFiles builds relPath with path.join, so on Windows the value is prompt-templates\... instead of prompt-templates/.... The new relPath.startsWith('prompt-templates/') guard will not match there, causing sdk-generate --check to treat manual template files as drift/stale and fail despite correct generated output. Make this check separator-agnostic (e.g., normalize separators or inspect the first path segment).
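A separator-agnostic version of the guard could compare the first path segment instead of a '/'-prefixed string, so values built with path.join match on both POSIX and Windows. A minimal sketch (the function name is illustrative):

```javascript
// Hypothetical sketch of a separator-agnostic skip check: inspect the
// first path segment, so 'prompt-templates\\x' (Windows) and
// 'prompt-templates/x' (POSIX) both match.
function isManuallyMaintained(relPath) {
  if (relPath === '__init__.py') return true;
  const firstSegment = relPath.split(/[\\/]/)[0]; // splits on '/' or '\'
  return firstSegment === 'prompt-templates';
}
```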
* feat(evals): add extractDocxText utility for benchmark text extraction
* feat(evals): add benchmarkMetrics assertion for Level 3 benchmark
* feat(evals): add Claude Code benchmark provider for Level 3
* feat(evals): add Codex benchmark provider for Level 3
* feat(evals): add 18 benchmark tasks for Level 3 agent comparison
* feat(evals): add benchmark report generator for Level 3
* feat(evals): add Level 3 benchmark Promptfoo config with 10 conditions
* fix(evals): fix providers and assertions for Level 3 benchmark
- Fix cwd ENOENT: create stateDir before passing to SDK query()
- Fix Claude Code provider: clean up, remove pathToClaudeCodeExecutable hacks
- Fix Codex provider: match real SDK API (command_execution items, approvalPolicy)
- Fix test assertions: match actual fixture content
- contract.docx -> report-with-formatting.docx for heading tasks
- [Employee Name] -> [Candidate Name] for employment-offer.docx
- Fix $150M collateral check (XML extraction splits as "1 50")
- Upgrade @anthropic-ai/claude-agent-sdk to ^0.2.87
* fix(evals): fix sandbox writes, add useClaudeSettings, MCP support
- Copy fixture into stateDir so agents can write within their sandbox
- Add stateDir fallback for output file detection
- Add useClaudeSettings option to inherit local Claude Code config
(MCP servers, skills, CLAUDE.md) via settingSources
- Add CC-local condition for testing with user's own Claude Code setup
- Wire superdocMcp config to attach SuperDoc MCP server via mcpServers
- Add preeval:benchmark script to build MCP server before runs
- Add model, maxTurns, systemPrompt config options
* test(evals): add e2e smoke test for Level 3 benchmark providers
Standalone test script that verifies both providers end-to-end:
- Claude baseline read/edit (without SuperDoc)
- Claude superdoc-skill with MCP (superdoc_open → get_content → close)
- Claude local with useClaudeSettings
- Codex baseline read/edit (without SuperDoc)
- Codex with SuperDoc MCP
Run: node evals/scripts/smoke-test-benchmark.mjs --claude --codex
* feat(evals): enforce SuperDoc MCP usage via system prompt and AGENTS.md
- Add system prompt for superdoc conditions instructing agents to use
SuperDoc MCP tools exclusively, not raw unzip/XML
- Write AGENTS.md in working directory reinforcing SuperDoc tool usage
- Restrict CC-superdoc-skill allowedTools to Read/Glob/Grep (no Bash)
so agents cannot fall back to raw DOCX manipulation
- Add prompt reinforcement for Codex superdoc conditions
- Verified: Claude superdoc-skill read + edit both use MCP exclusively
(superdoc_open → search → edit → save → close, zero Bash calls)
* fix(evals): pass OPENAI_API_KEY to Codex SDK, update smoke tests
- Pass process.env.OPENAI_API_KEY to new Codex({ apiKey }) so the SDK
uses API key auth instead of relying on codex login session
- Add Claude edit + MCP tests to smoke test script
- Verified: Codex baseline read + edit pass with API key auth
- Known: Codex MCP calls fail due to rmcp protocol incompatibility
in the Codex CLI (serde error on tool calls, Transport closed)
* fix(mcp,evals): fix stdout corruption killing Codex MCP transport
Root cause: console.debug('[super-editor] Telemetry: enabled') in
Editor.ts writes to stdout when superdoc_open initializes the editor.
The Codex CLI's Rust MCP client (rmcp) parses stdout as JSON-RPC and
dies with "serde error expected value at line 1 column 2" on the
non-JSON line, closing the transport.
Fixes:
- Redirect all console methods (log/info/debug/warn) to stderr in
the MCP server entry point, before any imports run
- Add mcp_auto_approve config for Codex to auto-approve MCP tool calls
(approval_policy=never only covers shell commands, not MCP)
- Add stdio wrapper script for transport debugging (logs raw bytes)
- Use runStreamed() in Codex provider to capture full MCP event lifecycle
- Pass minimal env to prevent other stdout pollution from deps
- Add preflight check for MCP server build artifact
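The console redirect described above could look like this minimal sketch, run at the very top of the MCP server entry point before any imports:

```javascript
// Hypothetical sketch of the entry-point guard: rebind every console
// method to stderr before any module code runs, so stdout carries
// nothing but JSON-RPC frames for the MCP transport.
for (const method of ['log', 'info', 'debug', 'warn']) {
  console[method] = (...args) => {
    process.stderr.write(args.map(String).join(' ') + '\n');
  };
}
```

With this in place, stray lines like `[super-editor] Telemetry: enabled` land on stderr, where the rmcp client ignores them.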
* refactor(evals): trim benchmark to 6 compact tasks for v1
Reduce from 18 to 6 tasks (3 reading + 3 editing) for faster iteration.
Full suite: 12 runs in 3 minutes, 100% pass rate on Codex baseline +
superdoc-skill conditions.
Tasks: extract headings, extract entities, extract financials,
replace entity, insert section, fill placeholders.
* fix(evals): fix report generator to extract metrics from parsed output
* feat(evals): improve benchmark report with full AC metrics
- Add per-task detail table with every metric per condition
- Add input/output token breakdown (not just total)
- Add p95 latency alongside median
- Add estimated cost per task (based on model token pricing)
- Add comprehensive recommendation with latency, token, cost, steps,
and collateral comparisons between conditions
- Fix task description extraction from vars.task fallback
* feat(evals): split benchmark metrics into individual Promptfoo columns
Replace single benchmarkMetrics assertion with separate per-metric
assertions (steps, latency, tokens, path), each with its own metric
tag. Promptfoo displays these as individual columns with actual numeric
values instead of a single "efficiency 1.00" score.
Columns visible in UI: correctness, collateral, steps, latency, tokens, path
* fix(evals): create superdoc CLI wrapper on PATH for superdoc-cli condition
The superdocOnPath flag was a no-op because the SuperDoc CLI was never
installed as a binary on PATH. Now creates a shell wrapper script in
the stateDir's bin/ that delegates to apps/cli/dist/index.js, and
prepends it to the agent's PATH.
Finding: even with superdoc on PATH, Codex doesn't discover or use it
without explicit instruction. All superdoc-cli runs fall back to raw
unzip/XML. This is valid benchmark data.
* feat(evals): enforce SuperDoc usage and fail when agents don't use it
- benchmarkPath assertion now FAILS when superdoc-skill or superdoc-cli
conditions don't use SuperDoc (was always passing before)
- Add AGENTS.md + prompt hint for superdoc-cli condition telling agents
the CLI exists on PATH with common commands
- Split MCP and CLI AGENTS.md templates in both providers
- Verified: all 3 Codex conditions use correct path
(baseline=raw, superdoc-skill=MCP, superdoc-cli=CLI)
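The strict benchmarkPath behavior could be sketched like this; condition and path names follow the labels in these commits, but the function shape is an assumption:

```javascript
// Hypothetical sketch of the strict benchmarkPath assertion: superdoc
// conditions must actually take the SuperDoc path, while baseline may
// use anything (including raw unzip/XML).
function benchmarkPath(condition, observedPath) {
  const required = { 'superdoc-skill': 'mcp', 'superdoc-cli': 'cli' }[condition];
  if (!required) return { pass: true, reason: 'no path requirement' };
  const pass = observedPath === required;
  return { pass, reason: pass ? `used ${required}` : `expected ${required}, got ${observedPath}` };
}
```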
* feat(evals): add _summary field for readable Promptfoo cell previews
Add a _summary line at the top of provider JSON output showing
path | steps | latency | tokens at a glance. Promptfoo renders the
start of the output in each table cell, so this gives immediate
visibility without clicking into the detail view.
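A minimal sketch of the _summary pattern, with illustrative field names:

```javascript
// Hypothetical sketch: put a one-line digest first in the provider's
// JSON output, since Promptfoo previews the start of each table cell.
function withSummary(result) {
  const _summary = `${result.path} | ${result.steps} steps | ${result.latencyMs} ms | ${result.tokens} tok`;
  return { _summary, ...result }; // spread order keeps _summary first
}
```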
* feat(evals): add derivedMetrics and weight:0 for info-only metrics
- Add derivedMetrics: avg_latency, avg_steps, avg_tokens,
superdoc_usage_pct - computed per provider after evaluation
- Set weight: 0 on steps/latency/tokens assertions so they report
values without affecting pass/fail score
- Only correctness, collateral, and path drive pass/fail
- Click "Show Charts" in Promptfoo UI for visual comparison
* feat(evals): add unit labels to metric names for self-documenting UI
* revert(evals): restore original metric names
* feat(evals): add Anthropic vendor DOCX skill to benchmark matrix
Add the Anthropic DOCX skill (from anthropics/skills repo) as the
vendor condition. When vendorSkill: true, the skill is installed as
AGENTS.md in the working directory, teaching agents to use unzip/XML
for reading and docx-js for creation.
This completes the benchmark matrix:
- baseline: no skill, agent figures it out
- vendor: Anthropic's DOCX skill (unzip + docx-js)
- superdoc-skill: SuperDoc MCP server
- superdoc-cli: SuperDoc CLI on PATH
- choice: all available, agent picks
* refactor(evals): clean up benchmark config to 4 conditions × 2 agents
* fix(evals): use CLAUDE.md instead of AGENTS.md for Claude Code provider
Claude Agent SDK reads CLAUDE.md (not AGENTS.md) for project context.
Write vendor skill and CLI instructions as CLAUDE.md in the stateDir,
and enable settingSources: ['project'] so the SDK loads it.
* feat(docs): document Level 3 DOCX agent benchmark in CLAUDE.md
* docs(evals): add guide for reading Level 3 benchmark results
* docs(evals): add PRD for benchmark v2 document fidelity scoring
* Revert "docs(evals): add PRD for benchmark v2 document fidelity scoring"
This reverts commit 85108ac.
* feat(evals): add DOCX fidelity checker utility
* feat(evals): add v2 fixture documents with rich formatting
Creates 4 DOCX fixtures designed to be fragile under raw XML edits:
- consulting-agreement.docx: bold defined terms, italic refs, 6 heading sections, $250k indemnification cap, net 45 payment terms
- pricing-proposal.docx: 4-row pricing table with shaded header, right-aligned prices, US Letter page size
- contract-redlines.docx: 3 tracked insertions + 2 deletions by Jane Editor, 2 reviewer comments by Bob Reviewer
- policy-manual.docx: 3-level nested numbered list (1./1.1/a)), header/footer with page numbers, page breaks between sections
Adds create-v2-fixtures.mjs generator script and docx@9.6.1 dev dependency.
* feat(evals): add benchmarkFidelity and benchmarkDiff assertions
* feat(evals): add 6 fidelity-sensitive v2 benchmark tasks
* feat(evals): add benchmark v2 with document fidelity scoring
New capabilities:
- docx-fidelity.mjs: OOXML structural checker (formatting, styles,
numbering, tracked changes, comments, tables, XML diff)
- benchmarkFidelity assertion: runs fidelity checks on output DOCX
- benchmarkDiff assertion: measures XML change ratio (surgical vs rewrite)
New fixtures (all synthetic names):
- consulting-agreement.docx: bold terms, italic refs, numbered sections
- pricing-proposal.docx: table with alignment and styled header
- contract-redlines.docx: existing tracked changes and comments
- policy-manual.docx: 3-level nested numbered lists
6 new fidelity tasks (CEO examples):
- Mixed formatting replace (bold preservation)
- Table cell edit (structure preservation)
- Tracked changes edit (annotation survival)
- Nested list insert (numbering continuation)
- Multi-step workflow (heading style check)
- Edit with existing annotations (comment survival)
92 tests total: 69 checks.cjs + 23 docx-fidelity
* fix(evals): fix fidelity assertion issues found in first v2 run
1. outputFile pointed to unedited fixture copy instead of localDocPath
(the file the agent actually edits in stateDir)
2. Comment IDs in fidelity checks used "0","1" but fixture has "1","2"
3. Table cell text used exact match instead of includes
4. Remove overly strict paragraphStyle check on multi-step task
* feat(evals): redesign v2 tasks around proven SuperDoc advantages
Category A — Structural creation (SuperDoc proven):
- Create heading with Heading1 style
- Create table with borders and data rows
Category B — Formatting (SuperDoc proven):
- Make specific text bold
- Replace text preserving formatting
Category C — Complex edits (track improvement):
- Tracked change replacement
- Add comment to clause
* fix(evals): stop loading user MCP servers, reduce token cost 30%
Remove settingSources which loaded ALL user MCP servers (43 Linear,
5 Excalidraw, Gmail, etc.) adding ~4000 tokens per turn. Pass
CLAUDE.md content as systemPrompt instead.
Result: 30% cost reduction ($0.97 -> $0.68 for NDA creation).
* docs(evals): add benchmark findings and next steps document
* fix(evals): set settingSources: [] for SDK isolation mode
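The isolation setup in the two commits above could be sketched as follows, assuming the Claude Agent SDK option names they mention (settingSources, systemPrompt); claudeMdContent stands in for the real file contents:

```javascript
// Hypothetical sketch of SDK isolation mode: load no user-level config
// (so none of the user's MCP servers are attached), and pass the
// CLAUDE.md guidance explicitly as the system prompt instead.
const claudeMdContent = '# Project context\nUse SuperDoc MCP tools for DOCX work.';

const options = {
  settingSources: [],            // isolation: skip user settings entirely
  systemPrompt: claudeMdContent, // project guidance passed explicitly
};
```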
* docs(evals): add MCP efficiency analysis with prioritized fixes
* refactor(evals): update provider labels in benchmark configuration for clarity
Renamed provider labels in promptfooconfig.benchmark.yaml to better reflect what each condition does: 'CC-vendor' to 'CC-with-docx-skill', 'CC-superdoc-skill' to 'CC-superdoc-mcp', and similar renames for consistency.
|
@andrii-harbour let me review this one before merging please
…rkflows (#2722)
* feat(sdk): update tool definitions for efficient multi-block workflows
  - superdoc_edit: emphasize markdown insert for multi-section creation
  - superdoc_create: direct to markdown/mutations for multiple items
  - superdoc_mutations: document create steps and batch format pattern
  - superdoc_format: direct to mutations for multi-item formatting
  - superdoc_search: clarify ref lifecycle within vs across batches
  - system-prompt: add efficient document creation workflow
* feat(evals,sdk): add efficient workflow patterns to all agent touchpoints
  - Update provider SUPERDOC_SYSTEM_PROMPT with markdown insert and mutations batch examples (what CC actually reads as system prompt)
  - Update Codex AGENTS.md with same efficient patterns
  - Update MCP header prompt with "when to use which tool" guide
  - Increase CC maxTurns from 20 to 35 (both CC failures were at 21)
  - Regenerate SDK artifacts and rebuild MCP server
* feat(evals): enable tool search to reduce token overhead
* docs(ai): add markdown insert pattern and formatting guidance
* docs(ai): add efficient patterns to MCP how-to-use guide
* fix(evals): remove debug console.log that dumped every SDK message
* feat(document-api): add alignment field to StyleApplyStep and StyleApplyInput types
* fix(document-api): keep inline required on StyleApplyInput, guard optional inline in step executors
* feat(document-api): add alignment to format.apply step JSON schema
* feat(super-editor): support alignment in format.apply mutation step
* docs(sdk): update tool descriptions to show alignment inside format.apply step
* feat(document-api): add scope: block to format.apply for full-paragraph formatting
* feat(document-api): allow placement and BlockNodeAddress target for markdown inserts
* chore: regenerate SDK artifacts and docs from updated contract
* feat(evals): add new NDA documents and implement interactive DOCX output reviewer
* fix: address PR review — minProperties, RichContentInsertInput type, deduplicate alignment constant
* Revert "fix: address PR review — minProperties, RichContentInsertInput type, deduplicate alignment constant"
  This reverts commit 4c04ebd.
* fix(document-api): add minProperties, type export, shared alignment constant
* docs(sdk): require fontSize on headings after markdown insert
* docs(sdk): context-driven formatting guidance for markdown inserts
* docs(sdk): only set properties explicitly present in document blocks
* feat(super-editor): resolve default fontSize in get_content blocks response
* fix(super-editor): fallback to 10pt default when styles omit fontSize
* fix(super-editor): resolve fontSize per-block via style chain in get_content
* test(super-editor): add fontSize style chain resolution tests for blocks.list
* docs(sdk): guide agents to match uppercase title conventions