
[pull] develop from baserow:develop #234

Merged
pull[bot] merged 4 commits into code:develop from baserow:develop
Apr 1, 2026

Conversation


pull[bot] commented Apr 1, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)

Can you help keep this open source service alive? 💖 Please sponsor : )

…4970)

* feat(assistant): add theme catalog, apply_theme, and theme templates

Add 20 new theme templates and theme infrastructure:
- THEME_CATALOG with 22 themes, ThemeName type, _load_theme_data(), apply_theme()
- BuilderItemCreate.theme field with auto-apply on creation
- set_theme() tool for changing themes on existing applications
- Unit tests and eval tests for theme functionality

* refactor(assistant): migrate theme evals to EvalChecklist pattern

Replace removed assert_no_tool_errors with EvalChecklist + count_tool_errors
in test_agent_creates_app_with_theme and test_agent_changes_theme. Remove
stale test_navigate_to_workspace test.

* refactor(assistant): move theme code from core/types to builder/themes

Theme catalog, apply_theme, and related helpers are a builder concern,
not core. Move them to tools/builder/themes.py so core never imports
from builder. Also make apply_theme return bool so set_theme can report
errors instead of silently claiming success, and fix "Thank youb" typo
across all 23 theme templates.

* refactor (AI field): remove langchain and use pydantic-ai (#5017)

* refactor(ai): replace langchain with pydantic-ai for LLM calls and structured output

Replace langchain dependency with pydantic-ai as the LLM abstraction
layer. Each GenerativeAIModelType now implements get_ai_model() returning
a pydantic-ai Model instance, and the base class provides prompt() and
prompt_structured() using Agent.run_sync().

- Remove langchain, langchain-openai, and direct openai/anthropic/mistralai
  deps (pydantic-ai-slim extras install them)
- Add prompt_structured() with PromptedOutput for Pydantic model output
  (e.g. BaserowFormulaModel)
- Add output_choices parameter to prompt() for constrained choice selection
  with fuzzy matching, compatible with all models including those without
  tool support
- Remove custom output_parsers.py (StrictEnumOutputParser, JsonOutputParser
  replacements)
- Update AI field job flow to use prompt(output_choices=...) for choice
  output and prompt_with_files fallback for file-based prompts

* refactor(ai): replace deprecated Assistants API with pydantic-ai multi-modal content

Replace OpenAI's deprecated Assistants API (file_search, threads, polling)
with pydantic-ai's native BinaryContent support. Files are now read from
storage and passed directly in the prompt as multi-modal content — no
upload/delete lifecycle needed.

- Add content parameter to prompt() for multi-modal file attachments
- Remove GenerativeAIWithFilesModelType abstract class and its upload/
  delete/prompt_with_files methods
- Remove get_client() from BaseOpenAIGenerativeAIModelType (no longer
  needed without file operations)
- Replace AIFileManager.upload_files_from_file_field() with
  get_file_contents() returning BinaryContent objects
- Simplify job_types.py: file+choice paths merge into single prompt()
  call with content and output_choices parameters
- Remove format_prompt/parse_output hooks (no longer needed)
- Remove FileId type alias and AIFileError exception
- Works with any provider that supports documents (OpenAI, Anthropic,
  Mistral, Google) instead of only OpenAI
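The "no upload lifecycle" flow above can be sketched like this. `BinaryContent` here is a stand-in mirroring the shape of pydantic-ai's class of the same name; `get_file_contents` and the storage dict are illustrative, not Baserow's real storage layer.

```python
# Sketch: file bytes are read from storage and embedded directly in the
# prompt content list -- no upload, polling, or delete step afterwards.
from dataclasses import dataclass


@dataclass
class BinaryContent:
    data: bytes
    media_type: str


def get_file_contents(storage: dict[str, bytes], names: list[str]) -> list[BinaryContent]:
    """Read each file from storage and wrap it as in-prompt binary content."""
    return [
        BinaryContent(data=storage[name], media_type="application/pdf")
        for name in names
        if name in storage
    ]


storage = {"report.pdf": b"%PDF-1.4 ..."}
# The prompt becomes a mixed list of text and binary parts.
prompt_parts = ["Summarise the attached file.", *get_file_contents(storage, ["report.pdf"])]
```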

* refactor(ai): unify prompt API into single method with output_type parameter

Merge prompt_structured() into prompt() via the output_type parameter:
- None (default): plain text response
- list[str]: choice selection with fuzzy matching
- Pydantic BaseModel/TypedDict: structured output via PromptedOutput

Remove prompt_structured(), output_choices parameter, and dead code
(format_prompt, parse_output hooks, AIFileError, FileId type).
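The three-way dispatch above can be sketched with a stubbed model reply in place of a real LLM call. The branch structure (None for text, `list[str]` for choices, a class for structured output) follows the commit message; the function body and fake reply are assumptions.

```python
# Hedged sketch of the unified prompt() API. _model_reply stands in for
# the text an LLM would return; Sentiment is an illustrative output class.
from dataclasses import dataclass


def prompt(message: str, output_type=None, _model_reply: str = "positive"):
    """Dispatch on output_type: plain text, choice selection, or structured."""
    if output_type is None:
        return _model_reply  # plain text response
    if isinstance(output_type, list):
        # Choice selection: case-insensitive match against allowed values.
        cleaned = _model_reply.strip().lower()
        for choice in output_type:
            if choice.lower() == cleaned:
                return choice
        raise ValueError(f"{_model_reply!r} is not one of {output_type}")
    # Otherwise treat output_type as a structured-output class.
    return output_type(_model_reply)


@dataclass
class Sentiment:
    label: str
```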

* refactor(ai): clean file handling API with model type owning processing logic

Redesign file handling so the model type owns all file processing decisions
(embed vs upload, size limits, count limits) while the manager only reads
metadata from storage and provides a lazy reader callback.

- Add supports_files flag and prepare_files() to GenerativeAIModelType base
- OpenAI implementation: embeds images inline (BinaryContent, 50MB/500 limit),
  uploads documents (UploadedFile via Responses API)
- Use OpenAIResponsesModel instead of OpenAIChatModel for OpenAI provider
  (supports all file types via input_file/input_image)
- AIFileManager.prepare_file_content() collects metadata, passes lazy
  read_file callback — no data read until model type requests it
- Remove is_file_compatible/get_max_file_size from public API (moved into
  prepare_files)
- Add type hints throughout
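The division of labour above can be sketched as follows: the manager supplies metadata plus a lazy reader, and the model type decides what to embed and only reads bytes when needed. The limits and class names are illustrative placeholders, not Baserow's real values.

```python
# Hedged sketch of prepare_files() with a lazy read_file callback.
from dataclasses import dataclass
from typing import Callable


@dataclass
class FileRef:
    name: str
    size: int
    media_type: str
    read_file: Callable[[], bytes]  # lazy: no data read until invoked


MAX_EMBED_SIZE = 50 * 1024 * 1024  # hypothetical 50MB inline limit
MAX_FILES = 500  # hypothetical count limit


def prepare_files(files: list[FileRef]) -> list[tuple[str, bytes]]:
    """Embed eligible files inline, skipping per-file failures."""
    prepared = []
    for ref in files[:MAX_FILES]:
        if ref.size > MAX_EMBED_SIZE:
            continue  # too large to embed inline; bytes never read
        try:
            prepared.append((ref.name, ref.read_file()))
        except FileNotFoundError:
            continue  # missing file: skip instead of failing the whole run
    return prepared
```

The per-file try/except mirrors the "skip instead of failing the whole generation" fix described in a later commit of this PR.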

* fix(assistant): fix generate_formula tests and grid view row_height

- Add output_type param to TestGenerativeAIModelTypePromptError.prompt()
- Return BaserowFormulaModel in test mocks instead of raw JSON string
- Use correct ORM field name row_height_size in grid view from_django_orm

* docs: add AI field test plan

* refactor(ai): improve docstrings, fix prompt signature and empty prompt handling

* refactor(ai): merge ERROR_OUTPUT_PARSER into ERROR_GENERATIVE_AI_PROMPT

The distinction between output parsing errors and prompt errors didn't
provide actionable info to the user. Both now surface as a single
ERROR_GENERATIVE_AI_PROMPT error code.

* refactor(ai): move value generation into AIFieldHandler with AIFile abstraction

Introduce AIFile dataclass that wraps serialized UserFile dicts with
lazy read_content(). Move the core generation logic from
AIValueGenerator._generate_value_for into AIFieldHandler.generate_value_with_ai
so the handler owns the full flow: prompt resolution, file preparation,
AI call, file cleanup, and choice resolution.

- prepare_files() now takes list[AIFile] and returns only processed files
- cleanup_files() mirrors prepare_files() for provider file cleanup
- delete_file() takes AIFile instead of raw file_id string
- AIFieldEmptyPromptError raised when prompt resolves to empty
- Delete AIFileManager (absorbed into handler + model type)

* docs: add AI field architecture overview

* fix(ai): handle missing files and partial upload cleanup in prepare_files

- OpenAI prepare_files() now catches per-file errors (missing files,
  upload failures) and skips them instead of failing the whole generation
- Handler cleanup uses ai_files instead of prepared so files uploaded
  before a mid-prepare failure are still cleaned up

* fix(ai): normalize LLM responses before choice matching

Strip quotes, markdown bold, backticks, and trailing punctuation from
model responses before fuzzy matching. Use case-insensitive comparison
so responses like "POSITIVE" correctly match "Positive".

* address feedback

* Address feedback
…4980)

* feat(mcp): refactor tools to service layer and add database/table/field CRUD

Replace internal API request pattern with a dedicated service module for
MCP operations. Simplify tool architecture from dynamic per-table tools
to static tools with explicit table_id parameters. Add new tools for
managing databases, tables, fields, and batch row operations.

* fix(mcp): address Copilot review feedback

- Let exceptions propagate from MCPTool.call() so call_tool() logs them
- Fix N+1 queries in get_table_schema using enhance_field_queryset hook
- Fetch fields for all tables in a single specific_iterator pass
- Clamp list_rows pagination to safe bounds (ROW_PAGE_SIZE_LIMIT)
- Fix update_fields docstring to reflect type-change support
- Fix module path in utils.py comment
- Update "Adding a new tool" docs to match current MCPTool pattern
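The pagination clamp mentioned above amounts to a one-liner; `ROW_PAGE_SIZE_LIMIT`'s real value in Baserow may differ from this placeholder.

```python
# Hedged sketch of clamping a client-supplied page size to safe bounds.
ROW_PAGE_SIZE_LIMIT = 200  # illustrative upper bound


def clamp_page_size(requested: int) -> int:
    """Keep a client-supplied page size within [1, ROW_PAGE_SIZE_LIMIT]."""
    return max(1, min(requested, ROW_PAGE_SIZE_LIMIT))
```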

* fix: comment risky tools; waiting for a proper permission check

* feat(mcp): add enabled flag to MCPTool and disable risky tools

Add `enabled` attribute to MCPTool base class so tools can be registered
but hidden from MCP clients. Disable database/table/field CRUD tools
until users can control tool availability through the UI.

Also add docs/testing/mcp-test-plan.md with manual testing instructions.
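The `enabled` flag described above can be sketched like this: tools stay registered, but disabled ones are filtered out of what MCP clients see. The class and tool names are illustrative, not Baserow's real MCP classes.

```python
# Hedged sketch of MCPTool.enabled and client-facing tool filtering.
class MCPTool:
    name = "base"
    enabled = True  # subclasses set False to register but hide the tool


class ListRowsTool(MCPTool):
    name = "list_rows"


class DeleteTableTool(MCPTool):
    name = "delete_table"
    enabled = False  # risky: hidden until UI-level tool control exists


def visible_tools(registry: list[MCPTool]) -> list[str]:
    """Only enabled tools are advertised to MCP clients."""
    return [tool.name for tool in registry if tool.enabled]
```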

* fix(mcp): replace list[dict] with typed Pydantic models for field/row specs

Validate required keys (name+type for field create, id for field update,
id for row update) at the schema level instead of relying on KeyError
from dict.pop() in the service layer.

Uses extra="allow" so type-specific options and field values pass through.
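The typed specs above can be sketched as follows, assuming pydantic v2. The model names and required keys follow the commit message; the extra option names used in the test are examples only.

```python
# Hedged sketch of typed field specs with extra="allow" pass-through.
from pydantic import BaseModel, ConfigDict


class FieldCreateSpec(BaseModel):
    # extra="allow" lets type-specific options pass through unvalidated.
    model_config = ConfigDict(extra="allow")
    name: str
    type: str


class FieldUpdateSpec(BaseModel):
    model_config = ConfigDict(extra="allow")
    id: int
```

Missing required keys now fail at schema validation rather than surfacing as a `KeyError` from `dict.pop()` deep in the service layer.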

* fix: wrap update_rows in a transaction

* address feedback
…#5093)

* skill: add the silk-profiler skill to investigate backend bottlenecks

* address feedback
pull[bot] locked and limited conversation to collaborators Apr 1, 2026
pull[bot] added the ⤵️ pull label Apr 1, 2026
pull[bot] merged commit f37d3d4 into code:develop Apr 1, 2026

1 participant