feat(variable-schema): add refactored db tables #1010
Conversation
* add migration for new tables variable and variable_value
📝 Walkthrough

Unified variable model: the legacy per-scope variable tables are replaced by unified `variable` / `variable_value` tables.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant APIRoute
    participant DB as Database
    participant Resolver as VariableResolver
    participant Release as ReleaseEngine
    Client->>APIRoute: request variables (list/upsert/delete)
    APIRoute->>DB: List/Upsert/Delete on variable / variable_value (scope filtered)
    DB-->>APIRoute: rows (with JSON-aggregated values)
    APIRoute->>APIRoute: unmarshal values array → flattenVariableValue → toClientVariableValue
    APIRoute-->>Client: response { variable, values }
    Release->>Resolver: resolve variables for deployment
    Resolver->>DB: ListVariablesWithValuesByDeploymentID / ByResourceID
    DB-->>Resolver: aggregated rows
    Resolver->>Resolver: unmarshal → flattenVariableValue per value
    Resolver->>Resolver: effectiveValue: filter by selector, sort by priority DESC, pick first
    Resolver-->>Release: resolved variable map
```

```mermaid
sequenceDiagram
    participant Old as Old Schema
    participant Backfill
    participant New as New Schema
    Old->>Backfill: read deployment_variable / resource_variable rows
    Backfill->>New: INSERT INTO variable (scope='deployment'|'resource', ...)
    Backfill->>New: INSERT INTO variable_value (kind='literal'|'ref', priority, resourceSelector, ...)
    New-->>Backfill: idempotent inserts (ON CONFLICT DO NOTHING)
```
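The resolver's effectiveValue step described above (filter candidates by selector, sort by priority descending, take the first value that resolves) can be sketched in TypeScript. This is an illustrative sketch, not the actual resolver code; the `CandidateValue` shape and its field names are assumptions:

```typescript
// Hypothetical shape of one aggregated variable value row.
interface CandidateValue {
  id: string;
  priority: number;
  matchesSelector: boolean; // result of evaluating resourceSelector against the resource
  resolve: () => unknown; // literal lookup, reference walk, or secret fetch; undefined on failure
}

// Filter by selector, sort by priority DESC (with a stable id tiebreaker),
// and return the first candidate that actually resolves.
function effectiveValue(candidates: CandidateValue[]): unknown {
  const matched = candidates
    .filter((c) => c.matchesSelector)
    .sort((a, b) => b.priority - a.priority || a.id.localeCompare(b.id));
  for (const c of matched) {
    const v = c.resolve();
    if (v !== undefined) return v;
  }
  return undefined;
}
```

The id tiebreaker keeps equal-priority candidates deterministic across runs, which matters for the "shadowing by priority" semantics the review comments discuss.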
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
* this migrates the variable data and refactors all variable tables into a single place for the different types (deployments, resources, job_agents). This does not update the job agent reconciler to use variables yet, but the architecture is there
* force-pushed e4e38fc to 1630415 (Compare)
* short-circuit celEnv.Compile when options.CEL == ""
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
apps/workspace-engine/svc/controllers/desiredrelease/getters_postgres_test.go (1)
114-128: ⚠️ Potential issue | 🟠 Major

Update cleanup code to use the new `variable` / `variable_value` tables; the legacy tables no longer exist.

The fixture cleanup (lines 114–128) targets `deployment_variable_value`, `deployment_variable`, and `resource_variable`, which were dropped in migration 0139. The test fixture now inserts into the unified `variable` and `variable_value` tables (as seen in the test assertions at lines 264+). The DELETEs will fail silently (errors are discarded), leaving test data behind and risking test isolation issues.

Replace these cleanup statements:

```sql
DELETE FROM deployment_variable_value WHERE ...
DELETE FROM deployment_variable WHERE ...
DELETE FROM resource_variable WHERE ...
```

with cleanup for the new schema:

```sql
DELETE FROM variable_value WHERE variable_id IN (SELECT id FROM variable WHERE scope = 'deployment' AND deployment_id = $1 OR scope = 'resource' AND resource_id = $2)
DELETE FROM variable WHERE scope = 'deployment' AND deployment_id = $1 OR scope = 'resource' AND resource_id = $2
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/workspace-engine/svc/controllers/desiredrelease/getters_postgres_test.go` around lines 114 - 128, The cleanup currently deletes legacy tables deployment_variable_value, deployment_variable, and resource_variable (using pool.Exec with cleanCtx and f.deploymentID / f.resourceID); update these to remove rows from the new unified tables by deleting from variable_value where variable_id is from variable rows for deployment or resource scope, then delete from variable for scope='deployment' with deployment_id = f.deploymentID OR scope='resource' with resource_id = f.resourceID; keep using pool.Exec(cleanCtx, ...) and the same parameter order (deployment id then resource id) so the fixture teardown removes entries created by the test instead of silently failing against dropped tables.

apps/api/src/routes/v1/workspaces/deployment-variables.ts (2)
37-49: ⚠️ Potential issue | 🔴 Critical

Scope the deployment-variable list to `workspaceId`.

This endpoint is workspace-scoped, but the query only filters by `deploymentId`. A caller with a deployment ID from another workspace can list its variables unless the deployment ownership is verified first.

🛡️ Proposed fix

```diff
-  const { deploymentId } = req.params;
+  const { workspaceId, deploymentId } = req.params;
   const limit = req.query.limit ?? 100;
   const offset = req.query.offset ?? 0;
+
+  const dep = await db
+    .select({ id: deployment.id })
+    .from(deployment)
+    .where(
+      and(
+        eq(deployment.id, deploymentId),
+        eq(deployment.workspaceId, workspaceId),
+      ),
+    )
+    .then(takeFirstOrNull);
+
+  if (dep == null)
+    throw new ApiError("Deployment not found", 404);

   const allVariables = await db
     .select()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/routes/v1/workspaces/deployment-variables.ts` around lines 37 - 49, The query selecting deployment-scoped variables only filters by deploymentId; add workspace scoping to prevent cross-workspace access by retrieving the current workspaceId from the request/auth context (e.g., req.params.workspaceId or the authenticated workspace id on the request) and include a filter like eq(variable.workspaceId, workspaceId) in the where clause (or alternatively join the deployment table and assert eq(deployment.workspaceId, workspaceId)) so that the db.select().from(variable).where(...) call enforces both eq(variable.deploymentId, deploymentId) and the workspace ownership check.
134-154: ⚠️ Potential issue | 🟠 Major

Align the upsert conflict target with the route identity.

This route is keyed by `variableId`, but the upsert conflicts on `(deploymentId, key)`. If the same key already exists under a different ID, the response returns the requested `variableId` even though that row was not created; if the same ID exists with a changed key, the primary-key conflict is not handled.

Consider either:

- upserting by `id` and returning `409` on duplicate `(deploymentId, key)`, or
- treating this as a key-based upsert and returning the actual row ID from `.returning()`.

🐛 Safer key-based response shape

```diff
-  await db
+  const [saved] = await db
     .insert(variable)
     .values({
       id: variableId,
       scope: "deployment",
@@
       targetWhere: sql`${variable.deploymentId} is not null`,
       set: { description: description ?? null },
-    });
+    })
+    .returning({ id: variable.id });

   enqueueReleaseTargetsForDeployment(db, workspaceId, deploymentId);

   res.status(202).json({
-    id: variableId,
+    id: saved.id,
     message: "Deployment variable upserted",
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/routes/v1/workspaces/deployment-variables.ts` around lines 134 - 154, The upsert currently conflicts on (deploymentId, key) but always returns the requested variableId, which can misreport when an existing row with a different id is updated; fix by making the route return the actual row id from the DB upsert instead of the input variableId: change the db.insert(variable)…onConflictDoUpdate(...) call to include a .returning({ id: variable.id }) (or equivalent returning of the primary key) and use the returned id for the response payload and any downstream behavior (e.g., before calling enqueueReleaseTargetsForDeployment); alternatively, if you want id-based upsert, switch the conflict target to variable.id and explicitly detect conflicts on (deploymentId, key) to return 409 — but do not keep the current mismatch between upsert target and returned id.
🧹 Nitpick comments (3)
apps/workspace-engine/svc/controllers/desiredrelease/variableresolver/getters_postgres_test.go (1)
420-423: Clean up child rows explicitly instead of relying on FK cascade.

The helper inserts `variable_value` rows but only deletes the parent `variable`. Deleting both keeps the DB-backed test deterministic even if cascade behavior changes.

🧹 Proposed cleanup hardening

```diff
 t.Cleanup(func() {
+  _, _ = pool.Exec(context.Background(),
+    `DELETE FROM variable_value WHERE variable_id = $1`, varID)
   _, _ = pool.Exec(context.Background(),
     `DELETE FROM variable WHERE id = $1`, varID)
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/workspace-engine/svc/controllers/desiredrelease/variableresolver/getters_postgres_test.go` around lines 420 - 423, The test cleanup currently only deletes the parent row (`DELETE FROM variable WHERE id = $1` using pool.Exec and varID) but leaves inserted child rows in `variable_value`; update the t.Cleanup to explicitly remove child rows first (e.g., `DELETE FROM variable_value WHERE variable_id = $1`) before deleting from `variable` so the DB-backed test is deterministic regardless of FK cascade behavior—use the same pool.Exec/context and varID in both Exec calls (or run both deletes in a single transaction) inside the existing t.Cleanup closure.

apps/workspace-engine/svc/controllers/desiredrelease/variableresolver/resolve.go (1)
245-287: LGTM.

Candidate filtering, descending-priority sort, and first-successful-resolve semantics are consistent with `resolveFromValues` and are covered by `TestResolve_ResourceVariableSelectorPriority` and `TestResolve_ResourceVarRefFails_FallsToDeploymentValue`. One minor nit: the `selector.Match` error is discarded (parity with `resolveFromValues`, so probably intentional), but a bad/unparseable CEL selector will silently cause a candidate to be dropped rather than surfacing an error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/workspace-engine/svc/controllers/desiredrelease/variableresolver/resolve.go` around lines 245 - 287, The selector.Match error is currently discarded in resolveFromResource which silently drops bad/unparseable CEL selectors; change the match block to check the error returned by selector.Match and short-circuit on error instead of ignoring it (i.e., if err != nil return nil immediately or otherwise surface/log the error), mirroring the error-handling behavior used in resolveFromValues; update the selector.Match call site in resolveFromResource (the block that reads "if ok, _ := selector.Match(...)") to capture the error and handle it explicitly before appending to matched.

packages/trpc/src/routes/_variables.ts (1)
15-21: Add an explicit return type for the exported client shape.
`toClientVariableValue` is exported, so define the response interface and annotate the helper return type.

Proposed refactor

```diff
 type VariableValueRow = typeof schema.variableValue.$inferSelect;

+export interface ClientVariableValue {
+  id: string;
+  deploymentVariableId: string;
+  value: unknown;
+  resourceSelector: string | null;
+  priority: number;
+}
+
 export const flattenVariableValue = (v: VariableValueRow): unknown => {
   if (v.kind === "literal") return v.literalValue;
   if (v.kind === "ref") return { reference: v.refKey, path: v.refPath ?? [] };
   return {
     provider: v.secretProvider,
     key: v.secretKey,
     path: v.secretPath ?? [],
   };
 };

-export const toClientVariableValue = (v: VariableValueRow) => ({
+export const toClientVariableValue = (
+  v: VariableValueRow,
+): ClientVariableValue => ({
   id: v.id,
   deploymentVariableId: v.variableId,
   value: flattenVariableValue(v),
   resourceSelector: v.resourceSelector,
   priority: v.priority,
 });
```

As per coding guidelines, TypeScript code should use explicit types and prefer interfaces for public APIs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/trpc/src/routes/_variables.ts` around lines 15 - 21, Export an explicit interface for the client shape (e.g., export interface ClientVariableValue { id: string; deploymentVariableId: string; value: /* type returned by flattenVariableValue */; resourceSelector: /* type of v.resourceSelector */; priority: /* type of v.priority */ }) and annotate the toClientVariableValue function to return that type: export const toClientVariableValue = (v: VariableValueRow): ClientVariableValue => ({ ... }); ensure the interface fields match the shapes used in toClientVariableValue (use the return type of flattenVariableValue for value and the types from VariableValueRow for resourceSelector/priority) and export the interface for external use.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/src/routes/v1/workspaces/deployment-variables.ts`:
- Around line 242-260: Ensure the ON CONFLICT update only applies when the
existing value belongs to the same variable by restricting the conflict update
with a condition that the existing row's variableId equals deploymentVariableId
(i.e., only update when variableValue.variableId === deploymentVariableId for
the row with id == valueId); and when you set kind: "literal" in the
onConflictDoUpdate set clause, also explicitly clear any prior kind-specific
columns used for refs/secret_refs (null out the ref/secret_ref columns) along
with updating priority, resourceSelector and literalValue so the DB check
constraint won't fail.
- Around line 82-90: The current fetch by variableId ignores the workspaceId and
allows cross-tenant reads; modify the DB query that builds v (the select from
variable where eq(variable.id, variableId) and eq(variable.scope, "deployment"))
to join the deployment table (use variable.deploymentId -> deployment.id) and
add a filter eq(deployment.workspaceId, workspaceId) using
req.params.workspaceId so the query only returns the variable when it belongs to
a deployment in the requested workspace; update any variable/deployment
references in that query chain accordingly.
In `@apps/api/src/routes/v1/workspaces/resources.ts`:
- Around line 241-256: The query is filtering out non-literal resource variables
by using innerJoin(schema.variableValue, ...) and eq(schema.variableValue.kind,
"literal"); change the join to a leftJoin on schema.variableValue (preserving
variables without a literal value) and remove the eq(schema.variableValue.kind,
"literal") predicate from the where clause, then ensure the selected value field
(currently using schema.variableValue.literalValue) is guarded so it only
returns the literalValue when variableValue.kind === "literal" (otherwise
null/undefined); apply the same fix to the corresponding query in
packages/trpc/src/routes/resources.ts where the same innerJoin + kind filter
appears.
- Around line 167-174: The code inserting into schema.variableValue currently
writes literalValue: value directly in the tx.insert block (in
apps/api/src/routes/v1/workspaces/resources.ts), which causes inconsistency with
the setVariable writer (packages/trpc/src/routes/resources.ts) and the Go
backend that expect object values wrapped as a discriminated union; change the
insertion logic in that tx.insert (the entries.map callback that sets
variableId/priority/kind/literalValue) so that when value is an object you set
literalValue to { object: value } (keep primitives unchanged), matching the
encoding used by setVariable and the Go ObjectValue wrapper.
In `@apps/workspace-engine/oapi/spec/schemas/resourcevariables.jsonnet`:
- Around line 4-14: Remove the stale ResourceVariable schema definition from
entities.jsonnet so only the updated ResourceVariable in
resourcevariables.jsonnet remains; locate the ResourceVariable object in
entities.jsonnet (the obsolete object with resourceId/key/value without
priority/resourceSelector) and delete it, ensuring the spec relies on the
definition from resourcevariables.jsonnet (which already adds priority and
resourceSelector) and that main.jsonnet imports remain unchanged.
In `@apps/workspace-engine/pkg/db/queries/variables.sql`:
- Around line 13-27: The json_agg aggregations built with json_build_object must
be given a deterministic ordering by the variable resolution priority: update
each json_agg call to include an ORDER BY on the priority field (e.g., ORDER BY
vv.priority DESC, vv.id ASC as a stable tiebreaker) so consumers see values in
resolution order; apply the same change to all three json_agg blocks that build
objects with 'priority' (the json_build_object blocks that include 'priority',
'id', 'variableId', etc.) to ensure consistent, deterministic results.
In
`@apps/workspace-engine/svc/controllers/desiredrelease/variableresolver/getters_postgres_test.go`:
- Around line 406-418: The helper insertResourceVar inserts into the variable
table without workspace_id which breaks tenant isolation; update the function
(insertResourceVar) to include workspace_id in the INSERT into variable (add
workspace_id column and corresponding parameter) and supply the correct
workspace UUID (either derive from the provided resourceID context or add a
workspaceID parameter to insertResourceVar), and ensure the same workspace value
is used when inserting variable_value so fixtures match production multi-tenant
rules.
In
`@apps/workspace-engine/svc/controllers/desiredrelease/variableresolver/getters_postgres.go`:
- Around line 362-380: The function effectiveValue currently swallows conversion
errors from db.ToOapiDeploymentVariableValueFromAgg and returns “no value”;
change effectiveValue to propagate the conversion error instead of silently
returning false (e.g., adjust its signature to return an error alongside the
value), call db.ToOapiDeploymentVariableValueFromAgg and if it returns an error
return that error to the caller, and update the callers/loaders that rely on
effectiveValue so they handle and propagate the error; apply the same change
pattern to the other similar blocks noted (the analogous code at the regions
referenced around lines 397-403 and 419-432) so conversion failures are returned
rather than dropped.
In `@packages/db/drizzle/0187_dry_quicksilver.sql`:
- Around line 3-51: The new DDL must include a tenant column: add a workspace_id
column (type uuid) to both CREATE TABLE "variable" and CREATE TABLE
"variable_value" definitions (e.g., workspace_id uuid NOT NULL or nullable
depending on your backfill plan), and update any relevant constraints/indexes
(for example include workspace_id in uniqueness/indexing scopes and in any FK
relationships) so tenant isolation is present in the schema before backfilling;
refer to the "variable" table, "variable_value" table, and the
"variable_scope_target_check" constraint to locate where the new column and
related constraint/index adjustments must be applied.
In `@packages/db/drizzle/0188_backfill_variables.sql`:
- Around line 13-28: The INSERT should dedupe deterministically and guard
against non-array/null paths: build a CTE over deployment_variable_value that
(1) filters objects with a "reference" and only treats rows as refs when
jsonb_typeof(value->'path') = 'array' or the path is NULL (do not call
jsonb_array_elements_text on non-arrays), (2) computes ref_path as CASE WHEN
jsonb_typeof(value->'path') = 'array' THEN ARRAY(SELECT
jsonb_array_elements_text(value->'path')) ELSE ARRAY[]::text[] END, and (3)
performs deterministic dedupe using ROW_NUMBER() OVER (PARTITION BY
deployment_variable_id, COALESCE(resource_selector::text, '<<NULL>>'), priority
ORDER BY id) and only keep rn = 1; finally INSERT only rows that do not already
exist according to the new uniqueness semantics by adding a WHERE NOT EXISTS
(...) check against variable_value matching variable_id = deployment_variable_id
AND (resource_selector IS NOT DISTINCT FROM dvv.resource_selector) AND priority
= dvv.priority instead of relying solely on ON CONFLICT (id).
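The dedupe the comment describes (`ROW_NUMBER() OVER (PARTITION BY variable, selector, priority ORDER BY id)` keeping `rn = 1`) reduces to "keep the lowest-id row per group". A TypeScript sketch of that rule, with a hypothetical legacy row shape:

```typescript
interface LegacyValueRow {
  id: string;
  deploymentVariableId: string;
  resourceSelector: string | null;
  priority: number;
}

// Keep only the lowest-id row in each (variable, selector, priority) group,
// mirroring ROW_NUMBER() ... ORDER BY id with rn = 1.
function dedupeForBackfill(rows: LegacyValueRow[]): LegacyValueRow[] {
  const best = new Map<string, LegacyValueRow>();
  for (const row of rows) {
    // NULL selectors share a sentinel key, matching COALESCE(selector, '<<NULL>>').
    const key = `${row.deploymentVariableId}|${row.resourceSelector ?? "<<NULL>>"}|${row.priority}`;
    const prev = best.get(key);
    if (prev === undefined || row.id < prev.id) best.set(key, row);
  }
  return [...best.values()];
}
```

Ordering by `id` inside each partition is what makes the backfill deterministic when it is re-run.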
In `@packages/db/src/schema/variable.ts`:
- Around line 32-140: Add a workspaceId tenant column to both the variable and
variableValue tables: in the variable pgTable (symbol variable) add workspaceId:
uuid("workspace_id").notNull().references(() => workspace.id, { onDelete:
"cascade" }) and include it in the unique indexes (e.g.,
uniqueIndex("variable_resource_key_uniq").on(table.workspaceId,
table.resourceId, table.key) and similarly for deployment/job_agent uniques) and
index on scope as needed; in the variableValue pgTable (symbol variableValue)
add the same workspaceId: uuid("workspace_id").notNull().references(() =>
workspace.id, { onDelete: "cascade" }) and ensure any lookups/constraints
include workspaceId; finally, mirror these schema changes in the corresponding
migration so workspaceId is present before release.
In `@packages/trpc/src/routes/deployments.ts`:
- Around line 292-294: The variableValue.findMany call that fetches values for
inArray(schema.variableValue.variableId, variableIds) must include a
deterministic order to respect shadowing by priority; update the query in
deployments.ts (the variableValue.findMany call) to add an orderBy on priority
(e.g., descending if higher priority should win) and include a stable secondary
key such as id or createdAt to break ties so the returned array order is
deterministic across runs.
- Around line 279-286: The procedures (variables and deleteVariable) accept
workspaceId but currently query by deploymentId/variableId only, allowing
cross-workspace access; update the queries to enforce workspace ownership and
add authorization metadata. For the variables procedure
(protectedProcedure.input(...).query(...)) include a workspace guard in the DB
query (e.g., add eq(schema.variable.workspaceId, input.workspaceId) or
join/check the deployment record belongs to input.workspaceId when calling
ctx.db.query.variable.findMany) so results are limited to that workspace; for
deleteVariable, ensure you load/delete the variable by both id and workspaceId
(e.g., use ctx.db.mutation.variable.delete where variable.id = input.variableId
AND variable.workspaceId = input.workspaceId, or verify ownership via
ctx.db.query.variable.findFirst before deletion) and attach .meta({
authorizationCheck: ... }) to both procedures to enforce permission checks
(reference the protectedProcedure definitions named variables and deleteVariable
and the DB access points ctx.db.query.variable.findMany /
ctx.db.query.variable.findFirst / ctx.db.mutation.variable.delete).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e568557f-7971-418b-a271-097ef6696feb
📒 Files selected for processing (46)

- apps/api/openapi/openapi.json
- apps/api/openapi/schemas/deploymentvariables.jsonnet
- apps/api/src/routes/v1/workspaces/deployment-variables.ts
- apps/api/src/routes/v1/workspaces/resources.ts
- apps/api/src/types/openapi.ts
- apps/web/app/api/openapi.ts
- apps/web/app/routes/ws/deployments/_components/variables/DeploymentVariableSection.tsx
- apps/web/app/routes/ws/deployments/settings/page.$deploymentId.general.tsx
- apps/workspace-engine/oapi/openapi.json
- apps/workspace-engine/oapi/spec/schemas/deployments.jsonnet
- apps/workspace-engine/oapi/spec/schemas/resourcevariables.jsonnet
- apps/workspace-engine/pkg/db/convert.go
- apps/workspace-engine/pkg/db/convert_test.go
- apps/workspace-engine/pkg/db/deployment_variables.sql.go
- apps/workspace-engine/pkg/db/models.go
- apps/workspace-engine/pkg/db/queries/deployment_variables.sql
- apps/workspace-engine/pkg/db/queries/resource_variables.sql
- apps/workspace-engine/pkg/db/queries/schema.sql
- apps/workspace-engine/pkg/db/queries/variables.sql
- apps/workspace-engine/pkg/db/resource_variables.sql.go
- apps/workspace-engine/pkg/db/sqlc.yaml
- apps/workspace-engine/pkg/db/variables.sql.go
- apps/workspace-engine/pkg/oapi/oapi.gen.go
- apps/workspace-engine/pkg/store/resources/get_resources.go
- apps/workspace-engine/svc/controllers/desiredrelease/getters_postgres_test.go
- apps/workspace-engine/svc/controllers/desiredrelease/reconcile_test.go
- apps/workspace-engine/svc/controllers/desiredrelease/variableresolver/getters.go
- apps/workspace-engine/svc/controllers/desiredrelease/variableresolver/getters_postgres.go
- apps/workspace-engine/svc/controllers/desiredrelease/variableresolver/getters_postgres_test.go
- apps/workspace-engine/svc/controllers/desiredrelease/variableresolver/resolve.go
- apps/workspace-engine/svc/controllers/desiredrelease/variableresolver/resolve_test.go
- apps/workspace-engine/test/controllers/harness/mocks.go
- apps/workspace-engine/test/controllers/harness/pipeline.go
- apps/workspace-engine/test/controllers/harness/pipeline_opts.go
- apps/workspace-engine/test/controllers/variable_test.go
- packages/db/drizzle/0187_dry_quicksilver.sql
- packages/db/drizzle/0188_backfill_variables.sql
- packages/db/drizzle/meta/0187_snapshot.json
- packages/db/drizzle/meta/0188_snapshot.json
- packages/db/drizzle/meta/_journal.json
- packages/db/src/schema/index.ts
- packages/db/src/schema/variable.ts
- packages/trpc/src/routes/_variables.ts
- packages/trpc/src/routes/deployments.ts
- packages/trpc/src/routes/resources.ts
- packages/workspace-engine-sdk/src/schema.ts
💤 Files with no reviewable changes (10)
- apps/web/app/routes/ws/deployments/_components/variables/DeploymentVariableSection.tsx
- apps/workspace-engine/oapi/spec/schemas/deployments.jsonnet
- apps/web/app/routes/ws/deployments/settings/page.$deploymentId.general.tsx
- apps/api/src/types/openapi.ts
- apps/api/openapi/openapi.json
- apps/workspace-engine/pkg/db/queries/deployment_variables.sql
- apps/api/openapi/schemas/deploymentvariables.jsonnet
- apps/workspace-engine/pkg/db/resource_variables.sql.go
- apps/workspace-engine/pkg/db/deployment_variables.sql.go
- apps/workspace-engine/pkg/db/queries/resource_variables.sql
```diff
   resourceId: schema.variable.resourceId,
   key: schema.variable.key,
   value: schema.variableValue.literalValue,
 })
-.from(schema.resourceVariable)
-.where(eq(schema.resourceVariable.resourceId, resource.id));
+.from(schema.variable)
+.innerJoin(
+  schema.variableValue,
+  eq(schema.variableValue.variableId, schema.variable.id),
+)
+.where(
+  and(
+    eq(schema.variable.scope, "resource"),
+    eq(schema.variable.resourceId, resource.id),
+    eq(schema.variableValue.kind, "literal"),
+  ),
+);
```
Filtering by kind = "literal" silently drops non-literal resource variables.
The INNER JOIN + eq(schema.variableValue.kind, "literal") filter means any resource variable with kind reference or sensitive (either via future writes or migrated data) will be invisible in this endpoint's response. If resource-scoped variables are intentionally literal-only today, this is OK; otherwise the GET endpoint will silently disagree with what the resolver sees.
The same pattern appears in packages/trpc/src/routes/resources.ts (Lines 456-473).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/src/routes/v1/workspaces/resources.ts` around lines 241 - 256, The
query is filtering out non-literal resource variables by using
innerJoin(schema.variableValue, ...) and eq(schema.variableValue.kind,
"literal"); change the join to a leftJoin on schema.variableValue (preserving
variables without a literal value) and remove the eq(schema.variableValue.kind,
"literal") predicate from the where clause, then ensure the selected value field
(currently using schema.variableValue.literalValue) is guarded so it only
returns the literalValue when variableValue.kind === "literal" (otherwise
null/undefined); apply the same fix to the corresponding query in
packages/trpc/src/routes/resources.ts where the same innerJoin + kind filter
appears.
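The guarded-select shape the prompt describes can be exercised without the database; the sketch below uses illustrative row types (not the real Drizzle schema) to show how a `leftJoin`-style nullable value row is flattened safely:

```typescript
// Illustrative stand-in for a leftJoin result: the value row may be null
// (variable with no value) or a non-literal kind. Only literal values are
// surfaced; everything else comes back as null instead of being dropped.
type JoinedRow = {
  key: string;
  value:
    | { kind: "literal" | "ref" | "secret_ref"; literalValue: unknown }
    | null;
};

function selectValue(row: JoinedRow): { key: string; value: unknown } {
  // Guard: only return literalValue when the joined row is actually literal.
  const value = row.value?.kind === "literal" ? row.value.literalValue : null;
  return { key: row.key, value };
}

const rows: JoinedRow[] = [
  { key: "region", value: { kind: "literal", literalValue: "us-east-1" } },
  { key: "dbUrl", value: { kind: "ref", literalValue: null } },
  { key: "orphan", value: null }, // survives the leftJoin, unlike an innerJoin
];

const visible = rows.map(selectValue);
```

With the `innerJoin` + kind filter, only the first row would be returned; the guard keeps all three visible without leaking a wrong-shaped value.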
```sql
CREATE TABLE "variable" (
  "id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
  "scope" "variable_scope" NOT NULL,
  "resource_id" uuid,
  "deployment_id" uuid,
  "job_agent_id" uuid,
  "key" text NOT NULL,
  "is_sensitive" boolean DEFAULT false NOT NULL,
  "description" text,
  "created_at" timestamp with time zone DEFAULT now() NOT NULL,
  "updated_at" timestamp with time zone DEFAULT now() NOT NULL,
  CONSTRAINT "variable_scope_target_check" CHECK (
    (
      "variable"."scope" = 'resource'
      and "variable"."resource_id" is not null
      and "variable"."deployment_id" is null
      and "variable"."job_agent_id" is null
    )
    or
    (
      "variable"."scope" = 'deployment'
      and "variable"."deployment_id" is not null
      and "variable"."resource_id" is null
      and "variable"."job_agent_id" is null
    )
    or
    (
      "variable"."scope" = 'job_agent'
      and "variable"."job_agent_id" is not null
      and "variable"."resource_id" is null
      and "variable"."deployment_id" is null
    )
  )
);
--> statement-breakpoint
CREATE TABLE "variable_value" (
  "id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
  "variable_id" uuid NOT NULL,
  "resource_selector" text,
  "priority" bigint DEFAULT 0 NOT NULL,
  "kind" "variable_value_kind" NOT NULL,
  "literal_value" jsonb,
  "ref_key" text,
  "ref_path" text[],
  "secret_provider" text,
  "secret_key" text,
  "secret_path" text[],
  "created_at" timestamp with time zone DEFAULT now() NOT NULL,
  "updated_at" timestamp with time zone DEFAULT now() NOT NULL,
```
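The `variable_scope_target_check` above enforces exactly one non-null target column per scope; as a quick illustration, the same invariant can be written as a standalone predicate (types and names below are hypothetical, not part of the codebase):

```typescript
// Sketch of the variable_scope_target_check invariant in TypeScript.
// Mirrors the SQL CHECK: the declared scope's target column must be set,
// and the other two target columns must be null.
type Scope = "resource" | "deployment" | "job_agent";

interface VariableRow {
  scope: Scope;
  resourceId: string | null;
  deploymentId: string | null;
  jobAgentId: string | null;
}

function satisfiesScopeTargetCheck(v: VariableRow): boolean {
  const targets: Record<Scope, string | null> = {
    resource: v.resourceId,
    deployment: v.deploymentId,
    job_agent: v.jobAgentId,
  };
  const nonNull = [v.resourceId, v.deploymentId, v.jobAgentId].filter(
    (id) => id !== null,
  );
  // Exactly one target set, and it is the one matching the scope.
  return targets[v.scope] !== null && nonNull.length === 1;
}
```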
Include workspace_id in the production DDL.
The migration creates both new variable tables without tenant columns. That makes tenant isolation depend on indirect joins and leaves conflict/update paths hard to scope safely. Add workspace_id here alongside the Drizzle schema change before backfilling data.
🛡️ Migration direction
```diff
 CREATE TABLE "variable" (
   "id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
+  "workspace_id" uuid NOT NULL,
   "scope" "variable_scope" NOT NULL,
@@
 CREATE TABLE "variable_value" (
   "id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
+  "workspace_id" uuid NOT NULL,
   "variable_id" uuid NOT NULL,
@@
 ALTER TABLE "variable" ADD CONSTRAINT "variable_resource_id_resource_id_fk" FOREIGN KEY ("resource_id") REFERENCES "public"."resource"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
+ALTER TABLE "variable" ADD CONSTRAINT "variable_workspace_id_workspace_id_fk" FOREIGN KEY ("workspace_id") REFERENCES "public"."workspace"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
+ALTER TABLE "variable_value" ADD CONSTRAINT "variable_value_workspace_id_workspace_id_fk" FOREIGN KEY ("workspace_id") REFERENCES "public"."workspace"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/db/drizzle/0187_dry_quicksilver.sql` around lines 3 - 51, The new
DDL must include a tenant column: add a workspace_id column (type uuid) to both
CREATE TABLE "variable" and CREATE TABLE "variable_value" definitions (e.g.,
workspace_id uuid NOT NULL or nullable depending on your backfill plan), and
update any relevant constraints/indexes (for example include workspace_id in
uniqueness/indexing scopes and in any FK relationships) so tenant isolation is
present in the schema before backfilling; refer to the "variable" table,
"variable_value" table, and the "variable_scope_target_check" constraint to
locate where the new column and related constraint/index adjustments must be
applied.
```ts
export const variable = pgTable(
  "variable",
  {
    id: uuid("id").defaultRandom().primaryKey(),

    scope: variableScopeEnum("scope").notNull(),

    resourceId: uuid("resource_id").references(() => resource.id, {
      onDelete: "cascade",
    }),

    deploymentId: uuid("deployment_id").references(() => deployment.id, {
      onDelete: "cascade",
    }),

    jobAgentId: uuid("job_agent_id").references(() => jobAgent.id, {
      onDelete: "cascade",
    }),

    key: text("key").notNull(),

    isSensitive: boolean("is_sensitive").notNull().default(false),
    description: text("description"),

    createdAt: timestamp("created_at", { withTimezone: true })
      .notNull()
      .defaultNow(),

    updatedAt: timestamp("updated_at", { withTimezone: true })
      .notNull()
      .defaultNow()
      .$onUpdate(() => new Date()),
  },
  (table) => [
    uniqueIndex("variable_resource_key_uniq")
      .on(table.resourceId, table.key)
      .where(sql`${table.resourceId} is not null`),

    uniqueIndex("variable_deployment_key_uniq")
      .on(table.deploymentId, table.key)
      .where(sql`${table.deploymentId} is not null`),

    uniqueIndex("variable_job_agent_key_uniq")
      .on(table.jobAgentId, table.key)
      .where(sql`${table.jobAgentId} is not null`),

    index("variable_scope_idx").on(table.scope),

    check(
      "variable_scope_target_check",
      sql`
        (
          ${table.scope} = 'resource'
          and ${table.resourceId} is not null
          and ${table.deploymentId} is null
          and ${table.jobAgentId} is null
        )
        or
        (
          ${table.scope} = 'deployment'
          and ${table.deploymentId} is not null
          and ${table.resourceId} is null
          and ${table.jobAgentId} is null
        )
        or
        (
          ${table.scope} = 'job_agent'
          and ${table.jobAgentId} is not null
          and ${table.resourceId} is null
          and ${table.deploymentId} is null
        )
      `,
    ),
  ],
);

export const variableValue = pgTable(
  "variable_value",
  {
    id: uuid("id").defaultRandom().primaryKey(),

    variableId: uuid("variable_id")
      .notNull()
      .references(() => variable.id, { onDelete: "cascade" }),

    resourceSelector: text("resource_selector"),

    priority: bigint("priority", { mode: "number" }).notNull().default(0),

    kind: variableValueKindEnum("kind").notNull(),

    literalValue: jsonb("literal_value"),

    refKey: text("ref_key"),
    refPath: text("ref_path").array(),

    secretProvider: text("secret_provider"),
    secretKey: text("secret_key"),
    secretPath: text("secret_path").array(),

    createdAt: timestamp("created_at", { withTimezone: true })
      .notNull()
      .defaultNow(),

    updatedAt: timestamp("updated_at", { withTimezone: true })
      .notNull()
      .defaultNow()
      .$onUpdate(() => new Date()),
  },
```
Add workspaceId to the new variable tables.
Both new tables omit workspaceId, which weakens tenant isolation and forces every read/write/upsert to infer ownership through joins. Add tenant columns here and mirror them in the migration before release. As per coding guidelines, **/packages/db/**/*.ts: All database tables include workspaceId for multi-tenant isolation.
🛡️ Schema direction
```diff
 import { deployment } from "./deployment.js";
 import { jobAgent } from "./job-agent.js";
 import { resource } from "./resource.js";
+import { workspace } from "./workspace.js";
@@
     id: uuid("id").defaultRandom().primaryKey(),
+    workspaceId: uuid("workspace_id")
+      .notNull()
+      .references(() => workspace.id, { onDelete: "cascade" }),
     scope: variableScopeEnum("scope").notNull(),
@@
     id: uuid("id").defaultRandom().primaryKey(),
+    workspaceId: uuid("workspace_id")
+      .notNull()
+      .references(() => workspace.id, { onDelete: "cascade" }),
     variableId: uuid("variable_id")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/db/src/schema/variable.ts` around lines 32 - 140, Add a workspaceId
tenant column to both the variable and variableValue tables: in the variable
pgTable (symbol variable) add workspaceId:
uuid("workspace_id").notNull().references(() => workspace.id, { onDelete:
"cascade" }) and include it in the unique indexes (e.g.,
uniqueIndex("variable_resource_key_uniq").on(table.workspaceId,
table.resourceId, table.key) and similarly for deployment/job_agent uniques) and
index on scope as needed; in the variableValue pgTable (symbol variableValue)
add the same workspaceId: uuid("workspace_id").notNull().references(() =>
workspace.id, { onDelete: "cascade" }) and ensure any lookups/constraints
include workspaceId; finally, mirror these schema changes in the corresponding
migration so workspaceId is present before release.
* fixes coderabbit comments
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/api/src/routes/v1/workspaces/resources.ts (1)
**296-345**: ⚠️ Potential issue | 🟠 Major

`PATCH /variables` silently wipes every other resource variable. The handler unconditionally `DELETE`s all `variable` rows where `scope = "resource"` and `resourceId = resource.id` before re-inserting only the keys present in `body`. That turns this PATCH endpoint into a full replacement: a client sending `{ foo: 1 }` will delete every other variable on that resource. This is not PATCH semantics and is a likely source of data loss for any consumer calling the endpoint to update a single key.

Either (a) convert this to a true upsert that only touches keys present in `body` (and never deletes others), or (b) if replacement is intended, expose it as `PUT` / rename to reflect the semantics.

🛡️ Proposed direction (upsert-only)
```diff
 await db.transaction(async (tx) => {
-  await tx
-    .delete(schema.variable)
-    .where(
-      and(
-        eq(schema.variable.scope, "resource"),
-        eq(schema.variable.resourceId, resource.id),
-      ),
-    );
   const entries = Object.entries(body);
-  if (entries.length > 0) {
-    const inserted = await tx
-      .insert(schema.variable)
-      .values(
-        entries.map(([key]) => ({
-          scope: "resource" as const,
-          resourceId,
-          key,
-        })),
-      )
-      .returning({ ... });
+  if (entries.length > 0) {
+    const inserted = await tx
+      .insert(schema.variable)
+      .values(
+        entries.map(([key]) => ({
+          scope: "resource" as const,
+          resourceId,
+          key,
+        })),
+      )
+      .onConflictDoUpdate({
+        target: [schema.variable.resourceId, schema.variable.key],
+        targetWhere: sql`${schema.variable.resourceId} is not null`,
+        set: { key: sql`excluded.key` },
+      })
+      .returning({ id: schema.variable.id, key: schema.variable.key });
+    // then delete/insert variableValue only for these keys
   }
 });
```
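The intended semantics of the deletion-free direction can be pinned down with a toy model (in-memory stand-ins, not the real transaction):

```typescript
// Sketch of PATCH-style upsert semantics: only keys present in the body are
// touched; existing keys absent from the body survive. The Map is an
// illustrative stand-in for the variable/variable_value tables.
function applyPatch(
  existing: Map<string, unknown>,
  body: Record<string, unknown>,
): Map<string, unknown> {
  const next = new Map(existing);
  for (const [key, value] of Object.entries(body)) {
    next.set(key, value); // insert-or-update; never delete unlisted keys
  }
  return next;
}

const before = new Map<string, unknown>([
  ["foo", 1],
  ["bar", 2],
]);
const after = applyPatch(before, { foo: 10 });
```

Under the current delete-then-insert handler, `bar` would be gone after this request; under upsert semantics it survives.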
The same concern also applies to `upsertResourceByIdentifier` (Lines 160-200), but there a full replacement is reasonable for `PUT`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/routes/v1/workspaces/resources.ts` around lines 296 - 345, The PATCH handler updateVariablesForResource currently deletes all rows where scope="resource" and resourceId before re-inserting only submitted keys; change it to perform an upsert-only update that only touches keys present in req.body: inside the db.transaction, fetch existing variables for resourceId from schema.variable to build a map of key->id, insert new schema.variable rows only for keys missing, then insert or update corresponding schema.variableValue rows for each provided key (use variableId from the map), avoiding any unconditional tx.delete that wipes other keys; keep all references to resource.id/resourceId, entries/Object.entries(body), schema.variable and schema.variableValue so the logic is placed in the same transaction.

apps/api/src/routes/v1/workspaces/deployment-variables.ts (1)
**32-75**: ⚠️ Potential issue | 🔴 Critical

Cross-tenant read: list handler ignores `workspaceId`.

`listDeploymentVariablesByDeployment` selects from `variable` using only `deploymentId` from the URL path — it never validates that the deployment belongs to the `workspaceId` in the same path. Any caller who knows a `deploymentId` can substitute any `workspaceId` (including one they shouldn't have access to) and read that deployment's variables and values.

The peer handlers `getDeploymentVariable` (Lines 83-94), `deleteDeploymentVariable` (Lines 167-178), and the value endpoints all join through `deployment` and filter `deployment.workspaceId = workspaceId`. Apply the same join/filter here.

🛡️ Proposed fix

```diff
-  const { deploymentId } = req.params;
+  const { workspaceId, deploymentId } = req.params;
   const limit = req.query.limit ?? 100;
   const offset = req.query.offset ?? 0;
   const allVariables = await db
-    .select()
+    .select({ variable })
     .from(variable)
+    .innerJoin(deployment, eq(variable.deploymentId, deployment.id))
     .where(
       and(
         eq(variable.scope, "deployment"),
         eq(variable.deploymentId, deploymentId),
+        eq(deployment.workspaceId, workspaceId),
       ),
-    );
+    )
+    .then((rows) => rows.map((r) => r.variable));
```

This mirrors the fix already applied to the sibling endpoints in the same file after the earlier review round.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/routes/v1/workspaces/deployment-variables.ts` around lines 32 - 75, The handler listDeploymentVariablesByDeployment currently ignores req.params.workspaceId and selects variables solely by deploymentId, allowing cross-tenant reads; modify the DB query in listDeploymentVariablesByDeployment to join the deployment table (or add an exists/subquery) and filter deployment.workspaceId === workspaceId from req.params while still limiting to the given deploymentId, then proceed with pagination and fetching variableValue rows as before so only variables belonging to a deployment in the specified workspace are returned.
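The workspace check the fix adds boils down to join-then-filter; a minimal standalone sketch (shapes are illustrative, not the real schema types):

```typescript
// Sketch of the tenant filter: a variable is only visible when its deployment
// belongs to the workspace named in the URL path.
interface Deployment {
  id: string;
  workspaceId: string;
}
interface Variable {
  deploymentId: string;
  key: string;
}

function listForWorkspace(
  variables: Variable[],
  deployments: Deployment[],
  workspaceId: string,
  deploymentId: string,
): Variable[] {
  // Stand-in for the innerJoin on deployment + workspaceId predicate.
  const dep = deployments.find(
    (d) => d.id === deploymentId && d.workspaceId === workspaceId,
  );
  // No deployment in this workspace: return nothing, regardless of deploymentId.
  if (!dep) return [];
  return variables.filter((v) => v.deploymentId === deploymentId);
}

const deployments = [{ id: "d1", workspaceId: "ws-a" }];
const variables = [{ deploymentId: "d1", key: "region" }];

const ok = listForWorkspace(variables, deployments, "ws-a", "d1");
const crossTenant = listForWorkspace(variables, deployments, "ws-b", "d1");
```

Without the join, `crossTenant` would return the same rows as `ok`, which is exactly the leak the comment describes.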
🧹 Nitpick comments (3)
packages/trpc/src/routes/resources.ts (1)
**481-493**: Unsafe cast to `schema.variableValue.$inferSelect`.

The object literal is cast via `as typeof schema.variableValue.$inferSelect`, but it is missing `id`, `variableId`, `resourceSelector`, `priority`, `createdAt`, and `updatedAt`. This compiles only because of the cast, not because the value actually matches the type. Today `flattenVariableValue` only reads `kind`/`literalValue`/`ref*`/`secret*`, so it works at runtime, but the cast will hide real bugs if `flattenVariableValue` ever starts reading any of the missing fields.

Narrow the helper's parameter to the subset of fields it needs, or pass the row object directly (the select shape already matches all required fields):

♻️ Proposed direction

```diff
 // in packages/trpc/src/routes/_variables.ts
-type VariableValueRow = typeof schema.variableValue.$inferSelect;
+type VariableValueRow = Pick<
+  typeof schema.variableValue.$inferSelect,
+  "kind" | "literalValue" | "refKey" | "refPath"
+  | "secretProvider" | "secretKey" | "secretPath"
+>;
```

Then the `as ...` here can be dropped.

As per coding guidelines ("Use TypeScript with explicit types; prefer `interface` for public APIs").
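The `Pick`-based narrowing can be demonstrated end to end with stand-in types (the `FullRow` interface below is illustrative, not the generated Drizzle type):

```typescript
// Stand-in for the full variable_value row type.
interface FullRow {
  id: string;
  variableId: string;
  resourceSelector: string | null;
  priority: number;
  kind: "literal" | "ref" | "secret_ref";
  literalValue: unknown;
  refKey: string | null;
  refPath: string[] | null;
  secretProvider: string | null;
  secretKey: string | null;
  secretPath: string[] | null;
  createdAt: Date;
  updatedAt: Date;
}

// The helper only needs the value-shape fields, so Pick keeps the compiler
// honest without any `as` cast.
type VariableValueRow = Pick<
  FullRow,
  "kind" | "literalValue" | "refKey" | "refPath"
  | "secretProvider" | "secretKey" | "secretPath"
>;

function flattenVariableValue(r: VariableValueRow): unknown {
  if (r.kind === "literal") return r.literalValue;
  if (r.kind === "ref") return { key: r.refKey, path: r.refPath ?? [] };
  return { provider: r.secretProvider, key: r.secretKey, path: r.secretPath ?? [] };
}

// A value-shape-only object now typechecks without a cast:
const flat = flattenVariableValue({
  kind: "literal",
  literalValue: 42,
  refKey: null,
  refPath: null,
  secretProvider: null,
  secretKey: null,
  secretPath: null,
});
```

If the helper later starts reading `priority` or `createdAt`, the `Pick` forces a deliberate widening instead of silently reading `undefined` through a cast.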
Verify each finding against the current code and only fix it if needed. In `@packages/trpc/src/routes/resources.ts` around lines 481 - 493, The map is unsafely casting an object literal to typeof schema.variableValue.$inferSelect when calling flattenVariableValue, hiding missing fields (id, variableId, resourceSelector, priority, createdAt, updatedAt); fix by changing flattenVariableValue's parameter type to a narrower interface that only includes the used properties (kind, literalValue, refKey, refPath, secretProvider, secretKey, secretPath) or alternatively pass the full row object to flattenVariableValue (i.e., use rows.map(r => flattenVariableValue(r))), and then remove the unsafe `as typeof schema.variableValue.$inferSelect` cast so the compiler enforces correct shape.

apps/api/src/routes/v1/workspaces/resources.ts (2)
**263-294**: Pagination reads the full result set before slicing.

`rows.slice(offset, offset + limit)` runs after loading every variable/value row for the resource. The SQL `LIMIT`/`OFFSET` (and a separate `COUNT(*)`) should be pushed into the query; otherwise `limit` and `offset` don't bound DB work and memory scales with the number of values on a single resource.

Also, the `innerJoin` hides variables that exist in `variable` but have no `variableValue` row — if that is not intentional, switch to `leftJoin` and special-case the null row when flattening.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/routes/v1/workspaces/resources.ts` around lines 263 - 294, The current query reads all matching rows into rows and then slices with rows.slice(offset, offset + limit), which loads the entire result set; modify the DB call on schema.variable/schema.variableValue to push pagination into SQL by applying .limit(limit) and .offset(offset) and run a separate COUNT(*) query to compute total, and replace the in-memory slicing of rows. Also, if you need to include variables that have no variableValue, change the innerJoin(...) between schema.variable and schema.variableValue to leftJoin(...) and update flattenResourceVariableValue to handle a possibly null variableValue row (e.g., treat missing literalValue/refKey/etc. appropriately) so the API still returns items for variables without values.
**39-48**: `flattenResourceVariableValue` silently treats unknown kinds as secrets.

The final `return { provider, key, path }` branch runs for any `kind` other than `"literal"` or `"ref"`, including values that are not `"secret_ref"` today and any kind added later. A non-exhaustive switch on the enum is safer — at minimum check `kind === "secret_ref"` explicitly and throw (or return `null`) for unknown kinds rather than emitting a wrong-shaped payload.

Consider sharing the helper from `packages/trpc/src/routes/_variables.ts` (already imported there as `flattenVariableValue`) instead of re-implementing it here to avoid drift between the two endpoints.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/routes/v1/workspaces/resources.ts` around lines 39 - 48, The function flattenResourceVariableValue currently treats any non-"literal" and non-"ref" kind as a secret, which can emit incorrect payloads for new/unknown kinds; update flattenResourceVariableValue to either call the shared helper flattenVariableValue (import it from packages/trpc/src/routes/_variables.ts) or explicitly branch: if (r.kind === "literal") ..., else if (r.kind === "ref") ..., else if (r.kind === "secret_ref") return { provider: r.secretProvider, key: r.secretKey, path: r.secretPath ?? [] }; else throw a clear error (or return null) for unknown r.kind to make the handling exhaustive and avoid silently mis-shaping values.
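The exhaustive-switch direction from the comment above can be sketched as follows; types are illustrative, and `assertNever` is a common TypeScript idiom, not an existing helper in this repo:

```typescript
// Sketch of an exhaustive switch over variable value kinds: unknown kinds fail
// loudly instead of being mis-shaped as secrets.
type Kind = "literal" | "ref" | "secret_ref";

interface ValueRow {
  kind: Kind;
  literalValue?: unknown;
  refKey?: string;
  refPath?: string[];
  secretProvider?: string;
  secretKey?: string;
  secretPath?: string[];
}

function assertNever(x: never): never {
  throw new Error(`unhandled variable value kind: ${String(x)}`);
}

function flatten(r: ValueRow): unknown {
  switch (r.kind) {
    case "literal":
      return r.literalValue;
    case "ref":
      return { key: r.refKey, path: r.refPath ?? [] };
    case "secret_ref":
      return { provider: r.secretProvider, key: r.secretKey, path: r.secretPath ?? [] };
    default:
      // Compile-time exhaustiveness: adding a new Kind breaks this line.
      return assertNever(r.kind);
  }
}

const literal = flatten({ kind: "literal", literalValue: "v" });
const secret = flatten({ kind: "secret_ref", secretProvider: "vault", secretKey: "k" });
```

The `never` default means a fourth kind added to the enum is a type error here, rather than a silently mis-shaped secret payload at runtime.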
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5cf6890d-2730-4a44-9204-467eb31d180a
📒 Files selected for processing (8)
- apps/api/src/routes/v1/workspaces/deployment-variables.ts
- apps/api/src/routes/v1/workspaces/resources.ts
- apps/workspace-engine/pkg/db/queries/variables.sql
- apps/workspace-engine/pkg/db/variables.sql.go
- apps/workspace-engine/svc/controllers/desiredrelease/variableresolver/getters_postgres.go
- packages/db/drizzle/0188_backfill_variables.sql
- packages/trpc/src/routes/deployments.ts
- packages/trpc/src/routes/resources.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- packages/trpc/src/routes/deployments.ts
- apps/workspace-engine/pkg/db/queries/variables.sql
- apps/workspace-engine/pkg/db/variables.sql.go
```sql
INSERT INTO "variable_value" (variable_id, priority, kind, literal_value)
SELECT
  v.id,
  0,
  'literal'::variable_value_kind,
  rv.value
FROM resource_variable rv
JOIN "variable" v
  ON v.scope = 'resource'
  AND v.resource_id = rv.resource_id
  AND v.key = rv.key
WHERE NOT (
  jsonb_typeof(rv.value) = 'object'
  AND rv.value ? 'reference'
  AND (
    rv.value->'path' IS NULL
    OR jsonb_typeof(rv.value->'path') = 'array'
  )
)
AND NOT EXISTS (
  SELECT 1 FROM "variable_value" vv
  WHERE vv.variable_id = v.id
  AND vv.resource_selector IS NULL
  AND vv.priority = 0
);
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Confirm writer-side encoding differences between resource and deployment literal values.
rg -nP -C3 '\bliteralValue\b' apps/api/src/routes/v1/workspaces packages/trpc/src/routes
```

Repository: ctrlplanedev/ctrlplane

Length of output: 7807
Resource literal backfill does not match new write encoding for object values.
The new resource variable writers in apps/api/src/routes/v1/workspaces/resources.ts (Lines 193-196, 338-341) wrap object values as { object: value } before storing them in literal_value. This backfill inserts rv.value verbatim, so any pre-existing resource variable whose value is a JSON object will be read back with a different shape than freshly written rows (missing the object wrapper). The flattenResourceVariableValue function returns literalValue directly, so this creates a round-trip inconsistency.
Mirror the application-level wrapping in the backfill:
🛡️ Proposed direction
```diff
 INSERT INTO "variable_value" (variable_id, priority, kind, literal_value)
 SELECT
   v.id,
   0,
   'literal'::variable_value_kind,
-  rv.value
+  CASE
+    WHEN jsonb_typeof(rv.value) = 'object'
+    THEN jsonb_build_object('object', rv.value)
+    ELSE rv.value
+  END
 FROM resource_variable rv
 JOIN "variable" v
   ON v.scope = 'resource'
   AND v.resource_id = rv.resource_id
   AND v.key = rv.key
```

The deployment-literal backfill at Lines 66-111 correctly does not wrap, since `upsertDeploymentVariableValue` writes `body.value` directly without wrapping.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```sql
INSERT INTO "variable_value" (variable_id, priority, kind, literal_value)
SELECT
  v.id,
  0,
  'literal'::variable_value_kind,
  CASE
    WHEN jsonb_typeof(rv.value) = 'object'
    THEN jsonb_build_object('object', rv.value)
    ELSE rv.value
  END
FROM resource_variable rv
JOIN "variable" v
  ON v.scope = 'resource'
  AND v.resource_id = rv.resource_id
  AND v.key = rv.key
WHERE NOT (
  jsonb_typeof(rv.value) = 'object'
  AND rv.value ? 'reference'
  AND (
    rv.value->'path' IS NULL
    OR jsonb_typeof(rv.value->'path') = 'array'
  )
)
AND NOT EXISTS (
  SELECT 1 FROM "variable_value" vv
  WHERE vv.variable_id = v.id
  AND vv.resource_selector IS NULL
  AND vv.priority = 0
);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/db/drizzle/0188_backfill_variables.sql` around lines 154 - 178, The
backfill inserts rv.value verbatim, causing object-typed resource variables to
lack the { object: ... } wrapper used by the new writers and consumed by
flattenResourceVariableValue; change the SELECT that supplies literal_value so
object values are wrapped (e.g. use a CASE/conditional: when
jsonb_typeof(rv.value) = 'object' THEN jsonb_build_object('object', rv.value)
ELSE rv.value END) so the inserted literal_value matches the shape produced by
the resource writers in apps/api/src/routes/v1/workspaces/resources.ts and is
consistent with flattenResourceVariableValue (leave the existing WHERE exclusion
for reference/object-paths as-is).
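The `{ object: ... }` wrapping convention the prompt refers to can be checked in miniature; `wrapLiteral` below is a hypothetical helper mirroring the writers' behavior (the TypeScript analogue of `jsonb_build_object('object', rv.value)` in the suggested SQL):

```typescript
// Sketch of the round-trip the backfill must preserve: writers wrap object
// values as { object: value } before storing literal_value, and readers hand
// literalValue back verbatim, so backfilled rows need the same wrapper.
function wrapLiteral(value: unknown): unknown {
  // Arrays are excluded to match jsonb_typeof, which reports 'array',
  // not 'object', for JSON arrays.
  return typeof value === "object" && value !== null && !Array.isArray(value)
    ? { object: value }
    : value;
}

const scalarStored = wrapLiteral("us-east-1");
const objectStored = wrapLiteral({ region: "us-east-1" });
const arrayStored = wrapLiteral([1, 2]);
```

A backfill that skips the wrapper produces rows that read back with a different shape than freshly written ones, which is exactly the inconsistency flagged above.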
Summary by CodeRabbit

Bug Fixes
- Removed `defaultValue` property from deployment variable requests/responses and UI listings.

New Features

Refactor