
Commit c69e939

feat: Sessions - bidirectional durable agent streams (#3417)
> ⚠️ **Not released yet.** This PR is the server-side foundation only. The SDK changes that customers will actually use (`chat.agent` migration, `chat.createStartSessionAction`, `useTriggerChatTransport` updates) live on a separate branch and ship together in an upcoming `@trigger.dev/sdk` prerelease. Until that prerelease is published, this surface is reachable only via direct HTTP.

## What this gives Trigger.dev users

A new first-class primitive, **Session**, for durable, task-bound, bidirectional I/O that outlives any single run. Sessions are the run manager for `chat.agent` going forward, and they unblock anything else that needs "one identifier, many runs over time" with a stable channel pair the client can write to and subscribe to.

### Use cases unblocked

- **Chat agents that persist across many runs.** One session per chat (keyed on your own `chatId` via `externalId`), turns 1..N attach to the same Session, and the UI subscribes once and keeps receiving output as new runs take over.
- **Approval loops and long-running tasks with user feedback.** The task waits on `.in`, the client writes to `.in`, and the server enforces no-writes-after-close.
- **Workflow progress streams that live past the run.** Subscribe to `.out` after the task finishes to replay history.
- **Resume-next-day flows.** A session is a durable row, not a transient stream. Send a message a day later and the server triggers a fresh run on the same session.

### How it works (Session-as-run-manager)

A Session row is task-bound (`taskIdentifier` + `triggerConfig` are required) and owns its current run via `currentRunId` + `currentRunVersion` for optimistic claim. There are three trigger paths:

1. **Session create** — `POST /api/v1/sessions` creates the row and triggers the first run synchronously.
2. **Append-time probe** — `POST /realtime/v1/sessions/:session/in/append` checks whether the current run is alive; if it has terminated (idle exit, crash, etc.), the server triggers a new run before processing the append.
3. **End-and-continue handoff** — `POST /api/v1/sessions/:session/end-and-continue`, called by the running agent, triggers a fresh run and atomically swaps `currentRunId`. Used by `chat.requestUpgrade()` for version handoffs.

Every triggered run is recorded in the `SessionRun` audit table with a reason (`initial`, `continuation`, `upgrade`, `manual`).

## Public API surface

### Control plane

- `POST /api/v1/sessions` — create. Idempotent on `(env, externalId)`. Triggers the first run, returns the session and a session-scoped public access token. Returns 409 if the upserted row is already closed.
- `GET /api/v1/sessions/:session` — retrieve by friendlyId (`session_abc...`) or by your own externalId (the server disambiguates by prefix).
- `GET /api/v1/sessions` — list with filters (`type`, `tag`, `taskIdentifier`, `externalId`, derived `status` ACTIVE/CLOSED/EXPIRED, created-at range) and cursor pagination. Backed by ClickHouse.
- `PATCH /api/v1/sessions/:session` — update tags / metadata / externalId.
- `POST /api/v1/sessions/:session/close` — terminate. Idempotent; hard-blocks new server-brokered writes.
- `POST /api/v1/sessions/:session/end-and-continue` — agent-only handoff to a fresh run.

### Realtime

- `PUT /realtime/v1/sessions/:session/:io` — initialize a channel. Returns S2 credentials in headers so high-throughput clients can write directly to S2.
- `GET /realtime/v1/sessions/:session/:io` — SSE subscribe. Supports `Last-Event-ID` resume and an opt-in `X-Peek-Settled: 1` header that fast-closes the stream when the upstream is already settled (`trigger:turn-complete`), eliminating the long-poll wait on reconnect-on-reload paths.
- `POST /realtime/v1/sessions/:session/:io/append` — server-side appends.
- `POST /api/v1/runs/:runFriendlyId/session-streams/wait` — runs wait on a session stream as a waitpoint, with a race-check to avoid suspending if data already landed.

### Auth scopes

`sessions` is a new resource type. `read:sessions:{id}`, `write:sessions:{id}`, and `admin:sessions:{id}` flow through the existing JWT validator. Session-scoped public access tokens minted by the server replace browser-held trigger-task tokens for chat-style flows — the browser never sees a run identifier or a run-scoped token in steady state.

## What's coming after this PR

- **SDK + `chat.agent` migration**: separate branch, separate PR, ships in the next `@trigger.dev/sdk` prerelease alongside this server deploy. Customers using the prerelease `chat.agent` will follow the [upgrade guide](https://github.com/triggerdotdev/trigger.dev/blob/docs/tri-7532-ai-sdk-chat-transport-and-chat-task-system/docs/ai-chat/upgrade-guide.mdx).
- **Dashboard surfaces**: dedicated agent list, agent playground, agent view on the run dashboard. Tracked separately.

## Implementation notes

- **Postgres `Session` table**: scalar scoping columns (`projectId`, `runtimeEnvironmentId`, `environmentType`, `organizationId`) without FKs, matching the January TaskRun FK-removal decision. Point-lookup indexes only — list queries go to ClickHouse. Terminal markers (`closedAt`, `expiresAt`) are write-once.
- **ClickHouse `sessions_v1`**: ReplacingMergeTree, partitioned by month, ordered by `(org_id, project_id, environment_id, created_at, session_id)`. Tags indexed via a `tokenbf_v1` skip index.
- **`SessionsReplicationService`**: mirrors `RunsReplicationService` exactly — leader-locked logical replication consumer, `ConcurrentFlushScheduler`, retry with exponential backoff + jitter, identical metric shape. Dedicated slot + publication so the two consume independently.
- **S2 keys**: `sessions/{addressingKey}/{out|in}`. The existing `runs/{runId}/{streamId}` key format for run-scoped streams is untouched.
- **Optimistic claim**: `ensureRunForSession` triggers a run upfront (cheap to cancel if it loses the race), then attempts an `updateMany` keyed on `currentRunVersion`. The loser cancels its triggered run and reuses the winner's. No DB lock is held across the trigger.

### What did NOT change

Run-scoped `streams.pipe` / `streams.input` and the existing `/realtime/v1/streams/{runId}/...` routes are unchanged. Sessions are net-new — not a reshaping of the current streams API.

## Deploy notes

- Set `SESSION_REPLICATION_CLICKHOUSE_URL` and `SESSION_REPLICATION_ENABLED=1` to enable the replication consumer.
- The `Session` table needs `REPLICA IDENTITY FULL` set on the prod source DB before the publication is created (the same one-time DDL we did for `TaskRun`). Required for delete events to carry full column values.
- Cross-form authorization on the `GET /api/v1/sessions/:session` loader (a JWT minted for either form authorizes both URL forms). Action routes are URL-form-specific, matching how the SDK mints PATs.

## Verification

- Webapp typecheck clean (10/10).
- `apps/webapp/test/sessionsReplicationService.test.ts` — round-trip tests for insert/update/delete through Postgres logical replication into ClickHouse via testcontainers.
- Live end-to-end against local dev: create + retrieve (both forms) + update + close; `.out.initialize` + `.out.append` x2 + `.in.send` + `.out.subscribe` over SSE; list with all filter combinations + pagination; `end-and-continue` swap; `X-Peek-Settled` fast-close (verified in browser via reconnect-on-reload and via curl). The replicated row lands in ClickHouse within ~1s.
- Multi-round Devin + CodeRabbit review feedback addressed (read-after-write paths use the `prisma` writer, info-leak on auth routes masked as 403, peek-settled discriminator parsing fix, etc.).

## Test plan

- [ ] `pnpm run typecheck --filter webapp`
- [ ] `pnpm run test --filter webapp ./test/sessionsReplicationService.test.ts --run`
- [ ] Start the webapp with `SESSION_REPLICATION_CLICKHOUSE_URL` and `SESSION_REPLICATION_ENABLED=1`. Confirm the slot and publication auto-create on boot.
- [ ] `POST /api/v1/sessions` and verify the row replicates to `trigger_dev.sessions_v1` within a couple of seconds.
- [ ] `POST /api/v1/sessions/:id/close`, then confirm `POST /realtime/v1/sessions/:id/out/append` returns 400.
- [ ] Reuse a closed session's `externalId` on `POST /api/v1/sessions` and confirm 409.
- [ ] `GET /realtime/v1/sessions/:id/out` with `X-Peek-Settled: 1` after a turn completes and confirm `X-Session-Settled: true` response header + immediate close.
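Until the SDK prerelease ships, the surface is HTTP-only. A minimal sketch of hitting the documented routes with plain `fetch`-style requests — the endpoint paths are from this PR, but the exact body fields (`taskIdentifier`, `externalId`, the append `data` shape) are assumptions about the wire format, not a confirmed contract:

```typescript
// Sketch only: request builders for the documented Session routes. The URL
// paths come from the PR text; the JSON body shapes are assumptions.
const BASE = "https://api.trigger.dev"; // assumed base URL

// Create a session. Idempotent on (env, externalId); triggers the first run.
function createSessionRequest(apiKey: string, externalId: string): Request {
  return new Request(`${BASE}/api/v1/sessions`, {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      taskIdentifier: "chat-agent", // hypothetical task identifier
      externalId, // e.g. your own chatId
    }),
  });
}

// Append to the `.in` channel. If the current run has terminated, the server
// triggers a fresh run before processing this append (the append-time probe).
function appendInRequest(token: string, session: string, data: unknown): Request {
  return new Request(`${BASE}/realtime/v1/sessions/${session}/in/append`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ data }),
  });
}
```

Subscribing would be a `GET` on `/realtime/v1/sessions/:session/out` with an SSE reader, passing `Last-Event-ID` on reconnect.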
1 parent e134da7 commit c69e939

35 files changed

Lines changed: 4284 additions & 14 deletions

.changeset/session-primitive.md

Lines changed: 5 additions & 0 deletions
```md
---
"@trigger.dev/core": patch
---

Add `SessionId` friendly ID generator and schemas for the new durable Session primitive. Exported from `@trigger.dev/core/v3/isomorphic` alongside `RunId`, `BatchId`, etc. Ships the `CreateSessionStreamWaitpoint` request/response schemas alongside the main Session CRUD.
```
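The changeset names a `SessionId` friendly ID generator with the `session_abc...` prefix form seen in the retrieve route. A hypothetical sketch of that prefix pattern — the real generator lives in `@trigger.dev/core/v3/isomorphic`, and everything below beyond the `session_` prefix is an assumption:

```typescript
// Hypothetical sketch of the friendly-ID prefix pattern; not the real
// SessionId implementation. Only the `session_` prefix is documented here —
// the suffix length and encoding are invented for illustration.
import { randomBytes } from "node:crypto";

const PREFIX = "session_";

function generateSessionId(): string {
  return PREFIX + randomBytes(12).toString("hex");
}

// The retrieve route disambiguates friendlyId vs externalId by this prefix.
function isSessionFriendlyId(value: string): boolean {
  return value.startsWith(PREFIX);
}

function fromFriendlyId(value: string): string {
  if (!isSessionFriendlyId(value)) throw new Error(`not a session friendly id: ${value}`);
  return value.slice(PREFIX.length);
}
```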
Lines changed: 6 additions & 0 deletions
```md
---
area: webapp
type: feature
---

Add the `Session` primitive — a durable, task-bound, bidirectional I/O channel that outlives a single run and acts as the run manager for `chat.agent`. Ships the Postgres `Session` + `SessionRun` tables, ClickHouse `sessions_v1` + replication service, the `sessions` JWT scope, and the public CRUD + realtime routes (`/api/v1/sessions`, `/realtime/v1/sessions/:session/:io`) including `end-and-continue` for server-orchestrated run handoffs and session-stream waitpoints.
```

apps/webapp/app/entry.server.tsx

Lines changed: 38 additions & 0 deletions
```ts
// @@ -23,6 +23,44 @@
import {
  registerRunEngineEventBusHandlers,
  setupBatchQueueCallbacks,
} from "./v3/runEngineHandlers.server";
import { sessionsReplicationInstance } from "./services/sessionsReplicationInstance.server";
import { signalsEmitter } from "./services/signals.server";

// Start the sessions replication service (subscribes to the logical replication
// slot, runs leader election, flushes to ClickHouse). Done at entry level so it
// runs deterministically on webapp boot rather than lazily via a singleton
// reference elsewhere in the module graph.
if (sessionsReplicationInstance && env.SESSION_REPLICATION_ENABLED === "1") {
  // Capture a non-nullable reference so the shutdown closure below
  // doesn't need to re-null-check (TS narrowing doesn't follow through
  // an inner function scope).
  const replicator = sessionsReplicationInstance;
  replicator
    .start()
    .then(() => {
      console.log("🗃️ Sessions replication service started");
    })
    .catch((error) => {
      console.error("🗃️ Sessions replication service failed to start", {
        error,
      });
    });

  // Wrap the async shutdown in a sync handler that catches rejections —
  // SIGTERM/SIGINT fire during process teardown, and an unhandled
  // promise rejection from `_replicationClient.stop()` there would
  // bubble up past the process exit. Matches the pattern in
  // dynamicFlushScheduler.server.ts.
  const shutdownSessionsReplication = () => {
    replicator.shutdown().catch((error) => {
      console.error("🗃️ Sessions replication service shutdown error", {
        error,
      });
    });
  };
  signalsEmitter.on("SIGTERM", shutdownSessionsReplication);
  signalsEmitter.on("SIGINT", shutdownSessionsReplication);
}

const ABORT_DELAY = 30000;
```
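The sync-wrapper-over-async-shutdown pattern in that hunk can be exercised in isolation. A self-contained sketch, with a plain `EventEmitter` and a `FakeReplicator` standing in for `signalsEmitter` and the real service:

```typescript
// Minimal model of the shutdown pattern above: a synchronous signal handler
// wraps the async shutdown and routes its rejection to a logger, so a failing
// teardown never becomes an unhandled rejection during process exit.
// FakeReplicator and the EventEmitter are stand-ins, not the real APIs.
import { EventEmitter } from "node:events";

class FakeReplicator {
  stopped = false;
  async shutdown(): Promise<void> {
    this.stopped = true;
    // Simulate the replication client failing mid-teardown.
    throw new Error("replication client already closed");
  }
}

const errors: unknown[] = [];
const emitter = new EventEmitter();
const replicator = new FakeReplicator();

// Sync wrapper: EventEmitter listeners are called synchronously, so an async
// listener's rejection would otherwise escape. The .catch() contains it.
const shutdown = () => {
  replicator.shutdown().catch((error) => {
    errors.push(error);
  });
};

emitter.on("SIGTERM", shutdown);
emitter.emit("SIGTERM");
```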

apps/webapp/app/env.server.ts

Lines changed: 32 additions & 0 deletions
```ts
// @@ -1237,6 +1237,38 @@ (inside EnvironmentSchema)
  RUN_REPLICATION_DISABLE_PAYLOAD_INSERT: z.string().default("0"),
  RUN_REPLICATION_DISABLE_ERROR_FINGERPRINTING: z.string().default("0"),

  // Session replication (Postgres → ClickHouse sessions_v1). Shares Redis
  // with the runs replicator for leader locking but has its own slot and
  // publication so the two consume independently.
  SESSION_REPLICATION_CLICKHOUSE_URL: z.string().optional(),
  SESSION_REPLICATION_ENABLED: z.string().default("0"),
  SESSION_REPLICATION_SLOT_NAME: z.string().default("sessions_to_clickhouse_v1"),
  SESSION_REPLICATION_PUBLICATION_NAME: z
    .string()
    .default("sessions_to_clickhouse_v1_publication"),
  SESSION_REPLICATION_MAX_FLUSH_CONCURRENCY: z.coerce.number().int().default(1),
  SESSION_REPLICATION_FLUSH_INTERVAL_MS: z.coerce.number().int().default(1000),
  SESSION_REPLICATION_FLUSH_BATCH_SIZE: z.coerce.number().int().default(100),
  SESSION_REPLICATION_LEADER_LOCK_TIMEOUT_MS: z.coerce.number().int().default(30_000),
  SESSION_REPLICATION_LEADER_LOCK_EXTEND_INTERVAL_MS: z.coerce.number().int().default(10_000),
  SESSION_REPLICATION_LEADER_LOCK_ADDITIONAL_TIME_MS: z.coerce.number().int().default(10_000),
  SESSION_REPLICATION_LEADER_LOCK_RETRY_INTERVAL_MS: z.coerce.number().int().default(500),
  SESSION_REPLICATION_ACK_INTERVAL_SECONDS: z.coerce.number().int().default(10),
  SESSION_REPLICATION_LOG_LEVEL: z
    .enum(["log", "error", "warn", "info", "debug"])
    .default("info"),
  SESSION_REPLICATION_CLICKHOUSE_LOG_LEVEL: z
    .enum(["log", "error", "warn", "info", "debug"])
    .default("info"),
  SESSION_REPLICATION_WAIT_FOR_ASYNC_INSERT: z.string().default("0"),
  SESSION_REPLICATION_KEEP_ALIVE_ENABLED: z.string().default("0"),
  SESSION_REPLICATION_KEEP_ALIVE_IDLE_SOCKET_TTL_MS: z.coerce.number().int().optional(),
  SESSION_REPLICATION_MAX_OPEN_CONNECTIONS: z.coerce.number().int().default(10),
  SESSION_REPLICATION_INSERT_STRATEGY: z.enum(["insert", "insert_async"]).default("insert"),
  SESSION_REPLICATION_INSERT_MAX_RETRIES: z.coerce.number().int().default(3),
  SESSION_REPLICATION_INSERT_BASE_DELAY_MS: z.coerce.number().int().default(100),
  SESSION_REPLICATION_INSERT_MAX_DELAY_MS: z.coerce.number().int().default(2000),

  // Clickhouse
  CLICKHOUSE_URL: z.string(),
  CLICKHOUSE_KEEP_ALIVE_ENABLED: z.string().default("1"),
```
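The `SESSION_REPLICATION_INSERT_*` knobs feed the "exponential backoff + jitter" retry mentioned in the implementation notes. One plausible shape for that computation, using the schema defaults above (base 100ms, max 2000ms, 3 retries) — the actual formula inside `SessionsReplicationService` is not shown in this diff:

```typescript
// Sketch only: backoff with full jitter, parameterized by the env defaults
// above. The real service's formula may differ (e.g. equal jitter).
function insertRetryDelayMs(
  attempt: number, // 1-based retry attempt
  baseDelayMs = 100, // SESSION_REPLICATION_INSERT_BASE_DELAY_MS default
  maxDelayMs = 2000, // SESSION_REPLICATION_INSERT_MAX_DELAY_MS default
  random: () => number = Math.random // injectable for testing
): number {
  // Exponential growth capped at the max, then full jitter in [0, cap).
  const cap = Math.min(maxDelayMs, baseDelayMs * 2 ** (attempt - 1));
  return Math.floor(random() * cap);
}
```

With the defaults and `SESSION_REPLICATION_INSERT_MAX_RETRIES=3`, the caps would be 100, 200, and 400ms — the 2000ms ceiling only matters at higher retry counts.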
Lines changed: 188 additions & 0 deletions
```ts
import { json } from "@remix-run/server-runtime";
import {
  CreateSessionStreamWaitpointRequestBody,
  type CreateSessionStreamWaitpointResponseBody,
} from "@trigger.dev/core/v3";
import { WaitpointId } from "@trigger.dev/core/v3/isomorphic";
import { z } from "zod";
import { $replica } from "~/db.server";
import { createWaitpointTag, MAX_TAGS_PER_WAITPOINT } from "~/models/waitpointTag.server";
import {
  canonicalSessionAddressingKey,
  isSessionFriendlyIdForm,
  resolveSessionByIdOrExternalId,
} from "~/services/realtime/sessions.server";
import { S2RealtimeStreams } from "~/services/realtime/s2realtimeStreams.server";
import { getRealtimeStreamInstance } from "~/services/realtime/v1StreamsGlobal.server";
import {
  addSessionStreamWaitpoint,
  removeSessionStreamWaitpoint,
} from "~/services/sessionStreamWaitpointCache.server";
import { createActionApiRoute } from "~/services/routeBuilders/apiBuilder.server";
import { logger } from "~/services/logger.server";
import { parseDelay } from "~/utils/delays";
import { resolveIdempotencyKeyTTL } from "~/utils/idempotencyKeys.server";
import { engine } from "~/v3/runEngine.server";
import { ServiceValidationError } from "~/v3/services/baseService.server";

const ParamsSchema = z.object({
  runFriendlyId: z.string(),
});

const { action, loader } = createActionApiRoute(
  {
    params: ParamsSchema,
    body: CreateSessionStreamWaitpointRequestBody,
    maxContentLength: 1024 * 10, // 10KB
    method: "POST",
  },
  async ({ authentication, body, params }) => {
    try {
      const run = await $replica.taskRun.findFirst({
        where: {
          friendlyId: params.runFriendlyId,
          runtimeEnvironmentId: authentication.environment.id,
        },
        select: {
          id: true,
          friendlyId: true,
          realtimeStreamsVersion: true,
        },
      });

      if (!run) {
        return json({ error: "Run not found" }, { status: 404 });
      }

      // Row-optional addressing — see the .out / .in.append handlers.
      // The waitpoint cache + S2 stream key derive from the row's
      // canonical identity (externalId if set, else friendlyId), so
      // the agent's wait registration and the append-side drain
      // converge regardless of which URL form each side used.
      const maybeSession = await resolveSessionByIdOrExternalId(
        $replica,
        authentication.environment.id,
        body.session
      );

      if (!maybeSession && isSessionFriendlyIdForm(body.session)) {
        return json({ error: "Session not found" }, { status: 404 });
      }

      const addressingKey = canonicalSessionAddressingKey(maybeSession, body.session);

      const idempotencyKeyExpiresAt = body.idempotencyKeyTTL
        ? resolveIdempotencyKeyTTL(body.idempotencyKeyTTL)
        : undefined;

      const timeout = await parseDelay(body.timeout);

      const bodyTags = typeof body.tags === "string" ? [body.tags] : body.tags;

      if (bodyTags && bodyTags.length > MAX_TAGS_PER_WAITPOINT) {
        throw new ServiceValidationError(
          `Waitpoints can only have ${MAX_TAGS_PER_WAITPOINT} tags, you're trying to set ${bodyTags.length}.`
        );
      }

      if (bodyTags && bodyTags.length > 0) {
        for (const tag of bodyTags) {
          await createWaitpointTag({
            tag,
            environmentId: authentication.environment.id,
            projectId: authentication.environment.projectId,
          });
        }
      }

      // Step 1: Create the waitpoint.
      const result = await engine.createManualWaitpoint({
        environmentId: authentication.environment.id,
        projectId: authentication.environment.projectId,
        idempotencyKey: body.idempotencyKey,
        idempotencyKeyExpiresAt,
        timeout,
        tags: bodyTags,
      });

      // Step 2: Register the waitpoint on the session channel so the next
      // append fires it. Keyed by (addressingKey, io) — the canonical
      // string for the row. The append handler drains by the same
      // canonical key, so writers and readers converge regardless of
      // which URL form the agent vs. the appending caller used.
      const ttlMs = timeout ? timeout.getTime() - Date.now() : undefined;
      await addSessionStreamWaitpoint(
        addressingKey,
        body.io,
        result.waitpoint.id,
        ttlMs && ttlMs > 0 ? ttlMs : undefined
      );

      // Step 3: Race-check. If a record landed on the channel before this
      // .wait() call, complete the waitpoint synchronously with that data
      // and remove the pending registration.
      if (!result.isCached) {
        try {
          // Session streams are always v2 (S2) — the writer in
          // `appendPartToSessionStream` and the SSE subscribe both
          // hardcode "v2", so the race-check reader has to match.
          // Don't fall through to the run's own `realtimeStreamsVersion`,
          // which only describes the run's run-scoped streams.
          const realtimeStream = getRealtimeStreamInstance(authentication.environment, "v2");

          if (realtimeStream instanceof S2RealtimeStreams) {
            const records = await realtimeStream.readSessionStreamRecords(
              addressingKey,
              body.io,
              body.lastSeqNum
            );

            if (records.length > 0) {
              const record = records[0]!;

              await engine.completeWaitpoint({
                id: result.waitpoint.id,
                output: {
                  value: record.data,
                  type: "application/json",
                  isError: false,
                },
              });

              await removeSessionStreamWaitpoint(
                addressingKey,
                body.io,
                result.waitpoint.id
              );
            }
          }
        } catch (error) {
          // Non-fatal: pending registration stays in Redis; the next append
          // will complete the waitpoint via the append handler path. Log so
          // a broken race-check doesn't silently degrade to timeout-only.
          logger.warn("session-stream wait race-check failed", {
            addressingKey,
            io: body.io,
            waitpointId: WaitpointId.toFriendlyId(result.waitpoint.id),
            error,
          });
        }
      }

      return json<CreateSessionStreamWaitpointResponseBody>({
        waitpointId: WaitpointId.toFriendlyId(result.waitpoint.id),
        isCached: result.isCached,
      });
    } catch (error) {
      if (error instanceof ServiceValidationError) {
        return json({ error: error.message }, { status: 422 });
      }
      // Don't forward raw internal error messages (could leak Prisma/engine
      // details). Log server-side and return a generic 500.
      logger.error("Failed to create session-stream waitpoint", { error });
      return json({ error: "Something went wrong" }, { status: 500 });
    }
  }
);

export { action, loader };
```
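The register-then-check ordering in steps 2–3 is what makes the wait race-free: because the waitpoint is registered before the stream is read, a record can never slip through unseen — it is either caught by the race-check read or it arrives later and finds the registration. A self-contained in-memory model of that ordering (names are illustrative, not the real cache/stream APIs):

```typescript
// In-memory model of the register-then-race-check ordering used above.
type Waitpoint = { id: string; completedWith?: string };

class Channel {
  private records: string[] = [];
  private pending = new Map<string, Waitpoint>();

  // Append-side: complete any registered waitpoints (the "drain" path
  // the real append handler performs via the waitpoint cache).
  append(data: string) {
    this.records.push(data);
    for (const [id, wp] of this.pending) {
      wp.completedWith = data;
      this.pending.delete(id);
    }
  }

  // Wait-side: step 2 (register) then step 3 (race-check existing records).
  wait(wp: Waitpoint) {
    this.pending.set(wp.id, wp); // register FIRST
    const existing = this.records[0]; // then check what already landed
    if (existing !== undefined) {
      wp.completedWith = existing; // complete synchronously with that data
      this.pending.delete(wp.id);
    }
  }
}
```

Both interleavings resolve: data-before-wait completes via the race-check, wait-before-data completes via the append-side drain. Checking before registering would leave a window where an append lands between the two and the waiter sleeps until timeout.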
Lines changed: 79 additions & 0 deletions
```ts
import { json } from "@remix-run/server-runtime";
import {
  CloseSessionRequestBody,
  type RetrieveSessionResponseBody,
} from "@trigger.dev/core/v3";
import { z } from "zod";
import { $replica, prisma } from "~/db.server";
import {
  resolveSessionByIdOrExternalId,
  serializeSessionWithFriendlyRunId,
} from "~/services/realtime/sessions.server";
import { createActionApiRoute } from "~/services/routeBuilders/apiBuilder.server";

const ParamsSchema = z.object({
  session: z.string(),
});

const { action, loader } = createActionApiRoute(
  {
    params: ParamsSchema,
    body: CloseSessionRequestBody,
    maxContentLength: 1024,
    method: "POST",
    allowJWT: true,
    corsStrategy: "all",
    authorization: {
      action: "admin",
      resource: (params) => ({ sessions: params.session }),
      superScopes: ["admin:sessions", "admin:all", "admin"],
    },
  },
  async ({ authentication, params, body }) => {
    const existing = await resolveSessionByIdOrExternalId(
      $replica,
      authentication.environment.id,
      params.session
    );

    if (!existing) {
      return json({ error: "Session not found" }, { status: 404 });
    }

    // Idempotent: if already closed, return the current row without clobbering
    // the original closedAt / closedReason.
    if (existing.closedAt) {
      return json<RetrieveSessionResponseBody>(
        await serializeSessionWithFriendlyRunId(existing)
      );
    }

    // `closedAt: null` on the where clause makes the update conditional at
    // the DB level. Two concurrent closes race through the earlier read,
    // but only one can win this update — the loser hits `count === 0` and
    // falls back to reading the winning row. Closedness is write-once.
    const { count } = await prisma.session.updateMany({
      where: { id: existing.id, closedAt: null },
      data: {
        closedAt: new Date(),
        closedReason: body.reason ?? null,
      },
    });

    if (count === 0) {
      const final = await prisma.session.findFirst({ where: { id: existing.id } });
      if (!final) return json({ error: "Session not found" }, { status: 404 });
      return json<RetrieveSessionResponseBody>(
        await serializeSessionWithFriendlyRunId(final)
      );
    }

    const updated = await prisma.session.findFirst({ where: { id: existing.id } });
    if (!updated) return json({ error: "Session not found" }, { status: 404 });
    return json<RetrieveSessionResponseBody>(
      await serializeSessionWithFriendlyRunId(updated)
    );
  }
);

export { action, loader };
```
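The write-once close hinges on the `closedAt: null` guard in the `updateMany` where clause: concurrent closers both pass the earlier read, but the database-level condition lets only one write land. A minimal in-memory model of that compare-and-set (illustrative only — the real code is the Prisma `updateMany` above):

```typescript
// In-memory model of the conditional close. The `closedAt !== null` check
// plays the role of SQL's `WHERE "closedAt" IS NULL`, making closedness
// write-once: the losing closer gets count 0 and cannot clobber the
// winner's closedReason.
type SessionRow = { id: string; closedAt: Date | null; closedReason: string | null };

// Returns the number of rows updated, mirroring updateMany's `count`.
function closeSession(row: SessionRow, reason: string | null): number {
  if (row.closedAt !== null) return 0; // guard fails → lose the race
  row.closedAt = new Date();
  row.closedReason = reason;
  return 1;
}
```

A loser with `count === 0` then re-reads and returns the winning row, which is exactly the fallback branch in the route.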
