@@ -69,6 +69,7 @@ import { openAIIntegration } from "___SDK_PACKAGE___";
Sentry.init({
dsn: "___PUBLIC_DSN___",
tracesSampleRate: 1.0,
+ streamGenAiSpans: true,
integrations: [
openAIIntegration(),
],
@@ -101,6 +102,7 @@ import { openAIIntegration } from "___SDK_PACKAGE___";
Sentry.init({
dsn: "___PUBLIC_DSN___",
tracesSampleRate: 1.0,
+ streamGenAiSpans: true,
integrations: [
openAIIntegration({
recordInputs: false, // Don't capture prompts
@@ -120,9 +122,9 @@ Sentry.init({

### Streaming Gen AI Spans

- AI spans with large inputs and outputs can hit transaction payload size limits. Set `streamGenAiSpans` to `true` to send `gen_ai` spans as standalone envelope items instead of bundling them in the transaction.
+ Set `streamGenAiSpans` to `true` to send `gen_ai` spans as standalone envelope items instead of bundling them in the transaction. This is recommended for all AI monitoring setups and is required for <Link to="/ai/monitoring/conversations/">Conversations</Link> to work.

- Enable this option if `gen_ai` spans are being dropped because the transaction payload exceeds size limits.
+ Without this option, AI spans with large inputs and outputs can hit transaction payload size limits and be dropped.

</SplitSectionText>
<SplitSectionCode>
@@ -127,6 +127,7 @@ Defaults to `true` if `sendDefaultPii` is `true`.
dsn: "____PUBLIC_DSN____",
// Tracing must be enabled for agent monitoring to work
tracesSampleRate: 1.0,
+ streamGenAiSpans: true,
integrations: [
Sentry.anthropicAIIntegration({
// your options here
@@ -123,6 +123,7 @@ Sentry.init({
dsn: "____PUBLIC_DSN____",
// Tracing must be enabled for agent monitoring to work
tracesSampleRate: 1.0,
+ streamGenAiSpans: true,
integrations: [
Sentry.googleGenAIIntegration({
// your options here
@@ -127,6 +127,7 @@ Sentry.init({
dsn: "____PUBLIC_DSN____",
// Tracing must be enabled for agent monitoring to work
tracesSampleRate: 1.0,
+ streamGenAiSpans: true,
integrations: [
Sentry.langChainIntegration({
// your options here
@@ -149,6 +149,7 @@ Sentry.init({
dsn: "____PUBLIC_DSN____",
// Tracing must be enabled for agent monitoring to work
tracesSampleRate: 1.0,
+ streamGenAiSpans: true,
integrations: [
Sentry.langGraphIntegration({
// your options here
@@ -127,6 +127,7 @@ Sentry.init({
dsn: "____PUBLIC_DSN____",
// Tracing must be enabled for agent monitoring to work
tracesSampleRate: 1.0,
+ streamGenAiSpans: true,
integrations: [
Sentry.openAIIntegration({
// your options here
@@ -160,7 +161,7 @@ Streaming and non-streaming requests are automatically detected and handled appr

<Alert>

- When using OpenAI's streaming API, you must also pass `stream_options: { include_usage: true }` to receive token usage data. Without this option, OpenAI does not include `prompt_tokens` or `completion_tokens` in streamed responses, and Sentry will be unable to capture `gen_ai.usage.input_tokens` / `gen_ai.usage.output_tokens` on the resulting span. This is an OpenAI API behavior, not a Sentry limitation. See [OpenAI docs on stream options](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream_options).
+ When using OpenAI's streaming API, you must also pass `stream_options: { include_usage: true }` to receive token usage data. Without this option, OpenAI does not include `prompt_tokens` or `completion_tokens` in streamed responses, and Sentry will be unable to capture `gen_ai.usage.input_tokens` / `gen_ai.usage.output_tokens` on the resulting span. This is an OpenAI API behavior, not a Sentry limitation. See [OpenAI API reference](https://platform.openai.com/docs/api-reference/chat/create).

</Alert>
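As a minimal sketch of the request shape described in the alert (the model name and message content are placeholder values, not taken from the docs above):

```javascript
// Parameters for a streamed chat completion that still reports token usage.
// Model name and message are illustrative placeholders.
const params = {
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
  stream: true,
  // Without this, streamed chunks omit prompt_tokens/completion_tokens,
  // so Sentry cannot populate gen_ai.usage.* on the resulting span.
  stream_options: { include_usage: true },
};

// You would then pass these to the client, e.g.:
// const stream = await client.chat.completions.create(params);

console.log(params.stream_options.include_usage); // true
```

When `include_usage` is set, OpenAI sends the usage totals on the final streamed chunk, which is what the integration reads.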

@@ -46,6 +46,7 @@ The `vercelAIIntegration` adds instrumentation for the [`ai`](https://www.npmjs.
Sentry.init({
dsn: "____PUBLIC_DSN____",
tracesSampleRate: 1.0,
+ streamGenAiSpans: true,
integrations: [
Sentry.vercelAIIntegration({
recordInputs: true,
@@ -64,6 +65,7 @@ This integration is not enabled by default. You need to manually enable it by pa
Sentry.init({
dsn: "____PUBLIC_DSN____",
tracesSampleRate: 1.0,
+ streamGenAiSpans: true,
integrations: [Sentry.vercelAIIntegration()],
});
```
@@ -77,6 +79,7 @@ This integration is enabled by default in the Node runtime, but not in the Edge
Sentry.init({
dsn: "____PUBLIC_DSN____",
tracesSampleRate: 1.0,
+ streamGenAiSpans: true,
integrations: [Sentry.vercelAIIntegration()],
});
```
8 changes: 8 additions & 0 deletions docs/platforms/javascript/common/configuration/options.mdx
@@ -602,6 +602,14 @@ See <PlatformLink to="/tracing/distributed-tracing/dealing-with-cors-issues/">De

</SdkOption>

+ <SdkOption name="streamGenAiSpans" type='boolean' defaultValue='false' availableSince='10.53.0'>
+
+ When set to `true`, `gen_ai` spans are sent as standalone envelope items instead of being bundled in the transaction payload. This prevents AI spans with large inputs and outputs from being dropped due to transaction payload size limits.
+
+ Enable this option if you are using <PlatformLink to="/ai-agent-monitoring/">AI Agent Monitoring</PlatformLink> or the <Link to="/ai/monitoring/conversations/">Conversations</Link> feature.
+
+ </SdkOption>
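A hedged sketch of the option in context, assuming the Node SDK entry point and a placeholder DSN:

```javascript
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: "____PUBLIC_DSN____",
  tracesSampleRate: 1.0,
  // Send gen_ai spans as standalone envelope items so large AI
  // inputs/outputs don't push the transaction over its payload limit.
  streamGenAiSpans: true,
});
```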

## Logs Options

<PlatformSection supported={["javascript.electron"]}>