feat(anthropic): conform instrumentation to OTel GenAI semantic conventions #3835
max-deygin-traceloop wants to merge 15 commits into main
Conversation
…→ GEN_AI_*:
- Import GEN_AI_REQUEST_FREQUENCY/PRESENCE_PENALTY from upstream gen_ai_attributes
- Use SpanAttributes.GEN_AI_USAGE_TOTAL_TOKENS/IS_STREAMING/RESPONSE_FINISH_REASON/RESPONSE_STOP_REASON (renamed from LLM_*)
- Remove duplicate LLM_REQUEST_TYPE dict entry (the operation_name variable already handles it)
- Update test_messages.py and test_bedrock_with_raw_response.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…pan_attrs.py Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- llm.usage.total_tokens → gen_ai.usage.total_tokens
- gen_ai.usage.cache_creation_input_tokens → gen_ai.usage.cache_creation.input_tokens
- gen_ai.usage.cache_read_input_tokens → gen_ai.usage.cache_read.input_tokens

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
⛔ Files ignored due to path filters (1)
📒 Files selected for processing (2)
✅ Files skipped from review due to trivial changes (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Migrates Anthropic instrumentation from legacy LLM semconv to GenAI semantic-convention keys, consolidates per-index prompt/completion attributes into JSON message payloads, updates span provider/operation attributes, revises streaming/output serialization, and adjusts tests and dependency ranges.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 2
🧹 Nitpick comments (2)
packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py (1)
1274-1275: Assert the exact finish_reasons payload here.
`in` is too weak for the new array semantics: it will still pass if the instrumentation emits duplicates or unrelated finish reasons. These fixtures look single-choice, so an exact assertion will catch the regression this PR is trying to prevent. Based on learnings: follow the OpenTelemetry GenAI semantic specification at https://opentelemetry.io/docs/specs/semconv/gen-ai/
Also applies to: 1613-1614
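The suggested tightening can be shown as a tiny runnable sketch; the `attributes` dict and `stop_reason` below are stand-ins for the real span attributes and Anthropic response, not the actual test fixtures:

```python
# Stand-in for anthropic_span.attributes and response.stop_reason.
attributes = {"gen_ai.response.finish_reasons": ("stop",)}
stop_reason = "stop"

# Weak membership check the reviewer flags: still passes with duplicates or extras.
assert stop_reason in attributes["gen_ai.response.finish_reasons"]

# Exact assertion: fails if anything beyond the single expected reason is emitted.
assert attributes["gen_ai.response.finish_reasons"] == (stop_reason,)
```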
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py` around lines 1274 - 1275, Replace the weak membership check with an exact payload assertion: instead of asserting response.stop_reason in finish_reasons, assert that anthropic_span.attributes[GenAIAttributes.GEN_AI_RESPONSE_FINISH_REASONS] equals the exact list expected (e.g. [response.stop_reason]) so duplicates or extra values fail; do the same change for the other occurrence referenced (lines around 1613-1614) and keep the reference to GenAIAttributes.GEN_AI_RESPONSE_FINISH_REASONS to locate the test.
packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_span_attrs.py (1)
228-237: Consider simplifying the content parsing logic. The nested conditional parsing (`if isinstance(content, str)` → `json.loads` check → `if isinstance(content_parsed, list)`) is complex. Since you control the span_utils output, you could assert a more specific expected structure.
However, this defensive approach does handle edge cases, so it's acceptable as-is.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_span_attrs.py` around lines 228 - 237, The current nested parsing around assistant_msg["content"] (variables content → content_parsed) is more defensive than needed; simplify by asserting the expected shape up front, e.g. require content to be a str containing JSON when produced by span_utils, parse it once into content_parsed, then assert isinstance(content_parsed, list) before building types_in_content and checking "tool_use" is not present; update the test code that references assistant_msg, content, content_parsed, and types_in_content to remove the nested isinstance checks and make the expected structure explicit.
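The assert-up-front version could look like the following sketch, under the assumption that span_utils emits the content as a JSON string encoding a list of blocks (the literal below is a stand-in for `assistant_msg["content"]`):

```python
import json

# Stand-in for assistant_msg["content"] as produced by span_utils.
content = '[{"type": "text", "text": "hi"}]'

# Assert the expected shape up front, then parse exactly once.
assert isinstance(content, str)
content_parsed = json.loads(content)
assert isinstance(content_parsed, list)

# The actual check the test cares about: no tool_use blocks leaked in.
types_in_content = {block.get("type") for block in content_parsed}
assert "tool_use" not in types_in_content
```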
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/opentelemetry-instrumentation-anthropic/pyproject.toml`:
- Line 16: The pyproject.toml entry under tool.uv.sources masks the PyPI package
opentelemetry-semantic-conventions-ai so our tests never validate that the
published 0.5.0 wheel actually exports the symbols we import
(opentelemetry.semconv_ai.SpanAttributes, Meters,
SUPPRESS_LANGUAGE_MODEL_INSTRUMENTATION_KEY, and _testing); remove or
conditionally disable the local source override for
opentelemetry-semantic-conventions-ai in tool.uv.sources (or add an integration
test that installs the released wheel and imports those symbols) so CI/monorepo
tests exercise the real published artifact rather than the sibling checkout.
In `@packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py`:
- Around line 1918-1932: The test fails because streaming builds separate
output.message entries for the same assistant turn (one for text and another for
tool_calls) instead of consolidating them like the non-streaming path; update
the streaming serialization to accumulate all blocks for a single assistant turn
and emit one message object with both "content" and "tool_calls" before writing
GEN_AI_OUTPUT_MESSAGES. Locate the streaming code that appends to
output_messages (the logic that produces the separate text_msg and tool_msgs)
and change it to merge content fragments and tool call entries into a single
assistant message (matching the non-streaming behavior shown around the
serialize logic in span_utils.py lines ~220–223), ensuring output_messages
yields one assistant message containing "content", "role":"assistant", and a
combined "tool_calls" array with arguments serialized the same way the
non-streaming path does.
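The consolidation this prompt describes could be sketched roughly as below. This is illustrative only, not the actual streaming.py code, and it uses the "content" + "tool_calls" message shape this particular comment asks for (a later review round replaces that shape with a "parts" array):

```python
import json

def merge_streamed_blocks(text_fragments, tool_call_blocks):
    """Merge streamed text fragments and tool-call blocks for one
    assistant turn into a single output message (hypothetical helper)."""
    message = {"role": "assistant", "content": "".join(text_fragments)}
    if tool_call_blocks:
        message["tool_calls"] = [
            # Serialize arguments the same way the non-streaming path does.
            {"name": block["name"], "arguments": json.dumps(block["input"])}
            for block in tool_call_blocks
        ]
    return [message]  # exactly one entry per assistant turn
```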
---
Nitpick comments:
In `@packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py`:
- Around line 1274-1275: Replace the weak membership check with an exact payload
assertion: instead of asserting response.stop_reason in finish_reasons, assert
that anthropic_span.attributes[GenAIAttributes.GEN_AI_RESPONSE_FINISH_REASONS]
equals the exact list expected (e.g. [response.stop_reason]) so duplicates or
extra values fail; do the same change for the other occurrence referenced (lines
around 1613-1614) and keep the reference to
GenAIAttributes.GEN_AI_RESPONSE_FINISH_REASONS to locate the test.
In
`@packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_span_attrs.py`:
- Around line 228-237: The current nested parsing around
assistant_msg["content"] (variables content → content_parsed) is more defensive
than needed; simplify by asserting the expected shape up front, e.g. require
content to be a str containing JSON when produced by span_utils, parse it once
into content_parsed, then assert isinstance(content_parsed, list) before
building types_in_content and checking "tool_use" is not present; update the
test code that references assistant_msg, content, content_parsed, and
types_in_content to remove the nested isinstance checks and make the expected
structure explicit.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8fd99eb7-5d57-4e35-9b6e-765b533d3d31
⛔ Files ignored due to path filters (1)
packages/opentelemetry-instrumentation-anthropic/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (12)
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/streaming.py
- packages/opentelemetry-instrumentation-anthropic/pyproject.toml
- packages/opentelemetry-instrumentation-anthropic/tests/test_bedrock_with_raw_response.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_completion.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_prompt_caching.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_compliance.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_span_attrs.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_structured_outputs.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_thinking.py
No longer needed since 0.5.0 is published on PyPI. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
OzBenSimhonTraceloop left a comment
Furthermore, the current Anthropic SDK version is
    span,
    GenAIAttributes.GEN_AI_INPUT_MESSAGES,
    json.dumps([{"role": "user", "content": kwargs.get("prompt")}]),
Based on the gen_ai.input.messages definition, the structure should be
[{"role": "user", "parts": [{"type": "text", "content": "Hello"}]}]
not
[{"role": "user", "content": "Hello"}]
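The conversion from the flat shape to the parts-based shape quoted above can be sketched as follows; `to_input_message` is a hypothetical helper for illustration, not a function in the PR:

```python
def to_input_message(prompt):
    """Wrap a plain text prompt in the spec's parts-based message shape
    (assumes the prompt is a simple string)."""
    return {"role": "user", "parts": [{"type": "text", "content": prompt}]}
```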
    stop_reason = response.get("stop_reason")

    if stop_reason:
        span.set_attribute(GenAIAttributes.GEN_AI_RESPONSE_FINISH_REASONS, [stop_reason])
The OTel FinishReason enum defines: stop, length, content_filter, tool_call, error, while Anthropic's stop reasons are different AFAIK, so you need a mapping here instead of passing the Anthropic value through as-is.
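A sketch of such a mapping; the Anthropic stop reasons here (end_turn, stop_sequence, max_tokens, tool_use) are the ones the PR's later commit maps, with a raw-value fallback assumed for anything unrecognized:

```python
# Anthropic stop_reason -> OTel FinishReason enum value.
ANTHROPIC_TO_OTEL_FINISH_REASON = {
    "end_turn": "stop",
    "stop_sequence": "stop",
    "max_tokens": "length",
    "tool_use": "tool_call",
}

def map_finish_reason(stop_reason):
    # Fall back to the raw value for unknown or newly added stop reasons.
    return ANTHROPIC_TO_OTEL_FINISH_REASON.get(stop_reason, stop_reason)
```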
    GenAIAttributes.GEN_AI_SYSTEM_INSTRUCTIONS,
    await _dump_content(
        message_index=0, span=span, content=kwargs.get("system")
The _dump_content helper might return a plain string in many cases, while the OTel schema for this attribute is an array of objects.
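One way to normalize before setting the attribute, sketched under the assumption that `system` is None, a plain string, or an already-structured list of content blocks; `normalize_system_instructions` is a hypothetical helper, not code from the PR:

```python
def normalize_system_instructions(system):
    """Coerce a system prompt into an array of objects as the schema expects."""
    if system is None:
        return []
    if isinstance(system, str):
        # Plain string -> single text block.
        return [{"type": "text", "content": system}]
    # Assume a list of structured content blocks; copy defensively.
    return [dict(block) for block in system]
```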
Bump to latest
        response.get("role"),
    )
    thinking_messages.append({
        "role": "thinking",
Shouldn't "role": "thinking" be expressed as a part inside the assistant message's "parts" array rather than as a separate message?
Same for sync, streaming, and tests.
Right now you're generating
[
{
"role": "thinking",
"content": "Let me analyze this step by step. The user is asking about the weather in Paris..."
},
{
"role": "assistant",
"content": "The weather in Paris is currently rainy with a temperature of 57°F."
}
]
instead of
[
{
"role": "assistant",
"parts": [
{
"type": "reasoning",
"content": "Let me analyze this step by step. The user is asking about the weather in Paris..."
},
{
"type": "text",
"content": "The weather in Paris is currently rainy with a temperature of 57°F."
}
],
"finish_reason": "stop"
}
]
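The desired shape above can be sketched as a small builder. This is an illustrative helper only (not the actual span_utils code), with the finish_reason defaulted for the example:

```python
def build_assistant_message(thinking, text, finish_reason="stop"):
    """Build one spec-shaped assistant message: thinking content becomes a
    "reasoning" part inside "parts", never a separate message."""
    parts = []
    if thinking:
        parts.append({"type": "reasoning", "content": thinking})
    if text:
        parts.append({"type": "text", "content": text})
    return {"role": "assistant", "parts": parts, "finish_reason": finish_reason}
```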
Address PR #3835 review comments from OzBenSimhonTraceloop:
- Input/output messages now use "parts" array structure per gen-ai-input-messages.json and gen-ai-output-messages.json schemas
- Map Anthropic finish reasons to OTel enum: end_turn→stop, tool_use→tool_call, max_tokens→length, stop_sequence→stop
- Thinking content is now a ReasoningPart (type: "reasoning") inside the assistant message's parts array, not a separate message
- Tool calls are now tool_call parts, not a separate "tool_calls" key
- Streaming events consolidated into a single assistant message
- Rewrote test_semconv_span_attrs.py as spec-driven unit tests (TDD)
- Updated all VCR integration test assertions to match

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (2)
packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py (2)
401-402: Remove dead code. Lines 401-402 are a no-op: tool_arguments is already None if the condition is true.
🧹 Remove redundant lines

  if isinstance(tool_arguments, str):
      try:
          tool_arguments = json.loads(tool_arguments)
      except (json.JSONDecodeError, TypeError):
          pass
- elif tool_arguments is None:
-     tool_arguments = None
  parts.append({

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py` around lines 401 - 402, Remove the redundant no-op branch that checks "elif tool_arguments is None: tool_arguments = None" in span_utils.py; delete those two lines so the surrounding if/elif logic no longer contains this dead branch and the behavior for tool_arguments remains handled by the other existing branches.
207-220: Async operations inside the list comprehension are hard to follow. The list comprehension on lines 207-220 contains await inside a regular list comprehension. While Python 3.6+ allows await in list comprehensions within async functions, this pattern awaits each item sequentially as the comprehension iterates, which is correct here. However, the structure is complex and hard to read.
Consider using an explicit loop or asyncio.gather for clarity:
♻️ Suggested simplification

- content = [
-     (
-         await _process_image_item(
-             model_as_dict(item),
-             span.context.trace_id,
-             span.context.span_id,
-             message_index,
-             j,
-         )
-         if _is_base64_image(model_as_dict(item))
-         else model_as_dict(item)
-     )
-     for j, item in enumerate(content)
- ]
+ processed = []
+ for j, item in enumerate(content):
+     item_dict = model_as_dict(item)
+     if _is_base64_image(item_dict):
+         processed.append(await _process_image_item(
+             item_dict,
+             span.context.trace_id,
+             span.context.span_id,
+             message_index,
+             j,
+         ))
+     else:
+         processed.append(item_dict)
+ content = processed

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py` around lines 207 - 220, The list comprehension mixes await with conditional logic (using _process_image_item, _is_base64_image, model_as_dict and span.context.trace_id/span_id) which is hard to read; replace it by first building a list of coroutines for items that need processing (e.g., for j,item in enumerate(content) append _process_image_item(...) when _is_base64_image(model_as_dict(item)) else append a completed value or use asyncio.sleep(0)/None), then run them with asyncio.gather(...) to concurrently await all results and assign the returned list back to content; alternatively use an explicit async for loop to sequentially await _process_image_item for clarity.
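A self-contained sketch of the asyncio.gather variant the prompt mentions; `process_item` and `needs_processing` stand in for `_process_image_item` and `_is_base64_image`, and the passthrough coroutine keeps the task list uniform so gather can await everything concurrently:

```python
import asyncio

async def process_content(content, process_item, needs_processing):
    """Process items concurrently; untouched items pass through unchanged."""
    async def passthrough(item):
        return item

    tasks = [
        process_item(item) if needs_processing(item) else passthrough(item)
        for item in content
    ]
    # gather awaits all tasks concurrently and preserves input order.
    return await asyncio.gather(*tasks)
```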
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In
`@packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py`:
- Around line 401-402: Remove the redundant no-op branch that checks "elif
tool_arguments is None: tool_arguments = None" in span_utils.py; delete those
two lines so the surrounding if/elif logic no longer contains this dead branch
and the behavior for tool_arguments remains handled by the other existing
branches.
- Around line 207-220: The list comprehension mixes await with conditional logic
(using _process_image_item, _is_base64_image, model_as_dict and
span.context.trace_id/span_id) which is hard to read; replace it by first
building a list of coroutines for items that need processing (e.g., for j,item
in enumerate(content) append _process_image_item(...) when
_is_base64_image(model_as_dict(item)) else append a completed value or use
asyncio.sleep(0)/None), then run them with asyncio.gather(...) to concurrently
await all results and assign the returned list back to content; alternatively
use an explicit async for loop to sequentially await _process_image_item for
clarity.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 90a921e4-eef5-4e54-84b5-ba910e5af755
📒 Files selected for processing (8)
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_bedrock_with_raw_response.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_completion.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_prompt_caching.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_span_attrs.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_structured_outputs.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_thinking.py
✅ Files skipped from review due to trivial changes (1)
- packages/opentelemetry-instrumentation-anthropic/tests/test_prompt_caching.py
@max-deygin-servicenow please bump the Anthropic SDK from 0.76 to 0.86 + fix the lint issue :)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
What
Refactors the Anthropic instrumentation package to emit span attributes that comply with the OpenTelemetry GenAI semantic conventions spec.
Changes
New attributes (replaces legacy flat gen_ai.prompt.* / gen_ai.completion.* keys):
Bug fixes:
Checklist
I have added tests that cover my changes.
If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
(If applicable) I have updated the documentation accordingly.
Validated locally against a running agent using ConsoleSpanExporter.
Confirmed all appear correctly on anthropic.chat spans.

Summary by CodeRabbit
Refactor
Tests
Chores