
fix(semconv): Legacy attributes support #3847

Merged
max-deygin-traceloop merged 8 commits into main from
max/semconv-legacy-attributes-support
Mar 26, 2026

Conversation

@max-deygin-traceloop
Contributor

max-deygin-traceloop commented Mar 25, 2026

Summary

  • Reintroduce all legacy LLM_* constants in semconv SpanAttributes with
    their original string values, so non-migrated instrumentation packages work with
    semconv >=0.5.1 without code changes
  • Add GEN_AI_USAGE_CACHE_*_INPUT_TOKENS_DEPRECATED for cache attributes where
    the constant name stayed but the value changed (dot separator)
  • Bump semconv to 0.5.1, update all 32 packages to depend on >=0.5.1,<0.6.0
  • Update sample app: bump anthropic>=0.86.0, fix deprecated model references

Test plan

  • Semconv tests pass (213 tests)
  • Sample app dependency resolution succeeds
  • OpenAI instrumentation tests pass (197 passed — legacy LLM_* constants
    work)
  • Anthropic instrumentation tests pass (211 passed — on rebased migration
    branch)
  • Verified legacy + new constants coexist: SpanAttributes.LLM_SYSTEM and
    SpanAttributes.GEN_AI_IS_STREAMING both resolve correctly
  • Sent traces to Traceloop dashboard from both migrated (anthropic) and
    non-migrated (openai) instrumentations
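
The coexistence check in the test plan can be pictured with a minimal sketch of the alias pattern. The class below is an illustrative stand-in, not the actual semconv definitions; the attribute string values are assumptions chosen for the example.

```python
# Minimal sketch of the legacy-alias pattern: both the old LLM_* name
# and the new GEN_AI_* name are plain class attributes, so migrated and
# non-migrated instrumentations resolve constants from the same class.
# The string values here are illustrative, not the published semconv values.

class SpanAttributes:
    # New GEN_AI_* constant introduced in v0.5.0
    GEN_AI_IS_STREAMING = "gen_ai.is_streaming"
    # Legacy LLM_* constant restored with its original string value,
    # so older instrumentation code keeps working without changes
    LLM_SYSTEM = "gen_ai.system"

# Both names resolve independently; no import-time shimming is needed.
print(SpanAttributes.LLM_SYSTEM)           # -> gen_ai.system
print(SpanAttributes.GEN_AI_IS_STREAMING)  # -> gen_ai.is_streaming
```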

Summary by CodeRabbit

  • New Features

    • Restored legacy LLM span attribute constants as backward-compatible aliases, including usage/cache deprecated aliases.
  • Tests

    • Updated tests to assert legacy LLM aliases remain present and refined regression checks to focus on new GEN_AI attributes.
  • Chores

    • Bumped package version to 0.5.1.
    • Sample apps updated to use newer Claude models and adjusted tracing initialization.
    • Tightened Anthropic dependency ranges and removed local editable source overrides.

@coderabbitai

coderabbitai bot commented Mar 25, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 110b6cf1-d7a2-4b3d-8a3d-62ab1586c4b5

📥 Commits

Reviewing files that changed from the base of the PR and between feff469 and c88de61.

📒 Files selected for processing (1)
  • packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py

📝 Walkthrough

Walkthrough

Adds legacy LLM_* SpanAttribute constants as backward-compatible aliases, updates tests to assert those aliases, bumps package version to 0.5.1, updates Anthropic dependency ranges and local editable source mappings, and replaces several sample-app Anthropic model strings with claude-haiku-4-5.

Changes

  • Semantic Conventions: legacy aliases: packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
    Added ~42 LLM_* public SpanAttributes constants and cache-related *_DEPRECATED aliases mapping to existing GEN_AI_* attribute values (no logic changes).
  • Semantic Conventions: tests: packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py
    Updated tests to assert presence and exact values of legacy LLM_* constants; narrowed regression checks to exclude LLM_* and _DEPRECATED entries.
  • Version & packaging (semantic conventions): packages/opentelemetry-semantic-conventions-ai/version.py, packages/opentelemetry-semantic-conventions-ai/pyproject.toml
    Bumped package version from 0.5.0 to 0.5.1.
  • Anthropic dependency & uv source changes: packages/sample-app/pyproject.toml, packages/traceloop-sdk/pyproject.toml, packages/opentelemetry-instrumentation-langchain/pyproject.toml
    Updated Anthropic constraints (sample-app → >=0.86.0,<1; traceloop-sdk test deps → >=0.86.0,<1); removed local editable opentelemetry-semantic-conventions-ai uv source overrides.
  • Instrumentation: cache attribute keys switched: packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py
    Switched span attribute assignments to use GEN_AI_USAGE_CACHE_*_DEPRECATED keys for cache read/creation tokens in both sync and async paths.
  • Sample app: Anthropic model & tracing init: packages/sample-app/sample_app/anthropic_joke_example.py, .../anthropic_joke_streaming_example.py, .../anthropic_structured_outputs_demo.py, .../anthropic_vision_base64_example.py, .../async_anthropic_example.py, .../async_anthropic_joke_streaming.py
    Replaced multiple Claude model strings with claude-haiku-4-5; updated the Traceloop.init() call in anthropic_joke_example.py to set app_name="sample-app" and disable_batch=True.
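
The cache-attribute migration described above can be sketched as a dual-key pattern, where a token count is written under both the old underscore-separated key and the new dot-separated key during a migration window. The key strings and the record_cache_tokens helper below are hypothetical illustrations of the idea, not the actual instrumentation code.

```python
# Hypothetical dual-key emission: during a migration window, write the
# cache-read token count under both key formats so consumers reading
# either the old (underscore) or new (dot-separated) key keep working.
# Both key strings below are illustrative assumptions.

GEN_AI_USAGE_CACHE_READ_INPUT_TOKENS = "gen_ai.usage.cache_read.input_tokens"
GEN_AI_USAGE_CACHE_READ_INPUT_TOKENS_DEPRECATED = (
    "gen_ai.usage.cache_read_input_tokens"
)

def record_cache_tokens(span_attributes: dict, tokens: int) -> dict:
    # Set both keys to the same value; dashboards querying either
    # attribute name see consistent data.
    span_attributes[GEN_AI_USAGE_CACHE_READ_INPUT_TOKENS] = tokens
    span_attributes[GEN_AI_USAGE_CACHE_READ_INPUT_TOKENS_DEPRECATED] = tokens
    return span_attributes

attrs = record_cache_tokens({}, 128)
```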

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 I hopped through constants, old names snug and neat,

LLM echoes linked where new and old meet,
Models swapped to haiku, versions stepped one,
Tests now nod to echoes — backward bridges done.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required 80.00% threshold. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title 'fix(semconv) Legacy attributes support' directly describes the main change: reintroducing legacy LLM_* constants in semantic conventions SpanAttributes for backward compatibility.



@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (6)
packages/sample-app/sample_app/anthropic_joke_streaming_example.py (1)

20-20: Use Anthropic's stable model alias instead of the dated version for better resilience.

While claude-haiku-4-5-20251001 works now, Anthropic recommends using the stable alias claude-haiku-4-5, which automatically tracks the latest version. Configuring the model via an environment variable will further shield the sample from future API changes.

Suggested refactor
+import os
 from anthropic import Anthropic
 from traceloop.sdk import Traceloop
 from traceloop.sdk.decorators import workflow
@@
-        model="claude-haiku-4-5-20251001",
+        model=os.getenv("ANTHROPIC_MODEL", "claude-haiku-4-5"),
         stream=True,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/sample-app/sample_app/anthropic_joke_streaming_example.py` at line
20, Update the hardcoded dated model name to Anthropic's stable alias and allow
overriding via an environment variable: replace the "model" value currently set
to "claude-haiku-4-5-20251001" with a call that reads an env var (e.g.,
ANTHROPIC_MODEL) falling back to "claude-haiku-4-5"; change this where the model
parameter is passed (the "model=" argument in
anthropic_joke_streaming_example.py) so the sample automatically tracks the
stable model and can be configured without code edits.
packages/sample-app/sample_app/anthropic_structured_outputs_demo.py (1)

34-34: Optionally make the model ID configurable for easier maintenance.

Line 34 hardcodes a model ID. While claude-haiku-4-5-20251001 is currently valid and supported through at least October 2026, using an env var with a fallback would simplify future model updates without requiring code changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/sample-app/sample_app/anthropic_structured_outputs_demo.py` at line
34, Replace the hardcoded model ID "claude-haiku-4-5-20251001" used in the
model= argument with a configurable environment-backed value (e.g., read from an
env var like CLAUDE_MODEL with a fallback to "claude-haiku-4-5-20251001") so
future model changes don’t require code edits; update the code that sets the
model= parameter to reference that env-backed variable (use os.environ.get or a
config helper) and add a short comment noting the env var name.
packages/sample-app/sample_app/async_anthropic_example.py (1)

19-20: Extract the model ID into one constant.

The same model string is duplicated in two call sites; centralizing it avoids drift on future model updates.

♻️ Proposed refactor
 anthropic = AsyncAnthropic()
+ANTHROPIC_MODEL = "claude-haiku-4-5-20251001"
@@
-            model="claude-haiku-4-5-20251001",
+            model=ANTHROPIC_MODEL,
@@
-            model="claude-haiku-4-5-20251001",
+            model=ANTHROPIC_MODEL,

Also applies to: 36-37

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/sample-app/sample_app/async_anthropic_example.py` around lines 19 -
20, Extract the duplicated model string into a single constant (e.g.,
CLAUDE_MODEL = "claude-haiku-4-5-20251001") at the top of the module and replace
the two literal occurrences of model="claude-haiku-4-5-20251001" with
model=CLAUDE_MODEL in the call sites (the two places shown in
async_anthropic_example.py). Ensure the constant name is clear and used in both
invocation sites so future model updates only require changing one value.
packages/sample-app/pyproject.toml (1)

26-26: Adding an upper bound for anthropic is reasonable practice, though semantic versioning and CI checks mitigate breaking change risks.

While unbounded dependency ranges are generally not ideal, the Anthropic SDK follows semantic versioning and maintains CI workflows to detect breaking changes. No documented breaking changes to messages.create exist after 0.86.0. That said, constraining to <1 remains a reasonable best practice to prevent unexpected major version jumps if the versioning plan changes:

🔧 Suggested constraint (optional)
-  "anthropic>=0.86.0",
+  "anthropic>=0.86.0,<1",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/sample-app/pyproject.toml` at line 26, Update the anthropic
dependency range in pyproject.toml to add an upper bound to prevent accidental
major-version upgrades; replace the current "anthropic>=0.86.0" constraint with
a bounded range such as "anthropic>=0.86.0,<1" so the project stays on
compatible releases while allowing patch/minor updates.
packages/traceloop-sdk/pyproject.toml (1)

89-89: Consider restoring an upper bound for anthropic in test deps.

Line 89 removes the cap, which can make CI less reproducible when upstream breaking changes land. A bounded range would improve test stability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/traceloop-sdk/pyproject.toml` at line 89, Restore an upper bound on
the Anthropic test dependency by editing the dependency entry
"anthropic>=0.86.0" in pyproject.toml to include a conservative upper bound (for
example "anthropic>=0.86.0,<0.90.0"), then update the lockfile / test
environment and run the test suite to verify CI stability.
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (1)

200-205: Consider consolidating duplicate cache attribute aliases.

LLM_USAGE_CACHE_CREATION_INPUT_TOKENS and GEN_AI_USAGE_CACHE_CREATION_INPUT_TOKENS_DEPRECATED have identical values ("gen_ai.usage.cache_creation_input_tokens"). While this is intentional for different migration paths, consider adding a brief inline comment clarifying why both exist (one for the LLM_* → GEN_AI_* name migration, the other for the value-only migration from underscore to dot format).

📝 Suggested documentation improvement
     # Cache attributes — name unchanged but VALUE changed in v0.5.0 (added dot separator)
-    # Old value kept as _DEPRECATED so both old and new coexist
+    # Old value kept as _DEPRECATED so both old and new coexist.
+    # LLM_USAGE_CACHE_* aliases: for packages using the old LLM_* constant name.
+    # GEN_AI_USAGE_CACHE_*_DEPRECATED aliases: for packages already using GEN_AI_* name but old underscore value.
     LLM_USAGE_CACHE_CREATION_INPUT_TOKENS = "gen_ai.usage.cache_creation_input_tokens"  # TODO: migrate to SpanAttributes.GEN_AI_USAGE_CACHE_CREATION_INPUT_TOKENS
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py`
around lines 200 - 205, Duplicate constant values exist for
LLM_USAGE_CACHE_CREATION_INPUT_TOKENS, LLM_USAGE_CACHE_READ_INPUT_TOKENS and
their GEN_AI_*_DEPRECATED counterparts; add a concise inline comment above these
four constants explaining the intent: LLM_* names exist as the new
attribute-name migration while GEN_AI_*_DEPRECATED is kept to preserve the old
value-format (underscore→dot) for backward compatibility, and mention that both
coexist intentionally to support both name and value migration paths (apply same
comment to both creation and read token constants).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 03f36beb-ac8d-4967-a925-68cc20763e8b

📥 Commits

Reviewing files that changed from the base of the PR and between 1a5e5bf and d2381f4.

⛔ Files ignored due to path filters (7)
  • packages/opentelemetry-instrumentation-groq/uv.lock is excluded by !**/*.lock
  • packages/opentelemetry-instrumentation-haystack/uv.lock is excluded by !**/*.lock
  • packages/opentelemetry-instrumentation-openai/uv.lock is excluded by !**/*.lock
  • packages/opentelemetry-instrumentation-replicate/uv.lock is excluded by !**/*.lock
  • packages/opentelemetry-instrumentation-vertexai/uv.lock is excluded by !**/*.lock
  • packages/sample-app/uv.lock is excluded by !**/*.lock
  • packages/traceloop-sdk/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (43)
  • packages/opentelemetry-instrumentation-agno/pyproject.toml
  • packages/opentelemetry-instrumentation-alephalpha/pyproject.toml
  • packages/opentelemetry-instrumentation-anthropic/pyproject.toml
  • packages/opentelemetry-instrumentation-bedrock/pyproject.toml
  • packages/opentelemetry-instrumentation-chromadb/pyproject.toml
  • packages/opentelemetry-instrumentation-cohere/pyproject.toml
  • packages/opentelemetry-instrumentation-crewai/pyproject.toml
  • packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml
  • packages/opentelemetry-instrumentation-groq/pyproject.toml
  • packages/opentelemetry-instrumentation-haystack/pyproject.toml
  • packages/opentelemetry-instrumentation-lancedb/pyproject.toml
  • packages/opentelemetry-instrumentation-langchain/pyproject.toml
  • packages/opentelemetry-instrumentation-llamaindex/pyproject.toml
  • packages/opentelemetry-instrumentation-marqo/pyproject.toml
  • packages/opentelemetry-instrumentation-mcp/pyproject.toml
  • packages/opentelemetry-instrumentation-milvus/pyproject.toml
  • packages/opentelemetry-instrumentation-mistralai/pyproject.toml
  • packages/opentelemetry-instrumentation-ollama/pyproject.toml
  • packages/opentelemetry-instrumentation-openai-agents/pyproject.toml
  • packages/opentelemetry-instrumentation-openai/pyproject.toml
  • packages/opentelemetry-instrumentation-pinecone/pyproject.toml
  • packages/opentelemetry-instrumentation-qdrant/pyproject.toml
  • packages/opentelemetry-instrumentation-replicate/pyproject.toml
  • packages/opentelemetry-instrumentation-sagemaker/pyproject.toml
  • packages/opentelemetry-instrumentation-together/pyproject.toml
  • packages/opentelemetry-instrumentation-transformers/pyproject.toml
  • packages/opentelemetry-instrumentation-vertexai/pyproject.toml
  • packages/opentelemetry-instrumentation-voyageai/pyproject.toml
  • packages/opentelemetry-instrumentation-watsonx/pyproject.toml
  • packages/opentelemetry-instrumentation-weaviate/pyproject.toml
  • packages/opentelemetry-instrumentation-writer/pyproject.toml
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/version.py
  • packages/opentelemetry-semantic-conventions-ai/pyproject.toml
  • packages/sample-app/pyproject.toml
  • packages/sample-app/sample_app/anthropic_joke_example.py
  • packages/sample-app/sample_app/anthropic_joke_streaming_example.py
  • packages/sample-app/sample_app/anthropic_structured_outputs_demo.py
  • packages/sample-app/sample_app/anthropic_vision_base64_example.py
  • packages/sample-app/sample_app/async_anthropic_example.py
  • packages/sample-app/sample_app/async_anthropic_joke_streaming.py
  • packages/traceloop-sdk/pyproject.toml

max-deygin-traceloop and others added 6 commits March 26, 2026 10:25
…bility

Bring back all LLM_* constants removed in v0.5.0 with their original
string values. This allows non-migrated instrumentation packages to
depend on semconv >=0.5.1 without code changes.

- Add legacy LLM_* constants to SpanAttributes with TODO comments
- Add GEN_AI_USAGE_CACHE_*_DEPRECATED for value-changed cache attrs
- Update _testing.py to verify legacy constants are present
- Bump semconv to 0.5.1
- Update all 32 packages to depend on >=0.5.1,<0.6.0
- Add local uv source overrides for semconv in all packages

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
These dev-only overrides don't belong in the semconv branch.
Only traceloop-sdk and langchain retain their pre-existing overrides.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Revert "fix: remove local semconv source overrides from instrumentation packages"

This reverts commit d2381f4.

fix: add transitive local source overrides for CI resolution

Packages with test deps on other instrumentation packages need local
source overrides so uv can resolve semconv 0.5.1 transitively.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The version bump to >=0.5.1 will be done in a separate PR after
semconv 0.5.1 is published to PyPI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Move inline TODO comments above lines to fix ruff line-length violations
- Clarify duplicate cache constant comments (LLM_* vs GEN_AI_*_DEPRECATED)
- Add upper bound anthropic>=0.86.0,<1 in sample-app and traceloop-sdk
- Use stable model alias claude-haiku-4-5 in all sample app examples

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@max-deygin-traceloop force-pushed the max/semconv-legacy-attributes-support branch from d706a1f to d59dd6c on March 26, 2026 09:13

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py`:
- Around line 91-123: The pytest.parametrize matrix in
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py
(the pytest.mark.parametrize block) is missing tests for legacy aliases
LLM_USAGE_TOKEN_TYPE, LLM_REQUEST_REPETITION_PENALTY,
LLM_REQUEST_STRUCTURED_OUTPUT_SCHEMA, LLM_REQUEST_REASONING_SUMMARY, and
LLM_RESPONSE_REASONING_EFFORT; update that parametrize list to include entries
for each of those legacy constants with their expected modern attribute names
exactly as defined in opentelemetry/semconv_ai/__init__.py so the mapping
assertions cover these legacy aliases as well.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 78ef6884-cfa2-4632-a3f8-5496b46ff914

📥 Commits

Reviewing files that changed from the base of the PR and between d706a1f and d59dd6c.

⛔ Files ignored due to path filters (1)
  • packages/opentelemetry-semantic-conventions-ai/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (12)
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/version.py
  • packages/opentelemetry-semantic-conventions-ai/pyproject.toml
  • packages/sample-app/pyproject.toml
  • packages/sample-app/sample_app/anthropic_joke_example.py
  • packages/sample-app/sample_app/anthropic_joke_streaming_example.py
  • packages/sample-app/sample_app/anthropic_structured_outputs_demo.py
  • packages/sample-app/sample_app/anthropic_vision_base64_example.py
  • packages/sample-app/sample_app/async_anthropic_example.py
  • packages/sample-app/sample_app/async_anthropic_joke_streaming.py
  • packages/traceloop-sdk/pyproject.toml
✅ Files skipped from review due to trivial changes (9)
  • packages/sample-app/sample_app/anthropic_structured_outputs_demo.py
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/version.py
  • packages/sample-app/sample_app/async_anthropic_example.py
  • packages/sample-app/sample_app/anthropic_vision_base64_example.py
  • packages/sample-app/pyproject.toml
  • packages/traceloop-sdk/pyproject.toml
  • packages/opentelemetry-semantic-conventions-ai/pyproject.toml
  • packages/sample-app/sample_app/async_anthropic_joke_streaming.py
  • packages/sample-app/sample_app/anthropic_joke_streaming_example.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/sample-app/sample_app/anthropic_joke_example.py
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py

…gchain

Local semconv is 0.5.1 but all packages pin <0.5.0, causing transitive
resolution failures. Let them resolve semconv from PyPI until 0.5.1
is published.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
packages/traceloop-sdk/pyproject.toml (1)

89-89: Consider adding Anthropic version <0.86.0 to SDK test coverage.

SDK test dependency at line 89 specifies anthropic>=0.86.0,<1, while the anthropic instrumentation package tests anthropic[bedrock]>=0.74.0. This creates a gap: if the SDK is intended to support Anthropic <0.86.0, that range is untested at the SDK level. The langchain instrumentation tests also use anthropic>=0.75.0,<0.83.0, further indicating older versions should be covered.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/traceloop-sdk/pyproject.toml` at line 89, The test dependency
currently pinned as "anthropic>=0.86.0,<1" should be widened to include older
Anthropic releases used by other packages (e.g., the instrumentation tests that
use anthropic[bedrock]>=0.74.0 and langchain ranges) so SDK tests exercise
versions <0.86.0; update the test dependency in pyproject.toml (the line
containing "anthropic>=0.86.0,<1") to a range that covers older supported
versions (for example "anthropic>=0.74.0,<1" or add an additional test extras
entry/CI matrix for the older range) and ensure CI/test matrix includes that
older version to provide coverage.
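
The version-range reasoning in this comment can be sketched in plain Python. The bounds below are illustrative, taken from the ranges mentioned in the review; the parse and in_tested_range helpers are assumptions for the example, not project code.

```python
# Minimal sketch of checking whether an Anthropic release falls inside
# a tested version range. Assumes plain numeric versions (e.g. "0.86.0");
# the bounds are illustrative values from the review discussion.

def parse(version: str) -> tuple:
    # "0.86.0" -> (0, 86, 0); tuples compare element-wise
    return tuple(int(part) for part in version.split("."))

def in_tested_range(version: str,
                    lower: str = "0.74.0",
                    upper: str = "1.0.0") -> bool:
    return parse(lower) <= parse(version) < parse(upper)

print(in_tested_range("0.86.0"))  # True: inside the widened range
print(in_tested_range("1.2.0"))   # False: past the upper bound
```

In real packaging code the `packaging.version` module handles pre-releases and non-numeric segments; the tuple comparison here only covers the simple numeric case.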

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 45537dfb-a877-48b0-b8c3-79f0dc912ba9

📥 Commits

Reviewing files that changed from the base of the PR and between d59dd6c and efa461e.

⛔ Files ignored due to path filters (2)
  • packages/opentelemetry-instrumentation-langchain/uv.lock is excluded by !**/*.lock
  • packages/traceloop-sdk/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • packages/opentelemetry-instrumentation-langchain/pyproject.toml
  • packages/traceloop-sdk/pyproject.toml
💤 Files with no reviewable changes (1)
  • packages/opentelemetry-instrumentation-langchain/pyproject.toml

Cover all LLM_* legacy constants including LLM_USAGE_TOKEN_TYPE,
LLM_REQUEST_REPETITION_PENALTY, LLM_REQUEST_STRUCTURED_OUTPUT_SCHEMA,
LLM_REQUEST_REASONING_SUMMARY, LLM_RESPONSE_REASONING_EFFORT, plus
OpenAI and Watsonx aliases.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@max-deygin-traceloop force-pushed the max/semconv-legacy-attributes-support branch from c88de61 to feff469 on March 26, 2026 12:00
@max-deygin-traceloop max-deygin-traceloop merged commit ddcff1c into main Mar 26, 2026
18 of 19 checks passed