fix(semconv): Legacy attributes support #3847
Conversation
🧹 Nitpick comments (6)
packages/sample-app/sample_app/anthropic_joke_streaming_example.py (1)
20-20: Use Anthropic's stable model alias instead of the dated version for better resilience. While `claude-haiku-4-5-20251001` works now, Anthropic recommends using the stable alias `claude-haiku-4-5`, which automatically tracks the latest version. Configuring the model via an environment variable will further shield the sample from future API changes.

Suggested refactor

```diff
+import os
 from anthropic import Anthropic
 from traceloop.sdk import Traceloop
 from traceloop.sdk.decorators import workflow
@@
-    model="claude-haiku-4-5-20251001",
+    model=os.getenv("ANTHROPIC_MODEL", "claude-haiku-4-5"),
     stream=True,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/sample-app/sample_app/anthropic_joke_streaming_example.py` at line 20, update the hardcoded dated model name to Anthropic's stable alias and allow overriding via an environment variable: replace the "model" value currently set to "claude-haiku-4-5-20251001" with a call that reads an env var (e.g., ANTHROPIC_MODEL) falling back to "claude-haiku-4-5"; change this where the model parameter is passed (the "model=" argument in anthropic_joke_streaming_example.py) so the sample automatically tracks the stable model and can be configured without code edits.

packages/sample-app/sample_app/anthropic_structured_outputs_demo.py (1)
34-34: Optionally make the model ID configurable for easier maintenance. Line 34 hardcodes a model ID. While `claude-haiku-4-5-20251001` is currently valid and supported through at least October 2026, using an env var with a fallback would simplify future model updates without requiring code changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/sample-app/sample_app/anthropic_structured_outputs_demo.py` at line 34, replace the hardcoded model ID "claude-haiku-4-5-20251001" used in the model= argument with a configurable environment-backed value (e.g., read from an env var like CLAUDE_MODEL with a fallback to "claude-haiku-4-5-20251001") so future model changes don't require code edits; update the code that sets the model= parameter to reference that env-backed variable (use os.environ.get or a config helper) and add a short comment noting the env var name.

packages/sample-app/sample_app/async_anthropic_example.py (1)
19-20: Extract the model ID into one constant. The same model string is duplicated in two call sites; centralizing it avoids drift on future model updates.

♻️ Proposed refactor

```diff
 anthropic = AsyncAnthropic()
+ANTHROPIC_MODEL = "claude-haiku-4-5-20251001"
@@
-    model="claude-haiku-4-5-20251001",
+    model=ANTHROPIC_MODEL,
@@
-    model="claude-haiku-4-5-20251001",
+    model=ANTHROPIC_MODEL,
```

Also applies to: 36-37
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/sample-app/sample_app/async_anthropic_example.py` around lines 19 - 20, extract the duplicated model string into a single constant (e.g., CLAUDE_MODEL = "claude-haiku-4-5-20251001") at the top of the module and replace the two literal occurrences of model="claude-haiku-4-5-20251001" with model=CLAUDE_MODEL in the call sites (the two places shown in async_anthropic_example.py). Ensure the constant name is clear and used in both invocation sites so future model updates only require changing one value.

packages/sample-app/pyproject.toml (1)
26-26: Adding an upper bound for `anthropic` is reasonable practice, though semantic versioning and CI checks mitigate breaking change risks. While unbounded dependency ranges are generally not ideal, the Anthropic SDK follows semantic versioning and maintains CI workflows to detect breaking changes. No documented breaking changes to `messages.create` exist after 0.86.0. That said, constraining to `<1` remains a reasonable best practice to prevent unexpected major version jumps if the versioning plan changes:

🔧 Suggested constraint (optional)

```diff
-    "anthropic>=0.86.0",
+    "anthropic>=0.86.0,<1",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/sample-app/pyproject.toml` at line 26, update the anthropic dependency range in pyproject.toml to add an upper bound to prevent accidental major-version upgrades; replace the current "anthropic>=0.86.0" constraint with a bounded range such as "anthropic>=0.86.0,<1" so the project stays on compatible releases while allowing patch/minor updates.

packages/traceloop-sdk/pyproject.toml (1)
89-89: Consider restoring an upper bound for `anthropic` in test deps. Line 89 removes the cap, which can make CI less reproducible when upstream breaking changes land. A bounded range would improve test stability.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/traceloop-sdk/pyproject.toml` at line 89, restore an upper bound on the Anthropic test dependency by editing the dependency entry "anthropic>=0.86.0" in pyproject.toml to include a conservative upper bound (for example "anthropic>=0.86.0,<0.90.0"), then update the lockfile / test environment and run the test suite to verify CI stability.

packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (1)
200-205: Consider consolidating duplicate cache attribute aliases.
`LLM_USAGE_CACHE_CREATION_INPUT_TOKENS` and `GEN_AI_USAGE_CACHE_CREATION_INPUT_TOKENS_DEPRECATED` have identical values ("gen_ai.usage.cache_creation_input_tokens"). While this is intentional for different migration paths, consider adding a brief inline comment clarifying why both exist (one for `LLM_*` → `GEN_AI_*` name migration, the other for value-only migration from underscore to dot format).

📝 Suggested documentation improvement

```diff
 # Cache attributes — name unchanged but VALUE changed in v0.5.0 (added dot separator)
-# Old value kept as _DEPRECATED so both old and new coexist
+# Old value kept as _DEPRECATED so both old and new coexist.
+# LLM_USAGE_CACHE_* aliases: for packages using the old LLM_* constant name.
+# GEN_AI_USAGE_CACHE_*_DEPRECATED aliases: for packages already using GEN_AI_* name but old underscore value.
 LLM_USAGE_CACHE_CREATION_INPUT_TOKENS = "gen_ai.usage.cache_creation_input_tokens"
 # TODO: migrate to SpanAttributes.GEN_AI_USAGE_CACHE_CREATION_INPUT_TOKENS
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py` around lines 200 - 205, Duplicate constant values exist for LLM_USAGE_CACHE_CREATION_INPUT_TOKENS, LLM_USAGE_CACHE_READ_INPUT_TOKENS and their GEN_AI_*_DEPRECATED counterparts; add a concise inline comment above these four constants explaining the intent: LLM_* names exist as the new attribute-name migration while GEN_AI_*_DEPRECATED is kept to preserve the old value-format (underscore→dot) for backward compatibility, and mention that both coexist intentionally to support both name and value migration paths (apply same comment to both creation and read token constants).
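The dual-alias scheme this comment describes can be shown with a minimal, self-contained sketch. The class below is a stand-in for the real `SpanAttributes`, not the semconv source, and the dotted value of the modern constant is an assumption inferred from the "dot separator" description:

```python
class SpanAttributes:
    """Illustrative stand-in for semconv SpanAttributes (not the real class)."""

    # Modern constant: name kept, value switched to a dot separator in v0.5.0.
    # The exact dotted string here is assumed from the review's description.
    GEN_AI_USAGE_CACHE_CREATION_INPUT_TOKENS = "gen_ai.usage.cache_creation.input_tokens"

    # Legacy LLM_* name kept so non-migrated packages can still import it.
    # TODO: migrate to SpanAttributes.GEN_AI_USAGE_CACHE_CREATION_INPUT_TOKENS
    LLM_USAGE_CACHE_CREATION_INPUT_TOKENS = "gen_ai.usage.cache_creation_input_tokens"

    # Old underscore value kept under a _DEPRECATED name for packages that
    # already migrated to the GEN_AI_* name but still emit the old value.
    GEN_AI_USAGE_CACHE_CREATION_INPUT_TOKENS_DEPRECATED = (
        "gen_ai.usage.cache_creation_input_tokens"
    )


# Both legacy aliases intentionally collapse to a single value; only the
# modern constant carries the new dotted format.
legacy_values = {
    SpanAttributes.LLM_USAGE_CACHE_CREATION_INPUT_TOKENS,
    SpanAttributes.GEN_AI_USAGE_CACHE_CREATION_INPUT_TOKENS_DEPRECATED,
}
```

With comments like these in place, a reader can tell at a glance which alias serves the name migration and which serves the value migration.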
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 03f36beb-ac8d-4967-a925-68cc20763e8b
⛔ Files ignored due to path filters (7)
- `packages/opentelemetry-instrumentation-groq/uv.lock` is excluded by `!**/*.lock`
- `packages/opentelemetry-instrumentation-haystack/uv.lock` is excluded by `!**/*.lock`
- `packages/opentelemetry-instrumentation-openai/uv.lock` is excluded by `!**/*.lock`
- `packages/opentelemetry-instrumentation-replicate/uv.lock` is excluded by `!**/*.lock`
- `packages/opentelemetry-instrumentation-vertexai/uv.lock` is excluded by `!**/*.lock`
- `packages/sample-app/uv.lock` is excluded by `!**/*.lock`
- `packages/traceloop-sdk/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (43)
- packages/opentelemetry-instrumentation-agno/pyproject.toml
- packages/opentelemetry-instrumentation-alephalpha/pyproject.toml
- packages/opentelemetry-instrumentation-anthropic/pyproject.toml
- packages/opentelemetry-instrumentation-bedrock/pyproject.toml
- packages/opentelemetry-instrumentation-chromadb/pyproject.toml
- packages/opentelemetry-instrumentation-cohere/pyproject.toml
- packages/opentelemetry-instrumentation-crewai/pyproject.toml
- packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml
- packages/opentelemetry-instrumentation-groq/pyproject.toml
- packages/opentelemetry-instrumentation-haystack/pyproject.toml
- packages/opentelemetry-instrumentation-lancedb/pyproject.toml
- packages/opentelemetry-instrumentation-langchain/pyproject.toml
- packages/opentelemetry-instrumentation-llamaindex/pyproject.toml
- packages/opentelemetry-instrumentation-marqo/pyproject.toml
- packages/opentelemetry-instrumentation-mcp/pyproject.toml
- packages/opentelemetry-instrumentation-milvus/pyproject.toml
- packages/opentelemetry-instrumentation-mistralai/pyproject.toml
- packages/opentelemetry-instrumentation-ollama/pyproject.toml
- packages/opentelemetry-instrumentation-openai-agents/pyproject.toml
- packages/opentelemetry-instrumentation-openai/pyproject.toml
- packages/opentelemetry-instrumentation-pinecone/pyproject.toml
- packages/opentelemetry-instrumentation-qdrant/pyproject.toml
- packages/opentelemetry-instrumentation-replicate/pyproject.toml
- packages/opentelemetry-instrumentation-sagemaker/pyproject.toml
- packages/opentelemetry-instrumentation-together/pyproject.toml
- packages/opentelemetry-instrumentation-transformers/pyproject.toml
- packages/opentelemetry-instrumentation-vertexai/pyproject.toml
- packages/opentelemetry-instrumentation-voyageai/pyproject.toml
- packages/opentelemetry-instrumentation-watsonx/pyproject.toml
- packages/opentelemetry-instrumentation-weaviate/pyproject.toml
- packages/opentelemetry-instrumentation-writer/pyproject.toml
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/version.py
- packages/opentelemetry-semantic-conventions-ai/pyproject.toml
- packages/sample-app/pyproject.toml
- packages/sample-app/sample_app/anthropic_joke_example.py
- packages/sample-app/sample_app/anthropic_joke_streaming_example.py
- packages/sample-app/sample_app/anthropic_structured_outputs_demo.py
- packages/sample-app/sample_app/anthropic_vision_base64_example.py
- packages/sample-app/sample_app/async_anthropic_example.py
- packages/sample-app/sample_app/async_anthropic_joke_streaming.py
- packages/traceloop-sdk/pyproject.toml
…bility

Bring back all LLM_* constants removed in v0.5.0 with their original string values. This allows non-migrated instrumentation packages to depend on semconv >=0.5.1 without code changes.
- Add legacy LLM_* constants to SpanAttributes with TODO comments
- Add GEN_AI_USAGE_CACHE_*_DEPRECATED for value-changed cache attrs
- Update _testing.py to verify legacy constants are present
- Bump semconv to 0.5.1
- Update all 32 packages to depend on >=0.5.1,<0.6.0
- Add local uv source overrides for semconv in all packages

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

These dev-only overrides don't belong in the semconv branch. Only traceloop-sdk and langchain retain their pre-existing overrides.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Revert "fix: remove local semconv source overrides from instrumentation packages"

This reverts commit d2381f4.

fix: add transitive local source overrides for CI resolution

Packages with test deps on other instrumentation packages need local source overrides so uv can resolve semconv 0.5.1 transitively.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

The version bump to >=0.5.1 will be done in a separate PR after semconv 0.5.1 is published to PyPI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

- Move inline TODO comments above lines to fix ruff line-length violations
- Clarify duplicate cache constant comments (LLM_* vs GEN_AI_*_DEPRECATED)
- Add upper bound anthropic>=0.86.0,<1 in sample-app and traceloop-sdk
- Use stable model alias claude-haiku-4-5 in all sample app examples

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
d706a1f to d59dd6c
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py`:
- Around line 91-123: The pytest.parametrize matrix in
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py
(the pytest.mark.parametrize block) is missing tests for legacy aliases
LLM_USAGE_TOKEN_TYPE, LLM_REQUEST_REPETITION_PENALTY,
LLM_REQUEST_STRUCTURED_OUTPUT_SCHEMA, LLM_REQUEST_REASONING_SUMMARY, and
LLM_RESPONSE_REASONING_EFFORT; update that parametrize list to include entries
for each of those legacy constants with their expected modern attribute names
exactly as defined in opentelemetry/semconv_ai/__init__.py so the mapping
assertions cover these legacy aliases as well.
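The missing entries could be captured as a plain data table that feeds the existing `pytest.mark.parametrize` block. The expected values below follow the `LLM_*` → `gen_ai.*` naming pattern and are assumptions to be checked against `__init__.py`; `check_legacy_aliases` is a hypothetical helper, not part of the PR:

```python
# Legacy alias name -> expected modern attribute value. These pairs would be
# appended to the parametrize matrix in _testing.py; the values are inferred
# from the naming pattern and must be confirmed against __init__.py.
LEGACY_ALIAS_CASES = [
    ("LLM_USAGE_TOKEN_TYPE", "gen_ai.usage.token_type"),
    ("LLM_REQUEST_REPETITION_PENALTY", "gen_ai.request.repetition_penalty"),
    ("LLM_REQUEST_STRUCTURED_OUTPUT_SCHEMA", "gen_ai.request.structured_output_schema"),
    ("LLM_REQUEST_REASONING_SUMMARY", "gen_ai.request.reasoning_summary"),
    ("LLM_RESPONSE_REASONING_EFFORT", "gen_ai.response.reasoning_effort"),
]


def check_legacy_aliases(span_attributes_cls):
    """Return (name, expected, actual) for every alias missing or mis-mapped."""
    failures = []
    for name, expected in LEGACY_ALIAS_CASES:
        actual = getattr(span_attributes_cls, name, None)  # None if dropped
        if actual != expected:
            failures.append((name, expected, actual))
    return failures
```

In pytest form, the same list plugs straight into `@pytest.mark.parametrize(("name", "expected"), LEGACY_ALIAS_CASES)` with a one-line `getattr` assertion in the test body.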
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 78ef6884-cfa2-4632-a3f8-5496b46ff914
⛔ Files ignored due to path filters (1)
- `packages/opentelemetry-semantic-conventions-ai/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (12)
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/version.py
- packages/opentelemetry-semantic-conventions-ai/pyproject.toml
- packages/sample-app/pyproject.toml
- packages/sample-app/sample_app/anthropic_joke_example.py
- packages/sample-app/sample_app/anthropic_joke_streaming_example.py
- packages/sample-app/sample_app/anthropic_structured_outputs_demo.py
- packages/sample-app/sample_app/anthropic_vision_base64_example.py
- packages/sample-app/sample_app/async_anthropic_example.py
- packages/sample-app/sample_app/async_anthropic_joke_streaming.py
- packages/traceloop-sdk/pyproject.toml
✅ Files skipped from review due to trivial changes (9)
- packages/sample-app/sample_app/anthropic_structured_outputs_demo.py
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/version.py
- packages/sample-app/sample_app/async_anthropic_example.py
- packages/sample-app/sample_app/anthropic_vision_base64_example.py
- packages/sample-app/pyproject.toml
- packages/traceloop-sdk/pyproject.toml
- packages/opentelemetry-semantic-conventions-ai/pyproject.toml
- packages/sample-app/sample_app/async_anthropic_joke_streaming.py
- packages/sample-app/sample_app/anthropic_joke_streaming_example.py
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/sample-app/sample_app/anthropic_joke_example.py
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py
…gchain

Local semconv is 0.5.1 but all packages pin <0.5.0, causing transitive resolution failures. Let them resolve semconv from PyPI until 0.5.1 is published.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (1)
packages/traceloop-sdk/pyproject.toml (1)
89-89: Consider adding Anthropic version <0.86.0 to SDK test coverage. SDK test dependency at line 89 specifies `anthropic>=0.86.0,<1`, while the anthropic instrumentation package tests `anthropic[bedrock]>=0.74.0`. This creates a gap: if the SDK is intended to support Anthropic <0.86.0, that range is untested at the SDK level. The langchain instrumentation tests also use `anthropic>=0.75.0,<0.83.0`, further indicating older versions should be covered.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/traceloop-sdk/pyproject.toml` at line 89, The test dependency currently pinned as "anthropic>=0.86.0,<1" should be widened to include older Anthropic releases used by other packages (e.g., the instrumentation tests that use anthropic[bedrock]>=0.74.0 and langchain ranges) so SDK tests exercise versions <0.86.0; update the test dependency in pyproject.toml (the line containing "anthropic>=0.86.0,<1") to a range that covers older supported versions (for example "anthropic>=0.74.0,<1" or add an additional test extras entry/CI matrix for the older range) and ensure CI/test matrix includes that older version to provide coverage.
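The coverage gap is easy to see by evaluating candidate versions against the constraint. The helper below is a deliberately simplified range check for plain `X.Y.Z` strings (real resolvers follow PEP 440), written only to illustrate the reasoning:

```python
def in_range(version: str, lower: str, upper_major: int) -> bool:
    """Naive check: lower <= version and version's major component < upper_major.

    Handles only dotted-integer versions like "0.86.0"; purely illustrative,
    not a PEP 440 implementation.
    """
    parts = tuple(int(p) for p in version.split("."))
    low = tuple(int(p) for p in lower.split("."))
    return parts >= low and parts[0] < upper_major


# "anthropic>=0.86.0,<1" admits 0.86.x and later 0.x releases, but excludes
# the 0.74.0-0.85.x range that the instrumentation packages still test.
sdk_tested = [
    v for v in ("0.74.0", "0.83.0", "0.86.0", "0.99.0") if in_range(v, "0.86.0", 1)
]
```

Widening the lower bound to 0.74.0 (or adding a CI matrix entry for the older range) is what closes that gap.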
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 45537dfb-a877-48b0-b8c3-79f0dc912ba9
⛔ Files ignored due to path filters (2)
- `packages/opentelemetry-instrumentation-langchain/uv.lock` is excluded by `!**/*.lock`
- `packages/traceloop-sdk/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
- packages/opentelemetry-instrumentation-langchain/pyproject.toml
- packages/traceloop-sdk/pyproject.toml
💤 Files with no reviewable changes (1)
- packages/opentelemetry-instrumentation-langchain/pyproject.toml
Cover all LLM_* legacy constants including LLM_USAGE_TOKEN_TYPE, LLM_REQUEST_REPETITION_PENALTY, LLM_REQUEST_STRUCTURED_OUTPUT_SCHEMA, LLM_REQUEST_REASONING_SUMMARY, LLM_RESPONSE_REASONING_EFFORT, plus OpenAI and Watsonx aliases. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
c88de61 to feff469
Summary
- Bring back legacy `LLM_*` constants in semconv `SpanAttributes` with their original string values, so non-migrated instrumentation packages work with semconv `>=0.5.1` without code changes
- Add `GEN_AI_USAGE_CACHE_*_INPUT_TOKENS_DEPRECATED` for cache attributes where the constant name stayed but the value changed (dot separator)
- Bump semconv to `0.5.1`, update all 32 packages to depend on `>=0.5.1,<0.6.0`
- Bump `anthropic>=0.86.0`, fix deprecated model references

Test plan
- … legacy `LLM_*` constants work)
- … branch)
- `SpanAttributes.LLM_SYSTEM` and `SpanAttributes.GEN_AI_IS_STREAMING` both resolve correctly
- … non-migrated (openai) instrumentations
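The alias-resolution check in the test plan can be smoke-tested along these lines. `aliases_resolve` is a hypothetical helper name; against the real package, the class under test would be imported from `opentelemetry.semconv_ai`:

```python
def aliases_resolve(span_attributes_cls, names=("LLM_SYSTEM", "GEN_AI_IS_STREAMING")):
    """True if every named constant exists on the class and resolves to a string."""
    return all(
        isinstance(getattr(span_attributes_cls, name, None), str) for name in names
    )
```

Against the installed package this becomes `from opentelemetry.semconv_ai import SpanAttributes` followed by `assert aliases_resolve(SpanAttributes)`, which fails loudly if either the legacy or the modern constant name is dropped.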