fix(llamaindex): handle None content in StructuredLLM responses (#3513) #3665
Conversation
Important
Looks good to me! 👍
Reviewed everything up to 69d0e79 in 10 seconds.
- Reviewed 167 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 0 draft comments
Workflow ID: wflow_4LB7l8lq0vXY3TI6
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough
Add a guarded attribute setter that skips None/empty values, update two LlamaIndex response handlers to use it when setting completion content, and add unit tests covering None and non-None content handling in spans.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Hi @nirga @galkleinman @dinmukhamedm! Just wanted to gently ping this PR. I've updated the branch to be in sync with main. The original issue reporter cay89 confirmed the fix works for them in the comments. Happy to make any changes if needed. Thanks for maintaining this project!
LGTM, but it's still up to the Traceloop folks to review.
Other instrumentations have a util `_set_span_attribute` that does pretty much the same thing; potentially worth adding it here?
Thanks for the feedback! I've added the _set_span_attribute utility function and updated the code to use it, consistent with other instrumentations. Could you please re-approve the workflow?
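For reference, the guard shared across those instrumentations is roughly the following. This is a minimal sketch rather than the exact code in this PR, and the attribute key used below is purely illustrative:

```python
from unittest.mock import MagicMock

def _set_span_attribute(span, name, value):
    # Only record attributes that are non-None and non-empty, so
    # OpenTelemetry never warns about an invalid attribute value.
    if value is not None and value != "":
        span.set_attribute(name, value)

# Demonstrate the guard with a mocked span: two calls are skipped,
# only the non-empty value is recorded.
span = MagicMock()
_set_span_attribute(span, "gen_ai.completion.0.content", None)  # skipped
_set_span_attribute(span, "gen_ai.completion.0.content", "")    # skipped
_set_span_attribute(span, "gen_ai.completion.0.content", "hi")  # recorded
print(span.set_attribute.call_count)  # → 1
```

The guard keeps call sites short: handlers can pass whatever value they have without first checking it themselves.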
Apologies for bugging you, could you reapprove one last time? Fixed the lint error (trailing whitespace). Thanks!
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py`:
- Around line 71-78: Replace the brittle string-matching assertions on
mock_span.set_attribute calls with exact attribute-key checks using
GenAIAttributes and assert_any_call/assert absence: import GenAIAttributes (from
opentelemetry.semconv._incubating.attributes import gen_ai_attributes as
GenAIAttributes) and replace the `'role' in str(call)` filter with
mock_span.set_attribute.assert_any_call(f"{GenAIAttributes.GEN_AI_COMPLETION}.0.role",
MessageRole.ASSISTANT.value) (or the appropriate role value), and replace the
`'content' in str(call) and call[0][1] is None` check with an explicit absence
check such as assert not any(c.args[0] ==
f"{GenAIAttributes.GEN_AI_COMPLETION}.0.content" for c in
mock_span.set_attribute.call_args_list) to ensure content attribute was not set
when None; apply the same pattern for the other indicated blocks referencing
mock_span.set_attribute and MessageRole.
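As a sketch of the refactor described above, the exact-key assertions could look like this. The `GEN_AI_COMPLETION` literal is an assumption standing in for the real `GenAIAttributes` constant, and the span is mocked so the snippet is self-contained:

```python
from unittest.mock import MagicMock

# Assumed literal; the real tests would import GenAIAttributes from
# opentelemetry.semconv._incubating.attributes instead.
GEN_AI_COMPLETION = "gen_ai.completion"

mock_span = MagicMock()
# Simulate a handler that records the role but skips a None content value.
mock_span.set_attribute(f"{GEN_AI_COMPLETION}.0.role", "assistant")

# Presence check: exact key and value, instead of `'role' in str(call)`.
mock_span.set_attribute.assert_any_call(f"{GEN_AI_COMPLETION}.0.role", "assistant")

# Absence check: the content key must never appear among the recorded calls.
content_key = f"{GEN_AI_COMPLETION}.0.content"
assert not any(
    c.args[0] == content_key for c in mock_span.set_attribute.call_args_list
)
```

Matching on the full attribute key avoids false positives from substrings like "role" or "content" appearing elsewhere in the stringified call.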
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 38bb65ed-4984-4561-88f6-de0aec0ef09a
📒 Files selected for processing (2)
- packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py
- packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py
packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py
🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py (1)
41-146: Consider adding tests for empty string content in response handlers. The `_set_span_attribute` helper also guards against empty strings, but the response handler tests only verify `None` and valid string cases. Adding tests for empty string content would provide more complete coverage of the guard behavior.
📝 Example test case
```python
def test_set_llm_chat_response_with_empty_string_content(self):
    """
    Test that set_llm_chat_response doesn't set gen_ai.completion.0.content
    when response.message.content is an empty string.
    """
    mock_span = MagicMock()
    mock_span.is_recording.return_value = True

    mock_message = MagicMock()
    mock_message.role = MessageRole.ASSISTANT
    mock_message.content = ""

    mock_response = MagicMock()
    mock_response.message = mock_message

    mock_event = MagicMock()
    mock_event.response = mock_response
    mock_event.messages = []

    with patch(
        'opentelemetry.instrumentation.llamaindex.span_utils.should_send_prompts',
        return_value=True,
    ):
        set_llm_chat_response(mock_event, mock_span)

    content_key = f"{GenAIAttributes.GEN_AI_COMPLETION}.0.content"
    assert not any(
        c.args[0] == content_key for c in mock_span.set_attribute.call_args_list
    ), "Content attribute should NOT be set when value is empty string"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py` around lines 41 - 146, Add tests that cover empty-string content for the response handlers: create a test similar to test_set_llm_chat_response_with_none_content named test_set_llm_chat_response_with_empty_string_content that sets mock_message.content = "" and asserts that GenAIAttributes.GEN_AI_COMPLETION + ".0.content" is NOT set by set_llm_chat_response; similarly add test_set_llm_predict_response_with_empty_output that sets mock_event.output = "" and asserts GenAIAttributes.GEN_AI_COMPLETION + ".content" is NOT set by set_llm_predict_response. These new tests should use the same mocking pattern and should rely on the existing _set_span_attribute guard behavior to ensure empty strings are skipped.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f24aa1bc-3cd9-4406-8ae9-46620907bc7e
📒 Files selected for processing (1)
packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py
Force-pushed from 6c72e12 to ac923d3
🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py (1)
15-19: Consider centralizing `_set_span_attribute` to avoid intra-package duplication. This helper now exists here and also in packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/custom_llm_instrumentor.py (lines 71-75). Moving it to a shared module will reduce drift risk.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py` around lines 15 - 19, Duplicate helper _set_span_attribute exists across modules; centralize it into a single shared helper module (e.g., keep it in span_utils.py or move to a new common module) and update the other module that currently defines its own copy (the one with the duplicate _set_span_attribute in custom_llm_instrumentor.py) to import and use the centralized function instead of redefining it; ensure imports are updated and remove the duplicate definition so there is a single source of truth for _set_span_attribute.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 79ac9b6d-9cce-4fb1-87b1-79a1c28df33d
📒 Files selected for processing (2)
- packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py
- packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py
Fixes #3513
When using StructuredLLM with .complete() or .acomplete(), response.message.content can be None because the structured output goes to response.raw instead. This caused OpenTelemetry warnings.
Added None checks before setting completion content attributes in span_utils.py.
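A condensed sketch of the guarded handler path: the handler and helper names come from this PR, but the bodies are simplified and the LlamaIndex objects are mocked, so treat the details as illustrative:

```python
from unittest.mock import MagicMock

def _set_span_attribute(span, name, value):
    # Guard added by this PR: skip None/empty values entirely.
    if value is not None and value != "":
        span.set_attribute(name, value)

def set_llm_chat_response(event, span):
    # Condensed: with StructuredLLM, message.content can be None because
    # the structured output is carried on response.raw instead.
    message = event.response.message
    _set_span_attribute(span, "gen_ai.completion.0.role", message.role)
    _set_span_attribute(span, "gen_ai.completion.0.content", message.content)

span = MagicMock()
event = MagicMock()
event.response.message.role = "assistant"
event.response.message.content = None  # the StructuredLLM case from #3513
set_llm_chat_response(event, span)

# Only the role attribute is recorded; the None content never reaches
# span.set_attribute, so no OpenTelemetry warning is emitted.
span.set_attribute.assert_called_once_with("gen_ai.completion.0.role", "assistant")
```

Before the fix, the content line called `span.set_attribute` unconditionally, which is what triggered the warnings.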
feat(instrumentation): ... or fix(instrumentation): ...

Important

Add:
- None checks in span_utils.py to handle None content in StructuredLLM responses, preventing OpenTelemetry warnings.
- None checks in set_llm_chat_response and set_llm_predict_response in span_utils.py to prevent setting attributes with None values.
- test_none_content_fix.py to verify None handling in set_llm_chat_response and set_llm_predict_response.
- Tests confirm attributes are not set when None and are set correctly when not None.

This description was created by Ellipsis for 69d0e79. It will automatically update as commits are pushed.
Summary by CodeRabbit
Bug Fixes
Tests