
fix(llamaindex): handle None content in StructuredLLM responses (#3513)#3665

Merged
nirga merged 6 commits into traceloop:main from Kash6:fix/llamaindex-structured-llm-none-content
Mar 25, 2026

Conversation

@Kash6 (Contributor) commented Feb 6, 2026

Fixes #3513

When using StructuredLLM with .complete() or .acomplete(), response.message.content can be None because the structured output goes to response.raw instead. This caused OpenTelemetry warnings:

Invalid type NoneType for attribute 'gen_ai.completion.0.content'

Added None checks before setting completion content attributes in span_utils.py.
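As a minimal sketch of the guard described above: the helper name _set_span_attribute matches the PR discussion, but the FakeSpan class below is a stand-in for illustration, not the real OpenTelemetry span API.

```python
# Illustrative sketch of the None/empty guard discussed in this PR.
# FakeSpan is a hypothetical stand-in for an OpenTelemetry span.
def _set_span_attribute(span, name, value):
    # Only record attributes with meaningful values; skip None and "".
    if value is not None and value != "":
        span.set_attribute(name, value)


class FakeSpan:
    def __init__(self):
        self.attributes = {}

    def set_attribute(self, name, value):
        self.attributes[name] = value


span = FakeSpan()
# With StructuredLLM, response.message.content can be None (the structured
# output lives in response.raw), so the content attribute is skipped
# instead of triggering an "Invalid type NoneType" warning.
_set_span_attribute(span, "gen_ai.completion.0.content", None)
_set_span_attribute(span, "gen_ai.completion.0.role", "assistant")
print(span.attributes)  # only the role attribute is recorded
```

Calling span.set_attribute directly with None is what produced the warning; routing every write through the guard makes the skip behavior uniform across handlers.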

  • I have added tests that cover my changes.
  • If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
  • PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
  • (If applicable) I have updated the documentation accordingly.

Important

Add None checks in span_utils.py to handle None content in StructuredLLM responses, preventing OpenTelemetry warnings.

  • Behavior:
    • Add None checks in set_llm_chat_response and set_llm_predict_response in span_utils.py to prevent setting attributes with None values.
  • Tests:
    • Add test_none_content_fix.py to verify None handling in set_llm_chat_response and set_llm_predict_response.
    • Tests ensure attributes are not set when content or output is None and are set correctly when not None.

This description was created by Ellipsis for 69d0e79.

Summary by CodeRabbit

  • Bug Fixes

    • Prevented null or empty LLM response content from being recorded in telemetry by adding a guard so spans only include meaningful response text while prompt data remains unchanged.
  • Tests

    • Added unit tests covering missing vs. present LLM response content to ensure telemetry records content only when valid and to improve reliability.

@CLAassistant commented Feb 6, 2026

CLA assistant check
All committers have signed the CLA.

@ellipsis-dev (bot) left a comment

Important

Looks good to me! 👍

Reviewed everything up to 69d0e79 in 10 seconds.
  • Reviewed 167 lines of code in 2 files
  • Skipped 0 files when reviewing.
  • Skipped posting 0 draft comments. View those below.

Workflow ID: wflow_4LB7l8lq0vXY3TI6


@coderabbitai (bot) commented Feb 6, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Add a guarded attribute setter that skips None/empty values and update two LlamaIndex response handlers to use it when setting completion content; add unit tests covering None and non-None content handling in spans.

Changes

  • Guarded attribute setter & response handlers: packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py
    Add an internal _set_span_attribute(span, name, value) that only calls span.set_attribute when value is not None and not an empty string; replace the direct span.set_attribute calls for gen_ai.completion.0.content and gen_ai.completion.content in set_llm_chat_response and set_llm_predict_response.
  • Tests for None content handling: packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py
    Add tests that mock spans/messages/events and patch should_send_prompts to verify _set_span_attribute behavior and that set_llm_chat_response / set_llm_predict_response do not set content attributes when content/output is None, while still setting role attributes and valid content when present.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Poem

🐰 I hop through spans with careful pace,

I skip the None and leave no trace.
When content gleams I set it right,
Empty strings and nulls take flight.
A tiny hop to keep traces bright.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 63.64%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately describes the main change: fixing None content handling in StructuredLLM responses for LlamaIndex instrumentation.
  • Linked Issues Check: ✅ Passed. The PR fully addresses all coding requirements from issue #3513: added the _set_span_attribute helper, applied None checks in set_llm_chat_response and set_llm_predict_response, and added comprehensive tests.
  • Out of Scope Changes Check: ✅ Passed. All changes are directly related to fixing the None content handling issue; no unrelated modifications detected in span_utils.py or the test files.


@Kash6 (Contributor, Author) commented Feb 14, 2026

Hi @nirga @galkleinman @dinmukhamedm!

Just wanted to gently ping this PR. I've updated the branch to be in sync with main. The original issue reporter cay89 confirmed the fix works for them in the comments.

Happy to make any changes if needed. Thanks for maintaining this project!

@dinmukhamedm (Collaborator) commented:

LGTM, but is still up for Traceloop folks to review

Collaborator review comment:

other instrumentations have a util _set_span_attribute that does pretty much the same thing? Potentially worth adding it here?

@Kash6 (Contributor, Author) commented Mar 13, 2026:

Thanks for the feedback! I've added the _set_span_attribute utility function and updated the code to use it, consistent with other instrumentations. Could you please re-approve the workflow?

@Kash6 (Contributor, Author) commented:

Apologies for bugging you; could you reapprove one last time? I fixed the lint error (trailing whitespace). Thanks!

@coderabbitai (bot) left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py`:
- Around line 71-78: Replace the brittle string-matching assertions on
mock_span.set_attribute calls with exact attribute-key checks using
GenAIAttributes and assert_any_call/assert absence: import GenAIAttributes (from
opentelemetry.semconv._incubating.attributes import gen_ai_attributes as
GenAIAttributes) and replace the `'role' in str(call)` filter with
mock_span.set_attribute.assert_any_call(f"{GenAIAttributes.GEN_AI_COMPLETION}.0.role",
MessageRole.ASSISTANT.value) (or the appropriate role value), and replace the
`'content' in str(call) and call[0][1] is None` check with an explicit absence
check such as assert not any(c.args[0] ==
f"{GenAIAttributes.GEN_AI_COMPLETION}.0.content" for c in
mock_span.set_attribute.call_args_list) to ensure content attribute was not set
when None; apply the same pattern for the other indicated blocks referencing
mock_span.set_attribute and MessageRole.
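The exact-key assertion style suggested in that prompt can be sketched as follows. This is a self-contained illustration: GEN_AI_COMPLETION is a stand-in string for the GenAIAttributes semconv constant, and the single set_attribute call simulates a handler that records the role but skips None content.

```python
# Sketch of exact-key assertions instead of brittle `'role' in str(call)`
# string matching. GEN_AI_COMPLETION stands in for the semconv constant.
from unittest.mock import MagicMock

GEN_AI_COMPLETION = "gen_ai.completion"

mock_span = MagicMock()
# Simulate a handler that sets the role attribute but skips None content.
mock_span.set_attribute(f"{GEN_AI_COMPLETION}.0.role", "assistant")

# Positive check: assert the exact attribute key and value were set.
mock_span.set_attribute.assert_any_call(
    f"{GEN_AI_COMPLETION}.0.role", "assistant"
)

# Absence check: the content key must not appear among any recorded calls.
assert not any(
    call.args[0] == f"{GEN_AI_COMPLETION}.0.content"
    for call in mock_span.set_attribute.call_args_list
), "content attribute should not be set when the value is None"
```

assert_any_call fails loudly if the key or value ever drifts, and scanning call_args_list by exact key avoids false positives from substrings like "role" or "content" appearing elsewhere in a call's repr.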

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 38bb65ed-4984-4561-88f6-de0aec0ef09a

📥 Commits

Reviewing files that changed from the base of the PR and between 69d0e79 and 5a4f551.

📒 Files selected for processing (2)
  • packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py
  • packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py

@coderabbitai (bot) left a comment

🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py (1)

41-146: Consider adding tests for empty string content in response handlers.

The _set_span_attribute helper also guards against empty strings, but the response handler tests only verify None and valid string cases. Adding tests for empty string content would provide more complete coverage of the guard behavior.

📝 Example test case
def test_set_llm_chat_response_with_empty_string_content(self):
    """
    Test that set_llm_chat_response doesn't set gen_ai.completion.0.content
    when response.message.content is an empty string.
    """
    mock_span = MagicMock()
    mock_span.is_recording.return_value = True
    
    mock_message = MagicMock()
    mock_message.role = MessageRole.ASSISTANT
    mock_message.content = ""
    
    mock_response = MagicMock()
    mock_response.message = mock_message
    
    mock_event = MagicMock()
    mock_event.response = mock_response
    mock_event.messages = []
    
    with patch('opentelemetry.instrumentation.llamaindex.span_utils.should_send_prompts', return_value=True):
        set_llm_chat_response(mock_event, mock_span)
    
    content_key = f"{GenAIAttributes.GEN_AI_COMPLETION}.0.content"
    assert not any(
        c.args[0] == content_key for c in mock_span.set_attribute.call_args_list
    ), "Content attribute should NOT be set when value is empty string"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py`
around lines 41 - 146, Add tests that cover empty-string content for the
response handlers: create a test similar to
test_set_llm_chat_response_with_none_content named
test_set_llm_chat_response_with_empty_string_content that sets
mock_message.content = "" and asserts that GenAIAttributes.GEN_AI_COMPLETION +
".0.content" is NOT set by set_llm_chat_response; similarly add
test_set_llm_predict_response_with_empty_output that sets mock_event.output = ""
and asserts GenAIAttributes.GEN_AI_COMPLETION + ".content" is NOT set by
set_llm_predict_response. These new tests should use the same mocking pattern
and should rely on the existing _set_span_attribute guard behavior to ensure
empty strings are skipped.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f24aa1bc-3cd9-4406-8ae9-46620907bc7e

📥 Commits

Reviewing files that changed from the base of the PR and between 5a4f551 and 45d2018.

📒 Files selected for processing (1)
  • packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py

@Kash6 Kash6 requested a review from dinmukhamedm March 14, 2026 01:23
@Kash6 Kash6 requested a review from dinmukhamedm March 15, 2026 18:26
@dinmukhamedm dinmukhamedm force-pushed the fix/llamaindex-structured-llm-none-content branch from 6c72e12 to ac923d3 on March 17, 2026 18:44
@coderabbitai (bot) left a comment

🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py (1)

15-19: Consider centralizing _set_span_attribute to avoid intra-package duplication.

This helper now exists here and also in packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/custom_llm_instrumentor.py (Line 71-75). Moving it to a shared module will reduce drift risk.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py`
around lines 15 - 19, Duplicate helper _set_span_attribute exists across
modules; centralize it into a single shared helper module (e.g., keep it in
span_utils.py or move to a new common module) and update the other module that
currently defines its own copy (the one with the duplicate _set_span_attribute
in custom_llm_instrumentor.py) to import and use the centralized function
instead of redefining it; ensure imports are updated and remove the duplicate
definition so there is a single source of truth for _set_span_attribute.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 79ac9b6d-9cce-4fb1-87b1-79a1c28df33d

📥 Commits

Reviewing files that changed from the base of the PR and between 6c72e12 and ac923d3.

📒 Files selected for processing (2)
  • packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py
  • packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py

@nirga nirga merged commit 1a5e5bf into traceloop:main Mar 25, 2026
9 of 10 checks passed

Development

Successfully merging this pull request may close these issues.

🐛 Bug Report: [LlamaIndex] NoneType error for gen_ai.completion.0.content when using StructuredLLM

4 participants