LCORE-716: Refactor duplicate integration/endpoints tests to parameterized tests #1394

anik120 wants to merge 1 commit into lightspeed-core:main from
Conversation
Walkthrough

Refactors multiple integration tests to use pytest parametrization, consolidating many duplicate tests into parameterized test cases, and adds a new README section documenting data-driven (parameterized) testing patterns and best practices.
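The data-driven pattern this walkthrough describes can be sketched roughly as follows. The `normalize()` helper, the table rows, and the case ids below are invented for illustration; they are not code from the PR.

```python
import pytest


def normalize(text: str) -> str:
    # Toy function under test: trim whitespace and lowercase.
    return text.strip().lower()


# One table of cases replaces several near-identical test functions.
NORMALIZE_TEST_CASES = [
    pytest.param(
        {"raw": "  Hello  ", "expected": "hello"},
        id="strips_whitespace_and_lowercases",
    ),
    pytest.param(
        {"raw": "WORLD", "expected": "world"},
        id="lowercases_uppercase_input",
    ),
]


@pytest.mark.parametrize("test_case", NORMALIZE_TEST_CASES)
def test_normalize(test_case):
    """Verify normalize() for each table-driven case."""
    assert normalize(test_case["raw"]) == test_case["expected"]
```

Each `pytest.param(..., id=...)` row shows up as its own named test in the report, which keeps the per-case failure granularity of the original duplicated functions.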
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Force-pushed e1fc184 to f2de5ba
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/integration/endpoints/test_model_list.py (1)

89-155: ⚠️ Potential issue | 🟡 Minor — Run the formatter on this new parametrized block before merge.

This file is already failing `black --check`, and pylint is also flagging the long `test_case` parameter line in the docstring.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/endpoints/test_model_list.py` around lines 89-155: run black (or your repo formatter) on the file so the new parametrized block (MODEL_FILTER_TEST_CASES) is formatted, and break/wrap the long docstring parameter lines inside test_models_list_with_filter (especially the "test_case: Dictionary containing test parameters..." line) so pylint line-length warnings go away; ensure the parameter name test_case and the function signature remain unchanged.

tests/integration/endpoints/test_query_integration.py (1)
356-372: ⚠️ Potential issue | 🟡 Minor — Wrap the `test_case` parameter docstring line.

Line 367 is currently over pylint's 100-character limit, so this otherwise-good refactor still leaves CI red.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/endpoints/test_query_integration.py` around lines 356-372: the docstring line describing the test_case parameter exceeds pylint's 100-char limit; break the long "test_case: Dictionary containing test parameters (attachments, expected_status, expected_error)" line into two or more wrapped lines so each is under 100 characters (e.g., "test_case: Dictionary containing test parameters such as attachments and expected_status" on one line and "expected_error" on the next), keeping the parameter name "test_case" intact and preserving the same content and punctuation in the docstring for the query v2 endpoint attachment validation tests.
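A wrapped docstring of the kind this comment asks for might look like the sketch below; the function name and wording are illustrative, not the actual test from the PR.

```python
def test_query_v2_attachments(test_case=None):
    """Test attachment validation for the query v2 endpoint.

    Parameters:
        test_case: Dictionary containing test parameters such as attachments
            and expected_status, plus expected_error for failure cases.
    """
    # Placeholder body: the real test builds a QueryRequest from test_case.
    return test_case
```

Continuation lines inside a docstring are simply indented, so splitting the parameter description this way satisfies the 100-character limit without changing the parameter name or meaning.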
🧹 Nitpick comments (2)
tests/integration/endpoints/test_tools_integration.py (1)

30-45: Drop the duplicated `expect_www_authenticate` flag.

This table can derive that boolean from `www_authenticate is not None`, so carrying both values creates an easy desync point when new cases are added.

♻️ Suggested simplification

```diff
 TOOLS_OAUTH_401_TEST_CASES = [
     pytest.param(
         {
             "www_authenticate": 'Bearer realm="oauth"',
-            "expect_www_authenticate": True,
         },
         id="with_www_authenticate_when_mcp_oauth_required",
     ),
     pytest.param(
         {
             "www_authenticate": None,
-            "expect_www_authenticate": False,
         },
         id="without_www_authenticate_when_oauth_probe_times_out",
     ),
 ]
```

```diff
 www_authenticate = test_case["www_authenticate"]
-expect_www_authenticate = test_case["expect_www_authenticate"]
+expect_www_authenticate = www_authenticate is not None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/endpoints/test_tools_integration.py` around lines 30-45: the test table TOOLS_OAUTH_401_TEST_CASES contains a redundant expect_www_authenticate flag; remove that key from each pytest.param and update any test that reads expect_www_authenticate to compute the expectation from the www_authenticate value (e.g., expected = www_authenticate is not None). Locate references to TOOLS_OAUTH_401_TEST_CASES and the test function(s) that unpack its params and replace uses of expect_www_authenticate with the derived expression so the table only carries www_authenticate.

tests/integration/endpoints/test_model_list.py (1)
89-129: Remove the redundant `expected_count` field.

Every case already encodes the count in `expected_models`, so keeping both makes it easier for the table to drift out of sync during future edits.

♻️ Suggested simplification

```diff
 MODEL_FILTER_TEST_CASES = [
     pytest.param(
         {
             "filter_type": None,
-            "expected_count": 2,
             "expected_models": [
                 {"identifier": "test-provider/test-model-1", "api_model_type": "llm"},
                 {"identifier": "test-provider/test-model-2", "api_model_type": "embedding"},
             ],
         },
         id="no_filter_returns_all_models",
     ),
```

```diff
-expected_count = test_case["expected_count"]
 expected_models = test_case["expected_models"]
@@
-assert len(response.models) == expected_count
+assert len(response.models) == len(expected_models)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/endpoints/test_model_list.py` around lines 89-129: remove the redundant expected_count entries from the table of test cases so the expected_models list is the single source of truth; update MODEL_FILTER_TEST_CASES by deleting the "expected_count" key from each pytest.param dict and adjust any test assertions that reference expected_count to compute the count as len(entry["expected_models"]) or to assert against expected_models directly (search for uses of MODEL_FILTER_TEST_CASES and expected_count in the tests and replace with len(expected_models) or direct comparisons).
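The single-source-of-truth idea behind this suggestion can be shown in miniature. The table below reuses the identifiers visible in the review diff, but the helper function is a hypothetical sketch, not code from the PR.

```python
# Each case carries only expected_models; the count is derived, never stored.
MODEL_FILTER_TEST_CASES = [
    {
        "filter_type": None,
        "expected_models": [
            {"identifier": "test-provider/test-model-1", "api_model_type": "llm"},
            {"identifier": "test-provider/test-model-2", "api_model_type": "embedding"},
        ],
    },
    {
        "filter_type": "llm",
        "expected_models": [
            {"identifier": "test-provider/test-model-1", "api_model_type": "llm"},
        ],
    },
]


def expected_count(test_case: dict) -> int:
    # Deriving the count means a new row can never disagree with itself.
    return len(test_case["expected_models"])
```

With this shape, adding or removing a model from a row automatically updates the count assertion.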
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/endpoints/test_streaming_query_integration.py`:
- Around line 158-173: The docstring adds a long "Parameters" entry for
test_case that exceeds the 100-character line length; edit the docstring in the
test function in tests/integration/endpoints/test_streaming_query_integration.py
so the "test_case: Dictionary containing test parameters (attachments,
expected_status, expected_error)" line is wrapped to multiple shorter lines
(each <=100 chars) or rephrased into two lines (e.g., "test_case: Dict with test
parameters" followed by a short bulleted or indented continuation listing
attachments, expected_status, expected_error) so the docstring complies with the
pylint 100-character limit.
- Around line 251-258: The test function
test_streaming_query_endpoint_returns_401_for_mcp_oauth has too many positional
arguments and triggers pylint's
too-many-arguments/too-many-positional-arguments; fix by adding the same pylint
suppression used in tests/integration/endpoints/test_query_integration.py to the
test function declaration (disable too-many-arguments and
too-many-positional-arguments) so the linter ignores this parametrized test
signature.
In `@tests/integration/endpoints/test_tools_integration.py`:
- Around line 50-57: The test function
test_tools_endpoint_returns_401_for_mcp_oauth triggers a pylint
"too-many-arguments"/"too-many-positional-arguments" blocker; fix it by applying
the same local pylint suppression used in
tests/integration/endpoints/test_query_integration.py (add an inline # pylint:
disable=too-many-arguments,too-many-positional-arguments comment directly on the
async def line) or alternatively collapse several fixtures into a single helper
fixture and update the test signature to use that one fixture instead; update
the function definition for test_tools_endpoint_returns_401_for_mcp_oauth
accordingly.
In `@tests/integration/README.md`:
- Around line 255-355: Add a TOC entry for the new top-level section titled
"Data-Driven (Parameterized) Tests" to the README's table of contents so it
appears between "Running Tests" and "Best Practices"; update the TOC to include
a link that matches the exact header text ("Data-Driven (Parameterized) Tests")
using the same anchor style as other entries so the new guidance is
discoverable.
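The inline pylint suppression suggested for the parametrized test signatures could look like the sketch below. The fixture parameters and the placeholder body are illustrative; only the suppression comment placement follows the comment above.

```python
import pytest

TOOLS_OAUTH_401_TEST_CASES = [
    pytest.param({"www_authenticate": None}, id="without_www_authenticate"),
]


@pytest.mark.parametrize("test_case", TOOLS_OAUTH_401_TEST_CASES)
def test_tools_endpoint_returns_401_for_mcp_oauth(  # pylint: disable=too-many-arguments,too-many-positional-arguments
    test_case,
    client=None,
    mocker=None,
    auth_fixture=None,
    settings_fixture=None,
):
    """Placeholder body; the real test asserts on the 401 response."""
    return test_case["www_authenticate"]
```

Putting the `# pylint: disable=...` comment on the `def` (or `async def`) line scopes the suppression to that one function, so the checks stay active for the rest of the module.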
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 545532b4-198b-4089-ae21-97aa1b36c17e
📒 Files selected for processing (5)

- tests/integration/README.md
- tests/integration/endpoints/test_model_list.py
- tests/integration/endpoints/test_query_integration.py
- tests/integration/endpoints/test_streaming_query_integration.py
- tests/integration/endpoints/test_tools_integration.py
tests/integration/endpoints/test_streaming_query_integration.py (outdated comment, resolved)
Force-pushed f2de5ba to 5b1976e
```python
    },
    id="filter_llm_returns_llm_model",
),
pytest.param(
```
there is no reason for this test, it tests already tested logic
So this test was already there; I didn't add any new test. The test was called test_models_list_filter_model_type_embedding.
You could make a case for this test not needing to be there, but that's out of scope for this PR, and a follow-up PR should remove this test.
Hope that makes sense?
```text
- Model list handler
Tests different model_type filter scenarios:
- No filter (returns all models)
- Filter by llm type
```
same as above, you do not need to test every option
Same response as above; these existed before:

- test_models_list() → no filter
- test_models_list_filter_model_type_llm() → llm filter
- test_models_list_filter_model_type_embedding() → embedding filter (already existed)
- test_models_list_filter_model_type_unknown() → unknown filter

They all got converted into parameterized test cases in MODEL_FILTER_TEST_CASES.
A follow-up PR could address "no need to test every option"; otherwise the scope of this PR will bloat up.
…erized tests

Refactor integration tests to use parameterized/data-driven test pattern, eliminating duplicate test code and improving maintainability. Converted 19 duplicate test functions into 6 parameterized test functions with 21 test cases.

Signed-off-by: Anik Bhattacharjee <anbhatta@redhat.com>
Force-pushed 5b1976e to b954ce9
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/endpoints/test_query_integration.py`:
- Around line 281-288: The "empty_payload" pytest.param currently omits the
attachments field instead of sending it explicitly as null, so update the test
row with id "empty_payload" to include the attachments key set to None (e.g.,
{"attachments": None, ...}) if you want to assert the explicit null shape, or
rename the id/data to reflect that the field is omitted; modify the pytest.param
entry inside the test suite (the parameter list created with pytest.param and id
"empty_payload") accordingly to cover both shapes or clearly document which
shape is being tested.
- Around line 390-399: The assertion that uses getattr(response, "status_code",
status.HTTP_200_OK) is a false-positive because it defaults missing status_code
to 200; update the test to validate a real HTTP status by either (a) removing
the default and asserting response.status_code == expected_status (ensuring
response actually has a status_code attribute), or (b) invoking the real HTTP
client instead of calling query_endpoint_handler directly so you can assert the
actual HTTP response code; locate the check around query_endpoint_handler and
replace the getattr-based assertion with one of these approaches.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5088272f-9e57-442b-92f2-c7df99f3f964
📒 Files selected for processing (5)

- tests/integration/README.md
- tests/integration/endpoints/test_model_list.py
- tests/integration/endpoints/test_query_integration.py
- tests/integration/endpoints/test_streaming_query_integration.py
- tests/integration/endpoints/test_tools_integration.py
✅ Files skipped from review due to trivial changes (1)
- tests/integration/endpoints/test_streaming_query_integration.py
```python
pytest.param(
    {
        "attachments": None,
        "expected_status": 200,
        "expected_error": None,
    },
    id="empty_payload",
),
```
attachments=None is not exercised by this row.
Lines 382-388 omit the field whenever the table entry has attachments=None, so the "empty_payload" case only covers “attachments omitted.” If explicit null is a distinct input shape, this refactor dropped that coverage; if it is not, the case name/data should be updated to match what the test actually sends.
🧪 Minimal way to cover both shapes

```diff
     pytest.param(
         {
             "attachments": None,
+            "omit_attachments": True,
             "expected_status": 200,
             "expected_error": None,
         },
-        id="empty_payload",
+        id="attachments_omitted",
+    ),
+    pytest.param(
+        {
+            "attachments": None,
+            "omit_attachments": False,
+            "expected_status": 200,
+            "expected_error": None,
+        },
+        id="attachments_explicitly_none",
     ),
@@
     attachments = test_case["attachments"]
+    omit_attachments = test_case.get("omit_attachments", False)
     expected_status = test_case["expected_status"]
     expected_error = test_case["expected_error"]
-    if attachments is None:
+    if omit_attachments:
         query_request = QueryRequest(query="what is kubernetes?")
     else:
         query_request = QueryRequest(
             query="what is kubernetes?",
             attachments=attachments,
         )
```

Also applies to: 381-388
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/integration/endpoints/test_query_integration.py` around lines 281 -
288, The "empty_payload" pytest.param currently omits the attachments field
instead of sending it explicitly as null, so update the test row with id
"empty_payload" to include the attachments key set to None (e.g.,
{"attachments": None, ...}) if you want to assert the explicit null shape, or
rename the id/data to reflect that the field is omitted; modify the pytest.param
entry inside the test suite (the parameter list created with pytest.param and id
"empty_payload") accordingly to cover both shapes or clearly document which
shape is being tested.
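The distinction this comment turns on is that "field omitted" and "field explicitly null" are different wire shapes. The plain dicts below stand in for the JSON request bodies; `QueryRequest` itself is not used here.

```python
# Same logical meaning ("no attachments"), two different request shapes.
payload_omitted = {"query": "what is kubernetes?"}
payload_explicit_null = {"query": "what is kubernetes?", "attachments": None}


def sends_attachments_field(payload: dict) -> bool:
    # The serialized bodies differ: one carries the key, one does not.
    return "attachments" in payload
```

A validation layer can legitimately treat these two shapes differently, which is why a test row named "empty_payload" should say which of them it actually sends.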
```python
if expected_status == 200:
    # Success case - verify response
    response = await query_endpoint_handler(
        request=test_request,
        query_request=query_request,
        auth=test_auth,
        mcp_headers={},
    )
...
assert exc_info.value.status_code == status.HTTP_422_UNPROCESSABLE_CONTENT
assert isinstance(exc_info.value.detail, dict)
assert "unknown/type" in exc_info.value.detail["cause"]
assert "Invalid" in exc_info.value.detail["response"]
...
assert getattr(response, "status_code", status.HTTP_200_OK) == expected_status
assert response.conversation_id is not None
```
Line 398 is a false-positive status check.
Defaulting a missing status_code to 200 makes this assertion pass for any non-exception success path, so it is not validating a real HTTP status. Either remove it here or run this case through an HTTP client if you want to assert the actual response code.
🧹 Minimal cleanup

```diff
 if expected_status == 200:
     # Success case - verify response
     response = await query_endpoint_handler(
         request=test_request,
         query_request=query_request,
         auth=test_auth,
         mcp_headers={},
     )
-    assert getattr(response, "status_code", status.HTTP_200_OK) == expected_status
     assert response.conversation_id is not None
     assert response.response is not None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/integration/endpoints/test_query_integration.py` around lines 390 -
399, The assertion that uses getattr(response, "status_code",
status.HTTP_200_OK) is a false-positive because it defaults missing status_code
to 200; update the test to validate a real HTTP status by either (a) removing
the default and asserting response.status_code == expected_status (ensuring
response actually has a status_code attribute), or (b) invoking the real HTTP
client instead of calling query_endpoint_handler directly so you can assert the
actual HTTP response code; locate the check around query_endpoint_handler and
replace the getattr-based assertion with one of these approaches.
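Why the flagged assertion is vacuous can be demonstrated in isolation. The two stand-in classes below are hypothetical; they only mimic "response object without a status_code" and "response object with one".

```python
class ResponseWithoutStatus:
    """Stand-in for a handler return value that has no status_code attribute."""

    conversation_id = "abc"


class ResponseWithStatus:
    """Stand-in for a response that does carry a status code."""

    status_code = 500


def passes_flagged_assertion(response, expected_status=200):
    # Mirrors the flagged pattern: a missing status_code defaults to 200,
    # so the comparison succeeds for any object lacking the attribute.
    return getattr(response, "status_code", 200) == expected_status
```

An object with no `status_code` at all satisfies the check, so the assertion only ever constrains objects that happen to define the attribute.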
Description
Refactor integration tests to use parameterized/data-driven test pattern, eliminating duplicate test code and improving maintainability.
Converted 19 duplicate test functions into 6 parameterized test functions with 21 test cases.
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
Tests
Documentation