replace live server calls in tests/test_tasks/test_task.py (#1683)
New file mock_responses/tasks/task_upload.xml (the canned server response for a task upload):

@@ -0,0 +1,3 @@
+<oml:upload_task xmlns:oml="http://openml.org/openml">
+<oml:id>99999</oml:id>
+</oml:upload_task>
Changes to tests/test_tasks/test_task.py:

@@ -3,16 +3,18 @@
 import unittest
 from random import randint, shuffle
+from unittest import mock

 import pytest
+import requests

 from openml.datasets import (
     get_dataset,
     list_datasets,
 )
 from openml.exceptions import OpenMLServerException
 from openml.tasks import TaskType, create_task, get_task
-from openml.testing import TestBase
+from openml.testing import TestBase, create_request_response


 class OpenMLTaskTest(TestBase):
@@ -32,47 +34,55 @@ def setUpClass(cls):
     def setUp(self, n_levels: int = 1):
         super().setUp()

-    @pytest.mark.test_server()
     def test_download_task(self):
         return get_task(self.task_id)

-    @pytest.mark.test_server()
-    def test_upload_task(self):
+    @mock.patch.object(requests.Session, "post")
+    def test_upload_task(self, mock_post):
+        tasks_dir = self.static_cache_dir / "mock_responses" / "tasks"
+        mock_post.side_effect = [
+            create_request_response(
+                status_code=200, content_filepath=tasks_dir / "task_upload.xml"
+            ),
+        ]

         # We don't know if the task in question already exists, so we try a few times. Checking
         # beforehand would not be an option because a concurrent unit test could potentially
         # create the same task and make this unit test fail (i.e. getting a dataset and creating
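The decorator-plus-mock pattern used in the new test body can be shown in a self-contained form. This is a minimal sketch using only the standard library; `Service`, `upload_task`, the URL, and the XML string are stand-ins for illustration, not openml-python APIs:

```python
import unittest
from unittest import mock


class Service:
    """Stand-in for requests.Session: posting would normally hit the network."""

    def post(self, url, data=None):
        raise RuntimeError("would call the live server")


def upload_task(service):
    # Code under test: posts a task description and returns the server reply.
    return service.post("https://server.example/api/task", data={"name": "demo"})


class UploadTaskTest(unittest.TestCase):
    # patch.object replaces Service.post for the duration of this test only,
    # and passes the created mock to the method as `mock_post`.
    @mock.patch.object(Service, "post")
    def test_upload_task(self, mock_post):
        mock_post.return_value = "<oml:upload_task><oml:id>99999</oml:id></oml:upload_task>"
        reply = upload_task(Service())
        self.assertIn("99999", reply)
        mock_post.assert_called_once()


# Run the single test case directly so the example is self-contained.
result = UploadTaskTest("test_upload_task").run()
```

Because the patch is scoped to the decorated method, `Service.post` is restored afterwards, so other tests in the same class are unaffected.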
Comment on lines +43 to 51

Suggested change:

-        mock_post.side_effect = [
-            create_request_response(
-                status_code=200, content_filepath=tasks_dir / "task_upload.xml"
-            ),
-        ]
+        mock_post.return_value = create_request_response(
+            status_code=200, content_filepath=tasks_dir / "task_upload.xml"
+        )
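The suggestion swaps a one-element `side_effect` list for `return_value`. The behavioral difference is easy to demonstrate with plain `unittest.mock`, independent of openml-python:

```python
from unittest import mock

# side_effect with a list: each call consumes the next item, and any call
# past the end of the list raises StopIteration.
queued = mock.Mock(side_effect=["first reply", "second reply"])
assert queued() == "first reply"
assert queued() == "second reply"
exhausted = False
try:
    queued()
except StopIteration:
    exhausted = True
assert exhausted

# return_value: every call returns the same object, however often the mock
# is invoked -- appropriate when the test only ever makes one kind of call.
steady = mock.Mock(return_value="same reply")
assert steady() == "same reply"
assert steady() == "same reply"
assert steady.call_count == 2
```

A one-element `side_effect` list works, but makes the test brittle if the code under test ever retries; `return_value` states the intent ("always answer with this") more directly.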
Copilot AI · Feb 21, 2026

The comment mentions "Checking beforehand would not be an option because a concurrent unit test could potentially create the same task", which refers to race conditions on the live server. With mocking, there are no concurrent tests accessing shared server state, so this comment is no longer accurate and should be updated or removed to reflect that the test now uses mocks.
Suggested change:

-        # We don't know if the task in question already exists, so we try a few times. Checking
-        # beforehand would not be an option because a concurrent unit test could potentially
-        # create the same task and make this unit test fail (i.e. getting a dataset and creating
-        # a task for it is not atomic).
+        # We don't know if the task in question already exists on the server, so we try creating
+        # it multiple times and handle the "task already exists" error (code 614) as it would
+        # occur in real-world usage. This test simulates that behavior using mocked responses.
Copilot AI · Feb 21, 2026

The mocked methods _get_compatible_rand_dataset and _get_random_feature bypass the original logic, but the actual methods at lines 87-122 still call list_datasets and get_dataset, which make server calls. If any subclass or test calls these helper methods directly, it will hit the live server. Consider also mocking requests.Session.get for the responses of list_datasets and get_dataset, to make this test truly isolated.
Copilot AI · Feb 21, 2026

The exception handling for OpenMLServerException with error code 614 ("task already exists") is designed for live server interactions where tasks might already exist. With mocking, the mock response is predetermined and will not raise this exception unless explicitly configured to do so. This exception handling is now dead code and can be removed when the test is simplified to run once with the mock.
Copilot AI · Feb 21, 2026

With mocking in place, the 100-iteration loop is no longer necessary, because the server errors (code 614) will not occur with a mocked response. The test could be simplified to execute only once, which would make it faster and clearer. The retry logic was designed to handle race conditions on the live server, which do not exist when using mocks.
The test_download_task method still makes a live server call through get_task but no longer has the @pytest.mark.test_server decorator. This test needs to be mocked to avoid hitting the live server. Consider mocking requests.Session.get to provide a mock task XML response.