Conversation
…into issue1564
for more information, see https://pre-commit.ci
geetu040 left a comment
Please sync with the base PR and update it with these comments: #1576 (comment)
Pull request overview
Copilot reviewed 52 out of 53 changed files in this pull request and generated 2 comments.
configurable_fields = [f for f in config._defaults if f not in ["max_retries"]]
configurable_fields = [
    f.name for f in fields(openml._config.OpenMLConfig) if f.name not in ["max_retries"]
The code filters out a field named 'max_retries', but the OpenMLConfig dataclass does not contain a field with this name. The actual field is connection_n_retries. This filter condition should be updated to match the actual field name or removed if no longer needed.
Suggested change:
-    f.name for f in fields(openml._config.OpenMLConfig) if f.name not in ["max_retries"]
+    f.name
+    for f in fields(openml._config.OpenMLConfig)
+    if f.name not in ["connection_n_retries"]
# Example script which will appear in the upcoming OpenML-Python paper
# This test ensures that the example will keep running!
with overwrite_config_context(
with openml.config.overwrite_config_context(  # noqa: F823
The # noqa: F823 comment appears incorrect for this context. F823 is a Flake8 error code for "local variable referenced before assignment", which doesn't apply here. This line should likely use # noqa: F401 (unused import) if anything, but since openml is used on line 12, it's likely not needed at all and should be removed.
…into runs-migration-stacked
Pull request overview
Copilot reviewed 56 out of 57 changed files in this pull request and generated 10 comments.
def test_switch_to_example_configuration(self):
    """Verifies the test configuration is loaded properly."""
    # Below is the default test key which would be used anyway, but just for clarity:
    openml.config.apikey = "any-api-key"
    openml.config.server = self.production_server
    openml.config.set_servers("production")

    openml.config.start_using_configuration_for_example()

    assert openml.config.apikey == TestBase.user_key
    assert openml.config.server == self.test_server
    openml.config.servers = openml.config.get_servers("test")
This test doesn't verify that start_using_configuration_for_example() actually switched to the test server configuration. It only sets openml.config.servers to test servers after the switch, which doesn't test the intended behavior. Add assertions to verify that the configuration was correctly switched to test servers.
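A minimal sketch of the assertions this comment asks for, assuming the set_servers/get_servers helpers shown in the snippet above and the TestBase.user_key constant from the test suite (the exact names may differ in the PR):

def test_switch_to_example_configuration(self):
    openml.config.set_servers("production")

    openml.config.start_using_configuration_for_example()

    # Assert that the switch happened instead of re-assigning openml.config.servers.
    assert openml.config.apikey == TestBase.user_key
    assert openml.config.servers == openml.config.get_servers("test")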
def test_switch_from_example_configuration(self):
    """Verifies the previous configuration is loaded after stopping."""
    # Below is the default test key which would be used anyway, but just for clarity:
    openml.config.apikey = TestBase.user_key
    openml.config.server = self.production_server
    openml.config.set_servers("production")

    openml.config.start_using_configuration_for_example()
    openml.config.stop_using_configuration_for_example()

    assert openml.config.apikey == TestBase.user_key
    assert openml.config.server == self.production_server
    openml.config.servers = openml.config.get_servers("production")
This test doesn't verify that stop_using_configuration_for_example() actually restored the production server configuration. It only sets openml.config.servers to production servers after the stop, which doesn't test the intended behavior. Add assertions to verify that the configuration was correctly restored.
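Correspondingly, a sketch for the restore case under the same assumptions:

def test_switch_from_example_configuration(self):
    openml.config.set_servers("production")

    openml.config.start_using_configuration_for_example()
    openml.config.stop_using_configuration_for_example()

    # Assert that the production configuration was actually restored.
    assert openml.config.apikey == TestBase.user_key
    assert openml.config.servers == openml.config.get_servers("production")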
| assert "flow_id" in runs_df.columns | ||
|
|
||
|
|
||
| def test_run_v1_publish_mocked(run_v1, use_api_v1, test_api_key): |
The fixture name test_api_key is not defined in conftest.py. The available fixture is named test_apikey_v1. Either rename this parameter to test_apikey_v1 or create a fixture named test_api_key.
def _mocked_perform_api_call(call, request_method):
    url = openml.config.server + "/" + call
    url = openml.config.server  + call
There's an extra space before the + operator on this line. While this doesn't affect functionality, it's inconsistent with the surrounding code style. Consider removing the extra space.
| assert "flow_id" in runs_df.columns | ||
|
|
||
|
|
||
| def test_run_v1_publish_mocked(run_v1, use_api_v1, test_api_key): |
The fixture name use_api_v1 is not defined anywhere in the codebase. This fixture is referenced but never created, which will cause the test to fail.
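A sketch that would address both fixture comments for this test: the fixture body is an assumption (the PR may select the API version differently, and the api_version attribute name is hypothetical), while the signature simply switches to the existing test_apikey_v1 fixture:

import pytest

import openml


@pytest.fixture
def use_api_v1(monkeypatch):
    # Hypothetical fixture: pin the client to API v1 for one test.
    # monkeypatch restores the original attribute value at teardown.
    monkeypatch.setattr(openml.config, "api_version", "v1")


def test_run_v1_publish_mocked(run_v1, use_api_v1, test_apikey_v1):
    ...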
)


def test_run_v1_delete_mocked(run_v1, use_api_v1, test_api_key):
The fixture name test_api_key is not defined in conftest.py. The available fixture is named test_apikey_v1. Either rename this parameter to test_apikey_v1 or create a fixture named test_api_key.
)


def test_run_v1_delete_mocked(run_v1, use_api_v1, test_api_key):
The fixture name use_api_v1 is not defined anywhere in the codebase. This fixture is referenced but never created, which will cause the test to fail.
| "openml_logger", | ||
| "_examples", | ||
| "OPENML_CACHE_DIR_ENV_VAR", | ||
| "OPENML_SKIP_PARQUET_ENV_VAR", |
The attribute OPENML_TEST_SERVER_ADMIN_KEY_ENV_VAR is set in __init__ but is not included in the whitelist in __setattr__. This could cause issues if code tries to set this attribute after initialization. Add it to the whitelist on lines 166-177.
| "OPENML_SKIP_PARQUET_ENV_VAR", | |
| "OPENML_SKIP_PARQUET_ENV_VAR", | |
| "OPENML_TEST_SERVER_ADMIN_KEY_ENV_VAR", |
| "_examples", | ||
| "OPENML_CACHE_DIR_ENV_VAR", | ||
| "OPENML_SKIP_PARQUET_ENV_VAR", | ||
| "_HEADERS", |
The attribute _defaults is set in __init__ but is not included in the whitelist in __setattr__. This could cause issues if code tries to set this attribute after initialization. Add it to the whitelist on lines 166-177.
| "_HEADERS", | |
| "_HEADERS", | |
| "_defaults", |
self._config = replace(
    self._config,
    servers=config["servers"],
    api_version=config["api_version"],
    fallback_api_version=config["fallback_api_version"],
    show_progress=config["show_progress"],
    avoid_duplicate_runs=config["avoid_duplicate_runs"],
    retry_policy=config["retry_policy"],
    connection_n_retries=int(config["connection_n_retries"]),
)
The _setup method expects the config dictionary to contain fields like "servers", "api_version", and "fallback_api_version", but _parse_config returns the raw config parser output which may not contain these fields unless they are explicitly set in the config file. This will cause a KeyError when trying to access these fields. Either add default handling for missing fields or ensure _parse_config returns all required fields with defaults.
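A small, self-contained sketch of the fallback pattern this comment asks for; ExampleConfig and its fields are stand-ins for OpenMLConfig, not the PR's actual definition:

from dataclasses import asdict, dataclass, replace


@dataclass
class ExampleConfig:
    # Stand-in for OpenMLConfig with illustrative fields only.
    api_version: str = "v1"
    retry_policy: str = "human"
    connection_n_retries: int = 20


def apply_parsed(cfg: ExampleConfig, parsed: dict) -> ExampleConfig:
    # Fall back to the current value whenever the config file omits a key,
    # instead of indexing parsed["..."] and raising KeyError.
    current = asdict(cfg)
    return replace(
        cfg,
        api_version=parsed.get("api_version", current["api_version"]),
        retry_policy=parsed.get("retry_policy", current["retry_policy"]),
        connection_n_retries=int(
            parsed.get("connection_n_retries", current["connection_n_retries"])
        ),
    )


print(apply_parsed(ExampleConfig(), {"api_version": "v2"}))
# ExampleConfig(api_version='v2', retry_policy='human', connection_n_retries=20)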
Metadata
Details
fixes #1624