[MNT] Dockerized tests for CI runs using localhost #1629
satvshr wants to merge 86 commits into openml:main
Conversation
Locally, MinIO already has more parquet files than on the test server.
Note that the previous strategy no longer worked if the server returned a parquet file, which is the case for the new local setup.
This means it is not reliant on the evaluation engine processing the dataset. Interestingly, the database state purposely seems to keep the last task's dataset in preparation (by having processing marked as done but having no dataset_status entry).
Codecov Report: ✅ All modified and coverable lines are covered by tests.
I didn't look at it too closely, but since it looks like the local evaluations go wrong, it's unlikely to be a server connection issue: https://github.com/openml/openml-python/actions/runs/21919012927/job/63293911025?pr=1629#logs. The error message about datatypes immediately makes me think of pandas, and this PR does not contain the fixes from #1628; I have to assume that is the underlying issue for that error. The other error you sent is strange: https://github.com/openml/openml-python/actions/runs/21940621497/job/63365126160?pr=1629. I'll have a closer look after my next meeting.
Update: I can't quickly find a reason for the error. I added it to my list to check later.
Co-authored-by: Armaghan Shakir <raoarmaghanshakir040@gmail.com>
.github/workflows/test.yml (outdated)

```yaml
# sed -i 's|/minio/|/data/|g' config/database/update.sh
# echo "=== Patched Update Script ==="
# cat config/database/update.sh | grep "nginx"
```
Why the extra work here? Locally, just running the services is enough.
Kindly ignore these; the PR isn't ready for review yet, as tests are still failing and I was trying to debug them.
openml/config.py (outdated)

```python
if sys.platform.startswith("win"):
    TEST_SERVER_URL = "http://localhost"
else:
    TEST_SERVER_URL = "http://localhost:8000"
```
We should actually use an env variable here, please see https://github.com/openml/openml-python/pull/1629/changes#r2797509441.
This should be controlled by that env variable, which, if not set, should default to https://test.openml.org/.
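A minimal sketch of the suggested pattern, assuming a hypothetical `OPENML_TEST_SERVER` variable name (the exact name is not settled in this thread):

```python
import os

# Fall back to the public test server when the env variable is unset.
TEST_SERVER_URL = os.environ.get("OPENML_TEST_SERVER", "https://test.openml.org/")
```

With this, the CI workflow can export the variable to point at the Dockerized localhost services, while local developer runs default to the public test server.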
This is not how I plan to resolve this either; it's just a temporary fix for the Windows issue.
The tests are taking too long because
Will do that to prevent hold-ups for other CIs in the repo. For my branch, it is noticeable that a run is going to fail when it has been stuck on a single test for more than a minute.
Yeah, but each job in this PR still takes the full 150 minutes.
```diff
  "avoid_duplicate_runs": False,
  "retry_policy": "human",
- "connection_n_retries": 5,
+ "connection_n_retries": 1,
```
I don't think this would work, since we change this again in conftest.py.
To be completely sure that this works, you can temporarily set n_retries = 1 in _api_calls.py::_send_request
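For context, the effect of `connection_n_retries` is roughly the following (a simplified sketch, not the actual `_api_calls.py::_send_request` implementation):

```python
import time

def send_request(do_request, n_retries=5, delay=0.0):
    """Retry wrapper: with n_retries=1 a failing call raises immediately."""
    last_error = None
    for _attempt in range(n_retries):
        try:
            return do_request()
        except ConnectionError as e:
            last_error = e
            time.sleep(delay)
    raise last_error

# With n_retries=1, the first failure propagates without any retry.
calls = []
def flaky():
    calls.append(1)
    raise ConnectionError("server down")

try:
    send_request(flaky, n_retries=1)
except ConnectionError:
    pass
```

This is why pinning the value in one place matters: if `conftest.py` overwrites the setting later, the earlier change in the defaults has no effect on test runs.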
```yaml
run: |
  git clone --depth 1 https://github.com/openml/services.git
  cd services
```
You are not running these services yet.
Did not realise I accidentally removed it.
```python
    f"collected from {__file__.split('/')[-1]}: {flow.flow_id}",
)

@pytest.mark.skip(reason="Pending resolution of #1657")
```
Skip these only if OPENML_USE_LOCAL_SERVICES is set to True.
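A hedged sketch of the suggested conditional skip, reading the `OPENML_USE_LOCAL_SERVICES` env variable (the truthy-value convention and the test name here are assumptions, not from the PR):

```python
import os

import pytest

# Skip only when the Dockerized local services are in use;
# run normally against the public test server otherwise.
USES_LOCAL_SERVICES = os.environ.get("OPENML_USE_LOCAL_SERVICES", "").lower() == "true"

@pytest.mark.skipif(USES_LOCAL_SERVICES, reason="Pending resolution of #1657")
def test_flow_collection():  # hypothetical test name for illustration
    ...
```

Compared to the unconditional `@pytest.mark.skip` in the diff above, this keeps the test active for runs against https://test.openml.org/.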
This PR sets up the v1 and v2 test servers in CI using Docker via localhost.