Adding a new function or class in most of `skbase`'s modules triggers a failure of `test_get_package_metadata_returns_expected_results`, since `skbase` itself is used as a test case for the lookup functionality, with the expected retrieved functions and classes hard-coded in the test config.

I don't think this is a good idea, as the test couples unrelated functionality to the `lookup` module.

E.g., it is not possible to refactor modules of `skbase` without triggering a test failure that is logically unrelated.

I would suggest reducing the scope of `test_get_package_metadata_returns_expected_results`, possibly substantially, to `mock_package`, or to that plus a few other places. The logical test coverage should not be substantially reduced by this (if it is, I would extend `mock_package` instead of abusing `skbase` as a fixture).

This is just the compartmentalisation design principle: refactoring a module shouldn't break things in unexpected, logically unrelated places.
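To illustrate the suggested direction, here is a minimal sketch of what testing lookup against a controlled fixture could look like. The helper `get_module_members` and the in-memory module below are purely illustrative stand-ins (not skbase's actual API or `mock_package` contents); the point is that the expected names are owned by the test, so refactoring real `skbase` modules cannot break the assertion.

```python
import types


def get_module_members(module):
    """Collect public functions and classes defined in a module.

    Illustrative stand-in for the lookup functionality under test.
    """
    return sorted(
        name
        for name, obj in vars(module).items()
        if not name.startswith("_")
        and isinstance(obj, (types.FunctionType, type))
    )


# Build a tiny in-memory stand-in for mock_package: its contents are
# fully controlled by the test, unlike the live skbase modules.
mock_module = types.ModuleType("mock_package.mock_module")
exec(
    "class MockClass:\n"
    "    pass\n"
    "\n"
    "def mock_function():\n"
    "    pass\n",
    mock_module.__dict__,
)

# The expected result is stable under refactors elsewhere in the package.
assert get_module_members(mock_module) == ["MockClass", "mock_function"]
```

Whether the real fix uses an in-memory module or the existing `mock_package` directory on disk is a detail; either way, the test's expected values stay decoupled from the rest of the package.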