Combining with rule-based test selection
If you have an existing rule-based test selection mechanism that follows the pattern below, you can combine it with Launchable's ML-based predictive test selection.
This usually finds fewer failures per test run than purely predictive selection, but it can improve psychological safety by making the algorithm more "explainable": there's a clear, unambiguous link between committed changes and selected tests.
Creating a rule definition file
Using a rule definition file that maps source directories to tests, you can provide rules to prioritize tests. For example, the following rule snippet says that any change under `src/payment` should result in `payment_test.py` and `new_payment_test.py` being selected automatically:
```json
{ ...
  {
    "src/payment" : ["file=test/payment_test.py", "file=test/new_payment_test.py"]
  }
}
```
The full format of this file is as follows:
```json
{
  "format": "prioritized-tests-v1",
  "mappings": {
    $REPO_NAME: {
      $DIRECTORY: [$TEST_PATH, ...],
      ... // repeatable
    },
    ... // repeatable
  }
}
```
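For instance, a complete file might look like the following. (This is an illustrative example; the repository name `main` and the `src/billing` rule are hypothetical.)

```json
{
  "format": "prioritized-tests-v1",
  "mappings": {
    "main": {
      "src/payment": ["file=test/payment_test.py", "file=test/new_payment_test.py"],
      "src/billing": ["file=test/billing_test.py"]
    }
  }
}
```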
- `$REPO_NAME` refers to the repository name given in `launchable record build --source REPO_NAME=DIR`. That's how the CLI can match the information in the mapping file to what has changed.
- `$DIRECTORY` refers to the relative path within the enclosing repository.
- `$TEST_PATH` is the name of a test represented in Launchable's internal test path notation. You can use the `launchable inspect tests` command to see the test paths for your recorded tests.
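To make the matching concrete, here is a minimal Python sketch of how changed files could be resolved against a mapping of this shape. This is purely illustrative, not Launchable's actual implementation, and the repository name and file paths are hypothetical:

```python
# Illustrative sketch: resolve changed files against a
# "prioritized-tests-v1"-shaped mapping. Names are hypothetical examples.
mapping = {
    "format": "prioritized-tests-v1",
    "mappings": {
        # repo name, as given to `launchable record build --source main=DIR`
        "main": {
            "src/payment": ["file=test/payment_test.py", "file=test/new_payment_test.py"],
        },
    },
}

def prioritized_tests(mapping, changed_files):
    """Return test paths whose rule directory is a prefix of a changed file."""
    selected = []
    for repo_rules in mapping["mappings"].values():
        for directory, tests in repo_rules.items():
            if any(f.startswith(directory + "/") for f in changed_files):
                for t in tests:
                    if t not in selected:  # keep rule order, avoid duplicates
                        selected.append(t)
    return selected

print(prioritized_tests(mapping, ["src/payment/gateway.py"]))
```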
Using the rule definition file
Pass this definition file through the `--prioritized-tests-mapping` option when making a `launchable subset` invocation, as follows:

```shell
launchable subset --prioritized-tests-mapping foo/bar/test-mapping.json ...
```
Subsetting behaviors
Tests selected via mapping rules are always prioritized first, before Launchable starts selecting tests based on the estimated likelihood of failure. If the specified optimization target (e.g., `--duration`) allows for more tests to be selected, Launchable will add the tests it estimates to be most effective. On the other hand, if the tests selected via mapping rules already exceed the goal, some of those tests will be dropped to create a subset that meets the goal.