Configuration options for your use-case

As you think about bringing in Launchable, there are a few configuration topologies to consider.

Simple configurations

Test suite run on a single machine with no commit data available

Use this option if your entire test suite runs on a single machine.

Implementation: Use the command launchable record tests --no-build to record the test session information. The --no-build option is used when there is no commit or build data to pass to Launchable.

Here’s how the command works for different test runners:
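
For example, a rough sketch assuming a pytest suite that writes JUnit-style XML reports (the report path is a placeholder; substitute your own test runner and paths):

    # Run the tests as usual, producing JUnit-style XML reports
    pytest --junit-xml=test-results/results.xml

    # Record the results without any build or commit information
    launchable record tests --no-build pytest ./test-results/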

Value unlocked:

  • Test Insights

  • Issue Grouping per test session

  • Predictive Test Selection (without commit data to factor in, Launchable indexes on other characteristics, such as the recency of test failures)

Attach log files for a test session run

Launchable uses log files produced during test runs to further analyze test results and help users diagnose test failures.

Implementation: Logs produced during the test sessions can be submitted as attachments.

Here’s how the command works:
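
As a sketch, assuming logs are submitted with the launchable record attachment subcommand, a session ID saved to session.txt, and placeholder log file names:

    # Attach log files to the test session for later diagnosis
    launchable record attachment --session $(cat session.txt) build.log stderr.log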

Value unlocked:

  • Issues dashboard (a dashboard that identifies new/resolved/ongoing issues across multiple runs)

Single source repository (correlate commit data with test failures for Predictive Test Selection)

If you plan to use Predictive Test Selection (PTS), then sending build information is critical to correlate commit data with test failures.

Implementation: To record a build, run launchable record build before you create a build in your CI script.

Here’s how the command works:
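
A sketch of a typical CI step, assuming a $BUILD_NAME environment variable and a Maven project (both are placeholders):

    # Record the build (and the commits that went into it) before creating it
    launchable record build --name $BUILD_NAME --source src=.

    # ...create the build and run the tests...

    # Associate the test results with that build
    launchable record tests --build $BUILD_NAME maven ./**/target/surefire-reports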

Value unlocked:

  • Predictive Test Selection factors in commit information to predict what tests to run

Same test suite runs in multiple different environments

This is specifically for cases where your test results depend on the environment they run in, such as the browser.

Implementation: When you submit test results using launchable record tests, you can submit additional metadata in the form of key-value pairs using the --flavor option.

Here’s how the command works:
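
A sketch, assuming two browser environments and placeholder report paths:

    # Record results from each environment with a distinguishing flavor
    launchable record tests --build $BUILD_NAME --flavor browser=chrome maven ./reports/chrome
    launchable record tests --build $BUILD_NAME --flavor browser=firefox maven ./reports/firefox

Passing the same --flavor key-value pairs to launchable subset then tailors the selection to that environment.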

Value unlocked:

  • Predictive Test Selection can find tailored tests to run for a particular environment

Multiple test suites run against the same build sent over to Launchable

If you are running different test suites against the same build, separating them into different "test suites" improves the quality of data analysis throughout our system.

Implementation: When you invoke the launchable record tests command, specify the additional --test-suite option and give it a test suite name.

Here’s how the command works:
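
A sketch, assuming unit and integration suites with placeholder report paths:

    # Record each suite separately against the same build
    launchable record tests --build $BUILD_NAME --test-suite unit maven ./unit/reports
    launchable record tests --build $BUILD_NAME --test-suite integration maven ./integration/reports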

Value unlocked:

  • Improved quality of data analysis through the platform

  • UI allows you to drill down by test suites to analyze data

Multiple repositories combined in one build, then tested

If you are producing builds by combining code from multiple repositories, then this is for you.

Implementation: Invoke launchable record build with multiple --source options, one per repository.

Here’s how the command works:
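
A sketch, assuming a build that combines two checkouts at placeholder paths:

    # Give each repository its own label and path
    launchable record build --name $BUILD_NAME --source main=./main --source lib=./lib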


Complex configurations

Build and test processes happen on different machines

Implementation: You will have to manually create a test session ID and share it between the machines in order to run tests.

Read this help doc further to understand the setup.
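
A sketch of the flow, assuming the session ID is shared between machines via a file (session.txt is a placeholder):

    # On a machine that knows the build, create a session and capture its ID
    launchable record session --build $BUILD_NAME > session.txt

    # On the machine that runs the tests, reference that session
    launchable record tests --session $(cat session.txt) maven ./reports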

Combining test reports from multiple runs

Implementation: Some pipelines execute multiple test runs against a build, outputting distinct test reports across several machines. Depending on your layout (see Test Session), you may want to combine these into a single test session.

Read this help doc further to understand the setup.
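
A sketch of multiple runs reporting into one shared session (report paths are placeholders):

    # Create a single session up front
    launchable record session --build $BUILD_NAME > session.txt

    # Each run records its reports into the same session
    launchable record tests --session $(cat session.txt) maven ./runner-1/reports
    launchable record tests --session $(cat session.txt) maven ./runner-2/reports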

Multiple repositories built/deployed separately then tested together (e.g. micro-services)

Implementation: Some teams run regression tests against an environment where several services have been deployed. Each service is built from its own repository (or set of repositories).

Read this help doc further to understand the setup.