Launchable Onboarding Playbook

This playbook is a 4-week guide to the entire Launchable trial experience. Along with a week-by-week list of what will happen, we will also walk you through:

  1. Use-cases solved by Launchable

  2. Tooling, Environment & Predictive Test Selection (PTS) requirements

  3. Any complex configurations to take care of

The playbook is split into three primary sections: the Pre-trial, During-trial & Post-trial experience.


Pre-trial: Setting expectations, Choosing a use-case

The core objective of the pre-trial flow is to identify a use-case for Launchable, review the necessary technical requirements, and walk through the configuration options available to you before you begin the trial.

A Launchable Sales Engineer will help you evaluate which use-case fits best, based on your problem statement and setup.

Our team will also assist you in implementing best practices and any configuration setup you may need.

Step 1: Identify a use-case

First, pick a problem that you want to solve for your team by using Launchable. Choose a use-case (listed below) that resonates, and dive deeper to see if it fits your needs.

Use-case 1 - In-place subsetting of a test suite

An existing test suite may be taking a long time to run. Some teams' capacity for executing tests is finite, yet the demand to run tests is high. Even where test execution capacity is scalable & elastic, the number of tests is so large that running them all costs too much money.

In either case, the team wants to shorten the execution time of that test suite. Read more about it here.

Use-case 2 - Shift-left a test suite to find issues earlier in the pipeline

Feedback on changes may be coming in too late because the tests sit towards the right of your delivery pipeline (e.g. UI, E2E, and nightly tests, which run infrequently because of their long run times). Another common challenge in several teams is that the main/dev branch is too unstable, creating significant overhead for QA engineers who have to deal with the failures.

Read in detail about the use-cases for Predictive Test Selection.

Step 2: Review technical requirements

Next, it is time to look at the must-have tooling & environment requirements under which Launchable is supported. Launchable will not run without them. A quick way to sanity-check them from a terminal is sketched after this list.

  • Language support.

    • Python 3 and pip3: the Launchable CLI is a Python 3 package, and you need pip3 to install it.

    • Java 8+

  • Version control system. Git (we work with all popular Git hosting services like GitHub, Bitbucket and GitLab)

    • Note: Git optimization tools add complexity; we prefer not to take them on during a trial.

  • Internet access. Launchable is a SaaS product, and the CLI needs access to the internet.

  • Team enabled to edit the CI script. Launchable integrates into your CI script. In some larger organizations, editing the CI script is handled by another team. Ensure that, as you trial Launchable, you are able to edit the script.

  • Supported test frameworks. Here’s a list of supported test frameworks (link).

  • Test results in binary (pass/fail) form. Launchable doesn’t support tests that don’t report results in binary form (performance tests are the usual example).

  • Automated tests. Launchable uses ML to learn from repeated test runs. Manual tests typically run infrequently, so we rule them out; if your manual tests are run frequently, Launchable will be able to support them.
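
As a quick sanity check, most of these requirements can be verified from a terminal on your CI machine. A minimal sketch (exact versions and paths vary by environment, and the URL check is only an illustrative reachability probe):

    # Check the tooling Launchable depends on
    python3 --version    # should report Python 3.x
    pip3 --version       # needed to install the Launchable CLI
    java -version        # Java 8 or newer
    git --version        # any recent Git

    # Illustrative probe that the machine can reach the internet
    curl -sI https://app.launchableinc.com > /dev/null && echo "internet access OK"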

Step 3: Requirements for Predictive Test Selection

Here, we will take you through a couple of must-have requirements for running PTS. We have also linked the best practices and a checklist to help you build a mental map of PTS.

  • No inter-test dependencies. PTS filters and re-orders tests, so each test must be able to run independently. If your tests have dependencies, you may have to subset at a “higher” altitude (for example, selecting whole files or classes rather than individual test cases).

  • Test framework support for running tests fed from an external file. Launchable returns the subset as a text file listing the tests to run, one test per line. The underlying test framework needs to be able to take this list and run exactly those tests. Most test frameworks support this (see here, and the sketch below).
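
To make the hand-off concrete, here is a hedged sketch with pytest (the file name and confidence value are placeholders, and --build refers to a build recorded earlier in your pipeline, which is covered during the trial):

    # Ask Launchable for a subset; the output file lists one test per line
    launchable subset \
      --build "$BUILD_NAME" \
      --confidence 90% \
      pytest tests/ > launchable-subset.txt

    # Feed the list back into the test framework
    pytest $(cat launchable-subset.txt)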

Before you proceed, make sure you have read the best practices & checklist. There are several important considerations for your team here.

Step 4: Configuration options for your use-case

As you think about bringing in Launchable, there are a few configuration topology options to think about.

Each of these configuration options may come in handy for your team. We advise you to read through the configuration options and understand their implementation as well.


During trial, Weeks 1, 2 & 3: Showcasing value

The core objective during the trial will be for you to start sending data to Launchable & then to experience and evaluate the value of the product.

Our ML model for Predictive Test Selection is ready to use from Day 1. Additionally, the model is always learning from your data, which means its accuracy in selecting the optimal subset of tests should only increase with time.

Week 1: Start sending data, experience PTS & Issue grouping functionalities

Goal: Get hands-on experience sending data to Launchable, and get a feel for the potential time savings for your test suite.

What to do

Step 1: Create an account, setup Launchable CLI

  • First of all, you need to create your account. Sign up for an account here. Choose a “company name” and “test suite name”; these will show up as “Organization” and “Workspace” respectively within the Launchable web UI. You can create multiple workspaces within an organization.

  • To start sending data, you will need to create an API key and then install the Launchable CLI. Refer to Getting started with the Launchable CLI to learn how to do this; a minimal sketch follows.
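
Assuming a typical Unix CI environment, the setup boils down to three shell steps (a sketch, not a substitute for the guide; the API key value is a placeholder):

    # Install the Launchable CLI (a Python 3 package)
    pip3 install --user --upgrade launchable~=1.0

    # Expose the API key you created to the CLI
    export LAUNCHABLE_TOKEN=your_api_key_here

    # Confirm the CLI can authenticate and reach Launchable
    launchable verify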

Step 2: Start sending test data

  • Connect your test suite: send test data and, optionally, commit data

  • The Launchable CLI integrates into your CI script. See the picture below for a quick mental model.

Configuration options for your use-case is a quick read to help you understand how to configure Launchable based on your test/source repository layout.
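
To complement the picture, here is a hedged sketch of the two recording steps in a CI script, using Maven as the example runner (the build name, source path, and report location are placeholders; other supported frameworks follow the same record build / run tests / record tests pattern):

    # Record the build under test, linking it to the commits it contains
    launchable record build --name "$BUILD_NAME" --source src=.

    # Run the test suite as usual
    mvn test

    # Send the test results to Launchable
    launchable record tests --build "$BUILD_NAME" maven ./**/target/surefire-reports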

Step 3: Setup Predictive Test Selection in Observation mode

  • Add the --observation flag to run the system in observation mode (see more; a sketch follows this list)

  • We recommend beginning with a conservative approach (for example, a 99% confidence level). You will play with this number during the trial to see what risk/reward ratio works for your test suite.

  • The Launchable team should also enable ‘Issue grouping’ (ask your Launchable Sales Engineer to enable this)
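
Concretely, observation mode is a small variation on the subset command, sketched here with Maven (flag names per the Launchable CLI docs; the 99% confidence value is the conservative starting point suggested above):

    # Request a subset in observation mode: your pipeline still runs the
    # full suite, while Launchable records what it would have selected
    launchable subset \
      --observation \
      --build "$BUILD_NAME" \
      --confidence 99% \
      maven src/test/java > launchable-subset.txt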

As you send data, make sure it is in line with the best practices - link to best practices.

Value observed:

  • PTS observation mode: Understand potential time savings on test runs without any risk

  • Triage: Triage more efficiently, by grouping similar test failures into ‘Issues’

Week 2: Trends & Unhealthy tests identified from your test sessions start showing up

Goal: Explore Insights about test runs, and continue building an intuition around subsetting in observation mode by checking time savings.

Note: At this point, no additional configuration is needed to send data. You can tweak the subset confidence number to observe the behaviour of the system.

Steps:

  • Keep sending data in observation mode to Launchable

  • The Launchable team should enable ‘Insights’

    • (at this point, there will be enough data to reveal trends and unhealthy tests from your test sessions)

  • Review the time savings on the Observe page, even though the subsets were made only in observation mode

Value observed:

  • Potential time savings with PTS: Now you can build a good understanding of how much time you can save by subsetting with Launchable

  • Insights module:

    • The Trends page will help you gauge how your test suite is evolving over time

    • Unhealthy tests will surface certain tests in your suite (identified by our model) that are causing more harm than good

Week 3: Saving actual developer hours

Goal: This week should mark your transition to subsetting tests in production mode.

Steps:

  • Remove the --observation flag

  • Based on your intuition about the test suite and the time savings so far, select an appropriate optimization target to run in production mode (a sketch follows this list)

    • (again, our suggestion is to keep it at 99%)

  • Refer to the Analyze page for your PTS results
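
In shell terms, the switch is simply dropping --observation and then running only what comes back, sketched here with Maven (the Surefire includesFile mechanism is runner-specific; check the subsetting docs for your framework):

    # Same subset command as before, minus --observation
    launchable subset \
      --build "$BUILD_NAME" \
      --confidence 99% \
      maven src/test/java > launchable-subset.txt

    # Run only the subset Launchable selected
    mvn test -Dsurefire.includesFile=$PWD/launchable-subset.txt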

Value observed:

  • Actual time savings with PTS: This week, your team gets feedback on test sessions much faster through PTS, saving valuable developer hours


Post-trial: Week 4 onwards

At this point, the trial should be complete. You should have experienced Predictive Test Selection in full flow, Insights on your test suite, and Issue grouping to help you with triaging.

Next steps:

  • Master your intuition on which optimization targets to pick based on which tests you run

    • Continue playing with the optimization target in production and observing the results

    • Leverage the confidence curve to make an informed decision

  • Sync with the Launchable team to discuss the perceived value of the product for your organization