How to integrate tests into Gitlab CI/CD pipelines

This tutorial shows how you can easily run all your tests from a Gitlab pipeline. We assume that your current pipeline is already able to build and deploy your software. The tutorial consists of the following steps:

  1. Get your ApiKey from Testup to authenticate your Gitlab pipeline
  2. Store the ApiKey securely as a Gitlab variable
  3. Extract the Id of your project
  4. Setup the Gitlab pipeline to run your tests
  5. Optional: Make temporary changes to the settings in your test

1) Get your ApiKey from Testup

Go to your Testup start page and click on the “Profile” tab. Find your ApiKey under the section “Api Keys”.

2) Provide your ApiKey to Gitlab

Since the ApiKey gives full access to your Testup settings, it should be provided to the Gitlab pipelines in a secure way. To do this, open Gitlab and follow these steps:

  1. Open the project settings
  2. Open the CI/CD section
  3. Expand the Variables section
  4. Press “Add variable”

Once you have successfully opened the “Add variable” dialog, you can provide your details as follows:

  1. Change your variable’s name to “TESTUP_APIKEY”
  2. Paste the Testup ApiKey into the value field
  3. Uncheck the “Protect variable” box (or alternatively protect your pipelines that should run your tests)
  4. Press “Add variable” to complete the setup
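
If you want to verify that the variable actually reaches your jobs, you can temporarily add a minimal check job like the one below. This is only a sketch: the job name is made up, and any stage that already exists in your pipeline works.

check-apikey:       # illustrative job name, not required by Testup
  stage: unit-test  # use any stage that already exists in your pipeline
  script:
    # this fails if TESTUP_APIKEY was not configured as a CI/CD variable
    - test -n "$TESTUP_APIKEY"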

3) Find your Testup project Id

In this step you need to find the numeric Id of the project containing your tests. Navigate to your project in the browser so that the address bar shows a URL of the form http://app.testup.io/project/<projectId>?… Keep that project Id ready, as you will need it in the next step.
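
For example, if the address bar showed a URL like the one below (the project number 1234 is made up for illustration), your project Id would be 1234:

http://app.testup.io/project/1234?…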

4) Setup your Gitlab Pipeline

We assume that you already have a running pipeline that builds and deploys your project. Testup can only run your tests against a deployed version of your software. Therefore, you need to add the end-to-end test as a final stage in your build pipeline. Go to the stages section and add e2e-test. Your file .gitlab-ci.yml will probably look something like this:

stages:          # List of stages for jobs, and their order of execution
  - build
  - unit-test
  - deploy
  - e2e-test     # Add this stage here

As a next step, add your end-to-end test job at the end of your pipeline description. Your new pipeline step can be inserted as shown below. Don’t forget to provide the correct project Id.

Testup:   # Run this job after deployment completed successfully
  stage: e2e-test
  image: curlimages/curl
  variables:
    PROJECT_ID: <YourProjectId>
  script:
    - URL="https://app.testup.io/cicd/project/$PROJECT_ID/run/$CI_JOB_ID"
    - curl $URL -H Authorization:\ ApiKey-v1\ ${TESTUP_APIKEY} -s --retry 12
    - curl $URL -H Authorization:\ ApiKey-v1\ ${TESTUP_APIKEY} -s --fail

How exactly does this step work technically? First, it starts from an image that provides the “curl” command. Then it builds the URL that triggers the start of the test run. This URL contains the project Id to run as well as the Gitlab job Id, which distinguishes update requests from new runs. Following these preparations, two curl commands are issued. The first calls Testup and retries until the test is marked as either passed or failed. Until the run is finished, the endpoint returns a 504 timeout code along with some early debug information, so your pipeline’s log contains useful information and a link to the corresponding resource in Testup. The second curl is necessary to make the pipeline fail if the tests failed.
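
If you want to check the call before wiring it into your pipeline, you can issue the same request from a local shell. The following is only a sketch: the project Id 1234 and the run Id 4711 are made-up values, and <YourApiKey> stands for the key from step 1.

export TESTUP_APIKEY="<YourApiKey>"
URL="https://app.testup.io/cicd/project/1234/run/4711"
# poll until the run is finished; a 504 response means the tests are still running
curl $URL -H "Authorization: ApiKey-v1 ${TESTUP_APIKEY}" -s --retry 12
# repeat without retries; --fail makes curl exit with an error if the tests failed
curl $URL -H "Authorization: ApiKey-v1 ${TESTUP_APIKEY}" -s --fail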

Once your pipeline is set up, the job log will show a debug message with a link to the corresponding test run in Testup.

5) Optional: Provide additional settings to your test

Very often it is necessary to run your pipeline tests with different values than the ones used in interactive editing. Common cases are temporary domains or changing parameters such as users, passwords, etc. You can provide these additional parameters in the request body of the first curl. It is possible to replace URLs and text contents that occur in your test. Your pipeline step would then look like this:

Testup:   # Run this job after deployment completed successfully
  stage: e2e-test
  image: curlimages/curl
  variables:
    PROJECT_ID: <YourProjectId>
  script:
    - URL="https://app.testup.io/cicd/project/$PROJECT_ID/run/$CI_JOB_ID"
    - curl $URL -H Authorization:\ ApiKey-v1\ ${TESTUP_APIKEY} -s --retry 12 --data
      '{"textMap":[{
        "old":"OriginalValue",
        "new":"NewValue",
        "regex":false
       }],
       "urlMap":[{
        "old":"https://stage.example.com",
        "new":"https://qa.example.com",
        "regex":false
       }]
      }' 
    - curl $URL -H Authorization:\ ApiKey-v1\ ${TESTUP_APIKEY} -s --fail
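
If the replacement map grows, it can be more convenient to keep the JSON in a file and let curl read it from there. The following sketch assumes that a file named replacements.json with the same textMap/urlMap structure as above is checked into your repository:

Testup:   # Run this job after deployment completed successfully
  stage: e2e-test
  image: curlimages/curl
  variables:
    PROJECT_ID: <YourProjectId>
  script:
    - URL="https://app.testup.io/cicd/project/$PROJECT_ID/run/$CI_JOB_ID"
    # curl reads the request body from replacements.json in the repository checkout
    - curl $URL -H Authorization:\ ApiKey-v1\ ${TESTUP_APIKEY} -s --retry 12 --data @replacements.json
    - curl $URL -H Authorization:\ ApiKey-v1\ ${TESTUP_APIKEY} -s --fail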


Testing with a Twist: How Testup Tests Itself

Testup is a frontend test automation system for web pages and front ends. But Testup does not only provide a tool to test front ends, it also has a nice web front end itself. To make sure it stays that way, it should sound natural that we use our own software to test itself. Sounds twisted? Well…

Let’s first review some of the common challenges in UI testing.

For each of these ideals, reality tends to get in the way:

  • Low redundancy: a generic redesign of the application requires redundant test updates at many locations.
  • Full automation: distinguishing design changes from design failures requires human intervention.
  • Reproducibility: the UI is particularly prone to inconsistencies when interactions occur at superhuman speed and internal states are not yet prepared.
  • Transparent state: the internal state of server components is not accessible after the test has run through.
  • Locatability: failures that surface at the end of a complex interaction cannot easily be attributed to a single failed feature or step.
  • Speed: certain features can only be reached after a lengthy preparation phase, causing long warm-up periods.

Currently we have 11 UI tests. Each test is focused on a specific aspect of the application and exercises several features attributed to that aspect. Some tests share common predecessors that bring the software into a stable state, which serves as a basis for more advanced features that require a filled database.

The following picture gives you an outline of the relevant screens from each test. Please note that some predecessor tests have multiple continuations. For each of these continuations the entire predecessor must be rerun from scratch to ensure a clean state.

How are we doing on our own ideals?

  • Low redundancy
    We have shared predecessors that are 100% reused. E.g. the login process is only defined once and reused everywhere.
    Once the tests veer off in different directions from a common predecessor, there is no more sharing of test steps. The tests must therefore be defined with the least possible overlap of accessed features.
  • Full automation
    In the good case tests run fully autonomously from start to end. What if it breaks? First, all test checks occur on the graphical representation of the application. Hence, it is easy for a human to assess the difference between expected and observed state. Second, tests will try to recover from the visual change and present the differences to the user who then accepts the change. In our dreams it would be smart enough to close a cookie banner, but we are not there yet.
  • Reproducibility
    It is not trivial to define tests that run consistently under variations of uncontrollable variables, e.g. server load, network latency, or expected changes in displayed calendars and times. This is certainly the hardest part and it totally relies on the usability of the software to make tracing and fixing issues as much fun as possible. (It cannot be explained until you use the software)
  • Transparent state
    Our approach is fully graphical and as such we might see less of the internal state (e.g. the DOM). However, we do record the entire screen sequence and can thus highlight any early deviations from the baseline. Hence, it is usually possible to navigate quickly to the earliest indication of an incorrect internal state.
  • Locatability
    If tests were written to just replay recorded screen interactions it would be difficult to fail exactly at the point where the erroneous feature was executed. Instead we define tests such that they contain frequent checks and assertions. Adding an assertion is as easy as drawing a rectangle around the area you think should be graphically stable.
  • Speed
    Let’s face it: UI tests are not unit tests. Our tests are currently taking about 30 minutes of CPU time, and that number is growing. That’s why our service comes with access to a cluster that can run tests massively in parallel.

To summarize this post with a (biased) view on our own software, I have to say that we are quite happy. We have made progress on most of the goals we set. In terms of usability I am convinced we have already surpassed most of the competition. If you haven’t done so yet, please do sign up and share your views.

“Testing with a Twist” is a series of articles about testing with Testup. Up to now, the following articles have been published:


Testing with a Twist: The Testup Build Pipeline

How do you write software that tests itself? We at Testup ask ourselves this question every day, because we are writing software that allows everybody to test their software. And that obviously includes ourselves. One of our main missions is to write software that we love to use for testing our own software. Sounds twisted? Well, …

Let’s first look at the attributes that we love and which technology is the hero in each discipline.

  • Fast response. Hero technology: static code analysis. Ideal: get feedback as you type.
  • Full history. Hero technology: stack trace (log file). Ideal: see everything that happened in the run-up to the failure.
  • Transparent state. Hero technology: debugger. Ideal: see all variable values at the time of failure.
  • Production-like environment. Hero technology: Docker, infrastructure as code. Ideal: quick reproduction of deployment settings.
  • Smart assessments. Hero technology: human, AI. Ideal: make informed decisions about the validity of the observed behavior.

How can we achieve all these qualities at once? Unfortunately there is no hammer that hits all nails, at least not all at once. Let’s look at our build pipeline and see how we are doing.

There are three different environments in which the tests run (a sketch of how this can be expressed in GitLab CI follows the list):

  • Build environment
    • Triggered with every commit (or locally on request)
    • Has access to only the artefacts of one repository at a time
    • Should finish within a few minutes at most
  • Integration environment
    • Replicates all components of the productive environment
    • Mocks/disables external components like payment, messaging, …
    • Automated tests finish within an hour or less
  • Staging environment:
    • Usable environment for humans to interact with
    • Final validity check for new features
    • Should be completed within a day
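
As a rough sketch, this split could be expressed in a GitLab CI configuration along the following lines (the stage names are illustrative, not our literal setup):

stages:
  - build                # build environment: runs on every commit
  - integration-test     # integration environment: production-like, external services mocked
  - deploy-staging       # staging environment: humans run the final acceptance test here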

Below you can see the build process from the first commit up to the production release. There are only two manual steps:

  • Pull request: Manual inspection of code quality
  • Acceptance test: Manual inspection of the feature’s usability before release

We will cover all aspects of the build pipeline in future posts. Next to come: “Automated UI tests”

“Testing with a Twist” is a series of articles about testing with Testup. Up to now, the following articles have been published: