How our Cypress E2E went from 20m+ to under 3m


The Background

The PushOwl dashboard, used by 20k+ merchants, was initially written without integration/e2e tests. This in itself wasn’t very unusual given how young the company was and the limited dev resources available, along with the need to prioritise product development over well-tested code.

However, as the company and the product grew, the lack of tests became untenable in the face of ever-increasing regressions and app breakages.

We decided to implement an e2e system and zeroed in on Cypress as the best tool available.

The Original Implementation

The dashboard uses Next.js as the frontend framework and GitHub Actions for CI.

To get up and running, our initial setup consisted of running an Actions workflow whenever a commit was pushed to a branch (the push event).

This workflow would then check out the branch via actions/checkout; install all dependencies via npm (available on the runner's PATH); build the project; and then run the Cypress action, which runs the test suite against localhost:3000 by default.
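A minimal sketch of that initial setup; the script names (build/start) are assumptions based on a typical Next.js package.json:

```yaml
name: E2E
on: push # run on every commit pushed to a branch

jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: cypress-io/github-action@v2
        with:
          # The action installs dependencies by default, then builds
          # and serves the app before running the suite.
          build: npm run build
          start: npm start
          wait-on: http://localhost:3000
```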

The Hurdles

We quickly ran into a few crippling issues:

  1. Unwieldy run times for the entire e2e suite, since each run required dependency installation, a local build, and Cypress running against a development build of Next.js, which is slow.
  2. Cypress running the tests serially.
  3. We had to re-run the entire suite if we wanted to re-test just a single failing spec.

The Optimisations

We set out to rectify these problems and optimise our productivity and feedback loop as much as we possibly could.

Vercel Preview Deployments

The local build issue was our first target. We were using the Vercel integration for GitHub and so we had access to preview deployments for every branch.

We wanted a way to use these for our tests since Cypress can be configured to take in a baseUrl. We referred to a useful article from Gleb Bahmutov and transitioned to the deployment_status event which Vercel triggers after every deployment.
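The switch boils down to changing the trigger and pointing Cypress at the preview URL that Vercel reports. A sketch, assuming the Cypress action we were still using at this stage:

```yaml
on: deployment_status

jobs:
  e2e:
    # deployment_status also fires for pending/failed deployments
    if: github.event.deployment_status.state == 'success'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: cypress-io/github-action@v2
        with:
          # No local build needed: run against the Vercel preview
          config: baseUrl=${{ github.event.deployment_status.target_url }}
```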

This was a major duration reducer and we managed to wipe out 5-10 minutes from the e2e run.

Dependencies Removal

Our Cypress tests use a few commands for authentication and other functionality mocking. We were using external libraries like merge and get/set from lodash in these commands, and they required installation on every e2e run. Dependencies can usually be cached using actions/cache; however, that action cannot be used with the deployment_status event.

As it turns out, Cypress ships with lodash and we can directly access methods using Cypress._.
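For example, a hypothetical login command that previously imported merge from lodash can use the bundled copy instead (the session shape here is illustrative):

```javascript
// cypress/support/commands.js
Cypress.Commands.add('login', (overrides = {}) => {
  const defaultSession = { user: 'test@example.com', plan: 'free' };
  // Cypress._ is the lodash instance bundled with Cypress —
  // no need to install or import lodash ourselves.
  const session = Cypress._.merge({}, defaultSession, overrides);
  window.localStorage.setItem('session', JSON.stringify(session));
});
```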

We removed the external imports and disabled installs using a Cypress action input. This shaved off another minute from the total duration.
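With the Cypress action, skipping the install is a single input (sketch):

```yaml
- uses: cypress-io/github-action@v2
  with:
    # Skip npm install entirely; the bundled Cypress._
    # covers what our commands need
    install: false
    config: baseUrl=${{ github.event.deployment_status.target_url }}
```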


Parallelisation

We were making good headway, and another readily apparent target for improvement was the serial execution of our specs.

We explored Cypress Dashboard, which offers several features including parallel runs and the ability to view execution stats in a tailored UI. The results were promising and we observed an overall e2e duration of ~5m. However, the inability to re-trigger individual specs in the case of failures was not optimal, and coupled with the paid nature of the service, we couldn't justify the investment.

We decided to take matters into our own hands and split the single workflow into multiple YAML files. Each workflow would run a single spec, and because they are separate workflows, they would run simultaneously.

The Cypress action has an input that can be used to run specific specs.
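Each per-spec workflow passed its spec through that input, along these lines (the spec path is illustrative; globs and comma-separated lists also work):

```yaml
- uses: cypress-io/github-action@v2
  with:
    install: false
    spec: cypress/integration/onboarding.spec.js
```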

While Actions automatically runs the jobs listed in a single workflow in parallel, we couldn’t leverage that because, at the time, there was no option to re-run just a single failing job. The entire workflow would be re-run.

A rather ugly split was the only available option, but it worked and we had successfully parallelised our suite.

Moving Away From The Cypress Action

At this point, we weren’t sure what major benefits the Cypress action was offering us apart from a neat little interface for inputs like spec and env.

We explored directly running Cypress via npx and passing in all the requisite inputs like the baseUrl, spec, and env through the CLI options.

Since npx will install the Cypress executable to /home/runner/.cache/Cypress/, which is available to all runner instances, it will be re-used across e2e runs. Note that this doesn't mean no download will take place. The dependencies that Cypress itself has will be downloaded to .npm/_npx/{hash}/node_modules/, but this only takes a few seconds.

Another caveat is that npx always checks for the latest Cypress version. So, the first time a new version is encountered and no corresponding executable is available in the cache, it will download it afresh, which will take time but will only affect the currently running workflow, and will be re-used from there on out.

With this final piece of the puzzle in place, our total e2e time became less than 3 minutes and we were satisfied that apart from a few setup seconds, that duration is overwhelmingly the inherent time needed to run the suite.

Back To A Single File

Around March ‘22 GitHub released an update for Actions which allowed us to re-run only the failing jobs in a workflow. With this change, we no longer needed distinct workflow files and reverted to using one file with multiple jobs, one for each spec.


Disabling Video Recording

Having video recording on consumes resources and can adversely affect your test timings, sometimes severely. We've observed that as long as a failing test is reproducible locally, we have no trouble debugging it.

  {
    // Having this on is detrimental in CI, but it might
    // be a good tradeoff if you need the debugging.
    "video": false,
    "retries": {
      "runMode": 1,
      "openMode": 0
    },
    "env": {...}
  }

The Code

And this is our current GitHub Actions CI workflow for our e2e test suite.

name: E2E # Whatever name is preferable
on: deployment_status # The event that triggers this workflow

jobs:
  check-deployment-status:
    if: github.event.deployment_status.state == 'success'
    runs-on: ubuntu-latest
    steps:
      - name: Deployment # Optional steps to log a message
        run: echo Deployment Successful 🔥

  # {...Other workflow checks}

  run-spec: # One such job per spec
    needs: check-deployment-status # Don't forget this!
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo 🔼
        uses: actions/checkout@v2
      - name: Run [SPEC_NAME] # Something related to the spec
        # Multiple specs can be passed
        run: |
          npx cypress run \
            --spec [SPEC_PATH] \
            --config baseUrl=${{ github.event.deployment_status.target_url }} \
            --env PASSWORD=${{ secrets.PASSWORD }}
      - uses: actions/upload-artifact@v2
        if: failure()
        with:
          name: cypress-screenshots
          path: cypress/screenshots

  # {...Change the spec passed to cypress run for each remaining job...}

The Wrap Up

The final workflow appears quite simple, and it is; but simplicity is difficult to achieve, and much effort was expended in research and testing to get to where we are currently. Ultimately though, the investment was worth it and we enjoy a speedy feedback loop which ensures that we can keep pushing often and reliably.

Looking ahead, we have plans to take our learnings from all this study to create even more sophisticated actions to further automate and streamline our dev processes. If you have any questions about our workflow or want to share your optimisations, let us know in the comments.

Ritwik Das
