Manual Distributed Task Execution on Circle CI
Using Nx Agents is the easiest way to distribute task execution, but your organization may not be able to use hosted Nx Agents. You can set up distributed task execution on your own CI provider using the recipe below.
Run Agents on Circle CI
Run agents directly on Circle CI with the workflow below:
```yaml
version: 2.1
orbs:
  nx: nrwl/nx@1.5.1
jobs:
  main:
    docker:
      - image: cimg/node:lts-browsers
    steps:
      - checkout
      - run: npm ci
      - nx/set-shas

      # Tell Nx Cloud to use DTE and stop agents when the e2e-ci tasks are done
      - run: npx nx-cloud start-ci-run --distribute-on="manual" --stop-agents-after=e2e-ci
      # Send logs to Nx Cloud for any CLI command
      - run: npx nx-cloud record -- nx format:check
      # Lint, test, build and run e2e on agent jobs for everything affected by a change
      - run: npx nx affected --base=$NX_BASE --head=$NX_HEAD -t lint,test,build,e2e-ci --parallel=2 --configuration=ci
  agent:
    docker:
      - image: cimg/node:lts-browsers
    parameters:
      ordinal:
        type: integer
    steps:
      - checkout
      - run: npm ci
      # Wait for instructions from Nx Cloud
      - run:
          command: npx nx-cloud start-agent
          no_output_timeout: 60m
workflows:
  build:
    jobs:
      - agent:
          matrix:
            parameters:
              ordinal: [1, 2, 3]
      - main
```
This configuration sets up two types of jobs: a main job and three agent jobs.
The main job tells Nx Cloud to use DTE and then runs normal Nx commands as if this were a single-pipeline setup. Once the commands are done, it notifies Nx Cloud to stop the agent jobs.
The agent jobs set up the repo and then wait for Nx Cloud to assign them tasks.
The `ordinal: [1, 2, 3]` line and the `--parallel` flag both parallelize tasks, but in different ways. The way this workflow is written, there will be 3 agents running tasks and each agent will try to run 2 tasks at once. If a particular CI run only has 2 tasks, only one agent will be used.
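As an illustration, here is a hypothetical variant that spreads the same work across five agents, each running up to three tasks at once. The numbers are arbitrary and should be tuned to your workload; only these two places in the workflow above change:

```yaml
# In the main job: allow each agent to run up to three tasks concurrently.
- run: npx nx affected --base=$NX_BASE --head=$NX_HEAD -t lint,test,build,e2e-ci --parallel=3 --configuration=ci

# In the workflow: launch five agent jobs instead of three.
- agent:
    matrix:
      parameters:
        ordinal: [1, 2, 3, 4, 5]
```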
Rerunning jobs with DTE
Rerunning only failed jobs results in agent jobs not running, which causes the CI pipeline to hang and eventually time out. This is a common pitfall when using a CI provider's "rerun failed jobs" (or equivalent) feature, since agent jobs always complete successfully.
To enforce rerunning all jobs, you can set up your CI pipeline to exit early with a helpful error. For example:
You reran only failed jobs, but CI requires rerunning all jobs. Rerun all jobs in the pipeline to prevent this error.
At a high level:
- Create a job that always succeeds and uploads an artifact containing the pipeline's run attempt number.
- The main and agent jobs read that artifact when they start and assert that they are on the same retry attempt.
- If the attempt numbers do not match, fail with a message telling the user to rerun all jobs. Otherwise, the jobs are on the same rerun and can proceed as normal. A sketch of this check for Circle CI follows this list.
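Below is a minimal sketch of how such a check could look on Circle CI. Instead of uploading an artifact, it compares the built-in `CIRCLE_WORKFLOW_ID` and `CIRCLE_WORKFLOW_WORKSPACE_ID` environment variables, assuming that on a "rerun failed jobs" run the workspace ID keeps the value from the original workflow while the workflow ID changes. The step name and error wording are illustrative, not part of the Nx setup.

```yaml
# Add this step to both the main and agent jobs, right after `checkout`.
# Assumption: on a "rerun failed jobs" run, CIRCLE_WORKFLOW_ID belongs to the
# new workflow while CIRCLE_WORKFLOW_WORKSPACE_ID still points at the original
# workflow, so the two values no longer match.
- run:
    name: Fail if only failed jobs were rerun
    command: |
      if [ "$CIRCLE_WORKFLOW_ID" != "$CIRCLE_WORKFLOW_WORKSPACE_ID" ]; then
        echo "You reran only failed jobs, but CI requires rerunning all jobs."
        echo "Rerun all jobs in the pipeline to prevent this error."
        exit 1
      fi
```

Because both the main and agent jobs run the same check, a partial rerun fails fast in every job instead of hanging while the main job waits for agents that never start.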