The little guide to CI/CD for frontend developers

Jul 28 2020

/ 15 min read /


If you've been following my work for a while, or have read my previous articles, you might have noticed that I love building tools that improve the reliability and scalability of the projects I work on. A Continuous Integration and Continuous Delivery pipeline, also referred to as CI/CD, is one of them. Building such a pipeline and making it as automated as possible is like giving superpowers to your team. With it, you can enable your organization to deliver:

  • Code that respects consistent styling guidelines and formatting
  • Reliable software: the code and each subsequent release are tested to avoid regressions
  • Consistent releases: releasing a new version to the customer is as easy as possible and your team can ship fixes to production in no time
  • Features that can easily be reverted if they degrade the user experience
  • Any upcoming change to the product can be previewed as an independent unit of change
  • Efficient use of every developer's time. Developers cost money and you don't want them constantly putting out fires in production. Automate testing and releases, and remove humans from the process as much as possible. More testing means fewer bugs, fewer bugs mean less fear of change, and less fear of change means more experimentation and innovation. More automation means more time for that experimentation and innovation.

Change must be in the DNA of the team -- Eric Elliott in How to Build a High-Velocity Development Team

If your team suffers from complex release processes, or struggles to patch production within the same day or get a new feature to customers reliably: this article is for you! In this post, I'll give you all the tools that you and your team need to build a high-velocity development environment, eradicate the fear of releasing, and establish processes for your team to become unstoppable. As the title suggests, the following is written for frontend developers, since this is the area where I'm the most knowledgeable, especially when it comes to tooling. However, the concepts and steps detailed here can also be valuable to backend developers looking to improve their team's testing and release pipeline.

The impact of automation on your team, your org, and your users

When starting to work on a new CI/CD pipeline, or looking at improving an existing one, it's essential to target the efforts where you want to make the most positive impact:

  • unit testing, formatting, linting, and integration testing mainly impact the developers within your team. A good habit of writing unit tests, along with consistent code styling, can increase velocity within the team. These are what I call fast to run, fast to fail: they run quickly to identify issues within the codebase and act as the first safeguard against bugs.
  • end-to-end testing, automated releases, and branch previews are more impactful at the cross-functional or organizational level. End-to-end testing will, for example, enable your frontend and backend teams to test critical user paths together. Automated releases ensure things are released with as little friction as possible and that your entire org can address a customer request as fast as possible. Finally, branch previews enable your frontend team and QA team to review work before it lands on production. Each upcoming feature or fix can be hosted in its own service and tested on its own.
  • feature flags and accessibility testing are more customer-facing. They guarantee a better and more inclusive experience for all your users and also avoid any service disruption when it comes to releasing new features.

The following showcases a rather complete CI/CD pipeline and all its different steps. Additionally, I separated that pipeline into 3 parts, representing which of the team, the org, and the end-user each step of the pipeline impacts the most:

A CI/CD pipeline and which of the team, the organization, and the end-user is most impacted at different steps

Linting, Formatting, and Unit tests

These three items are the foundational pieces for your team to ship more reliable software, faster.

Linting and formatting

Linting and formatting are essential to keep your codebase consistent and clean. Each team member should follow the same rules and conventions when it comes to writing code. Consistency in the codebase itself is essential: 

  • you do not want to bring confusion on how to write a given piece of code in your app when you onboard a new team member
  • you do not want to have to document multiple ways of doing the same thing

Tools I use

For this step, I want my tools to be fast and reliable. It should only take a few seconds to lint and format my codebase. As a frontend engineer, I use:

  • ESLint for linting. It comes with a set of rules to write proper JavaScript, and these rules can be customized to fit your own team. Additionally, should you need something more specific, you can build your own ESLint rules; I wrote about it here, and it's an interesting exercise that involves Abstract Syntax Trees (AST).
  • Prettier for formatting. It has become the de facto formatting tool for JavaScript developers over the last few years. I set it up in my project and editor so that saving a file formats it automatically for me.
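As a sketch, a minimal configuration for both tools might look like the following (the specific rules and options here are illustrative choices, not recommendations from this article):

```json
// .eslintrc — extend a recommended ruleset, then override per your team's conventions
{
  "extends": ["eslint:recommended", "plugin:react/recommended"],
  "rules": {
    "no-console": "warn"
  }
}

// .prettierrc — a few common options; the defaults are sensible too
{
  "singleQuote": false,
  "trailingComma": "es5"
}
```

Whatever you pick, the point is that the rules live in the repository, so every team member and the CI pipeline apply exactly the same conventions.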

As mentioned above, this step must be super fast. So fast that you can execute it as a pre-commit hook (an arbitrary script that runs on every commit; I like using husky to set these up), as it ensures the code is formatted and readable before it's up for review by your teammates.
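One way to wire this up, assuming a husky v4-style configuration in package.json (the `format` and `lint` script names are assumptions about your setup):

```json
// package.json (excerpt) — runs linting and formatting checks before every commit
{
  "husky": {
    "hooks": {
      "pre-commit": "yarn format && yarn lint"
    }
  }
}
```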

Unit tests

As stated earlier, I like to call these tests fast to run, fast to fail. They should not take an extensive amount of time to run and should reveal errors or bugs in a matter of seconds, or a few minutes at most depending on the scale of your project.

The aim here is to test each part of your app as "units" or isolated components. In a React project, for example, these tests can cover: 

  • Components: I like to use unit tests to ensure my components have the proper behavior and are functioning as expected on their own, i.e. not in combination with other components or views of my app
  • Reducers / State / Actions: unit tests can help to validate that your state is updated in a specific way for a given action. Reducers are pure functions (i.e. functions that always return the same output for a given input), which makes them particularly straightforward to test.
  • Utility functions: we build a lot of helpers, or abstract a lot of functions in our projects: these are a perfect example of things that you might want to write unit tests for.

I like unit tests a lot because they act as a sanity check for your project to make sure its individual pieces work as intended over time, in a very efficient way (fast, reliable).

Tools I use

As a frontend developer, you have probably heard about Jest. It has been the most popular JavaScript testing framework for a few years now, and it's the testing tool I always install first in my JavaScript projects. To run tests on my React apps, for example, I use it in combination with:

  • @testing-library/react: It lets you write maintainable tests without worrying about implementation details. I mainly use it to render individual components and test them.
  • @testing-library/react-hooks: This library gives you all the tooling necessary to test your custom hooks.
  • @testing-library/jest-dom: This package gives you extra DOM element matchers to make your tests even easier to write and read.

The @testing-library maintainers also provide a ton of other packages that help you test your app no matter the framework (Svelte, VueJS, etc.).

Below, you will find code snippets showcasing some test suites that are meant to illustrate how I usually write tests in different situations.

In this one, I test a simple React Button component using @testing-library/react and Jest.

Example of a unit test suite for a Button component using @testing-library/react

```js
// Button.jsx
import React from "react";

const Button = (props) => {
  const {
    onClick,
    disabled = false,
    loading = false,
    children,
    ...rest
  } = props;

  return (
    <button
      {...rest}
      onClick={() => onClick()}
      disabled={loading || disabled}
    >
      {loading ? "Loading ..." : children}
    </button>
  );
};

export default Button;

// ===============
// Button.test.jsx
import React from "react";
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event"; // I use the userEvent package to manage events rather than fireEvent
import "@testing-library/jest-dom/extend-expect";
import Button from "./Button";

describe("Button Component", () => {
  it("Renders the Button as expected and clicking on it calls the function passed in the onClick prop", () => {
    const onClickMock = jest.fn();
    render(
      <Button name="test-btn" onClick={onClickMock}>
        Test
      </Button>
    );

    expect(screen.getByRole("button")).toBeInTheDocument(); // .toBeInTheDocument is a handy matcher provided by the jest-dom/extend-expect package
    expect(screen.getByRole("button")).toHaveTextContent("Test");
    expect(screen.getByRole("button")).not.toHaveAttribute("disabled");
    userEvent.click(screen.getByRole("button"));
    expect(onClickMock).toHaveBeenCalled();
  });

  it("Renders the Button with loading set to true and clicking on it does not call the function passed in the onClick prop", () => {
    const onClickMock = jest.fn();
    render(
      <Button name="test-btn" loading onClick={onClickMock}>
        Test
      </Button>
    );

    expect(screen.getByRole("button")).toBeInTheDocument();
    expect(screen.getByRole("button")).toHaveTextContent("Loading ...");
    expect(screen.getByRole("button")).toHaveAttribute("disabled");
    userEvent.click(screen.getByRole("button"));
    expect(onClickMock).not.toHaveBeenCalled(); // you can negate a specific matcher by inserting `.not` before calling it
  });

  it("Renders the Button with disabled set to true and clicking on it does not call the function passed in the onClick prop", () => {
    const onClickMock = jest.fn();
    render(
      <Button name="test-btn" disabled onClick={onClickMock}>
        Test
      </Button>
    );

    expect(screen.getByRole("button")).toBeInTheDocument();
    expect(screen.getByRole("button")).toHaveTextContent("Test");
    expect(screen.getByRole("button")).toHaveAttribute("disabled");
    userEvent.click(screen.getByRole("button"));
    expect(onClickMock).not.toHaveBeenCalled();
  });
});
```

For this code snippet, I focus on testing a reducer function that can handle two different types of actions. I love testing reducers because, as pure functions, they have predictable outputs regardless of their complexity, so writing tests for them is always an easy win for your team.

Example of a unit test for a reducer / function

```js
// reducer.js
const initialState = {};

const reducer = (state = initialState, action) => {
  switch (action.type) {
    case "FETCH_POSTS": {
      const { payload } = action;
      const items = payload.reduce((accumulator, currentItem) => {
        accumulator[currentItem.id] = currentItem;
        return accumulator;
      }, {});
      return { ...state, ...items };
    }
    case "CLEAR_POSTS": {
      return {};
    }
    default: {
      return state;
    }
  }
};

export default reducer;

// ===============
// reducer.test.js
import reducer from "./reducer";

describe("Reducer", () => {
  it("Handles the FETCH_POSTS action as expected when the initial state is an empty object", () => {
    const action = {
      type: "FETCH_POSTS",
      payload: [
        {
          userId: 1,
          id: 1,
          title: "Title Test",
          body: "Test",
        },
        {
          userId: 1,
          id: 2,
          title: "Title Test 2",
          body: "Test2",
        },
      ],
    };

    const initialState = {};

    expect(reducer(initialState, action)).toEqual({
      "1": { body: "Test", id: 1, title: "Title Test", userId: 1 },
      "2": { body: "Test2", id: 2, title: "Title Test 2", userId: 1 },
    });
  });

  it("Handles the FETCH_POSTS action as expected when the initial state already contains posts", () => {
    const action = {
      type: "FETCH_POSTS",
      payload: [
        {
          userId: 1,
          id: 1,
          title: "Title Test",
          body: "Test",
        },
        {
          userId: 1,
          id: 2,
          title: "Title Test 2",
          body: "Test2",
        },
      ],
    };

    const initialState = {
      "3": {
        body: "Test",
        id: 3,
        title: "Title Test 3",
        userId: 2,
      },
    };

    expect(reducer(initialState, action)).toEqual({
      "3": { body: "Test", id: 3, title: "Title Test 3", userId: 2 },
      "1": { body: "Test", id: 1, title: "Title Test", userId: 1 },
      "2": { body: "Test2", id: 2, title: "Title Test 2", userId: 1 },
    });
  });

  it("Handles the CLEAR_POSTS action as expected", () => {
    const action = {
      type: "CLEAR_POSTS",
    };

    const initialState = {
      "3": {
        body: "Test",
        id: 3,
        title: "Title Test 3",
        userId: 2,
      },
    };

    expect(reducer(initialState, action)).toEqual({});
  });
});
```
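The unit-test section above also listed utility functions as good candidates. As a sketch (the slugify helper below is hypothetical, not from this project), they follow the same pattern as the reducer suite: a pure function goes in, and the expected output is asserted, with no mocks or rendering needed.

```js
// slugify.js — a hypothetical utility helper: a small pure function,
// which makes it a perfect unit-test candidate
const slugify = (text) =>
  text
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumeric characters into a single dash
    .replace(/^-|-$/g, ""); // strip any leading or trailing dash

module.exports = slugify;
```

A Jest test for it would be a couple of assertions like `expect(slugify("Hello World!")).toEqual("hello-world")`, one per edge case you care about.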

A note about test coverage

I see a lot of people setting quarterly objectives for test coverage. Unless your project is an open-source library or a design system containing components that are critical across your entire organization, test coverage should simply remain a metric to measure whether your team is making progress when it comes to testing your consumer app.

A note on type checking

I'm skipping type checking in this section on purpose as this step deserves an article on its own.

Integration and end-to-end testing

I'm dedicating this section to both integration and end-to-end testing as I sometimes see these two types of testing used interchangeably and I think that it's important to know the nuance.

Integration tests

This is perhaps where most of your efforts should go when writing tests.

Why? Well, when you weigh the effort it takes to write tests, the time it takes to execute them, and the confidence they give back to your team, integration tests are simply the best. Unit tests give you a low confidence level but are fast to run, while end-to-end tests are slow to execute (sometimes taking over an hour in large apps) and require expensive infrastructure, but give you the highest confidence level possible. Integration tests, however, are easier to write than e2e tests and help you validate more complex behaviors than unit tests, all in a pretty short amount of time.

Write tests. Not too much. Mostly integration. -- Guillermo Rauch

If you want to know why in detail, I advise reading Kent C. Dodds' blog post Write tests. Not too much. Mostly integration.

While unit tests help to test parts of your project in isolation, integration tests help to test whether an entire set of units work together as expected. They also allow you to test full user flows and all the different paths they can take (error state, loading state, success state).

With integration tests, I like testing groups of components and functionality together, such as:

  • Navigation: Does clicking on the user setting menu item load the expected view?
  • Forms: Fill up the form in all possible ways (valid and invalid, with and without optional fields). Test that the expected error messages are displayed when invalid. Validate that clicking on submit sends the right payload when valid. A form like this may be composed of components, reducers, and utility functions that we tested individually in the unit test phase. Here we're testing them working altogether in a specific context.
  • Views depending on external data: Test your list view that's fetching some data with different mocked API responses: does it show the proper empty state if there's no data? Is the filter button enabled if your API returned an error? Does it show a notification if the fetch was successful? 

I could go on and on with different examples, but these are roughly the main use cases I focus on validating when writing integration tests. I try to validate all the possible paths that a group of components, a form, or a view can take.

Tools I use

When it comes to integration tests I'm split between using two different tools, sometimes within the same project. 

  • Jest: You can write pretty advanced integration tests with Jest, @testing-library/react, and all the other tools mentioned before. I recently started using msw to mock the APIs that the views I'm testing depend on.
  • Cypress: It comes with a neat way to write fixtures and mock API endpoints and thus run some integration tests. I mainly use it to validate some browser-related behaviors like: are the proper query parameters passed to the URL? Can I load a view in a specific state by adding this set of parameters to the URL? Is a specific set of values set in local storage or not?

Sample React app that fetches posts and handles different states

```js
import React from "react";
import Button from "./Button";
import reducer from "./reducer/reducer";

const App = () => {
  const [shouldFetch, setShouldFetch] = React.useState(false);
  const [error, setError] = React.useState(null);
  const [posts, dispatch] = React.useReducer(reducer, {});

  React.useEffect(() => {
    if (shouldFetch) {
      fetch("https://jsonplaceholder.typicode.com/posts")
        .then((response) => response.json())
        .then((json) => {
          dispatch({
            type: "FETCH_POSTS",
            payload: json,
          });
          setShouldFetch(false);
        })
        .catch(() => setError({ message: "Error :(" }));
    }
  }, [shouldFetch]);

  if (error) {
    return <div data-testid="error">{error.message}</div>;
  }

  return (
    <div>
      {Object.values(posts).length > 0 ? (
        <ul data-testid="posts">
          {Object.values(posts).map((post) => (
            <li key={post.id} data-testid="post">
              {post.title}
            </li>
          ))}
        </ul>
      ) : (
        <div data-testid="empty">No Posts</div>
      )}
      <Button onClick={() => setShouldFetch(true)} loading={shouldFetch}>
        Fetch Posts
      </Button>
    </div>
  );
};

export default App;
```

You might have noticed that this app uses the same Button component and reducer we tested in isolation (i.e. unit tested) before. As stated before, the aim of integration tests is to validate whether these units can now work together in a specific use case. Below is an example of a typical integration test I'd write for an app like the one showcased above. I'd test the different possible outcomes for this list of posts:

  • The list of posts loads as expected and is properly displayed
  • The list of posts loads but is empty
  • An error occurs when fetching the posts and the fallback error state is displayed as expected

Example of an integration test suite I'd write to validate the different possible paths for the sample app

```js
import React from "react";
import { rest } from "msw";
import { setupServer } from "msw/node";
import { render, waitFor, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import "@testing-library/jest-dom/extend-expect";
import App from "./App";

/**
Here I set up our mock server using msw and msw/node.
When testing our app, any request to https://jsonplaceholder.typicode.com/posts will return
the output specified below. This allows me to test different scenarios like:
- What if my endpoint returns an empty array
- What if my request fails

This is where the true value of integration tests resides.
*/
const server = setupServer(
  rest.get("https://jsonplaceholder.typicode.com/posts", (req, res, ctx) => {
    return res(
      ctx.json([
        {
          userId: 1,
          id: 1,
          title: "Title Test",
          body: "Test",
        },
        {
          userId: 1,
          id: 2,
          title: "Title Test 2",
          body: "Test2",
        },
      ])
    );
  })
);

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());

describe("App", () => {
  it("Renders the app and loads the posts", async () => {
    render(<App />);

    userEvent.click(screen.getByText("Fetch Posts"));
    expect(screen.getByRole("button")).toHaveTextContent("Loading ...");
    expect(screen.getByRole("button")).toHaveAttribute("disabled");
    await waitFor(() => screen.getByTestId("posts"));

    expect(screen.getAllByTestId("post")).toHaveLength(2);
    expect(screen.getAllByTestId("post")[0]).toHaveTextContent("Title Test");
    expect(screen.getAllByTestId("post")[1]).toHaveTextContent("Title Test 2");

    expect(screen.getByRole("button")).toHaveTextContent("Fetch Posts");
    expect(screen.getByRole("button")).not.toHaveAttribute("disabled");
  });

  it("Renders the app when there are no posts returned", async () => {
    server.use(
      rest.get(
        "https://jsonplaceholder.typicode.com/posts",
        (req, res, ctx) => {
          // Here I mock the response to an empty array to test the behavior of my app when there are no posts to show.
          return res(ctx.json([]));
        }
      )
    );

    render(<App />);
    userEvent.click(screen.getByText("Fetch Posts"));
    expect(screen.getByRole("button")).toHaveTextContent("Loading ...");
    expect(screen.getByRole("button")).toHaveAttribute("disabled");
    await waitFor(() => screen.getByTestId("empty"));

    expect(screen.getByText("No Posts")).toBeInTheDocument();
  });

  it("Renders the app when the posts do not load", async () => {
    server.use(
      rest.get(
        "https://jsonplaceholder.typicode.com/posts",
        (req, res, ctx) => {
          // Here I mock the status of the response to 500 to validate that my app can handle errors gracefully.
          return res(ctx.status(500));
        }
      )
    );

    render(<App />);
    userEvent.click(screen.getByText("Fetch Posts"));
    expect(screen.getByRole("button")).toHaveTextContent("Loading ...");
    expect(screen.getByRole("button")).toHaveAttribute("disabled");
    await waitFor(() => screen.getByTestId("error"));

    expect(screen.getByText("Error :(")).toBeInTheDocument();
  });
});
```

End to End testing

End-to-end tests, sometimes simply called e2e, are the set of tests closest to what the user experiences when using your product. In most frameworks, like Selenium or Cypress, an e2e test suite is nothing more than a scripted user flow that the computer will go through. Additionally, most of these tests are executed directly within a browser, which lets you validate whether your app runs properly on the different browsers your customers may use.

If you're curious about cross-browser testing, I wrote a blog post about it earlier this year showcasing a very simple setup!

End to End tests have multiple pros and cons:

Pros:

  • They are the most "realistic" set of tests: you run your tests against the built version of your frontend app in a browser.
  • They validate whether your entire product works as expected, that includes the backend, APIs, the databases that might be involved, etc.
  • They can surface latency issues (long loading times) and race conditions that your team and org might not have caught yet.

Cons:

  • They are slow, complex, and expensive to run. As of today, e2e steps are the longest in most of my CI/CD pipelines. They are also very hard to maintain over time: as your app becomes more complex, tests might become flaky, and you might have to rewrite them completely to adapt to new UX elements.
  • You only test what I call the "Happy Path". For example, when running an e2e test against a form that sends data to an API, you can only test the case where things go as expected, because the test depends on external APIs and backend services that are not mocked here and are supposed to work. With integration tests, on the other hand, you can test empty states, success states, and failure states:

Illustration showcasing the difference in testing paths possible between e2e (only the "happy path") and integration tests (all paths)



Tools I use

If you haven't introduced e2e tests to your team just yet, I'd highly recommend Cypress as a starting point. To my eyes, the Cypress team has built the most accessible way to write e2e tests, and it also has the best documentation and community support.

Rather than showcasing some code snippets, I'd like to share some tips I keep coming back to when writing e2e tests:

  • Each test should be self-contained. For a given suite with tests A, B, and C, having the whole suite fail because test A failed makes it hard to find other issues with tests B and C. I try to keep each test as independent as possible, as it saves me time and effort when debugging a broken test.
  • Trigger API calls before the test to create all the objects (todos, posts, ...) you need. For a given object in your app, you might have a "create", "read", and "update" flow, and you want to test all three of them. However, the "read" and "update" flows can't be self-contained if they depend on the "create" test being successful. Thus, I tend to create custom commands that call the related APIs to create the objects I need before executing a test.
  • Promote good test practices within your team, run them often (we'll get to that in the next part), fix them as soon as they break, gather a list of tests that you want to write, and prioritize them.
  • If you currently have 0 e2e tests in your codebase and don't know which test to write first: start by writing a test that validates the most buggy or flaky feature in your app. This single test will have a positive impact on your product instantly. As stated earlier in this post, emphasize the impact of your CI/CD and tests by making the product better than it was before you wrote the test. Your organization and users will be more than thankful.

Accessibility tests and audits

This is the last and most important piece of the CI/CD pipeline. Often enough, it's also the most complicated, because guaranteeing that your frontend project is 100% accessible is no easy feat, but it's something everyone should strive for.

Nothing is more efficient than sitting in front of your computer and using your app with a screen reader. However, here are some tools that can run as part of an automated CI/CD pipeline and that I use to guide accessibility efforts:

  • Lighthouse CI: This is a suite of tools to help you audit performance, accessibility, and whether your app follows best practices. I use this tool essentially to hold the line and ensure things do not get worse over time. It allows you to put together "performance and accessibility budgets" and thresholds, and it will fail if your score drops below the targeted budget. This probably deserves an entire article on its own, but in the meantime you can check their documentation, which contains sample GitHub Workflows, and easily integrate it into your CI/CD pipeline.
  • Cypress Axe: This package works on top of Cypress and allows you to run a series of accessibility-focused test suites. It helped me find some more complex accessibility issues that Lighthouse CI would skip. I wrote a blog post about Cypress Axe last year, and I invite you to check it out if you want to learn more about it.
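As a sketch, a minimal Lighthouse CI configuration enforcing such a budget might look like this (the URL, run count, and thresholds are illustrative assumptions, not values from this article):

```json
// lighthouserc.json — fail the CI step if accessibility drops below the budget
{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:accessibility": ["error", { "minScore": 0.9 }],
        "categories:performance": ["warn", { "minScore": 0.8 }]
      }
    }
  }
}
```

With a file like this in the repository, running `lhci autorun` in a workflow step collects the scores and fails the build whenever an assertion is violated.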

Tools I use

I also use a couple of Chrome extensions to track and find new accessibility issues:

These, however, are purely used outside of my CI/CD pipeline, but I figured they were perhaps worth mentioning in this context.

Automation: When and how to run my tests and release

Now that we have written some unit, integration, e2e tests, and put in place the tooling to track accessibility issues, it's time to talk automation. The objective for your team should be to automate as much as possible, from running the tests to previewing the deployments, to deploying to production. The only manual step left in your CI/CD pipeline should be the code review. Automation is the key component of any High-Velocity development team.

Validate every code change

As of now, we know how to run these tests locally, but we want to ensure they run automatically every time a change occurs in the codebase.

I'm generally in favor of running these tests on every pull request. Each change has to be tested before it's merged to the main branch, without exception. That is the secret to keeping your project stable and bug-free: tests are run as often as possible, for every unit of change, and they must pass for any code change to reach the main branch.

As my main automation tool, I've been using GitHub Actions and Workflows for both work-related and personal projects, and it has been working like a charm! Thus, I'll mainly focus on it in the upcoming part and share some GitHub Workflow configurations, as they are easy to read and thus very accessible to people who are new to this category of tools. Your team might be using other CI/CD services, like CircleCI, Jenkins, or Google Cloud Build, so you may have to do a bit of investigation on your own when it comes to the actual configuration files needed, but the concepts stated below are still valid for those services.

Here are some sample GitHub Workflows that I typically use across several projects. If you do not have an automated CI/CD pipeline in place yet, you can use them to get started quickly and iterate; they integrate very well with GitHub PRs:

Example of a GitHub Workflow that runs automated tests on every PR

```yml
name: Linting Formatting Unit and Integration Tests

on:
  pull_request:
    branches:
      - "main" # This ensures these tests are run on pull requests that are open against the branch "main"

jobs:
  validate-code-and-test:
    runs-on: ubuntu-20.04
    strategy:
      matrix:
        node: [12.x] # If your app or package needs to be tested on multiple versions of node, you can specify multiple versions here and your workflow will be run on each one of them
    steps:
      - name: Checkout Commit
        uses: actions/checkout@v2
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Use Node.js ${{ matrix.node }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node }}
      - name: Install Dependencies
        run: |
          yarn install --non-interactive
      - name: Run Prettier
        run: |
          yarn format
      - name: Run Lint
        run: |
          yarn lint
      - name: Run Unit and Integration tests
        run: |
          yarn jest
```
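The workflows in this section assume a few yarn scripts exist in package.json. In my projects they typically map to something like the following (the exact script names and flags are assumptions about your setup, e.g. a Create React App project with the "serve" package installed):

```json
// package.json (excerpt) — scripts referenced by the CI workflows
{
  "scripts": {
    "format": "prettier --check \"src/**/*.{js,jsx}\"",
    "lint": "eslint src --ext .js,.jsx",
    "build": "react-scripts build",
    "serve": "serve -s build -l 3000"
  }
}
```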

Example of a GitHub Workflow that runs e2e tests on every PR

```yml
name: Build and E2E Tests

on:
  pull_request:
    branches:
      - "main" # This ensures these tests are run on pull requests that are open against the branch "main"

jobs:
  build-and-e2e-tests:
    runs-on: ubuntu-20.04
    strategy:
      matrix:
        containers: [1, 2, 3] # Cypress lets you scale the number of containers used to run your e2e tests. This parallelizes your test run and can help speed up your CI/CD pipeline
        node: [12.x] # If your app or package needs to be tested on multiple versions of node, you can specify multiple versions here and your workflow will be run on each one of them
    steps:
      - name: Checkout Commit
        uses: actions/checkout@v2
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Use Node.js ${{ matrix.node }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node }}
      - name: Install Dependencies
        run: |
          yarn install --non-interactive
      - name: Build UI
        run: yarn build
        env:
          NODE_ENV: production # Don't forget to run your e2e tests against the production bundle of your app!
      - name: Run E2E Tests
        uses: cypress-io/github-action@v2.2.2 # The Cypress team provides a pretty handy GitHub action. This is the easiest way to get your Cypress tests working in a GitHub workflow!
        with:
          browser: chrome # Cypress now supports multiple browsers as well!
          headless: true
          parallel: true # Let Cypress know you want to run tests in parallel. Note that parallelization requires recording the run to the Cypress Dashboard.
          start: yarn serve # You'll have to serve your own build files to run Cypress against your app. For that I simply add the NPM package called "serve".
          wait-on: "http://localhost:3000"
          config: video=true,videoUploadOnPasses=false # You can pass a series of options here; I invite you to check out the Cypress docs to learn more about them. Here I enable video recordings but skip uploading them when tests pass. The videos help me debug failing tests and know exactly what happened.
      - uses: actions/upload-artifact@v1 # In this step I tell the workflow to upload the Cypress video recordings as workflow artifacts. They will be available to download in the GitHub UI.
        if: always()
        with:
          name: cypress-videos
          path: cypress/videos
```

Some resources you might find interesting regarding Github Workflows and Cypress:

Another thing I tend to run on every PR is preview deployments. These are perhaps my favorite feature of the whole CI/CD pipeline: for each PR, you get a standalone deployment that is accessible through a unique endpoint. Each deployment is a version of your frontend project with a specific change. Not only can this help your team speed up reviews, it also lets your design and product teams validate new features easily. They shouldn't have to run your project on their own computers to preview changes: the review process should be as fast as possible and free of roadblocks.

There are a couple of services out there that provide a great preview deployment feature, such as Netlify and Vercel. If your org is using other services to deploy and host your project, you can easily integrate with those just to use the preview deployment feature, or you can even implement your own! I plan on publishing a blog post about how I built such a service with Google Cloud for my team.
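To give you an idea of what this can look like, here's a minimal sketch of a preview deployment workflow built on the Netlify CLI. Keep in mind this is just an illustration: the secret names (`NETLIFY_AUTH_TOKEN`, `NETLIFY_SITE_ID`) and the `build` output directory are assumptions you'd adapt to your own setup.

```yml
name: Preview Deployment

on:
  pull_request:
    branches:
      - "main"

jobs:
  preview:
    runs-on: ubuntu-20.04
    steps:
      - name: Checkout Commit
        uses: actions/checkout@v2
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Install Dependencies
        run: yarn install --non-interactive
      - name: Build UI
        run: yarn build
      - name: Deploy Preview
        run: npx netlify-cli deploy --dir=build # A draft deploy: Netlify prints a unique preview URL for this change
        env:
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
```

Without the `--prod` flag, `netlify deploy` creates a draft deployment with its own URL, which is exactly the "independent unit of change" behavior you want for PR reviews.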

Releases

The last thing we want to automate is the release process. You do not want to have to run 20 scripts, manually, in a specific order, to get your application from your main branch to production. For this, I tend to favor having what I call a release branch in my Github repository, and have the automated scripts run every time the main branch is merged into the release branch. You could also run the automated scripts on other events, such as when you tag a release, or you can even have scheduled deployments if your organization has a consistent release cadence. Ultimately, it depends on your team or your organization and how and when you want to do your releases.

Here's a sample GitHub Action that runs a script (a placeholder in this case, you will have to replace it with your own) following a push event on a release branch:

Example of Release Github Workflow

yml
name: Build and Deploy to Production

on:
  push:
    branches:
      - "production" # Any push on the production branch will trigger this workflow

jobs:
  build-and-deploy:
    runs-on: ubuntu-20.04
    strategy:
      matrix:
        node: [12.x] # If your app or package needs to be built on multiple versions of node, you can specify multiple versions here and your workflow will be run on each one of them
    steps:
      - name: Checkout Commit
        uses: actions/checkout@v2 # On a push event, checkout defaults to the pushed commit; there is no pull_request ref to point to here
      - name: Use Node.js ${{ matrix.node }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node }}
      - name: Install Dependencies
        run: |
          yarn install --non-interactive
      - name: Build UI
        run: yarn build
        env:
          NODE_ENV: production
      - name: Deploy to production
        run: yarn deploy:production
        env:
          SOME_TOKEN_TO_DEPLOY: ${{ secrets.MY_PRODUCTION_TOKEN }} # Never expose tokens! Github has a very handy secrets feature that can store your tokens securely, and allows them to be used in any workflow!

Another essential point regarding releases is that, once you automate them, you should release as often as possible. Increasing the cadence of production deployments limits the scope of each deployment, which in turn limits the number of issues that could impact your users. On top of that, you can add feature flags to allow a slow rollout of a big new feature. This helps you mitigate any potential problems that a massive change could create once deployed to production, and also gives you even more control over the release of a new feature. I especially like feature flags because they also provide a better experience for the end user: rollouts are smoother and can be more targeted. You may only want to enable a given feature for a subset of users before making it generally available.
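The core of a percentage-based rollout is surprisingly small. Here's a sketch of what such a check could look like on the frontend; the `Flag` type, the hashing scheme, and the function names are all hypothetical, not the API of any real feature-flag service:

```typescript
type Flag = {
  name: string;
  rolloutPercent: number; // 0-100: the fraction of users who see the feature
};

// Deterministic hash: maps a user id to a bucket in [0, 100).
// The same user always lands in the same bucket, so their experience
// stays stable as you gradually raise rolloutPercent.
function bucket(userId: string): number {
  let hash = 0;
  for (let i = 0; i < userId.length; i++) {
    hash = (hash * 31 + userId.charCodeAt(i)) % 100_000;
  }
  return hash % 100;
}

function isEnabled(flag: Flag, userId: string): boolean {
  return bucket(userId) < flag.rolloutPercent;
}

// Example: start by rolling "new-checkout" out to 20% of users,
// then raise rolloutPercent once monitoring looks healthy.
const newCheckout: Flag = { name: "new-checkout", rolloutPercent: 20 };
const showNewCheckout = isEnabled(newCheckout, "user-42");
```

In a real setup the flag values would come from a config service or a vendor SDK rather than being hardcoded, so you can change a rollout percentage without redeploying.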

Conclusion

This article contains all the concepts, tools, and knowledge I use daily to ship software without breaking a sweat. I know it's pretty dense and there's a lot to take in, but if you implement each of these steps and concepts in your project, I can assure you that it will enable you, your team, and your organization to do the best work you've ever done.

Below you'll find a couple of extra links that I found useful when learning about tests and CI/CD. Some of them are blog posts, some are classes; I found them all very valuable and I'm sure they will help you in your journey to build a high-velocity development environment and make you and your team unstoppable.

Resources:


Do you have any questions, comments or simply wish to contact me privately? Don’t hesitate to shoot me a DM on Twitter.


Have a wonderful day.
Maxime


© 2020 Maxime Heckel —— Made in SF. Polished in NY.