
Best practices for Web application maintenance

Published at 12:00 AM

How not to throw away your application every two years?

Feedback based on best practices applied to the web platform developed at Bedrock Streaming

A bit of context

At Bedrock Streaming, many teams develop and maintain frontend applications for our customers and users. Some of those applications are not very young. In fact, the application I mainly work on is a website whose development started in 2014. I have already mentioned it in several articles on this blog.

Screenshot: 15,668 commits on the master branch of our project

You might think: “Oh, poor people, maintaining an almost 10-year-old application must be hell!”

Don’t worry, it’s not the case! I have worked on much younger projects where developing new features was far more painful.

Today the project is technically up to date: we are on the latest version of React, even though it started on a 0.x.x version. In this often-criticized world of web technologies (e.g. the many articles on JavaScript Fatigue), whose tools and practices are constantly evolving, keeping a project “up to date” remains a real challenge.

Screenshot: the 1,445 versions of the application

Moreover, in almost 10 years, this project has had about 100 contributors. Some stayed only a few months or years. How can we retain as much knowledge as possible about “how we do things and how it works” in such a changing human context?

list of the 100 contributors of the project

This is what I would like to present to you.

With the help of my colleagues, I have collected the list of good practices that still allow us to maintain this project today. Florent Dubost and I often thought it would be interesting to publish it. We hope you will find it useful.

Set rules and automate them

A project that stands the test of time is first and foremost a set of knowledge that is stacked one on top of the other. It’s like the Kapla tower you used to build as a child, trying to get as high as possible. A solid base on which we hope to add as much as possible before a potential fall.

From the beginning of a project, we have to make important decisions about “How do we want to do things?” We think, for example, about “What format for our files? How do we name this or that thing?” Writing accurate documentation of “how we do things” might seem like a good idea.

Documentation is great, but it tends to become outdated very quickly. Our decisions evolve, but documentation does not.

“Times change but not READMEs.”

Olivier Mansour (deputy CTO at Bedrock)

Automating the checking of each rule we impose on ourselves (on our codebase or our processes) is much more durable. To put it simply, we avoid saying “we should do things like that” as much as possible, and prefer “we’ll code something that checks it for us”. On top of that, on the JS side we are really well equipped, with tools like ESLint that allow us to implement our own rules.

So here is the reflex we try to adopt: as soon as a rule is decided, we automate its checking.

A project’s Continuous Integration is the perfect place to make sure nothing is missed on any Pull Request. Reviews become easier, because you no longer have to worry about all the rules that are already checked automatically. In this model, the review is more about knowledge sharing than about hunting typos and other breaches of the project’s conventions.

Following this principle, we must try to banish oral rules. The time of the druids is over: if all the good practices of a project have to be transmitted orally, it will only take longer to onboard new developers onto your team.

Panoramix’s magic potion recipe is lost because it was kept secret

A project is not set in stone; these rules evolve over time. It is therefore preferable to add rules that come with a script that can autofix the whole codebase intelligently. Many ESLint rules offer this, and it is a very important selection criterion when choosing new conventions.

eslint --fix

A very strict rule that forces you to modify your code manually before each push is annoying in the long run, and will annoy your teams. Whereas a rule (even a very strict one) that auto-fixes itself at commit time will not be seen as a burden.
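
To give an idea, auto-fixing at commit time can be wired up with tools like husky and lint-staged. This is a hypothetical sketch: the article does not say which tooling Bedrock actually uses for this step.

```javascript
// .lintstagedrc.js — run auto-fixable checks on staged files only.
// husky + lint-staged are assumed tooling here, not necessarily Bedrock's setup.
module.exports = {
  // Re-format and lint staged JS files; lint-staged re-stages the fixes.
  "*.{js,jsx}": ["prettier --write", "eslint --fix"],
  "*.{json,md}": ["prettier --write"],
};
```

With a husky `pre-commit` hook calling `lint-staged`, the fixes happen silently at commit time instead of being a manual chore before each push.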

How do we decide to add new rules?

This question may seem thorny; take for example the case of <tab> vs <space> in files. To avoid endless debates, we try to follow the trends and rules of the community. For example, our ESLint configuration base is built on Airbnb’s, which seems to have some success in the JS community. But if the rule we want to impose on ourselves is not available in ESLint or other tools, we sometimes prefer not to adopt the rule at all rather than say “we’ll do it without a CI check”.
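
Concretely, such a configuration might look like the sketch below. The specific rules are illustrative, not Bedrock’s real configuration:

```javascript
// .eslintrc.js — a minimal sketch: the community base plus project conventions.
module.exports = {
  extends: ["airbnb"],
  rules: {
    // A hypothetical project convention the CI enforces for us:
    "import/no-default-export": "error",
    // Prefer auto-fixable rules, so `eslint --fix` can migrate the codebase.
    "prefer-const": "error",
  },
};
```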

The almost exhaustive list 🤞

Our continuous integration workflow

Test, test, test

I hope that in 2021 it is no longer necessary to explain why automatically testing your application is essential to making it sustainable. In JS, we are rather well equipped with testing tools today. However, the eternal question remains:

“What do we want to test?”

If we search the internet for this question, we see that different needs give rise to very different testing practices and tools. It would be very presumptuous to think there is one right way to automatically test your application. This is why it is preferable to define one or more test strategies that meet defined, limited needs.

Our test strategies are based on two distinct goals:

To do this, we perform two “types of tests” that I propose to present here.

Our E2E tests

We call them “functional tests”: end-to-end (E2E) tests built on a very efficient technical stack composed of CucumberJS and WebdriverIO with ChromeHeadless. This stack was set up at the beginning of the project (with PhantomJS at the time, for the oldest among you).

This stack allows us to automate tests that drive a browser. The browser performs actions as close as possible to those of our real users, while checking how the site reacts.

A few years ago, this technical stack was rather complicated to set up, but today it is quite simple. The site that hosts this blog post is itself proof of that: it only took me about ten minutes to set up this stack with the WebdriverIO CLI to verify that my blog works as expected.

I recently published an article presenting the implementation of this stack.

So here is an example of an E2E test file to give you an idea:

Feature: Playground

  Background: Playground context
    Given I use "playground" test context

  Scenario: Check if playground is reachable
    When As user "toto@toto.fr" I visit the "playground" page
    And I click on "playground trigger"
    Then I should see a "visible playground"
    And I should see 4 "playground tab" in "playground"

    When I click on "playground trigger"
    Then I should not see a "visible playground"

    # ...

And this is what it looks like locally with my Chrome browser!

Example of functional test execution

Here is a diagram that explains how this stack works:

diagram that explains how our stack works

Today, Bedrock’s web application has over 800 E2E test cases running on each of our Pull Requests and on the master branch. They assure us that we are not introducing any functional regressions, and that’s just great!

👍 The positives

👎 The complications

Our “unit” tests

To complement our functional tests, we also have a stack of tests written with Jest. We call these tests unit tests because our principle is to always test our JS modules independently from one another.

Let’s not debate whether these are “real” unit tests; there are enough articles on the internet about that topic.

We use these tests for different reasons, to cover needs that our functional tests do not address:

With these tests, we put ourselves at the level of a utility function, a Redux action, a reducer, or a React component. We rely mainly on Jest’s automock functionality, which allows us to isolate our JS modules when we test them.

visual representation of the automock

The previous image represents the metaphor that allows us to explain our unit testing strategy to newcomers.

You have to imagine that the application is a wall made of unit bricks (our ECMAScript modules). Our unit tests must test the bricks one by one, in total independence from the others, while our functional tests test the cement between the bricks.

To summarize, we could say that our E2E tests check what our application should do, and our unit tests check how it works.

Today, more than 6,000 unit tests cover the application and help limit regressions.

👍

👎

Our principles

We always try to follow these rules when asking ourselves “Should I add tests?”:

  1. If our Pull Request introduces new user features, we need to add E2E test scenarios. Unit tests with Jest can complement or replace them where appropriate.
  2. If our Pull Request aims to fix a bug, it means that we are missing a test case. We must therefore try to add an E2E test or, failing that, a unit test.

It is while writing these lines that I think that these principles could very well be automated. 🤣

The project stays, the features don’t

“The second evolution of a feature is very often its removal.”

As a matter of principle, we want to make sure that no feature of the application is activated simply because its code is present in the codebase. Typically, the lifecycle of a feature in the project can look like this (in a GitHub Flow):

To simplify some steps, we have implemented feature flipping on the project.

How does it work?

In our config, there is a key/value map that lists all the features of the application along with their activation status.

const featureFlipping = {
  myAwesomeFeature: false,
  anotherOne: true,
};

In our code, we have implemented conditional logic that says “if this feature is activated, then…”. This can change the rendering of a component, change the implementation of a Redux action, or disable a route in our react-router.
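
A minimal sketch of such a conditional (the helper and function names below are hypothetical, not Bedrock’s actual code):

```javascript
const featureFlipping = {
  myAwesomeFeature: false,
  anotherOne: true,
};

// "If this feature is activated then…" — centralised in one helper/selector.
function isFeatureEnabled(config, featureName) {
  return Boolean(config[featureName]);
}

// e.g. enabling a route only when the feature is on:
function getRoutes(config) {
  const routes = ["/home"];
  if (isFeatureEnabled(config, "anotherOne")) {
    routes.push("/another-one");
  }
  return routes;
}

console.log(getRoutes(featureFlipping)); // ["/home", "/another-one"]
```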

But what’s the point?

To give you a more concrete example: between 2018 and 2020, we completely overhauled the application’s interface. This graphical evolution was just a featureFlipping key. The graphical redesign was not a reset of the project; we still live with both versions (as long as the switchover of all our customers is not complete).

screenshot comparing v4 / v5 on 6play

A/B testing

Thanks to the great work of the backend and data teams, we were even able to extend the use of feature flipping by making this configuration modifiable for sub-groups of users.

This allows us to deploy new features to a smaller portion of our users in order to compare our KPIs.

Decision making, technical or product performance improvements, experimentation: the possibilities are numerous, and we exploit them more and more.

Future flipping

Based on an original idea by Florent Lepretre.

We regularly needed to activate features at very early hours in the future. Until then, someone had to be connected at a precise time to modify the configuration on the fly.

To avoid forgetting to do this, or doing it too late, we made it possible for a configuration key to be activated from a certain date. To do this, we evolved the Redux selector that indicates whether a feature is activated, so that it can handle date formats and compare them to the current time.

const featureFlipping = {
  myAwesomeFeature: {
    offpubDatetime: "2021-07-12 20:30:00",
    onpubDatetime: "2021-07-12 19:30:00",
  },
};
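
The selector evolution described above might look something like this. The `onpubDatetime`/`offpubDatetime` keys come from the article; everything else is a hypothetical sketch:

```javascript
// A feature is "on" between onpubDatetime and offpubDatetime.
// Plain booleans are still supported for ordinary flags.
function isFeatureEnabled(feature, now = new Date()) {
  if (typeof feature === "boolean") return feature;
  if (!feature) return false;
  const { onpubDatetime, offpubDatetime } = feature;
  if (onpubDatetime && now < new Date(onpubDatetime)) return false;
  if (offpubDatetime && now >= new Date(offpubDatetime)) return false;
  return true;
}

const featureFlipping = {
  myAwesomeFeature: {
    onpubDatetime: "2021-07-12 19:30:00",
    offpubDatetime: "2021-07-12 20:30:00",
  },
};

// Active only inside the publication window — no manual change required:
isFeatureEnabled(featureFlipping.myAwesomeFeature, new Date("2021-07-12 20:00:00")); // true
```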

Many coffees ☕️ at 9am have been saved by future flipping.

Monitor, Measure, Alert

To maintain a project as long-lived as Bedrock’s web application, testing, documentation, and rigor are not enough. You also need visibility into what is happening in production.

“How do you know that the application you have in production right now is working as expected?”

We assume that no functionality works until it is monitored. Today, monitoring on the frontend side at Bedrock takes the form of different tools and stacks. I could mention NewRelic, StatsD, an ELK stack, or even Youbora for video.

To give you an example, each time a user starts a browsing session, we send an anonymous monitoring hit to increment a counter in StatsD. We then define a dashboard that displays the evolution of this number as a graph. If we observe too large a variation, it can help us detect an incident.

example of monitoring dashboard

Monitoring also gives us the means to understand and analyze a bug that occurred in the past. Understanding an incident, explaining it, and finding its root cause are all possibilities open to you if you monitor your application. Monitoring can also allow you to communicate better with your customers about the impact of an incident, and to estimate the number of impacted users.

With the multiplication of our customers, monitoring our platforms well is not enough. Too much data and too many dashboards to watch: it becomes very easy to miss something. So we started to complement our metrics monitoring with automatic alerting. Once we have enough confidence in a metric, we can easily set up alerts that warn us when a value is inconsistent.

However, we try to trigger alerts only when they are actionable. In other words: if an alert goes off, we have something to do. Alerts that do not require immediate human action generate noise and waste time.

general alert

Limit, monitor and update your dependencies

In a web project based on JavaScript technologies, the thing that goes out of date faster than its shadow is your dependencies. The ecosystem evolves rapidly, and your dependencies can quickly become unmaintained, out of fashion, or completely overhauled by big breaking changes.

We therefore try as much as possible to limit our dependencies and avoid adding them unnecessarily. A dependency is often very easy to add, but it can become a real headache to remove.

Graphic component libraries (e.g. React Bootstrap or Material Design) are a good example of dependencies we do not want to introduce. They can make integration easier at first, but later on they often freeze the version of your component library. You don’t want to freeze your application’s React version for the sake of two form components.

Monitoring is also part of our dependency-management routines. Since NPM introduced security-flaw reporting for packages, a simple command can tell you whether a project has a dependency with a known security flaw. We therefore have daily jobs on our projects that run the yarn audit command, forcing us to apply patches.
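
A daily job like this typically parses the audit output and fails the build above a severity threshold. A sketch, assuming yarn classic’s `yarn audit --json` NDJSON format (one JSON object per line, advisories tagged `"auditAdvisory"`); the sample data is fabricated for illustration:

```javascript
// Severity levels in ascending order, as reported by yarn audit.
const SEVERITIES = ["info", "low", "moderate", "high", "critical"];

// Count advisories at or above a severity threshold in the NDJSON output.
function countBlockingAdvisories(auditJsonLines, threshold = "high") {
  const minLevel = SEVERITIES.indexOf(threshold);
  return auditJsonLines
    .split("\n")
    .filter(Boolean)
    .map((line) => JSON.parse(line))
    .filter((event) => event.type === "auditAdvisory")
    .filter((event) => SEVERITIES.indexOf(event.data.advisory.severity) >= minLevel)
    .length;
}

// Hypothetical sample output from `yarn audit --json`:
const sample = [
  '{"type":"auditAdvisory","data":{"advisory":{"severity":"critical"}}}',
  '{"type":"auditAdvisory","data":{"advisory":{"severity":"low"}}}',
  '{"type":"auditSummary","data":{}}',
].join("\n");

console.log(countBlockingAdvisories(sample)); // 1
```

In CI, a non-zero count would simply make the job exit with an error, forcing the team to apply the patches.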

Dependency maintenance is greatly facilitated by our E2E test stack which sounds the alarm if the version upgrade generates a regression.

Today, except for security flaws, we update our dependencies “when we have time”, often at the end of a sprint. We are not satisfied with this, because some dependencies can be forgotten. I personally use tools like yarn outdated and Dependabot on my personal projects to automate dependency updates.

Accepting your technical debt

A project will always accumulate technical debt. This is a fact. Whether the debt is voluntary or involuntary, a project that withstands the years will inevitably accumulate some. All the more so if, during all those years, you keep adding features.

Since 2014, our best practices and ways of doing things have evolved a lot. Sometimes we decided on these changes, and sometimes we endured them (one example: the arrival of functional components and the Hooks API in React).

Our project is not completely “state of art” and we assume it.

It will hold!

We try to prioritize our refactoring work on the parts of the application that cause us the most concern and the most pain. We consider that a part of the application we don’t like, but on which we don’t need to work (i.e. to bring evolutions), doesn’t deserve to be refactored.

I could name many features of our application that have not evolved functionally for several years. But since these features have been covered by E2E tests from the beginning, we haven’t really had to touch them.

As said above, the next evolution of a feature is sometimes its deactivation. So why spend time rewriting the whole application?

To summarize

The best practices presented here are obviously subjective and will not be perfectly or directly applicable in your context. However, I am convinced that they can help you identify what can make your project go from fun to stale. At Bedrock we have other practices in place that I haven’t listed here, but they will be the occasion for a new article sometime.

Finally, if you would like me to go into more detail on any of the chapters presented here, don’t hesitate to tell me; I could try to dedicate a specific article to it.

