
Testing API Changes vs. API Linting


If the goal is to ship high-quality, well-designed, reliable APIs to our consumers, I don't think linting our OpenAPI descriptions is enough to get us there. After months of helping companies like Snyk and SendCloud go API-first, I'm ready to make a case for replacing linting with a tool that tests API changes.

The most critical moments in an API’s lifecycle are the points when it is about to change. A breaking change will ruin a consumer’s day. A poorly designed interface, once shipped, cannot be changed and will become a permanent part of the API. Each set of API changes has the potential to hurt our consumers, limit our future options, or (when everything goes right) ship a great improvement.

The API changes themselves are what we should be testing, because the current set of changes being reviewed, not the rest of the API, is where all the real opportunities for improvement are. A linter will tell us we should follow our pagination standard everywhere, but that’s noise: we shouldn't break an existing API just to make those checks pass. If a tool reminds us to use pagination in the new API we’re about to release, that has impact.

When you run Optic CI, it first computes the effective diff between the working copy of your OpenAPI file and the version on your default branch. You can then write tests that are triggered by different kinds of API changes, e.g. a property was added, an operation was removed, or a parameter became required. You can check everything a linter can, but you can also do much more.
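
For example, the pagination reminder from earlier can be expressed as a rule that only fires when an operation is newly added in the current changeset. The sketch below is illustrative rather than Optic's exact API: the operation.added hook and the shape of the operation object are assumptions, written in the same style as the breaking change rule shown further down.

operation.added('new list endpoints must support pagination', (operation) => {
  // Illustrative only: assumes the rule receives the added operation's method
  // and query parameters. Real hook and property names may differ.
  if (operation.method !== 'get') return
  const queryParams = (operation.queryParameters || []).map((p) => p.name)
  if (!queryParams.includes('cursor') && !queryParams.includes('page'))
    throw new RuleError('New GET operations that return lists must accept a cursor or page parameter')
})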

Read the docs here

Here's what a test that detects a breaking change looks like:

response.property.removed('prevent removing response properties', (property) => {
  if (property.required)
    throw new RuleError(`Removing required response property ${property.key} is a breaking change`)
})

Optic has published an entire suite of built-in breaking change checks here

Testing changes is not a contrarian idea; this approach is already common for other kinds of interfaces, particularly our databases. Database schemas are built from a series of migration files, each of which has to apply cleanly to the last state and pass some basic sanity checks.

Change is when everything important happens, so change is what we should concern ourselves with. That's where our automation can have the most impact.

Real-world examples

I asked some Optic users to compare which parts of their API standards they were able to automate with linters vs. by testing their changes with Optic CI, and to rank the impact of each kind of test on the quality and reliability of their APIs. Here are the results:

Feature comparison

| Comparison | Linting | Testing API Changes | Impact |
| --- | --- | --- | --- |
| Metadata rules: include summary, descriptions, formats, etc. | Yes | Yes | 1 - improved documentation quality |
| Naming rules for parameters, properties, etc.: consistent name cases, plural/singular path params, etc. | Yes | Yes | 2 - helped with consistency |
| Consistent style for every request/response: require certain status codes, pagination style, and response shape depending on the type of API | Basic style guide | Complete style guide | 4 - better API designs, easier to evolve |
| Breaking change rules: removing a response property, changing a response type | No | Yes | 5 - no more breaking changes |
| Versioning policy rules: semver used correctly, breaking changes require new versions | No | Yes | 5 - helps us iterate faster |
| Deprecation rules: removal allowed 90 days after deprecation, etc. | No | Yes | 4 - consumers trust us more |
| Apply different rules to new APIs vs. legacy APIs: legacy APIs cannot be changed to pass rules, so those warnings are false positives | No | Yes | 5 - we automate more standards because of this |
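
The deprecation and versioning rows are the ones a linter cannot express at all, because they depend on comparing two versions of the spec. As a hedged sketch, again with assumed hook and property names rather than Optic's exact API, a removal rule might look like:

operation.removed('operations must be deprecated 90 days before removal', (operation) => {
  // Assumed properties: `deprecated` and a custom `x-deprecated-on` date extension
  // read from the raw OpenAPI document. Real property names may differ.
  if (!operation.deprecated)
    throw new RuleError('Mark an operation as deprecated before removing it')
  const deprecatedOn = new Date(operation.raw['x-deprecated-on']).getTime()
  const ninetyDays = 90 * 24 * 60 * 60 * 1000
  if (Date.now() - deprecatedOn < ninetyDays)
    throw new RuleError('Operations can only be removed 90 days after being deprecated')
})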

Human in the loop

We cannot automate everything, and we probably do not want to. Automated testing is useful not because it replaces a review from our team, but because it focuses that review. We no longer have to talk about versioning, breaking changes, or API standards; we can talk about the product we're building, the needs of our users, and the best way to meet them. I've seen the impact on the teams we work with, and I hope you will too.