TLDR: Code audits help teams determine how well a product functions across a wide variety of use cases, and help ensure the health and stability of the product.
Independent code audits have become a popular way to ensure quality and security in software products. An outside professional opinion of design and implementation, based on the actual code and build process, greatly enhances quality and security and confirms that high development standards were followed. At Yeti we regularly consult with our clients to determine overall code health. Here’s how.
Yeti audits code for products at any stage, whether still in progress or finished and functioning. When a client brings us their product for an independent code audit, we first use the existing documentation to get it running in whatever environment it was designed for. During this process, we dig into the product’s setup, deployment, and code documentation, which gives us an idea of the quality of the software from the get-go.
Here’s a bit of insight from James Hague’s blog post Organization skills beat algorithmic wizardry: “When it comes to writing code, the number one most important skill is how to keep a tangle of features from collapsing under the weight of its own complexity.”
What does this mean for a code audit?
As a product grows, so does its codebase, and that code needs to be organized. Is the code readable, and if so, what small chunks and functions make it so? We pull apart the repository and look for the distinct packages the code is divided into. Once these packages have been identified, we ask whether they carry proper documentation and comments, or are simple enough to be self-explanatory. Ultimately, a developer’s code needs to be crafted not just for the machine running it, but also for the future developer who will have to read it and make changes to it.
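To make the readability point concrete, here is a contrived sketch of the kind of small, self-documenting function we like to find when pulling a codebase apart. The function name, logic, and numbers are invented for illustration and don’t come from any client code.

```python
# Hypothetical example: a small, intention-revealing function that a future
# developer can understand at a glance, without tracing the whole codebase.

def apply_discount(price: float, discount_rate: float) -> float:
    """Return the price after applying a fractional discount (e.g. 0.15 = 15%)."""
    if not 0.0 <= discount_rate <= 1.0:
        raise ValueError("discount_rate must be between 0 and 1")
    return round(price * (1.0 - discount_rate), 2)

print(apply_discount(100.0, 0.15))  # 85.0
```

A function this small documents itself: the name, the docstring, and the guard clause together tell the next developer exactly what it does and what inputs it tolerates.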
Once we have mapped out how the source code is organized, we look for tests and examine what they actually test. Are there integration tests, unit tests, or both? While integration tests make sure high-level flows through the product work, unit tests break the product down and scrutinize its smallest testable parts. For instance, you can test an entire car engine by turning on the car, or you can break it down and test each individual piston or spark plug.
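The car analogy above can be sketched in code. The `Engine` and `SparkPlug` classes here are hypothetical stand-ins, but they show the distinction we look for: one test scrutinizes the smallest part in isolation, the other turns the whole thing on.

```python
# A minimal sketch of the unit vs. integration distinction.
# These classes are invented purely for illustration.

class SparkPlug:
    def fire(self) -> bool:
        return True  # a real plug could fail; simplified here

class Engine:
    def __init__(self, plugs):
        self.plugs = plugs

    def start(self) -> bool:
        # the engine starts only if every plug fires
        return all(plug.fire() for plug in self.plugs)

# Unit test: the smallest testable part, checked in isolation.
def test_spark_plug_fires():
    assert SparkPlug().fire()

# Integration test: turn the whole car on and verify the high-level flow.
def test_engine_starts():
    assert Engine([SparkPlug() for _ in range(4)]).start()
```

A healthy codebase usually needs both kinds: unit tests pinpoint which part broke, while integration tests catch failures that only emerge when the parts interact.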
To understand how much of the code the tests exercise, we look at the percentage of code executed by the test suite, also known as “code coverage”. Coverage varies by programming language and project: at Google, for instance, programmers see code coverage ranging from 56.5% (C++) up to 84.2% (Python). In most cases 100% coverage is not feasible, because some parts are simply hard to test, but on smaller projects we generally aim for 80 to 90%. By analyzing coverage results, we can gauge with reasonable confidence how reliable and effective the product’s code is.
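As a back-of-the-envelope illustration of where that percentage comes from: coverage is simply the executable lines your tests ran, divided by all executable lines. The numbers below are made up to mirror the figures mentioned above.

```python
# Illustrative only: real coverage tools count executed lines automatically;
# this just shows the arithmetic behind the reported percentage.

def coverage_percent(executed_lines: int, total_lines: int) -> float:
    """Return test coverage as a percentage, rounded to one decimal place."""
    if total_lines <= 0:
        raise ValueError("total_lines must be positive")
    return round(100.0 * executed_lines / total_lines, 1)

print(coverage_percent(842, 1000))  # 84.2
print(coverage_percent(565, 1000))  # 56.5
```

In practice a tool such as coverage.py produces this figure for you: `coverage run -m pytest` executes the suite while recording which lines ran, and `coverage report` prints the percentage per file.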
When sufficient testing has ensured satisfactory code coverage, we look at overall code quality.
Is the code written in a clean and concise way, so current and future developers can understand what each function does? Is it using the latest version of the programming language or framework? Are there comments in the code explaining the more complicated sections? Did the original programmers follow the proper style guides and conventions for the language they were writing in? These questions help us sniff out bad or “rotting” code, the kind that results from outdated practices and accumulated neglect.
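Here is a contrived before-and-after showing the sort of “rot” these questions surface. Both functions behave identically; the names and the scenario are invented, and only the readability differs.

```python
# Before: a terse name, no documentation, and an unexplained magic number.
def f(x):
    return [i for i in x if i > 18]

# After: intention-revealing names, a docstring, and the magic number given a name.
ADULT_AGE = 18

def filter_adults(ages: list) -> list:
    """Return only the ages strictly above the adult threshold."""
    return [age for age in ages if age > ADULT_AGE]

print(filter_adults([12, 19, 25]))  # [19, 25]
```

The behavior is unchanged, but the second version answers the reviewer’s questions on its own: what the function does, why 18, and what the inputs and outputs are.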
Throughout the code audit, we reference the original developer’s Issue Tracking System (ITS). This backlog of information isn’t always available, but if it is, it will provide valuable details about defects and enhancements in the original code.
There is more to understand before judging the success of a product’s code. For example, what are our client’s plans for pushing the product live? Here we offer consultation on best practices in continuous integration and code review. Is the code written in a legacy or dated programming language? If so, we work with the client to put frameworks in place that keep the codebase maintainable. Does the application store data? If so, we look at the database design and SQL queries to determine whether the data design is normalized, deliberately denormalized where specific use cases call for it, and generally well structured.
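The normalization check can be sketched with a tiny in-memory SQLite database. The schema here is hypothetical: the point is that orders reference customers by key rather than repeating customer details on every row, so a customer’s data lives in exactly one place.

```python
# A minimal sketch of a normalized design, using Python's built-in sqlite3.
# Table and column names are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        total       REAL NOT NULL
    );
    INSERT INTO customers VALUES (1, 'Ada');
    INSERT INTO orders VALUES (100, 1, 42.50), (101, 1, 9.99);
""")

# Because the design is normalized, renaming a customer touches one row,
# and a join recovers the combined view when a use case needs it.
row = conn.execute("""
    SELECT c.name, COUNT(o.id)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id
""").fetchone()
print(row)  # ('Ada', 2)
```

During an audit we look for the opposite smell as well: customer names copied onto every order row, which makes updates error-prone, unless the duplication was a deliberate denormalization for a measured performance need.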
The goal of an independent code audit is to reveal vulnerabilities and to translate audit findings into a recommended course of action. This systematic examination reveals mistakes overlooked in the original development phase. Whether we are looking at pre-launch code or older code in need of a face lift, an independent code audit allows our clients to be comfortable in the overall health of their product, and walk away with a clear plan.