Everyone who writes code knows that the best code is code that is never written! But as they say, nobody is perfect, and we have to write code. So how do you improve your code, and then monitor its quality across the project? This question is constantly asked by people who care about code quality. There are many OSS solutions for code-quality analysis on the market, but in this blog post, I'll tell you about integrating a free and very handy tool. Meet Code Climate!
Since I write code in VSCode using Remote-Containers, I'll show you how to connect these great tools together and get the benefit of both.
A little background, how did I come to this?
I write code, the linter says: "Everything is fine, don't worry, be happy", and everything seems good. But there is a nagging thought in your head: "The linter does not see all the problems! There are issues with duplication and structure in my code." "I'll fix them later," I tell myself, and it calms me down a bit. But the questions remain:
- How difficult is it to understand the code? (Cognitive Complexity)
- How confusing is the branching logic of the conditions? (Cyclomatic Complexity)
- How much duplicated code is there? (Duplication)
- How well can my code be maintained? (Maintainability)
I began searching the Internet for solutions in the code-analysis niche, but then I remembered Code Climate from past experience and decided to try it. I added my OSS project to the Code Climate dashboard. Previously, I had no way to configure code analysis locally, but now I do, so I returned to this idea again.
Well, the results did not keep me waiting: the repo received its first grade, a yellowish "C", since I had 17 places with code duplication and 2 code smells. In general, the grades in descending order of code quality are A, B, C, D, F.
Then, after removing the piece of code with “code smell”, the project received the quality mark B.
And after refactoring duplicate parts of the code, the project received an A badge.
What steps are included in the build process on codeclimate.com?
- Repo cloning
- Validation of your .codeclimate.yml, if it is found in the root of your project. I don't have one, because the default checks are enough for me.
- Running the commands from the prepare block of your .codeclimate.yml.
- Downloading the default analysis-tool images, or the ones specified in your config.
- Running each analysis tool: in this case structure and duplication, using the default images.
Code Climate gave me a report with its metrics, which assign your project a symbolic grade denoting the quality of the code.
I was very surprised that for each problem found, they calculate the time required to fix it. This is very cool! And by the way, the quality grade is based precisely on the total time needed to fix all the problems. In a project with a large number of lines of code, that time can grow and the grade can fall as low as F. You should start measuring the quality of your code as early as possible and correct the situation before it reaches the point of absurdity (when all your work has to be thrown away and everything rewritten from scratch).
How to add a codeclimate icon to your repo?
You need to go to the following URL https://codeclimate.com/github/name_of_organization_or_your_nick/repository_name/badges
Or go to your repository page on codeclimate -> Repo Settings -> Badges.
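For reference, the Markdown snippet that page gives you looks roughly like this (the badge token, organization, and repository names below are placeholders, not real values):

```markdown
[![Maintainability](https://api.codeclimate.com/v1/badges/<badge_token>/maintainability)](https://codeclimate.com/github/<org>/<repo>/maintainability)
```

Paste it into your README.md and the badge will reflect the current grade of the default branch.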
Key quality parameters used by codeclimate
- Churn — after Code Climate finishes analyzing your default branch, you will see churn metrics for the files in your repository. Churn is approximately the number of times a file has changed in the last 90 days, calculated by counting the distinct versions of that file across commits in that period. Quality issues have a greater impact on files that churn frequently, so high-churn, low-quality files should be your top candidates for attention.
- Cognitive Complexity — is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and understand.
- Cyclomatic Complexity — sometimes referred to as McCabe’s complexity, is a count of the linearly independent paths through source code. You can also think of this more simply as “the number of decisions a given block of code needs to make”.
- Duplication — a sequence of source code that occurs more than once, either within a program or across different programs owned or maintained by the same entity. Duplicate code is generally considered undesirable for a number of reasons. A minimum requirement is usually applied to the quantity of code that must appear in a sequence for it to be considered duplicate rather than coincidentally similar. Sequences of duplicate code are sometimes known as code clones or just clones; the automated process of finding duplication in source code is called clone detection.
- Maintainability — is an estimate of technical debt in the repo based on a standardized 10-point assessment of Duplication, Cyclomatic Complexity, Cognitive Complexity, and structural issues.
What languages are supported?
The following languages are supported:
Pretty impressive list! Well done guys! All popular languages are represented here.
Service is good, but local code analysis is a better way
A linter for the IDE was created to help, not to punish. In the same spirit, you need an assistant that can tell you about the problems in your code from another angle: structure and duplication. As you might expect, such checks are complex and expensive in terms of resources and time, so they cannot be run as often as a linter (practically on the fly): a really good analysis has to look not at one line or one file, but at the entire project.
And for this purpose, it is possible to run codeclimate locally in Docker containers for your project.
Since I am developing in VSCode + Remote-Containers, this is not difficult.
Running the codeclimate service locally is real!
Below, I will walk step by step through the process of starting the codeclimate service locally together with VSCode.
Code Climate has one drawback: the Docker images of the structure and duplication tools.
Each of these images weighs ~1.9 GB compressed and takes up 5 GB unpacked. At first, the image sizes scared me, but the Docker images are reused, so there is nothing to be afraid of, and all the tools are launched as temporary containers. Many people complain that it would be great to split the images by language, but so far all the analysis tools are bundled together.
So, first of all, you should think about free space on your hard drive.
To download images, you must run the following commands on your host machine:
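A sketch of those commands, assuming the default structure and duplication engines (the image tags below are assumptions and may have changed since this was written):

```shell
# Pull the two analysis-engine images one after another,
# so their shared layers are downloaded only once
docker pull codeclimate/codeclimate-structure
docker pull codeclimate/codeclimate-duplication

# Then pull the CLI wrapper image itself
docker pull codeclimate/codeclimate
```

Run these on the host machine, not inside the dev container.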
And do not worry! The most important thing is to execute these commands one after another. After downloading the first image, codeclimate-structure, you will have pulled 1.9 GB; since Docker images are layered, when you then download codeclimate-duplication, only the missing layers are fetched, which amounts to just ~100–200 MB, which is very nice. In reality, you download not 4 GB from the Internet but only ~2 GB in total. Finally, download the codeclimate image itself, and the first stage of preparation is complete.
Configure the .devcontainer folder
We need to change the Dockerfile, since we have to put the Docker client inside the container, and we also need to add settings to runArgs in the devcontainer.json file.
Let’s start with .devcontainer/Dockerfile and add the installation instructions of the docker client.
You can have a look at the general view of the Dockerfile, after adding the docker-ce installation step, here.
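As a sketch, the added step might look like this, assuming a Debian-based dev-container image (the exact base image and package set in the repo may differ):

```dockerfile
# Install the Docker CLI so the dev container can talk to the host daemon.
# Assumes a Debian-based image; adjust the repository URL for Ubuntu etc.
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        apt-transport-https ca-certificates curl gnupg2 \
        lsb-release software-properties-common \
    && curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - \
    && add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" \
    && apt-get update \
    && apt-get install -y --no-install-recommends docker-ce-cli
```

Only the client (docker-ce-cli) is needed inside the container; the daemon itself stays on the host.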
Now we need to give the VSCode development container access to the Docker daemon running on our host machine. This is done by forwarding the daemon's socket as a volume from the host into the VSCode dev container. You need to add the following setting to the runArgs parameter inside the .devcontainer/devcontainer.json file.
You can also immediately set the environment variable CONTAINER_TIMEOUT_SECONDS=1800, which increases the time any of the codeclimate tools is allowed to run.
Next, you need to specify the command that installs and configures the codeclimate wrapper after our VSCode container is created.
These commands must be executed after the VSCode container is created because, during installation, they need access to the Docker daemon through its socket, which is not available at the build stage.
The full version of .devcontainer/devcontainer.json can be seen here.
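Pulling the pieces above together, the relevant fragment of .devcontainer/devcontainer.json might look like this (the npm script name is an assumption for illustration):

```json
{
  "runArgs": [
    "-v", "/var/run/docker.sock:/var/run/docker.sock",
    "-e", "CONTAINER_TIMEOUT_SECONDS=1800"
  ],
  "postCreateCommand": "npm install && npm run codeclimate:install"
}
```

The socket volume gives the container's Docker client access to the host daemon, and postCreateCommand runs only after the container exists, which is exactly when the socket becomes available.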
Installing the codeclimate wrapper into the container
First you need to write a command that will help install the codeclimate wrapper.
I added a scripts folder in which I put everything I needed.
In this folder, there is a script for installing a wrapper:
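The upstream way to install the wrapper is to build it from the codeclimate CLI repository; my script is essentially a guarded version of that. A minimal sketch (the script path is an assumption):

```shell
#!/bin/sh
# scripts/codeclimate/install.sh (hypothetical path)
# Builds and installs the codeclimate wrapper script from source.
# Requires git, make, and access to the Docker daemon.
set -e
git clone https://github.com/codeclimate/codeclimate.git /tmp/codeclimate
cd /tmp/codeclimate
make install   # installs the wrapper to /usr/local/bin/codeclimate
```

After this, the `codeclimate` command is available in the container and delegates the actual work to the Docker images pulled earlier.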
But before you run this script, check that the codeclimate images are already present on the system; otherwise, the installation will drag on for many minutes or even hours, depending on your Internet speed. Once again I remind you that the images are not small: ~2 GB in total.
This script runs a check that determines whether the command is being executed inside Docker. I used a simple check for the presence of the /.dockerenv file, and this works fine.
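The check itself can be factored into a tiny helper; making the marker path a parameter keeps it testable outside a container (the function name here is mine, not from the repo):

```shell
# in_container: succeeds (exit 0) if the Docker marker file exists.
# The path defaults to /.dockerenv but can be overridden for testing.
in_container() {
  [ -f "${1:-/.dockerenv}" ]
}

# Example guard at the top of an install script:
# in_container || { echo "Run this inside the dev container" >&2; exit 1; }
```

On the host, /.dockerenv does not exist, so the guard stops the script before it can do anything expensive.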
As a result, the following command was added to package.json:
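A minimal sketch of that package.json entry (the script path is an assumption matching the earlier examples):

```json
{
  "scripts": {
    "codeclimate:install": "sh ./scripts/codeclimate/install.sh"
  }
}
```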
Now you can use the npm run codeclimate:install command to install the wrapper.
Run code analysis
Well, now we come to the most important part, the goal of the whole article: launching the code analysis. You can get a report in three formats: html, json, and text. The most convenient, of course, is the html report: you can open it right away in the browser and see the problems visually. The other formats are useful for feeding your own tools.
But in order to run the code analysis successfully inside the VSCode dev container, you need to know the real path of the repository folder on the host. Codeclimate works by launching temporary tool containers and mounting the source-code folder from the host machine into them, so paths that are valid only inside the VSCode container will not work.
To determine the real path to the project's source folder on the host system, I wrote the following script, located in the scripts/volumes folder:
This script iterates over all the running containers (it uses "docker ps") and collects the Mounts parameters of each one. It then filters the output by the string "mad-fake-slack" to find the JSON describing the mounted volume folders, whose Source field holds the real path. If a mount path mentioning mad-fake-slack is found, it is printed to standard output via console.log(); otherwise the script exits with code 1, which interrupts the chain of commands.
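The same lookup can be sketched in shell: `docker inspect` prints each container's mount sources, and a small filter picks the one belonging to the project (the grep-based filter below is my simplification of the Node script, not the repo's actual code):

```shell
# find_project_mount: reads mount source paths on stdin and prints the
# first one containing the given project name; fails if none is found.
find_project_mount() {
  grep -m 1 "$1"
}

# Typical use (requires access to the host's Docker daemon):
# docker inspect --format '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' $(docker ps -q) \
#   | find_project_mount mad-fake-slack
```

Like the Node version, the filter exits non-zero when no matching mount exists, so a chained command stops instead of running the analysis against a wrong path.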
As a result, to launch the analysis, the following command was created:
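A sketch of such a command, following the invocation documented in the codeclimate CLI README (the reports path and the call to the path-lookup script are assumptions for illustration):

```shell
# Resolve the repository's real path on the host, then run the analysis.
# Aborts if the mount lookup fails (exit code 1 from the script).
REPO_DIR="$(node scripts/volumes/folder)" || exit 1
mkdir -p reports
docker run --rm \
  --env CODECLIMATE_CODE="$REPO_DIR" \
  --env CONTAINER_TIMEOUT_SECONDS=1800 \
  --volume "$REPO_DIR":/code \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --volume /tmp/cc:/tmp/cc \
  codeclimate/codeclimate analyze -f html \
  > "reports/$(date +%Y-%m-%d_%H-%M-%S).html"
```

CODECLIMATE_CODE and the /code volume must both point at the host path, which is exactly why the lookup script is needed inside the dev container.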
The result of this command is an html report, saved in the reports folder (a folder created specifically for storing reports).
Since I immediately wanted to make it possible to receive reports in different formats, the following commands were added:
Now, thanks to the codeclimate:analyze command, you can set the environment variable REPORT_FORMAT=html, REPORT_FORMAT=text, or REPORT_FORMAT=json before the command and get the report in the specified format.
REPORT_FORMAT=html npm run codeclimate:analyze # html report saved to the reports folder, named with the current date and time
REPORT_FORMAT=json npm run codeclimate:analyze # the same, but in json format
REPORT_FORMAT=text npm run codeclimate:analyze # the same, but in text format
The report is as follows:
The report includes links to explanations for every problem found, so that even an inexperienced developer can understand them and, if desired, fix them. In short, the report is very informative.
In this way, you can set up local runs to analyze the quality of your code. If you develop in Docker using VSCode, this will not be a problem for you; on the contrary, it will help you be more confident that you are meeting generally accepted code-quality standards.
Thanks for reading this article! Everything described here I actively use in my OSS project mad-fake-slack.