7 Docker Anti-Patterns You Need to Avoid

Docker has reshaped software development with its simple model of containerization that lets you package workloads into reproducible, portable units. Although Docker is easy to pick up, using it well is less obvious than it might appear. This is especially true when you want to optimize your Docker usage to increase performance and efficiency.

Here are seven common Docker anti-patterns that you should look out for and avoid. While your containers and images may meet your immediate needs, the presence of any of these practices indicates you're deviating from the principles of containerization, which can hurt you further down the line.

1. Installing updates inside containers

Undoubtedly, the most common Docker anti-pattern is trying to update containers using techniques carried over from traditional virtual machines. Container filesystems are ephemeral, so any changes are lost when the container stops. A container's state should be reproducible from the Dockerfile used to build its image.

That means you shouldn't run apt upgrade inside your containers, as they'd diverge from the image they were created from. Containers are meant to be freely replaceable. By separating your data from your code and dependencies, you can swap out container instances whenever you need to.

Patches should be applied by periodically rebuilding your image, stopping existing containers, and starting new ones based on the revised image. Community toolchain projects are available to simplify this process and notify you of upstream updates.
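
As a rough illustration of this workflow (the image and container names here are placeholders):

    # Rebuild the image with the latest upstream patches
    docker build -t example-app:v2 .

    # Replace the running container with one based on the new image
    docker stop my-app
    docker rm my-app
    docker run -d --name my-app example-app:v2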

2. Running multiple services in one container

Containers should be independent and focused on a single function. Although you may previously have run your web and database servers on a single physical machine, a fully decoupled approach separates the two components into individual containers.

This approach keeps individual container images from becoming too large. You can inspect each service's logs using the built-in Docker commands and update each component independently of the others.

Multiple containers also give you better scalability, because you can easily increase the number of replicas of individual parts of your stack. Database running slowly? Add a few more MySQL container instances via your container orchestrator, without allocating any additional resources to the components that are already performing well.
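
As an illustration, here's a minimal Docker Compose sketch of a decoupled stack; the web image and volume names are hypothetical:

    # docker-compose.yml
    services:
      web:
        image: example-web:latest
        ports:
          - "80"                         # random host port, so replicas don't clash
        depends_on:
          - db
      db:
        image: mysql:8
        environment:
          MYSQL_ROOT_PASSWORD: example   # placeholder only; see the section on hard-coded config
        volumes:
          - db-data:/var/lib/mysql
    volumes:
      db-data:

With the services separated, a single command such as docker compose up -d --scale web=3 adds replicas of one component without touching the rest of the stack.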

3. Creating images with side effects

Docker image builds should be idempotent operations that always produce the same result. Running docker build shouldn't affect your wider environment in the slightest, since its sole purpose is to produce a container image.

However, many teams create Dockerfiles that manipulate external resources. A Dockerfile can gradually morph into an all-encompassing CI script that publishes releases, creates Git commits, and writes to external APIs or databases.

These actions don't belong in a Dockerfile. Building a Docker image is an independent operation that should have its own CI pipeline stage. Release preparation then happens as a separate stage, so you can always docker build without unexpectedly publishing a new release.
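
A minimal sketch of that separation, assuming a hypothetical registry and a GIT_COMMIT variable supplied by your CI system:

    # Pipeline stage 1: build only; no side effects beyond producing an image
    docker build -t registry.example.com/app:"$GIT_COMMIT" .

    # Pipeline stage 2: a separate, explicitly triggered release step
    docker push registry.example.com/app:"$GIT_COMMIT"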

4. Over-complicating your Dockerfile

In a similar vein, it's possible to make your Dockerfile do too much. Limiting your Dockerfile to a minimal set of instructions keeps your image small and improves readability and maintainability.

Problems can often creep in when using multi-stage Docker builds. The feature makes it easy to create complex build sequences that reference more than one base image. Many independent stages can indicate that you're mixing concerns and tightly coupling processes.

Look for logical sections in your Dockerfile that serve a specific purpose. Try breaking them out into individual Dockerfiles, creating self-contained utility images that can run independently to cover parts of your broader pipeline.

You could create a "builder" image containing the dependencies needed to compile your source. Use this image as one stage in your CI pipeline, then feed its output as an artifact into the next stage. There you can copy the compiled binaries into the final Docker image that you use in production.
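
Here's a minimal sketch of that builder pattern as a multi-stage Dockerfile, using Go purely as an example toolchain; the paths and base images are illustrative:

    # Stage 1: "builder" image with the full compilation toolchain
    FROM golang:1.22 AS builder
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app .

    # Stage 2: minimal production image; only the compiled binary is carried over
    FROM gcr.io/distroless/static-debian12
    COPY --from=builder /out/app /app
    ENTRYPOINT ["/app"]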

5. Hard-coding configuration

Container images that include credentials, secrets, or hard-coded configuration keys can cause headaches as well as security risks. Baking settings into your image compromises Docker's headline attraction: the ability to deploy the same artifact into multiple environments.

Use environment variables and Docker secrets to inject configuration at container launch time. This keeps images as reusable assets and restricts access to sensitive data to runtime only.
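
For example (the variable names and values here are placeholders):

    # Inject configuration when the container starts, not when the image is built
    docker run -d --name my-app \
      -e DATABASE_URL="postgres://db.internal:5432/example" \
      example-app:latest

    # With Docker Swarm, secrets are mounted at /run/secrets/<name> inside the container
    printf 'changeme' | docker secret create db_password -
    docker service create --name my-service --secret db_password example-app:latest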

This rule still applies to images intended purely for internal use. Hard-coding secrets means they also end up committed to your version control software, where they're at risk of theft in the event of a server breach.

6. Separate development and deployment images

You should build only one container image for each change in your application. Maintaining multiple similar-but-different images for individual environments suggests you're not benefiting from Docker's "build once, run anywhere" mentality.

It's best to promote a single image through your environments, from staging into production. This gives you confidence that you're running the same logical environment in each of your deployments, so anything that works in staging will also work in production.

Having a dedicated "production" image suggests you may be suffering from some of the other anti-patterns listed above. Perhaps you have a complex build that could be split up, or production credentials hard-coded into your image. Images should be separated by development lifecycle stage, not by deployment environment. Handle differences between environments by injecting configuration with variables.
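
In practice, promotion can be as simple as running the same image everywhere; the registry path and APP_ENV variable below are illustrative:

    # One build per change
    docker build -t registry.example.com/example-app:1.4.0 .
    docker push registry.example.com/example-app:1.4.0

    # Staging and production run the identical image; only injected config differs
    docker run -d -e APP_ENV=staging registry.example.com/example-app:1.4.0
    docker run -d -e APP_ENV=production registry.example.com/example-app:1.4.0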

7. Storing data inside containers

The ephemeral nature of container filesystems means you shouldn't write data into them. Persistent data created by your application's users, such as uploads and databases, should be stored in Docker volumes, or it will be lost when your containers restart.
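
For example, a named volume outlives any individual container (the names here are placeholders):

    # Data written to /var/www/uploads survives container replacement
    docker volume create app-uploads
    docker run -d --name my-app -v app-uploads:/var/www/uploads example-app:latest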

Other kinds of data should stay off the container filesystem wherever possible. Stream logs to your container's output streams, where they can be consumed via the docker logs command, instead of writing them to a directory that would be lost after a container failure.
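
For instance, once your process writes to stdout and stderr, you can follow its output from the host (the container name is illustrative):

    docker logs --follow my-app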

Container filesystem writes can also carry a significant performance penalty when modifying existing files. Docker's "copy-on-write" layering strategy means files in the lower layers of your image are read from that layer, rather than from your container's top layer. If a change is made to such a file, Docker must first copy it into the top layer, then apply the modification. This process can take several seconds for large files.

Conclusion

Watching for these anti-patterns will keep your Docker images easier to reuse and maintain. There's a difference between merely using Docker containers and adopting a containerized workflow. You can readily write a working Dockerfile, but a poorly planned one will limit your ability to capitalize on all of the potential benefits.

Containers should be self-contained functional units created by a reproducible build process. They map to the stages of your development process, not to your deployment environments, and they shouldn't directly drive that process themselves. Images should be artifacts produced by your CI pipeline, not the mechanism that defines the pipeline.

Adopting containers requires a shift in mindset. It's best to start with the basics, keep the high-level goals in mind, then work out how to incorporate them into your process. Without proper consideration of these aspects, using containers can cause long-term headaches, far from the flexibility and reliability touted by proponents of the approach.
