Is a container solution the right choice for my application?
The company is doing well. The application has finally gained the traction it needs. The load is increasing. But so are the general IT costs. Maybe you’ve had downtime, bug-fixing is taking too long, and the deployment strategy isn’t polished yet. One solution could be to switch to a container-based environment. But is this really the right choice and is it a good fit for your application setup? Read more about the requirements from both an organizational and technical perspective.
First things first: It all starts with a proper development and deployment strategy.
Whether you move to containers or not, many of the problems mentioned above can be solved with the right deployment strategy and processes. You can set up a perfect CI/CD process entirely without containers. A higher level of automation in the development and deployment process alone reduces errors and gets new code into a new environment much faster. Treating environments as immutable and deploying changes via the Infrastructure as Code design pattern ensures a consistent environment, with version control, automatic updates, and trackable changes. A well-functioning team and good control mechanisms are essential given the ever-changing challenges of running production software.
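To make this concrete, here is a minimal sketch of such a pipeline, assuming GitHub Actions syntax; the job names, Makefile targets, and Terraform step are illustrative, not a prescribed setup:

```yaml
# Hypothetical CI/CD workflow (GitHub Actions syntax); names are illustrative.
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test                      # assumes a Makefile with a "test" target
      - name: Apply infrastructure as code
        run: terraform apply -auto-approve  # environment defined in version-controlled .tf files
      - name: Deploy application
        run: make deploy                    # immutable artifact promoted to the target environment
```

Every push to `main` runs the same automated path, so the state of the environment is always derived from what is in version control.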
What does containerization mean for the development process?
Creating the containers is the easy part. The hard part is everything around them that makes the system production-ready. Container networking, for example, involves details you probably don’t think about while developing your containers:
- It’s important to gain deep knowledge about how to manage connections between containers, whether you’re working with a single host or a clustered solution.
- The networking concepts differ depending on whether you are using plain Docker or Kubernetes. It is important to understand the differences and limitations of each solution.
- Docker offers several network drivers (such as bridge, host, overlay, and none), each of which brings different tradeoffs in user experience, security, and encapsulation.
- If you’re running containers with different hosts, IP mapping can quickly become complicated.
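On a single host, much of this can be handled with a user-defined bridge network, where containers reach each other by service name through Docker's embedded DNS. A minimal sketch as a Compose file; the service and image names are illustrative:

```yaml
# Hypothetical docker-compose.yml: two containers on a user-defined
# bridge network resolve each other by service name (e.g. "api").
services:
  web:
    image: nginx:alpine
    networks: [backend]
  api:
    image: my-api:latest    # illustrative image name
    networks: [backend]
networks:
  backend:
    driver: bridge          # single-host only; multi-host setups need an overlay network
```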
Given this complexity, it’s understandable if you prefer to opt for an orchestration platform that takes away much of it and allows configuration via YAML objects. However, even this is not without effort, and good knowledge of the available options is still necessary.
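As a sketch of what those YAML objects look like, here is a minimal Kubernetes Deployment; the name, labels, image, and port are all illustrative assumptions:

```yaml
# Hypothetical Kubernetes Deployment; names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                 # the platform keeps three copies running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0
          ports:
            - containerPort: 8080
```

Even in this small example you already have to understand selectors, labels, and replica management, which is the kind of platform knowledge the next section is about.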
What are the requirements from an organizational perspective?
The most important thing is to gather all the knowledge you need to run applications on a container platform: the knowledge of how to create and orchestrate containers, and the knowledge of how the platform’s API works. It is important to have a team that accepts the new way of working and is ready to implement changes where necessary. Due to the high complexity of container orchestration platforms, it may be advisable to outsource everything that does not directly add value to the business: the platform itself and the services that support it can be run by someone else while you focus on your application.
Okay, and how much time does all this take?
This question is not easy to answer because there are many dependencies to consider, starting with the complexity of the project. For example, a container for a Golang app that doesn’t require C libraries can be done in a few hours. But if you containerize a complex PHP app with Apache and many required modules, you can quickly spend two weeks ironing out all the vulnerabilities. Once you have the container, the orchestration effort also depends on the app: how much the load fluctuates, which other services are needed, which Secret and Config objects need to be injected at runtime, and whether any special network configurations are required. This is of course not an exhaustive list.
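The simple Golang case above can be sketched as a multi-stage Dockerfile; the package path and binary name are illustrative assumptions:

```dockerfile
# Hypothetical multi-stage build for a Go app with no C dependencies.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO disabled, so the binary is statically linked and runs without libc
RUN CGO_ENABLED=0 go build -o /app ./cmd/server   # illustrative package path

# The final image contains only the binary: a tiny attack surface,
# which is why there are few vulnerabilities to iron out.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

A PHP/Apache image, by contrast, carries a full OS userland plus every enabled module, and each of those is something that can show up in a vulnerability scan.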
Conclusion: It’s worth the effort.
It all seems like a lot of work. In the long run, however, using automation and the right tools allows developers to write, review, and deploy code much faster than before. Automating both application and environment deployments saves time and money: the exact state of every environment is known at all times, which leads to repeatability and fewer errors. The Infrastructure as Code pattern also makes setups self-documenting. With a reliable partner taking care of the infrastructure, you have more time for your application and your business.