Introduction

If you are a software developer, or a user who works with developers, you have probably encountered some version of the adage, “When your customers are facing issues with certain functionality there can be several reasons for it, but nine times out of ten it is an environmental dependency issue.” Accidental deletion of a folder referenced by an application, use of disparate computing environments and incompatible libraries are some of the most common problems that occur during application development. But now, thanks to containerization, there is an easy way out.

So, what is Containerization and how does it help?

A container can be defined as a single package that bundles an application together with its libraries, configuration files and runtime environment. A static version of a container is called an image. Containerization differs from operating system virtualization in that it provides a lightweight platform consisting of an application and its dependencies rather than a full-fledged operating system. By containerizing the application platform and its dependencies, changes in the OS distribution and underlying infrastructure are abstracted away.
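As an illustration, such a bundle is commonly described in a Dockerfile; the base image, file paths and entry point below are hypothetical, a sketch of the idea rather than a prescription:

```dockerfile
# Hypothetical Dockerfile: bundles the application, its libraries,
# configuration files and runtime into a single image.
FROM eclipse-temurin:17-jre          # runtime environment
WORKDIR /opt/app
COPY build/libs/app.jar .            # the application itself
COPY config/ ./config/               # configuration files
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Building this file (e.g. with `docker build`) produces the static image; running the image creates a container.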

Because containers provide the application with its own environment to execute in, developers can be confident that if an application works in one environment (say, testing) it will work in another (production). With cloud providers like AWS, Google Cloud and Microsoft Azure leading the IT world, portability between multiple cloud ecosystems has become essential. Containerization brings other benefits as well, such as ease of deployment through automation, horizontal scaling and high availability.

Sounds interesting? So, what does application containerization involve?

The structure and deployment environment of your existing application are the most important factors in deciding your containerization option. If you are designing a new application to run on containers, it is easy to follow a microservices architecture approach. But most organizations designed their architecture and infrastructure around legacy applications several years ago, and applications built in that time frame mostly follow a monolithic approach. If your application exhibits any of the attributes below, it can be considered a legacy application, and some re-engineering will be required to containerize it.

  • Large and complex system that follows a monolithic application pattern
  • Not flexible, testable or modular in nature, and unable to keep pace with business growth
  • Development, testing and production environments need to be synchronized
  • Uses the local file system for persistent storage
  • Significant downtime for any upgrades or updates to the application
  • Manual deployment process

So, what are my options?

Following the techniques outlined below may help you re-engineer your application for containerization.

The Microservice-Based Approach

Modularity is central to containers. Check whether your large and complex legacy application can be functionally decomposed into multiple autonomous services that can be deployed and managed separately. Each of these modules should be independent, so that it can be updated without any service interruption and scaled individually based on load or need.

But most legacy applications are made up of complex, tightly coupled modules that cannot be broken down into smaller ones. In such cases, the entire application can be enclosed within a single container.
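A minimal sketch of this "whole monolith in one container" setup, using a hypothetical Compose file (the image names, port and credentials are assumptions):

```yaml
# Hypothetical docker-compose.yml: the tightly coupled monolith runs
# whole in one container, with its database in a separate container.
services:
  legacy-app:
    image: legacy-app:1.0        # image name is an assumption
    ports:
      - "8080:8080"
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example # placeholder only; use a secret store
```

Even without decomposition, this buys reproducible packaging and deployment; decomposition into services can follow later.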

Remembering SAC (Single Application Container)

Even before the advent of microservices, application designers developed modular services. Yet today, even a well-structured monolith shows the characteristics of a legacy application, because all of its services are deployed to a single application server: if any one service needs attention, the entire application server must be stopped and restarted. In software design terms, that server becomes a Single Point of Failure (SPOF).

Enclosing more than one application or service within a container can also create confusion, as the container may appear healthy even when one of its services has crashed.
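The SAC principle can be sketched as one container per service, so each can be restarted, replaced or scaled independently; the service and image names below are hypothetical:

```yaml
# Hypothetical compose file following SAC: each service gets its own
# container, so a crash or restart of one does not affect the others.
services:
  orders:
    image: shop/orders:1.0
  payments:
    image: shop/payments:1.0
  inventory:
    image: shop/inventory:1.0
```

With this layout the container's health directly reflects the health of the single service inside it.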

Being Stateless

Containers are by nature easily replaceable. Any data created during a container's lifetime is, by default, wiped out when the container is replaced. For this reason, all persistent data should be stored outside the container. Most legacy applications use the host file system to store application artifacts such as images or files. In such cases, we can mount the host's file system at specific locations in the container file system, so that data survives across container replacements.
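Such a mount can be sketched as a volume declaration; both paths here are hypothetical placeholders for wherever the legacy application writes its files:

```yaml
# Hypothetical compose fragment: a host directory is mounted over the
# container path the application writes to, so files written there
# survive when the container is replaced.
services:
  legacy-app:
    image: legacy-app:1.0
    volumes:
      - /srv/appdata:/var/app/uploads
```

Named volumes or network storage serve the same purpose when containers may move between hosts.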

Managing Configurations

Since consistency is important in the containerization process, a single image should be used across all environments (e.g. development, testing and production). In several cases, however, the source code contains environment-specific configuration such as queue names, external system URLs and database URLs, and a development server might use different values than a testing or staging server. These values can be passed to the containers as environment variables, so that each container is configured at startup time.
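One minimal way to read such settings at startup, sketched here in Python (the variable names and default values are assumptions, not part of any real application):

```python
import os

# Read environment-specific settings at startup; fall back to
# development defaults when a variable is not set. In a container,
# these are supplied via `-e` flags or an orchestrator's config.
DB_URL = os.environ.get("DB_URL", "jdbc:postgresql://localhost/devdb")
QUEUE_NAME = os.environ.get("QUEUE_NAME", "orders-dev")

def describe_config():
    """Return the effective configuration as a dict."""
    return {"db_url": DB_URL, "queue_name": QUEUE_NAME}
```

The same image then runs unmodified in every environment; only the injected variables differ.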

Automating and Managing Deployments

The main drawback of a legacy system is its inability to automate the deployment process, usually due to hardware, technology or resource limitations. Containerized applications and services can, of course, still be deployed manually. But remember, we chose containerization for benefits such as high availability and scalability. This means that multiple instances of the same application or service can reside on the same or different physical machines, and with hundreds of such services, manually managing their deployment configurations would be tedious and error-prone. Thankfully, orchestration tools such as Kubernetes automate, scale and manage containerized applications for us.
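In Kubernetes, for example, running multiple instances of a service across machines reduces to declaring a replica count; the names, image and port below are hypothetical:

```yaml
# Hypothetical Kubernetes Deployment: three replicas of one service,
# scheduled across nodes and restarted automatically on failure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: shop/orders:1.0
          ports:
            - containerPort: 8080
```

Scaling then becomes a one-line change to `replicas` rather than a manual deployment exercise.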

Another best practice is to automate the build pipeline using a CI/CD process or tool, so that a build is triggered as soon as code is merged into the master branch. The resulting artifacts can then be containerized and deployed to the desired environment with the help of an orchestration tool.
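As one possible shape for such a pipeline, here is a hypothetical GitHub Actions workflow; the build command, image name and branch are assumptions and would differ per project:

```yaml
# Hypothetical CI workflow: on every merge to master, build the
# artifacts and package them into a container image.
name: build-and-containerize
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew build                       # build step is an assumption
      - run: docker build -t shop/orders:${{ github.sha }} .
```

A final step would push the image to a registry, from which the orchestrator deploys it.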

Summary

There has been a significant rise in demand for cloud computing and for microservices built on container-based technology. Although re-engineering for containerization may seem a difficult task, it can be achieved with a well-planned strategy. The most important benefit of containerization as a virtualization method is the flexibility to operate in the cloud, and as the technology improves, its benefits for enterprise businesses will only continue to grow.