If your IT team finds it takes too long to set up application environments, the answer you’re looking for just might be found in containers—as many other IT teams have already discovered.

The container market is already hot, and it’s growing rapidly. A 451 Research report predicts the market will grow more than 250% by 2020—from $762M in 2016 to $2.7B. The report also predicts that application containers will be the fastest growing segment in the cloud-enabling technologies market.

Containers are gaining popularity because they encapsulate an application along with the minimal run-time resources it needs to function. That makes containers ideal for moving applications from on-premises data centers to the cloud and between cloud environments.

The big pay-off comes when you anticipate a spike in user demand and want to be sure the software will continue to run reliably. Containers allow you to quickly replicate applications from one environment to another, so users never suffer degraded application performance. If your application uses services—such as a database, a web service, middleware, queuing or a back-end application—you can containerize those services too and move them to the new environment along with the application. The migrations take just a few seconds!

Containers also help applications move more smoothly through continuous integration and deployment pipelines—from development to testing, staging and production—particularly for complex builds that require security testing. Another key benefit is that containerized applications run more efficiently, tapping into fewer compute resources and thus reducing compute costs. And in cases where more than one containerized application is running on a server, if one of the applications starts degrading in performance, it won’t impact the other containerized applications running on the same server.


No Hypervisor and Minimal OS Layers

When setting up a container environment, each application gets its own container, as does each of the supporting services. Containers are somewhat similar to virtual machine technology, but where each virtual machine runs a full guest OS on top of a hypervisor, containers function without a hypervisor and require only minimal OS layers. Virtual machines are also much larger, typically around 2GB, while containers range from roughly 20-400MB, making them far more portable.

By leveraging containers, your business can run several application instances without repeating all the OS layers that manage the resources. Containers running in an AWS cloud have the added advantage of being able to “wake up” when a request comes in and then “go to sleep” when done serving that request. This means you only pay for cloud compute resources when the app is active.

Once you define a container—specifying which OS base the application uses and the minimum libraries it needs to run—you can replicate the container anywhere, and it will run in exactly the same way. For example, you can build an application and containerize it on an AWS platform with a tool such as Docker. If you need to run the application on an Azure platform, you simply run the same image on a Docker engine there, and the container behaves on Azure just as it did on AWS.
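As a minimal sketch, the container definition described above might look like the following Dockerfile for a hypothetical Python application (the base image, file names and entry point here are illustrative, not from a specific customer project):

```dockerfile
# Illustrative Dockerfile: a slim OS base layer plus only the libraries the app needs
FROM python:3.11-slim

WORKDIR /app

# Install the minimum libraries required to run
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code and define its entry point
COPY . .
CMD ["python", "app.py"]
```

Because everything the application needs is declared in this one file, the resulting image runs identically whether the Docker engine hosting it sits on AWS, Azure or an on-premises server.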

While Docker is one of the leading tools for creating containers, if you are managing a lot of containers, you will also want an orchestration tool like Kubernetes, which enables you to seamlessly instantiate your containerized applications in multiple cloud environments. You can then manage the containers and define rules for how traffic will flow among the applications. You can also set up load balancing and scaling so each container gets the compute resources it needs to deliver services to end users.
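A hedged sketch of what that orchestration looks like in practice: the Kubernetes manifest below runs three replicas of a hypothetical web application behind a load balancer, with a compute reservation per container (the names, image and ports are placeholders, not a real deployment):

```yaml
# Illustrative Kubernetes manifest; names, image and ports are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                      # run three container instances for scale
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: registry.example.com/web-app:1.0
        ports:
        - containerPort: 8080
        resources:
          requests:                # reserve the compute each container needs
            cpu: "250m"
            memory: "256Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  type: LoadBalancer               # spread incoming traffic across the replicas
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 8080
```

The same manifest can be applied to a Kubernetes cluster in any of the major clouds, which is what makes multi-cloud instantiation largely seamless.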


Container Use-Case Examples

Containers are particularly helpful when new development work is needed for an application that’s already in production. You can replicate a standardized production environment onto developer laptops in just a few minutes. The environment matches exactly, so when developer changes are moved back into production, the application won’t fall apart. And if a new function added by one of the developers causes a problem once you go into production, you can simply roll back the image to the previous version to keep the application running while you fix the problem.
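If the containers are managed by Kubernetes, that rollback is a one-line operation (the deployment name here is hypothetical):

```shell
# Roll the hypothetical "web-app" deployment back to its previous image version
kubectl rollout undo deployment/web-app

# ...or pin a specific earlier revision while the problem is being fixed
kubectl rollout undo deployment/web-app --to-revision=2
```

Because the previous image is unchanged and already proven in production, the rollback restores service in seconds rather than requiring a rebuild.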

If your internal IT team does not have experience in building application containers, that’s where Pantek container experts can help. As an example, one of our customers delivers services online, using an appliance that runs a user interface integrating with Apache Kafka queues and MySQL. Whenever one box could no longer handle the workload, the client used to deploy a whole new appliance. But by moving the interface and its back-end services into their own separate containers, our team of developers enabled the customer to scale the user interface—without having to buy a whole new appliance each time activity spikes.

We also helped another customer move mobile Android applications with a complicated build process into containers. Leveraging Docker and Kubernetes, we moved the front-end micro-applications and middleware services to Node.js, and we developed a consistent middleware layer that works across the IBM, Azure and AWS clouds. With the containers in place, the applications can be loaded onto mobile devices in just a few minutes, rather than the 1-2 days required previously.


Worth a Conversation

For businesses building an application from scratch, creating a container for the application almost always makes sense. Containers may also make sense for existing applications, depending on their complexity.

The process does take time upfront, but then you save a lot more time when it comes to setting up that same application in different environments. Given that the need to move applications is prevalent in today’s continuous DevOps world, we think it’s worth having the conversation with your cloud provider to determine which of your applications should be containerized.