Containers are a powerful enabling technology for cloud adoption. But they can be difficult to understand if you’ve not worked with them before. This post offers a top-line overview of what they are and what they do.
The role of container technology
Large-scale cloud migration often needs to walk a fine line between ambition and pragmatism. It usually needs to be completed by a specific deadline, but modernizing workloads to take advantage of cloud capabilities takes time. This is where container technology steps in. It offers a relatively quick and easy way to partially modernize applications, so they perform well in the new environment.
What is a container?
Container technology ‘contains’ IT applications and all the elements needed to run them in a single place. This means an application will run in the same way no matter where it’s hosted. Just like physical shipping containers, these virtual containers are easily moved from place to place, regardless of what’s in them.
Our guide to containers has a great animated video explaining this.
It’s important to understand that “containerizing” an application doesn’t deliver the same benefits as rearchitecting it to be cloud native. However, it does offer an effective way to streamline cloud migration and adoption, while unlocking valuable cloud capabilities.
For instance, by holding an application’s dependencies in one place, containers overcome issues with code portability. They facilitate an efficient and modern approach to software deployment in the cloud.
To maximize the benefits of containers, they need to be used in conjunction with orchestration services. Major public cloud providers have their own integrated container services compatible with popular container software such as Docker. Specialist open-source container orchestration systems like Kubernetes are also widely used.
Benefits of using containers
When an application uses a container instead of a virtual machine (VM), it doesn’t require a dedicated operating system (OS). Instead, it shares the kernel and resources of a host OS that can run several containers. This means containers are more lightweight than VMs – think megabytes as opposed to gigabytes – and it reduces the overall costs associated with OS licensing and management.
So, containers result in better efficiency, which allows resources to be used where they deliver better value, such as running applications and scaling them when needed. Containers are also a lot faster to spin up than VMs, making them ideal in situations where you need to scale out quickly to deal with spikes in demand.
Creating a container image holding code, configuration and any dependencies results in an asset that can be deployed consistently in any environment. This means developers can carry out testing in a development environment or on their own machines, knowing everything will work when deployed to the production environment. This consistency between environments reduces the need for troubleshooting which in turn helps increase the velocity of deployment.
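As a sketch of what that looks like in practice, here is a minimal Dockerfile for a small Python service. The file names, base image and start command are illustrative assumptions, not a prescription:

```dockerfile
# Illustrative Dockerfile: code, configuration and dependencies in one image
FROM python:3.12-slim

WORKDIR /app

# Install pinned dependencies first, so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code and configuration travel inside the image
COPY . .

# Start the service; the same command runs identically in every environment
CMD ["python", "app.py"]
```

Building this once (`docker build -t myapp .`) produces an image that runs the same way on a developer laptop, a test cluster or production.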
Containers are also a great option in situations when an older application needs to move to the cloud but a full rebuild can’t be justified. Rather than opting for a lift and shift which only unlocks limited benefits, containerizing legacy applications improves operability in the cloud and takes less time than rearchitecting.
Another advantage of using containers is the ability to run applications at scale. This involves running many containers across multiple nodes in a cluster to provide high availability and resilience.
There are solutions available to handle this for you. Docker’s own orchestration platform, Swarm, provides clustering which allows you to scale out and scale in according to peaks or troughs in demand.
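A rough sketch of that workflow with the Docker CLI might look like the following (the service name and image are illustrative; these commands assume a host with Docker installed):

```shell
# Illustrative Docker Swarm workflow
docker swarm init                                    # make this host a Swarm manager
docker service create --name web --replicas 3 nginx  # run three replicas across the cluster
docker service scale web=5                           # scale out for a spike in demand
docker service scale web=2                           # scale back in when demand drops
```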
Kubernetes has similar features to Swarm as well as advanced options which help deploy and update applications at scale. Kubernetes was originally designed by Google and it’s a proven enterprise-scale container orchestration solution.
Both Swarm and Kubernetes continuously reconcile the cluster’s actual state with the state you declare, which means your service runs as expected with the correct number of instances. For example, if you always need three instances of a container image running on your cluster, this can be handled via orchestration. Even if an underlying host fails, the system will ensure three instances of your container are running across the cluster. This is all done automatically and without intervention from your team.
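In Kubernetes, the three-instance example above is expressed declaratively. A minimal Deployment manifest might look like this (the names and image are illustrative assumptions):

```yaml
# Illustrative Kubernetes Deployment: declare three replicas and the
# control plane keeps three running, even if a node fails
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` hands responsibility for maintaining the replica count to the cluster itself.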
A quicker, easier way to unlock cloud benefits
With containers, cloud migration can be both fast and effective. They make it possible to modernize applications and leverage cloud benefits without having to recode or rebuild. Once you’re in the cloud, this technology is also useful in scenarios where environments need to be created or extended quickly. All in all, containers are a vital part of the cloud adoption mix.