Translating Enterprise Architectures to Container Services

With the premise of containerized applications and thin, specialized operating systems to support container workloads sorted, how do we use containers to deliver our software? It seems like we put all our code into one big container, tell the operating system where to find it, maybe expose a port on a network address, and we’ve got a running service that contradicts all of the current principles of good design. We’ve gone back to the monolithic applications of yore that we’ve worked so hard to move away from.

What’s in an Application?

New packaging doesn’t mean we have to throw out all of our current design principles. For simple applications, you can certainly put everything in a single container. But when you hear “one application per container”, most people are talking about applications like the Apache web server or a MySQL database that are used as “components” to build larger applications. The applications that get built and delivered to users are better described as “services”. That’s the naming convention I’m going to use here, but keep in mind that we’ve completely overloaded the word “application” in IT.

Defining our services

Containers, then, can enforce the principles of separation of function and multi-tier design. Your user presentation code is dropped into a container that has the web front end of your choice. Business logic is delivered in the application server container. Data persistence lives in a database container. Your service is all three containers working as a unit. The idea of an N-tier architecture becomes an N-container architecture.
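To make that concrete, here’s a minimal sketch using the Docker SDK for Python that starts one container per tier and joins them on a shared network so they can find each other by name. The image names, network name, and environment variables are placeholders for whatever your stack actually uses.

```python
import docker

client = docker.from_env()

# A user-defined network so the tiers can reach each other by container name
# (the name "myservice" is just an example).
client.networks.create("myservice", driver="bridge")

# Data persistence tier: the database container.
client.containers.run(
    "mysql:8.0", detach=True, name="myservice-db",
    network="myservice",
    environment={"MYSQL_ROOT_PASSWORD": "example"},
)

# Business logic tier: the application server container (hypothetical image).
client.containers.run(
    "myorg/app-server:1.0", detach=True, name="myservice-app",
    network="myservice",
    environment={"DB_HOST": "myservice-db"},
)

# Presentation tier: the web front end, the only piece exposed to users.
client.containers.run(
    "nginx:1.25", detach=True, name="myservice-web",
    network="myservice",
    ports={"80/tcp": 8080},
)
```

The service is the three containers taken together; nothing about your code has to become monolithic to ship this way.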

There’s good reason for this. First, from a technology standpoint, containers are management layers around processes. If a running process is our unit of management, then it makes sense to separate different processes and their children into different containers. This allows for different quotas, different capabilities, and different policies to be applied to our components.
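Here’s a rough sketch of what those per-container quotas, capabilities, and policies can look like with the Docker SDK for Python; the image name is a placeholder and the limits are illustrative rather than recommendations.

```python
import docker

client = docker.from_env()

# Each component container gets its own quotas and security policy,
# independent of the other components in the service.
client.containers.run(
    "myorg/app-server:1.0", detach=True, name="app-logic",
    mem_limit="128m",        # memory quota for this component only
    nano_cpus=500_000_000,   # roughly half a CPU
    pids_limit=100,          # cap the number of processes in the container
    cap_drop=["ALL"],        # drop Linux capabilities this component doesn't need
    read_only=True,          # immutable root filesystem as policy
)
```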

Scaling our services

Scaling is our next reason for separating components into different containers. With a monolithic container, whatever resources are available at start time are all we get. If the database gets more traffic than it can handle, the whole service fails. If the front end can’t handle inbound page requests quickly enough, the whole service fails. If we move each component to a separate container, scale-out actions become much simpler to manage. If the front end starts to drop new connections, start a new front end container. If some application logic is slowing overall response times, start a new business logic container. Or three. Or remove a container when traffic slows.
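A rough sketch of that scale-out action with the Docker SDK for Python, assuming the front-end containers carry a label that a load balancer (not shown) uses to discover them; the label and image name are placeholders.

```python
import docker

client = docker.from_env()

def scale_frontend(desired: int) -> None:
    """Start or stop front end containers until the count matches."""
    running = client.containers.list(filters={"label": "tier=frontend"})
    if len(running) < desired:
        for _ in range(desired - len(running)):
            client.containers.run(
                "myorg/frontend:1.0", detach=True,
                labels={"tier": "frontend"},
            )
    else:
        for container in running[desired:]:
            container.stop()
            container.remove()

scale_frontend(3)   # traffic spike: add front end containers
scale_frontend(1)   # traffic slows: shrink back down
```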

Separating the building blocks of our service at the component level allows for easier scaling at each individual tier. We still need to “do the right thing” inside the service to handle scaling, but good design makes it quick to take advantage of scaling within our orchestration system.

Resource management is also more effective when we’ve separated our components. Let’s contrive an example where our service needs 512MB of memory: 128MB for the business logic and 384MB for the database. With a single service container, we always need 512MB of free memory on a single host to run. With more granular memory requirements, we can schedule multiple containers on different hosts and get better density and usage, just like we want in a standard virtualization environment. We can also restart and move components around in case of hardware failure, noisy neighbors, or any other typical infrastructure management challenges.
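As a sketch of that arithmetic with the Docker SDK for Python: instead of one container that needs a host with 512MB free, each component declares its own limit, and a scheduler is free to place the smaller pieces wherever they fit. The image names are placeholders.

```python
import docker

client = docker.from_env()

# One monolithic container would need a single host with 512MB free.
# Split into components, the requirements become 128MB + 384MB, which
# an orchestrator can satisfy across different hosts for better density.
client.containers.run(
    "myorg/app-server:1.0", detach=True, name="svc-logic",
    mem_limit="128m",
)
client.containers.run(
    "mysql:8.0", detach=True, name="svc-db",
    environment={"MYSQL_ROOT_PASSWORD": "example"},
    mem_limit="384m",
)
```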

Composing our services

Saving the best for last, separation of duties at the container boundaries creates opportunities for composability and upgradability that we struggle with in IT today. Clean boundaries mean that an application stack is made of three things: the hosting operating system, the component container, and your code. The most interdependence is between the last two, but deploying your code into a component container is no different than deploying your code to a framework outside a container. Testing an upgrade to a component is a simple matter of deploying your code to a container that houses the newer version. No need to upgrade in place, no need for additional hardware or virtual machines. If the upgrade is successful, the new container can be put in service, duplicated, scaled, and so on.
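A sketch of that upgrade test with the Docker SDK for Python: run the same unchanged code against a component container that houses the newer version, with nothing upgraded in place and no extra hardware. The image tag, source path, and test command are hypothetical.

```python
import docker

client = docker.from_env()

# Mount the existing application code into a container built on the newer
# component version (here, a newer Python runtime) and run the test suite.
logs = client.containers.run(
    "python:3.12-slim",                                # candidate component upgrade
    command="python -m unittest discover -s /code/tests",
    volumes={"/srv/myapp": {"bind": "/code", "mode": "ro"}},
    working_dir="/code",
    remove=True,                                       # throwaway container
)
print(logs.decode())
```

If the tests pass, the same image can be put in service; if they fail, nothing in the running environment has changed.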

The same goes for updated operating systems. Patches can be applied to a new host, existing containers can be migrated to the updated host, and if issues occur, moved back. This means we can create a “mix and match menu” for building services: OS from column A, components from column B, code deploy from column C. We gain more control and reduce the impact of making changes to services.

This also allows for a more seamless service upgrade experience for end users, since we can more easily take advantage of schemes like rolling updates or A/B deployments. If there are issues with the new containers or the new deployment, we can quickly redeploy the previous containers. The very same features that enable individual scaling of components allow us to replace them.
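A minimal sketch of a rolling replacement with quick rollback using the Docker SDK for Python, again assuming a load balancer (not shown) routes to whatever containers carry the frontend label; the image tags and the health check are simplified placeholders.

```python
import docker

client = docker.from_env()

def rolling_replace(new_tag: str) -> None:
    """Swap front end containers to new_tag one at a time; keep the old on failure."""
    for stale in client.containers.list(filters={"label": "tier=frontend"}):
        # Start the replacement first so capacity never drops.
        fresh = client.containers.run(
            f"myorg/frontend:{new_tag}", detach=True,
            labels={"tier": "frontend"},
        )
        fresh.reload()
        if fresh.status != "running":   # rough stand-in for a real health check
            fresh.remove(force=True)    # issue with the new deployment: keep the old container
            return
        stale.stop()
        stale.remove()

rolling_replace("1.1")
```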

This isn’t a comprehensive review of all of the benefits of containerized component architectures for your services, but it’s a fairly compelling set that should spur your interest in containers.

Matt Micene Solutions Architect, DLT

Matt Micene is a Solutions Architect and lead engineer for DLT Solutions. He has over 10 years of experience in information technology, ranging from Solaris and Linux architecture and system design to data center design to long, coffee-filled nights of system maintenance for various web-based service companies. In his current role, Matt advises customers in the pre-implementation stage on the adoption and understanding of technologies such as cloud computing and virtualization. He assists public sector customers with selecting the best building blocks to create environments supporting their missions. A strong advocate for open source, Matt is also a Red Hat Certified Architect (RHCA) and was named RHCE of the Year in 2010.