Technical Insights: Docker at California’s Child Welfare Digital Service


By Thomas Ramirez

Note: This is part two of a three-part series about the work we performed at CWDS, and more specifically our experience with automating what happens after code check-in.

As in the traditional IT (physical hardware) world, most workloads require more than one component. In our architecture, each container holds one application component, complete with whatever libraries and supporting tools it needs. However, containers are only part of the solution: Docker itself is not part of the operating system (OS), so before you can run a Docker container, someone needs to install Docker.
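To make the "one component per container" idea concrete, here is a minimal sketch of a Dockerfile for a hypothetical API component. The base image, file paths, and port are illustrative assumptions, not CWDS specifics:

```dockerfile
# One application component per container: this image holds only the API
# service and the libraries it needs, nothing else.
FROM python:3.11-slim

WORKDIR /app

# Install only this component's dependencies.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the component's code and declare how it runs.
COPY api/ ./api/
EXPOSE 8080
CMD ["python", "-m", "api.server"]
```

A multi-component workload would have one such Dockerfile (and one image) per component, rather than bundling everything into a single image.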

Note: Just a quick reminder, modern applications are built in a modular or component fashion, often consisting of many smaller (software) parts.

To deliver our modernized solution, we used two popular pieces of automation technology: Ansible and Jenkins. First, what is Ansible? Ansible is a configuration management (CM) tool that lets you write a “playbook,” a cross between a script and a set of installation specifications. When a playbook runs, Ansible works through the steps one by one (like a script), checking whether an appropriate version of the specified element has been installed and configured as defined in the step (like installation specifications). If it has, Ansible moves on to the next step; if not, Ansible performs whatever installation and configuration is needed. Similarly, what is Jenkins? Jenkins is a continuous integration (CI) tool that orchestrates processes, such as running Ansible playbooks, in response to events like code being checked into the master branch or someone manually clicking a button. In summary, to deliver our robust container and Docker environment, we used Ansible to create a series of playbooks and Jenkins to coordinate the automation and timing.
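A minimal sketch of what such a playbook might look like, assuming a yum-based Linux host (the host group and package names are illustrative, not our actual playbooks):

```yaml
# Sketch of an idempotent playbook: each task checks current state
# before changing anything, so re-running it is safe.
- name: Ensure Docker is installed and running
  hosts: app_servers
  become: true
  tasks:
    - name: Install the Docker engine
      yum:
        name: docker
        state: present   # no-op if Docker is already installed

    - name: Ensure the Docker service is started and enabled
      service:
        name: docker
        state: started
        enabled: true
```

The `state: present` / `state: started` declarations are what give playbooks their script-meets-specification character: Ansible only acts when the host does not already match the declared state.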

Note: For those of you who would like more information: modern source code repositories usually support the creation and maintenance of multiple “branches,” or versions of the source code, that can be used for different purposes and merged together. The “master branch” is the copy that should contain only code that works properly.

Making this all work together, from developer to operations (Build/Run), requires a combination of these technologies. For us, they allowed the DevOps team to wire up our CI tool so that, once a developer believed the code he or she was working on was ready and merged it into the master branch, the tool would build Docker images and test them.
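In modern Jenkins, that build-and-test wiring can be expressed as a pipeline-as-code. This is a simplified sketch, not our actual configuration (the stage names, image name, and test script are assumptions, and a setup like ours could equally use Jenkins freestyle jobs):

```groovy
// Sketch of a declarative Jenkins pipeline: on a merge to the master
// branch, build the Docker image and run the test suite against it.
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                sh 'docker build -t example-org/app:${BUILD_NUMBER} .'
            }
        }
        stage('Test image') {
            steps {
                sh 'docker run --rm example-org/app:${BUILD_NUMBER} ./run-tests.sh'
            }
        }
    }
}
```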

If all tests passed, the images would be pushed to DockerHub and the tool would perform the deployment to the target environment. Since the target environment could be brand new, the deployment would run CM scripts to make sure the VMs were set up correctly, install and start the container from DockerHub, and notify staff that the CI process had completed. A DevOps engineer, developer, or even a manager could manually run the deployment process. Simply put, this means that shortly after a developer believed a feature was complete, it was available for final approval and ready to be pushed to test, production, or any other environment. It also means that the environmental needs of the application had already been documented in the CM and CI scripts as part of developing the automation, and the scripts themselves are version controlled.
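The deployment side can be sketched as another playbook that pulls the tested image from DockerHub and starts it on the target hosts. This is an illustrative sketch using the Ansible Docker modules; the image name, tag variable, and ports are assumptions, not our actual deployment scripts:

```yaml
# Sketch of a deployment playbook: pull the image that CI pushed to
# DockerHub and (re)start the container on the target environment.
- name: Deploy application container
  hosts: target_environment
  become: true
  tasks:
    - name: Pull the image pushed by CI
      docker_image:
        name: example-org/app
        tag: "{{ release_tag }}"
        source: pull

    - name: Run the container
      docker_container:
        name: app
        image: "example-org/app:{{ release_tag }}"
        state: started
        restart_policy: always
        published_ports:
          - "8080:8080"
```

Because the playbook is idempotent, the same run works against a brand-new VM or an existing one, which is what lets the CM scripts double as documentation of the application's environmental needs.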

What we’ve developed at CWDS can be reused on the new SOMS M&O project. We can and will use CM and CI technologies to define and build environments, as well as to perform the deployments of the custom applications to help modernize their IT services.

Next month, I’ll talk about some aspects of extreme programming (XP).