Announcing Faster Container Builds Powered by Kubernetes

October 5, 2016 · By Joseph Schorr

CoreOS Tectonic enables everyone to operate their server systems like Google does; we call this GIFEE (Google’s Infrastructure For Everyone). Built on Kubernetes, the production-grade container orchestration system, Tectonic enables you to manage complex containerized application infrastructure from application source code to live load balancer.

A key piece of Tectonic is Quay: the container registry by CoreOS. It is the cornerstone for building, analyzing and distributing container images and is used by thousands of companies, large and small, as part of their containerized infrastructure. Quay is even used to build and deploy itself, demonstrating the power of the engineering tools we develop at CoreOS.

Quay's build system transforms Dockerfiles and source code into container images automatically. It is an important piece of infrastructure for our customers (and ourselves) and we are happy to announce that, starting today, it has gotten a major upgrade: All builds on Quay are now executed on a Tectonic cluster, running on Packet bare metal servers.

Put simply: with this change, builds on Quay just got a lot better for everyone, because we now take advantage of the speed and automatic scaling of Kubernetes.

Quay builder KVM diagram
Each build in Quay is assigned to a builder, which executes within a KVM instance

Quay.io users should see significant and immediate improvements:

  • Builds now have a much faster startup time, roughly an 80% improvement. Instead of waiting for an AWS EC2 instance to boot (which can take upwards of three minutes), builds now start in closer to 15 seconds.

  • Leveraging Kubernetes results in much more efficient use of compute resources, which allows us to offer more resources for each individual build.

Quay EC2 Timing Diagram
Container builds with Kubernetes start 80% more quickly than the previous system

For Quay Enterprise customers (support coming soon!), there will be additional benefits:

  • The build cluster can automatically scale based on demand. No more manual scaling necessary.

  • By leveraging Kubernetes, no other setup is needed: Simply point Quay Enterprise to a valid Kubernetes cluster and it just works as expected.

  • Quay Enterprise customers will get the same hardened security associated with Quay.io builds, with each build running in its own micro-virtual machine, fully isolated from the machine conducting the build.

With builds running on a bare metal provider like Packet, Quay users get faster build times without needing to do anything.

“As container-based stacks become more commonplace and complex, services like Quay play an increasingly critical role in everything from developer velocity to production stability and security,” said Zachary Smith, CEO, Packet. “Like most cloud native applications, Quay runs best on ‘Google style’ infrastructure: dedicated, bare metal servers with advanced hardware security features like Trusted Computing and a lightning fast network. We’re thrilled to have Quay running on Packet, and can’t wait to see what users all over the world do with faster, more secure builds.”

How it works

Container builds typically begin on Quay via a webhook called from a code repository, such as GitHub. The webhook call contains the commit to build, along with metadata such as the commit message and author, which will be displayed in the Quay UI.

Once a build has been triggered, it is immediately placed into a queue to be picked up for processing:

Build Triggers Diagram
New code commits can automatically trigger a new container build
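The trigger-and-enqueue flow above can be sketched in a few lines of Python. This is a simplified model with hypothetical payload field names (patterned after a GitHub-style push event) and an in-memory stand-in for Quay's build queue, not Quay's actual code:

```python
import json
from collections import deque

build_queue = deque()  # in-memory stand-in for Quay's build queue

def handle_push_webhook(payload: str) -> dict:
    """Parse a GitHub-style push webhook and enqueue a build request."""
    event = json.loads(payload)
    head = event["head_commit"]
    build = {
        "commit": head["id"],              # the commit to build
        "message": head["message"],        # shown in the Quay UI
        "author": head["author"]["name"],  # shown in the Quay UI
        "repo": event["repository"]["full_name"],
    }
    build_queue.append(build)  # later picked up by a build manager
    return build

# Abbreviated example payload
payload = json.dumps({
    "repository": {"full_name": "example/app"},
    "head_commit": {
        "id": "abc123",
        "message": "Fix login bug",
        "author": {"name": "Jane Doe"},
    },
})
queued = handle_push_webhook(payload)
```

The webhook handler deliberately does no build work itself; it only records what to build, so the queue can absorb bursts of commits while builders catch up.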

Each instance in the Quay cluster runs a build manager, which periodically checks the build queue for newly triggered builds. Whenever possible, the build manager retrieves a build from the queue and schedules it onto the builders using the execution API. Previously, this scheduling was done via the Amazon EC2 API. With Kubernetes, the Quay build managers instead use the Kubernetes API to schedule a Job for each build. Each Job consists of metadata (such as a one-time-use token for authorization), as well as a single Pod executing a container that runs a virtual machine containing the Quay build agent.

Build Queue Diagram
One KVM instance is run per build
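A Job of this shape might look like the following minimal sketch. The image name, labels, and token wiring are illustrative assumptions, not Quay's actual configuration; the manifest is built as a plain Python dict matching the Kubernetes `batch/v1` Job schema:

```python
import uuid

def make_build_job(build_id: str) -> dict:
    """Construct a Kubernetes Job manifest for a single container build.

    Illustrative only: the image name and label keys are hypothetical.
    """
    token = uuid.uuid4().hex  # one-time-use token handed to the build agent
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {
            "name": f"build-{build_id}",
            "labels": {"quay-build": build_id},
        },
        "spec": {
            "template": {
                "spec": {
                    "restartPolicy": "Never",  # each build runs exactly once
                    "containers": [{
                        "name": "builder",
                        # container that boots the KVM micro-VM holding the agent
                        "image": "quay.io/example/build-vm:latest",
                        "env": [{"name": "BUILD_TOKEN", "value": token}],
                    }],
                },
            },
        },
    }

job = make_build_job("1234")
```

Submitting this manifest to the Kubernetes API creates the Pod; `restartPolicy: Never` reflects the one-shot nature of a build, and deleting the Job later tears down everything it created.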

It’s important that each build run in a virtual machine because of the requirements of docker build. Docker builds must run as root, and while solutions exist to enforce security for root containers (such as user namespacing), they introduce incompatibilities that are not present on most developers’ laptops when performing builds. To ensure the same experience, the Quay build system always runs builds as root, wrapping each in a virtual machine to secure the cluster while maintaining compatibility.

Build Instance Diagram
Containers are built inside an ephemeral KVM instance and stored in Quay

Things get more interesting within the virtual machine. Inside the virtual machine is our containerized build agent, which connects to the build manager, requests the details of the build (using the token mentioned above), performs the build and push, and streams all logs back to the build manager. The build agent is never given any information unrelated to the build itself, and is never trusted to run a second build. Once a build has completed, the Job is removed via the Kubernetes API, destroying its Pod, the VM and all data found inside.
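The one-time-use token is what keeps an agent scoped to a single build. The toy model below (hypothetical class and method names, not Quay's protocol) shows the manager-side idea: the token is consumed on first use, so a second request with the same token is rejected:

```python
class BuildManager:
    """Toy model of the manager side of the agent protocol.

    The token is consumed when the agent fetches its build details,
    so an agent can never be handed a second build.
    """

    def __init__(self):
        self._pending = {}  # token -> build details

    def schedule(self, token: str, details: dict) -> None:
        self._pending[token] = details

    def request_build(self, token: str) -> dict:
        # pop() consumes the token: a repeat request with it fails
        details = self._pending.pop(token, None)
        if details is None:
            raise PermissionError("token already used or unknown")
        return details

manager = BuildManager()
manager.schedule("tok-1", {"commit": "abc123"})
details = manager.request_build("tok-1")  # first use succeeds
```

Combined with deleting the Job (and its Pod and VM) after completion, this means nothing from one build, neither credentials nor state, can leak into the next.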

Interested in learning more? Stay in touch with CoreOS

The CoreOS team is at LinuxCon and ContainerCon in Berlin this week, so please stop by the booth area to learn more.

Get started with hosted Quay today.

If you are interested in learning more and staying in touch with CoreOS about Quay, Quay Enterprise, or Tectonic, please fill out your information.