If you haven’t visited our website in a while, you’ll notice that Quay, by CoreOS, got a new look today.
Along with our new look, today we are excited to share some updates.
Quay Enterprise Pricing Update
To go along with our new look, we are also announcing new Quay Enterprise (formerly CoreOS Enterprise Registry) pricing. Our new pricing reflects your feedback: an overwhelming preference for a model that lets you use as many containers and repositories as needed without worry. New subscriptions start at $450 per month for a basic install and scale with your support and geographic replication requirements. Yearly pricing and purchase order options are also available.
Quay Enterprise Fully Integrated Into Tectonic
In addition, Quay Enterprise is now fully integrated into Tectonic, CoreOS’s universal Kubernetes solution for deploying, managing, and securing containers. With the purchase of Tectonic, you will receive Quay Enterprise as an application delivered on top of the Kubernetes-plus-CoreOS stack. For more information about Tectonic, please see details here.
For those who aren’t ready for the full Tectonic stack, Quay Enterprise is still available as a standalone product, runnable via containers behind your firewall.
Meet Our Team at an Upcoming Event
If you want to learn more about Quay, Quay Enterprise, and Tectonic, join us at one of the upcoming events in New York, Washington D.C., San Francisco, or Barcelona.
Kubernetes Meetup New York
Meet Joey Schorr, lead software engineer on Quay, at the inaugural Kubernetes New York meetup at 6:30 p.m. ET on November 5, 2015. Joey will present Application Lifecycle on Kubernetes via Quay. He will show how to build a simple hello world app, make a code change, have Quay pick that up, build it, and update it on Kubernetes.
LISA15 Washington, D.C.
Barak Michener, backend developer at CoreOS, will be on hand to talk with you about Quay and more at the USENIX LISA15 conference, November 8-13 in Washington, D.C. Interested in working on Quay? Our recruiting team will be there too!
KubeCon 2015
Meet us at KubeCon, where we’ll have a booth and you can learn all about Quay and Tectonic. Stop by to hear talks from Brandon Philips, CoreOS CTO, and Eugene Yakubovich, maintainer of flannel, the software-defined networking solution from CoreOS.
DockerCon Europe 2015
Stop by the Quay booth at DockerCon Europe, November 16-17, to talk with the Quay and Tectonic teams. Learn more about how to use the most secure container registry, and pick up your Quay, CoreOS and Tectonic t-shirts and stickers.
Tectonic Summit New York 2015
CoreOS is taking over New York City the first week of December. Request your invite to join us for the leadership event for container infrastructure. You’ll hear firsthand accounts of how companies like Viacom, Verizon Labs, SoundCloud and more are using containers, and learn from some of the industry's finest container experts from Deis, ClusterHQ, Sysdig, Google, Intel and more.
The Quay container registry enables you to build, store and distribute your containers in your own private environment or in the cloud. Today Quay supports container images for Docker and rkt.
We are pleased to introduce the newest features in Quay.io’s enterprise offering. Delivered by CoreOS, Quay.io is a private hosted and enterprise container registry, ideal for secure hosting of private Docker and rkt repositories. New Quay.io features include OpenStack support, various optimizations, and a streamlined release process, all of which make it easier to set up and manage the enterprise registry on premises.
In the New York City area? We invite you to the New York City CoreOS Meetup tonight, July 28, 6 p.m. ET at Work-Bench and we welcome you to come by and ask the Quay.io team any questions. We’re also featuring Gabriel Monroy (@gabrtv), CTO of Engine Yard and creator of Deis, who will speak about Deis, Kubernetes and CoreOS. Eugene Yakubovich (@eyakubovich), software engineer and maintainer of flannel at CoreOS, will speak about what’s under the hood of Tectonic.
We are excited to continue delivering dynamic features requested in the Quay.io container registry as both a standalone enterprise product and as a feature of Tectonic. Read on for more details on the new features.
Support for OpenStack Keystone and OpenStack Swift
Among the key features of Quay.io Enterprise is support for popular authentication and storage mechanisms, which reduces initial setup time and integration complexity for developer and operations teams.
Today we’re pleased to announce support for two key pieces of OpenStack: OpenStack Authentication (Keystone) and OpenStack Storage (Swift). Each can be selected in the Quay.io Enterprise Setup Wizard and set up to provide authentication for users of the registry and/or storage for the underlying registry data. OpenStack support joins Quay.io’s existing support for authentication (LDAP, custom JWT) and storage (S3, GCS, RADOS).
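As a rough sketch, the choices made in the Setup Wizard end up in the registry's configuration file. The keys, endpoints, and credentials below are illustrative assumptions, not taken from the product documentation:

```yaml
# Illustrative sketch only -- the Setup Wizard generates the real configuration.
# Endpoint, container, and credential values here are placeholders.
AUTHENTICATION_TYPE: Keystone
KEYSTONE_AUTH_URL: https://keystone.example.com:5000/v2.0

DISTRIBUTED_STORAGE_CONFIG:
  default:
    - SwiftStorage
    - auth_url: https://keystone.example.com:5000/v2.0
      swift_container: quay-registry
      swift_user: quay
      swift_password: example-password
```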
Streamlined Release Process
To improve customer communication, the Quay.io team has been building a streamlined, brand-new release process. A changelog is now available in the Super User panel of the application. Every release in the changelog corresponds to a tag that can be pulled directly (i.e. docker pull quay.io/coreos/registry:v1.10.0).
In addition to specific versions, the latest tag is still available to make retrieving the newest version easy and direct. For every release of the registry, there will also be a corresponding release of the build worker. By running a builder with the same version/tag as the registry, you can verify compatibility between the two.
Quay.io and rkt
We are dedicated to the continued development and support of rkt. Today CoreOS is a part of an industry group that is dedicated to the creation of a standard specification for containers, known as the Open Container Initiative (previously called Open Container Project).
The specification is being developed collectively across the industry, and CoreOS CTO Brandon Philips is one of its maintainers. Once the specification is ready and fulfills the requirements of the most rigorous security and production environments, we plan to support it. Companies can continue to choose Quay.io to securely host modern container runtime images, such as Docker or rkt repositories.
June 3, 2015 · By Micha "mies" Hernandez van Leuffen
Today's guest post has been written by Micha "mies" Hernandez van Leuffen, the founder and CEO of wercker, a platform and tool for building, testing and deploying in the modern world of microservices, containers and clouds.
The landscape of production has changed: monolithic is out, loosely coupled microservices are in. Modern applications consist of multiple moving parts, but most of the existing developer tooling we use was designed and built in the world of monolithic applications.
Working with microservices poses new challenges: your applications now consist of multiple processes, multiple configurations, multiple environments and more than one codebase.
Containers offer a way to isolate and package your applications along with their dependencies. Docker and rkt are popular container runtimes and allow for a simplified deployment model for your microservices. Wercker is a platform and command line tool built on Docker that enables developers to develop, test, build and deliver their applications in a containerized world. Each build artifact from a pipeline is a container, which gives you an immutable testable object linked to a commit.
In this tutorial, we will build and launch a containerized application on top of Kubernetes. Kubernetes is a cluster orchestration framework started by Google, specifically aimed at running container workloads. We will use quay.io from CoreOS for our container registry and wercker (of course!) to build the container and trigger deploys to Kubernetes.
The workflow we will create is depicted below:
Workflow from build to deployment.
This tutorial assumes you have the following set up:
A fork of the application we will be building which you can find on GitHub.
You've added the above application to wercker and are using the Docker stack to build it.
The application we will be developing is a simple API with one endpoint, which returns an array of cities in JSON. You can check out the source code for the API on GitHub. The web process listens on port 5000; we'll need this information later on.
Now, let's create our Kubernetes service configuration and include it into our repository.
We define the port that our application listens on and use the public IP addresses we got when creating our Kubernetes cluster. We're using Google Container Engine, which supports createExternalLoadBalancer. If you're using a platform that doesn't support createExternalLoadBalancer, you need to add the public IP addresses of the nodes to the publicIPs property.
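A sketch of what such a service definition might have looked like with the Kubernetes API of the time (the service name, selector, and IP address below are illustrative; the cities-service.json in the repository is the authoritative version):

```json
{
  "kind": "Service",
  "apiVersion": "v1beta3",
  "metadata": { "name": "cities" },
  "spec": {
    "ports": [{ "port": 5000, "targetPort": 5000 }],
    "selector": { "name": "cities" },
    "createExternalLoadBalancer": true,
    "publicIPs": ["203.0.113.10"]
  }
}
```

If your platform supports createExternalLoadBalancer, the publicIPs list can be dropped; otherwise, remove the load-balancer flag and list your node addresses in publicIPs instead.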
Next, we're going to define our pipeline, which describes how wercker will build and deploy your application.
wercker.yml - build pipeline
On wercker, you structure your pipelines in a file called wercker.yml. It’s where you define the actions (steps) and environment for your tasks (tests, builds, deploys). Pipelines can either pass or fail, depending on the results of the steps within. Steps come in three varieties: steps from the wercker step registry, inline script steps, and internal steps that run with extra privileges.
Pipelines also come with environment variables, some of which are set by default, others you can define yourself. Each pipeline can have its own base container (the main language environment of your application) and any number of services (databases, queues).
Now, let's have a look at our build pipeline for the application. You can check out the entire wercker.yml on GitHub.
```yaml
build:
  box: google/golang
  steps:
    # Test the project
    - script:
        name: go test
        code: go test ./...

    # Statically build the project
    - script:
        name: go build
        code: CGO_ENABLED=0 go build -a -ldflags '-s' -installsuffix cgo -o app .

    # Create cities-controller.json only for initialization
    - script:
        name: create cities-controller.json
        code: ./create_cities-controller.json.sh

    # Copy binary to a location that gets passed along to the deploy pipeline
    - script:
        name: copy binary
        code: cp app cities-service.json cities-controller.json "$WERCKER_OUTPUT_DIR"
```
The box is the container and environment in which the build runs. Here we see that we're using the google/golang image as a base container for our build as it has the golang language and build tools installed in it. We also have a small unit test inside of our code base which we run first. Next we compile our code and build the app executable.
As we want to build a minimal container, we statically compile our application. We disable cgo (the ability for Go packages to call C code) with CGO_ENABLED=0, force a rebuild of all dependencies with the -a flag, and strip debug information with -ldflags '-s', resulting in an even smaller binary.
Next, we create our Kubernetes replication controller programmatically based on the git commit using a shell script. You can check out the shell script on GitHub.
The last step copies the executable and the Kubernetes service and controller definitions into the $WERCKER_OUTPUT_DIR folder; the contents of this folder get passed along to the /pipeline/source/ folder within the deploy pipeline.
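The hand-off between pipelines can be sketched with plain directories (the temporary paths below only simulate wercker's behavior; in a real run, wercker does this copying for you):

```shell
# Simulate the build pipeline's output folder and the deploy
# pipeline's source folder with two temporary directories.
WERCKER_OUTPUT_DIR=$(mktemp -d)   # end of the build pipeline
DEPLOY_SOURCE=$(mktemp -d)        # stands in for /pipeline/source/

# A build step drops its artifacts into $WERCKER_OUTPUT_DIR...
echo "fake-binary" > "$WERCKER_OUTPUT_DIR/app"

# ...and wercker carries that folder's contents into the deploy pipeline.
cp -R "$WERCKER_OUTPUT_DIR"/. "$DEPLOY_SOURCE"

ls "$DEPLOY_SOURCE"
```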
wercker.yml - push to quay.io
We're now ready to set up our deploy pipelines and targets. We will create two deploy targets. The first will push our container to Quay.io, the second will perform the rolling update to Kubernetes. Deploy targets are created in the wercker web interface and reference the corresponding section in the wercker.yml.
Deploy targets in wercker.
In order to add any information such as usernames, passwords, or tokens that our deploy target might need, we define these as environment variables for each target. These environment variables will be injected when a pipeline is executed.
Quay.io is a public and private registry for Docker image repositories. We will be using Quay.io to host the container image that is built from wercker.
```yaml
deploy:
  box: google/golang
  steps:
    # Use the scratch step to build a container from scratch based on the files present
    - internal/docker-scratch-push:
        username: $QUAY_USERNAME
        password: $QUAY_PASSWORD
        cmd: ./app
        tag: $WERCKER_GIT_COMMIT
        ports: "5000"
        repository: quay.io/wercker/wercker-kubernetes-quay
        registry: https://quay.io
```
The deploy section of our wercker.yml above consists of a single step. We use the internal/docker-scratch-push step to create a minimal container based on the files present in the $WERCKER_ROOT environment variable (which contains our binary and source code) from the build, and push it to Quay.io. The $QUAY_USERNAME and $QUAY_PASSWORD parameters are environment variables that we have entered on the wercker web interface. For the tag, we use the git commit hash, so each container is versioned. This hash is available as an environment variable from within the wercker pipeline.
The cmd parameter is the command that we want to run on start-up of the container, which in our case is our application that we've built. We also need to define the port on which our application will be available, which should be the same port as in our Kubernetes service definition. Finally, we fill in the details of our Quay.io repository and the URL of the registry.
If you take a look at your Quay.io dashboard you will see that the final container that was pushed is just 1.2MB!
wercker.yml - Kubernetes rolling update
For this tutorial, we assume you've already created a service with an accompanying replication controller. If not, you can do this via wercker as well; see the initialize section in the wercker.yml.
Let's proceed to do the rolling update on Kubernetes, replacing our pods one-by-one.
The environment variables are again defined in the wercker web interface. The $KUBERNETES_MASTER environment variable contains the IP address of our instance.
Kubernetes credentials defined in the pipeline.
We execute the rolling update command and tell Kubernetes to use our Docker container from Quay.io with the image parameter. The tag we use for the container is the git commit hash.
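A sketch of what this deploy target's section might look like in the wercker.yml (the target name, controller name, and exact kubectl invocation are assumptions for illustration; see the repository's wercker.yml for the real file):

```yaml
# Illustrative sketch: a deploy target that rolls the new image out.
kubernetes-deploy:
  box: google/golang
  steps:
    - script:
        name: rolling update
        code: |
          kubectl --server="$KUBERNETES_MASTER" \
            rolling-update cities \
            --image=quay.io/wercker/wercker-kubernetes-quay:$WERCKER_GIT_COMMIT
```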
In this tutorial, we have showcased how to build minimal containers and use wercker as our assembly line. Our final container was just 1.2MB, making for low-cost deploys!
Though the Go programming language compiles to single binaries, making our life easier, the lessons here apply to other programming languages as well.
Using wercker's automated build process we've not only created a minimal container, but also linked our artifact versioning to git commits in Quay.io.
Pairing our versioned containers with Kubernetes' orchestration capabilities results in a radically simplified deployment process, especially with the power of rolling updates!
In short, the combination of Kubernetes, Quay.io and wercker is a powerful and disruptive way of building and deploying modern-day applications.
In this article we've just scratched the surface of developing container-based microservices. To learn more about Kubernetes check out the getting started guides. For more information on Quay.io, see the documentation site. You can sign up for wercker here and more information and documentation is available at our dev center. The source code for our final application including its wercker.yml is available on GitHub.
At CoreOS we aim to deliver best-in-class deployment infrastructure to the application container industry. Today at CoreOS Fest we're announcing new features in Quay, our hosted private enterprise container registry.
With an updated build system and UI, Quay’s first-to-market features include a new caching layer, image tagging history, and secure hosting for private container repositories for both rkt and Docker, enabling a simpler, faster and easier way to create and use application containers. In addition, today Quay adds official support for the App Container specification (appc), giving companies application container portability and choice. Read on for more details.
New UI and features add even more performance and reliability
Quay is an advanced image registry delivered by CoreOS for containerized applications and a key part of infrastructure that can be run behind a firewall, allowing companies to maintain security and take advantage of container-based systems. With a simple but powerful UI, DevOps and developers spend less time managing the containers and more time creating and using them.
“We have been long-term customers of Quay because of the security elements involved from day one,” said Frank Macreery, co-founder and CTO of Aptible. “The new features such as the build system and time machine propel Quay’s technology to the forefront of deployment infrastructure.”
New Quay features available today include:
Clean User Interface: A simplified UI is faster, fully responsive on mobile, and includes a new search system making it even quicker and easier to create and maintain container repositories.
New caching layer, faster builds: Quay now pre-calculates caching information, so builds will happen even faster and production deployments can be developed incrementally.
Time machine: Users can now see the history of all the tags in their repository for up to a two-week period, and can revert tags to a previous state. With this new functionality, users can be more confident than ever when pulling a tag from Quay in production environments: newer images can be pushed with the safety of undoing the operation in just a few clicks, and any unexpected issues can be fixed quickly and efficiently.
Support for git submodules: Highly requested by customers, now more teams are able to make use of the Quay build infrastructure.
Support for Bitbucket and GitLab: With additional generic Git support, now any Git repo can be built via a webhook callback.
Support for encrypted passwords: Users can now create and use encrypted versions of their passwords on the Docker command line, compensating for the fact that Docker stores passwords in plaintext in .dockercfg files.
Quay and Tectonic - looking ahead
Moving forward, we will continue to invest heavily in the container registry as both a standalone product and as a feature of Tectonic.
In order to continue to provide our customers with the highest quality products, we are delivering an upgraded build experience and added functionality to Quay today so that more and more companies can maintain security and effectively create and use application containers.
Quay now supports converting and fetching images that adhere to the App Container specification (appc), which outlines how to define and build containerized applications. These images can be used with rkt, a container runtime designed for server environments with the most rigorous security and production requirements. These technologies enable portability within the container ecosystem, giving companies real flexibility when choosing how to run their containerized applications.
Get started with Quay here. Today at CoreOS Fest in San Francisco, our team will discuss what’s new at 1 p.m. PT. For those unable to make it in person, sessions will be recorded and posted online soon.
Today we round up our newest features and shine a spotlight on them. Since joining the CoreOS team, we have been working hard on features to improve the Quay.io experience. Highlights include squashed images (an experimental feature) for faster downloading, added build features for more control with build automation tools, and improved notification features for better communications within teams.
Have you ever had to deploy a repository to a large number of machines, and wished for a faster way to do so?
We're happy to announce a new experimental feature which allows you to download a flattened version of a tag in your repository as a single layer. Instead of downloading each layer of a repository one by one, you can download and load a single image of your repository. This cuts down on the network round trips associated with a pull and, even better, allows you to download your repository without using any bandwidth for deleted files. For example, if you have an image with a Go binary, you can rm the Go compiler after the build, and we’ll only serve the remaining files. Additionally, after you pull this image once, we will cache it so that subsequent pulls of the same image are even faster!
Using the squashed image is super simple: If you ask for tag foobar, then executing the download command will result in a new image named foobar.squash, which can be run the exact same way as its base tag:
There are some best practices for using this feature optimally. First, since you will not benefit from layering any more, you should only use this to pull to machines which will only pull the image once, and will not perform upgrades. Ephemeral or auto-scaling machines are prime candidates. Second, to take advantage of the caching, you should let the first pull of the squashed image complete before pulling it on subsequent nodes. If you plan to pull to tens, hundreds, or thousands of machines, it is best to prime the cache before doing so.
You can find the command to pull and load a squashed image in the main tree view for your repository. Please replace the placeholder credentials with the credentials of one of your own robots in order to run it. We have tons of tests which verify that these images are correct, but please test and verify that this feature works for you before adopting it in production.
The ability to build your repositories automatically from GitHub has been wildly popular. With great popularity come great feature requests, such as: building only the branches you specify, testing builds in different branches, and having access to the .git directory that would be present in a clone. We’ve got two and a half of those features ready!
Building select branches
As of now, you can filter the branches you want to build from GitHub. You can specify any regular expression; if it matches the branch name, the build proceeds.
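To get a feel for how such a filter behaves, grep -E below stands in for the matcher wercker applies server-side (the regular expression itself is just an example):

```shell
# Example filter: build only master and release/* branches.
BRANCH_REGEX='^(master|release/.+)$'

should_build() {
  if echo "$1" | grep -Eq "$BRANCH_REGEX"; then
    echo "build"
  else
    echo "skip"
  fi
}

should_build "master"        # build
should_build "release/1.2"   # build
should_build "feature/foo"   # skip
```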
Manually triggering a branch
We also allow you to manually trigger a build from any of the branches in your repository.
Finding the revision
As for the .git directory, we were originally unable to provide it because we are not building clones; we’re building archives. After digging into your requests a bit more, it seems that what you really need is to be able to tell which SHA you’re actually building, at build time. So, in the interim, we've created a synthetic .git directory that allows you to run the commands git rev-parse --short HEAD and git rev-parse HEAD. We hope this will allow you to continue forward with your build automation tools.
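The two commands behave just as they would in a full clone; the throwaway repository below only demonstrates what they return (the directory and commit are fabricated for the example):

```shell
# Create a throwaway repository standing in for the build checkout.
BUILD_DIR=$(mktemp -d)
cd "$BUILD_DIR"
git init -q
git -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "initial"

# These are the two commands the synthetic .git directory supports.
FULL_SHA=$(git rev-parse HEAD)
SHORT_SHA=$(git rev-parse --short HEAD)

echo "building commit $SHORT_SHA (full: $FULL_SHA)"
```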
Communication and Notification Improvements
To best facilitate communication with and between your teams, we've also added notification types for Slack, Flowdock, and HipChat. Using the API-token style integrations provided by these tools, you can now receive your repository notifications, such as build status and pushes, directly in your team chat provider.
Lastly, we’ve made some ease-of-use updates to other parts of Quay.io, such as:
Inviting users to join teams can now be done by email address, and users must confirm that they would like to join
Ability to regenerate a robot’s credentials, in case you accidentally leak them (e.g. by running docker login without specifying quay.io)
Added support for the docker search command. Queries can be issued like so: docker search quay.io/somesearchterm
We look forward to hearing your feedback on the newest updates. Sign up for a free trial of Quay.io here. Or, if you are interested in storing your Docker containers behind the firewall, try CoreOS Enterprise Registry.