AMP for Docker
Over the past few weeks I’ve been working on Docker support for our own Application Management Platform (AMP), aka AMP for Docker. This is now available: it enables single-click deployment of multi-component applications to Docker containers, and provides automated runtime management such as auto-scaling to add or remove Docker containers on the fly. This means that AMP Blueprints can now run on top of Docker for super-fast, portable deployments.
My original goal when I started on this was to use Docker to accelerate dev-test of complex multi-machine blueprints, but where it’s proving really useful is in running integration tests against management policies for scaling and failover. And because our Docker support is built upon Brooklyn’s location model, you can switch to public or private cloud locations or traditional fixed-IP targets and run without Docker, or even install Docker first as part of the blueprint!
What is Docker?
Docker is an open source tool for packaging an application and its dependencies in a virtual container that can run on any Linux server. It is a lightweight and portable alternative to a full-blown VM. The promise of Docker is that the same container can be run anywhere: the container a developer builds and tests on a laptop will run at scale, in production, on VMs, bare-metal servers, public clouds, private clouds, or combinations of these. Common use cases for Docker are:
- Automating the packaging and deployment of applications
- Automated testing and continuous integration/deployment
- Deploying and scaling web apps, databases and backend services
So I thought Docker would be a good fit for AMP, which is itself built on the open source project Brooklyn.
Deploying apps on Docker
AMP provides a control plane for application deployment and runtime management. Through jclouds or native cloud integrations, AMP can deploy an application blueprint to a wide range of cloud providers and cloud APIs.
With Docker support we can now deploy to Docker containers. Instead of creating new VMs through a cloud provider’s API, AMP creates new containers through the Docker API. These containers can be treated in a similar way to traditional VMs, with several huge benefits:
- Speed: containers are lightning fast to start because they share the kernel with the host OS.
- Lightweight: many containers can run on the same host – Docker adds little overhead over co-locating processes.
- Isolation: containers are isolated from each other – for example, processes can think they are running as root without affecting other containers.
AMP also supports deploying Docker itself. For example, you can create new VMs in your favourite cloud and automatically install Docker onto them. Those VMs can then be targets for creating Docker containers when deploying application blueprints. I tested primarily on IBM SoftLayer CCI instances, and also on a selection of other clouds.
Jclouds Docker provider
The first natural step for this integration was to create a new jclouds provider for Docker in jclouds-labs/docker. This is of course in its early stages – I’ve just issued a pull request in response to JCLOUDS-500 – but it has proven usable even in relatively complex scenarios. I’d love to hear your opinions. It reminds me of the VirtualBox jclouds provider I worked on years ago when I began contributing to jclouds. In my opinion, though, the remote API (RESTful-ish, but workable) and the support for images are big wins over VirtualBox.
Give it a go!
To try this out, take a look at brooklyn-docker. This is work in progress, so expect many improvements over the coming weeks. Up-to-date instructions will be maintained there. Using the jclouds provider you can configure a Docker location by including a new section in your ~/.brooklyn/brooklyn.properties.
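As a rough illustration, a named Docker location in brooklyn.properties might look something like the following; the property names, location name, and endpoint here are a sketch rather than the definitive syntax, so check the brooklyn-docker README for the current settings:

```properties
# Hypothetical example: a named jclouds Docker location pointing at a
# Docker daemon's remote API endpoint (host and port are placeholders).
brooklyn.location.named.my-docker = jclouds:docker:http://192.168.42.43:4243
brooklyn.location.named.my-docker.identity = notused
brooklyn.location.named.my-docker.credential = notused
```

With a section like this in place, blueprints can target the location by its name (here, `my-docker`) just as they would any cloud location.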
There are example application blueprints at brooklyn-docker in the docker-examples project, which you can deploy to Docker containers.
To deploy Docker itself on, say, a new VM in IBM SoftLayer you can use the brooklyn-docker example SingleDockerHostExample.
This makes it even easier to get started with Docker, or with a cluster of VMs running Docker. We are working hard to further simplify the deployment of application blueprints to Docker containers, and the automatic management of the Docker hosts on your favourite cloud or fixed infrastructure. Our goal is to create dynamic Docker Cloud locations in AMP, in line with the work we are doing with Waratek, and we will blog about this shortly.
For example, this is a screenshot of the AMP blueprint WebClusterDatabaseExample: a 3-tier application blueprint deployed using Brooklyn on a Docker location, composed of a load-balancer (nginx), a clustered JBoss server, and a MySQL server. The following figure shows the Brooklyn console while the application blueprint is running:
And the corresponding console output produced by Brooklyn:
To test out the jclouds-docker provider directly, take a look at my jclouds-labs fork. Use the provider name “docker”, and configure the endpoint.
Technical challenges and solutions
Often the devil is in the details, and this was no exception. There were a number of technical issues, but standing on the shoulders of giants – aka the Apache jclouds and Docker communities – is amazingly helpful and worthwhile. I’ll mention just a few small but illustrative obstacles I stumbled upon along the way:
In order to mimic the behavior of the nodes that jclouds manages, we need to make Docker containers look like any other VM. Fortunately, that is not much work: the only prerequisite is that the node (VM or container, it doesn’t matter) needs to be ssh’able. This involved creating a custom image, as the vanilla base images did not include sshd.
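An sshd-enabled image of that era typically looked something like the Dockerfile below. This is a sketch of the general pattern, not necessarily the exact image used here; the base image, password, and package names are illustrative:

```dockerfile
# Hypothetical sketch of an sshd-enabled base image so jclouds can
# treat the container like any other ssh'able node.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
# Placeholder credentials for illustration only
RUN echo 'root:password' | chpasswd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```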
Docker’s network management is a very nice feature, in my opinion. Each container can have access to the internet via NAT. To access a container’s ports, one can set up port-forwarding through the Docker API, and use the `docker inspect container` API to get the mapping between the container’s ports and the host’s ports. This is wired up to jclouds’ “inboundPorts”: if you ask jclouds to open up access to particular ports, the Docker port mapping will be configured automatically, and the actual port mappings can then be queried through jclouds. As you can imagine, this is very handy for the AMP integration: having access to the port mapping is fundamental to deploying most applications on Docker. In AMP, the mapped ports can be advertised, and other software components and services can be configured to point at them (e.g. to configure a load-balancer).
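To make the port-mapping step concrete, here is a small Python sketch that extracts container-port-to-host-port mappings from the kind of JSON that `docker inspect` returns. The sample document and field names follow the shape Docker used at the time (`NetworkSettings.Ports`), but treat it as illustrative rather than a specification:

```python
import json

# Illustrative sample of the JSON returned by Docker's inspect API;
# the host ports shown are the kind of ephemeral ports Docker assigns.
inspect_output = json.loads("""
{
  "NetworkSettings": {
    "Ports": {
      "22/tcp":   [{"HostIp": "0.0.0.0", "HostPort": "49153"}],
      "8080/tcp": [{"HostIp": "0.0.0.0", "HostPort": "49154"}]
    }
  }
}
""")

def port_mappings(inspect_json):
    """Map each exposed container port to the host port Docker chose."""
    mappings = {}
    ports = inspect_json.get("NetworkSettings", {}).get("Ports") or {}
    for container_port, bindings in ports.items():
        for binding in bindings or []:
            # Keys look like "8080/tcp"; keep just the numeric port.
            mappings[int(container_port.split("/")[0])] = int(binding["HostPort"])
    return mappings

print(port_mappings(inspect_output))  # → {22: 49153, 8080: 49154}
```

A mapping like this is what gets advertised as sensors in AMP, so that, for example, a load-balancer can be pointed at the host port rather than the unreachable container port.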
This work gives AMP blueprints the flexibility and portability offered by Docker containers. It’s great for everything from our testing (the speed of spinning up fresh containers hugely accelerates our automated live tests) to production use-cases (deploying complex multi-component applications such as OpenGamma).
There are exciting opportunities for AMP runtime management policies, especially when you combine deploying Docker itself with creating Docker containers. Policies such as auto-scaling can choose to create new containers on the optimal host (taking into account affinity and anti-affinity for performance and HA); and can choose to horizontally scale the set of VMs comprising the Docker cluster.
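As a flavour of what such a placement policy might do, here is a minimal sketch of anti-affinity-aware host selection: prefer the host running the fewest containers of the same application, breaking ties by overall load. The function and data structures are hypothetical, not AMP’s actual policy API:

```python
from collections import Counter

def choose_host(hosts, app):
    """Pick the host with the fewest containers of `app`, then fewest overall.

    `hosts` maps a host name to the list of app names already running there.
    """
    return min(hosts, key=lambda h: (Counter(hosts[h])[app], len(hosts[h])))

hosts = {
    "vm-1": ["web", "web", "db"],
    "vm-2": ["web"],
    "vm-3": ["db", "db"],
}
print(choose_host(hosts, "web"))  # → vm-3 (no "web" containers yet)
```

A real policy would also weigh CPU and memory headroom, and could trigger the creation of a new Docker host VM when no existing host scores well enough.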
Enterprises can run their production apps over a pool of machines running Docker, improving resource utilisation beyond what is possible with VMs. If an app needs to scale rapidly, it can do so much faster than with VMs. Specifying priorities along with minimum requirements for apps takes this even further: one can scale-back some apps (for a few minutes while more VMs start up) so that high-priority apps can burst within seconds or even milliseconds.
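The scale-back idea can be sketched as a simple allocation rule: guarantee every app its minimum, then hand spare capacity to apps in priority order. The field names and function below are illustrative, not an AMP interface:

```python
def allocate(capacity, apps):
    """Allocate container slots: minimums first, then spare capacity
    to apps in descending priority order.

    `apps` is a list of dicts with 'name', 'priority', 'minimum' and
    'desired' entries (hypothetical field names for illustration).
    """
    allocation = {a["name"]: a["minimum"] for a in apps}
    spare = capacity - sum(allocation.values())
    for a in sorted(apps, key=lambda a: -a["priority"]):
        extra = min(spare, a["desired"] - a["minimum"])
        allocation[a["name"]] += extra
        spare -= extra
    return allocation

apps = [
    {"name": "batch", "priority": 1, "minimum": 2, "desired": 8},
    {"name": "web",   "priority": 9, "minimum": 2, "desired": 6},
]
print(allocate(10, apps))  # → {'batch': 4, 'web': 6}
```

Under this rule the high-priority "web" app bursts to its desired size immediately, while the low-priority "batch" app is held back until more capacity (e.g. new VMs) comes online.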
To find out more about Docker integration with AMP or jclouds, please contact us or look for andreaturli on IRC at #jclouds, #docker or #brooklyncentral.
Update April 14, 2014
We recently contributed to the official Docker documentation to help people get started with Docker on IBM SoftLayer, and we are delighted to report that it has now been published here.