Spray server in a Docker container

Docker is a pretty new but very exciting project; with Docker you can create lightweight, self-sufficient containers with any application inside, and later run those containers on a variety of hosts. The same container can be run locally as well as in production, at scale. Moreover, running containers is fast and has little overhead in terms of memory, CPU and disk space.

Docker + spray

“Any application” can of course also mean a Java-based server! In this example we’ll create a Docker image running a simple REST server implemented using the excellent Spray toolkit, which itself is based on Akka and implemented in Scala.

The server (Spray-based)

The sources of the project can be found on GitHub. The server project consists of a single source file, DockedServer. DockedServer is an object extending the App trait, which means it is runnable as a top-level Java class and has an automatically generated public static void main(...). Inside the object we start a server which binds to 0.0.0.0:8080 and serves three types of requests:

  • GET /hello
  • GET /counter/[name]
  • POST /counter/[name]?amount=[number]

Apart from being a simple hello-world server, it also shows how Spray integrates with Akka actors and Futures, and how type-safe parameter handling works.
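To give an idea of how concise the routing DSL is, here is a minimal sketch of what such routes could look like in Spray. This is illustrative only, not the exact code from the repository: the in-memory TrieMap counter store stands in for the actor-based counters, and the response strings are made up.

```scala
import akka.actor.ActorSystem
import spray.routing.SimpleRoutingApp
import scala.collection.concurrent.TrieMap

// Illustrative sketch of a Spray server with the three routes described above
object DockedServer extends App with SimpleRoutingApp {
  implicit val system = ActorSystem()

  // simple in-memory counter store (the real project uses an actor)
  val counters = TrieMap[String, Int]()

  startServer(interface = "0.0.0.0", port = 8080) {
    path("hello") {
      get { complete("Hello!") }
    } ~
    path("counter" / Segment) { name =>
      get {
        complete(s"$name: ${counters.getOrElse(name, 0)}")
      } ~
      post {
        // type-safe parameter handling: "amount" must parse as an Int,
        // otherwise Spray rejects the request automatically
        parameters('amount.as[Int]) { amount =>
          counters.put(name, counters.getOrElse(name, 0) + amount)
          complete("ok")
        }
      }
    }
  }
}
```

Note how the Segment path matcher extracts the counter name and the parameters directive converts the query parameter to an Int before our code ever sees it.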

The project is built with sbt; the build file is fairly simple. It uses the sbt-assembly plugin to generate a single, runnable fat-jar (note the mainClass and assemblySettings settings). No (servlet) containers or application servers, just a simple jar!
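For reference, a build file along these lines might look as follows. The project name, version numbers and main class here are illustrative, not taken from the repository:

```scala
import AssemblyKeys._

// brings in the assembly task and related settings from sbt-assembly
assemblySettings

name := "docker-spray-example"

scalaVersion := "2.11.2"

libraryDependencies ++= Seq(
  "io.spray"          %% "spray-can"     % "1.3.1",
  "io.spray"          %% "spray-routing" % "1.3.1",
  "com.typesafe.akka" %% "akka-actor"    % "2.3.4"
)

// the object with the main method, used as the fat-jar's entry point
mainClass in assembly := Some("DockedServer")
```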

Hence, after cloning the repository and running sbt assembly, you should see a docker-spray-[...].jar file in the target/scala-2.11 directory, which you can run with:

java -jar docker-spray-[...].jar

and test if the server works locally:

curl "http://localhost:8080/hello"
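The counter endpoints can be exercised the same way (the counter name "visits" below is just an example):

```shell
# increment the "visits" counter by 3
curl -X POST "http://localhost:8080/counter/visits?amount=3"

# read the current value back
curl "http://localhost:8080/counter/visits"
```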

Creating a Docker image

Now that we have a runnable jar, we can create a Docker image. The image will contain the Spray-based server, and all required OS-level dependencies (e.g. Java). The image will be runnable on any host with Docker installed (the host can have any configuration), without any need for further modification.

Firstly we need Docker of course; Docker has a great interactive tutorial and reference/installation instructions, so I recommend reading these to understand the basics.

As I use OS X, to install the Docker daemon I had to use the boot2docker wrapper, which manages a lightweight Linux VM in VirtualBox. The Docker daemon runs inside that VM, and the VM serves as the host for running Docker containers. All of that is covered in the installation instructions, and boils down to a single brew install command. You don’t need that additional step if you are running Linux.

To create an image, the most convenient option is to write a Dockerfile, which describes the steps required to create the image from scratch. The Dockerfile for our example is quite short. First we specify that the base image is going to be dockerfile/java, a “trusted” Ubuntu build which has Java 7 installed.

You can think of Docker as “Git for VM images”. After each statement from the Dockerfile is processed, a commit is made, which can then be used as a base for creating other images (corresponding to branching). That’s exactly what we are doing with dockerfile/java: forking/branching off that image.

Other important statements are:

  • ADD, which copies the fat-jar to the image
  • ENTRYPOINT, which specifies what command should be run when the image is run (makes the image behave almost as an executable)
  • EXPOSE, which exposes a port from the container to the host
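Putting those statements together, the Dockerfile could look roughly like this (the jar path and file name are illustrative):

```dockerfile
# "trusted" Ubuntu base image with Java 7 preinstalled
FROM dockerfile/java

# copy the fat-jar produced by `sbt assembly` into the image
ADD target/scala-2.11/docker-spray-example.jar /app/server.jar

# run the server when a container is started from this image
ENTRYPOINT ["java", "-jar", "/app/server.jar"]

# the server binds to port 8080 inside the container
EXPOSE 8080
```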

We can now build the image. Remember to run sbt assembly first, to get the fat-jar. Then, run:

docker build .

Note the id of the last image produced – that’s the one we’ll want to run. You can also list the available images with docker images.

Finally, we can run a container based on the image we have just created. To run, execute:

docker run --rm -p 9090:8080 [image id]

The parameters are:

  • --rm – removes the container after run completes. The run will complete when we interrupt the process (CTRL+C). It is also possible to run a container as a daemon with the `-d` option.
  • -p 9090:8080 – here we are remapping the ports. Port 8080 of the container will get exposed as port 9090 of the host. Note that when using boot2docker, you also need to map the ports of the host VM to the Mac (see the installation docs)
  • [image id] – the id of the image from which the container will be started
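For example, to run the container in the background rather than interactively (`[image id]` stands for the id noted after the build):

```shell
# start the container detached; docker prints the new container's id
docker run -d -p 9090:8080 [image id]

# list running containers, and stop ours when done
docker ps
docker stop [container id]
```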

Docker stack

We can start multiple containers (servers) side-by-side from the same image, each completely isolated from the others. Testing a cluster has never been easier!
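Starting two isolated instances of the server from the same image is just a matter of mapping different host ports:

```shell
# two containers from the same image, on host ports 9090 and 9091
docker run -d -p 9090:8080 [image id]
docker run -d -p 9091:8080 [image id]
```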

Also, it is possible to push the image to the Docker index, so that others can use the image or extend it! You can have both public and private images.

Cloud

It has also recently become possible to deploy Docker containers on Amazon Elastic Beanstalk. With a simple descriptor, you can very quickly get a managed environment with auto-scaling, load balancing, monitoring, rolling updates and other goodies.

To deploy a container based on an image which is pushed to the index, you need to create a Dockerrun.aws.json file and upload it when configuring your EB application. In our example, the file is very simple:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "adamw/spray-example",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}

It specifies where to get the image from, and which port is exposed. The port gets automatically remapped to port 80 on the web-facing EB instance. The spray-example image deployed without problems; within a couple of minutes I was able to start serving requests!

Summing up

Docker is still a young project, and we will certainly see a lot of interesting applications of the technology in the near future. The Git-like way of building new images from existing ones, the ease of starting a new container, and the isolation are all very convenient for development and deployment. Definitely a space to watch.

  • Rory

    I recently came across the idea of using Docker/Scala to provide fully ‘packaged’ apps. This is neat, now all I need to do is write a sbt plugin to package them automatically :)

  • Rory

It appears that it’s already been done: https://github.com/marcuslonnberg/sbt-docker

  • Jason Scott

    Hi, great blog… I’ve followed all the steps and managed to get it working, but I’ve got a question about the port forwarding…

I’m on a mac so using boot2docker. So first the app is running on port 8080. We have started docker mapping 8080 to port 9090. Now this is still within boot2docker. So I additionally had to open a session with “boot2docker ssh -L 9090:localhost:9090” to be able to access the app from the mac on 9090.

    Is this correct… In the docker documentation on setting up boot2docker it says I can use this to setup port forwarding:

    “for i in {49000..49900}; do
      VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port$i,tcp,,$i,,$i";
      VBoxManage modifyvm "boot2docker-vm" --natpf1 "udp-port$i,udp,,$i,,$i";
    done”

    It states that the manual port forward should not be necessary with this…

    Any ideas? I’m assuming that the VBox ports from 49000..49900 are just used by the docker daemon and client and I still have to manually set up the port forward between the mac and the boot2docker vm.
    Just seems a little clunky!

    Regards… Jason.

  • http://www.warski.org/ Adam Warski

    If your boot2docker vm forwards ports 49000..49900 then you should map your container’s port 8080 to e.g. 49000 (or any other port in that range). The app will then be available under that port.

  • Jason Scott

    Just wondering why the app seems to be served by nginx when hosted on Elastic Beanstalk but when running locally it’s using spray-can. Maybe EB puts an nginx server in front of the docker container???

  • http://www.warski.org/ Adam Warski

    That’s probably because EB automatically places a proxy when deploying apps – for example to map the internal EC2 instance IP to the external, public IP (that would be my guess at least). In EB you just need to expose “something” at port 80, the rest is mapped automatically (e.g. through nginx) to the public endpoint.