Docker and AWS

Dockerising our apps

Aug.9.2016

This blog series will show how Docker, together with Atlassian products, can be used in organisations to reduce waste and ultimately increase productivity. In part one of the series, we show the creation of a JIRA container pre-populated with demo data.

Project motivation

I remember how the news about Docker went viral back in early 2013 when it was first released. It was the subject of discussion in most technical newsletters, user groups and forums. What made it sell like hotcakes was the way it allowed users to wrap their software and all its dependencies in a container that could be shared with anyone. Users could now, with a single command, start up environments fully installed with the necessary software. This is the official statement from the Docker website:

Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.

When the excitement about Docker was high, I decided to install and play around with it. I remember the first test I did was to start another lightweight Ubuntu instance from my machine which I could access and make configuration changes to. That was really fun.

Another thing that made Docker so popular was its ease of use. It was well designed to work in a similar fashion to git, based on a client/server architecture. Users work locally to create an image, commit it and push it to Docker Hub. Others can then search the hub for your images, pull them and run them… I bet by now that sounds similar to git, except that we’re talking about images instead of source code (smile).
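To make the parallel concrete, here is roughly what that workflow looks like on the command line (the image name myorg/myapp is just a placeholder):

docker build -t myorg/myapp .   # build an image from the Dockerfile in the current directory
docker login                    # authenticate against Docker Hub
docker push myorg/myapp         # publish the image to Docker Hub
docker pull myorg/myapp         # anyone can now fetch the image...
docker run myorg/myapp          # ...and run it as a container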

The advent of Docker also opened up a new battleground in the continuous integration world. As you might have guessed, CI vendors raced to provide ways of deploying software directly to Docker. Atlassian was, proudly, among the first to embrace it: starting with Bamboo 5.0, Bamboo has offered a task that lets users build an image, run a container, and push and/or pull Docker images from a Docker registry. To the best of my knowledge, no other vendor offered such complete integration with Docker at the time. Since then, many CI vendors have embraced the technology and some have even secured funding to diversify their products.

Moreover, IaaS and PaaS providers like AWS and DigitalOcean have started offering their own Docker orchestration services. AWS, for example, has introduced the EC2 Container Service (ECS), which lets you schedule EC2 instances capable of deploying and running your Docker containers. Other orchestration services are Kubernetes, Docker Swarm, Deis, Tutum, Mesos and a host of others.

All of the above goes to show that mastering Docker is well worth it – and that it is here to stay!

Docker: a high-level overview

To understand Docker a little better, we will use the official pictures from its website to illustrate. As the saying goes, a picture is worth a thousand words:

[Figure: virtual machine architecture (left) versus Docker architecture (right)]

 

The image on the left depicts the traditional scenario of installing VMs on top of the host machine, which in turn has a hypervisor layer to install the guest OSes. The corresponding image on the right shows the typical architecture of Docker. As you can readily see, the major difference is that the VM setup has an additional hypervisor layer: a monitor that allows you to install and manage a farm of guest OSes. As you can imagine, this can be resource-intensive, as each guest OS is heavy and takes up computing resources.

The Docker solution, on the other hand, collapses the hypervisor and guest OS layers into what is called the Docker engine. This engine is a lightweight runtime that uses the Linux kernel to start containers, each running as an isolated process.

Speaking of kernels, the diagram below gives a clearer picture, focusing on the kernel and the containers running on it.

 

[Figure: two containers running in isolation on top of a shared OS kernel]

This shows that on this OS kernel, we have two containers running isolated from each other. One container is a lightweight BusyBox OS with nothing much on it. The other is a Debian machine loaded with emacs and Apache. In a similar way, we can add another container loaded with any kind of software we need and share it around.
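You can try this yourself: the two commands below (purely illustrative) start a bare BusyBox shell and a Debian shell side by side on the same kernel:

docker run -it busybox sh      # a minimal BusyBox container with an interactive shell
docker run -it debian bash     # a Debian container, inside which you could apt-get install emacs apache2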

Installing Docker

Installing Docker on a Mac is quite straightforward via a .dmg. The installation comes with the following items:

  • Docker CLI client for running the Docker engine
  • Docker Machine
  • Docker Compose
  • Kitematic, the Docker GUI
  • Docker Quickstart Terminal
  • Oracle VM VirtualBox

But why do we need Oracle VM VirtualBox if, as the architecture in the previous section showed, we don’t need a guest OS to run Docker? The reason is that the Docker engine daemon uses Linux-specific kernel features, which means we can’t run the engine natively on OS X. Docker Machine, which comes with the installation, creates and manages a very lightweight Linux VM loaded with the Docker engine.
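For instance, once the installation is done, you can manage that VM with a few Docker Machine commands (the VM is typically named default):

docker-machine start default         # boot the lightweight Linux VM
eval $(docker-machine env default)   # point the Docker CLI at the VM's daemon
docker info                          # verify the client can reach the engine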

Dockerising: a one-minute demo

To extend our skills a bit further, we are going to Dockerise JIRA and pre-populate it with some data.

The core idea behind the one-minute demo application is to give users a single command that starts a JIRA instance pre-populated with the relevant data. This will enable us to demo our services to customers.

For this project, we have defined the requirements for the instance in Excel sheets – things like the project name, boards, issue details and sprints. We then use a Groovy script to transform the contents of the Excel sheet into several JSON files. These JSON files form the payloads passed to REST POST calls that create JIRA projects, issues and boards, and that start and end a sprint. To run those REST calls against an instance, we need a running JIRA – which, in our case, is spun up as a Docker container.
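As an illustration, one of those REST calls might look roughly like this; the endpoint is JIRA's standard create-project resource, while the admin credentials and the project.json file name are assumptions based on our setup:

curl -u admin:admin -X POST -H "Content-Type: application/json" \
  -d @project.json http://localhost:8080/rest/api/2/project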

Looking at the project, we will create two containers: one starting a fresh JIRA instance, the other running a script that transforms the data into JSON and uploads it via REST. So how do we link the containers together? This is where Docker Compose comes into play, wiring multiple containers together and determining the order in which they are started.

In a nutshell, this is our current structure:

[Figure: project structure – the JIRA Dockerfile and docker-compose.yml at the root, with the scripts directory alongside]

The Dockerfile here contains the full prescription of our JIRA instance. It starts a fresh JIRA instance loaded with a bundled H2 database that initialises the installation and creates a user admin identified by the password admin. This is made possible by copying the dbconfig.xml and h2db.mv.db files to the JIRA home and home/database directories respectively.

Dockerfile: JIRA
# Ubuntu
FROM ubuntu:16.04
MAINTAINER Sultan Maiyaki
RUN apt-get update
RUN apt-get install -q -y curl
#version of JIRA
ARG JIRA=7.1.7
# install Oracle Java 8 (the download URLs in the RUN below are reconstructed from these version variables)
ENV DEBIAN_FRONTEND noninteractive
ENV VERSION 8
ENV UPDATE 91
ENV BUILD 14
ENV JAVA_HOME /usr/lib/jvm/java-${VERSION}-oracle
ENV OPENSSL_VERSION 1.0.2g
RUN apt-get update && apt-get install ca-certificates curl \
gcc libc6-dev libssl-dev make \
-y --no-install-recommends && \
curl --silent --location --retry 3 --cacert /etc/ssl/certs/GeoTrust_Global_CA.pem \
--header "Cookie: oraclelicense=accept-securebackup-cookie;" \
"http://download.oracle.com/otn-pub/java/jdk/${VERSION}u${UPDATE}-b${BUILD}/jdk-${VERSION}u${UPDATE}-linux-x64.tar.gz" \
| tar xz -C /tmp && \
mkdir -p /usr/lib/jvm && mv /tmp/jdk1.${VERSION}.0_${UPDATE} "${JAVA_HOME}" && \
curl --silent --location --retry 3 --cacert /etc/ssl/certs/GlobalSign_Root_CA.pem \
"https://www.openssl.org/source/openssl-${OPENSSL_VERSION}.tar.gz" \
| tar xz -C /tmp && \
cd /tmp/openssl-${OPENSSL_VERSION} && \
./config --prefix=/usr && \
make clean && make && make install && \
apt-get remove --purge --auto-remove -y \
gcc \
libc6-dev \
libssl-dev \
make && \
apt-get autoclean && apt-get --purge -y autoremove && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN update-alternatives --install "/usr/bin/java" "java" "${JAVA_HOME}/bin/java" 1 && \
update-alternatives --install "/usr/bin/javaws" "javaws" "${JAVA_HOME}/bin/javaws" 1 && \
update-alternatives --install "/usr/bin/javac" "javac" "${JAVA_HOME}/bin/javac" 1 && \
update-alternatives --set java "${JAVA_HOME}/bin/java" && \
update-alternatives --set javaws "${JAVA_HOME}/bin/javaws" && \
update-alternatives --set javac "${JAVA_HOME}/bin/javac"
# Downloading JIRA (URL reconstructed from Atlassian's standard binary download location)
RUN /usr/sbin/useradd --create-home --home-dir /usr/local/jira --shell /bin/bash jira
RUN curl --silent --location --output /root/jira_software.tar.gz \
    "https://www.atlassian.com/software/jira/downloads/binary/atlassian-jira-software-${JIRA}.tar.gz"
RUN mkdir -p /opt/jira # JIRA installation directory
RUN tar zxf /root/jira_software.tar.gz --strip=1 -C /opt/jira
RUN mkdir -p /opt/jira-home # JIRA home directory
RUN mkdir -p /opt/jira-home/database
RUN echo "jira.home = /opt/jira-home" > /opt/jira/atlassian-jira/WEB-INF/classes/jira-application.properties
# copy all the necessary files
COPY h2db.mv.db /opt/jira-home/database
COPY dbconfig.xml /opt/jira-home/
RUN rm -f /opt/jira-home/.jira-home.lock
CMD ["/opt/jira/bin/start-jira.sh", "-fg"]

Now, the scripts directory contains the Groovy script, a directory for the JSON files that will be created, and a Dockerfile which looks pretty much like this:

Dockerfile: scripts
# Ubuntu
FROM ubuntu:16.04
MAINTAINER Sultan Maiyaki
RUN apt-get update
RUN apt-get install -q -y curl
RUN mkdir -p /opt/manual_import
# copy all the necessary files
COPY ["manual_import", "/opt/manual_import"]
COPY ["entrypoint.sh", "/opt"]
WORKDIR /opt
RUN chmod a+x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]

It doesn’t do much other than run an entry point script that, in turn, executes our REST calls. Eventually, when our Groovy script is mature enough, the entry point will first execute the Groovy script to generate updated JSON before calling the REST services.
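A minimal sketch of what such an entrypoint.sh could look like, assuming the JIRA container is reachable under its Compose link name 1mindemo_jirah2 (the wait loop and the payload file name are illustrative):

#!/bin/bash
# wait until the linked JIRA instance responds on port 8080
until curl --silent --fail http://1mindemo_jirah2:8080/status > /dev/null; do
  echo "Waiting for JIRA to start..."
  sleep 10
done
# replay the generated JSON payloads against the REST API
curl -u admin:admin -X POST -H "Content-Type: application/json" \
  -d @manual_import/project.json http://1mindemo_jirah2:8080/rest/api/2/project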

Building and running

As mentioned earlier, we use Docker Compose to wire the two containers together. We also link them in such a way that the JIRA container must be started before the script container runs. This is achieved with the links directive in our Docker Compose file, which looks like this:

Docker Compose
version: '2'
services:
  1mindemo_jirah2:
    build: .
    ports:
     - "8080:8080"
    expose:
     - "8080"
  
  scripts:
    build: scripts/ 
    links:
     - 1mindemo_jirah2

The file is in YAML format. As you can see, the scripts service has a link to the 1mindemo_jirah2 service.

The next step is to run our containers. You can follow the steps below.

Clone the repo, navigate to the root directory and, for the first run, execute: docker-compose up --build

[Screenshot: docker-compose up --build output]

After the JIRA process has started, all the scripts to populate the data are run. A nice touch here is that the services are distinguished by colour – the JIRA process in blue, the scripts in yellow.

[Screenshots: colour-coded docker-compose output while JIRA starts and the data scripts run]

Subsequently, you can use the same command without the build option: docker-compose up. You can find the URL of your instance by running docker-machine ip. This returns the IP, which you can then use like this: http://192.168.1.100:8080 (assuming the returned IP was 192.168.1.100).

If you want to access the instance on localhost, you will need to enable port forwarding. This can be done by editing the network settings of the lightweight VirtualBox VM started by Docker: click port forwarding and add 8080 to the list.
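Alternatively, the same rule can be added from the command line with VBoxManage (assuming the Docker VM is named default):

VBoxManage controlvm "default" natpf1 "jira,tcp,127.0.0.1,8080,,8080"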

[Screenshots: VirtualBox network settings showing the port forwarding rule for port 8080]

You can verify that your service is started by checking the Docker process with the command docker ps -a:

[Screenshot: docker ps -a output listing the two containers]

You can log in to JIRA on port 8080 with username “admin” and password “admin”:

[Screenshot: the JIRA login screen]

You can shut down the service by running the command:

docker rm -f container_id

Using the container IDs shown in the earlier screenshot of our services, we can stop the containers with the commands below:

docker rm -f 968c4b165bdb
docker rm -f 9ec1ccc8c854

If those are the only containers running, you can stop them all with a single command:

docker rm -f $(docker ps -qa)

What’s next?

So – we’ve reviewed Docker and how powerful it can be, and have seen it at work in the one-minute demo project. From this point, you can get access to the repo, install Docker and, with a single command, start your demo environment within a minute.

In the next few blogs, we shall explore the following:

  • Deploying our Docker container on AWS. AWS has a service, released around 2015, called the EC2 Container Service (ECS), that allows us to spin up Docker-optimised EC2 instances capable of deploying and running our containers. We could also use AWS Elastic Beanstalk to achieve the same.
  • Once we are able to establish deployment on AWS, we can show how Bamboo can automate the whole process.

Stay tuned!
