Wednesday, June 1, 2016

Beyond "Hello World" with Docker and Azure




I am relatively new to Docker containerization and have been working through numerous tutorials and reading plenty of great information from various blogs and websites, especially Docker's own excellent website.

In my studies thus far I have learned a great deal from these online sources; however, one tutorial that seems to be eluding me is the one showing the leap from a trivial "Hello World" application to a non-trivial, multi-tier app that moves seamlessly from development to the cloud, specifically focusing on the MS Azure cloud. I did find a good tutorial at Real Python, upon which I based this tutorial.

A key difference between the two posts is that in this post we will be deploying to MS Azure. I will be working from my Mac running OS X El Capitan (the steps should be similar for Windows users). We will deploy a Python (Flask) web app with a Postgres DB backend to Docker on a development machine, including a reverse proxy and a web server, and once everything is configured and executing correctly we will use the Docker Azure driver to deploy to Microsoft's cloud.

Before starting this tutorial you should have already established some of the basics of both Docker and Azure. I will not be providing introductory details on these topics, as there are many excellent resources already covering this material. However, this tutorial is self-contained and is designed to work with minimal prerequisites.

Prerequisites:


Docker tools check:

Open a terminal window; a quick pre-test of your environment should reveal version numbers equal to or higher than those below. If the version numbers are lower or the components are not available, download and install the Docker Toolbox.

$ > docker-machine --version
docker-machine version 0.7.0, build a650a40

$ > docker-compose --version
docker-compose version 1.7.1, build 0a9ab35

$ > docker --version
Docker version 1.11.1, build 5604cbe

$ > git clone -b ForBeyondHelloBlog \
--single-branch https://github.com/dphiggs01/simple-assessment.git


The source code:

Next use the last command above to clone the project. This gives us a base project that we can deploy. Once we have the code checked out we will examine all the relevant Docker files in detail. Although this is a Python application, we will not be covering any Python; those with limited or no Python experience will be able to execute and understand the Docker and Azure aspects of this tutorial, as there are no language dependencies.

docker-compose.yml
web:
  build: ./wsgi
  expose:
    - "8000"
  links:
    - postgres
  volumes:
    - /usr/src/app/static
  env_file: assess.env
  command: /usr/local/bin/gunicorn --workers 2 --bind :8000 app:app

nginx:
  build: ./nginx/
  ports:
    - "80:80"
  volumes_from:
    - web
  links:
    - web

data:
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  command: "true"

postgres:
  image: postgres:latest
  volumes_from:
    - data
  ports:
    - "5432:5432"

The docker-compose.yml file configures four Docker containers (web, nginx, data, and postgres). We briefly describe each below:

  1. The web container is built from the Dockerfile found in the ./wsgi directory. That file's contents are the single line FROM python:3.4-onbuild, which tells Docker to build on the Python 3.4 image from the official Python repository on Docker Hub. The -onbuild variant provides some nice additional help for the developer: it instructs Docker to install any Python library listed in the requirements.txt file and to copy the source in the current directory to /usr/src/app/ on the container. With these tasks complete we expose port 8000 for our web server and instruct gunicorn to start and execute our Python app. Additionally, the links directive links the web container to the postgres container, and the volumes directive makes the /usr/src/app/static directory accessible to another container (nginx in our example). A sketch of both Dockerfiles appears after this list.
  2. The nginx container similarly uses the Dockerfile in the ./nginx directory, pulling the latest stable version of nginx from the official repo and copying our custom configuration file to the appropriate directory on the container. The ports directive maps port 80 on the Docker machine to port 80 in the container so nginx can accept requests. Finally, the volumes_from and links directives give nginx access to the web container's data.
  3. The data container is defined solely to persist our database content. The data we put here will not be lost if we modify or recreate our actual postgres container. It may be interesting to note that even though we are defining a postgres:latest image for this container, we are not actually running Postgres in it; we simply use the image to create the container, and the command "true" exits immediately, but since we share the volumes the data is still available. This data-only container technique is a Docker best practice.
  4. And finally, the postgres container is also based on the official Docker Hub repo and makes the default Postgres port (5432) available.
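
For reference, here is roughly what the two build contexts contain. The wsgi/Dockerfile is the single onbuild line quoted in item 1; the nginx Dockerfile and configuration below are only a sketch of the pattern described in item 2, so the file names, destination path, and proxy settings are assumptions and the actual files in the repository may differ.

wsgi/Dockerfile
FROM python:3.4-onbuild

nginx/Dockerfile (sketch)
FROM nginx:latest
COPY nginx.conf /etc/nginx/conf.d/default.conf

nginx.conf (sketch)
server {
    listen 80;

    location / {
        # "web" resolves to the web container via the compose links directive
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /static/ {
        # served from the volume shared by the web container
        alias /usr/src/app/static/;
    }
}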
With this configuration we are ready to get to work with docker-machine.

Docker Machine:

Next we will use docker-machine to create an environment for development. The docker-machine create command essentially downloads boot2docker and starts a VM with Docker running in it. Use the docker-machine ls command to confirm the environment is running.

$ > docker-machine create -d virtualbox devl
Running pre-create checks...
Creating machine...
(devl) Copying boot2docker.iso to ./machines/devl/boot2docker.iso...
(devl) Creating VirtualBox VM...
(devl) Creating SSH key...
(devl) Starting the VM...
(devl) Check network to re-create if needed...
(devl) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running 
on this virtual machine, run: docker-machine env devl

$ > docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         
devl      -        virtualbox   Running   tcp://192.168.99.100:2376   


Execute the docker-machine env command to see the environment configuration, then run the eval command to apply it to your local shell.

$ > docker-machine env devl
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/danhiggins/.docker/machine/machines/devl"
export DOCKER_MACHINE_NAME="devl"
# Run this command to configure your shell: 
# eval $(docker-machine env devl)

$ > eval $(docker-machine env devl)
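
With the shell configured, any docker command in this terminal now targets the devl VM. You can quickly confirm which machine is active (it should print devl):

$ > docker-machine active
devl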

Docker Compose:

With the environment set, cd into the working directory of the project (i.e. the directory containing the docker-compose.yml file) and build and start the containers.

$ > docker-compose build
$ > docker-compose up -d
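
Before creating the database it is worth checking that the containers came up. Running docker-compose ps should list the web, nginx, data and postgres containers, with everything except data in the Up state (data runs "true" and exits immediately, which is expected):

$ > docker-compose ps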

Create the database for our application with the command below:
$ > docker-compose run web /usr/local/bin/python create_db.py
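
If you are curious what that script does, a typical create_db.py for a Flask/SQLAlchemy project is only a few lines. The sketch below assumes the app module exposes a SQLAlchemy instance named db; the actual file in the repository may differ.

# create_db.py (sketch) -- create the application's tables in Postgres
from app import db

if __name__ == "__main__":
    db.create_all()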

Find the IP of the Docker machine and open it in your web browser (http://<ip>):
$ > docker-machine ip devl
192.168.99.100


The browser should display the home page of the application and prompt you to enter an email address.
If you have reached this point you have successfully deployed your app on your local machine!


Moving to the Cloud:

The final step is to deploy the application that we have running on our local machine to the Azure cloud; to do this we will use the docker-machine Azure driver. The first step is to create an environment variable with your AZURE_SUBSCRIPTION_ID. You can find your ID on the Azure Dashboard by clicking on "Subscriptions" in the left side menu. The Azure driver has many options; however, the defaults are quite reasonable for basic usage. To see the options, enter the command below:
$ > docker-machine create --driver azure --help

$ > export AZURE_SUBSCRIPTION_ID={{YOUR_ID_GOES_HERE}}
$ > docker-machine create --driver azure --azure-open-port 80 \
--azure-resource-group {{YOUR_RESOURCE_GRP_GOES_HERE}} incubate
Running pre-create checks...
(incubate) Completed machine pre-create checks.
Creating machine...
(incubate) Querying existing resource group.name="BusinessInnovationTeam"
(incubate) Resource group "BusinessInnovationTeam" already exists.
(incubate) Configuring availability set.  name="docker-machine"
(incubate) Configuring network security group.  name="incubate-firewall"
(incubate) Querying if virtual network already exists.
name="docker-machine-vnet" location="westus"
(incubate) Configuring subnet.  name="docker-machine" 
vnet="docker-machine-vnet" cidr="192.168.0.0/16"
(incubate) Creating public IP address.  name="incubate-ip" static=false
(incubate) Creating network interface.  name="incubate-nic"
(incubate) Creating virtual machine.  username="docker-user" 
osImage="canonical:UbuntuServer:15.10:latest" name="incubate" 
location="westus" size="Standard_A2"
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running 
on this virtual machine, run: docker-machine env incubate

$ > docker-machine env incubate
$ > eval $(docker-machine env incubate)

$ > docker-compose build
$ > docker-compose up -d

$ > docker-compose run web /usr/local/bin/python create_db.py

$ > docker-machine ip incubate
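
As before, point your browser at the IP address returned and you should see the application's home page, this time served from Azure. When you are done experimenting, docker-machine rm incubate removes the machine (and should also clean up the Azure resources it created) so the VM does not keep accruing charges.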