Sunday, June 11, 2017

Declarative environments with Docker-Compose

Using Docker Compose
Manipulating environments and iterating on projects
Scaling services and cleaning up
Building declarative environments


Consider some common scenarios:

Have you ever joined a team with an existing project and struggled to get your development environment set up or your IDE configured? If someone asked you to provision a test environment for their project, could you enumerate all the questions you would need to ask to get the job done? Can you imagine how painful it is for development teams and system administrators to resynchronize when environments change? All of these are common and high-effort tasks. They can be time-intensive while adding little value to a project. In the worst case, they give rise to policies or procedures that limit developer flexibility, slow the iteration cycle, and bring paths of least resistance to the forefront of technical decision making.

This article introduces Docker Compose (also called Compose) and shows how you can use it to solve these common problems.

Docker Compose: UP
Compose is a tool for defining, launching, and managing services, where a service is defined as one or more replicas of a Docker container. Services and systems of services are defined in YAML files and managed with the command-line program docker-compose. With Compose you can use simple commands to accomplish these tasks:

Build Docker images
Launch containerized applications as services
Launch full systems of services
Manage the state  of individual services in a system
Scale services up or down
View logs  for the collection of containers making a service

Compose lets you stop focusing on individual containers and instead describe full environments and service component interactions. A Compose file might describe four or five unique services that are interrelated but should maintain isolation and may scale independently. This level of interaction covers most of the everyday use cases for system management. For that reason, most interactions with Docker will be through Compose.

Installing docker-compose
By this point you have almost certainly installed Docker, but you may not have installed Compose. You can find up-to-date installation instructions for your environment at https://docs.docker.com/compose/install/. Official support for Windows had not been implemented at the time of this writing, but many users have successfully installed Compose on Windows through pip (a Python package manager). Check the official site for up-to-date information. You may be pleasantly surprised to find that Compose is a single binary file and that the installation instructions are quite simple. Take the time to install Compose now.

The best way to develop an appreciation for any tool is to use it. The rest of this section will get you started with a few situational examples.

Onboarding with a simple development environment
Suppose you have started a new job as a software developer with a forward-looking team that owns a mature project. If you have been in a similar situation before, you may anticipate that you're going to spend a few days installing and configuring your IDE and getting an operational development environment running on your workstation. But on your first day at this job, your peers give you three simple instructions to get started:

Install Docker
Install Docker Compose
Install and use Git to clone the development environment
Rather than ask you to clone a development environment here, I'll have you create a new directory named wp-example and copy the following docker-compose.yml file into that directory:
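Here is a minimal sketch of what such a file could contain; the image choices and the database password are illustrative assumptions rather than the project's actual values:

```yaml
wordpress:
  image: wordpress
  links:
    - db:mysql          # WordPress discovers the database through this link
  ports:
    - "8080:80"         # serve the site on host port 8080

db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example   # illustrative only; pick your own secret
```

The v1 Compose syntax (no top-level version key) matches the tooling of this article's era.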



As you may be able to tell from examining the file, you're going to launch a WordPress service and an independent database. This is an iteration of a basic example. Change to the directory where you created the docker-compose.yml file and start it all up with the following command:

  docker-compose up

This should result in output similar to the following:

  Creating wpexample_db_1...
  Creating wpexample_wordpress_1...
  ...

You should be able to open http://localhost:8080/ (or replace "localhost" with your virtual machine's IP address) in a web browser and discover a fresh WordPress installation. This example is fairly simple but does describe a multi-service architecture. Imagine a typical three- or four-tier web application that consists of a web server, application code, a database, and maybe a cache. Launching a local copy of such an environment might typically take a few days, longer if the person doing the work is less familiar with some of the components. With Compose, it's as simple as acquiring the docker-compose.yml file and running docker-compose up.
When you have finished having fun with your WordPress instance, you should clean up. You can shut down the whole environment by pressing Ctrl-C. Before you remove all the containers that were created, take a moment to list them with both the docker and docker-compose commands:

    docker ps
    docker-compose ps

Using docker displays a list of two or more containers in the standard fashion. But listing the containers with docker-compose includes only the containers that are defined by the docker-compose.yml in the current directory. This form is more refined and succinct. Filtering the list in this way also helps you focus on the containers that make up the environment you're currently working on. Before moving on, take the time to clean up the environment.
Compose has an rm command that's very similar to the docker rm command. The difference is that docker-compose rm will remove all services or a specific service defined by the environment. Another minor difference is that the -f option doesn't force the removal of running containers. Instead, it suppresses a verification stage.
So, the first step in cleaning up is to stop the environment. You can use either docker-compose stop or docker-compose kill for this purpose. Using stop is preferred to kill for reasons explained earlier. Like other Compose commands, these can be passed a service name to target for shutdown.
Once you have stopped the services, clean up with the docker-compose rm command. Remember, if you omit the -v option, volumes may become orphaned:

   docker-compose rm -v
Compose will display a list of the containers that are going to be removed and prompt you for verification. Press the Y key to proceed. With the removal of these containers, you're ready to learn how Compose manages state and tips for avoiding orphan services while iterating.
This WordPress sample is trivial. Next, you'll see how you might use Compose to model a much more complicated environment.

A complicated architecture: distribution and Elasticsearch integration
You'll launch four related components that together provide a Docker registry that's configured to pump event data into an Elasticsearch instance and provide a web interface for searching those events.
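A Compose file for such a system could be sketched along these lines. The service names follow the components discussed below (pump, elasticsearch, calaca), but the build paths, image tag, and the registry's host port are assumptions, not the project's actual values:

```yaml
registry:              # Docker registry configured to emit event notifications
  build: ./registry
  ports:
    - "5555:5000"      # assumed host port

elasticsearch:         # receives and indexes the event data
  image: elasticsearch

pump:                  # forwards registry events into Elasticsearch
  build: ./pump
  links:
    - elasticsearch:es

calaca:                # web interface for searching the events
  build: ./calaca
  ports:
    - "3000:3000"
```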



Setting up this example by hand would require image builds and careful accounting while linking containers together. You can quickly re-create the example by cloning an existing environment from version control and launching it with Compose:

  git clone https://github.com/dockerinaction/ch11_notifications.git
  cd ch11_notifications
  docker-compose up -d
  
When you run the last command, Docker will spring to life, building various images and starting containers. This run differs from the first example in that you use the -d option. This option launches the containers in detached mode. It operates exactly like the -d option on the docker run command. When the containers are detached, the log output of each container will not be streamed to your terminal.
If you need to access that data, you could use the docker logs command for a specific container, but that doesn't scale well if you're running several containers. Instead, use the docker-compose logs command to get the aggregated log stream for all containers or some subset of the services managed by Compose. For example, if you want to see the logs for all services, run this:

   docker-compose logs

This command will automatically follow the logs, so when you have finished, press Ctrl-C to quit. If you want to see the logs for only one or more services, then name those services:

   docker-compose logs pump elasticsearch

In this example, you launched the complete environment with a single command and viewed the output with a single command. Being able to operate at such a high level is nice, but the more powerful fact is that you're also in possession of the various sources and can iterate locally with the same ease.
Suppose you have another service that you'd like to bind on port 3000. This would conflict with the calaca service in this example. Making the change is as simple as editing ch11_notifications/docker-compose.yml and running docker-compose up again. Take a look at the relevant part of the file:
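The portion in question is the calaca service definition, which ends with the port mapping that the next paragraph asks you to change (the build path here is an assumption):

```yaml
calaca:
  build: ./calaca
  ports:
    - "3000:3000"    # host port 3000 mapped to container port 3000
```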



Change the last line where it reads 3000:3000 to 3001:3000 and save the file. With the change made, you can rebuild the environment by simply running docker-compose up -d again. When you do, Compose will stop the currently running containers, remove them, create new containers, and reattach any volumes that may have been mounted on the previous generation of the environment. When possible, Compose will limit the scope of restarted containers to those that have changed.
If the sources for your services change, you can rebuild one or all of your services with a single command. To rebuild all the services in your environment, run the following:

   docker-compose build

If you need to rebuild only one or some subset of your services, then simply name the services. This command will rebuild both the calaca and pump services:

   docker-compose build calaca pump

At this point, stop and remove the containers you created for these services:

   docker-compose rm -vf

In working through these examples, you've touched on the bulk of the development workflow, and there are few surprises: Docker Compose lets the person or people who define the environment worry about the details of working with Docker, and frees users and developers to focus on the contained applications.

Iterating within an Environment
Learning how Compose fits into your workflow requires a rich example. This section uses an environment similar to one you might find in a real API product. You'll work through scenarios and manage the full life cycle for many services. One scenario will guide you through scaling independent services, and another will teach you about state management. Try not to focus too much on how the environment is implemented.
The environment you're onboarding with in this section is an API for working with coffee shop metadata. It's the brainchild of a hot new startup catering to local entrepreneurs and freelancers, at least for the purpose of the example. The structure of the environment is described below.
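Sketched as a v1 Compose file, the environment's four services and their relationships (each is described in detail later in this article) look roughly like this; the image tags and the exact proxy configuration are assumptions:

```yaml
proxy:       # NGINX load balancer, the public entry point
  image: nginx
  ports:
    - "8080:8080"
  links:
    - coffee

coffee:      # Flask-based Python API built from local sources
  build: ./coffee
  expose:
    - "3000"
  links:
    - db

db:          # Postgres database
  image: postgres
  volumes_from:
    - dbstate

dbstate:     # volume container holding the database files
  image: gliderlabs/alpine
  volumes:
    - /var/lib/postgresql/data/pgdata
```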



Download this example from the GitHub repository:

git clone https://github.com/dockerinaction/ch11_coffee_api.git

When you run this command, Git will download the most recent copy of the example and place it in a new directory named ch11_coffee_api under your current directory. When you're ready, change into that directory to start working with the environment.

Build, start, and rebuild services
With the sources and environment description copied from version control, start the development workflow by building any artifacts that are declared in the environment. You can do that with the following command:

   docker-compose build

The output from the build command will include several lines indicating that specific services have been skipped because they use an image. This environment is made up of four components. Of those, only one requires a build step: the coffee API. You should see from the output that when Compose built this service, it triggered a Dockerfile build and created an image. The build step runs a docker build command for the referenced services.

The coffee API's source and Dockerfile are contained in the coffee folder. It's a simple Flask-based Python application that listens on port 3000. The other services in the environment are out-of-the-box components sourced from images on Docker Hub.
With the environment built, check out the resulting images that have been loaded into Docker. Run docker images and look for an image named ch11coffeeapi_coffee. Compose uses labels and prefixed names to identify images and containers that were created for a given environment. In this case, the image produced for the coffee service is prefixed with ch11coffeeapi_ because that's the derived name for the environment. The name comes from the directory where the docker-compose.yml file is located.
You've built a local artifact for the coffee API, but the environment may reference images that aren't present on your system. You can pull all of those with a single Compose command:

  docker-compose pull

This command will pull the most recent images for the tags referenced in the environment. At this point, all the required artifacts should be available on your machine. Now you can start services. Start with the db service and pay special attention to the logs:


docker-compose up -d db

Note: Before Compose started the db service, it started the dbstate service. This happens because Compose is aware of all the defined services in the environment, and the db service has a dependency on the dbstate service. When Compose starts any particular service, it will start all the services in the dependency chain for that service. This means that as you iterate, when you only need to start or restart a portion of your environment, Compose will ensure that it comes up with all dependencies attached.
Now that you've seen that Compose is aware of service dependencies, start up the whole environment:
  
    docker-compose up

When you use an unqualified docker-compose up command, Compose will create or re-create every service in the environment and start them all. If Compose detects any services that haven't been built or services that use missing images, it will trigger a build or fetch the appropriate image (just like docker run). In this case, you may have noticed that this command re-created the db service even though it was already running. This is done to ensure that everything has been brought up in a functioning state. But if you know that the dependencies of a particular service are operating correctly, you can start or restart a service without its dependencies. To do so, include the --no-deps flag.

Suppose, for example, that you made a minor adjustment to the configuration of the proxy service (contained in docker-compose.yml) and wanted to restart the proxy only. You might simply run the following:

 docker-compose up --no-deps -d proxy

This command will stop any proxy containers that might be running, remove those containers, and then create and start a new container for the proxy service. Every other service in the system will remain unaffected. If you had omitted the --no-deps flag, every service would have been re-created and restarted, because every service in this environment is either a direct or transitive dependency of proxy.

The --no-deps flag can come in handy when you're starting systems where components have long-running startup procedures and you're experiencing race conditions. In those cases, you might start those components first to let them initialize before starting the rest of the services.

With the environment running, you can experiment with and iterate on the project. Load http://localhost:8080/api/coffeeshops (or use your virtual machine's IP address) in a web browser. If everything is working properly, you should see a JSON document that looks something like this:

{
  "coffeeshops": []
}

This endpoint lists all coffee shops in the system. You can see that the list is empty. Next, add some content to become a bit more familiar with the API you're working on. Use the following cURL command to add content to your database:

curl -H "Content-Type: application/json" \
     -X POST \
     -d '{"name":"Albina Press","address":"5012 Southeast Hawthorne Boulevard, Portland, OR","zipcode":97215,"price":2,"max_seats":40,"power":true,"wifi":true}' \
     http://localhost:8080/api/coffeeshops

You may need to substitute your virtual machine's IP address for "localhost". The new coffee shop should be in your database now. You can test this by reloading /api/coffeeshops/ in your browser. The result should look like the following response:

{
  "coffeeshops": [
    {
      "address": "5012 Southeast Hawthorne Boulevard, Portland, OR",
      "id": 35,
      "max_seats": 40,
      "name": "Albina Press",
      "power": true,
      "price": 2,
      "wifi": true,
      "zipcode": 97215
    }
  ]
}
Now, as is common in the development life cycle, you should add a feature to the coffee API. The current implementation only lets you create and list coffee shops. It would be nice to add a basic ping handler for health checks from a load balancer. Open http://localhost:8080/api/ping (or use your virtual machine's IP address) in a web browser to see how the current application responds.

You're going to add a handler for this path and have the application return the hostname where the API is running. Open ./coffee/api/api.py in your favorite editor and add the following code to the end of the file:

@api.route('/ping')
def ping():
    return os.getenv('HOSTNAME')

If you have problems with the next step in the example, or if you're not in the mood to edit files, you can check out a feature branch of the repository where the changes have already been made:

git checkout feature-ping

Once you've made the change and saved the file (or checked out the updated branch), rebuild and re-create the service with the following commands:

docker-compose build coffee
docker-compose up -d

The first command will run a docker build command for the coffee API again and generate an updated image. The second command will re-create the environment. There's no need to worry about the coffee shop data you created. The managed volume that was created to store the database will be detached and reattached seamlessly to the new database container. When the command has finished, refresh the web page that you loaded for /api/ping earlier. It should display an ID of a familiar style. This is the ID of the container that's running the coffee API. Remember, Docker injects the container ID into the HOSTNAME environment variable.

In this section you cloned a mature project and were able to start iterating on its functionality with a minimal learning curve. Next you'll scale, stop, and tear down services.

Scale and remove services
One of the most impressive and useful features of Compose is the ability to scale a service up and down. When you do, Compose creates more replicas of the containers providing the service. Fantastically, these replicas are automatically cleaned up when you scale down. But as you might expect, containers that are running when you stop an environment will remain until the environment is rebuilt or cleaned up. In this section you'll learn how to scale up, scale down, and clean up your services.

Continuing with the coffee API example, you should have the environment running. You can check with the docker-compose ps command introduced earlier. Remember, Compose commands should be executed from the directory where your docker-compose.yml file is located. If the environment isn't running (with the proxy, coffee, and db services up), bring it up with docker-compose up -d.

Suppose you were managing a test or production environment and needed to increase the parallelism of the coffee service. To do so, you would only need to point your machine at your target environment and run a single command. In this example, you're working with your development environment. Before scaling up, get a list of the containers providing the coffee service:

docker-compose ps coffee

The output should look something  like the following:

Name                     Command           State   Ports
ch11coffeeapi_coffee_1   ./entrypoint.sh   Up      0.0.0.0:32807->3000/tcp

Notice the far-right column, which details the host-to-container port mapping for the single container running the service. You can access the coffee API served by this container directly (without going through the proxy) by using this public port (in this case, 32807). The port number will be different on your computer. If you load the ping handler for this container, you'll see the ID of the container running the service. Now that you've established a baseline for your system, scale up the coffee service with the following command:

docker-compose scale coffee=5

The command will log each container that it creates. Use the docker-compose ps command again to see all the containers running the coffee service:

Name                     Command           State   Ports
ch11coffeeapi_coffee_1   ./entrypoint.sh   Up      0.0.0.0:32807->3000/tcp
ch11coffeeapi_coffee_2   ./entrypoint.sh   Up      0.0.0.0:32808->3000/tcp
...
As you can see, there are now five containers running the coffee API. These are all identical with the exception of their container IDs and names. These containers even use the same port mapping specification. The reason this works is that the coffee API's internal port 3000 has been mapped to the host's ephemeral port (port 0). When you bind to port 0, the OS selects an available port in a predefined range. If the service were instead always bound to port 3000 on the host, only one container could be running at a time.
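In Compose YAML, the distinction comes down to whether the host side of the port mapping is specified. A sketch (not the project's actual file):

```yaml
coffee:
  ports:
    - "3000"          # container port only: the OS picks an ephemeral host
                      # port, so several replicas can run side by side
    # - "3000:3000"   # fixed host port: only one replica could bind it
```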


Test the ping handler on a few different containers (using the dedicated port for each container) before moving on. This example project is used throughout the remainder of the book. At this point, however, there's not much else to do but scale back down to a single instance. Issue a similar command to scale down:

docker-compose scale coffee=1

The logs from the command indicate which instances are being stopped and removed. Use the docker-compose ps command again to verify the state of your environment:

Name                     Command           State   Ports
ch11coffeeapi_coffee_1   ./entrypoint.sh   Up      0.0.0.0:32807->3000/tcp

Before moving on to learning about persistent state, clean up the environment with docker-compose rm so you can start fresh.

Iteration and persistent state
At the end of the last section you stopped and removed all the services and any managed volumes. Before that, you also used Compose to re-create the environment, effectively removing and rebuilding all the containers. This section focuses on the nuances of the workflow and edge cases that can have undesired effects.
First, a note about managed volumes. Volumes are a major concern of state management. Fortunately, Compose makes working with managed volumes trivial in iterative environments. When a service is rebuilt, the attached managed volumes are not removed. Instead, they are reattached to the replacing containers for that service. This means that you're free to iterate without losing your data. Managed volumes are finally cleaned up when the last container is removed using docker-compose rm and the -v flag.


The bigger issue with state management and Compose is environment state. In highly iterative environments you'll be changing several things, including the environment configuration. Certain types of changes can create problems.

For example, if you rename or remove a service definition in your docker-compose.yml, you lose the ability to manage it with Compose. Tying this back to the coffee API, the coffee service was named api during development. The environment was in a constant state of flux, and at some point, while the api service was running, the service was renamed to coffee. When that happened, Compose was no longer aware of the api service. Rebuilds and relaunches worked only on the new coffee service, and the api service was orphaned.

You can discover this state when you use docker ps to list the running containers and notice that containers for old versions of the service are running when none should be. Recovery is simple enough. You can either use docker commands to directly clean up the environment, or add the orphaned service definition back to docker-compose.yml and clean up with Compose.

Linking problems and the network
The last thing to note about using Compose to manage systems of services is the impact of container-linking limitations.

In the coffee API sample project, the proxy service has a link dependency on the coffee service. Remember that Docker builds container links by creating firewall rules and injecting service discovery information into the dependent container's environment variables and /etc/hosts file.
In highly iterative environments, a user may be tempted to relaunch only a specific service. That can cause problems if another service depends on it.
For example, if you were to bring up the coffee API environment and then selectively relaunch the coffee service, the proxy service would no longer be able to reach its upstream dependency. When containers are re-created or restarted, they come back with different IP addresses. That change makes the information that was injected into the proxy service stale.
It may seem burdensome at times, but the best way to deal with this issue in environments without dynamic service discovery is to relaunch whole environments, at a minimum targeting services that don't act as upstream dependencies. This isn't an issue in robust systems that use a dynamic service discovery mechanism or an overlay network for multi-host networking.
So far, you've used Compose in the context of an existing project. When starting from scratch, you have a few more things to consider.
  
Starting a new project: Compose YAML in three samples
Defining an environment is no trivial task, requiring insight and forethought. As project requirements, traffic shape, technology, financial constraints, and local expertise change, so will the environments for your project. For that reason, maintaining clear separation of concerns between the environments and your project is critical. Failing to do so often means that iterating on your environments requires iterating on the code that runs there. This section demonstrates how the features of the Compose YAML format can help you build the environments you need.
The remainder of this section examines portions of the docker-compose.yml file included with the coffee API sample. Relevant excerpts are included in the text.

Prelaunch builds, the environment, metadata, and networking
Begin by examining the coffee service. This service uses a Compose-managed build, environment variable injection, linked dependencies, and a special networking configuration. The service definition for coffee follows:
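Based on the features described below, the definition might be sketched as follows; the environment variable names and values here are assumptions, not the project's actual configuration:

```yaml
coffee:
  build: ./coffee              # Compose-managed build from local sources
  environment:                 # variable injection, list form
    - DB_HOST=db
    - DB_PORT=5432
  labels:                      # metadata, dictionary form
    com.dockerinaction.chapter: "11"
    com.dockerinaction.example: "coffee API"
  expose:
    - "3000"                   # container port opened to linked services
  ports:
    - "3000"                   # host binds an ephemeral port
  links:
    - db:db                    # linked dependency on the database
```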

  

When you have an environment that's closely tied to specific image sources, you might want to automate the build phase of those services with Compose. In the coffee API sample project, this is done for the coffee service. But the use case extends beyond typical development environment needs. If your environments use data-packed volume containers to inject environment configuration, you might consider using a Compose-managed build phase for each environment. Whatever the reason, these builds are available with a simple YAML file. You can also provide an alternative Dockerfile name using the dockerfile key.

The Python-based application requires a few environment variables to be set so that it can integrate with a database. Environment variables can be set for a service with the environment key and a nested list or dictionary of key-value pairs. The coffee service uses the list form.

Alternatively, you can provide one or many files containing environment variable definitions with the env_file key. Similar to environment variables, container metadata can be set with a nested list or dictionary for the labels key. The coffee service uses the dictionary form.

Using detailed metadata can make working with your images and containers much easier, but it remains an optional practice. Compose itself uses labels to store metadata for service accounting.

Last, the coffee service customizes networking by exposing a port, binding to a host port, and declaring a linked dependency. The expose key accepts a list of container ports that should be exposed by firewall rules. The ports key accepts a list of strings that describe port mappings in the same format accepted by the -p option on the docker run command. The links key accepts a list of link definitions in the format accepted by the docker run --link flag.

Known artifacts and bind-mount volumes
Two critical components in the coffee API sample are provided by images downloaded from Docker Hub. These are the proxy service, which uses an official NGINX repository, and the db service, which uses the official Postgres repository. Official repositories are reasonably trustworthy, but it's a best practice to pull and inspect third-party images before deploying them in sensitive environments. Once you've established trust in an image, you should use content-addressable images to ensure no untrusted artifacts are deployed.
Services can be started from any image with the image key. Both the proxy and db services are image-based and use content-addressable images:
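A sketch of those two definitions follows. The digests are elided because the actual pinned values live in the project's docker-compose.yml, and the bind-mount path is an assumption:

```yaml
proxy:
  image: nginx@sha256:...        # content-addressable image (digest elided)
  ports:
    - "8080:8080"
  links:
    - coffee
  volumes:
    - ./proxy/default.conf:/etc/nginx/conf.d/default.conf   # assumed path

db:
  image: postgres@sha256:...     # content-addressable image (digest elided)
  volumes_from:
    - dbstate
```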



The coffee API project uses both a database and a load balancer with only minimal configuration. The configuration that's provided comes in the form of volumes.
The proxy uses a volume to bind-mount a local configuration file into NGINX's dynamic configuration location. This is a simple way to inject configuration without the trouble of building completely new images.
The db service uses the volumes_from key to list services that define required volumes. In this case, db declares a dependency on the dbstate service, a volume container service.
In general, the YAML keys are closely related to features exposed on the docker run command. You can find a full reference at https://docs.docker.com/compose/yml/.

Volume containers and extended services
Occasionally you'll encounter a common service archetype. Examples might include a Node.js service, a Java service, an NGINX-based load balancer, or a volume container. In these cases, it may be appropriate to manifest those archetypes as parent services and then extend and specialize them for particular instances.
The coffee API sample project defines a volume container archetype named data. The archetype is a service like any other. In this case it specifies an image to start from, a command to run, a UID to run as, and label metadata:
data:
  image: gliderlabs/alpine
  command: echo Data container
  user: 999:999
  labels:
    com.dockerinaction.chapter: "11"
    com.dockerinaction.example: "coffee API"
    com.dockerinaction.role: "volume container"

Alone, the service does nothing except define sensible defaults for a volume container. Note that it doesn't define any volumes. That specialization is left to each volume container that extends the archetype:

dbstate:
  extends:
    file: docker-compose.yml    # reference to the file containing the parent service
    service: data
  volumes:
    - /var/lib/postgresql/data/pgdata

The dbstate service defines a volume container that extends the data service. Service extensions must specify both the file and the service name being extended; the relevant keys are extends and its nested file and service keys. Service extensions work in a fashion similar to Dockerfile builds. First the archetype container is built, and then it's committed. The child is a new container built from the freshly generated layer. Just like a Dockerfile build, these child containers inherit all of the parent's attributes, including metadata.

The dbstate service defines the managed volume mounted at /var/lib/postgresql/data/pgdata with the volumes key. The volumes key accepts a list of volume specifications allowed by the docker run -v flag.

Docker Compose is a critical tool for anyone who has made Docker a core component of their infrastructure. With it, you'll be able to reduce iteration time, version-control environments, and orchestrate ad hoc service interactions with declarative documents.
