Posts tagged with docker

    MABUG - Running Apps with Docker

    Presentation given to the Mid-Atlantic Banner User Group, a collection of 36 universities, on October 30, 2017 on the Virginia Tech campus. Open the speaker notes for what each demo covered.

    This was a fun talk to give, as I tried to cover more of “securing the software supply chain” through demos. I’d like to do a more focused version that starts with the assumption that the audience already knows what containers are and spends time demoing/diving into Notary and image signing. Anywho… here are the slides (after a quick #dockerselfie)!

    Docker Selfie

    DockerCon EU 2017 - Dino Behind the Scenes

    Another DockerCon come and gone! As always, it was a blast, and it’s always a bummer to return to “normal life.” Haha! One of the fun things about this one was my adventure as a dinosaur!

    Where’d the idea come from?

    The original idea came from a conversation I had with Abby Fuller after her last talk at DockerCon Austin. I don’t recall how we originally got on the idea, but the conversation quickly turned to, “What would happen if a dino crashed your talk? What could we do?” Word quickly got to Ashlynn Polini, who runs all of DockerCon, and she simply said it needed to happen! We came up with a few ideas…

    • Demonstrate the old way - dinos are old. Have a dino get on stage and pound away on the keyboard with its short arms. Obviously not the best way. Then show the newer, better, faster way to build apps/images
    • Demonstrate dino orchestration - if a dino can lead an orchestra, why not let it show off container-based orchestration?
    • Dino apps need love too! - since Docker’s big push is to Modernize Traditional Apps, why not use dinos to help sell it? Could just roam around the conference center “protesting” that they need love too!

    Prep and Go Time!

    Obviously, we took Option #3 above. Dino suits were bought, packed, and brought to DockerCon! We cut up some boxes, stapled them together, and made custom signs! Cheesy? Absolutely! But, that’s the point!

    Dino signs

    Then, it was time to go to work! Some frequent questions we were asked…

    • Was it hot? Uh… yeah. We could only do about 20 min intervals before being covered in sweat!
    • Could you see? Uh… sorta. Key points are to know the area and look out for shadows. If they stop, good chance they’re taking pics!
    • Was it worth it? Absolutely! DockerCon is all about the experience and having fun. This was just another way to show that off!
    • Will you do it again? We’ll just have to wait and see, eh? ;)

    Why we did it!

    Obviously, it was fun! But, there is a reason Docker is putting a bigger emphasis on modernizing traditional applications. You don’t have to use Docker for only new apps. HUGE gains are to be found with just changing how you’re running your existing apps. Numerous presentations from large companies highlighted this… some saving up to 70% in infrastructure costs for existing apps. This is a big deal!

    Show me the pics!

    We joked about making a slight competition between #dockerselfie and #dinoselfie. Obviously, it’s impossible to beat the queen Betty Junod, but it was still fun! Here are some of the pics I pulled from Twitter!

    So… always remember… Dino apps deserve love too!

    Pro Tip - Fail chained commands in Dockerfile RUN

    You’ve built a Docker image. Great! It runs. But, not everything is there. Why not? Could it be a bad RUN command? This is the exact scenario I came across recently when helping someone debug an issue with their image builds.

    To see the problem, let’s use the following Dockerfile:

    FROM alpine
    
    RUN wget http://example.com/some-file-that-isnt-there.tar; \
        tar xf some-file-that-isnt-there.tar; \
        rm some-file-that-isnt-there.tar; \
        mkdir /app
    

    When building the image, we get the following:

    > docker build --no-cache .
    Sending build context to Docker daemon  2.048kB
    Step 1/2 : FROM alpine
     ---> 76da55c8019d
    Step 2/2 : RUN wget http://example.com/some-file-that-isnt-there.tar;     tar xf some-file-that-isnt-there.tar;     rm some-file-that-isnt-there.tar;     mkdir /app
     ---> Running in 4dd8833bdb02
    Connecting to example.com (93.184.216.34:80)
    wget: server returned error: HTTP/1.1 404 Not Found
    tar: can't open 'some-file-that-isnt-there.tar': No such file or directory
    rm: can't remove 'some-file-that-isnt-there.tar': No such file or directory
     ---> 69356809831b
    Removing intermediate container 4dd8833bdb02
    Successfully built 69356809831b
    

    Result: The build succeeded, even though there were major errors. That tells us the RUN command finished with a zero (success) exit status.

    Looking at the RUN command, we see that the multiple commands are separated by semicolons. With semicolons, each command runs regardless of whether the previous one failed, and ONLY the exit status of the last command determines if the RUN succeeded or failed. Since the mkdir succeeded, the entire RUN passed. Doh!

    Instead of using semicolons, we should use && between our commands. That way, as soon as one command fails, the rest are skipped and the entire RUN fails.

    FROM alpine
    
    # Swapped the semicolons between commands for &&
    RUN wget http://example.com/some-file-that-isnt-there.tar && \
        tar xf some-file-that-isnt-there.tar && \
        rm some-file-that-isnt-there.tar && \
        mkdir /app
    

    And what happens now when we build?

    > docker build --no-cache .
    Sending build context to Docker daemon  2.048kB
    Step 1/2 : FROM alpine
     ---> 76da55c8019d
    Step 2/2 : RUN wget http://example.com/some-file-that-isnt-there.tar &&     tar xf some-file-that-isnt-there.tar &&     rm some-file-that-isnt-there.tar &&     mkdir /app
     ---> Running in fc42a0768ac2
    Connecting to example.com (93.184.216.34:80)
    wget: server returned error: HTTP/1.1 404 Not Found
    The command '/bin/sh -c wget http://example.com/some-file-that-isnt-there.tar &&     tar xf some-file-that-isnt-there.tar &&     rm some-file-that-isnt-there.tar &&     mkdir /app' returned a non-zero code: 1
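
    As an aside, if you’d rather keep the semicolons, you can get the same fail-fast behavior by turning on the shell’s errexit option at the start of the RUN instruction. Here’s a quick sketch using the same broken download; the && approach above is still what I’d reach for first:

    FROM alpine

    # set -e makes the shell exit as soon as any command fails,
    # so the RUN instruction reports the first failing exit status
    RUN set -e; \
        wget http://example.com/some-file-that-isnt-there.tar; \
        tar xf some-file-that-isnt-there.tar; \
        rm some-file-that-isnt-there.tar; \
        mkdir /app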
    

    VASCAN - Docker and Security

    Presentation given to the Virginia Alliance for Secure Computing and Networking on September 28, 2017 on the Virginia Tech campus. Open the speaker notes for what each demo covered.

    Container Day Presentations

    I had the incredible opportunity to organize and run the inaugural NRV ContainerDay event on September 2, 2017. It was a blast and gave me a much greater appreciation for those that organize large events. Below are two of my presentations. Open the speaker notes for what each demo covered.

    Why Containers/Docker?

    Creating Effective Images

    Using Docker Secrets during Development

    Docker Secrets is an incredibly powerful and useful feature that helps build secure applications. If you haven’t checked out the great talk from Riyaz Faizullabhoy and Diogo Mónica at DockerCon about how they truly put security first in Docker, you really SHOULD stop and watch it now.

    Now that you’ve watched that, you know how great secrets are and why you should be using them! They’re awesome! But… how do we get used to using them during development? Here are four ways (according to me, anyways) to use secrets during development:

    1. Run a Swarm
    2. Use secrets in Docker Compose
    3. Mount secret files manually
    4. Dynamically create secrets using a “simulator”

    There are definitely pros and cons to each method, so let’s dive in and look at each one!

    Note that the methods below are intended for DEV environments, not production. When using non-swarm methods, secrets aren't very secretive. Friends don't let friends use real credentials locally! :)

    Method One: Run a Swarm

    In your local environment, you could simply spin up a Swarm (docker swarm init and then docker stack deploy -c docker-stack.yml app). A sketch of what that stack file might look like follows the pros and cons below.

    • Pros
      • Exact same setup that would be used in non-development environments
      • Could scale out your local environment with multiple nodes to add capacity
    • Cons
      • Can’t use the build directive in your stack file to build an image for your development environment
      • If using more than one node, you likely won’t be able to mount your source code into the container for faster development
      • Can get confusing if you have a stack file for production but a different one for development
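
    For reference, here’s a rough sketch of what that stack file might look like. The image name is a placeholder (a stack deploy needs a pre-built image), and the secrets themselves would be created ahead of time with something like echo "admin" | docker secret create db-username - (the trailing dash reads the value from stdin).

    # docker-stack.yml (sketch)
    version: "3.1"

    services:
      app:
        image: my-org/my-app   # placeholder for a pre-built image
        secrets:
          - db-username
          - db-password
          - db-name

    secrets:
      db-username:
        external: true
      db-password:
        external: true
      db-name:
        external: true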

    Method Two: Use secrets in Docker Compose

    I wasn’t aware of this feature until Bret Fisher told me about it, so it’s quite possible many others don’t know either! As of Docker Compose 1.11 (PR #4368 here), you can specify secrets in your Docker Compose file without using Swarm. It basically “fakes” it by bind mounting the secrets to /run/secrets. Cool! Let’s take a look!

    Let’s assume we have a project structure that looks like this…

    docker/
      app/
        Dockerfile
      secrets/
        DB_USERNAME
        DB_PASSWORD
        DB_NAME
    src/
    

    Our docker-compose.yml file could look like this…

    version: "3.1"
    
    services:
      app:
        build: ./docker/app
        volumes:
          - ./src:/app
        secrets:
          - db-username
          - db-password
          - db-name
    secrets:
      db-username:
        file: ./docker/secrets/DB_USERNAME
      db-password:
        file: ./docker/secrets/DB_PASSWORD
      db-name:
        file: ./docker/secrets/DB_NAME
    

    Running this with docker-compose up will make the secrets available to the app service at /run/secrets.
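
    A quick way to sanity-check it (assuming the app service stays running long enough to poke at):

    # List the mounted secret files inside the running container...
    docker-compose exec app ls /run/secrets

    # ...and read one of them (it's just the contents of ./docker/secrets/DB_USERNAME)
    docker-compose exec app cat /run/secrets/db-username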

    • Pros:
      • Don’t need a running swarm
      • Can use familiar docker-compose up (and other Compose tools) to spin up the dev environment
      • Can use build directive and mount source code into the container
      • Even though the secrets aren’t delivered via Swarm, the app doesn’t know and doesn’t care
      • The compose file looks similar to a stack file that might be used in production
      • All secrets are explicitly declared, making it easy to know what secrets are available
    • Cons
      • Need a file per secret. More secrets = more files
      • Have to look at the filesystem to see the secret values

    Method Three: Mount secret files manually

    The previous method helped us move away from using a full Swarm for local development and gave us a compose file that looks similar to a stack file that might be used for production. But, to some folks, the additional secrets config scattered throughout the compose file clutters things up a little bit.

    Since Docker secrets are made available to applications as files mounted at /run/secrets, there’s nothing preventing us from doing the mounting ourselves. Using the same project structure from Method 2, our docker-compose.yml would be updated to this:

    version: "3.1"
    
    services:
      app:
        build: ./docker/app
        volumes:
          - ./docker/secrets:/run/secrets
          - ./src:/app
    

    Now, our docker-compose.yml file is much leaner! But, we still have a bunch of “dummy secret” files that we have to keep in our code repo. Sure, they’re not large, but they do clutter up the repo a little bit.

    • Pros
      • Don’t need a full swarm
      • Can use familiar docker-compose up (and other Compose tools) to spin up the dev environment
      • Can use build directive and mount source code into the container
      • Even though the secrets aren’t delivered via Swarm, the app doesn’t know and doesn’t care
      • Less clutter in the compose file
    • Cons
      • Need a file per secret. More secrets = more files
      • Have to look at the filesystem to see what secrets are available and their values
      • Compose file doesn’t look like a stack file anymore (not using the secrets directive)
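
    However the secret files end up in /run/secrets, the application side stays the same: it just reads them off disk. As a rough sketch (the file names match the dummy files above; everything else is hypothetical), a shell-based startup script might do something like this:

    #!/bin/sh
    # Read the mounted secret files at startup instead of relying on environment variables
    DB_USERNAME="$(cat /run/secrets/DB_USERNAME)"
    DB_PASSWORD="$(cat /run/secrets/DB_PASSWORD)"
    DB_NAME="$(cat /run/secrets/DB_NAME)"

    # ...then hand the values off to whatever actually needs them (config file, CLI args, etc.)
    echo "Connecting to database '${DB_NAME}' as '${DB_USERNAME}'"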

    Method Four: Dynamically create secrets using a “simulator”

    So, we’ve been able to move away from using a full Swarm, but are still stuck with a collection of dummy secret files. It would be nice to not have those in the code repo. So… I’ve created a “Docker Secrets Simulator” image that “converts” environment variables to secrets. Using this approach, I can define everything within the docker-compose file and no longer need a lot of extra files. I only need to add one more service to my docker-compose file. Here’s what the updated compose file looks like…

    version: "3.1"
    
    services:
      secret-simulator:
        image: mikesir87/secrets-simulator
        volumes:
          - secrets:/run/secrets:rw
        environment:
          DB_USERNAME: admin
          DB_PASSWORD: password1234!
          DB_NAME: development
      app:
        build: ./docker/app/
        volumes:
          - ./src:/app
          - secrets:/run/secrets:ro
    
    volumes:
      secrets:
        driver: local
    

    The mikesir87/secrets-simulator image converts all environment variables to files in the /run/secrets directory. To make them available to the app service, I simply created a persistent volume and mounted it to both services. You’ll also notice that I mounted the volume as read-only for the app, preventing accidental changes.
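
    Just to demystify it a bit, the simulator idea boils down to something like this (a simplified sketch of the concept, not the actual image’s code):

    #!/bin/sh
    # Write each environment variable out as a file under /run/secrets
    mkdir -p /run/secrets

    env | while IFS='=' read -r name value; do
      # Skip a few housekeeping variables that aren't meant to be secrets
      case "$name" in
        PATH|HOME|HOSTNAME|PWD|SHLVL|TERM) continue ;;
      esac
      printf '%s' "$value" > "/run/secrets/$name"
    done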

    • Pros
      • Don’t need a full swarm
      • Can use familiar docker-compose up (and other Compose tools) to spin up the dev environment
      • Even though the secrets aren’t delivered via Swarm, the app doesn’t know and doesn’t care
      • The compose file looks similar to a stack file that might be used in production
      • All secrets are explicitly declared, making it easy to know what secrets are available
      • All values for the secrets are in one place
    • Cons
      • The compose file doesn’t look like a stack file (not using the secrets directive)

    Does the compose file need to look like a stack file?

    There are definitely arguments for both sides here, and it’s probably worth a post of its own. Personally, I don’t want them to look the same, as I deploy apps differently from how I develop them. A few reasons…

    During development, I’m typically…

    • Using the build directive in docker-compose, which isn’t supported in a stack file
    • Mounting source code, which isn’t supported in a stack file (if you’re going to be using > 1 node)
    • Providing dummy secrets (either through dummy files or my new simulator)
    • Not using the deploy directive to worry about container placement, replicas, etc.

    While in production, my stack files are…

    • Not going to use build and code mounting, but using fully constructed images
    • Going to have deploy directives for container placement, replicas, restart condition configuration, etc.
    • Going to use secrets defined externally, not files sitting on the host. They might be created like so:
    gpg --decrypt db-password.asc | docker secret create db-password -
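
    The stack file would then just reference them as external, roughly like so:

    secrets:
      db-password:
        external: true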

    Since there are enough differences, I don’t feel I need to keep my compose files looking like stack files. They’re just too different. It’s just my two cents though… :)

    Conclusion

    Regardless of the method you use, if you’re planning to use Swarm in production, it’s good to get in the habit of using Docker Secrets in local development. For me, I want everything to be in one place during development, which is why I made my new mikesir87/secrets-simulator image. But, let me know what you think! If you have other ideas, I’d love to hear them!

    Docker is NOT a Hypervisor

    This post is not intended to slam any single individual, entity, or organization. It's merely an observation/rant/informational post. I, too, have fallen victim to this idea in the past.

    The other day, I was reading through Hacker News and saw a comment that said…

    The container engine takes the place of the hypervisor.

    While I obviously shouldn’t put a lot of weight into this one comment, I get this question all the time as I’m doing “Docker 101” presentations. So, it’s obviously something that’s confusing people. What makes it harder is this image that I see everywhere (and used to use in my own presentations)…

    The wrong Containers vs VMs image

    What’s wrong with this image?

    This graphic gives the impression that the “Container Engine” is in the execution path for code, as if it “takes the place” of the hypervisor and does some sort of translation. But, apps DO NOT sit “on top” of the container engine.

    Even Docker itself has used a slight variant of this image in a few blog posts (1 and 2). So, it’s easy to get confused.

    A “more correct” version

    Personally, I think the graphic should look something more like this…

    The correct Containers vs VMs image

    What’s Different?

    • The Docker Daemon is out of the execution path - when code within a container is running, the container engine is not interpreting the code and translating it to run on the underlying OS. The binaries run directly on the machine, sharing the host’s kernel. A container is simply another process on the machine (see the quick demo after this list).
    • Apps have “walls” around them - all containers are still running together on the same OS, but “walled” off from each other through the use of namespaces and (if you’re using them) isolated networks
    • The Docker Daemon is just another process - the daemon is simply making it much easier to get images and create the “walls” around each of the running apps. It’s not interpreting code or anything else. Just configuring the kernel with namespaces and network config to let the containers do their thing.
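
    A quick way to see this for yourself on a Linux host (on Docker for Mac/Windows you’d see the process inside the VM instead; the container name here is made up for the demo):

    # Start a container that just sleeps...
    docker run -d --rm --name just-a-process alpine sleep 1000

    # ...and it shows up in the host's process list like any other process
    ps -ef | grep "sleep 1000"

    # Clean up
    docker stop just-a-process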

    Why’s it matter?

    It’s tough learning new stuff! But, it’s harder to understand something new when the picture you’re painting for yourself is wrong. So, let’s all try to do a better job and help paint the correct picture from the start.

    Additional Resources

    There are some fantastic articles and resources out there to really learn what’s going on under the hood! Here are just a few of my favorites…

    Have feedback?

    Thoughts? Feedback? Let me know on Twitter or in the comment section below!

    Introducing Docker Multi-Stage Builds

    Docker has recently merged in support to perform “multi-stage builds.” In other words, it allows you to orchestrate a pipeline of builds within a single Dockerfile.

    NOTE: The support for multi-stage builds is available starting with Docker 17.05. You can also use play-with-docker.com to start playing with it now.
    Update 4/20 - Added section about named stages

    Example Use Cases

    When might you want to use a multi-stage build? It allows you to do an entire pipeline within a single build, rather than having to script the pipeline externally. Here are a few examples…

    • Java apps using WAR files
      • First stage uses a container with Maven to compile, test, and build the war file
      • Second stage copies the built war file into an image with the app server (Wildfly, Tomcat, Jetty, etc.)
    • Java apps with standalone JARs (Spring boot)
      • First stage uses a container with Gradle to build the mega-jar
      • Second stage copies the JAR into an image with only a JRE
    • Node.js app needing processed JavaScript for client
      • First stage uses a Node container, installs dev dependencies, and performs a build (maybe compiling Typescript, Webpack-ify, etc.)
      • Second stage also uses a Node container, installs only prod dependencies (like Express), and copies the distributable from stage one

    Obviously, these are just a few examples of two-stage builds. But, there are many other possibilities.

    What’s it look like?

    This feature is still being actively developed, so there will be further advances (like naming of stages). For now, this is how it looks….

    Creating stages

    Each FROM command in the Dockerfile starts a stage. So, if you have two FROM commands, you have two stages. Like so…

    FROM alpine:3.4
    # Do something inside an alpine container
    
    FROM nginx
    # Do something inside a nginx container
    

    Referencing another stage

    To reference another stage in a COPY command, the original approach is to use the stage’s index (named stages were added later; see the Using Named Stages section below). It looks like this…

    COPY --from=0 /app/dist/app.js /app/app.js
    

    This pulls the /app/dist/app.js from the first stage and places it at /app/app.js in the current stage. The --from flag uses a zero-based index for the stage.

    Let’s build something!

    For our example, we’re going to build a Nginx image that is configured with SSL using a self-signed certificate (to use for local development). Our build will do the following:

    1. Use a plain alpine image, install openssl, and create the certificate keypair.
    2. Starting from an nginx image, copy the newly created keypair and configure the server.

    FROM alpine:3.4
    RUN apk update && \
         apk add --no-cache openssl && \
         rm -rf /var/cache/apk/*
    COPY cert_defaults.txt /src/cert_defaults.txt
    RUN openssl req -x509 -nodes -out /src/cert.pem -keyout /src/cert.key -config /src/cert_defaults.txt
    
    FROM nginx
    COPY --from=0 /src/cert.* /etc/nginx/
    COPY default.conf /etc/nginx/conf.d/
    EXPOSE 443
    

    In order to build, we need to create the cert_defaults.txt file and the default.conf file.

    Here’s a sample openssl config file that will create a cert with two subject alternative names for app.docker.localhost and api.docker.localhost.

    [ req ]
    default_bits        = 4096
    prompt              = no
    default_md          = sha256
    req_extensions      = req_ext
    x509_extensions     = req_ext
    distinguished_name  = dn
    
    [ dn ]
    C=US
    ST=Virginia
    L=Blacksburg
    OU=My local development
    CN=api.docker.localhost
    
    [ req_ext ]
    subjectAltName = @alt_names
    
    [ alt_names ]
    DNS.1 = api.docker.localhost
    DNS.2 = app.docker.localhost
    

    and the Nginx config…

    server {
      listen         443;
      server_name    localhost;
    
      ssl   on;
      ssl_certificate       /etc/nginx/cert.pem;
      ssl_certificate_key   /etc/nginx/cert.key;
    
      location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
      }
    
      error_page   500 502 503 504  /50x.html;
      location = /50x.html {
        root   /usr/share/nginx/html;
      }
    }

    Build it!

    Now… if you run the docker build…

    docker build -t nginx-with-cert .
    

    and then run the app…

    docker run -d -p 443:443 nginx-with-cert
    

    … you should have a server up and running at https://api.docker.localhost/ or https://app.docker.localhost/ (may need to add entries to your hosts file to map those to your machine)! Sure, it’s still self-signed, but it did the job!
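
    A couple of quick sanity checks you could run at this point (these are just checks I like, not part of the original example):

    # Hit the server over TLS (-k skips verification since the cert is self-signed)
    curl -k https://localhost/

    # Inspect the served certificate and confirm the subject alternative names are there
    echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"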

    Running on Play-With-Docker (PWD)

    I’ve posted this sample to a GitHub repo (mikesir87/docker-multi-stage-demo) to make it easy. From an instance on PWD, you can simply run…

    git clone https://github.com/mikesir87/docker-multi-stage-demo.git && cd docker-multi-stage-demo
    docker build -t nginx-with-cert .
    docker run -d -p 443:443 nginx-with-cert
    

    Using Named Stages

    You can reference stages either by offset (like --from=0) or by name. To name a stage, use the syntax FROM [image] as [name]. Here’s an example…

    FROM alpine:3.4 as cert-build
    ...
    
    FROM nginx
    COPY --from=cert-build /src/cert.* /etc/nginx/
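
    To tie this back to the earlier use-case list, here’s roughly what the WAR-file example might look like with a named stage (the image tags, paths, and artifact name are just illustrative, assuming a standard Maven WAR project):

    FROM maven:3.5-jdk-8 as builder
    WORKDIR /usr/src/app
    COPY . .
    # Compile, test, and package the WAR inside the build stage
    RUN mvn package

    FROM tomcat:8.5
    # Copy only the built artifact into the app server image
    COPY --from=builder /usr/src/app/target/app.war /usr/local/tomcat/webapps/ROOT.war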
    

    Conclusion

    Docker multi-stage builds provide the ability to create an entire pipeline where the artifact(s) of one stage can be pulled into another stage. This helps produce small production images (since build tools aren’t packaged into the final image) and removes the need for an external script to orchestrate the pipeline.

    Managing Secrets in Docker Swarm

    Docker 1.13 was released just a few days ago (blog post here). With it came several improvements to Docker Swarm. One highly anticipated improvement (at least by me anyways) is secrets management. In this post, we’ll see how to add secrets to a swarm and how to make them available to running services.
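
    As a quick preview, the basic flow looks something like this (the names and value are just examples):

    # Add a secret to the swarm...
    echo "super-secret-value" | docker secret create db-password -

    # ...and attach it to a service, which will see it as a file at /run/secrets/db-password
    docker service create --name app --secret db-password nginx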

    Using Docker to Proxy WebSockets on AWS ELBs

    At the time of this blog post, AWS ELBs don’t support WebSocket proxying when using the HTTP(S) protocol listeners. There are a few blog posts on how to work around it, but it takes some work and configuration. Using Docker, these complexities can be minimized.

    In this post, we’ll start at the very beginning. We’ll spin up an ELB, configure its listeners, and then deploy an application to an EC2 instance.

    Create a Docker 1.12+ Swarm using docker-machine

    DockerCon 2016

    In case you missed it, DockerCon 2016 was amazing! There were several great features announced, with most of them stemming from the fact that orchestration is now built-in. You get automatic load balancing (the routing mesh is crazy cool!), easy roll-outs (with healthcheck support), and government-level security by default (which is crazy hard to do by yourself).

    In case you’re a little confused on how to spin up a mini-cluster, this post will show you how. It’s pretty easy to do!

    Pushing to ECR Using Jenkins Pipeline Plugin

    I’ve been recently spending quite a bit of time in the DevOps space and working to build out better CI/CD pipelines, mostly utilizing Docker. In this post, I demonstrate building out a pipeline that will create a simple Docker image and push it to Amazon’s EC2 Container Registry.