Google Voice Vulnerability

I’m the type of person who likes to figure out how things are architected and built. Whether it’s a building or software, it doesn’t matter.

One day, I was curious how the process of adding a forwarding number works and whether it might be possible to mess with it. Luckily for me, I found a way.

Pushing to ECR Using Jenkins Pipeline Plugin

I’ve been recently spending quite a bit of time in the DevOps space and working to build out better CI/CD pipelines, mostly utilizing Docker. In this post, I demonstrate building out a pipeline that will create a simple Docker image and push it to Amazon’s EC2 Container Registry.

Marshalling Interfaces in JAX-RS

In Java, interfaces are used all over the place. Occasionally, these need to be marshalled into XML/JSON. However, JAX-RS creates its JAXBContext on its own, so you may see exceptions like this…

Foo is an interface, and JAXB can't handle interfaces.
    this problem is related to the following location:
        at Foo
        at public Foo[] Response.getFoo()
        at Response
 
    at org.jboss.resteasy.plugins.providers.jaxb.AbstractJAXBProvider.getMarshaller(AbstractJAXBProvider.java:160) [resteasy-jaxb-provider-3.0.6.Final.jar:]
    at org.jboss.resteasy.plugins.providers.jaxb.AbstractJAXBProvider.writeTo(AbstractJAXBProvider.java:122) [resteasy-jaxb-provider-3.0.6.Final.jar:]
    at org.jboss.resteasy.core.interception.AbstractWriterInterceptorContext.writeTo(AbstractWriterInterceptorContext.java:129) [resteasy-jaxrs-3.0.6.Final.jar:]
    at org.jboss.resteasy.core.interception.ServerWriterInterceptorContext.writeTo(ServerWriterInterceptorContext.java:62) [resteasy-jaxrs-3.0.6.Final.jar:]
        ...

Ugly, huh? Here’s how to fix it…

Adding Images to GitHub Wiki Repo

[Update on April 9, 2014] – updated URL patterns to reflect change in raw GitHub domain names

Adding images to GitHub is pretty straightforward if you can host the images somewhere or if you can put them into your main repository. But what if you want them ONLY in the wiki? Here’s how to do it!

Using Weld (CDI), JSF, and JAX-RS in Tomcat

We’ve recently been transitioning from using Spring in a Tomcat environment to Java EE 7 in Wildfly (we’re still finalizing the container of choice).

Since we’re in transition, we’d like to run the application in both Tomcat and Wildfly, without having to make changes to the bundled application. It should just work on deploy. Here’s how to do it…

Using JDBC Security Domain in Wildfly

As I was going through this task, I ran into the whole “there’s so much documentation, but none of it is working or makes sense” problem that’s so common with the JBoss Application Server. So, this post is designed to help out!

How we do QA Testing in Agile

There are many different ideas and approaches for doing QA testing, many of which depend on what project management style you’re using, the developers on the team, and if you have a QA/functional testing team. However, this is how I’ve found it to be successful, based on my observations working on the CREST team at Virginia Tech.

Using Docker Secrets during Development

Docker Secrets is an incredibly powerful and useful feature that helps build secure applications. If you haven’t checked out the great talk from Riyaz Faizullabhoy and Diogo Mónica at DockerCon about how they truly put security first in Docker, you really SHOULD stop and watch it now.

Now that you’ve watched that, you know how great secrets are and why you should be using them! They’re awesome! But… how do we get used to using them during development? Here are four ways (according to me, anyway) to use secrets during development:

  1. Run a Swarm
  2. Use secrets in Docker Compose
  3. Mount secret files manually
  4. Dynamically create secrets using a “simulator”

There are definitely pros and cons to each method, so let’s dive in and take a look at each one!

Note that the methods below are intended for DEV environments, not production. When using non-swarm methods, secrets aren't very secretive. Friends don't let friends use real credentials locally! :)

Method One: Run a Swarm

In your local environment, you could simply spin up a Swarm (docker swarm init, then docker stack deploy -c docker-stack.yml app).
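For reference, a minimal docker-stack.yml for this approach could look like the following sketch (the image and secret names here are illustrative assumptions, not from the original post):

```yaml
version: "3.1"

services:
  app:
    image: myorg/app:latest    # stack files can't use "build", so a pre-built image is needed
    secrets:
      - db-password

secrets:
  db-password:
    external: true    # created beforehand, e.g. via "docker secret create db-password -"
```

After docker stack deploy -c docker-stack.yml app, the secret shows up inside the service at /run/secrets/db-password.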

  • Pros
    • Exact same setup that would be used in non-development environments
    • Could scale out your local environment with multiple nodes to add capacity
  • Cons
    • Can’t use the build directive in your stack file to build an image for your development environment
    • If using more than one node, you likely won’t be able to mount your source code into the container for faster development
    • Can get confusing if you have a stack file for production but a different one for development

Method Two: Use secrets in Docker Compose

I wasn’t aware of this feature until Bret Fisher told me, so it’s quite possible many others don’t know about it either! As of Docker Compose 1.11 (PR #4368 here), you can specify secrets in your Docker Compose file without using Swarm. It basically “fakes” it by bind mounting the secrets to /run/secrets. Cool! Let’s take a look!

Let’s assume we have a project structure that looks like this…

docker/
  app/
    Dockerfile
  secrets/
    DB_USERNAME
    DB_PASSWORD
    DB_NAME
src/

Our docker-compose.yml file could look like this…

version: "3.1"

services:
  app:
    build: ./docker/app
    volumes:
      - ./src:/app
    secrets:
      - db-username
      - db-password
      - db-name
secrets:
  db-username:
    file: ./docker/secrets/DB_USERNAME
  db-password:
    file: ./docker/secrets/DB_PASSWORD
  db-name:
    file: ./docker/secrets/DB_NAME

Running this with docker-compose up will make the secrets available to the app service at /run/secrets.
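Inside the container, the app just reads those files like any other. The post doesn’t show app code, so this small Python helper (and its env-var fallback) is purely my own sketch of what that usually looks like:

```python
import os


def read_secret(name, secrets_dir="/run/secrets"):
    """Return the value of a Docker secret, stripped of trailing whitespace.

    Falls back to an environment variable (e.g. DB_PASSWORD for the
    secret "db-password") when the secret file doesn't exist, which is
    handy when running outside of Docker entirely.
    """
    path = os.path.join(secrets_dir, name)
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    return os.environ.get(name.upper().replace("-", "_"))
```

The nice part is the app doesn’t care whether the file was placed there by Swarm, Compose, or a bind mount.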

  • Pros:
    • Don’t need a running swarm
    • Can use familiar docker-compose up (and other Compose tools) to spin up the dev environment
    • Can use build directive and mount source code into the container
Even though the secrets aren’t delivered via Swarm, the app doesn’t know and doesn’t care
    • The compose file looks similar to a stack file that might be used in production
    • All secrets are explicitly declared, making it easy to know what secrets are available
  • Cons
    • Need a file per secret. More secrets = more files
Have to look at the filesystem to see the secret values

Method Three: Mount secret files manually

The previous method helped us move away from using a full Swarm for local development and has a compose file that looks similar to a stack file that might be used for production. But, to some folks, the additional secrets config scattered throughout the compose file litters things up a bit.

Since Docker secrets are made available to applications as files mounted at /run/secrets, there’s nothing preventing us from doing the mounting ourselves. Using the same project structure from Method 2, our docker-compose.yml would be updated to this:

version: "3.1"

services:
  app:
    build: ./docker/app
    volumes:
      - ./docker/app/secrets:/run/secrets
      - ./src:/app

Now, our docker-compose.yml file is much leaner! But, we still have a bunch of “dummy secret” files that we have to keep in our code repo. Sure, they’re not large, but they do clutter up the repo a little bit.

  • Pros
    • Don’t need a full swarm
    • Can use familiar docker-compose up (and other Compose tools) to spin up the dev environment
    • Can use build directive and mount source code into the container
Even though the secrets aren’t delivered via Swarm, the app doesn’t know and doesn’t care
    • Less clutter in the compose file
  • Cons
    • Need a file per secret. More secrets = more files
Have to look at the filesystem to see what secrets are available and their values
    • Compose file doesn’t look like a stack file anymore (not using the secrets directive)

Method Four: Dynamically create secrets using a “simulator”

So, we’ve been able to move away from using a full Swarm, but we’re still stuck with a collection of dummy secret files. It would be nice to not have those in the code repo. So… I’ve created a “Docker Secrets Simulator” image that “converts” environment variables to secrets. Using this approach, I can define everything within the docker-compose file and no longer need a bunch of extra files; I only need to add one more service. Here’s what the updated compose file looks like…

version: "3.1"

services:
  secret-simulator:
    image: mikesir87/secrets-simulator
    volumes:
      - secrets:/run/secrets:rw
    environment:
      DB_USERNAME: admin
      DB_PASSWORD: password1234!
      DB_NAME: development
  app:
    build: ./docker/app/
    volumes:
      - ./src:/app
      - secrets:/run/secrets:ro

volumes:
  secrets:
    driver: local

The mikesir87/secrets-simulator image converts all environment variables to files in the /run/secrets directory. To make them available to the app service, I simply created a persistent volume and mounted it to both services. You’ll also notice that I mounted the volume as read-only for the app, preventing accidental changes.
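Conceptually, the simulator’s job is tiny: write each environment variable out as a file under /run/secrets. A rough Python equivalent of that idea (my own sketch, not the actual image’s code):

```python
import os


def simulate_secrets(env, secrets_dir):
    """Write each environment variable as a secret file named after it.

    This mimics what the simulator does: each key in `env` becomes a
    file in `secrets_dir` whose contents are the variable's value.
    """
    os.makedirs(secrets_dir, exist_ok=True)
    for name, value in env.items():
        with open(os.path.join(secrets_dir, name), "w") as f:
            f.write(value)
```

Since the files land on a shared volume mounted at /run/secrets, the app reads them exactly as it would read real Swarm secrets.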

  • Pros
    • Don’t need a full swarm
    • Can use familiar docker-compose up (and other Compose tools) to spin up the dev environment
Even though the secrets aren’t delivered via Swarm, the app doesn’t know and doesn’t care
    • The compose file looks similar to a stack file that might be used in production
    • All secrets are explicitly declared, making it easy to know what secrets are available
    • All values for the secrets are in one place
  • Cons
    • The compose file doesn’t look like a stack file (not using the secrets directive)

Does the compose file need to look like a stack file?

There are definitely arguments for both sides here, and it’s probably worth a post of its own. Personally, I don’t want them to look the same, as I deploy apps differently from how I develop them. A few reasons…

During development, I’m typically…

  • Using the build directive in docker-compose, which isn’t supported in a stack file
  • Mounting source code, which isn’t supported in a stack file (if you’re going to be using > 1 node)
  • Providing dummy secrets (either through dummy files or my new simulator)
Not using the deploy directive to worry about container placement, replicas, etc.

While in production, my stack files are…

  • Not going to use build and code mounting, but using fully constructed images
Going to have deploy directives for container placement, replicas, restart conditions, etc.
  • Going to use secrets defined externally, not files sitting on the host. They might be created like so:
gpg --decrypt db-password.asc | docker secret create db-password -

Since there are enough differences, I don’t feel I need to keep my compose files looking like stack files. They’re just too different. But that’s just my two cents… :)

Conclusion

Regardless of the method you use, if you’re planning to use Swarm in production, it’s good to get in the habit of using Docker Secrets during local development. For me, I want everything to be in one place during development, which is why I made my new mikesir87/secrets-simulator image. Let me know what you think, or if you have other ideas!

Docker is NOT a Hypervisor

This post is not intended to slam any single individual, entity, or organization. It's merely an observation/rant/informational post. I, too, have fallen victim to this idea in the past.

The other day, I was reading through Hacker News and saw a comment that said…

The container engine takes the place of the hypervisor.

While I obviously shouldn’t put a lot of weight into this one comment, I get this question all the time as I’m doing “Docker 101” presentations. So, it’s obviously something that’s confusing people. What makes it harder is this image that I see everywhere (and used to use in my own presentations)…

The wrong Containers vs VMs image

What’s wrong with this image?

This graphic gives the impression that the “Container Engine” is in the execution path for code, as if it “takes the place” of the hypervisor and does some sort of translation. But, apps DO NOT sit “on top” of the container engine.

Even Docker itself has used a slight variant of this image in a few blog posts (1 and 2). So, it’s easy to get confused.

A “more correct” version

Personally, I think the graphic should look something more like this…

The correct Containers vs VMs image

What’s Different?

  • The Docker Daemon is out of the execution path - when code within a container is running, the container engine is not interpreting the code and translating it to run on the underlying OS. The binaries are running directly on the machine, as they are sharing the same kernel. A container is simply another process on the machine.
  • Apps have “walls” around them - all containers are still running together on the same OS, but “walled” off from each other through the use of namespaces and (if you’re using them) isolated networks
  • The Docker Daemon is just another process - the daemon is simply making it much easier to get images and create the “walls” around each of the running apps. It’s not interpreting code or anything else. Just configuring the kernel with namespaces and network config to let the containers do their thing.

Why’s it matter?

It’s tough learning new stuff! But, it’s harder to understand something new when the picture you’re painting for yourself is wrong. So, let’s all try to do a better job and help paint the correct picture from the start.

Additional Resources

There are some fantastic articles and resources out there to really learn what’s going on under the hood! Here are just a few of my favorites…

Have feedback?

Thoughts? Feedback? Let me know on Twitter or in the comment section below!

Introducing Docker Multi-Stage Builds

Docker has recently merged in support to perform “multi-stage builds.” In other words, it allows you to orchestrate a pipeline of builds within a single Dockerfile.

NOTE: The support for multi-stage builds is available starting with Docker 17.05. You can also use play-with-docker.com to start playing with it now.
Update 4/20 - Added section about named stages

Example Use Cases

When might you want to use a multi-stage build? It allows you to do an entire pipeline within a single build, rather than having to script the pipeline externally. Here are a few examples…

  • Java apps using WAR files
    • First stage uses a container with Maven to compile, test, and build the war file
    • Second stage copies the built war file into an image with the app server (Wildfly, Tomcat, Jetty, etc.)
  • Java apps with standalone JARs (Spring boot)
    • First stage uses a container with Gradle to build the mega-jar
    • Second stage copies the JAR into an image with only a JRE
  • Node.js app needing processed JavaScript for client
    • First stage uses a Node container, installs dev dependencies, and performs a build (maybe compiling Typescript, Webpack-ify, etc.)
    • Second stage also uses a Node container, installs only prod dependencies (like Express), and copies the distributable from stage one

Obviously, these are just a few examples of two-stage builds; there are many other possibilities.
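As a sketch, the first (Java WAR) use case above might look like the following Dockerfile. The image tags and paths here are illustrative assumptions, not from the post:

```dockerfile
# Stage 1: compile, test, and package the WAR with Maven
FROM maven:3.5-jdk-8
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn package

# Stage 2: copy only the built WAR into an app-server image,
# leaving Maven and the source code behind
FROM tomcat:8.5
COPY --from=0 /build/target/app.war /usr/local/tomcat/webapps/ROOT.war
```

The final image contains only the app server and the WAR; none of the build tooling comes along for the ride.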

What’s it look like?

This feature is still being actively developed, so there will be further advances (like naming of stages). For now, this is how it looks…

Creating stages

Each FROM command in the Dockerfile starts a stage. So, if you have two FROM commands, you have two stages. Like so…

FROM alpine:3.4
# Do something inside an alpine container

FROM nginx
# Do something inside a nginx container

Referencing another stage

To reference another stage in a COPY command, there is currently only one way to do it. Another PR is being worked on to name stages. Until then…

COPY --from=0 /app/dist/app.js /app/app.js

This pulls the /app/dist/app.js from the first stage and places it at /app/app.js in the current stage. The --from flag uses a zero-based index for the stage.

Let’s build something!

For our example, we’re going to build a Nginx image that is configured with SSL using a self-signed certificate (to use for local development). Our build will do the following:

  1. Use a plain alpine image, install openssl, and create the certificate keypair.
  2. Starting from an nginx image, copy the newly created keypair and configure the server.

FROM alpine:3.4
RUN apk update && \
     apk add --no-cache openssl && \
     rm -rf /var/cache/apk/*
COPY cert_defaults.txt /src/cert_defaults.txt
RUN openssl req -x509 -nodes -out /src/cert.pem -keyout /src/cert.key -config /src/cert_defaults.txt

FROM nginx
COPY --from=0 /src/cert.* /etc/nginx/
COPY default.conf /etc/nginx/conf.d/
EXPOSE 443

In order to build, we need to create the cert_defaults.txt file and the default.conf file.

Here’s a sample openssl config file that will create a cert with two subject alternate names for app.docker.localhost and api.docker.localhost.

[ req ]
default_bits        = 4096
prompt              = no
default_md          = sha256
req_extensions      = req_ext
distinguished_name  = dn

[ dn ]
C=US
ST=Virginia
L=Blacksburg
OU=My local development
CN=api.docker.localhost

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = api.docker.localhost
DNS.2 = app.docker.localhost

and the Nginx config…

server {
  listen         443;
  server_name    localhost;

  ssl   on;
  ssl_certificate       /etc/nginx/cert.pem;
  ssl_certificate_key   /etc/nginx/cert.key;

  location / {
    root   /usr/share/nginx/html;
    index  index.html index.htm;
  }

  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root   /usr/share/nginx/html;
  }
}

Build it!

Now… if you run the docker build…

docker build -t nginx-with-cert .

and then run the app…

docker run -d -p 443:443 nginx-with-cert

… you should have a server up and running at https://api.docker.localhost/ or https://app.docker.localhost/ (you may need to add entries to your hosts file to map those hostnames to your machine)! Sure, it’s still self-signed, but it did the job!

Running on Play-With-Docker (PWD)

I’ve posted this sample to a GitHub repo (mikesir87/docker-multi-stage-demo) to make it easy. From an instance on PWD, you can simply run…

git clone https://github.com/mikesir87/docker-multi-stage-demo.git && cd docker-multi-stage-demo
docker build -t nginx-with-cert .
docker run -d -p 443:443 nginx-with-cert

Using Named Stages

You can reference stages either by offset (like --from=0) or by name. To name a stage, use the syntax FROM [image] as [name]. Here’s an example…

FROM alpine:3.4 as cert-build
...

FROM nginx
COPY --from=cert-build /src/cert.* /etc/nginx/

Conclusion

Docker multi-stage builds provide the ability to create an entire pipeline where the artifact(s) of one stage can be pulled into another stage. This helps build small production containers (as build tools aren’t packaged) and prevents the need to create an external script to build the pipeline.

Managing Secrets in Docker Swarm

Docker 1.13 was released just a few days ago (blog post here). With it came several improvements to Docker Swarm. One highly anticipated improvement (at least by me, anyway) is secrets management. In this post, we’ll see how to add secrets to a swarm and how to make them available to running services.

Using Docker to Proxy WebSockets on AWS ELBs

At the time of this blog post, AWS ELBs don’t support WebSocket proxying when using HTTP(S) protocol listeners. There are a few blog posts on how to work around it, but it takes some work and configuration. Using Docker, these complexities can be minimized.

In this post, we’ll start at the very beginning. We’ll spin up an ELB, configure its listeners, and then deploy an application to an EC2 instance.

Create a Docker 1.12+ Swarm using docker-machine

DockerCon 2016

In case you missed it, DockerCon 2016 was amazing! Several great features were announced, most stemming from the fact that orchestration is now built in. You get automatic load balancing (the routing mesh is crazy cool!), easy roll-outs (with healthcheck support), and government-level security by default (which is crazy hard to do by yourself).

In case you’re a little confused on how to spin up a mini-cluster, this post will show you how. It’s pretty easy to do!

Using Arquillian Drone and Graphene in Standalone Mode

I’ve been using Arquillian and its testing framework for a few years now and absolutely love it! It’s super easy to manage a server’s lifecycle, deploy applications, and then test them. Drone and Graphene’s extensions also make it incredibly easy to write browser-based tests without getting too down and dirty with the Selenium WebDriver API (which is a little messy).

Since I love Drone and Graphene, it would be nice to use the page abstractions/fragments on non-Java apps (sure, you can use Arquillian Cube too… but that’s another post). This post will go over what’s needed to run Drone and Graphene in standalone mode.
