The next time you’re at an art show or art gallery of any kind, walk up to an artist and ask them to install an electrical outlet in the wall on which their painting is displayed.
After ignoring the inevitable look of bewilderment and snarky response, grab some tools and start hacking the wall apart. Tell the artist that you just need an outlet here so you can recharge your phone.
Be sure to ask them to help.
I’m not 100% certain how this will end, but I can imagine it will involve security guards, police and possibly handcuffs.
As ridiculous as this scenario sounds, though, I feel like this is what I’ve been doing with Docker for the last few months.
I wanted to access my app via the container’s IP address.
When using the default “bridge” network, a Docker container has its own IP address. This can be viewed via the “docker inspect <container name>” command.
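For example, assuming a container named `my-rabbit` is already running (the name is just a placeholder), something like this should show its bridge-network IP:

```shell
# Dump everything Docker knows about the container, including its network settings
docker inspect my-rabbit

# Or pull out just the IP address with a Go-template filter
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-rabbit
```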
That being the case, it would make sense to access the app via that IP address, right?
I thought it made sense, and I expected to be able to use the container’s IP address.
So, I dug into the documentation.
I asked on Twitter and in various Slack communities.
Finally, I asked on the Docker forums. And the response I got?
Basically, a big “nope” and “don’t do that.”
But I wasn’t happy with that answer.
I continued to suggest that it should work this way, even though it doesn’t.
Sure, it may be possible on Linux … or with Kubernetes … or some other mechanism that I’m not aware of, yet.
It took a while to sink in, but eventually I realized what the real problem was.
I shouldn’t even be trying to do this.
After a few hours of … “conversation” … with my friend Fred, via the WatchMeCode community slack, I finally started to understand that my expectations of Docker were wrong.
I use a lot of virtual machines to host services that I don’t want on my laptop directly. Things like Oracle, SQL Server, an LDAP server, etc.
Virtual machines are great at handling these.
Lately, though, I’ve been moving some of these services to Docker. RabbitMQ, MongoDB and my Oracle XE installation are all in Docker containers, for example.
Life is great when I publish the needed ports from the container to my localhost. Everything works peachy.
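As a sketch of what that looks like (the container name and port choices are just examples), the `-p` flag maps a host port to a container port, so the service is reachable via localhost rather than the container’s IP:

```shell
# Run RabbitMQ, publishing the AMQP and management ports to the host
docker run -d --name my-rabbit -p 5672:5672 -p 15672:15672 rabbitmq:3-management

# Once the container has started, the management UI is on localhost -
# no container IP address needed
curl http://localhost:15672
```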
My experience with virtual machines clouded my Docker expectations, though.
I saw that Docker was technically running on top of a virtual machine, and I immediately slipped into a virtual machine mentality and set of expectations.
But here’s the thing…
A Docker container is not a virtual machine.
Yes, it may technically be running on a virtual machine, but that is an implementation detail – hidden behind the scenes – not a feature to be used.
An application hosted in a Docker container is a “virtualized” (containerized) application, not a virtual machine.
Yes, you can find Docker images that include a full Linux distribution in them.
These distributions are there to support the application that is contained within, however. They are not to be executed as if they were a full, general purpose virtual machine.
This perspective matters, because it sets the expectations for a containerized app.
Imagine, for a moment, that you have two Node / Express apps that both use port 3333.
What happens when you try to run both of them on the same machine?
You get a big fat error, saying the port is already in use.
Now, does it matter that the application is hosted in a Docker container?
Not really – at least, not from the perspective of the host machine when you try to bind port 3333 to more than one container.
You can’t run two apps on the same port of the same IP address. Not with any app on your machine directly, and not with Docker.
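The same collision shows up with Docker’s port publishing. As a sketch (the image name is a placeholder), try publishing the same host port from two containers:

```shell
# The first container grabs host port 3333
docker run -d -p 3333:3333 my-node-app

# The second asks for the same host port and is refused, with an error like:
# "Bind for 0.0.0.0:3333 failed: port is already allocated"
docker run -d -p 3333:3333 my-node-app
```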
And why, you might ask? Because …
A Docker container is application virtualization.
And application virtualization is not the same as building a complete virtual machine to run a single service.
A Docker container happens to contain everything the application needs to run. But the intent and behavior of that container is to run the application on behalf of your host machine.
It’s a black box that allows you to cleanly separate applications that need different versions of the same dependency.
It gives you the ability to stop and start a server or service as if it were running on your computer directly, without actually installing or configuring it there.
Docker’s job is to let you sandbox different apps in a system, so they don’t clobber each other’s configuration.
A Docker container is art hanging on a wall.
Art occupies a physical space. And due to those pesky laws of physics, one piece of art will prevent another from occupying the same space.
This was the mistake I made in my view of Docker.
I expected each Docker container to be a separate art gallery. In reality, each container is a piece of art hanging on the wall.
I should have been asking the artist about the intent or meaning of their painting, enjoying it for what it is.
Instead, I wanted to know why there wasn’t an outlet in the wall. I wanted the artist to help me with construction work – something that they may know nothing about.
I decided to try and tear up the wall on which the art was mounted, because I thought the wall was a fundamental part of the installation.
And the installation, as I saw it, wasn’t living up to my very inaccurate expectations.