A few days ago I deleted all of my Docker containers, images and data volumes on my development laptop… wiped clean off my hard drive.
And yes, I panicked!
But after a moment, the panic stopped; gone instantly after I realized that when it comes to Docker and containers, I’ve been doing it wrong.
Wait, You Deleted Them … Accidentally?!
If you build a lot of images and containers, like I do, you’re likely going to end up with a very large list of them on your machine.
Go ahead and open a terminal / console window and run these two commands:
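The two commands in question are the ones referenced later in this post, which list all of your containers and all of your images:

```shell
# list all containers, including stopped ones
docker ps -a

# list all images, including untagged (dangling) ones
docker images
```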
Chances are, you have at least half a dozen containers with random names and more than a few dozen images with many of them having no tag info to tell you what they are. It’s a side effect of using Docker for development efforts, rebuilding images and rerunning new container instances on a regular basis.
No, it’s not a bug. It’s by design, and I understand the intent (another discussion for another time).
But the average Docker developer knows that most of these old containers and images can be deleted safely. A good Docker developer will clean them out on a regular basis. And great Docker developers… well, they're the ones who automate cleaning out all the old cruft, keeping their machines running smoothly without letting Docker-related artifacts eat the entire hard drive.
Then, there’s me.
DANGER, WILL ROBINSON
For whatever reason, I realized it had been a while since I had cleaned out my Docker artifacts. So I did what I always do: hit Google and the magic answers of the internet for all my shell-scripting needs.
My first priority was to remove all untagged images. A quick search and click later, I had a script that looked familiar pasted into my terminal window and I was hitting the enter button gleefully.
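What I had intended to run was the usual dangling-image cleanup. The post doesn't show the exact script, but a typical version looks something like this:

```shell
# remove untagged (dangling) images;
# -f "dangling=true" filters for images with no tag, -q prints only their IDs
docker rmi $(docker images -f "dangling=true" -q)
```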
It wasn’t until a moment later – when I ran “docker images” again, and saw that I still had a dozen untagged images – that I figured out something was wrong.
Looking back at the page from which I copied the script, I saw the commands sitting under a heading that I had previously ignored. It read,
“Remove all stopped containers.”
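The widely circulated snippet under that heading was, at the time, something like this (the exact script I pasted isn't reproduced in the post):

```shell
# remove ALL containers listed by "docker ps -a -q";
# running containers will error out, but every stopped
# container -- and its writable layer -- is deleted
docker rm $(docker ps -a -q)
```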
Well, good news! All of my containers were already stopped, so guess what happened?
The panic hit hard as I quickly re-ran “docker ps -a” to find an empty list.
The Epiphany And The Evanescent Panic
As fast as my panic had set in, it left. Only a mild annoyance with myself for making such a simple mistake remained. And the only reason I felt even that was knowing I would have to recreate the container instances I needed.
That only takes a moment, though, so it’s not a big deal.
In the end, the panic was gone due to my realization of something that I’ve read and said dozens of times.
From the documentation on Dockerfile best practices:
Containers should be ephemeral
The container produced by the image your Dockerfile defines should be as ephemeral as possible. By “ephemeral,” we mean that it can be stopped and destroyed and a new one built and put in place with an absolute minimum of set-up and configuration.
I’ve used the word ephemeral, when talking about Docker containers, at least a dozen times in the last month.
But it wasn’t until this accidental moment of panic that I realized just how true it should be, and how wrong I was in my use of containers.
The Not-So Nuclear Option
The problem I had was the way in which I was using and thinking about containers, and this stemmed from how I viewed the data and configuration stored in them.
Basically, I was using my containers as if they were full-fledged installations on my machine or in a virtual machine. I was stopping and starting the same container over and over to ensure I never lost my data or configuration.
Sure, some of these containers used host-mounted volumes to read and write data to specific folders on my machine. For the most part, however, I assumed I would never lose the data in my containers because I would never delete them.
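For reference, a host-mounted volume is declared when the container is started; the data lives on the host, so it survives even if the container is deleted. The image and paths here are hypothetical placeholders:

```shell
# map ./pgdata on the host into the container's data directory,
# so deleting the container does not delete the database files
docker run -d --name my-postgres \
  -v "$(pwd)/pgdata:/var/lib/postgresql/data" \
  postgres
```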
Well, that clearly wasn’t the case anymore…
I see now that what I once described to a friend as "the nuclear option" of deleting all stopped containers is really more like a dry-erase marker.
I’m just cleaning the board so I can use it again.
A Defining Moment
My experience, moment of panic, and realization generated this post on Twitter:
idea: if deleting all of your #Docker containers would cause you serious headache and hours of work to rebuild, you’re doing Docker wrong
— Derick Bailey (@derickbailey)
And honestly, in reflection, this was a defining experience.
Reading and talking about how a Docker container is something I can tear down, stand up again, and continue from where I left off is one thing.
But having gone through this, I can now see it applied directly to my own efforts.
Now the only minor annoyance that I have is rebuilding the container instances when I need them. The data and configuration are all easily re-created with scripts that I already have for my applications. At this point, I’m not even worried anymore.
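A rebuild script along those lines can be very small. This is a sketch, not the script from the post; the image name, port, and paths are hypothetical placeholders:

```shell
#!/bin/sh
# Recreate the app container from scratch; safe to run at any time.

# Remove any old instance (ignore the error if none exists).
docker rm -f my-app 2>/dev/null

# Rebuild the image from the project's Dockerfile.
docker build -t my-app-image .

# Start a fresh container; the host-mounted volume preserves the data.
docker run -d --name my-app -p 8080:8080 \
  -v "$(pwd)/data:/app/data" my-app-image
```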
That’s how Docker should be done.