Several years ago, I found myself sitting in a classroom on a Saturday morning.
It was an exciting day for me, attending my first code camp. I was surrounded by other developers with a shared enthusiasm for what we do, and had already seen several outstanding presentations.
The subject for the classroom in which I now sat was Resharper – a plugin for Visual Studio that added an incredible amount of power and flexibility for editing and restructuring code. I already knew the basics, having installed it on my computer at work. But it was not something I felt I knew how to use efficiently.
Sure, I could click the squiggly red lines and the pop-up menu that came with Resharper. But I knew there was more to it than this. I knew there had to be a reason, beyond these simple, surface-level features, that made so many developers so excited about it.
That’s what made this Saturday morning session so important – knowing that I was about to see the real power of this tool.
With fluidity, the presenter demonstrated feature after feature. I was enthralled by the ease with which he edited and restructured code. More than anything else, though, I marveled at how he used Resharper’s features without once touching a mouse or trackpad. There were no arrow keys used to move around the menu system, either. It was all keyboard shortcuts and commands.
Furiously, I scribbled notes. I knew this was just the thing I needed, and I wanted to copy every technique shown. I could see in my mind, just how easy life as a developer was about to become!
Fast forward to Monday morning at the office. Opening my project in Visual Studio, I started editing code, excited about the opportunity to use Resharper with my new-found efficiency.
A moment later, I saw my opportunity. Checking my cheat sheet, I pressed a few buttons on my keyboard, and …
Wait. That wasn’t the feature I wanted.
Quickly, I checked my notes and tried another key combination. Again, it wasn’t the feature I wanted. I tried again. And again. And a few keystrokes later, I hit a brick wall.
Every last ounce of enthusiasm and excitement I had was draining – fast – to be replaced with a sinking feeling, like I was struggling in quicksand. Reluctantly, I reached for my mouse and clicked the drop-down menu to find the option I was looking for.
It wasn’t enough to have the tool or to know the features existed.
There was a level of efficiency that eluded me, still. One that I wanted. One that I had seen the previous Saturday. But one that I could not seem to achieve.
And so it is with many of today’s tools for modern software development. Simply having the tools available – and even the ability to use them – doesn’t create efficiency.
Take Docker as a more modern example.
Once you learn the basics, it can offer a tremendous amount of value in development, testing, deployment and beyond. But the value it offers doesn’t imply anything about efficiency of use.
It only takes a few instructions in a Dockerfile to create a working Node.js application, for instance.
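A minimal Dockerfile of that sort might look something like this (a sketch for illustration only; the base image, port, and entry point are assumptions that will vary by project):

```dockerfile
# Illustrative sketch of a minimal, naive Node.js Dockerfile.
# Base image tag, exposed port, and entry point are assumptions.
FROM node:18

WORKDIR /app

# Copy everything, source and package manifests alike, in one step
COPY . .

# Install dependencies; this re-runs on every rebuild, because the
# COPY above invalidates Docker's cache whenever any file changes
RUN npm install

EXPOSE 3000
CMD ["node", "index.js"]
```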
But with this Dockerfile – pulled directly from a recent project I built – efficiency is not a word that comes to mind.
The problem is that efficiency can only be measured in light of a goal. So what’s the goal here? To write a Dockerfile in as few instructions as possible? I would think not.
Rather, the goal should be to write a Dockerfile that runs the application as expected, and is able to be built and re-built as quickly as possible.
If you were to build a Dockerfile like the one shown, your project would run quite well. However, every time you needed to rebuild the Docker image, you would incur another full round of “npm install”.
This doesn’t sound too bad until you realize that every line of code change and every tweak to the environment configuration requires a rebuild and another round of “npm install”.
You can start to work around this with a host-mounted volume for editing code, of course. In fact, this is encouraged. It means you don’t have to rebuild your image every time you change a line of code.
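As a sketch, mounting your project directory over the image’s code might look like this (the image name, container path, and port here are illustrative, not taken from any particular project):

```shell
# Mount the current directory over /app in the container, so code
# edits on the host are visible without rebuilding the image.
# (image name, path, and port are illustrative)
docker run -v "$(pwd)":/app -p 3000:3000 my-node-app
```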
But now you’re faced with a new problem: host-mounted volumes are notoriously slow. What was once a 2 to 3 minute install for your dependencies is now more like 5 to 10 minutes. And every time you decide you need to rebuild your base image or start a new container in which you want to install development dependencies, you’re stuck waiting for “npm install”. Again.
How, then, do you create efficiency in building and maintaining your Docker image, to match the value that Docker brings at runtime and deployment?
The short answer is to use the tools you have, more effectively.
When used correctly, Docker can cache your Node.js modules for you, eliminating the npm delay from your Docker projects almost entirely.
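One widely used pattern along these lines, sketched here as an illustration rather than the definitive technique, leans on Docker’s build cache by copying the package manifests and installing dependencies before copying the rest of the source:

```dockerfile
# Illustrative sketch of a cache-friendly Node.js Dockerfile.
FROM node:18

WORKDIR /app

# Copy only the package manifests first. This layer, and the
# npm install layer below it, are re-used from Docker's build
# cache as long as these files are unchanged.
COPY package.json package-lock.json ./
RUN npm install

# Copy the application source last; editing code invalidates
# only this layer, not the cached npm install above it.
COPY . .

CMD ["node", "index.js"]
```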
This isn’t the same kind of caching as the web, though. You’re not using a better CDN or a browser cache, and you’re not proxying HTTP requests to another server. You’re not switching package managers, either.
What kind of cache is it, then, if not the kind we build into web apps or get from a different package manager?