I’ve recently been moving a lot of my dev workflows over to be containerized by default. What this means is that I try to make it so that each project can run in isolation in a container before I go forward with any kind of development.
There are a lot of reasons for doing this but for me it had to do with ensuring that when I opened up my projects on different computers - whether they were personal computers I was using or a remote host - that I got deterministic results. At the beginning, I found this to be tedious but as I’ve migrated more and more projects I’ve found there to be a fair amount of shared logic / functionality.
One thing I’ve found myself doing over and over is setting up containers for apps that run with npm - whether the app uses webpack, TypeScript, Parcel, etc.
So I wanted to share how I’m running these kinds of projects inside Docker containers.
If you’re not familiar with Docker, I’d recommend reading up on it a bit, but all you really need to follow this tutorial is to have Docker up and running on your machine (this will let us build and run containers with the following snippets).
Node apps (that is, apps managed with npm) are relatively standalone, so the only thing we really need to stand up a basic Node app is the official node image. Here’s a Dockerfile that pulls in node, exposes port 3000 on the container, and copies the directory the Dockerfile is in into the container so it has access to it.
```dockerfile
# gets the node container image
FROM node:latest

WORKDIR /home/app
USER $UID

# opens and exposes port 3000
ENV PORT 3000
EXPOSE 3000

# copies the current directory into the image
COPY . .

RUN echo "starting operation"
```
If you just want to get started, this is enough. You could build and run this container with a few command-line flags to, for instance, open a bash shell within it and link a port on your host device to the container’s port 3000 so that you can communicate with it.
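As a sketch of what that manual invocation might look like (the image tag and host port here are my own choices, not anything the project requires):

```shell
# build an image from the Dockerfile in the current directory,
# tagging it so we can refer to it by name (tag is arbitrary)
docker build -t p5js-dev .

# run it interactively (-it), remove the container on exit (--rm),
# and publish the container's port 3000 as port 1339 on the host (-p)
docker run -it --rm -p 1339:3000 p5js-dev bash
```

Remembering and retyping those flags every time is exactly the friction that docker-compose removes.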
Personally, though, I find this kind of flag copy pasta a bit cumbersome. Instead I prefer to use docker-compose to encode this configuration for me.
In this docker-compose file, I create a service, tell the service that it will use the Dockerfile in the current directory to build itself, link port `1339` of my host device to port `3000` of my container, map the current directory to the container’s `/home/app` directory, and give it read-only access to `/etc/passwd` to avoid permission issues. This may seem like a lot at the outset, but I’ve found that most of my services rarely need more than this, so it’s easily shareable and, because it lives in code, is easier to keep track of and make changes to.
```yaml
version: "3"
services:
  p5js_dev_env:
    build: .
    container_name: p5js
    ports:
      - 1339:3000
    volumes:
      - ./:/home/app
      - /etc/passwd:/etc/passwd:ro
```
To build and run a new container using the Dockerfile and docker-compose.yml we just created, we can use:
```shell
docker-compose build && docker-compose run --rm --service-ports p5js_dev_env bash
```
This tells docker to:
- build the images defined in docker-compose.yml
- run the `p5js_dev_env` service with:
  - `--rm` - remove the container when we exit the interactive shell
  - `--service-ports` - map the container’s ports to the host device network (so our port mapping takes effect)
  - `bash` - open a bash shell within the new container
So this will build a container with npm / node already installed, pipe in the files you have in your directory, and open up a bash shell that you can use to control that environment. In my case, my app has an npm `start` script, so I’d usually just run `npm start` as the next command to get my project up and running.
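Put together, a typical session inside the container might look something like this (the `start` script itself is whatever your project’s package.json defines - mine is hypothetical here):

```shell
# inside the container's bash shell
npm install   # install dependencies into the container, not the host
npm start     # run the project's start script, serving on port 3000

# then, from a browser on the host, visit http://localhost:1339
# (1339 is the host port we mapped to the container's 3000)
```

Because the current directory is volume-mapped into `/home/app`, edits you make on the host show up inside the container immediately, so watch-mode tools like webpack or Parcel keep working as usual.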