There is little doubt that Docker has improved developer experience over the years, but continually rebuilding images just to pull in code changes takes much of that benefit away. So let's look at how we can introduce hot reloading in our React and Express apps, using Docker.
Important: this is great for development but should not be used in production!
First, this will be our directory structure, where client is a React app and server is an Express app:
|-- client
|   |-- Dockerfile
|
|-- server
|   |-- Dockerfile
|
|-- docker-compose.yml
Next, let's install React and Express, and create the necessary Docker files. Since our focus is on introducing hot reloading, we'll just use generators to create our apps:
npx create-react-app client
npx express-generator --no-view server # No need for views in this example
touch client/Dockerfile
touch server/Dockerfile
touch docker-compose.yml
We'll want to add nodemon as a dev dependency to our Express app, in order to introduce automatic reloading:
# Run this within the server directory
yarn add -D nodemon
And we'll need to update server/package.json so that nodemon, rather than node, is used to start our app:
"scripts": {
  "start": "nodemon ./bin/www"
},
As well as server/bin/www to use port 3001:
var port = normalizePort(process.env.PORT || 3001);
In order to proxy requests to the API, we'll need to add the proxy property in our client/package.json file:
{
...
"proxy": "http://server:3001",
...
}
The hostname server will resolve to the server container's IP, thanks to the default bridge network Docker Compose creates for our services.
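To see the proxy in action, the client can request a relative path and let the CRA dev server forward it to the Express container. Here's a minimal sketch of a fetch helper (the file and function names are my own; /users is the route express-generator scaffolds, which responds with plain text):

```javascript
// client/src/api.js (hypothetical helper file)
// Because of the "proxy" setting, a relative request made through the CRA
// dev server is forwarded to http://server:3001, so the client code never
// hard-codes the API host.
function fetchUsers() {
  // express-generator scaffolds a GET /users route that responds with text
  return fetch('/users').then(function (res) {
    if (!res.ok) {
      throw new Error('Request failed with status ' + res.status);
    }
    return res.text();
  });
}
```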
Great, now we can add content to our Dockerfiles:
# client/Dockerfile
FROM node
WORKDIR /usr/src/client
COPY package.json .
RUN yarn install
COPY src ./src
COPY public ./public
EXPOSE 3000
CMD [ "yarn", "start" ]
# server/Dockerfile
FROM node
WORKDIR /usr/src/server
COPY package.json .
RUN yarn install
COPY . .
EXPOSE 3001
CMD [ "yarn", "start" ]
The above is a pretty standard setup for Dockerfiles; we're using a node image, setting our working directory, copying the necessary files, exposing the port, and setting the entry command.
To orchestrate building these images, we'll need to set up our docker-compose.yml file:
version: '3'
services:
client:
build: client
ports:
- "3000:3000"
restart: always
command: yarn start
volumes: # This is where the magic happens!
- ./client/src:/usr/src/client/src
- ./client/public:/usr/src/client/public
server:
build: server
ports:
- "3001:3001"
restart: always
command: yarn start
volumes: # This is where the magic happens!
- ./server:/usr/src/server
- /usr/src/server/node_modules/
The key part above is the volumes setting; this binds your local files into your Docker container. Looking at the line ./client/src:/usr/src/client/src, we can see that all local files under src will be bound into the container's src directory. The power here is that when you make a change locally, it's reflected inside your container, where those files are being watched, and that triggers the hot reload. The bare /usr/src/server/node_modules/ entry is an anonymous volume; it prevents your local node_modules directory (which may be empty, or built for a different platform) from shadowing the one installed in the image.
Alright, we should be all set! Let's give it a try by running:
docker-compose up
Once Docker builds the images and starts the containers, you'll be able to access the client application at http://localhost:3000. To confirm hot reloading is working, we can make a small change in client/src/App.js and watch the change automatically take place in our browser. Similarly, we can add a route to our Express server and it'll be available immediately, thanks to nodemon.
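For example, a minimal new route might look like the sketch below (the file name and route path are my own choices; express-generator's layout would also have you register it in server/app.js). The handler is written as a plain named function so it can be exercised without starting a server:

```javascript
// server/routes/ping.js (hypothetical new file, added while the
// containers are running, to demonstrate the reload).
// In express-generator's layout you would also register it in server/app.js:
//   app.use('/ping', require('./routes/ping'));

// A plain handler function that responds with a small JSON payload
function pingHandler(req, res) {
  res.json({ message: 'pong' });
}

// In the real file you would wire it up like the generated routes do:
//   var router = require('express').Router();
//   router.get('/', pingHandler);
//   module.exports = router;
```

Saving this file (and the matching app.js change) is enough for nodemon to restart the server inside the container.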
It's important to note that changing files outside those bound via our volumes mounts will not be reflected until the image is rebuilt. In other words, hot reloading only applies to the files defined in our volumes definitions within docker-compose.yml.
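One caveat: on some host setups (notably certain Docker Desktop versions and mounted filesystems), file-change events don't propagate into the container, so nothing reloads even though the bind mounts are correct. Both watchers can fall back to polling; here's a sketch of the extra compose settings (CHOKIDAR_USEPOLLING comes from create-react-app's advanced configuration docs, and -L enables nodemon's legacy polling mode):

```yaml
# docker-compose.yml (additions, only needed if file watching isn't triggering)
services:
  client:
    environment:
      - CHOKIDAR_USEPOLLING=true   # create-react-app's watcher polls for changes
  server:
    command: npx nodemon -L ./bin/www   # -L = legacy (polling) watch mode
```

Polling is more CPU-hungry than event-based watching, so only reach for this if reloads aren't firing.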
That should be it! This approach makes working in Docker much more streamlined and is a pattern I often use.