Dario Venneri

Learn Docker by creating a dev environment for Laravel

Apr 18th 2025

In this article you will learn Docker and create a multi-container environment for developing Laravel applications.

We're not gonna need anything installed on our machine but Docker: everything we need (composer, npm, etc.) will run inside containers. We're also gonna use npm run dev and make it work from inside the container, hot reload included.

Many tutorials I encountered focus on teaching Docker in general and start by explaining the settings you would mostly use in production. But would you really use in production something you aren't comfortable using while developing?

Docker in brief

Important note: if you are on Windows I suggest you use WSL2 to get better performance, the difference is gonna be really big and noticeable.

Docker's advantage during development is that you get a stable and identical environment to run your app. You no longer have to worry about developing on one OS and deploying on another OS, you don't have to keep versions synchronized and you can also easily install any software version you need really fast.

Let's go to the command line and write:

docker run nginx

this command will pull the latest nginx image from https://hub.docker.com and run it on your PC. You now have an nginx server running on your machine (but right now there is no port mapping, so you can't access nginx from your browser).
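
If you wanted to reach it from the browser you would add a port mapping, for example like this (8080 is just an arbitrary host port), but don't worry about it yet, we'll map ports properly later with docker compose:

docker run -p 8080:80 nginx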

Nginx is probably running in the foreground right now and you cannot write any command, so let's kill the process with ctrl+c and try

docker ps -a

you will see a list of containers, in this case just the nginx container, which is no longer running.

Try now

docker run -d nginx

where -d stands for detached: this way the container will run in the background and the terminal stays free. List the containers again and you will see that there are two of them. Running nginx again didn't start up the previous container but created and started a new one.

CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS                     PORTS     NAMES
88cdeba47bbd   nginx     "/docker-entrypoint.…"   4 seconds ago    Up 3 seconds               80/tcp    sleepy_feistel
1e3ff3526dc3   nginx     "/docker-entrypoint.…"   11 seconds ago   Exited (0) 8 seconds ago             jovial_albattani

For some this may feel confusing because it may look like a waste of resources, but containers are optimized to consume as few resources as possible (we won't get into how they do it in this article), and this behavior is actually desirable when it comes to container workflows; you'll soon see why.

Now let's try to enter the nginx container; to do so we can use either the ID or the random name docker assigned to it.

docker exec -ti sleepy_feistel /bin/sh

we are now inside the container; since the nginx image is based on Linux we can run whatever command we want. Think of it like a virtual machine (but beware, this is not an actual VM) or, even better, an isolated environment. We could create files and install programs and things would change inside the container.
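
For example, just to look around (ordinary shell commands, nothing docker-specific):

# check the nginx version, peek at its config, leave a file behind, then exit
nginx -v
ls /etc/nginx/conf.d
touch /tmp/i-was-here
exit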

But changing a running container is not desirable at all!!!

Although you can do it, modifying the state of a running container is anathema, a no-no outside of learning and debugging.

Containers have to be stateless; this way we can destroy them without concern and create new ones fast. The ideal container does its job and disappears. While containers must be stateless, a real-world application has data that changes and must be persisted, and of course there are ways to achieve this; we're gonna see them very soon.

Meanwhile, we're gonna move to the next step, which is exploring the Dockerfile. A Dockerfile is a series of instructions from which a new image can be built. Let's create a folder and in this folder let's create a file called Dockerfile:

FROM php:8.4.5-fpm-alpine3.21

RUN apk add --no-cache \
    curl \
    libpng-dev \
    oniguruma-dev \
    libxml2-dev \
    zip \
    unzip \
    libzip-dev

RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd zip

WORKDIR /var/www/html

COPY ./hello.php .

CMD ["php", "hello.php"]

in the same folder let's also create a simple hello.php and let it echo something just so we can test things (we're gonna run the hello.php in the command line, so make it simple).
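
Something like this is more than enough (any output will do):

<?php
// hello.php: print a line so we can see the container ran our code
echo "Hello from inside the container!" . PHP_EOL;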

  • the FROM instruction tells docker to pull the php:8.4.5-fpm-alpine3.21 image from the registry (in our case dockerhub) and this is going to be the first layer.
  • with RUN we can run any command: we install various dependencies, and when we need to install PHP extensions we can use the very convenient docker-php-ext-install, which is provided by the base PHP image.
  • Then we set up a WORKDIR and copy the hello.php file into it; finally with CMD we define what command to run when the container starts.

Now we can build the image

docker build -t my-first-php-app .

-t my-first-php-app tags your image with the name my-first-php-app, and the dot at the end refers to the current directory. Now we have built an image based on the Dockerfile which also contains our "app".

With docker image ls it's possible to list all the images created, so with docker image ls my-first-php-app we can also see the image we have just created.

We can now spin up a container from our image just as if we had pulled it from Docker Hub:

docker run my-first-php-app

A container will start and the command given via CMD will be executed, giving us the result of hello.php

When developing we don't really want to build an image with our app each time we make a change; it would be absolutely INSANE. We're gonna see soon what to do in our case. For now remember that you can copy files inside the image and they will be part of the container.

Before moving on, let's clean things up: if we try docker ps -a we can see that our app container is still there, and if we were to run it again another container would be created. To avoid creating unlimited containers we can run the docker run command with the --rm flag; this way, when the container stops it gets automatically removed.

docker run --rm my-first-php-app

right now things may seem a little complicated, but you'll soon see everything come together.

We now have multiple containers lying around, some running and some stopped. There is no single docker command to stop and remove them all, so the easiest thing to do on Windows and Mac is to just use the GUI (Docker Desktop), while on Linux you can try

docker ps -aq | xargs docker stop | xargs docker rm

this will have docker list the container IDs, then xargs will apply the subsequent command to each ID (stop them all, then remove them all). An alternative is docker rm -f $(docker ps -aq), where -f also force-removes containers that are still running.

persisting data: volumes vs bind mounts

Containers are made to be created and destroyed fast and without any care. But applications change data, so what do we do to persist data?

There are two options available, one is volumes and the other bind mounts.

As with many things, practice will make them clearer but meanwhile let's see what these two options are and when to use them.

  • Bind mounts are for sharing with the host: they work by mirroring a folder on the host to a folder inside the container. So for example you put your files inside c:\myapp\data and you mirror this folder with a folder inside a container, say /var/www/html. Every change you make in the folder from your host will be reflected inside the container that mounts the folder, and vice versa. This way you will be able to share data between containers and also change the files fast, which is essential when developing. The downside is that performance is not always great, and if not set up correctly you may also end up with files you don't want inside your project's folder.

  • Volumes, meanwhile, are files hidden away on the host system that you only use via docker. They will not be accessible from the outside (unless you wanna suffer hard) and they're where the container will persist its data. You can easily mount a volume so that it matches a specific folder inside a container, then safely destroy the container, since the volume will persist, and mount it somewhere else. Volumes have better performance and are very useful for data you don't need to change or access from outside the container, like database data (examples of both follow right after this list).
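
As a sketch, here is how the two look with a plain docker run (the folder, volume name and images are just placeholders for illustration):

# bind mount: mirror the host's ./app folder into the container
docker run -d -v "$(pwd)/app":/var/www/html nginx

# named volume: let docker manage the storage behind /var/lib/mysql
docker run -d -e MYSQL_ROOT_PASSWORD=password -v mydata:/var/lib/mysql mysql:8.4.4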

let's build the actual dev environment

Now that you know about building images for your app, you may be tempted to put everything into one container: the server, PHP, MySQL, node, all running inside one neat container. Technically there would be nothing wrong with that, and in some cases it may even be advisable, but here we're gonna keep all the pieces separated.

We're gonna have a container for nginx, one for php, one for node, etc and they're all gonna work together.

To do it we're gonna take advantage of docker compose. Beware, there are two commands and they both work: docker-compose and docker compose. docker-compose is a standalone tool and refers to the V1 written in Python, while docker compose (no dash) is the V2 written in Go and integrated into docker. While V1 should work fine with everything we're doing here, we're going to use V2, so make sure to always write docker compose.
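
If you're not sure which version you have, you can check with:

docker compose version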

Docker compose lets us launch a group of containers all together and set up their options easily via one file called docker-compose.yml.

Let's make an empty folder which is where all the files of our project will live and create a file named docker-compose.yml

Now in the docker-compose.yml file we write

networks:
  super_app:
    name: super_app

services:
  database:
    image: mysql:8.4.4
    container_name: mysql
    volumes:
      - "super_app_mysql:/var/lib/mysql"
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: laraveldb
      MYSQL_USER: laravel
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
    networks:
      - super_app

volumes:
  super_app_mysql:

Here is what we're doing:

  • we create a network, we call it super_app. This is the most basic form of networking and we will make sure that all containers run on this network so they can communicate with each other.
  • we list the services we're gonna create, and the first service is going to be called just database. We instruct it to use the image mysql:8.4.4 and we give the container a name.
  • volumes: this is how we keep the container stateless while still persisting the data. We know that as mysql works it's gonna modify /var/lib/mysql, so what we're doing is mounting a volume that we've called super_app_mysql. All the data will be saved inside this volume, and when we destroy the mysql container we will be able to mount this volume on new containers easily, keeping the data. Where will this volume reside on our system? Docker is gonna put the files of this volume in a hidden folder on our system, and we should only access it, and eventually delete it, using docker.
  • with ports we tell docker to map a port on the host to a port inside the container. If we don't do this, the database will only be accessible from the container itself or from containers connected to the container's network, but we want to access it with tools like DBeaver, HeidiSQL, etc.
  • with environment we set the environment variables; in this case the variables do something because the mysql image is built to read them. Of course we could create an image with our own environment variables if we wanted.
  • we tell docker to put the database on the network
  • under the top-level volumes key we declare the volume we're gonna use and docker will create it for us (volumes can also be created manually)

We can now run docker compose up --build -d, docker will build everything, create a network and spin up a container with our mysql database that will be exposed on port 3306. If we did everything correctly we should be able to connect to the database from the host system via localhost:3306 using the username and password provided.

Try to connect with your favorite SQL client and write some data. If everything went fine, we can bring everything down using docker compose down and move to the next step.
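
If you don't have a SQL client at hand, you can also use the mysql client that already ships inside the database container, with the credentials from the compose file:

# open a mysql shell inside the running database service
docker compose exec database mysql -u laravel -ppassword laraveldb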

One warning: docker compose will rely on the docker-compose.yml to see which services are there to bring down, so if you remove a service from the file you will have to destroy the container manually. You can type docker compose down --remove-orphans to remove orphaned containers

The mysql container is destroyed, but what about the data? Because we're using volumes, the data is saved on the host system, albeit in a hidden folder, so when we run docker compose up -d to have everything up again (with a whole new container), the data we've written before is going to be available.
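
If you're curious where that hidden folder is, docker can tell you. Note that compose usually prefixes the volume name with the project (folder) name, so copy the exact name from the listing rather than the placeholder used below:

docker volume ls
docker volume inspect yourproject_super_app_mysql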

With this we can see another strength of containers. Imagine that, as time goes on, something breaks inside the container. We can have a new one in seconds, starting from a state where it was working. So, unless the data itself caused the problem, things will work again immediately. If we had mysql running on the server itself and it stopped working, we would be forced to debug it; with docker we can come back fast. If you want to know more, search for "pets vs cattle" in devops.

Let's add a server

Before we move on, don't forget to bring everything down with

docker compose down

at this stage of your docker journey, always bring every container down when you change something; it will make your life easier.

We're now going to add nginx to our docker compose. Let's see how it's gonna look in our docker-compose.yml, under services we add:

  nginx:
    image: nginx:1.27.4
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./app:/var/www/html/
      - ./nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - php
    networks:
      - super_app

Now let's analyze what's under volumes: in this case we're using bind mounts instead of volumes. Yes, the name is volumes but we're actually using bind mounts. Using the ./ is what makes docker understand that we're not trying to use or create a volume but are referencing a folder on our system.

The reason we're using bind mounts is that we want to be able to change the files we're putting inside the container from the host; that's the folder where we're gonna develop our Laravel app, after all (and don't forget that the container will also be able to write in that folder).

Let's make a folder ./app on the host; this is where we will put our Laravel app's files. We also create a folder nginx/conf.d with a file nginx.conf, where we're putting the nginx configuration (so we end up with nginx/conf.d/nginx.conf).
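
At this point the project folder should look roughly like this (the Laravel files will land inside app/ later):

docker-compose.yml
app/
nginx/
    conf.d/
        nginx.conf

And here is the nginx.conf: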

server {
    listen 80;
    server_name localhost;
    root /var/www/html/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    index index.php index.html;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

This configuration has been taken from the Laravel documentation, with small modifications.

  • we're telling nginx that the document root is the /public folder, as it is on Laravel (see root /var/www/html/public;)
  • with the line fastcgi_pass php:9000; we're telling nginx to forward the processing of PHP files to a PHP FastCGI backend running at php:9000. php is the name we're going to give the service on the network (we could have given it any other name) and that service is gonna listen on port 9000.

You may have noticed in the docker-compose.yml file that for the nginx service we've added a depends_on, which means it depends on another service to run, php in our case. So, let's add this service.

Adding a php container

This container will process php files. Because nginx depends on this service let's put it before nginx:

  php:
    build:
      context: ./php
      dockerfile: Dockerfile
    image: super_app:php
    container_name: php_app
    user: "1000:1000"
    volumes:
      - ./app:/var/www/html/
    networks:
      - super_app

  • as you can see there is no port mapping with the host. PHP-FPM is going to listen on port 9000 and be available on the docker network, but there is no need to expose it to the host.
  • user: "1000:1000" sets the UID (User ID) and GID (Group ID) for the user inside the container. This is important when using mounted volumes because by default the container may use a different user and cause a mismatch in permissions between the host and the container or even between containers. Being that 1000 is the default id on most linux systems and WSL, this simple line should save you many headaches
  • instead of pulling an image directly from Docker Hub, we're gonna build our own image from our own Dockerfile, and with image: super_app:php we're gonna be able to easily reference it from other services. So we're gonna create a folder ./php/, put the Dockerfile there, and make sure to create a very basic but ready-to-go PHP installation:

FROM php:8.4.5-fpm-alpine3.21

RUN apk add --no-cache \
    curl \
    libpng-dev \
    oniguruma-dev \
    libxml2-dev \
    zip \
    unzip \
    libzip-dev

RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd zip

WORKDIR /var/www/html

We can now put a simple php file into app/public/ and test if everything is working correctly by running docker compose up --build -d and reaching localhost/ourphpfile.php from our browser
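
For example, a minimal check (info.php is just an arbitrary name):

# create a tiny test file on the host; the bind mount makes it visible to nginx and php
mkdir -p app/public
echo '<?php phpinfo();' > app/public/info.php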

If everything is okay we can move to the next step

Adding composer and installing Laravel

Many docker tutorials for Laravel have you install composer on your host machine and install Laravel before putting it into a container, but one of the biggest strengths of docker is having a portable environment, so why should we have to install composer on our system when a container can run it for us?

To run composer we just need to create another service in our docker-compose.yml.

  composer:
    build:
      context: ./composer
      dockerfile: Dockerfile
    container_name: composer
    entrypoint: [ "composer" ]
    user: "1000:1000"
    volumes:
      - ./app:/var/www/html/
    networks:
      - super_app

the Dockerfile:

FROM super_app:php

COPY --from=composer:2.8.7 /usr/bin/composer /usr/bin/composer

  • we start from the image we created earlier (super_app:php)
  • COPY --from=composer:2.8.7 /usr/bin/composer /usr/bin/composer pulls the composer:2.8.7 image from Docker Hub, takes the composer binary from it and copies it into our image.

We now build everything with docker compose up --build -d

If we've done everything correctly we can run composer with this command

docker compose run --rm --interactive composer [composer commands]

what this does is take the composer service, run it, and then remove the container it created once we're done; the --interactive flag lets us interact with the command if needed. Let's try to install Laravel, so enter the app folder and try:

docker compose run --rm --interactive composer create-project laravel/laravel .

If everything completed without errors, you can reach http://localhost/ and you will see Laravel has been installed! By the way, at the time of writing we're on Laravel 12.

entrypoint: [ "composer" ] is what told docker compose to run composer and let you add further instructions.

Also, when generating files beware of permissions; make sure you get them right.

The advantage of having composer as a docker service is that you have complete control over your composer version, no need to install stuff on your machine, maximum portability. Of course the disadvantage is that the commands can get kinda long, but that's something you can improve with bash aliases (see below), it's up to you!
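
For example, something like this in your ~/.bashrc (the alias names are just a suggestion):

# shorthands for the long docker compose invocations
alias dcomposer='docker compose run --rm --interactive composer'
alias dartisan='docker compose exec php php artisan'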

artisan

Now that we have installed Laravel, we have artisan available thanks to the container running PHP, so we just have to run

docker compose exec php php artisan make:model Post

This will execute artisan inside the php container.

If you wanted you could have created another container for artisan and run it in a similar way to composer:

  artisan:
    image: super_app:php
    container_name: artisan
    entrypoint: [ "php", "artisan" ]
    user: "1000:1000"
    volumes:
      - ./app:/var/www/html/
    networks:
      - super_app

docker compose run --rm --interactive artisan make:model Post

The problem with this is that this container would depend on the fact that we have installed Laravel using composer. This is kinda dirty.

You may also realize now that instead of making a service for composer you could have installed it into the php image and executed it as we've done with artisan (a sketch of this follows). It's up to you what to do.
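
As a sketch, assuming you keep the php Dockerfile from before, it would only take one extra line and you could drop the composer service entirely:

FROM php:8.4.5-fpm-alpine3.21

# ...same RUN and WORKDIR instructions as before...

# bake composer into the php image instead of using a separate service
COPY --from=composer:2.8.7 /usr/bin/composer /usr/bin/composer

You would then run it the same way as artisan: docker compose exec php composer [composer commands].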

Vite, node, npm run dev and all that stuff

Now we want to use npm and vite. I've seen many tutorials sidestep the following problem by just running npm run build and serving the built files, but that's not good if we want a nice and productive dev environment.

The problem with having the vite dev server running in docker is that we need to configure both the dev server and Laravel, so that assets get out from the dev server to the host and Laravel is able to link to them correctly.

Here is the way I solve this issue. First we create a service for npm

  npm:
    build:
      context: ./node
      dockerfile: Dockerfile
    user: "1000:1000"
    container_name: npm
    ports:
      - "5173:5173"
    volumes:
      - ./app:/var/www/html/
    entrypoint: ["npm"]
    networks:
      - super_app

We map the port where vite is gonna run; then in the Dockerfile (inside a ./node folder, as set by the build context) we add node:

FROM node:20.18.2-alpine3.21

RUN apk add --no-cache \
    bash \
    curl \
    jq

RUN npm install -g npm@11.2.0

WORKDIR /var/www/html/

# Set environment variables
ENV NODE_ENV=development
ENV PATH /var/www/html/node_modules/.bin:$PATH

let's build everything and test this service by running npm install:

docker compose run --rm --interactive npm install

Now let's change the welcome.blade.php view:

<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <title>Laravel</title>
        @vite(['resources/css/app.css', 'resources/js/app.js'])
    </head>
    <body class="bg-[#FDFDFC] dark:bg-[#0a0a0a] text-[#1b1b18] flex p-6 lg:p-8 items-center lg:justify-center min-h-screen flex-col">
        <h1 class="bg-amber-700 text-8xl">Hello, nice docker tutorial</h1>
    </body>
</html>

if we now try to access localhost we will get the missing vite manifest error. If we run

docker compose run --rm --interactive npm run build

we're gonna build the files and things are gonna work again, but this is not what we want in a development environment.

Let's delete the files that npm run build generated in the public folder and move on.

Then let's try to run the vite dev server:

docker compose run --rm --service-ports --interactive npm run dev

we need to add --service-ports because otherwise docker compose run will ignore the ports specified in the compose file.

However, http://localhost:5173/@vite/client will not be reachable:

tutorial1-npm-run-9947becb7339   tutorial1-npm        "npm run dev"            npm        About a minute ago   Up About a minute               0.0.0.0:5173->5173/tcp

by looking at the container's status we see the ports are mapped, so why can't we reach the server? Because while the ports are mapped, the vite dev server is not exposed to the network; it says so in the command line output too:

➜  Network: use --host to expose

The simplest way to expose the vite dev server to the network is to add the --host flag to the vite command, so open package.json and change the dev script:

"dev": "vite --host"

now vite is open to the network and reachable via http://localhost:5173/@vite/client, but Laravel still cannot reach it. The reason is that the HTML served by Laravel still doesn't link to localhost:5173, so open vite.config.js and in defineConfig add, after plugins:

server: {
    origin: 'http://localhost:5173',
}

now Laravel can pull the assets from the vite dev server. But if we make a change to our blade view, we see the browser doesn't refresh the page. This is due to CORS policies, so you need to configure them too, and in the end your defineConfig in vite.config.js will look like this:

export default defineConfig({
    plugins: [
        laravel({
            input: ['resources/css/app.css', 'resources/js/app.js'],
            refresh: true,
        }),
        tailwindcss(),
    ],
    server: {
        origin: 'http://localhost:5173',
        cors: {
            origin: ['http://localhost'],
            credentials: true,
        },
    }
});

https for your local container

To get certificates locally, you can follow these steps: first install mkcert (https://github.com/FiloSottile/mkcert), which will issue certificates that are trusted on your machine.

Then we can create a folder ./certs/mkcert; in this folder run mkcert -install once and then mkcert localhost to generate a certificate and key for localhost. You will get two files (see the commands below).

Important note: if you're using WSL2 on Windows and generate the certificates there, they will not be trusted by the browser. The easiest thing to do is to run mkcert -install and mkcert localhost on Windows and copy the certificates into the project folder on WSL2.
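
In short, the steps look like this, run from the project root (on Windows, run the mkcert commands outside WSL2 as per the note above and copy the two files over):

# trust mkcert's local CA once, then generate a certificate and key for localhost
mkcert -install
mkdir -p certs/mkcert
cd certs/mkcert
mkcert localhost
# this produces localhost.pem and localhost-key.pem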

We now have to put these files inside the nginx container, so we change our docker-compose.yml:

  nginx:
    image: nginx:1.27.4
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./app:/var/www/html/
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./certs/mkcert:/etc/nginx/certs/mkcert
    depends_on:
      - php
    networks:
      - super_app

make sure to map port 443 and mount the certificate files inside the container, then modify the nginx.conf file to include the certs and redirect to https:

server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name localhost;
    root /var/www/html/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    index index.php index.html;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    ssl_certificate /etc/nginx/certs/mkcert/localhost.pem;
    ssl_certificate_key /etc/nginx/certs/mkcert/localhost-key.pem;
}

now all http requests will be redirected to https. Also, because you're now using https, you will need to modify the CORS origin in the vite.config.js file:

cors: {
    origin: ['https://localhost'],
    credentials: true,
},

and with this, everything is set. We will not go into how to add redis or other services, because at this point you should be comfortable enough to research how to do it by yourself (I hope so).

Inspiration, production, etc

The goal of this tutorial was to give you a basic dev environment and also give you a way to learn more. To do so you may want to check these projects: