Imagine using a VPS to host multiple websites, each running in Docker.
As only one application can bind to the VPS's port 80 at any given time, one way to solve this is to use nginxproxy/nginx-proxy:
```yaml
services:
  proxy:
    image: nginxproxy/nginx-proxy:....
    ports:
      - "80:80"
    [...]

  website1:
    environment:
      - VIRTUAL_HOST=domain1.com
    [...]

  website2:
    environment:
      - VIRTUAL_HOST=sub.domain2.com
    [...]
```
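For context on how routing works: nginx-proxy watches the Docker socket and, for every container that sets a `VIRTUAL_HOST` variable, generates an nginx server block roughly like the following (a simplified sketch, not the exact config the image generates):

```nginx
server {
    listen 80;
    server_name domain1.com;   # taken from VIRTUAL_HOST
    location / {
        # forwards to the website1 container over the shared Docker network
        proxy_pass http://website1;
    }
}
```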
Drawbacks:

- This compose file now has configurations for several independent websites that have nothing in common besides the proxy
- mistakenly running `docker compose down` shuts down the whole cluster
- similarly, running `docker compose up` can be expensive, as we need to launch the whole cluster
- this file doesn't scale beyond a few (read: 3-4) websites
To address this, we can take advantage of Docker networking, in two steps:

- define a `www` network used by the proxy service:
```yaml
# compose.yml just for proxy
services:
  proxy:
    image: nginxproxy/nginx-proxy:....
    ports:
      - "80:80"
    networks: # network(s) the service will be added to
      - www
    [...]

  # no more services here, besides things like `nginxproxy/acme-companion`

networks:
  www: # define new network here
    name: www # without this property, docker will name the network `<folder-name-hosting-this-compose-file>_www`
```
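With this file in place, the proxy can be started on its own. A hypothetical session (assuming Docker and the Compose plugin are installed, run from the folder containing the proxy's compose.yml):

```shell
# start the proxy stack in the background
docker compose up -d

# verify the shared network was created with the exact name `www`
docker network inspect www --format '{{.Name}}'
```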
- For each website, create a standalone compose file in which the website service joins the same network defined by the proxy:
```yaml
# compose.yml just for website1
services:
  website1:
    environment:
      - VIRTUAL_HOST=domain1.com
    networks: # network(s) the service will be added to
      - www
    [...]

networks:
  www:
    external: true # use the external network called `www`
```
This gives each website its own standalone compose file while keeping it discoverable by the proxy, addressing all the drawbacks mentioned above.
As an added bonus, the proxy can and will discover new services even after it has launched, meaning we don't need to worry about starting the websites before the proxy or vice versa (although, if the proxy is not running, our websites won't be reachable).
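The day-to-day workflow then looks like this (a hypothetical session; each website lives in its own folder, and stopping one has no effect on the proxy or the other sites):

```shell
# start website1 independently of everything else
cd website1/ && docker compose up -d

# later: take down only website1; the proxy and website2 keep running
docker compose down
```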
Source: Networking in Compose