Vagrant
Manage multi-machine environments
In real-world scenarios, running services on separate machines provides flexibility, scalability, and security. It enables you to scale services independently based on demand, allocate resources effectively, and minimize risks with network segmentation.
In this tutorial, you will configure a multi-machine Vagrant environment, with each virtual machine (VM) hosting a specific part of the Terramino demo application. This setup mimics production environments and enhances performance and reliability.
The Terramino application is separated into three services. You will update the Vagrantfile so each service runs on its own virtual machine.
- The redis service stores high scores and runs on port 6379. The backend service depends on redis for data storage.
- The backend service handles game logic and API requests. It runs on port 8080, connects to redis, and includes a CLI interface.
- The frontend service serves the web interface and proxies API requests to the backend. It runs on port 8081.
Destroy the existing machine
Before starting, destroy the existing VM from the previous tutorial. Confirm the destruction with y when prompted.
$ vagrant destroy
Significant changes to your Vagrantfile can cause conflicts with resources like ports or hostnames if the old VM persists.
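If you prefer to skip the confirmation prompt, vagrant destroy also accepts a force flag.
$ vagrant destroy -f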
Update the Vagrantfile
First, create a script named common-dependencies.sh to install Docker and set up the Terramino demo application. Vagrant will run this script on all the VMs to set up the common dependencies. The main difference between this script and the one in the previous tutorial is that it installs avahi-daemon and libnss-mdns to simplify machine networking.
common-dependencies.sh
# Install Docker and clone repo (common dependencies for multi-machine tutorial)
# Install Docker
apt-get update
apt-get install -y ca-certificates curl gnupg git avahi-daemon libnss-mdns
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
usermod -aG docker vagrant
# Clone repo
if [ ! -d "/home/vagrant/terramino-go/.git" ]; then
  git clone https://github.com/hashicorp-education/terramino-go.git /home/vagrant/terramino-go
  cd /home/vagrant/terramino-go
  git checkout containerized
fi
Then, make the script executable.
$ chmod +x common-dependencies.sh
Modify your Vagrantfile to define individual VMs for each service. Replace your existing configuration with the following.
Tip
You can find the complete configuration for these tutorials in the Learn Vagrant Get Started GitHub repository. The final configuration for this specific tutorial is in the 04.Vagrantfile file.
Vagrantfile
# Service configuration reference
SERVICES = {
  'redis' => {
    ip: '192.168.56.10',
    ports: { 6379 => 6379 }
  },
  'backend' => {
    ip: '192.168.56.11',
    ports: { 8080 => 8080 }
  },
  'frontend' => {
    ip: '192.168.56.12',
    ports: { 8081 => 8081 }
  }
}

Vagrant.configure("2") do |config|
  # Common configuration
  config.vm.box = "hashicorp-education/ubuntu-24-04"
  config.vm.box_version = "0.1.0"

  # Common provisioning script for all VMs
  config.vm.provision "shell", name: "common", path: "common-dependencies.sh"

  # Redis Server
  config.vm.define "redis" do |redis|
    redis.vm.hostname = "redis"
    redis.vm.network "private_network", ip: SERVICES['redis'][:ip]
    redis.vm.network "forwarded_port", guest: 6379, host: 6379
    redis.vm.synced_folder "./redis/terramino-go", "/home/vagrant/terramino-go", create: true
    redis.vm.provision "shell", name: "start-redis", inline: <<-SHELL
      cd /home/vagrant/terramino-go
      docker compose up -d redis
    SHELL
    redis.vm.provision "shell", name: "reload-redis", run: "never", inline: <<-SHELL
      cd /home/vagrant/terramino-go
      docker compose stop redis
      docker compose rm -f redis
      docker compose up -d redis
    SHELL
  end

  # Backend Server
  config.vm.define "backend" do |backend|
    backend.vm.hostname = "backend"
    backend.vm.network "private_network", ip: SERVICES['backend'][:ip]
    backend.vm.network "forwarded_port", guest: 8080, host: 8080
    backend.vm.synced_folder "./backend/terramino-go", "/home/vagrant/terramino-go", create: true
    backend.vm.provision "shell", name: "start-backend", inline: <<-SHELL
      cd /home/vagrant/terramino-go
      # Get Redis IP dynamically (with 1 minute timeout)
      for i in {1..30}; do
        if REDIS_IP=$(getent hosts redis.local | awk '{print $1}'); then
          break
        fi
        echo "Waiting for redis.local to be resolvable..."
        sleep 2
      done
      # Run the backend container with Redis host
      docker build -f Dockerfile.backend -t backend .
      docker run -d -p 8080:8080 \
        -e REDIS_HOST=redis.local \
        -e REDIS_PORT=6379 \
        -e TERRAMINO_PORT=8080 \
        --add-host redis.local:${REDIS_IP} \
        backend
      # Add CLI alias
      echo 'alias cli="docker compose exec backend ./terramino-cli"' >> /home/vagrant/.bashrc
    SHELL
    backend.vm.provision "shell", name: "reload-backend", run: "never", inline: <<-SHELL
      cd /home/vagrant/terramino-go
      # Get Redis IP dynamically
      REDIS_IP=$(getent hosts redis.local | awk '{print $1}')
      docker stop backend || true
      docker rm -f backend || true
      docker build -f Dockerfile.backend -t backend .
      docker run -d --network host -p 8080:8080 \
        -e REDIS_HOST=redis.local \
        -e REDIS_PORT=6379 \
        -e TERRAMINO_PORT=8080 \
        --add-host redis.local:${REDIS_IP} \
        --name backend \
        backend
    SHELL
  end

  # Frontend Server
  config.vm.define "frontend" do |frontend|
    frontend.vm.hostname = "frontend"
    frontend.vm.network "private_network", ip: SERVICES['frontend'][:ip]
    frontend.vm.network "forwarded_port", guest: 8081, host: 8081
    frontend.vm.synced_folder "./frontend/terramino-go", "/home/vagrant/terramino-go", create: true
    frontend.vm.provision "shell", name: "start-frontend", inline: <<-SHELL
      cd /home/vagrant/terramino-go
      # Wait for nginx.conf to be available to update backend hostname
      for i in {1..30}; do
        if [ -f nginx.conf ]; then
          break
        fi
        echo "Waiting for nginx.conf to be available..."
        sleep 2
      done
      # Update nginx.conf to use backend hostname
      sed -i 's#proxy_pass http://backend:8080#proxy_pass http://backend.local:8080#' nginx.conf || {
        echo "Failed to update nginx.conf"
        exit 1
      }
      # Get backend IP dynamically (with 1 minute timeout)
      for i in {1..30}; do
        if BACKEND_IP=$(getent hosts backend.local | awk '{print $1}'); then
          break
        fi
        echo "Waiting for backend.local to be resolvable..."
        sleep 2
      done
      docker build -f Dockerfile.frontend -t frontend .
      docker run -d -p 8081:8081 \
        --add-host backend.local:${BACKEND_IP} \
        frontend
    SHELL
    frontend.vm.provision "shell", name: "reload-frontend", run: "never", inline: <<-SHELL
      cd /home/vagrant/terramino-go
      # Get backend IP dynamically
      BACKEND_IP=$(getent hosts backend.local | awk '{print $1}')
      docker stop frontend || true
      docker rm -f frontend || true
      docker build -f Dockerfile.frontend -t frontend .
      docker run -d -p 8081:8081 \
        --add-host backend.local:${BACKEND_IP} \
        frontend
    SHELL
  end
end
This configuration creates three VMs, each with a unique hostname, IP address, and port mapping. The SERVICES map centralizes these settings, making the configuration easier to maintain and scale.
The private_network setting assigns fixed IPs to the machines, letting them communicate with each other directly. For example, the backend service can connect to redis at 192.168.56.10.
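Once the machines are running (you will bring them up later in this tutorial), you can verify the private network with a quick connectivity check from the backend VM. This is an optional sanity check and assumes the ping utility is available in the guest, which it typically is on Ubuntu.
$ vagrant ssh backend -c "ping -c 1 192.168.56.10"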
Hostname resolution is configured with avahi-daemon and libnss-mdns, allowing the machines to refer to each other by names like redis.local. This improves clarity and simplifies service discovery within the private network.
However, because the Terramino services run in Docker containers, hostname resolution for names such as redis.local and backend.local does not work automatically inside the containers. To address this, the backend and frontend provisioning scripts resolve the IP addresses on the VM and pass them to the containers.
# Get Redis IP dynamically
REDIS_IP=$(getent hosts redis.local | awk '{print $1}')
# Get backend IP dynamically
BACKEND_IP=$(getent hosts backend.local | awk '{print $1}')
These commands query the DNS resolver to fetch the correct IP addresses for dependent services. This approach ensures reliable communication between services, even when the environment scales or changes.
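To confirm that mDNS name resolution works on a guest, you can run the same getent lookup interactively once the machines are up. This is an optional check; it should print the redis VM's private network address (192.168.56.10 in this configuration).
$ vagrant ssh backend -c "getent hosts redis.local"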
The provisioning scripts start these services and explicitly pass the upstream IP addresses.
# Start backend service
docker run -d --network host -p 8080:8080 \
  -e REDIS_HOST=redis.local \
  -e REDIS_PORT=6379 \
  -e TERRAMINO_PORT=8080 \
  --add-host redis.local:${REDIS_IP} \
  --name backend \
  backend

# Start frontend service
docker run -d -p 8081:8081 \
  --add-host backend.local:${BACKEND_IP} \
  frontend
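The Vagrantfile also defines reload-* provisioners with run: "never". Vagrant skips these during vagrant up; they only run when you name them explicitly, which gives you a way to rebuild and restart a single service after changing it. For example, to rebuild the backend container:
$ vagrant provision backend --provision-with reload-backend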
Start the multi-machine environment
Launch all three VMs with a single command.
$ vagrant up
Alternatively, you can start specific machines by appending their names to vagrant up.
$ vagrant up redis
$ vagrant up backend
$ vagrant up frontend
Once Vagrant finishes starting the machines, check the status of all the machines.
$ vagrant status
Current machine states:
redis running (virtualbox)
backend running (virtualbox)
frontend running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
Test the environment
Open http://localhost:8081 on your host machine to access the Terramino frontend. The frontend communicates with the backend, which retrieves high scores from the Redis server. This setup demonstrates how the services communicate within the multi-machine environment.
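If you prefer the command line, you can also check the forwarded ports from your host; this assumes curl is installed. The backend may return an error status for a bare request, but any HTTP response confirms the port is reachable.
$ curl -I http://localhost:8081
$ curl -I http://localhost:8080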
Suspend the backend VM to simulate a service outage.
$ vagrant suspend backend
Refresh the frontend. The high score section will display SVC_DOWN, indicating that the backend is unavailable. Since the NGINX proxy has a 5 second timeout, you will have to wait briefly for SVC_DOWN to appear.
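You can also confirm the outage from Vagrant itself; a suspended machine reports the saved state.
$ vagrant status backend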
Resume the backend VM to restore functionality.
$ vagrant resume backend
Refresh the frontend and the high score functionality should work again.
Now that you are finished with the Vagrant get started tutorials, destroy the guest machines.
$ vagrant destroy
frontend: Are you sure you want to destroy the 'frontend' VM? [y/N] y
==> frontend: Forcing shutdown of VM...
==> frontend: Destroying VM and associated drives...
backend: Are you sure you want to destroy the 'backend' VM? [y/N] y
==> backend: Forcing shutdown of VM...
==> backend: Destroying VM and associated drives...
redis: Are you sure you want to destroy the 'redis' VM? [y/N] y
==> redis: Forcing shutdown of VM...
==> redis: Destroying VM and associated drives...
Enter y when prompted to destroy the guest machines.
Next steps
In this tutorial, you configured a multi-machine Vagrant environment to run separate services. You created VMs for Redis, backend, and frontend servers, tested their interactions, and simulated service outages. This architecture supports independent scaling, isolates failures, and mirrors production setups for better reliability and performance.
For more information on topics covered in this tutorial, refer to the following documentation:
- Multi-machine
- Vagrant networking, specifically private_network
To continue learning about Vagrant, refer to the following resources: