Ansible playbooks and Terraform configuration for provisioning and configuring TripleA game servers on Linode.
Terraform:
- Creates and removes Linode servers
- Creates initial admin accounts (via cloud-init)

Ansible manages:
- Admin user accounts
- Firewall rules
- Common utilities (Docker, apt packages)
- Databases (PostgreSQL)
- Reverse proxy (nginx)
- SMTP (postfix)
- Lobby & bots running in Docker
- Get SSH access — add your entry to `terraform/keys/admins.json`, then open a PR. CI/CD will deploy your account on merge. See Admin Onboarding for full details.
- Set required environment variables (see section below).
- Run a dry-run to verify everything is working (no changes are made by default — it runs in preview/check mode):

```bash
./run.sh
```
| Variable | Used By | Where set | Purpose | How to obtain |
|---|---|---|---|---|
| `LINODE_TOKEN` | Terraform (`terraform/Makefile`) | Local shell / CI secret | Linode Personal Access Token to provision/manage servers | Linode Cloud Manager → Create token |
| `LINODE_ACCESS_TOKEN` | Ansible dynamic inventory (`inventory/linode.yml`) | Local shell / CI secret | Separate Linode token used by the Ansible Linode inventory plugin to discover servers | Same as above — can be the same token value as `LINODE_TOKEN` |
| `TRIPLEA_ANSIBLE_VAULT_PASSWORD` | Ansible (`ansible/Makefile`) | Local shell / CI secret | Password to decrypt Ansible Vault secrets in playbooks | Shared secret — ask a maintainer |
| `DEPLOY_INFRASTRUCTURE_USER_PRIVATE_SSH_KEY` | GitHub Actions | GitHub Actions secret | Private SSH key for the deploy-infrastructure service account | Generate with `ssh-keygen`; store the private half here, the public half in `playbook.yml` |
GitHub Actions secrets (configure at Settings → Secrets → Actions): `LINODE_TOKEN`, `LINODE_ACCESS_TOKEN`, `TRIPLEA_ANSIBLE_VAULT_PASSWORD`, `DEPLOY_INFRASTRUCTURE_USER_PRIVATE_SSH_KEY`.

Local runs via `run.sh`: your personal SSH key (already on servers) + `TRIPLEA_ANSIBLE_VAULT_PASSWORD` + `LINODE_TOKEN` + `LINODE_ACCESS_TOKEN`. For freshly provisioned servers, SSH in as `admins@<ip>` — your key is injected at provisioning time via `terraform/keys/admins.pub`. See "SSH access on freshly provisioned servers".

Terraform only: `LINODE_TOKEN` (or `TF_VAR_linode_token`).

Ansible only: `TRIPLEA_ANSIBLE_VAULT_PASSWORD` + `LINODE_ACCESS_TOKEN`.
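For a local run, these can be exported in your shell before invoking `run.sh`. A sketch with placeholder values (the vault password comes from a maintainer):

```shell
# Placeholder values -- substitute your real tokens before running.
export LINODE_TOKEN="<your-linode-pat>"
# The Ansible inventory plugin reads a separate variable;
# reusing the same token value works.
export LINODE_ACCESS_TOKEN="$LINODE_TOKEN"
export TRIPLEA_ANSIBLE_VAULT_PASSWORD="<vault-password>"
```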
The infrastructure is split into two phases that always run in this order:

- Terraform — provisions the Linode server (compute, networking, cloud-init).
- Ansible — configures the server using the dynamic inventory (`inventory/linode.yml`), which discovers servers via the Linode API and groups them by their Linode tags. Those tags become the Ansible host groups (e.g. `lobby`, `bots`).

In CI/CD this is handled automatically (`needs: terraform` ensures ordering). Locally you must run Terraform first, then Ansible.
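For reference, tag-based grouping of this kind is what Ansible's Linode inventory plugin provides. The following is a hedged sketch of what such an inventory file can look like; the plugin name, `keyed_groups` mapping, and token handling here are assumptions, not the verified contents of `inventory/linode.yml`:

```yaml
# Sketch of a Linode dynamic inventory (the real inventory/linode.yml may differ).
plugin: community.general.linode
# The access token is typically taken from the LINODE_ACCESS_TOKEN env var.
keyed_groups:
  # Turn each Linode tag (e.g. "lobby", "bots") into an Ansible host group.
  - key: tags
    separator: ""
```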
`terraform/keys/admins.json` is the single source of truth for all admin accounts and their public keys. Both Terraform and Ansible read from this file:

- Terraform decodes it at provisioning time, and cloud-init creates a named personal account for each admin with their SSH keys and passwordless sudo. Any admin can SSH in as themselves immediately after a server boots — no waiting for Ansible, no need for the `ansible` service account key.
- Ansible reads the same file via `lookup('file', ...) | from_json` in `playbook.yml` and converges idempotently on top (groups, sudoers file, etc.).
The file format is a JSON array:

```json
[
  {
    "name": "username",
    "ssh_keys": [
      "ssh-ed25519 AAAA... user@host"
    ]
  }
]
```

This means the local bootstrap flow for a freshly provisioned server is simply:

```bash
# 1. SSH in as the bootstrap user (your key is already there from cloud-init)
ssh admins@<new-server-ip>  # verify it's up

# 2. Run Ansible to create all personal accounts and fully configure the server
APPLY=1 ./run.sh --limit <new-server-ip> --tags system
```

After step 2 your personal account exists and you connect normally via your username going forward.
To add a new admin maintainer:

- Add their entry to `terraform/keys/admins.json` — a `{"name": "username", "ssh_keys": [...]}` object.
- Open a PR. On merge, CI/CD will:
  - Terraform picks up the new key in `admins.pub` for any future server provisioning (existing servers are unaffected — cloud-init only runs once).
  - Ansible runs and creates the personal account on all existing servers.

Removing an admin: remove their entry from `admins.json` in the same PR.
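Before opening the PR, the file can be sanity-checked locally. A sketch using `jq` against a sample file (run the same check against `terraform/keys/admins.json` in practice):

```shell
# Create a sample admins.json to illustrate (use the real file in practice).
cat > /tmp/admins-sample.json <<'EOF'
[
  {"name": "alice", "ssh_keys": ["ssh-ed25519 AAAA... alice@host"]}
]
EOF
# Assert every entry has a "name" and an "ssh_keys" array; -e makes jq
# exit non-zero if the expression evaluates to false.
jq -e 'all(.[]; has("name") and (.ssh_keys | type == "array"))' /tmp/admins-sample.json
```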
- Add the server definition to `terraform/servers.auto.tfvars` and run `terraform apply` (or open a PR to let CI/CD do it).
- Once Terraform completes, run Ansible to fully configure the server:

```bash
APPLY=1 ./run.sh --limit <new-server-ip> --tags system
```

You can SSH in as `admins@<ip>` immediately after provisioning since your key was injected by cloud-init.
If you only have root password access, manually create the `ansible` service account first:

```bash
useradd ansible
mkdir -p /home/ansible/.ssh
echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINAUjUJsoqE4NEEnv8Hov06Kn6CNhSDheGRxm7HbLaG9 ansible@triplea" \
  > /home/ansible/.ssh/authorized_keys
chmod 700 /home/ansible/.ssh
chmod 600 /home/ansible/.ssh/authorized_keys
chown -R ansible:ansible /home/ansible
echo 'ansible ALL=(ALL) NOPASSWD:ALL
Defaults:ansible !requiretty' > /etc/sudoers.d/ansible
```

Once done, the server can be managed with `./run.sh` as normal.
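After creating the account, the sudoers drop-in is worth verifying before you disconnect; a sketch of the checks (run as root on the target server):

```shell
# Syntax-check the drop-in -- a typo in a sudoers file can lock out sudo.
visudo -c -f /etc/sudoers.d/ansible
# Confirm the ansible user's effective sudo rights.
sudo -l -U ansible
```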
`run.sh` is a wrapper around `ansible-playbook`. By default it runs in dry-run (check) mode and makes no changes.

```bash
# Preview changes (default — no changes made)
./run.sh

# Apply changes
APPLY=1 ./run.sh

# Limit to a specific host or group
APPLY=1 ./run.sh --limit bots
APPLY=1 ./run.sh --limit lobby

# Apply specific tags only
APPLY=1 ./run.sh --tags system
APPLY=1 ./run.sh --limit [IP] --tags system

# Verbose output
./run.sh --verbose
```

Equivalent `make` targets are available (see `ansible/Makefile`):

```bash
# Preview (check + diff)
make diff

# Apply
make apply

# Apply as the ansible service account
make apply-as-ansible

# Update all bot maps
make update-bots
```

If Ansible is not installed, `run.sh` will prompt to install it. Or install manually:

```bash
sudo apt update && sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install --yes ansible
ansible-galaxy collection install -r ansible/requirements.yml --force
```

Terraform manages Linode server provisioning. All commands run from the `terraform/` directory via `make`.
Prerequisite: `LINODE_TOKEN` must be set.

```bash
export LINODE_TOKEN=<your-linode-pat>
cd terraform/
make init      # Initialize providers
make validate  # Validate configuration
make plan      # Preview changes
make apply     # Apply changes (interactive confirmation)
```

Server definitions live in `terraform/servers.auto.tfvars`. SSH public keys used during provisioning are in `terraform/keys/`.
Sensitive values in playbooks are encrypted with Ansible Vault.

Encrypt a single variable value:

```bash
secret=<your-secret>
echo -n "$secret" | ansible-vault encrypt_string --vault-password-file vault_password 2>/dev/null
```

Encrypt the contents of a file as a named variable:

```bash
cat <file-with-secret> | ansible-vault encrypt_string --name <ansible-var-name> --vault-password-file vault_password
```

Encrypt a whole file:

```bash
ansible-vault encrypt --vault-password-file vault_password <file>
```

The vault password is stored in `TRIPLEA_ANSIBLE_VAULT_PASSWORD` (locally) and as a GitHub Actions secret.
Workflow: `.github/workflows/infrastructure.yml`

| Trigger | Terraform | Ansible |
|---|---|---|
| Pull Request | `make plan` (preview) | `make diff` (check + diff) |
| Push to master | `make apply-now` | `make apply` |
| Manual `workflow_dispatch` on master | `make apply-now` | `make apply` |

Ansible runs after Terraform (`needs: terraform`) so newly provisioned servers exist before configuration is applied.
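In workflow terms, this ordering is a `needs:` dependency between the two jobs. A minimal sketch of the shape (job and step names are illustrative, not the actual workflow contents):

```yaml
# Sketch: the ansible job declares a dependency on the terraform job,
# so GitHub Actions runs them strictly in sequence.
jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - run: make plan   # or make apply-now on master
  ansible:
    needs: terraform     # runs only after the terraform job succeeds
    runs-on: ubuntu-latest
    steps:
      - run: make diff   # or make apply on master
```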
Required GitHub Actions secrets (Settings → Secrets → Actions):

| Secret | Purpose |
|---|---|
| `LINODE_TOKEN` | Terraform — provision/destroy Linode servers |
| `LINODE_ACCESS_TOKEN` | Ansible dynamic inventory — discover servers via Linode API |
| `TRIPLEA_ANSIBLE_VAULT_PASSWORD` | Decrypt Ansible Vault secrets |
| `DEPLOY_INFRASTRUCTURE_USER_PRIVATE_SSH_KEY` | SSH private key for the deploy-infrastructure service account |
To rotate the key:

```bash
ssh-keygen -f ~/.ssh/ansible  # no passphrase
```

- Update the private key in GitHub Actions secrets → `SSH_PRIVATE_KEY`
- Update the public key in `playbook.yml` under the `deploy-infrastructure` user entry
- Ensure you have your own SSH access before rotating — rotating breaks CI/CD until the new key is deployed
Bot processes are managed by systemd (not Docker restart policies).

```bash
# Restart a bot
sudo systemctl restart bot@01

# Check status & logs
sudo systemctl status bot@01
sudo journalctl -u bot@01 -n 1000

# List running containers
docker container ls

# Restart via docker (less preferred)
docker stop bot01
```

Download all maps to all bots:

```bash
./update_bots.sh
```

The deploy user is a limited service account used for automated deployments. It is granted passwordless sudo for one specific script only — no general root access.
How it works:

- The `admin_user` role creates the `deploy` user and installs `/etc/sudoers.d/deploy`.
- The `marti/app` role installs `/usr/local/bin/deploy-marti.sh` (owned by root) and drops `/etc/sudoers.d/deploy-marti`:

  ```
  Cmnd_Alias DEPLOY_MARTI = /usr/local/bin/deploy-marti.sh
  deploy ALL=(ALL) NOPASSWD: DEPLOY_MARTI
  ```

- The deploy pipeline connects as `deploy` via SSH and runs `sudo /usr/local/bin/deploy-marti.sh` directly.

Important: Ansible's `become: true` must NOT be used for this task. `become` escalates to a root shell via a Python bootstrap, which is not covered by the sudoers rule. The sudo call must reference the exact script path from the command line so it matches the NOPASSWD entry.
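To illustrate the distinction, a sketch of the pipeline-side invocation (the host is a placeholder, and the sudoers rule above is assumed to be in place):

```shell
# Correct: the sudo command line matches the Cmnd_Alias exactly, so it is
# covered by the NOPASSWD rule and grants no general root shell.
ssh deploy@<server-ip> 'sudo /usr/local/bin/deploy-marti.sh'

# Incorrect: wrapping the script in a shell (as Ansible's become does) changes
# the sudo command line, so it no longer matches DEPLOY_MARTI and will fail.
# ssh deploy@<server-ip> "sudo /bin/sh -c '/usr/local/bin/deploy-marti.sh'"
```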