Setting Up My New Cloud Server
Recently, I decided to migrate to a new cloud server. I'm writing this down both for my own future reference and for anyone coming across this to learn a thing or two about system administration work.
This server will host many things for me, but I will only document the process for WireGuard, Caddy, Nextcloud, Deluge, OpenRA, and my download server.
Picking the server
I chose to keep Hetzner as my provider. This time I'm not using a server auction from their Hetzner Robot service, but a normal shared CPU server from the Hetzner Cloud Console. The server is a model cpx41. The specs are listed below for convenience:
Part | Type |
---|---|
CPU Model | AMD EPYC |
vCPU Cores | 8 vCPU cores (shared) |
RAM | 16 GB |
Storage | 240 GB |
Traffic Quota | 20 TB out, infinite in |
Picking the distro
I have run many distros across many servers in the past. For this one, I chose Ubuntu 22.04.2 LTS due to its great LXD/LXC support, which will come up later in this post. I recommend picking a Linux distro that best fits your specific situation.
The rest of this post will contain steps for my server. The commands and package names may be different if you chose a different distro.
Adding a new user
First, let's stop using root to log into the server. We will create a new privileged user using the commands below.
useradd --create-home --groups sudo myuser
passwd myuser
To explain, useradd creates a new user. The --create-home flag ensures a home directory is created. The --groups sudo flag adds our new user to the sudo group, which is needed for privilege elevation. Lastly, myuser is the username being created. The passwd command sets the password for myuser.
NOTE: For the rest of this blog, I will refer to the new user as myuser.
Now let's set up our SSH keys for authentication. I copied mine from another machine, but it is very easy to make your own. From your main computer, run ssh-keygen and take note of your password (if you set one) and where the file is saved (if you changed it). The default path for most distros is ~/.ssh/id_rsa.pub for the public key and ~/.ssh/id_rsa for the private key.
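If you are generating a fresh key, an ed25519 key is a good modern choice. A minimal sketch (the comment string is just an example label):
ssh-keygen -t ed25519 -C "myuser@laptop"
This writes the pair to ~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub unless you pick another path.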
On the server, create a file at /home/myuser/.ssh/authorized_keys and paste in the public key from your main computer. Let's ensure the file is owned by myuser since it was created by root. Run chown -R myuser:myuser /home/myuser/.ssh to set ownership.
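OpenSSH is also strict about permissions on this directory, so it is worth tightening them while you are here. A short sketch of the commands I would run, assuming the same paths as above:
mkdir -p /home/myuser/.ssh
chmod 700 /home/myuser/.ssh
chmod 600 /home/myuser/.ssh/authorized_keys
chown -R myuser:myuser /home/myuser/.ssh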
Now log out of the root user and log in as myuser. If you could log in without using a password, you set up the new user correctly.
Securing SSHD
Now let's secure SSHD. The most practical step is to move SSHD to a different port. There are many machines on the internet that exclusively run bots trying to log into cloud servers and gain root access in order to take them over. Changing the port will stop the vast majority of these attempts. We will also disable SSH access for the root user and block all password authentication attempts.
Using your editor of choice, open /etc/ssh/sshd_config as root. Change the lines mentioned below:
Port 2022
PermitRootLogin no
PasswordAuthentication no
PermitEmptyPasswords no
Note that the port can be anything you want, but I chose 2022 for this example. Before continuing, ensure you see a line that contains #PubkeyAuthentication yes or PubkeyAuthentication yes. If you see #PubkeyAuthentication no or PubkeyAuthentication no, change it to PubkeyAuthentication yes. This line allows you to log in using your SSH key.
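Before restarting anything, it is a good idea to check the file for syntax errors so you don't lock yourself out. OpenSSH has a built-in test mode for this:
sudo sshd -t
If the command prints nothing, the config parsed cleanly; otherwise it reports the offending line.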
Now let's restart the SSHD service and verify it is running.
sudo systemctl restart sshd
sudo systemctl status sshd
The first command will be silent and not output anything. That is because it only restarts the service. The second command should display something similar to the text below.
● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2023-04-12 00:46:36 UTC; 16h ago
Docs: man:sshd(8)
man:sshd_config(5)
Process: 3263 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
Main PID: 3264 (sshd)
Tasks: 1 (limit: 18691)
Memory: 8.6M
CPU: 432ms
CGroup: /system.slice/ssh.service
└─3264 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups"
The important part is the Active: active (running) line. If you see anything different, you messed up the SSHD config file.
Now log out of the server and log back in using the new SSHD port. You may need to edit the firewall settings from your cloud provider to allow incoming TCP connections on the new port.
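To avoid typing the port every time, you can pass it on the command line or pin it in your SSH client config. A quick sketch, where 1.2.3.4 stands in for your server's public IP:
ssh -p 2022 myuser@1.2.3.4
Or add an entry to ~/.ssh/config on your main computer:
Host myserver
    HostName 1.2.3.4
    Port 2022
    User myuser
After that, ssh myserver is enough.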
Setting up WireGuard
I highly recommend this post which talks about setting up WireGuard in more detail. Since I already have a valid WireGuard config on all my machines and a backup copy of the server's config, I am going to copy my config onto the new server.
While WireGuard is a peer-to-peer VPN, I am setting it up using a client-server topology since my server is the only device with a static publicly accessible IP address.
My cloud server's config is at /etc/wireguard/wg-main.conf and looks like this:
[Interface]
Address = 10.0.0.0
ListenPort = 9680
PrivateKey = REDACTED
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE;iptables -A FORWARD -o %i -j ACCEPT; sysctl -w net.ipv4.ip_forward=1; sysctl -w net.ipv6.conf.all.forwarding=1
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE;iptables -D FORWARD -o %i -j ACCEPT
# Client 1
[Peer]
PublicKey = REDACTED
AllowedIPs = 10.0.0.1/32
# Client 2
[Peer]
PublicKey = REDACTED
AllowedIPs = 10.0.0.2/32
# Client 3
[Peer]
PublicKey = REDACTED
AllowedIPs = 10.0.0.3/32
# Client 4
[Peer]
PublicKey = REDACTED
AllowedIPs = 10.0.0.4/32
# Client 5
[Peer]
PublicKey = REDACTED
AllowedIPs = 10.0.0.5/32
# Client 6
[Peer]
PublicKey = REDACTED
AllowedIPs = 10.0.0.6/32
My desktop's config is also at /etc/wireguard/wg-main.conf and looks like this:
[Interface]
Address = 10.0.0.1
PrivateKey = REDACTED
[Peer]
PublicKey = REDACTED
Endpoint = 1.2.3.4:9680
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25
These files are mostly original, but I replaced all public/private keys with REDACTED and changed the port to 9680. The client names were also changed to be anonymous.
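If you are starting from scratch instead of copying an existing config, each machine needs its own key pair. A quick sketch using the wg tool (the file names are just examples):
wg genkey | tee server-private.key | wg pubkey > server-public.key
The private key goes into that machine's [Interface] section and its public key into every peer's [Peer] section.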
Now let's install WireGuard and start the VPN.
sudo apt install wireguard-tools
sudo wg-quick up wg-main
The first command will install WireGuard and its dependencies. The second command will start the VPN. The wg-main argument should match the filename inside /etc/wireguard/. Since mine is called wg-main.conf, the argument is wg-main.
You should be able to access the server from your main computer and any other WireGuard peers should be able to connect to one another. If not, something is wrong with your WireGuard config or firewall.
If your cloud provider has a firewall, ensure your WireGuard port is open for incoming UDP connections.
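A quick way to verify the tunnel is to inspect the interface and ping a peer over its VPN address. A short sketch using the addresses from my configs above, assuming the desktop peer is online:
sudo wg show
ping 10.0.0.1
wg show lists each peer and its latest handshake; a recent handshake plus a successful ping means the tunnel is working.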
Now let's make things permanent.
sudo wg-quick down wg-main
sudo systemctl enable --now wg-quick@wg-main
To explain, wg-quick down wg-main is the opposite of what we ran earlier. This command stops the VPN instead of starting it. The second command starts the VPN as a system service. systemctl is the command to control services. enable tells SystemD, the init system, to start the service on every boot. --now tells SystemD to also start the service immediately, thus avoiding the need to reboot. wg-quick@wg-main is a bit more complicated. wg-quick@ is the name of the service and wg-main is the name of the VPN. If your config file inside /etc/wireguard/ has a different name, you will need to edit this command to match.
You are now safe to reboot your server and check that WireGuard persists across reboots.
LXD/LXC
While LXD/LXC runs the same across all distros, the actual install process varies by distro.
For Ubuntu, let's install the lxd snap image.
sudo apt install snapd
sudo snap install lxd
sudo usermod -aG lxd myuser
The first command installs snapd, the daemon for managing snap images. The second command installs the lxd snap image. The third command adds myuser to the lxd group, which allows the user to run the lxd and lxc commands without root permissions.
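Group membership only takes effect on a new login session, so if lxc complains about permissions right after this, log out and back in, or use newgrp as a shortcut for the current shell:
newgrp lxd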
Below is my output from the lxd init command and my configuration options. Note how almost everything is set to defaults except the storage backend. I chose dir instead of zfs or btrfs because the Hetzner cloud image uses ext4 for the root partition and those backends would need preallocated space. That makes dir the most practical backend for an ext4-partitioned drive.
NOTE: By using the dir backend, my container snapshots will take more space than with the zfs or btrfs backends. I consider this perfectly fine since my server snapshots are handled by the Hetzner Cloud Console. Additionally, snapshots can be deleted at any time using the lxc command.
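For reference, creating and removing a container snapshot by hand looks something like this (the container and snapshot names are just examples):
lxc snapshot mycontainer before-upgrade
lxc info mycontainer
lxc delete mycontainer/before-upgrade
lxc info lists existing snapshots, and the container/snapshot syntax targets a single snapshot for deletion.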
myuser@myserver ~ [SIGINT]> lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]: lxdbr0
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]: all
Port to bind LXD to [default=8443]: 8443
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config:
  core.https_address: '[::]:8443'
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: lxdbr0
  type: ""
  project: default
storage_pools:
- config: {}
  description: ""
  name: default
  driver: dir
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null
Now check that lxc list can run without errors.
myuser@myserver ~> lxc list
To start your first container, try: lxc launch ubuntu:22.04
Or for a virtual machine: lxc launch ubuntu:22.04 --vm
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
Perfect! Now let's run our first container as a test.
We will pick a container by searching the images server and then launching it.
myuser@myserver ~> lxc image list images: ubuntu amd64 cloud jammy
+-----------------------------+--------------+--------+-------------------------------------+--------------+-----------------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+-----------------------------+--------------+--------+-------------------------------------+--------------+-----------------+----------+-------------------------------+
| ubuntu/jammy/cloud (3 more) | 1da44228e1e4 | yes | Ubuntu jammy amd64 (20230412_07:43) | x86_64 | CONTAINER | 133.38MB | Apr 12, 2023 at 12:00am (UTC) |
+-----------------------------+--------------+--------+-------------------------------------+--------------+-----------------+----------+-------------------------------+
| ubuntu/jammy/cloud (3 more) | e3706f871b5f | yes | Ubuntu jammy amd64 (20230412_07:43) | x86_64 | VIRTUAL-MACHINE | 290.18MB | Apr 12, 2023 at 12:00am (UTC) |
+-----------------------------+--------------+--------+-------------------------------------+--------------+-----------------+----------+-------------------------------+
Ok, let's launch that container.
myuser@myserver ~> lxc launch images:ubuntu/jammy/cloud test
Creating test
Starting test
myuser@myserver ~> lxc list
+---------------+---------+-----------------------+------------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------------+---------+-----------------------+------------------------------------------------+-----------+-----------+
| test | RUNNING | 1.2.3.2 (eth0) | 1111:2222:3333:4444:5555:6666:7777:1111 (eth0) | CONTAINER | 0 |
+---------------+---------+-----------------------+------------------------------------------------+-----------+-----------+
Good! We have a container. Now let's reboot and verify that it persists across reboots. If it does not come back after a reboot, you need to troubleshoot your LXD/LXC setup.
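By default, LXD brings back any container that was running before the reboot. If you want to be explicit about it, there is a per-container flag you can set; a quick sketch using the test container:
lxc config set test boot.autostart true
boot.autostart forces the container to start at boot regardless of its previous state.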
Now I'm going to add my other server as a remote to lxc.
myuser@myserver ~> lxc remote add 10.0.0.2
Generating a client certificate. This may take a minute...
Certificate fingerprint: REDACTED
ok (y/n/[fingerprint])? y
Admin password (or token) for 10.0.0.2:
Client certificate now trusted by server: 10.0.0.2
myuser@myserver ~> lxc remote list
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| NAME | URL | PROTOCOL | AUTH TYPE | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| 10.0.0.2 | https://10.0.0.2:8443 | lxd | tls | NO | NO | NO |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| images | https://images.linuxcontainers.org | simplestreams | none | YES | NO | NO |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix:// | lxd | file access | NO | YES | NO |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu | https://cloud-images.ubuntu.com/releases | simplestreams | none | YES | YES | NO |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu-daily | https://cloud-images.ubuntu.com/daily | simplestreams | none | YES | YES | NO |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
myuser@myserver ~> lxc remote rename 10.0.0.2 otherserver
myuser@myserver ~> lxc remote list
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| NAME | URL | PROTOCOL | AUTH TYPE | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| otherserver | https://10.0.0.2:8443 | lxd | tls | NO | NO | NO |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| images | https://images.linuxcontainers.org | simplestreams | none | YES | NO | NO |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix:// | lxd | file access | NO | YES | NO |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu | https://cloud-images.ubuntu.com/releases | simplestreams | none | YES | YES | NO |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu-daily | https://cloud-images.ubuntu.com/daily | simplestreams | none | YES | YES | NO |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
Note my mistake of not setting a friendly name for my other server. I used lxc remote rename to change the name from its WireGuard IP address to a friendly nickname. Now I can use otherserver:container to modify containers on my other server from across the WireGuard VPN.
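As a quick illustration of what that remote syntax buys you (the container name here is hypothetical):
lxc list otherserver:
lxc exec otherserver:somecontainer -- hostname
The first command lists containers on the remote server, and the second runs a command inside one of them, all over the VPN.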
Let's set this up in the other direction.
From our new server:
myuser@myserver ~> lxc config set core.trust_password REDACTED
myuser@myserver ~> lxc config get core.trust_password
true
From the other server:
myuser@otherserver ~> lxc remote add myserver 10.0.0.0
Admin password (or token) for myserver:
Client certificate now trusted by server: myserver
NOTE: My cloud provider's firewall does not have the LXD/LXC port open, nor did I open it. This remote management is happening over the WireGuard VPN we set up in an earlier step.
Caddy
Caddy is the web server that will face the internet. It will be used to reverse-proxy into our containers. For now, let's configure it to show a message when we connect.
While Caddy doesn't have a proper package in either the apt or snap repos, we can download the deb file from the releases page.
wget "https://github.com/caddyserver/caddy/releases/download/v2.6.4/caddy_2.6.4_linux_amd64.deb"
sudo dpkg -i caddy_2.6.4_linux_amd64.deb
The first command will download the deb file from Caddy's releases page. The second command will install the deb file and its dependencies.
Now let's configure two landing pages. First, you need to ensure your DNS entries are valid for the domains (or subdomains) you plan to use. You will also need to ensure ports 80 and 443 are open for incoming TCP connections.
Edit your /etc/caddy/Caddyfile to look like this:
# The Caddyfile is an easy way to configure your Caddy web server.
#
# Unless the file starts with a global options block, the first
# uncommented line is always the address of your site.
#
# To use your own domain name (with automatic HTTPS), first make
# sure your domain's A/AAAA DNS records are properly pointed to
# this machine's public IP, then replace ":80" below with your
# domain name.
:80 {
	# Set this path to your site's directory.
	# root * /usr/share/caddy

	# Enable the static file server.
	# file_server

	# Another common task is to set up a reverse proxy:
	# reverse_proxy localhost:8080

	# Or serve a PHP site through php-fpm:
	# php_fastcgi localhost:9000

	respond "Please use a subdomain for a redirect."
}

# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile

myserver.domain.com {
	respond "This is the main server domain. Please use a different subdomain for a redirect."
}

nextcloud.domain.com {
	respond "This will eventually be a Nextcloud server"
}

deluge.domain.com {
	respond "This will eventually be a Deluge server"
}
Now you can restart your Caddy service with sudo systemctl restart caddy and verify it worked with sudo systemctl status caddy.
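Caddy can also lint the config before you reload it, which catches typos without taking the site down:
caddy validate --config /etc/caddy/Caddyfile
If the file parses, the command reports the configuration as valid; otherwise it points at the broken directive.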
Nextcloud
First, let's make a container for Nextcloud. This is so we can easily use lxc move to migrate the container in case of backups or server upgrades.
I will first launch the container, note the IP address, and make the address static.
lxc launch images:ubuntu/jammy/cloud nextcloud
lxc list
lxc config device override nextcloud eth0 ipv4.address=1.2.3.3
The first command will set up an Ubuntu Jammy container called nextcloud. The second command will list my containers and allow me to retrieve the nextcloud container's IPv4 address. The third command sets the nextcloud container's IPv4 address to be static.
Now I can edit Caddy to redirect my Nextcloud subdomain to the container.
In /etc/caddy/Caddyfile:
nextcloud.domain.com {
	reverse_proxy http://1.2.3.3 {
		header_up Host nextcloud.domain.com
		header_up X-Forwarded-Host nextcloud.domain.com
	}
}
Note that I'm doing my reverse proxy over HTTP instead of HTTPS. Internet communication is still encrypted because clients will connect over HTTPS to the Caddy web server. The plain HTTP only happens inside the server's LXD bridge so we are safe.
Now restart Caddy with sudo systemctl restart caddy.
Next, let's turn our attention to the Nextcloud container. Get a shell to the container using lxc shell nextcloud. Now run the following commands to install snapd and nextcloud.
apt update
apt upgrade
apt install snapd
snap install nextcloud
Now set up the admin and local user accounts. Be sure to note their passwords. There are some issues caused by Nextcloud wanting to use HTTP while Caddy serves HTTPS. There may also be issues with the host and phone region settings. Let's fix those problems.
First, stop Nextcloud using snap stop nextcloud.
Now open /var/snap/nextcloud/current/nextcloud/config/config.php in your preferred text editor. Edit or add the following lines, changing the domain to match your configuration.
'overwrite.cli.url' => 'https://nextcloud.domain.com',
'default_phone_region' => 'US',
'overwritehost' => 'nextcloud.domain.com',
'overwriteprotocol' => 'https',
Now restart Nextcloud using snap start nextcloud and verify that the new configuration is working.
Finish setting up your user. Then continue with these directions.
Let's take a backup of our new Nextcloud installation. Run the following commands:
lxc stop nextcloud
lxc snapshot nextcloud "finish-setup"
lxc start nextcloud
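If something goes wrong later, rolling back to this snapshot is a single command (using the snapshot name created above):
lxc restore nextcloud finish-setup
Keep in mind this reverts the container's entire filesystem to the state it had at snapshot time.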
Download Server
This server will handle downloads for my website. In order to make the container as light as possible, I'm going to use Alpine Linux with NGINX. These two commands allowed me to search for an Alpine Linux cloud image and deploy it.
lxc image list images: amd64 cloud alpine
lxc launch images:alpine/edge/cloud ftp
Now launch a shell into the container using lxc shell ftp.
Let's run upgrades and install NGINX using the commands below.
apk update
apk upgrade
apk add nginx
This is a good start, but we want to move NGINX's web root to /files to make files easier to access and manage. Let's make some changes. First, create the /files directory using mkdir /files and use chown -R root:www-data /files to ensure NGINX can read the directory contents.
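Putting those steps into commands, plus a quick test file to confirm the ownership rule (the file name is just an example):
mkdir /files
chown -R root:www-data /files
echo "it works" > /files/test.txt
chown root:www-data /files/test.txt
Any file placed in /files with that ownership will be served once NGINX is pointed at the directory below.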
Now open /etc/nginx/http.d/default.conf and edit the contents to match my settings below.
server {
	listen 80 default_server;
	listen [::]:80 default_server;

	# Serve files from /files
	location / {
		root /files;
	}

	# You may need this to prevent return 404 recursion.
	location = /404.html {
		internal;
	}
}
Now let's start the NGINX service and enable it on startup.
service nginx start
rc-update add nginx
Now the container is ready to serve files inside of /files as long as they are owned by root:www-data. On the host side, we still need to assign a static IP and tell Caddy to reverse proxy into the container.
myuser@myserver ~> lxc list ftp
+------+---------+-----------------------+------------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+---------+-----------------------+------------------------------------------------+-----------+-----------+
| ftp | RUNNING | 1.2.3.4 (eth0) | 1111:2222:3333:4444:5555:6666:7777:2222 (eth0) | CONTAINER | 0 |
+------+---------+-----------------------+------------------------------------------------+-----------+-----------+
myuser@myserver ~> lxc config device override ftp eth0 ipv4.address=1.2.3.4
Device eth0 overridden for ftp
In the code block above, I got the IP address of the ftp container and forced the same IP address to be static. Now let's configure Caddy. Open your /etc/caddy/Caddyfile and set up the reverse proxy for your domain.
ftp.domain.com {
	reverse_proxy http://1.2.3.4 {
		header_up Host ftp.domain.com
		header_up X-Forwarded-Host ftp.domain.com
	}
}
Now let's restart Caddy and check for any errors.
sudo systemctl restart caddy
sudo systemctl status caddy
If there are no errors, the download server is now complete.
Deluge
The Deluge container is perhaps the most complicated on my server, but the setup process is not hard. For this container, I'm going to use Arch Linux because it will provide a lightweight container with the latest packages.
lxc image list images: amd64 cloud arch
lxc launch images:archlinux/cloud deluge
Before configuring the container, let's do our host networking first so the Deluge daemon can run properly on the first launch.
myuser@myserver ~> lxc list deluge
+--------+---------+-----------------------+------------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------+---------+-----------------------+------------------------------------------------+-----------+-----------+
| deluge | RUNNING | 1.2.3.5 (eth0) | 1111:2222:3333:4444:5555:6666:7777:3333 (eth0) | CONTAINER | 0 |
+--------+---------+-----------------------+------------------------------------------------+-----------+-----------+
myuser@myserver ~> lxc config device override deluge eth0 ipv4.address=1.2.3.5
Device eth0 overridden for deluge
myuser@myserver ~> lxc config device add deluge tcp-seeding proxy listen=tcp:0.0.0.0:56881-56889 connect=tcp:127.0.0.1:56881-56889
Device tcp-seeding added to deluge
myuser@myserver ~> lxc config device add deluge udp-seeding proxy listen=udp:0.0.0.0:56881-56889 connect=udp:127.0.0.1:56881-56889
Device udp-seeding added to deluge
The static IP was assigned because I want Deluge's web panel to be exposed. The following was added to my /etc/caddy/Caddyfile for the reverse proxy.
deluge.domain.com {
	reverse_proxy http://1.2.3.5:8112 {
		header_up Host deluge.domain.com
		header_up X-Forwarded-Host deluge.domain.com
	}
}
Now we need to restart the Caddy daemon and check for errors.
sudo systemctl restart caddy
sudo systemctl status caddy
We are now ready to configure the container. Use lxc shell deluge to get a shell inside the container. Let's update and install Deluge.
NOTE: You can speed up the download process by setting ParallelDownloads = 20 inside /etc/pacman.conf. The benefit from more parallel downloads depends on your internet connection. I set mine to 20 because my server's connection is very fast.
pacman -Syyu
pacman -S deluge
Now we need to configure Deluge for headless operation. Open /srv/deluge/.config/deluge/auth in a text editor and add your user. It needs to be in the format USERNAME:PASSWORD:10, where USERNAME is your username and PASSWORD is your password. The 10 grants admin access to your user. Enable remote access by setting "allow_remote": true inside /srv/deluge/.config/deluge/core.conf. Now let's start the Deluge daemon.
systemctl enable --now deluged
systemctl status deluged
Now we need to configure deluge-console to remotely manage the daemon. The program itself is full of bugs and the setup prompts almost never work. We will configure the client manually. Open /root/.config/deluge/hostlist.conf in an editor and compare it to the format noted below.
{
    "file": 3,
    "format": 1
}{
    "hosts": [
        [
            "ID",
            "ADDRESS",
            PORT,
            "USERNAME",
            "PASSWORD"
        ]
    ]
}
Realistically, you only need to change the two I marked USERNAME and PASSWORD. Substituting those two values for real login credentials, you are now ready to connect to the daemon. Run deluge-console and press ENTER to select the server. You should now be able to view and modify the Deluge daemon.
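For reference, a filled-in hostlist.conf entry might look like the sketch below. The ID is an arbitrary hex string, 127.0.0.1 points at the local daemon, and 58846 is the daemon's default port; the username and password must match the ones you added to the auth file earlier.
{
    "file": 3,
    "format": 1
}{
    "hosts": [
        [
            "0123456789abcdef0123456789abcdef",
            "127.0.0.1",
            58846,
            "myuser",
            "mypassword"
        ]
    ]
}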
We are almost done. Just enable the web service using systemctl enable --now deluge-web. In my experience, the web server would not start and kept crashing. I logged out of the container, ran lxc restart deluge, and the web server has worked perfectly ever since.
Change the Deluge Web UI password from the default (deluge) to whatever you want. Note that it should be different from the admin account's password for security reasons. Now let's set up the server connection.
- Click Connection Manager
- Click on the only entry present
- Click Edit
- Change Username to the username of your admin account
- Change Password to the password of your admin account
- Click Edit to save your changes
- Click Close to close the Connection Manager window
Now deluged, deluge-console, and deluge-web are all fully configured and ready to work.
OpenRA
The OpenRA server is very easy to set up compared to others on this list. Let's run this one on Arch Linux for the bleeding edge server version.
lxc launch images:archlinux/cloud openra
lxc config device add openra tcp1234 proxy listen=tcp:0.0.0.0:1234 connect=tcp:127.0.0.1:1234
lxc shell openra
Now we can update the container and install OpenRA.
NOTE: You can speed up the download process by setting ParallelDownloads = 20 inside /etc/pacman.conf. The benefit from more parallel downloads depends on your internet connection. I set mine to 20 because my server's connection is very fast.
pacman -Syyu
pacman -S openra
su arch
cd
Now make a file at /home/arch/start-server.sh and paste the contents below. This script is my personal modification of the official example and the openra-ra-server script that comes with the openra package.
#!/bin/sh
set -o errexit || exit $?

cd "/usr/lib/openra"

if test -f "OpenRA.Server"; then
    LAUNCH_CMD="./OpenRA.Server Game.Mod=ra "
elif command -v mono >/dev/null 2>&1 && [ "$(grep -c .NETCoreApp,Version= OpenRA.Server.dll)" = "0" ]; then
    LAUNCH_CMD="mono --debug OpenRA.Server.dll Game.Mod=ra "
else
    LAUNCH_CMD="dotnet OpenRA.Server.dll Game.Mod=ra "
fi

# Usage:
# $ ./start-server.sh             # Launch a dedicated server with default settings
# $ Mod="d2k" ./start-server.sh   # Launch a dedicated server with default settings but override the Mod
# Read the file to see which settings you can override

Name="${Name:-"My Server"}"
Mod="${Mod:-"ra"}"
ListenPort="${ListenPort:-"1234"}"
AdvertiseOnline="${AdvertiseOnline:-"True"}"
Password="${Password:-"password"}"
RecordReplays="${RecordReplays:-"False"}"
RequireAuthentication="${RequireAuthentication:-"False"}"
ProfileIDBlacklist="${ProfileIDBlacklist:-""}"
ProfileIDWhitelist="${ProfileIDWhitelist:-""}"
EnableSingleplayer="${EnableSingleplayer:-"True"}"
EnableSyncReports="${EnableSyncReports:-"False"}"
EnableGeoIP="${EnableGeoIP:-"True"}"
EnableLintChecks="${EnableLintChecks:-"True"}"
ShareAnonymizedIPs="${ShareAnonymizedIPs:-"True"}"
JoinChatDelay="${JoinChatDelay:-"5000"}"
SupportDir="${SupportDir:-""}"

while true; do
    ${LAUNCH_CMD} \
        Server.Name="$Name" \
        Server.ListenPort="$ListenPort" \
        Server.AdvertiseOnline="$AdvertiseOnline" \
        Server.EnableSingleplayer="$EnableSingleplayer" \
        Server.Password="$Password" \
        Server.RecordReplays="$RecordReplays" \
        Server.RequireAuthentication="$RequireAuthentication" \
        Server.ProfileIDBlacklist="$ProfileIDBlacklist" \
        Server.ProfileIDWhitelist="$ProfileIDWhitelist" \
        Server.EnableSyncReports="$EnableSyncReports" \
        Server.EnableGeoIP="$EnableGeoIP" \
        Server.EnableLintChecks="$EnableLintChecks" \
        Server.ShareAnonymizedIPs="$ShareAnonymizedIPs" \
        Server.JoinChatDelay="$JoinChatDelay" \
        Engine.SupportDir="$SupportDir" || :
done
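SystemD will refuse to run the script unless it is executable, so mark it as such before wiring up the service:
chmod +x /home/arch/start-server.sh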
I recommend changing the Name and Password variables to something other than the defaults I provided. Now let's set up the SystemD service file. Create a file at /home/arch/openra.service and paste the contents below.
[Unit]
Description = OpenRA Server
After = network.target
[Service]
WorkingDirectory=/home/arch/
ExecStart=/home/arch/start-server.sh
User=arch
Group=arch
Type=idle
Restart=on-failure
[Install]
WantedBy = multi-user.target
Now type exit to return to the root user. Run these commands to configure the new OpenRA service.
ln -s /home/arch/openra.service /etc/systemd/system/openra.service
systemctl daemon-reload
systemctl enable --now openra
To explain, ln -s makes a symbolic link from /home/arch/openra.service to /etc/systemd/system/openra.service. The systemctl daemon-reload command tells SystemD to scan for new service files. Lastly, systemctl enable --now openra tells SystemD to run the OpenRA service right now and after every reboot.
OpenRA is now ready. Don't forget to open port 1234 in your firewall for incoming TCP connections.
Conclusion
In this blog post, I documented the process of setting up my new cloud server in a way that anyone can follow along. We started by picking a cloud provider and Linux distro. Then we secured SSHD and set up WireGuard, Caddy, LXD, Nextcloud, OpenRA, and Deluge.
Credits
Most of this information comes from past system administration experience, but I would like to point out some key sources for further reading.