Amateur Dev-Ops: Setting Up an Ubuntu Server

This post sets out the steps I performed to configure a new (physical) Ubuntu Server.


I chose a physical server over a cloud server (e.g. as provided by Google, Amazon or Digital Ocean) as I wanted to build up my server administration skills while retaining direct control of my data and processes. I chose Ubuntu as I am already familiar with the desktop variants.

Hardware

I started out with a Dell PowerEdge T130 Tower Server. This is their entry-level small business model, costing around £650 (including VAT) with an Intel Xeon E3-1220 4-core processor, 8GB of memory and a 1TB hard drive.

I chose a configuration without an operating system. We’re going to install that in this post.

To set up the system, you also need an old VGA monitor, a USB mouse and a USB keyboard. These can be ditched later, but are useful to have around if you run into problems. You will also need an Ethernet connection to a local network. I have a power line network that is connected to the property’s router.

The tower also came with a now rather defunct optical media drive (you can’t choose to omit this). As I won’t use this I disconnected it internally. This may free up some power and resources.

Connect up the monitor, mouse and keyboard. Also connect the Ethernet cable. Plug in the server.

iDRAC Setup

Dell provides an Integrated Dell Remote Access Controller that is referred to as iDRAC. This allows you to remotely access system information and control the server. There are a number of quick steps to set this up.

First, power on the server. The system performs a number of checks. Press F10 to access the Lifecycle Controller. On the first run this will display a short set of screens to configure iDRAC access.

Because I am lazy I opted to use DHCP to configure the IPv4 and IPv6 addresses. I can cheat a little by accessing the network configuration for the router and looking for the server as a wired client device. The server actually has two IP addresses, one for the server itself and one for the out-of-band iDRAC management channel. In the router configuration I set both IP addresses to be constant – this way I get the simplicity of DHCP yet also have static IP addresses on the local network.

Once the iDRAC configuration is complete the server will restart.

Operating System Install

As we have no operating system installed the server will not boot by default. Things are clearly set up for a Windows Server install as it keeps looking for a drive C:\.

There are a number of ways to install the OS. The Dell Lifecycle Controller offers a wizard to install from an .ISO file. I chose to use a USB stick, as this is how I normally install Ubuntu Desktop on other machines. On another machine, download the latest copy of Ubuntu Server (18.04 LTS for me) and use a startup disk creator to write it to a USB stick (Ubuntu Desktop has one that can be accessed from the searchable dash).

To boot from a USB stick you need to enter the BIOS settings on boot by pressing F2. Select “Boot Options” in the system configuration and in the lower third of the options move the “Flash Drive” (i.e. USB stick) to the top of the boot order. Then reboot.

If you successfully boot from the USB stick, you should see the Ubuntu Server install screen. The options are all text based and fairly straightforward. I chose not to install any programs to start, as I’ll install these later (and mainly use Docker). You can import allowed SSH public keys for a user from Github, which is pretty cool.
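If you skipped the key import during the install, Ubuntu ships a small tool that can pull your public keys from GitHub later (the username below is a placeholder):

```shell
# Fetch the public SSH keys for a GitHub account and append them to
# ~/.ssh/authorized_keys ("your-github-username" is a placeholder).
ssh-import-id gh:your-github-username
```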

You may need to format and/or partition the hard drive (which was set up for a Windows Server install). If you want to do this using a graphical interface, you can always boot from a USB stick with a desktop version of Ubuntu and use GParted. We will look at volume management a bit later.

SSH Server Configuration

There are a few things I like to do to make a new server more secure. These are simple, low-hanging fixes, but they block the vast majority of automated attacks.

The first is to change the SSH port from the usual port of 22. Most automated attacks just randomly select IP addresses and try this port.

The second is to log in using SSH keys instead of usernames and passwords. Password login can then be switched off. If you have imported authorised public SSH keys from GitHub, any machine you’ve allowed to access GitHub should be able to access the server.

The third is to disable root login. This means we have to log in as a normal user and use “sudo”.

All these can be performed by editing the settings via sudo nano /etc/ssh/sshd_config. Uncomment the Port and PubkeyAuthentication lines and edit accordingly. Also add a PasswordAuthentication no line. Root login is typically disabled by default. Remember to restart the SSH service after saving the updates using sudo systemctl restart ssh.
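As a sketch, the relevant lines in /etc/ssh/sshd_config might end up looking like this (port 2222 is just an example; pick your own):

```
# /etc/ssh/sshd_config -- example settings only
Port 2222                   # move SSH off the default port 22
PubkeyAuthentication yes    # allow key-based login
PasswordAuthentication no   # disable username/password login
PermitRootLogin no          # usually the default already
```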

Install Docker & Docker Machine

Docker is available in the Ubuntu repository but it’s currently a version behind (17 vs 18 at the time of writing). To install the latest version of Docker you can follow the steps in Docker’s official installation guide for Ubuntu. Beware: we now have a mixture of apt packages and snap packages. The Ubuntu Server installer offers a “snap” version of Docker. Hence, I recommend not installing Docker as part of the initial installation and using the apt method for now.
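For reference, the apt route looked roughly like this at the time of writing (these are Docker’s documented steps for Ubuntu; check their current guide, as they do change):

```shell
# Add Docker's official apt repository and install Docker CE.
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
```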

Connect Some USB Hard Drives

I have a few USB Terabyte hard drives I want to hook up. First, connect the USB cables. Then to view the drives you can use the “list hardware” command sudo lshw -class disk -short.

To mount a particular partition we need to create a mount folder. There are generally two places to do this /mnt or /media. The consensus is that user mounts should go into /mnt. Use sudo mkdir /mnt/mybigdrive to create the directory.

Now we need to mount a particular partition. Use lsblk -f to list the partitions on the attached devices. If my attached drive is sdb and has one partition, I can mount it using something like sudo mount /dev/sdb1 /mnt/mybigdrive.
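Pulled together, the steps look like this (the device sdb1 and the folder name are examples; substitute your own):

```shell
# List block devices and their filesystems to identify the new drive.
lsblk -f

# Create a mount point and mount the first partition of /dev/sdb.
sudo mkdir -p /mnt/mybigdrive
sudo mount /dev/sdb1 /mnt/mybigdrive

# Confirm the mount and check free space.
df -h /mnt/mybigdrive
```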

TMUX is Your Friend

One useful discovery is tmux (the Terminal MUltipleXer). This can help you keep track of your remote SSH sessions.

First create a new terminal session by running tmux new -s server_admin. Do your business. Then detach using tmux detach. The session is now available even if your SSH connection goes. You can attach to the session by running tmux a -t server_admin.
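In short:

```shell
# Start a named session, detach, list sessions, and re-attach.
tmux new -s server_admin     # create a session called "server_admin"
# ... do your business, then press Ctrl-b d (or run "tmux detach") ...
tmux ls                      # list running sessions
tmux a -t server_admin       # re-attach to the session
```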


Understanding Logical Volume Management

As we are using a recent version of Ubuntu Server we need to get our heads around logical volume management. This is a way of flexibly controlling our data storage.

We have three types of volume:

  • Physical Volumes – these are often the underlying hard disks.
  • Volume Groups – these are collections of Physical Volumes.
  • Logical Volumes – these are created from the pool of storage provided by a Volume Group.

We can use the pvdisplay, vgdisplay, and lvdisplay commands to show information about each volume type.
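For example:

```shell
# Show details for each layer of the LVM stack.
sudo pvdisplay   # physical volumes (the underlying disks/partitions)
sudo vgdisplay   # volume groups (pools built from physical volumes)
sudo lvdisplay   # logical volumes (carved out of a volume group)
```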

Digital Ocean has a couple of articles that provide a good introduction to LVM.

We now have a 1TB hard disk that was prepared during the Ubuntu Server install as one physical volume. There is also one volume group, which contains the physical volume. The volume group is by default named “ubuntu-vg”.

Let’s run through an example of creating a logical volume to store our projects.

First we can use sudo lvs to have a look at the existing logical volumes. This is also helpful for checking the Volume Group that is in use.

Next we create a new logical volume using sudo lvcreate -L 100G -n projects ubuntu-vg. This creates a new logical volume of size 100GB called “projects” using the “ubuntu-vg” volume group.

Now we need to create a file system on the new logical volume. To do this use sudo mkfs.ext4 /dev/ubuntu-vg/projects. We can check the file system has been created successfully by running sudo fsck -t ext4 -f /dev/ubuntu-vg/projects.

Finally, to use the new logical volume we need to mount it somewhere. Let’s choose “/mnt/projects”. Create a new blank folder using sudo mkdir /mnt/projects. Then mount the new logical volume into the folder using sudo mount /dev/ubuntu-vg/projects /mnt/projects. We can check everything is where it should be by running df -h.
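The whole sequence, pulled together:

```shell
# Create, format and mount a 100GB logical volume called "projects"
# in the default "ubuntu-vg" volume group.
sudo lvs                                      # inspect existing logical volumes
sudo lvcreate -L 100G -n projects ubuntu-vg   # carve out the new volume
sudo mkfs.ext4 /dev/ubuntu-vg/projects        # create an ext4 filesystem on it
sudo mkdir -p /mnt/projects                   # create the mount point
sudo mount /dev/ubuntu-vg/projects /mnt/projects
df -h /mnt/projects                           # confirm it is mounted
```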

It’s going to take me a bit of time to get used to volume management but I can quickly see the upside. I can add physical volumes to the volume group from other hard drives, and then expand existing logical volumes using the extended volume group.
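As a sketch of that upside (the new disk /dev/sdc and the sizes below are hypothetical), adding a disk to the pool and growing an existing logical volume might look like:

```shell
# Add a new disk to the volume group and grow the "projects" volume.
sudo pvcreate /dev/sdc                           # prepare the disk as a physical volume
sudo vgextend ubuntu-vg /dev/sdc                 # add it to the volume group
sudo lvextend -L +100G /dev/ubuntu-vg/projects   # grow the logical volume by 100GB
sudo resize2fs /dev/ubuntu-vg/projects           # grow the ext4 filesystem to match
```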

Group Permissions

As we are running a server that may have multiple users, we need to move from a single user owning everything to group-based access.

Our “projects” directory created above will initially be owned by “root” of the group “root” (as we mounted the drive using sudo, i.e. as the super user). Let’s initially change this to the “leader” of a new group: <code>sudo chown -R [groupLeader]:[groupLeader] /mnt/projects</code>.

Now let’s add a new group for project developers using <code>sudo groupadd projectdev</code>. You can check this has been created by using <code>getent group | cut -d: -f1 | sort</code> (this lists all groups, cuts out the group name and sorts alphabetically).

Now let’s add another user to the “projectdev” group using <code>sudo usermod -a -G projectdev [otherUser]</code>. We can check the user has been added using <code>getent group projectdev</code>. Then change the group ownership of the projects folder (assuming you are logged in as [groupLeader]): <code>chgrp projectdev /mnt/projects</code>.

You’ll need to log out and back in before the new group membership takes effect.
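The group setup above, pulled together (the user and group names follow the placeholders in the text; the final chmod is an extra step I’d suggest so group members can actually write):

```shell
# Create a dev group, add a user, and hand the projects folder to the group.
sudo groupadd projectdev
sudo usermod -a -G projectdev otheruser           # "otheruser" is a placeholder
getent group projectdev                           # check the membership
sudo chown -R groupleader:projectdev /mnt/projects
sudo chmod -R g+rwX /mnt/projects                 # let the group read and write
```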

Restoring Backups

I had a duplicity backup from a previous (fried) computer. If you use the “backup” GUI on Ubuntu Desktop, this uses duplicity on the backend (wrapped using Deja-Dup).

First you may need to install duplicity on the server. Use sudo apt-get install duplicity to do this.

Next we need to find the folder containing your duplicity backups. For me this was a folder on a USB drive. Mount the drive and navigate to the folder. Restoring the backup then becomes a case of running sudo duplicity restore --progress file://[/full/path/to/backupfolder] [/full/restore/folder/path].

If the folder is not blank (sometimes Ubuntu sticks system folders like “lost+found” in there), you may need to use the --force flag to overwrite any contents in the restore folder. I also had an issue with running out of temporary storage, so I mounted a new, bigger logical volume at a “/mnt/customtemp” directory and then used the --tempdir /mnt/customtemp option to use this over the default “/tmp” directory. The --progress flag just helps to make things more verbose and show progress during the restore.
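Putting those flags together, the full restore command might look like this (all paths are placeholders):

```shell
# Restore a duplicity backup, forcing overwrite of a non-empty target and
# using a roomier temp directory than the default /tmp.
sudo duplicity restore --progress --force \
    --tempdir /mnt/customtemp \
    file:///mnt/mybigdrive/backupfolder /mnt/projects/restore
```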

Loading Keys

If your backup contained any key files (e.g. “id_rsa” and “id_rsa.pub”) you may want to copy these into your current “/home/[$USER]/.ssh” directory. This can then allow you to use existing keys to access remote SSH servers (such as GitHub).
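As a sketch (the restore path is a placeholder), copying the keys back and setting the strict permissions SSH expects:

```shell
# Copy restored key files into place and tighten their permissions.
mkdir -p ~/.ssh
cp /mnt/projects/restore/.ssh/id_rsa ~/.ssh/
cp /mnt/projects/restore/.ssh/id_rsa.pub ~/.ssh/
chmod 700 ~/.ssh               # SSH refuses keys in world-readable locations
chmod 600 ~/.ssh/id_rsa        # private key: owner read/write only
chmod 644 ~/.ssh/id_rsa.pub    # public key can stay world-readable
```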
