Technical
Mar 13 2021
Touchups
Not surprisingly, when switching over to a new base operating system, a few tweaks are needed for previous instructions.
realmd + Samba
By default, realmd and Samba (when the latter is desired for something like FreeRADIUS) don't play nice with each other - they both try to own /etc/krb5.keytab, leading to unhappiness (realmd will renew the keytab without telling Samba, breaking the latter). The correct order is:
- Connect to the domain via realm join.
- Connect to the domain via net ads join.
- Add ad_update_samba_machine_account_password = true to /etc/sssd/sssd.conf under your domain config.
- Restart sssd (systemctl restart sssd). You should now be good to go.
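Condensed into commands, the sequence looks roughly like this (the -U credential flags and the [domain/<domain>] section name are assumptions based on a typical setup):
$ realm join -U Administrator <domain>
$ net ads join -U Administrator
# In /etc/sssd/sssd.conf, under [domain/<domain>]:
#   ad_update_samba_machine_account_password = true
$ systemctl restart sssd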
Oct 03 2020
Back to backups
I'd thought that Duplicati was going to serve my needs well, but it turns out that, as is typical, things are a bit more... complicated. I hit something similar to this issue (Unexpected difference in fileset version X: found Y entries, but expected Z), but on a fresh backup, even after repeated attempts. When I then attempted to recreate the database, it stalled for longer than the backup itself took, similar to this issue. As you can imagine, an unreliable backup system isn't actually a backup system, so I went hunting for something else. This time through, I decided I wasn't going to worry quite as much about the onsite backup (shame on me) and decided to back straight up into the cloud.
Why skip the local backup? Well, because the previous method, although secure, doesn't lend itself well to restores, since separate systems handle the backups versus the encryption. As a result, to be able to restore a file, I would need to know the "when" of the file, then restore the entire backup for that system at that time, then mount that backup to be able to find the file, rather than being able to grab the one file I want. Not being able to see the names of files being restored can be quite painful. Having access to considerably more storage allows for a single system to perform both, while still being secure.
Storage
But how to get considerably more storage? In my case, I started using Microsoft 365, so would it be possible to mount a OneDrive drive in Linux? As it turns out: yes, albeit with caveats. Using rclone, it's possible to mount different cloud storage providers, including OneDrive. Installing it is as simple as you would expect:
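On Debian it's a single package (assuming the packaged version is recent enough for your storage provider; otherwise rclone's upstream packages work too):
$ apt install rclone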
To set up the connection, follow the appropriate instructions for your service on the documentation page. Pay attention to any additional notes for your service (for example, for Microsoft OneDrive, the notes regarding versioning and how to disable it).
The difference here is that OneDrive is then mounted, so that the storage is streamed on an as-needed basis and is completely available. rclone doesn't have built-in fusermount support, though, so follow the instructions here to create /usr/local/bin/rclonefs. To mount on-boot, using the systemd approach is more reliable than the fstab approach, since it's possible to have the mount wait on network access.
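As a rough sketch of the systemd route - this variant calls rclone mount directly rather than going through the rclonefs helper, and the remote name (onedrive:), mount point, and cache flag are assumptions to adjust to taste:
[Unit]
Description=rclone mount of OneDrive
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
ExecStartPre=/bin/mkdir -p /mnt/onedrive
ExecStart=/usr/bin/rclone mount onedrive: /mnt/onedrive --vfs-cache-mode writes
ExecStop=/bin/fusermount -u /mnt/onedrive
Restart=on-failure

[Install]
WantedBy=multi-user.target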
There are a couple of caveats (i.e. things that don't work) about this approach:
- There is no file ownership - similar to SMB, all files are owned by a single user.
- There are no file permissions.
- There are no symlinks or hardlinks.
- 0-byte files can be deleted, but cannot be edited.
These have an impact on the options for backup software.
Backups
I found that, after searching through several different options, the one that worked best for me is restic. Several don't play nice with rclone mount due to symlinks/hardlinks (BackupPC, UrBackup) or file permissions (restic via sftp), and many rely on the server for encryption, meaning that compromising the server means that all data is compromised (BackupPC, UrBackup). Some are fundamentally designed to work against tape drives rather than disk drives, leading to other issues (Bacula, Bareos, BURP). Borg Backup and Duplicacy could be options, but hit problems when attempting to secure clients from each other, since setting up per-user sftp chroot jails on top of rclone mount has its own security issues (namely needing to grant the container CAP_SYS_ADMIN, which is... not ideal). This problem does go away if a local backup is also kept, however. Borg Backup is very dependent upon a local cache (meaning that system restores get uglier) and has very limited support for transfer protocols, and Duplicacy has a weird license, but both could potentially work as well, particularly if either a local backup is kept or a transfer protocol other than sftp is used (in the case of Duplicacy).
For handling cloud storage, I've set up access to restic via its Rest Server, so that all files are owned by the user the daemon runs as (which neatly bypasses a lot of the permissions issues). It allows for partitioning users away from each other, but at the cost of needing yet another set of credentials to juggle. Via sftp, restic attempts to set file permissions to 0700, which doesn't work so well if sftp is set up with separate accounts either. The configuration ends up being fairly straightforward:
apiVersion: v1
kind: PersistentVolume
metadata:
name: backup-pv
labels:
name: backup-pv
spec:
capacity:
storage: <storage>
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: <path>
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <system>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: backup-pvc
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: <storage>
storageClassName: local-storage
selector:
matchLabels:
name: "backup-pv"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: restic
labels:
app: restic
spec:
replicas: 1
selector:
matchLabels:
app: restic
template:
metadata:
labels:
app: restic
spec:
containers:
- name: restic
image: restic/rest-server
env:
- name: OPTIONS
value: "--private-repos"
volumeMounts:
- name: backup-pvc
mountPath: /data
ports:
- containerPort: 8000
volumes:
- name: backup-pvc
persistentVolumeClaim:
claimName: backup-pvc
---
kind: Service
apiVersion: v1
metadata:
name: restic
labels:
app: restic
spec:
selector:
app: restic
ports:
- protocol: TCP
port: 8000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: restic
labels:
app: restic
spec:
rules:
- host: <hostname>
http:
paths:
- path: /
backend:
serviceName: restic
servicePort: 8000
Local
Once the pod is up and running, add a user entry via:
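rest-server reads users from a .htpasswd file at the top of its data directory, so one way to do this (a sketch - if the image doesn't bundle the htpasswd tool, generate the file elsewhere and drop it into the volume) is:
$ kubectl exec -it deploy/restic -- htpasswd -B -c /data/.htpasswd <user>
# omit -c when adding further users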
After this, setting up backups is straightforward. On the client system:
$ restic -r rest:https://<user>:<password>@<hostname>/<user> init
This will initialize the backup bucket, asking for an encryption password. Make sure to record this password! After this, set up an environment file:
# The repository URL is an assumption based on the init step above and the
# RESTIC_USER variable set by the cron job before sourcing this file.
export RESTIC_REPOSITORY=rest:https://${RESTIC_USER}:<password>@<hostname>/${RESTIC_USER}
export RESTIC_PASSWORD=<encryption password>
Then, create the backup cron job, ensuring that it's executable:
#!/bin/bash
set -o pipefail
<preparatory commands>
RESTIC_USER=`hostname`
source /root/backup.env
restic backup <directory> [<directory> ...]
# Change to your own preferences, per https://restic.readthedocs.io/en/stable/060_forget.html#removing-snapshots-according-to-a-policy
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12
restic check
Aug 23 2020
Moving the goalposts
I've now read several posts discussing the security of Alpine Linux versus other Linux distributions, with a lot of the arguments boiling down to whether it's better to have a less-used but smaller footprint (Alpine) or a very widely-used but larger footprint (e.g. Debian) distribution, such as the discussion here. Personally, I don't buy that there's no security advantage in having a smaller libc (musl) - it's well-documented, after all, that larger code bases lead to more bugs, although that's at least partially balanced out by the sheer number of users on glibc. The small footprint has definitely been attractive to me up until now, but the difficulty of getting software onto Alpine has definitely made things more frustrating. Things came to a head when I was looking at better domain integration (after finding that generating a keytab on Windows Server systems was becoming more difficult) and figured I could do better - something I was unable to get working on Alpine Linux.
As such, I've decided to shift over to Debian, after finding that it gets me an okay footprint (1.5 GB with swap and a bit of usable free space), although it's (not surprisingly) still larger than Alpine Linux (1 GB with swap and some usable free space). But it's considerably smaller than what CentOS or Ubuntu can offer, particularly if you pick only the "SSH server" task (not the "standard system utilities") and disable recommended packages after installation:
APT::Install-Recommends "false";
APT::Get::Install-Suggests "false";
I initially had a bunch of text cribbed from here about setting up libpam-krb5 for authentication/authorization, and it actually works reasonably well, although it involves a number of steps, a few of them more fiddly than I'd like. It turns out that realmd does all this in a simpler way (at a cost of roughly 100 MB over the libpam-krb5 option). As a result, domain integration is a matter of installing a laundry list of packages:
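Something along these lines (the exact list is a best guess; packagekit is there so that realm join can pull in anything it's missing):
$ apt install realmd sssd sssd-tools libnss-sss libpam-sss adcli krb5-user samba-common-bin packagekit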
In order to tweak a few settings, create a simple configuration file (/etc/realmd.conf):
[users]
default-home = /home/%U
[<domain>]
fully-qualified-names = no
And then add the system to the domain:
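Something like the following (any account with rights to join machines works in place of Administrator):
$ realm join -U Administrator <domain>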
Enable GSSAPI authentication for SSH in /etc/ssh/sshd_config:
GSSAPIAuthentication yes
...
And restart sshd:
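(On Debian the unit is named ssh.)
$ systemctl restart ssh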
We want to ensure that users have home directories on login, so tweak the PAM config, enabling "Create home directory on login":
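That checkbox lives in Debian's interactive PAM helper:
$ pam-auth-update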
But the default permissions for creating a home directory are awful (umask 0022), so tweak the resulting line in /etc/pam.d/common-session to be more sane:
session optional pam_mkhomedir.so umask=0077
...
And finally, to limit the users who should have login access:
$ realm permit -g <groups>
As a final note, ESXi integration is still available via open-vm-tools.
Aug 22 2020
Two steps forward, two steps back
After putting in a fair amount of effort to move from Windows Server to Samba and from Exchange to Kopano, I've decided to roll all that back (kind of). Why, you might ask? Ultimately, it comes down to the mail server:
- For redundancy, especially since I'm not being paid to be a full-time system administrator. It's much harder to guarantee uptime when I'm not monitoring my systems constantly, so if I want a reliable mail server, I would need to set up something like a secondary MX... which, quite frankly, is a PITA, especially once you factor in needing another domain controller to feed information to the mail server, along with whatever else Kopano would want. Doing it properly? Not so simple.
- Kopano hasn't been as straightforward as I was expecting. I've already mentioned the ActiveSync issue, but Z-Push has also been remarkably flaky as well.
So I'm actually switching over to Microsoft 365 (formerly known as Office 365) Business, which addresses these issues (although, as with everything, comes with others).
Windows Server
Setting up a newer Windows Server with a new domain generally means setting up a domain controller with Server Core, which is a different beast than setting up a minimal interface installation. Fortunately, there are pages out there that explain the PowerShell commands needed to get everything up and running once the initial installation is complete:
> Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
> Get-Command -Module ADDSDeployment
# For a brand-new forest/domain:
> Install-ADDSForest -DomainName <domain> -DomainNetbiosName <netbios>
# Or, for an additional domain controller in an existing domain:
> Install-ADDSDomainController -InstallDns -DomainName <domain>
After this, it's a matter of installing the certificate authority root certificate (assuming that you're not using Windows Server as your CA), which is handled via certutil -dsPublish (after you copy the root certificate to your server). Installing a matching server certificate with private key via certutil -importPFX, then restarting the server, lets the LDAP service start the LDAPS variant (there isn't an explicit service that can be restarted to start up LDAPS).
Azure AD Connect
... Oh my god, dealing with this piece of software was horrible. Poorly documented, "security" fighting me the whole time, bugs all over the place... suffice it to say that it wasn't a pleasant experience.
- This is documented, but it needs to run on Server GUI and not Server Core for... Reasons.
- The documentation blithely assumes that you've already turned IE Enhanced Security off... which isn't the default.
- Reconfiguring Azure AD Connect often requires uninstalling and reinstalling it.
- After several attempts, I found that the Express mode was unable to detect my domain and would just fail mysteriously instead.
- Once I did get it installed, I found that it installed Azure AD Connect Health Sync, which would, under certain circumstances, fail to install - without ever making it clear that it was an optional part of the installation (especially since you need to pay for a higher level of Azure AD before that functionality even works).
- And, of course, if you don't have Azure AD Connect Health Sync working, if the sync ever gets into a weird state, it'll never tell you. Fun.
I did try to see whether it would be possible to run Azure AD Connect against Samba. My conclusion is that it was possible at some point in time, but with the current version of Azure AD Connect, it's not (it runs some queries that the current version of Samba doesn't support).
Microsoft 365
All in all, so far, the experience hasn't been all too bad, even though there is some functionality that iOS has never supported (and likely never will): shared mailboxes. Instead, I'm using the workaround of paying for another account and logging in with that account as well. On the bright side, I suppose, it gives me additional OneDrive space for backups. On top of that:
- It's possible to set up e-mail sub-addressing on Office 365 with a bit of work, with one major benefit over services like Gmail or consumer Outlook: the ability to use a symbol other than + for subaddressing, which is great since there are lots of services out there which don't handle + very well.
- It takes quite a bit of wrangling, but it's also possible to send e-mail using aliases on the same domain while treating everything as a single mailbox.
Aug 02 2020
Knock, knock. Who's there?
One of the useful things about having a directory service is the ability to authenticate users effectively, and the standard for this with networked computers is RADIUS. This can then be used by services like VPNs and wireless 802.1X. So how to set one up?
Joining the domain
First off, start with a standard system. Then, install the packages we'll need for authenticating against an Active Directory domain:
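Roughly (exact package names can vary between Alpine releases):
$ apk add samba samba-winbind krb5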
In order to hook up RADIUS to Active Directory, the system must be added to the domain. Similar to what a domain controller needs, the Kerberos configuration file (/etc/krb5.conf) must be set up:
[libdefaults]
default_realm = <domain>
dns_lookup_realm = false
dns_lookup_kdc = true
Then, Samba must be set up via /etc/samba/smb.conf:
[global]
workgroup = <short domain>
security = ADS
realm = <domain>
winbind refresh tickets = Yes
vfs objects = acl_xattr
map acl inherit = Yes
store dos attributes = Yes
Followed by the domain join command:
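Which looks something like this (the account is a placeholder for anything with join rights):
$ net ads join -U Administrator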
Then, since we need winbind, enable it in the daemon list (/etc/conf.d/samba):
daemon_list="smbd nmbd winbindd"
...
At this point, you can then start Samba:
$ rc-update add samba
$ rc-service samba start
To check that Samba is working, you can run a quick command to verify that the system is communicating with the domain correctly:
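For example, ntlm_auth (shipped with Samba) authenticates a test user through winbind and produces the output below:
$ ntlm_auth --username=<domain user>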
Password:
NT_STATUS_OK: The operation completed successfully. (0x0)
FreeRADIUS
We need to install the FreeRADIUS packages first:
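On Alpine, roughly (the EAP support lives in a subpackage; exact names may differ between releases):
$ apk add freeradius freeradius-eap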
Since Alpine Linux doesn't have a lot of the more advanced protections other Linux distributions have, changing the group ownership and permissions on winbind's privileged socket directory so that FreeRADIUS (which runs as the radius user on Alpine) can access it is sufficient:
$ chgrp radius /var/lib/samba/winbindd_privileged
$ chmod g+s /var/lib/samba/winbindd_privileged
Next up, follow the standard FreeRADIUS documentation to add a client for authentication. Then comes server identification. First off is generating the Diffie-Hellman parameters:
$ openssl dhparam -out dh -2 2048
To go with this file, we need an SSL server certificate for the RADIUS server to identify itself. The certificate and private key should be combined as /etc/raddb/certs/server.pem, and the CA root certificate as /etc/raddb/certs/ca.pem.
After this, the Active Directory integration. Edit the two files in /etc/raddb/sites-enabled (default and inner-tunnel), and replace every instance of -eap with eap (removing the hyphen). In addition, remove the additional hyphen in this section of the configuration:
eap {
ok = return
updated = return
}
...
The EAP and MSCHAP modules then need to be adjusted:
eap {
default_eap_type = peap
...
tls-config tls-common {
private_key_file = /etc/raddb/certs/server.pem
certificate_file = /etc/raddb/certs/server.pem
ca_file = /etc/raddb/certs/ca.pem
...
}
mschap {
...
ntlm_auth = "/usr/bin/ntlm_auth --allow-mschapv2 --request-nt-key --username=%{mschap:User-Name} --challenge=%{%{mschap:Challenge}:-00} --nt-response=%{%{mschap:NT-Response}:-00}"
...
}
You can then enable FreeRADIUS:
$ rc-update add radiusd
$ rc-service radiusd start
And test it (although you should use a test account, or make sure to remove these lines from your shell history), being aware that your results may vary slightly:
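radtest (bundled with FreeRADIUS) does the job; the secret is whatever you configured for the localhost client:
$ radtest -t mschap <domain user> <password> 127.0.0.1 0 <client secret>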
Sent Access-Request Id <number> from 0.0.0.0:<port> to 127.0.0.1:1812 length <number>
...
Received Access-Accept Id <number> from 127.0.0.1:1812 to 127.0.0.1:<port> length <number>
MS-CHAP-MPPE-Keys = <hex string>
MS-MPPE-Encryption-Policy = Encryption-Allowed
MS-MPPE-Encryption-Types = RC4-40or128-bit-Allowed
If you need to debug FreeRADIUS, it often makes more sense just to run it from the command line after shutting down the daemon:
$ radiusd -X
Samba
If you hadn't followed this blog post for setting up your Active Directory domain and you're running Samba, you might need to follow the hint on this page and add this section to your smb.conf on your domain controllers:
...
ntlm auth = mschapv2-and-ntlmv2-only
...
Jul 24 2020
Rebasing home
As obliquely referenced previously, I've switched away from using Active Directory to manage my domain to using Samba instead, largely to keep things simpler and also because the Minimal Server Interface is no longer supported. I'm quite used to Linux, so it's not that I object to using a command line - it's more that there's considerably more risk if PowerShell is something I only use occasionally, as I end up needing to essentially relearn things every time I want to make a change. So back to Linux it is, I suppose!
Setting up Samba as a domain controller on Alpine Linux is quite straightforward (I ran it on Ubuntu for a few months, and that was probably harder to set up). What follows is a remix of Samba's domain controller documentation, applied after setting up a standard Alpine Linux system (or during the process, if you prefer) - I won't go into setting up the domain in the first place, although Samba provides quite a bit of documentation on that process. First, install the appropriate packages:
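Roughly (the AD DC bits live in their own subpackage; names may vary between Alpine releases):
$ apk add samba-dc krb5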
Then, set up the Kerberos configuration file:
[libdefaults]
default_realm = <domain>
dns_lookup_realm = false
dns_lookup_kdc = true
And then join the domain:
$ rm /etc/samba/smb.conf
$ samba-tool domain join <domain> DC -k yes
After this, the system should now be a member of the domain as a domain controller. Next up is adding some additional configuration. First off, the startup configuration (/etc/conf.d/samba):
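For an AD DC, the samba binary itself runs everything, so (per Alpine's OpenRC samba service) the daemon list just needs the single entry:
daemon_list="samba"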
The Samba config can look something like this:
[global]
dns forwarder = 1.1.1.1
netbios name = <name>
realm = <domain>
server role = active directory domain controller
workgroup = <domain short name>
idmap_ldb:use rfc2307 = yes
client signing = yes
client use spnego = yes
kerberos method = secrets and keytab
tls enabled = yes
tls keyfile = /etc/samba/tls/key.pem
tls certfile = /etc/samba/tls/cert.pem
tls cafile =
ntlm auth = mschapv2-and-ntlmv2-only
[sysvol]
...
[netlogon]
...
A few notes on the config:
- dns forwarder lets the DNS server handle external requests as well.
- The tls entries enable LDAPS. tls keyfile specifies the certificate private key, while tls certfile specifies the public certificate chain, both of which should be generated for the domain controller.
- ntlm auth is set as such to enable MSCHAPv2 authentication for FreeRADIUS.
And then enabling the daemon:
$ rc-update add samba
$ rc-service samba start
Once Samba is up and running, the new domain controller can use itself for LDAP(S) queries for NSLCD.
Jun 11 2020
(Re-)Building the world
One of the things you really need with systems is backups. I've (unfortunately) been pretty bad about those too, so I've taken advantage of the recent insanity to finally put together a backup routine. In this instance, the goal is to have an onsite backup (effectively a snapshot) with offsite backups. It's a layered system, however, so it requires a little bit of explanation. Note that this takes advantage of Kubernetes and the previously mentioned ExternalDNS setup for easily addressable hostnames.
Local
We want secure backups, so in this case we're opting for gocryptfs in reverse mode to keep things straightforward, where the goals include:
- Encrypted backups where it's okay for the local system to have the key lying about (since it's the system that generates the data, after all), but the central store lacks that same key.
- Reasonable space usage (i.e. doubling space requirements for encryption isn't fun).
- Reasonable package overhead (e.g. having to run an entire Java engine on a lightweight system isn't fun).
We need gocryptfs installed from edge/testing:
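# Enable the edge/testing repository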
$ vi /etc/apk/repositories
$ apk add gocryptfs
# Disable the edge/testing repository if you're not living on it
$ vi /etc/apk/repositories
Then, set up the basics for the mount. For this, we'll be using /root/backup as our target directory:
$ gocryptfs -init -reverse backup
# Enter password here, record the master key.
$ mkdir /root/.gocryptfs
$ chmod go-rwx /root/.gocryptfs
$ mv /root/backup/.gocryptfs.reverse.conf /root/.gocryptfs/backup.conf
At this point, it's now possible to use gocryptfs' reverse mode in order to create an encrypted folder that uses no additional space:
$ modprobe fuse
$ gocryptfs -config /root/.gocryptfs/backup.conf -masterkey <master key> -reverse /root/backup /root/backup-crypt
This directory can be easily copied to another location. But what about backup restores, you ask? The backup configuration will no longer exist. Conveniently, it's easy enough to regenerate the configuration (although you would be using the -reverse flag to generate the config). Just make sure you record the master key in a safe place.
Remote
The next step is creating a central location to store the files, where the goals include:
- Encryption in transit (mostly to protect credentials, as the backups themselves are already encrypted).
- Account isolation (i.e. one system is unable to access another system's backups).
- Minimal leakage of credentials.
In this case, SFTP (via SSH) makes the most sense, particularly once ChrootDirectory and ForceCommand internal-sftp are enabled. On the client side, sftp -b allows for basic scripting. In this instance, this SFTP Docker container within Kubernetes works out well. The first step involves setting up local storage:
apiVersion: v1
kind: PersistentVolume
metadata:
name: backup-rw-pv
labels:
name: backup-rw-pv
spec:
capacity:
storage: <storage>
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: <path>
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <system>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: backup-rw-pvc
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: <storage>
storageClassName: local-storage
selector:
matchLabels:
name: "backup-rw-pv"
Then, followed up by the actual application container:
apiVersion: v1
kind: ConfigMap
metadata:
name: backup-sftp-users
data:
users.conf: |
<user entries>
---
apiVersion: v1
kind: ConfigMap
metadata:
name: backup-sftp-init
data:
init-sftp.sh: |
#!/bin/sh
cat << EOF > /etc/ssh/ssh_host_ed25519_key
-----BEGIN OPENSSH PRIVATE KEY-----
<private key>
-----END OPENSSH PRIVATE KEY-----
EOF
cat << EOF > /etc/ssh/ssh_host_ed25519_key.pub
<public key>
EOF
cat << EOF > /etc/ssh/ssh_host_rsa_key
-----BEGIN RSA PRIVATE KEY-----
<private key>
-----END RSA PRIVATE KEY-----
EOF
cat << EOF > /etc/ssh/ssh_host_rsa_key.pub
<public key>
EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backup-sftp
labels:
app: backup-sftp
spec:
replicas: 1
selector:
matchLabels:
app: backup-sftp
template:
metadata:
labels:
app: backup-sftp
spec:
containers:
- name: backup-sftp
image: atmoz/sftp:alpine
volumeMounts:
- name: sftp-users
mountPath: /etc/sftp
- name: sftp-init
mountPath: /etc/sftp.d
- name: backup-rw-pvc
mountPath: /home
ports:
- containerPort: 22
volumes:
- name: sftp-users
configMap:
name: backup-sftp-users
- name: sftp-init
configMap:
name: backup-sftp-init
defaultMode: 0744
- name: backup-rw-pvc
persistentVolumeClaim:
claimName: backup-rw-pvc
---
apiVersion: v1
kind: Service
metadata:
name: backup-sftp
annotations:
external-dns.alpha.kubernetes.io/hostname: <hostname>
metallb.universe.tf/address-pool: <pool>
labels:
app: backup-sftp
spec:
type: LoadBalancer
ports:
- port: 22
targetPort: 22
selector:
app: backup-sftp
This hooks everything up neatly. The user entries follow this format:
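Roughly, per the atmoz/sftp documentation (the example entry below is hypothetical):
<user>:<password>[:e][:<uid>[:<gid>[:<dirs>]]]
# e.g., key-only login with a dedicated upload directory:
backuphost1::1001:1001:upload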
Best practices involve leaving pass empty and using SSH keys instead. Due to having to juggle permissions appropriately, SSH keys under the <path>/<user>/.ssh/keys directory are added to the authorized_keys file, so public keys should be added there. In order to have the container recognize new users, however, the container needs to be restarted:
- Add a user entry to the YAML configuration.
- Add the SSH public key to the correct directory (setting new directory permissions appropriately).
- Ensure that Kubernetes has read the new YAML configuration.
- Restart the SFTP pod (likely by killing the current pod).
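That last restart can be done with kubectl (a sketch; the Deployment name matches the YAML above):
$ kubectl rollout restart deployment/backup-sftp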
Having client systems upload their individual backups is done via a (simple?) script, probably located somewhere like /etc/periodic/daily (so that it's automatically run nightly):
#!/bin/sh

TARGET_DIR=/var/backup
# Make local backups.
rm -rf ${TARGET_DIR}/*
<commands to generate the backup here>
GOCRYPT_TARGET_DIR=${TARGET_DIR}-crypt
# Make sure that the gocryptfs directory isn't still mounted from before (state reset).
mount | grep ${GOCRYPT_TARGET_DIR}
if [ ${?} == 0 ]; then
umount ${GOCRYPT_TARGET_DIR}
fi
GOCRYPT_CONFIG=~/.gocryptfs/backup.conf
MASTER_KEY=<master key>
# Make sure that the gocryptfs directory is mounted.
mkdir -p ${GOCRYPT_TARGET_DIR} || exit 1
modprobe fuse || exit 1
gocryptfs -config ${GOCRYPT_CONFIG} -masterkey ${MASTER_KEY} -reverse ${TARGET_DIR} ${GOCRYPT_TARGET_DIR} || exit 1
# Get the SFTP target.
TARGET_HOST=$( dig +short <sftp host> @<powerdns host> )
SSH_USERNAME=$( hostname )
# Then copy files over.
cat <<EOF | sftp -b - ${SSH_USERNAME}@${TARGET_HOST}
chdir upload
-rm *
lchdir ${GOCRYPT_TARGET_DIR}
put *
EOF
# Clean up.
umount ${GOCRYPT_TARGET_DIR}
rmdir -p ${GOCRYPT_TARGET_DIR} || true
Note that the dig command is used if your PowerDNS is not hooked up to your primary DNS (which can be quite annoying if you're using Samba as your domain controller, as the BIND9_DLZ module is not commonly provided for Samba distributions). If yours is nicely hooked up, you can just specify the SFTP host directly in the SFTP connection line.
Cloud
Centralized backups still aren't enough, though. The next step involves storing (encrypted) offsite backups in case things go horribly wrong. Fortunately, Duplicati supports multiple backup destinations (in my case, I'm using Google Drive via G Suite), is free, and has a good feature set (including a sensible smart backup retention schedule). Setting up the official Docker container within Kubernetes is fairly straightforward, as usual. First off, a read-only version of the storage above:
apiVersion: v1
kind: PersistentVolume
metadata:
name: backup-ro-pv
labels:
name: backup-ro-pv
spec:
capacity:
storage: <storage>
volumeMode: Filesystem
accessModes:
- ReadOnlyMany
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: <path>
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <system>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: backup-ro-pvc
spec:
accessModes:
- ReadOnlyMany
volumeMode: Filesystem
resources:
requests:
storage: <storage>
storageClassName: local-storage
selector:
matchLabels:
name: "backup-ro-pv"
And then the application container:
apiVersion: v1
kind: PersistentVolume
metadata:
name: backup-duplicati-pv
labels:
name: backup-duplicati-pv
spec:
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /var/data/duplicati
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <hostname>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: backup-duplicati-pvc
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 1Gi
storageClassName: local-storage
selector:
matchLabels:
name: "backup-duplicati-pv"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backup-duplicati
labels:
app: backup-duplicati
spec:
replicas: 1
selector:
matchLabels:
app: backup-duplicati
template:
metadata:
labels:
app: backup-duplicati
spec:
containers:
- name: backup-duplicati
image: duplicati/duplicati:latest
command: ["/usr/sbin/tini", "--"]
args: ["/usr/bin/duplicati-server", "--webservice-port=8200", "--webservice-interface=any", "--webservice-allowed-hostnames=*"]
volumeMounts:
- name: backup-ro-pvc
mountPath: <path>
- name: backup-duplicati-pvc
mountPath: /data
ports:
- containerPort: 8200
volumes:
- name: backup-ro-pvc
persistentVolumeClaim:
claimName: backup-ro-pvc
- name: backup-duplicati-pvc
persistentVolumeClaim:
claimName: backup-duplicati-pvc
---
kind: Service
apiVersion: v1
metadata:
name: backup-duplicati
labels:
app: backup-duplicati
spec:
selector:
app: backup-duplicati
ports:
- protocol: TCP
port: 8200
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: backup-duplicati
labels:
app: backup-duplicati
spec:
rules:
- host: <hostname>
http:
paths:
- path: /
backend:
serviceName: backup-duplicati
servicePort: 8200
At this point, you can connect to the Duplicati hostname you specified, then follow the standard GUI documentation to set up the basics, and you're done!
Mar 16 2020
Your name
When juggling multiple applications in Kubernetes, it's not uncommon to end up with all kinds of conflicting requirements. HTTP/HTTPS traffic is the easiest, since you can use something like Traefik (even if it does become more complicated if you run multiple endpoints), but what if you want to run services that speak other kinds of traffic? It's actually a great reason to run MetalLB, as previously mentioned. The catch is, once the system starts assigning different IPs to different services, how do you know which IP to contact? One option is to just use hard-coded IPs for everything, but that's not very scalable. Which is where you can have fun with something like ExternalDNS, which is able to register services with a DNS server. In our case, using PowerDNS hosted on Kubernetes ends up being a very interesting option, allowing for everything to be internalized (although giving PowerDNS itself a static IP is a good idea!).
PowerDNS
Setting up PowerDNS isn't too bad if you already have a database set up (personally, I would recommend an external database so that you don't need to worry about database corruption if a pod is forcibly stopped). The YAML file looks something like this (there is no official Helm chart as of this writing):
apiVersion: v1
kind: Secret
metadata:
name: powerdns-secret
namespace: kube-system
type: Opaque
data:
PDNS_APIKEY: <base64 secret>
MYSQL_PASS: <base64 secret>
PDNSADMIN_SECRET: <base64 secret>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: powerdns
namespace: kube-system
labels:
app: powerdns
spec:
replicas: 1
selector:
matchLabels:
app: powerdns
template:
metadata:
labels:
app: powerdns
spec:
containers:
- name: powerdns
image: pschiffe/pdns-mysql:alpine
livenessProbe:
exec:
command: ["/bin/sh", "-c", "pdnsutil list-zone <internal domain> 2>/dev/null"]
readinessProbe:
exec:
command: ["/bin/sh", "-c", "nc -vz <database hostname> 3306"]
initialDelaySeconds: 20
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "a=0;while [ $a -lt 200 ];do sleep 1;a=$[a+1];echo 'stage: '$a;if nc -vz <database hostname> 3306;then (! pdnsutil list-zone <internal domain> 2>/dev/null) && pdnsutil create-zone <internal domain>;echo 'End Stage';a=200;fi;done"]
env:
- name: PDNS_api_key
valueFrom:
secretKeyRef:
name: "powerdns-secret"
key: PDNS_APIKEY
- name: PDNS_master
value: "yes"
- name: PDNS_api
value: "yes"
- name: PDNS_webserver
value: "yes"
- name: PDNS_webserver_address
value: 0.0.0.0
- name: PDNS_webserver_allow_from
value: 0.0.0.0/0
- name: PDNS_webserver_password
valueFrom:
secretKeyRef:
name: "powerdns-secret"
key: PDNS_APIKEY
- name: PDNS_default_ttl
value: "1500"
- name: PDNS_soa_minimum_ttl
value: "1200"
- name: PDNS_default_soa_name
value: "ns1.<internal domain>"
- name: PDNS_default_soa_mail
value: "hostmaster.<internal domain>"
- name: MYSQL_ENV_MYSQL_HOST
value: <database hostname>
- name: MYSQL_ENV_MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: powerdns-secret
key: MYSQL_PASS
- name: MYSQL_ENV_MYSQL_DATABASE
value: powerdns
- name: MYSQL_ENV_MYSQL_USER
value: powerdns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 8081
name: api
protocol: TCP
- name: powerdnsadmin
image: aescanero/powerdns-admin:latest
livenessProbe:
exec:
command: ["/bin/sh", "-c", "nc -vz 127.0.0.1 9191 2>/dev/null"]
initialDelaySeconds: 80
readinessProbe:
exec:
command: ["/bin/sh", "-c", "nc -vz <database hostname> 3306 2>/dev/null "]
initialDelaySeconds: 40
env:
- name: PDNS_API_KEY
valueFrom:
secretKeyRef:
name: "powerdns-secret"
key: PDNS_APIKEY
- name: PDNSADMIN_SECRET_KEY
valueFrom:
secretKeyRef:
name: "powerdns-secret"
key: PDNSADMIN_SECRET
- name: PDNS_PROTO
value: http
- name: PDNS_HOST
value: 127.0.0.1
- name: PDNS_PORT
value: "8081"
- name: PDNSADMIN_SQLA_DB_HOST
value: <database hostname>
- name: PDNSADMIN_SQLA_DB_PASSWORD
valueFrom:
secretKeyRef:
name: powerdns-secret
key: MYSQL_PASS
- name: PDNSADMIN_SQLA_DB_NAME
value: powerdns
- name: PDNSADMIN_SQLA_DB_USER
value: powerdns
ports:
- containerPort: 9191
name: pdns-admin-http
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: powerdns-service-dns
namespace: kube-system
annotations:
metallb.universe.tf/address-pool: <IP identifier>
labels:
app: powerdns
spec:
type: LoadBalancer
ports:
- port: 53
nodePort: 30053
targetPort: dns
protocol: UDP
name: dns
selector:
app: powerdns
---
apiVersion: v1
kind: Service
metadata:
name: powerdns-service-api
namespace: kube-system
labels:
app: powerdns
spec:
type: ClusterIP
ports:
- port: 8081
targetPort: api
protocol: TCP
name: api
selector:
app: powerdns
---
apiVersion: v1
kind: Service
metadata:
name: powerdns-service-admin
namespace: kube-system
labels:
app: powerdns
spec:
type: ClusterIP
ports:
- port: 9191
targetPort: pdns-admin-http
protocol: TCP
name: pdns-admin-http
selector:
app: powerdns
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: powerdns
namespace: kube-system
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
labels:
network: internal
spec:
rules:
- host: powerdns.<internal domain>
http:
paths:
- path: /
backend:
serviceName: powerdns-service-admin
servicePort: 9191
Filling in all of the entries sets up a PowerDNS service backed by MySQL or MariaDB, along with the PowerDNS-Admin frontend.
ExternalDNS
After this is a matter of setting up ExternalDNS so that it talks to PowerDNS, for which there is a Helm chart:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
name: external-dns
namespace: kube-system
spec:
chart: https://charts.bitnami.com/bitnami/external-dns-2.20.5.tgz
set:
provider: pdns
pdns.apiUrl: http://powerdns-service-api.kube-system.svc
pdns.apiPort: "8081"
pdns.apiKey: "<unencrypted PDNS_APIKEY from above>"
txtOwnerId: "external-dns"
domainFilters[0]: "<internal domain>"
interval: 10s
rbac.create: "true"
Once this is up and running, it will start registering services and ingresses with PowerDNS so that you can start querying the static IP specified earlier to find out IPs for various services, using their native ports (such as setting up an SSH server that will actually listen on port 22).
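For example, assuming a service was registered as ssh.<internal domain>:
$ dig +short ssh.<internal domain> @<PowerDNS IP>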
Next steps
After this is the obvious step: setting up DNS delegation for the specified subdomain. But that part should be easy, right? If you need to, take a look (again) at PowerDNS, except at the Recursor rather than the Authoritative Server.
Mar 08 2020
Shooting the messenger
Microsoft Exchange is a powerful and flexible groupware server, quite probably the best available right now. It's also massive overkill to run for just a few people, except that it's what I've been doing for the past several years. I finally decided to move away from it, so I spent some time looking for alternatives, particularly for something that needed fewer resources (seeing as it's designed for considerably larger audiences, Exchange needs quite a bit). Given the circumstances, I figured I would look for something that (naturally) would support the same level of functionality, as well as work with Outlook (which I personally find to be a very good desktop client). And, given that this is ultimately something for personal use, something that doesn't cost much. I'm willing to put in some sweat hours in order to get it working, especially since I'm able to learn a few things along the way. And before you point it out, I'm not a fan of having my e-mail hosted elsewhere (let alone my calendar and contact information) - it's something I would prefer to keep local. I was hoping to have something that supports the hodgepodge of protocols that Microsoft Exchange uses so that Outlook would work seamlessly, but it looks like there aren't many self-hosted options that do. In the end, native Outlook support was what I had to compromise on, and I ended up going with Kopano, which implements the ActiveSync protocol (as do many others). Unfortunately, the thing I lost in that move was the ability to move messages from one account to another (which I do for organization). 😕 In any case, on to the technical details!
Basic installation
One of the complications about Kopano is that it's difficult to obtain a stable version of the software if you're not a paying customer, something that's all too common for commercial open-source companies. They're perfectly happy to let you use their nightly builds and be a guinea pig for whatever software they've committed, though!
- You can go check out the Kopano Bitbucket repository and build it yourself! ... Except, I do way too much of that at work already. So, pass.
- There's also the option of getting a Contributor Edition license if you're spending a lot of time promoting/testing Kopano, but my interests tend to be considerably more... widespread than that.
- You can try using the OpenSUSE or Debian implementations, which aren't necessarily much better than the community versions. Picked from the Debian changelog:
- Interestingly, there's another option - as Kopano notes on their download page, they have official versions available via the Univention Corporate Server (UCS) system. They don't upload every version, but the versions that are available have still had stabilization work performed. So this is the route I investigated.
As mentioned previously, I use VMware ESXi as one of my virtualization platforms (I do have a Proxmox platform as well, and it's something I'll probably write about at some point in the future), so I downloaded the ESXi image (oddly, the UCS page lists separate Core, WebApp, WebMeetings, and Z-Push images, but they're all the same). Importing an OVA template into ESXi is fairly straightforward, so there isn't too much to write about there. The installation process is fairly simple as well.
Configuration
I went through several standard configuration options, some of which required setting UCS registry entries (either via System → Univention Configuration Registry or the ucr set command line utility):
- For accepting inbound e-mail from external sources, adding the external-facing domain in Mail (Domain → Mail) is a good idea.
- In order to have the name the server provides to other SMTP servers match the outward-facing name, setting mail/smtp/helo/name makes sense too.
- For Kopano settings, I'm in favour of using an external database for better disaster management, so setting kopano/cfg/server/mysql_host makes sense.
- Accordingly, it makes sense to disable MariaDB (mariadb/autostart=disabled and mysql/autostart=disabled).
- With this is creating a new MariaDB/MySQL database (kopano/cfg/server/mysql_database) and user as specified in the settings (kopano/cfg/server/mysql_user, kopano/cfg/server/mysql_password).
- In order to completely offload everything, the last piece is setting attachment_storage = database within /etc/kopano/server.cfg so that attachments are stored within the database (not recommended for large installations, but this isn't a large installation).
- Sane public-facing SSL certificates via installing Let's Encrypt (via Software → App Center). Instructions for installation are on that page.
- To use the certificates for HTTP, set apache2/force_https=true.
- If you want to use certificates for Kopano, set kopano/cfg/gateway/ssl_certificate_file, kopano/cfg/gateway/ssl_private_key_file, kopano/cfg/server/server_ssl_ca_file, and kopano/cfg/server/server_ssl_key_file. If you want to use CalDAV, then you can set kopano/cfg/ical/ssl_certificate_file and kopano/cfg/ical/ssl_private_key_file too.
- If you want to replace the default certificates (for internal-facing sites, for example - these won't collide with your Let's Encrypt sites), set apache2/ssl/ca, apache2/ssl/certificate, and apache2/ssl/key.
- Ongoing Active Directory synchronization (via Domain → Active Directory Connection).
- I wasn't able to generate a certificate that the system was happy with, so I ended up manually uploading a PEM certificate, then setting connector/ad/ldap/certificate to the uploaded path. You can tell that the underlying system wants a PEM certificate due to the code in /usr/lib/python2.7/dist-packages/univention/connector/ad/main.py where the local CA certificate and the incoming certificate are concatenated.
- Encryption is good, so you might think that setting LDAPS/SSL/etc. would be good. But, things get complicated because of their implementation: you can use SASL/GSSAPI (in this case, via Kerberos) or LDAPS/SSL, as Samba doesn't allow calling STARTTLS after SASL has been set up. The conversation between the two gets complicated, so I'll refrain from commenting further, but Kerberos should be sufficient.
- For connectivity, providing root with password-based ssh access is usually not recommended, so setting sshd/permitroot=without-password makes sense.
- If you want another user to have access, create a matching auth/sshd/user/<username>=yes entry.
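As an illustration of the ucr side of the settings above (values are placeholders):
$ ucr set mail/smtp/helo/name=<external hostname>
$ ucr set kopano/cfg/server/mysql_host=<database host> mariadb/autostart=disabled mysql/autostart=disabled
$ ucr set sshd/permitroot=without-password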
Further changes
There were additional configuration items I wanted that UCS doesn't allow for out of the box:
- If your server has an internal and an external network interface, in order to have it respond correctly to both internal and external traffic, you need to use iproute2. The catch is that due to UCS managing the network, manual changes to the usual places are likely to be overridden. So one interesting option is to use a crontab with the following entries (assuming the internal entry has already been added to /etc/iproute2/rt_tables), so that these commands run on reboot:
@reboot sleep 30 && ip rule add from <internal address> lookup internal priority 0
# The internal table also needs a matching default route (gateway assumed):
@reboot sleep 30 && ip route add default via <internal gateway> table internal
- The ESXi image comes in at 50 GB, which is excessive once mail is offloaded, so shrinking the disk image isn't a bad idea. The specifics for doing that within ESXi are beyond this article, though, although it's not too difficult finding instructions.
- If you want to prevent external addresses from accessing the Univention portal (probably not a bad idea), you can modify /etc/univention/templates/files/etc/apache2/sites-available/univention.conf to add this (modify for your own requirements):
Require ip 192.168.0.0/24
.
.
.
Regenerating the Apache configuration file is then a simple matter of asking ucr to re-render it from its template:
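$ ucr commit /etc/apache2/sites-available/univention.conf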
Thoughts
UCS is... interesting. It's not bad for what it is, but there's still an awful lot of "magic" that happens (which, admittedly, is probably necessary for something that's Linux-based). As a result, you end up with a situation where it's possible to see some of the things you might want to change, but it's difficult to do so (such as the iproute2 settings, which is handled via a cronjob because the "correct" way won't stick). For something as messy as Kopano, I'm willing to give this a shot (especially for release packages), but I don't think it's something I would want to do normally.
Mar 04 2020
Inside and out
As previously mentioned, I've been using k3s in order to run an internal Kubernetes cluster. One thing that it doesn't do out of the box, however, is handle more than one IP, which can be limiting for Ingresses. In my case, I'm interested in providing services via a few different methods, so this would be problematic. This is something that can be addressed by using different software, however - in this case, a software implementation of a load balancer by the name of MetalLB.
Since we're setting up MetalLB instead of k3s' default servicelb, this is an opportunity to tweak Traefik as well. This proves particularly useful due to wanting an instance of Traefik for each region. As such, the installation command turns into this (the wget version follows):
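A sketch of the wget form - the flags assume the k3s installer of the time, where --no-deploy skipped the bundled servicelb and Traefik (newer releases spell this --disable):
$ wget -qO- https://get.k3s.io | INSTALL_K3S_EXEC="--no-deploy servicelb --no-deploy traefik" sh -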
MetalLB is a fairly straightforward installation:
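Roughly, per the MetalLB manifests of the era (the version is pinned as an example; the memberlist secret is needed for layer 2 mode):
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"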
At this point, creating a simple configuration file enables MetalLB:
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: external
protocol: layer2
addresses:
- 1.2.3.4/32 # external IP
- name: internal
protocol: layer2
addresses:
- 192.168.0.1/32 # internal IP
auto-assign: false
This example just takes a single external and a single internal IP, naming them external and internal respectively (very imaginative, I know). The interesting point is the auto-assign, as it declares that this IP will not be automatically used. IP ranges can also be used if desired.
After this, Traefik (external) is fairly straightforward to set up as well, using a modified version of the YAML file bundled with k3s. We add a couple of bonuses while we're at it (full documentation available here):
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
name: traefik
namespace: kube-system
spec:
chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz
set:
rbac.enabled: "true"
ssl.enabled: "true"
ssl.enforced: "true"
ssl.permanentRedirect: "true"
metrics.prometheus.enabled: "true"
kubernetes.labelSelector: network!=internal
kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
acme.enabled: "true"
acme.staging: "false"
acme.email: <your e-mail address>
acme.challengeType: tls-alpn-01
acme.delayBeforeCheck: 90
acme.domains.enabled: "true"
The interesting points here are the kubernetes.labelSelector, as this declares that it should use non-internal addresses (in this case, 1.2.3.4), as well as enabling ACME for websites served from here. The ssl.* settings just build upon that.
The Traefik (internal) YAML looks fairly similar, although simplified due to not having any of the ACME settings:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
name: traefik-internal
namespace: kube-system
spec:
chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz
set:
fullnameOverride: traefik-internal
rbac.enabled: "true"
ssl.enabled: "true"
metrics.prometheus.enabled: "true"
kubernetes.labelSelector: network=internal
kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
service.annotations.metallb\.universe\.tf/address-pool: internal
The name is different here, naturally, but there's also the fullnameOverride setting, used so that Kubernetes components don't collide with the "regular" Traefik. The kubernetes.labelSelector is different here, as you can see, and we take advantage of MetalLB's ability to request specific IPs in order to have Traefik's loadbalancer service use the internal IP. The backslashes allow for specifying raw periods in the annotation name.
At this point, the previous docker-demo.yaml's Service can be tweaked to the following:
.
.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
labels:
network: internal
spec:
rules:
- host:
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 8081
For purpose of testing, we leave the host: entry blank so that it accepts all connections (yes, this could have been done with the previous example as well). The addition of the network: internal label means that this is exposed on 192.168.0.1 instead of 1.2.3.4. And that's it!