Blog - posts for March 2020
Mar 16 2020
When juggling multiple applications in Kubernetes, it's not uncommon to end up with all kinds of conflicting requirements. HTTP/HTTPS traffic is the easiest, since you can use something like Traefik (even if it does become more complicated if you run multiple endpoints), but services speaking other kinds of traffic are trickier - which is a great reason to run MetalLB, as previously mentioned. The catch is, once the system starts assigning different IPs to different services, how do you know which IP to contact? One option is to just use hard-coded IPs for everything, but that's not very scalable. This is where you can have fun with something like ExternalDNS, which is able to register services with a DNS server. In our case, hosting PowerDNS on Kubernetes itself ends up being a very interesting option, allowing everything to be internalized (although giving PowerDNS itself a static IP is a good idea!).
PowerDNS
Setting up PowerDNS isn't too bad if you already have a database available (I would recommend an external database so that you don't need to worry about database corruption if a pod is forcibly stopped). The YAML looks something like this (there is no official Helm chart as of this writing):
apiVersion: v1
kind: Secret
metadata:
name: powerdns-secret
namespace: kube-system
type: Opaque
data:
PDNS_APIKEY: <base64 secret>
MYSQL_PASS: <base64 secret>
PDNSADMIN_SECRET: <base64 secret>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: powerdns
namespace: kube-system
labels:
app: powerdns
spec:
replicas: 1
selector:
matchLabels:
app: powerdns
template:
metadata:
labels:
app: powerdns
spec:
containers:
- name: powerdns
image: pschiffe/pdns-mysql:alpine
livenessProbe:
exec:
command: ["/bin/sh", "-c", "pdnsutil list-zone <internal domain> 2>/dev/null"]
readinessProbe:
exec:
command: ["/bin/sh", "-c", "nc -vz <database hostname> 3306"]
initialDelaySeconds: 20
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "a=0;while [ $a -lt 200 ];do sleep 1;a=$[a+1];echo 'stage: '$a;if nc -vz <database hostname> 3306;then (! pdnsutil list-zone <internal domain> 2>/dev/null) && pdnsutil create-zone <internal domain>;echo 'End Stage';a=200;fi;done"]
env:
- name: PDNS_api_key
valueFrom:
secretKeyRef:
name: "powerdns-secret"
key: PDNS_APIKEY
- name: PDNS_master
value: "yes"
- name: PDNS_api
value: "yes"
- name: PDNS_webserver
value: "yes"
- name: PDNS_webserver_address
value: 0.0.0.0
- name: PDNS_webserver_allow_from
value: 0.0.0.0/0
- name: PDNS_webserver_password
valueFrom:
secretKeyRef:
name: "powerdns-secret"
key: PDNS_APIKEY
- name: PDNS_default_ttl
value: "1500"
- name: PDNS_soa_minimum_ttl
value: "1200"
- name: PDNS_default_soa_name
value: "ns1.<internal domain>"
- name: PDNS_default_soa_mail
value: "hostmaster.<internal domain>"
- name: MYSQL_ENV_MYSQL_HOST
value: <database hostname>
- name: MYSQL_ENV_MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: powerdns-secret
key: MYSQL_PASS
- name: MYSQL_ENV_MYSQL_DATABASE
value: powerdns
- name: MYSQL_ENV_MYSQL_USER
value: powerdns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 8081
name: api
protocol: TCP
- name: powerdnsadmin
image: aescanero/powerdns-admin:latest
livenessProbe:
exec:
command: ["/bin/sh", "-c", "nc -vz 127.0.0.1 9191 2>/dev/null"]
initialDelaySeconds: 80
readinessProbe:
exec:
command: ["/bin/sh", "-c", "nc -vz <database hostname> 3306 2>/dev/null "]
initialDelaySeconds: 40
env:
- name: PDNS_API_KEY
valueFrom:
secretKeyRef:
name: "powerdns-secret"
key: PDNS_APIKEY
- name: PDNSADMIN_SECRET_KEY
valueFrom:
secretKeyRef:
name: "powerdns-secret"
key: PDNSADMIN_SECRET
- name: PDNS_PROTO
value: http
- name: PDNS_HOST
value: 127.0.0.1
- name: PDNS_PORT
value: "8081"
- name: PDNSADMIN_SQLA_DB_HOST
value: <database hostname>
- name: PDNSADMIN_SQLA_DB_PASSWORD
valueFrom:
secretKeyRef:
name: powerdns-secret
key: MYSQL_PASS
- name: PDNSADMIN_SQLA_DB_NAME
value: powerdns
- name: PDNSADMIN_SQLA_DB_USER
value: powerdns
ports:
- containerPort: 9191
name: pdns-admin-http
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: powerdns-service-dns
namespace: kube-system
annotations:
metallb.universe.tf/address-pool: <IP identifier>
labels:
app: powerdns
spec:
type: LoadBalancer
ports:
- port: 53
nodePort: 30053
targetPort: dns
protocol: UDP
name: dns
selector:
app: powerdns
---
apiVersion: v1
kind: Service
metadata:
name: powerdns-service-api
namespace: kube-system
labels:
app: powerdns
spec:
type: ClusterIP
ports:
- port: 8081
targetPort: api
protocol: TCP
name: api
selector:
app: powerdns
---
apiVersion: v1
kind: Service
metadata:
name: powerdns-service-admin
namespace: kube-system
labels:
app: powerdns
spec:
type: ClusterIP
ports:
- port: 9191
targetPort: pdns-admin-http
protocol: TCP
name: pdns-admin-http
selector:
app: powerdns
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: powerdns
namespace: kube-system
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
labels:
network: internal
spec:
rules:
- host: powerdns.<internal domain>
http:
paths:
- path: /
backend:
serviceName: powerdns-service-admin
servicePort: 9191
Filling in all of the entries sets up a PowerDNS service backed by MySQL or MariaDB, along with the PowerDNS-Admin frontend.
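With the placeholders filled in, applying it is the usual one-liner (the filename here is arbitrary):
kubectl apply -f powerdns.yaml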
ExternalDNS
After this, it's a matter of setting up ExternalDNS so that it talks to PowerDNS, for which there is a Helm chart:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
name: external-dns
namespace: kube-system
spec:
chart: https://charts.bitnami.com/bitnami/external-dns-2.20.5.tgz
set:
provider: pdns
pdns.apiUrl: http://powerdns-service-api.kube-system.svc
pdns.apiPort: "8081"
pdns.apiKey: "<unencrypted PDNS_APIKEY from above>"
txtOwnerId: "external-dns"
domainFilters[0]: "<internal domain>"
interval: 10s
rbac.create: "true"
Once this is up and running, it will start registering services and ingresses with PowerDNS, so you can query the static IP specified earlier to find the IPs of various services on their native ports (such as an SSH server that actually listens on port 22).
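As an illustration, here's a minimal sketch of such a service (the ssh-server name and hostname are hypothetical); the external-dns.alpha.kubernetes.io/hostname annotation is what tells ExternalDNS which record to create:
apiVersion: v1
kind: Service
metadata:
  name: ssh-server
  annotations:
    # ExternalDNS picks this up and creates the A record in PowerDNS
    external-dns.alpha.kubernetes.io/hostname: ssh.<internal domain>
spec:
  type: LoadBalancer
  ports:
  - name: ssh
    port: 22
    targetPort: 22
    protocol: TCP
  selector:
    app: ssh-server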
Next steps
After this comes the obvious next step: setting up DNS delegation for the specified subdomain. But that part should be easy, right? If you need to, take a look (again) at PowerDNS, except at the Recursor rather than the Authoritative Server.
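For what it's worth, with the Recursor it should come down to a single forward-zones entry in recursor.conf, pointing the internal domain at the static IP given to PowerDNS earlier:
forward-zones=<internal domain>=<PowerDNS static IP>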
Mar 08 2020
Shooting the messenger
Microsoft Exchange is a powerful and flexible groupware server, quite probably the best available right now. It's also massive overkill for just a few people - which is exactly how I've been running it for the past several years. I finally decided to move away from it, so I spent some time looking for alternatives, particularly something that needs fewer resources (being designed for considerably larger audiences, Exchange needs quite a bit). Given the circumstances, I figured I would look for something that (naturally) supports the same level of functionality, works with Outlook (which I personally find to be a very good desktop client), and - given that this is ultimately for personal use - doesn't cost much. I'm willing to put in some sweat hours to get it working, especially since I learn a few things along the way. And before you point it out: I'm not a fan of having my e-mail hosted elsewhere (let alone my calendar and contact information) - it's something I would prefer to keep local.
I was hoping for something that supports the hodgepodge of protocols Microsoft Exchange uses, so that Outlook would work seamlessly, but it turns out there aren't many self-hosted options that satisfy that. In the end, native Outlook support was what I had to compromise on, and I ended up going with Kopano, which implements the ActiveSync protocol (as do many others). Unfortunately, the one thing I lost in the move was the ability to move messages from one account to another (which I do for organization). 😕 In any case, on to the technical details!
Basic installation
One of the complications with Kopano is that it's difficult to obtain a stable version of the software if you're not a paying customer, something that's all too common with commercial open-source companies. They're perfectly happy to let you use their nightly builds and be a guinea pig for whatever they've committed, though!
- You can go check out the Kopano Bitbucket repository and build it yourself! ... Except, I do way too much of that at work already. So, pass.
- There's also the option of getting a Contributor Edition license if you're spending a lot of time promoting/testing Kopano, but my interests tend to be considerably more... widespread than that.
- You can try the openSUSE or Debian packages, which aren't necessarily much better than the community versions, judging from the Debian changelog.
- Interestingly, there's another option - as Kopano notes on their download page, they have official versions available via the Univention Corporate Server (UCS) system. They don't upload every version, but the versions that are available have still had stabilization work performed. So this is the route I investigated.
As mentioned previously, I use VMware ESXi as one of my virtualization platforms (I do have a Proxmox platform as well, and it's something I'll probably write about at some point in the future), so I downloaded the ESXi image (oddly, the UCS page lists separate Core, WebApp, WebMeetings, and Z-Push images, but they're all the same). Importing an OVA template into ESXi is fairly straightforward, so there isn't too much to write about there. The installation process is fairly simple as well.
Configuration
I went through several standard configuration options, some of which required setting UCS registry entries (either via System → Univention Configuration Registry or the ucr set command-line utility; a consolidated sketch of the commands follows this list):
- For accepting inbound e-mail from external sources, adding the external-facing domain in Mail (Domain → Mail) is a good idea.
- In order to have the name the server provides to other SMTP servers match the outward-facing name, setting mail/smtp/helo/name makes sense too.
- For Kopano settings, I'm in favour of using an external database for better disaster management, so setting kopano/cfg/server/mysql_host makes sense.
- Accordingly, it makes sense to disable MariaDB (mariadb/autostart=disabled and mysql/autostart=disabled).
- Along with this, create a new MariaDB/MySQL database and user to match the settings (kopano/cfg/server/mysql_database, kopano/cfg/server/mysql_user, kopano/cfg/server/mysql_password).
- To completely offload everything, the last piece is setting attachment_storage = database within /etc/kopano/server.cfg so that attachments are stored in the database as well (not recommended for large installations, but this isn't one).
- Sane public-facing SSL certificates via installing Let's Encrypt (via Software → App Center). Instructions for installation are on that page.
- To use the certificates for HTTP, set apache2/force_https=true.
- If you want to use certificates for Kopano, set kopano/cfg/gateway/ssl_certificate_file, kopano/cfg/gateway/ssl_private_key_file, kopano/cfg/server/server_ssl_ca_file, and kopano/cfg/server/server_ssl_key_file. If you want to use CalDAV, then you can set kopano/cfg/ical/ssl_certificate_file and kopano/cfg/ical/ssl_private_key_file too.
- If you want to replace the default certificates (for internal-facing sites, for example - these won't collide with your Let's Encrypt sites), set apache2/ssl/ca, apache2/ssl/certificate, and apache2/ssl/key.
- Ongoing Active Directory synchronization (via Domain → Active Directory Connection).
- I wasn't able to generate a certificate that the system was happy with, so I ended up manually uploading a PEM certificate, then setting connector/ad/ldap/certificate to the uploaded path. You can tell that the underlying system wants a PEM certificate due to the code in /usr/lib/python2.7/dist-packages/univention/connector/ad/main.py where the local CA certificate and the incoming certificate are concatenated.
- Encryption is good, so you might think that setting LDAPS/SSL/etc. would be good. But, things get complicated because of their implementation: you can use SASL/GSSAPI (in this case, via Kerberos) or LDAPS/SSL, as Samba doesn't allow calling STARTTLS after SASL has been set up. The conversation between the two gets complicated, so I'll refrain from commenting further, but Kerberos should be sufficient.
- For connectivity, providing root with password-based ssh access is usually not recommended, so setting sshd/permitroot=without-password makes sense.
- If you want another user to have access, create a matching auth/sshd/user/<username>=yes entry.
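To give a feel for the above, here's a rough consolidated sketch of those registry settings as shell commands - the hostnames, password, and database name are stand-ins, so substitute your own:
# Outward-facing SMTP name (hostname is a placeholder)
ucr set mail/smtp/helo/name=mail.example.com
# Point Kopano at the external database (values are placeholders)
ucr set kopano/cfg/server/mysql_host=db.example.com \
        kopano/cfg/server/mysql_database=kopano \
        kopano/cfg/server/mysql_user=kopano \
        kopano/cfg/server/mysql_password='<password>'
# Stop the local MariaDB, since it's no longer needed
ucr set mariadb/autostart=disabled mysql/autostart=disabled
# Force HTTPS and disable password-based root ssh
ucr set apache2/force_https=true
ucr set sshd/permitroot=without-password
On the external database side, the matching database and user would be something like:
CREATE DATABASE kopano;
CREATE USER 'kopano'@'%' IDENTIFIED BY '<password>';
GRANT ALL PRIVILEGES ON kopano.* TO 'kopano'@'%';
FLUSH PRIVILEGES;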
Further changes
There were additional configuration items which I wanted, which are items that UCS doesn't allow for out of the box:
- If your server has an internal and an external network interface, then in order to have it respond correctly to traffic on both, you need to use iproute2. The catch is that, with UCS managing the network, manual changes to the usual places are likely to be overridden. So one interesting option is a crontab with entries like the following (assuming internal has already been added to /etc/iproute2/rt_tables), so that the commands run on reboot:
@reboot sleep 30 && ip rule add from <internal address> lookup internal priority 0
- The ESXi image comes in at 50 GB, which is excessive once mail storage is offloaded, so shrinking the disk image isn't a bad idea. The specifics of doing that within ESXi are beyond this article, although it's not too difficult to find instructions.
- If you want to prevent external addresses from accessing the Univention portal (probably not a bad idea), you can modify /etc/univention/templates/files/etc/apache2/sites-available/univention.conf to add this (modify for your own requirements):
Require ip 192.168.0.0/24
...
Regenerating the Apache configuration file afterwards is simple.
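Assuming the standard UCS template workflow, where ucr commit rewrites a generated file from its template, it should be a matter of:
ucr commit /etc/apache2/sites-available/univention.conf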
Thoughts
UCS is... interesting. It's not bad for what it is, but there's still an awful lot of "magic" happening (which, admittedly, is probably necessary for something that's Linux-based). As a result, you end up in a situation where you can see some of the things you might want to change, but it's difficult to actually change them (such as the iproute2 settings, which are handled via a cronjob because the "correct" way won't stick). For something as messy as Kopano, I'm willing to give this a shot (especially for release packages), but I don't think it's something I would want to do normally.
Mar 04 2020
Inside and out
As previously mentioned, I've been using k3s to run an internal Kubernetes cluster. One thing it doesn't do out of the box, however, is handle more than one IP, which can be restrictive for Ingresses. In my case, I'm interested in exposing services in a few different ways, so this would be problematic. It's something that can be addressed with additional software, however - in this case, a software load balancer by the name of MetalLB.
Since we're setting up MetalLB instead of k3s' default servicelb, this is an opportunity to tweak Traefik as well, which proves particularly useful since I want an instance of Traefik for each region. As such, the installation command needs a couple of extra flags.
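A sketch of what that looks like, assuming the --no-deploy flags that k3s' installer accepts for skipping bundled components (curl version first, wget version after):
curl -sfL https://get.k3s.io | sh -s - --no-deploy servicelb --no-deploy traefik
wget -qO- https://get.k3s.io | sh -s - --no-deploy servicelb --no-deploy traefik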
MetalLB itself is a fairly straightforward installation.
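Something like the following, going by MetalLB's manifest-based installation docs (the v0.9.3 version number is an assumption - use whatever is current):
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"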
At this point, creating a simple configuration file enables MetalLB:
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: external
protocol: layer2
addresses:
- 1.2.3.4/32 # external IP
- name: internal
protocol: layer2
addresses:
- 192.168.0.1/32 # internal IP
auto-assign: false
This example just takes a single external and a single internal IP, naming the pools external and internal respectively (very imaginative, I know). The interesting point is auto-assign, which declares that the internal pool's IPs won't be handed out automatically - a service has to ask for that pool by name. IP ranges can also be used if desired.
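As a quick sketch of what asking for a pool looks like (the service itself is hypothetical), the request goes in an annotation:
apiVersion: v1
kind: Service
metadata:
  name: example-service
  annotations:
    # Request an IP from the non-auto-assigned internal pool
    metallb.universe.tf/address-pool: internal
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: example-service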
After this, Traefik (external) is fairly straightforward to set up as well, using a modified version of the YAML file bundled with k3s. We add a couple of bonuses while we're at it (full documentation available here):
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
name: traefik
namespace: kube-system
spec:
chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz
set:
rbac.enabled: "true"
ssl.enabled: "true"
ssl.enforced: "true"
ssl.permanentRedirect: "true"
metrics.prometheus.enabled: "true"
kubernetes.labelSelector: network!=internal
kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
acme.enabled: "true"
acme.staging: "false"
acme.email: <your e-mail address>
acme.challengeType: tls-alpn-01
acme.delayBeforeCheck: 90
acme.domains.enabled: "true"
The interesting points here are the kubernetes.labelSelector, as this declares that it should use non-internal addresses (in this case, 1.2.3.4), as well as enabling ACME for websites served from here. The ssl.* settings just build upon that.
The Traefik (internal) YAML looks fairly similar, although simplified due to not having any of the ACME settings:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
name: traefik-internal
namespace: kube-system
spec:
chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz
set:
fullnameOverride: traefik-internal
rbac.enabled: "true"
ssl.enabled: "true"
metrics.prometheus.enabled: "true"
kubernetes.labelSelector: network=internal
kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
service.annotations.metallb\.universe\.tf/address-pool: internal
The name is different here, naturally, but there's also the fullnameOverride setting, used so that its Kubernetes components don't collide with the "regular" Traefik. The kubernetes.labelSelector is inverted, as you can see, and we take advantage of MetalLB's pool annotation in order to have Traefik's loadbalancer service use the internal IP. The backslashes allow for specifying raw periods in the annotation name.
At this point, the previous docker-demo.yaml's Ingress can be tweaked to the following:
.
.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
labels:
network: internal
spec:
rules:
- host:
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 8081
For purposes of testing, we leave the host: entry blank so that it accepts all connections (yes, this could have been done with the previous example as well). The addition of the network: internal label means this is exposed on 192.168.0.1 instead of 1.2.3.4. And that's it!
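As a quick sanity check from a machine on the internal network (any Host header will match, since the host entry is blank):
curl -v http://192.168.0.1/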