Archive


Blog - posts for February 2020

Feb 22 2020

More building blocks

As I'd mentioned before, one thing I've picked up over the past few years is Kubernetes. It's designed primarily for large-scale deployments, with a declarative approach to configuration. Even for small installations, though, it offers a number of attractive features, such as:

  • Easier upgrading
    • Delete a pod and it'll redeploy itself with whatever the newest Docker image is (with the caveat that this assumes upstream images on a mutable tag) - see the sketch after this list.
  • Better separation of configuration and data versus executables
    • A single YAML file can contain the entire service configuration, and directories can be mapped in for persistent data.
  • Better scalability
    • Virtual machines inherently use more resources than containers due to needing to emulate more of the stack.
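
As a quick sketch of that first point (the pod name is an example taken from the demo later in this post; this assumes the Deployment references a mutable image tag such as :latest, so the replacement pod pulls the newest image):

$ kubectl delete pod web-7fdc45cb65-m744g

Or, with kubectl 1.15 or newer, restarting the whole Deployment at once is tidier:

$ kubectl rollout restart deployment/web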

There is, admittedly, more complexity involved in maintaining such a system, but it's still manageable, and I feel that in this case the benefits outweigh the costs. So, what's involved? (Note: I'm not going to be spending much time explaining how Kubernetes works here, since that would end up taking entirely too much time. There are quite a few tutorials out there if you're interested in learning.)

Having looked around at a couple of alternatives, I've found that I'm fairly happy with k3s, an abbreviated Kubernetes (also known as k8s) (and, yes, that's a rather awful joke). For now, I've been using a single-node deployment, although it's pretty easy to scale it out as well. The recommended way of installing it is via curl, largely for safety reasons:

$ curl -sfL https://get.k3s.io | sh -

If you're feeling brave and/or trust Rancher (the company behind k3s) enough and/or don't feel like installing curl, you can use wget instead:

$ wget -qO - https://get.k3s.io | sh -
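
If piping a remote script straight into a shell makes you twitchy, you can always inspect it first:

$ curl -sfL https://get.k3s.io -o k3s-install.sh
$ less k3s-install.sh
$ sh k3s-install.sh

And scaling out later is largely the same script run on additional nodes, pointed at the server (a sketch; <server> and <token> are placeholders, with the token found in /var/lib/rancher/k3s/server/node-token on the server):

$ curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -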

The installation should be painless. After it has completed, you should be able to check that Kubernetes is running:

$ kubectl get -n kube-system pods
NAME                                      READY   STATUS              RESTARTS   AGE
metrics-server-6d684c7b5-jl842            0/1     ContainerCreating   0          11s
coredns-d798c9dd-k9rrc                    0/1     ContainerCreating   0          11s
local-path-provisioner-58fb86bdfd-2xvbp   0/1     ContainerCreating   0          11s
helm-install-traefik-2x8c7                0/1     ContainerCreating   0          11s

At this point, we can use a modified version of the simple YAML file from Docker's introduction of Kubernetes support in Docker (the changes are migrating the Deployments from apps/v1beta1 to apps/v1 and using an Ingress instead of a NodePort):

docker-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    app: words-db
spec:
  ports:
    - port: 5432
      targetPort: 5432
      name: db
  selector:
    app: words-db
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app: words-db
spec:
  selector:
    matchLabels:
      app: words-db
  template:
    metadata:
      labels:
        app: words-db
    spec:
      containers:
        - name: db
          image: dockersamples/k8s-wordsmith-db
          ports:
            - containerPort: 5432
              name: db
---
apiVersion: v1
kind: Service
metadata:
  name: words
  labels:
    app: words-api
spec:
  ports:
    - port: 8080
      targetPort: 8080
      name: api
  selector:
    app: words-api
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: words
  labels:
    app: words-api
spec:
  replicas: 5
  selector:
    matchLabels:
      app: words-api
  template:
    metadata:
      labels:
        app: words-api
    spec:
      containers:
        - name: words
          image: dockersamples/k8s-wordsmith-api
          ports:
            - containerPort: 8080
              name: api
---
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: words-web
spec:
  ports:
    - port: 8081
      targetPort: 80
      name: web
  selector:
    app: words-web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: words-web
spec:
  selector:
    matchLabels:
      app: words-web
  template:
    metadata:
      labels:
        app: words-web
    spec:
      containers:
        - name: web
          image: dockersamples/k8s-wordsmith-web
          ports:
            - containerPort: 80
              name: words-web
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
    - host: <hostname>
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 8081

I'm also rather fond of the k3s demo put together by a Rancher support engineer, but it doesn't work correctly on newer versions of k3s, since Traefik now serves /ping as its own health-check endpoint (which collides with this app's querying of that URL). It's not too hard to fix, but it requires either building yet another Docker container (minor changes needed to both main.go and templates/index.html.tmpl) or disabling Traefik's use of /ping, and I don't quite care enough to jump through those hoops. 😛

k3s-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rancher-demo
  labels:
    app: rancher-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rancher-demo
  template:
    metadata:
      labels:
        app: rancher-demo
    spec:
      containers:
        - name: rancher-demo
          image: superseb/rancher-demo
          ports:
            - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: rancher-demo-service
spec:
  selector:
    app: rancher-demo
  ports:
    - protocol: TCP
      port: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rancher-demo-ingress
spec:
  rules:
    - host: <hostname>
      http:
        paths:
          - path: /
            backend:
              serviceName: rancher-demo-service
              servicePort: 8080

The <hostname> in the Ingress at the end of each file should be replaced with your test domain name. Write the file to disk, then load it into Kubernetes. We can see the components come up shortly afterward:

$ kubectl apply -f docker-demo.yaml
service/db created
deployment.apps/db created
service/words created
deployment.apps/words created
service/web created
deployment.apps/web created
ingress.extensions/ingress created
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
db-77f4c964c-9xsq6      1/1     Running   0          4s
web-7fdc45cb65-m744g    1/1     Running   0          4s
words-8b7cc4ff8-j2m2l   1/1     Running   0          4s
words-8b7cc4ff8-2v7xf   1/1     Running   0          4s
words-8b7cc4ff8-fjcgw   1/1     Running   0          4s
words-8b7cc4ff8-x5gtb   1/1     Running   0          4s
words-8b7cc4ff8-cfqdt   1/1     Running   0          4s

At that point, you can go to your domain and verify that it works. The different pods are used as a pool for querying nouns/verbs/adjectives, with the IP of the serving pod listed at the bottom of each block. Each reload should show a different set of words/IPs. Congratulations! You'll probably want to secure your system, though; that's left as an exercise for the reader.
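
If you'd rather verify from the command line first, a quick sketch (assuming the Ingress hostname resolves to your node):

$ kubectl get ingress
$ curl -s http://<hostname>/ | head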

Feb 17 2020

Press OK for certificate

When setting up internal services (using Alpine Linux), there's generally a need for certificates to secure internal communication. I brushed over it previously, but a few additional comments about that:

  • Encrypting internal communication is a good idea, as there are multiple avenues for eavesdropping on internal traffic:
    • If you have any servers that allow multiple users (particularly with interactive shells), those users could sniff network traffic.
    • Treat systems (whether servers or containers) as though it's a matter of when they'll be compromised, not if. There's ultimately no such thing as software without vulnerabilities, so your infrastructure should have layered defenses.
  • I ultimately don't like wildcard certificates that span multiple systems: if one of those systems is compromised, you've given up a large chunk of that asymmetric encryption. Naturally, if you have multiple services on a single system, sharing a certificate across those services (with, say, a multi-domain certificate) isn't as much of an issue.

From a brief look at various PKI implementations, I settled on OpenXPKI as something that would be fairly easy to use and maintain at the scale I'm working with. EJBCA is, of course, a much more widely-used CA implementation, but it's also more heavyweight than what I'm looking for. As a bonus, OpenXPKI is the primary target for CertNanny, which allows for automated certificate renewal via SCEP. Looking at the other open source SCEP clients, most of them haven't been updated in considerably longer than CertNanny, jSCEP aside. And, well, I'm trying to avoid running Java where I can, given how much Java loves chewing through resources (which also applies to EJBCA). 😕

I won't be going into installing and setting up OpenXPKI here, although I will mention that needing to install Debian just to use the official packages is annoying. I've been reluctant to use a container (since I want to treat my CA as core infrastructure), but I may switch over to that at some point in the future. Just remember to also install openca-tools, which isn't listed in the Quickstart documentation but is needed for SCEP to function; a sketch follows.
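
For reference, the Debian install boils down to something like this (package names as I recall them from the Quickstart; treat them as assumptions and double-check against the current documentation):

$ apt install libopenxpki-perl openxpki-i18n
$ apt install openca-tools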

A few changes/bugfixes are needed for OpenXPKI to work correctly with SCEP:

  • If your realm has a profile/I18N_OPENXPKI_PROFILE_TLS_SERVER.yaml (i.e. if your OpenXPKI is old enough), update your 05_advanced_style section to add this beneath subject. This brings it up to the current definition, which allows SANs to be specified on advanced requests - and we need that:
I18N_OPENXPKI_PROFILE_TLS_SERVER.yaml
.
.
.
            san:
                - san_ipv4
                - san_dns
.
.
.
  • Edit your realm's profile/template/san_dns.yaml and change the top line, as OpenSSL doesn't like SANs specified with dns, only DNS:
san_dns.yaml
id: DNS
.
.
.
  • After these changes, restart the OpenXPKI daemon so that it reloads the templates.

The service for this example will be PostgreSQL. After the usual Alpine Linux installation and configuration steps, add PostgreSQL itself:

$ apk add postgresql

As per the PostgreSQL documentation, enable SSL in postgresql.conf (sketched below). Now comes the interesting part: auto-generating server certificates. First off is installing sscep and CertNanny. If your CertNanny installation didn't pull in a JDK, make sure you install one.
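
The relevant postgresql.conf lines are sketched below; server.crt and server.key are PostgreSQL's default names, and they line up with the keystore locations configured further down:

postgresql.conf
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'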

sscep and CertNanny aren't available on Alpine Linux by default. You can either build them yourself (fairly straightforward) or wait for me to provide packages.

A fix is needed for the code here as well:

  • Edit your CertNanny's lib/perl/CertNanny/Config.pm and lib/perl/CertNanny/Util.pm so that the openssl calls now use -sha1 instead of -sha:
Config.pm
    my @cmd = (qq("$openssl"), 'dgst', '-sha1', qq("$file"));
Util.pm
      my @cmd = (qq("$openssl"), 'dgst', '-sha1', qq("$tmpfile"));

Then, the CertNanny configuration files need to be set up. The simplest way is to copy the template files and modify them. Key sections to modify include:

Keystore-DEFAULT.cfg
  • keystore.DEFAULT.enroll.sscep.URL: set to your OpenXPKI SCEP URL (by default, http://<server>/scep/). sscep enforces a non-SSL connection, as the SCEP payload is (in theory) already encrypted.
Keystore-OpenSSL.cfg

This is the default file we'll be using, since OpenSSL PEM is the de facto standard on Linux. For PostgreSQL, we're not even concerned with the default entries, since we know where to put the files and that we don't want the server key encrypted. We also want the PostgreSQL server restarted whenever a new key is installed:

Keystore-OpenSSL.cfg
keystore.openssl.type                               = OpenSSL
keystore.openssl.location                           = /var/lib/postgresql/12/data/server.crt
keystore.openssl.format                             = PEM
keystore.openssl.key.file                           = /var/lib/postgresql/12/data/server.key
keystore.openssl.key.type                           = OpenSSL
keystore.openssl.key.format                         = PEM

keystore.openssl.hook.renewal.install.post          = /sbin/rc-service postgresql restart
certnanny.cfg
  • cmd.sscep: set to the path of your sscep binary.
  • Uncomment the include Keystore-OpenSSL.cfg line.
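
Put together, the relevant lines end up looking something like this (the sscep path is a placeholder for wherever you installed the binary):

certnanny.cfg
cmd.sscep = /usr/local/bin/sscep
include Keystore-OpenSSL.cfg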

Then, some directory setup:

$ mkdir -p /var/CertNanny/state
$ mkdir -p /var/CertNanny/tmp
$ mkdir -p /var/CertNanny/AuthoritativeRootcerts

Put a copy of your root certificate into /var/CertNanny/AuthoritativeRootcerts so that sscep can validate your certificates.

Finally, we can set up automated certificate renewals. For the initial enrollment, use OpenXPKI to create an initial certificate (CertNanny uses it as the basis for the renewed certificate's settings). However, the certificate will need to go through the Extended naming style to make the following modifications:

  • The CN will need to have a suffix of :pkiclient (i.e. CN=<server>:pkiclient), which is the OpenXPKI default for auto-renewing certificates.
  • The SAN entries should be all FQDNs the server will listen on, including the hostname used to generate the CN.

The generated certificate and key should then be placed in the locations specified previously in Keystore-OpenSSL.cfg. At this point, you should be clear to enroll the certificate:

$ certnanny --cfg <prefix>/etc/certnanny.cfg enroll

You may need to accept the enrollment within OpenXPKI. Then, if you'd like to verify that renewals work, force one:

$ certnanny --cfg <prefix>/etc/certnanny.cfg renew --force

Finally, hooking it up via whichever cron mechanism you're using should finish the job; a sketch follows.
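
A minimal sketch, assuming a root crontab and that a plain renew (without --force) only acts when the certificate is within its renewal window:

/etc/crontabs/root
# Check daily at 03:30 whether the certificate needs renewing
30 3 * * * certnanny --cfg <prefix>/etc/certnanny.cfg renew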

Feb 16 2020

Setting sights on a new baseline

I haven't been completely idle for the past several years, for all that I've done a really bad job of writing new entries. One thing I've worked on during this time is learning a fair amount about Kubernetes, which (along with Docker) has been instrumental in popularizing Alpine Linux, a lightweight Linux distribution that focuses on security. When working with more of a service-oriented infrastructure instead of a monolithic one, keeping the overhead of virtual machines low leaves more resources for the actual services. Running services in containers also helps, but that's not a great approach for core services.

The standard Alpine Linux installation documentation covers all of the basics. A couple of additional minor notes:

  • I use the Virtual image from the Downloads page, since I don't need to worry about a variety of hardware-related packages.
  • To be rid of the process '/sbin/getty -L 0 ttyS0 vt100' exited messages from /var/log/messages, I recommend commenting out this line from /etc/inittab (explanation provided here):
/etc/inittab
# enable login on alternative console
ttyS0::respawn:/sbin/getty -L 0 ttyS0 vt100
  • Installing open-vm-tools for ESXi integration comes down to personal taste. It registers information at boot time, so starting the service immediately after installation doesn't update ESXi. It's up to you whether that's worth the 50MB of disk space; if so, see the sketch after this list.
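
A minimal sketch for that installation, assuming the OpenRC service name matches the package name:

$ apk add open-vm-tools
$ rc-update add open-vm-tools
$ rc-service open-vm-tools start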

I'm a proponent of having service users that are controlled centrally (in my case, via Active Directory). Since Alpine Linux uses musl instead of glibc, it doesn't natively support NSS, so integration comes in at the password authentication level via PAM, but not at the user lookup level. This requires a little bit of extra work.

First off, install several packages after enabling the community repository:

$ apk add cyrus-sasl-gssapiv2 nss-pam-ldapd openssh-server-pam shadow

There's a bunch of documentation on how to set up nslcd on this Samba page, and on how to set up the pam.d files on this nss-pam-ldapd page. The former is useful for how much configuration detail it goes into, even if it's not on the main project site. In my case, I opted for the Kerberos authentication route, to avoid leaving credentials on the system in plaintext. If you're using Samba, running this on a domain controller should do the job:

$ samba-tool domain exportkeytab <keytab> --principal=<user>

This file then needs to be copied over to the client in question. For this purpose, the file on the client will be /etc/krb5.nslcd.keytab, owned by nslcd:nslcd with permissions 0600:
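
$ chown nslcd:nslcd /etc/krb5.nslcd.keytab
$ chmod 0600 /etc/krb5.nslcd.keytab

After that, create an initial Kerberos ticket to work with for now (you'll need to figure out how to have it regularly renewed):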

k5start isn't available on Alpine Linux by default. You can either build it yourself (fairly straightforward), follow the Samba username/password instructions, or wait for me to provide a package (in which k5start is configured as a service for automated renewal).

$ k5start -f /etc/krb5.nslcd.keytab -U -o nslcd -K 360 -b -k /tmp/nslcd.tkt

At this point, you can make changes to /etc/nslcd.conf to hook up your directory server (mostly as per the Samba page, with comments explaining the differences):

/etc/nslcd.conf
.
.
.
# We put both ldaps:// and ldap:// because they get used at slightly different times -
# ldap:// for querying the user details via the keytab, and ldaps:// for verifying the
# password. We put ldaps:// before ldap:// so that when verifying the password, the
# LDAP bind doesn't happen over cleartext.
uri ldaps://<LDAP server address>
uri ldap://<LDAP server address>

base <LDAP base>

# For the ldaps:// connection, don't blindly accept the certificate. Validate it.
tls_cacertfile <root certificate file>

pagesize 1000
referrals off
nss_nested_groups yes
sasl_mech GSSAPI
sasl_realm <Kerberos realm>
# nss-pam-ldapd is much happier being able to pass in the ticket's user.
sasl_authzid u:<user>
krb5_ccname /tmp/nslcd.tkt

# Uncomment the various "Mappings for Active Directory" lines.
.
.
.

You should be able to start the nslcd daemon and enable it on boot:

$ rc-service nslcd start
$ rc-update add nslcd

From here, in order to have PAM query nslcd, edit /etc/pam.d/base-auth to add the nss-pam-ldapd library:

/etc/pam.d/base-auth
# basic PAM configuration for Alpine.
auth     required        pam_env.so
auth     sufficient      pam_unix.so     nullok_secure
auth     required        pam_nologin.so  successok

auth     sufficient      pam_unix.so     nullok try_first_pass
auth     sufficient      pam_ldap.so     minimum_uid=1000 use_first_pass
# Required, since pam_unix.so has been downgraded from required to sufficient
auth     required        pam_deny.so

account  required        pam_nologin.so
account  sufficient      pam_unix.so
account  sufficient      pam_ldap.so     minimum_uid=1000

password sufficient      pam_unix.so     nullok sha512 shadow try_first_pass use_authtok
password sufficient      pam_ldap.so     minimum_uid=1000 try_first_pass

-session optional        pam_loginuid.so
-session optional        pam_elogind.so
session  sufficient      pam_unix.so
session  optional        pam_ldap.so     minimum_uid=1000

Enabling OpenSSH logins from here is pretty simple:

/etc/ssh/sshd_config
.
.
.
UsePAM yes
.
.
.

As noted before, this is a PAM-only solution. As a result, any user that needs login capability will need a local user created, and that local user dictates everything other than the password (e.g. the user ID, group ID, etc.). So make sure you do that for every user that needs login permissions (adjust options as you see fit):

$ useradd --create-home <user>

At this point, you should now be able to log in (via either console or ssh) to the system. Congratulations!

Note: Unfortunately, sudo won't work with your logged-in users, as it queries NSS by default, and the Alpine Linux version doesn't have LDAP-querying capabilities. So this is primarily useful for having distributed intermediate users.

Bonus! If you're interested in better SSH encryption, it's probably best not to use the sshd_config defaults, since those currently negotiate (with a modern SSH client) ecdsa-sha2-nistp256, which has some issues. As a result, it's worth adding this to the end of sshd_config (taken from this page). At present, the moduli file doesn't need to be adjusted, since none of the primes are less than 2000 bits (see the check after the config block).

/etc/ssh/sshd_config
Protocol 2
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
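
If you'd like to double-check the moduli yourself, the prime size is the fifth field of each entry, so this one-liner prints any primes below 2000 bits (no output means nothing needs removing):

$ awk '$5 < 2000' /etc/ssh/moduli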

Feb 12 2020

A new start

After a long time working on other things, I've started rebuilding a bunch of the systems backing Toreishi.net. As part of that, I've decided to move from XWiki to Confluence (especially since, let's be honest, XWiki is essentially an open-source Confluence clone). I've been partial to the Atlassian collaboration suite since I worked with it several years ago, and I haven't encountered anything commercial that comes close to Confluence and JIRA in the intervening years. I still need to import a bunch of old data over, but there should be a fair amount of content resurrected and updated in the upcoming months. 😄