My name is Joey Hafner. I’m a computing engineer located in Tacoma, WA.
-
I’m always working on a diverse array of tech projects, whether it’s my homelab or spending 3 hours wrestling with a script to save me 30 seconds.
-
I’ve enjoyed hacking with whatever tech I could get my hands on since I was a teen, and that interest was overclocked in 2019 when I found Docker and started my homelab.
-
I’m always excited about opportunities to learn new technologies and build useful things.
But this lab is built by me and for me. Just as I am the sole (or primary) beneficiary of its value, I am also the sole owner. I’ve gotta pay for all the hard drives, the network switches, the API keys, the power supplies, the rack rails. I’ve gotta configure the SMTP notifications, the DNS, the firewalls and the subnets. I open, update, and close issues, remediate leaked secrets, and write documentation.
-
It’s exhausting and exhilarating, frustrating and fulfilling, thankless and thankful. And I’ve never written about it before, so here’s me finally getting to do that.
-
-
Core Services: It’s about data.
-
For myriad reasons, I want to maintain as much control of my data as possible. So I bought hard drives to store my data, and built computers around those hard drives to move my data to and fro. Lastly, I selected a few of the awesome projects others have built to tell the computers how to move my data.
-
-
Veteran self-hosters are familiar with many of these, but I want to talk about how each of these projects helps me claw back a little bit of control over my data.
Bitwarden has an excellent security track record so far. But two factors (ha!) led to my choosing to self-host the community-built server instead of using Bitwarden’s first-party cloud service:
-
-
Subscription-gated features. 2FA/OTP authenticator, file attachments, security reports, and more are gated behind a subscription. I wholeheartedly endorse Bitwarden’s decision, but that’s just enough encouragement for me to host it myself.
-
Bigger means more attractive target. The more people put their trust in Bitwarden, the more attractive a target it becomes. My personal server is unlikely to attract the attention of any individuals or organizations with the capability to penetrate Bitwarden’s security. Of course, that only matters if I maintain good security posture everywhere else in the homelab. More on that another day.
-
-
I’ve had an excellent experience with Bitwarden so far. The user experience is fluid enough that I was able to onboard my family without issue.
GitHub is great. And I often question my decision to host my own Git and CI/CD server. I’m not foolish enough to worry that any of my code would be included in any high-quality AI training datasets.
-
I started out using a self-hosted GitLab instance, but its power and flexibility entail weight. And I got tired of the administrative toil caused by frequent and substantial updates to the entire platform.
-
So I spun up Gitea, set up some runners, and got back to work.
Google Drive is pretty cool and useful. I didn’t care much for Docs and the other office products, but the storage, sharing, and sync functionality Drive provided was useful at school, at home, at work, and even for gaming. But it’s hard-limited to 15 GB on the free version. And expanding that costs ~$5/TB mo., which is only $0.63 cheaper than buying an 8TB SAS HDD every month. So I decided drives > Drive.
-
And it feels luxurious knowing that all my phone’s photos and videos are stored on my hardware and won’t ever touch Google’s servers. I can take a full-fat video without worrying that 600 MB will eat up a bunch of my quota. Nextcloud also offers a wide swathe of plugins for other functionality via its app store.
Over the years I’ve spent a lot of money on 3D models for fantasy RPG miniatures. My collection of models is measured in terabytes. And I have a deep appreciation for the creative work of the artists who make these models. But artists, I think, are not naturally inclined toward robust file organization practices. So my “collection” is really more like a pile.
-
But Manyfold (formerly “VanDAM”) has helped enormously with the process of making that collection usable. Search and preview are the two biggest features. Automated tagging and other organizational features also help a lot.
The XKCD comic above articulates an experience I’ve had more than enough times. Nextcloud helps a lot when I want to share a file with someone else. But Send covers the cases where a friend wants to send me a file, or a friend is asking me how to send a file to another friend. I can just send them the link. I don’t even need to explain how it works; it’s built intuitively enough. And I get some peace of mind knowing that the files are encrypted end-to-end.
This service exists exclusively to let me right click the video file of a gaming highlight, hit “Share”, and send the link to my friends as seamlessly as possible while supporting high-fidelity content. I record my gameplay at 1440p 120 FPS. Check it out:
That last one’s not even 120 FPS, I just love to show it. To be honest though, if YouTube supported scripted/automated uploads I would just use that. But it’s probably for the best that they don’t.
-
Home Assistant: Climate control for the Critter Cove
I think a lot of folks start their self-hosting journey with Home Assistant. It’s a fantastic tool, and it keeps some of your most important data from depending on often-flaky vendors who have little interest in supporting their product unless you’re giving them money for it.
-
My partner and I have four reptiles, several insects, and a hamster whose climates I simply cannot be bothered to adjust manually every hour of the waking day. So I installed Home Assistant, hooked up warm-side and cool-side Govee hygrometer/thermometers, put the heating lamps on TP Link smart dimming plugs, and wrote some automation to keep everybody in their happy temperature range.
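For a flavor of what those automations look like, here’s a minimal sketch of one. The entity names and threshold are illustrative placeholders, not my actual config:

alias: "Warm side heat lamp on when too cold"
trigger:
  - platform: numeric_state
    entity_id: sensor.warm_side_temperature   # Govee thermometer (placeholder entity name)
    below: 85
action:
  - service: switch.turn_on
    target:
      entity_id: switch.warm_side_heat_lamp   # TP-Link smart plug (placeholder entity name)
mode: single

A mirror-image automation turns the lamp back off once the reading climbs past the top of the happy range.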
One day I thought to myself, “do I know how a router works?” The answer was no, so I built one (hardware and configuration) from scratch between the hours of 10 PM and 4 AM to ensure none of my housemates would be disturbed by the requisite internet outage. I deployed the seat-of-my-pants router configuration to “production” overnight and handled about one day’s worth of post-deployment support before everything was seamlessly stable.
-
The hardest part was crimping and terminating every single ethernet cable. At least 20 connections. Woof.
Underpinning every single one of the above services in one way or another is my TrueNAS deployment. It is composed of two hosts: a primary and a backup. Every day, each system takes a differential snapshot of each dataset, and runs a short S.M.A.R.T. test on each disk. Nightly, the most important datasets on the primary are backed up to the backup as an Rsync task. And every week it runs a scrub task to ensure the stored data still checksums correctly.
-
And of course, it runs an SMB server and iSCSI target to facilitate clients and applications interacting with their own little data puddle.
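All of that is configured as TrueNAS periodic tasks rather than hand-written scripts, but conceptually each cycle boils down to something like this (pool, dataset, and device names are placeholders):

# Daily: snapshot datasets and run a short S.M.A.R.T. test per disk
zfs snapshot -r tank/important@auto-$(date +%F)
smartctl -t short /dev/sda

# Nightly: push the most important datasets to the backup host
rsync -a --delete /mnt/tank/important/ backup-host:/mnt/tank/important/

# Weekly: verify checksums across the whole pool
zpool scrub tank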
-
Additional Services: Just for fun
-
In addition to the important services above, I run a handful of services just for fun in my off time.
-
Plex: I wish Jellyfin supported SSO
-
The superior media server, Plex provides a free (as in beer) solution with a beautiful frontend and comprehensive metadata scraping. Plex has set a standard for serving movies and TV that no alternative service has been able to match. It is the unwelcome incumbent no one has mustered the resources to dethrone, even after its breach leaked user data (usernames, emails, and passwords) in 2022.
5eTools: If D&D Beyond was designed by software engineers instead of business boys
-
5eTools is an insanely high-quality, data-driven repository for each and every piece of content ever made for D&D 5th edition. Fortunately, it has a blocklist feature to restrict the visible content to only the stuff I’ve bought the books for.
-
Calibre-web: Frontend for the premiere ebook library manager
Sometimes, rarely, I read a book. Much more frequently, I want to check a section from a book. Calibre(-web) affords me access to my entire library of books and ebooks from a web browser. Frustratingly, the owner of the project has decided not to implement generic OAuth2/OIDC support. But open-source, uh, finds a way.
-
Minecraft: Geoff "itzg" Bourne is a blessing
-
Hosting game servers for my friends got me into this stuff, and has been a throughline of my life for over 15 years.
And Minecraft has been a consistent presence in that domain. Today that task is easier and more polished than ever.
-
Two Docker images, itzg/mc-router and itzg/docker-minecraft-server, have handled every Minecraft server I’ve hosted since late 2020. The configuration is simple and declarative. The best part is the reverse proxying. In 2015 I would head to WhatsMyIP.org, copy the number at the top, and send it to my friends. Then they would manually type it into the connect dialog (copy-paste can be challenging), type something wrong, get a connection error, and call me on Google Hangouts. We’d eventually get it figured out, but now I can just say “It’s e9.jafner.net” and that seems to stick a lot better.
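For flavor, here’s a minimal sketch of that pairing in Compose. It is not my exact config, and the MAPPING syntax for mc-router is from memory, so double-check it against the image docs:

services:
  mc-survival:
    image: itzg/minecraft-server
    environment:
      EULA: "TRUE"        # required by the image
      TYPE: "PAPER"       # server flavor
      VERSION: "1.20.4"   # example version pin
      MEMORY: "4G"
    volumes:
      - ./data/survival:/data
  router:
    image: itzg/mc-router
    ports:
      - "25565:25565"     # the only port exposed to players
    environment:
      # hostname -> backend mapping (assumed syntax; verify against the mc-router docs)
      MAPPING: "e9.jafner.net=mc-survival:25565"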
-
Admin Services: Help me handle all this
-
Nothing since has given me the same high as seeing the green padlock next to jafner.net for the first time. After years of typing IPs, memorizing ports, skipping “Your connection is not private” pages, and answering “What’s the IP again?”, that green padlock was more gratifying than the $60k piece of paper framed on my wall.
-
Below are the tools and services I use to make everything else work properly.
In a typical service definition, lines #1-2 configure the service to attach to the Docker network Traefik is monitoring.
Lines #3-5 tell Traefik what (sub)domain(s) that service should serve, and tell it to provision a fresh LetsEncrypt certificate.
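The snippet those line references point to isn’t reproduced here, so below is a representative sketch of what such a service definition looks like. The service name, hostname, network name, and certificate resolver name are all assumptions, and the original’s exact line numbering may differ:

services:
  myapp:
    image: example/myapp              # placeholder
    networks:                         # "lines 1-2": join the network Traefik is watching
      - web
    labels:                           # "lines 3-5": declare the host rule and request a cert
      - traefik.enable=true
      - traefik.http.routers.myapp.rule=Host(`myapp.jafner.net`)
      - traefik.http.routers.myapp.tls.certresolver=letsencrypt

networks:
  web:
    external: true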
-
-
No wildcard certs.
-
No manual cert requests.
-
No services are exposed by default.
-
-
In addition to handling that core functionality, it also offers “middlewares” to handle additional functionality, like forwardauth, which helps me protect services that don’t support OAuth2/OIDC, such as WG-Easy.
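As a rough sketch, gating a service behind forwardauth is just a couple more labels (the middleware name and auth endpoint URL below are assumptions):

labels:
  - traefik.http.middlewares.sso-auth.forwardauth.address=https://auth.jafner.net/verify   # placeholder endpoint
  - traefik.http.routers.wg-easy.middlewares=sso-auth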
-
I have never been tempted to look for alternatives. I struggle to imagine how a reverse proxy could be better for my use-case without overstepping its role.
It’s easy to let your password manager’s “local accounts” folder slowly grow to dozens or hundreds of credentials as services are trialed, decommissioned, reinstalled, and troubleshooted (troubleshot?). Further, supporting basic account information updates for users that aren’t me is non-trivial. How many SMTP submission API keys would I need to support each of my services sending their own “Recover your account password” emails?
-
Keycloak provides much-needed consolidation of account management. Just like a real website, anyone who wants to do something with one of my services (e.g. open a Gitea issue) can walk through the familiar account creation process. Give a first and last name, email, username, and password. You’ll get a verification email in your inbox from "Keycloak Admin" <noreply@jafner.net>. Click the link, go back to the page you were trying to access, and boom, you can do stuff. Stuff I don’t want you to be able to do is still gated behind manual account approval. For example, you can’t just create an account and start uploading files to Nextcloud. Oh, and it supports 2FA.
-
One day I’ll be able to manage every single Jafner.net application’s ID and access via Keycloak, but a few have held out. That’s an entire article in itself.
Until then I’ll be happy that most applications and services either support native OAuth2 or OIDC, or are single-user and can simply be gated behind Traefik forwardauth.
-
Wireguard: Road warrior who needs to reboot the Minecraft server
Every homelabber is faced with the question of how to administrate their server when they aren’t sitting (or standing) at their desk at home. It’s a great question; you’re dipping your toes into security posture, and the constant tension between security and ease-of-access. Using SSH keys instead of passwords is a no-brainer, but do you also configure SSH 2FA? Most folks don’t allow SSH traffic directly through the router, but how do you build and configure your VPN for SSHing into your server?
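For me, that VPN is WireGuard. A road-warrior client config is small enough to sketch from memory; every key, address, and the endpoint below is a placeholder rather than my real value:

[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
DNS = 10.8.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.jafner.net:51820
AllowedIPs = 10.8.0.0/24, 192.168.1.0/24   # route only the VPN and home subnets, not all traffic
PersistentKeepalive = 25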
-
The steps I take to secure my SSH hosts are detailed here, but I’ll be digging into that more in another article. In brief (a minimal sshd_config sketch follows the list):
-
-
SSH keys are matched to a user and host. E.g. Joey_Desktop, or Joey_Phone.
-
Authorized keys are added only as needed. There is no template. Nothing is included by default.
-
Each SSH server is configured to require pubkey authentication and disable password authentication.
-
Each SSH server is configured to require 2FA via the google-authenticator PAM module.
-
The router is not configured to port-forward any SSH traffic. Accessing SSH requires VPNing.
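The sketch mentioned above: a minimal set of directives implementing the third and fourth items. Exact directive names vary slightly between OpenSSH versions and distros:

# /etc/ssh/sshd_config (sketch)
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication yes
AuthenticationMethods publickey,keyboard-interactive   # require a key AND a TOTP code
UsePAM yes

# /etc/pam.d/sshd (sketch)
auth required pam_google_authenticator.so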
-
-
Lots of iteration led to this configuration. I’m sure I’ll write it out at some point.
-
Grafana, Prometheus, and Uptime-kuma: Observability before I knew what that meant
Before I knew what “Site Reliability Engineering” was, I had a Grafana instance (using ye olde Telegraf and InfluxDB) showing me pretty graphs. I was exposed to the timeseries database paradigm, and how to query it in a useful way. Later down the line I integrated Loki to pull all my Docker container logs into my one pretty visualization platform. I built dashboards for monitoring host health, troubleshooting specific issues, and status pages; I built alert policies to send notifications via Discord and email; and I exported data from practically every service. And then things settled down, and that data-analytics muscle began to atrophy. My environment was stable. So I changed tack.
-
Uptime-kuma is simple, beautiful, and does all the things I need. HTTP and ping-based uptime monitoring, outage notifications (via email and Discord), and status pages.
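Deployment is about as simple as it gets; a minimal sketch (the volume path is an assumption):

services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    ports:
      - "3001:3001"
    volumes:
      - ./data:/app/data
    restart: unless-stopped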
-
All this lets me sleep easy knowing that if any of my services go down, I’ll get a Discord notification. When someone asks me “Hey, is <service> down?” I can answer confidently, “Nope, works on my system.”
-
Closing thoughts, and looking forward
-
I’ve not written much about my lab before. It’s been a challenge to resist rambling about all the challenges and iterations I walked through to get the lab to the state it’s in today. And even just in the process of writing this the last couple days I’ve come to realize a few low-hanging fruit improvements I could make.
-
While I’m proud of all the things I’ve taught myself in this process, I could not have done it without hundreds of thousands of hours of freely-contributed projects, Q&As, documentation, tutorials, and every other kind of support.
-
In addition to the hundreds of individual project founders, maintainers, and contributors, I am deeply appreciative of the creators from whom inspiration flowed freely. I’m sure there’s room here somewhere for an Appendix N: Inspirational Reading article.
-
-
Thanks for reading. If you want to contact me to chat about homelabbing, D&D, tech writing, gaming, A/V streaming tech, or because you think I can help you solve a problem, email me at joey@jafner.net or use any of my other socials.
Below is a reverse chronological listing of my employment history.
-
[2021-11 to 2024-02] Site Reliability Engineer @ American Eagle
-
From November 2021 through February 2024, I worked on a long-term contract with American Eagle to design and implement observability tooling for cloud infrastructure and apps.
-
-
Spearheaded implementation of Bamboo pipelines to automate deployment of New Relic/Terraform observability codebase.
-
-
That reduced mean-time-to-deployment for our commits by more than 10x.
-
Increased deployment frequency from ~weekly to ~twice daily.
-
-
-
Collaborated with internal development teams to design human-optimized observability and alerting toolkits.
-
Authored and maintained documentation for our New Relic/Terraform codebase.
-
-
[2021-08 to 2021-11] DevOps Engineer @ UpWork Contractor
-
From August through November 2021, I collaborated with solo- and small-team projects to build automation pipelines, guide offshore software development teams toward client needs, and secure cloud resources to improve security posture. I also implemented data resilience and disaster recovery protocols with attention to ease-of-use and documentation. If you have an UpWork client account, you can view my engineering profile here.
-
[2021-03 to 2021-08] Technical Writer @ UpWork Contractor
-
From March through August 2021, I wrote technical documentation and articles for clients on UpWork. I loved working with open-source project developers to refine and articulate their vision. If you have an UpWork client account, you can view my technical writing profile here.
-
My first project posted publicly. Initially I didn’t expect to post it, but I didn’t see any other solutions to the problem I was having, so I figured my solution could inspire someone else to do it better.
-
Problem: Physical Volume Knobs for Applications
-
While using Windows, I installed an application called MIDI Mixer to map the physical volume knobs of my Behringer X-Touch Mini (and the accompanying mute and media control buttons) to individual applications on my PC. The most common use-case for me was turning my Spotify music up or down without alt-tabbing from my Overwatch game. My microphone was also mapped into the software.
-
It’s just nice to have a physical interface for these things. But when I tried switching to Pop!_OS a couple years ago, I first inquired at /r/linuxaudio about replicating that functionality on Linux. While I was able to find a similar project, nothing really scratched the itch, so I dug in to build my own.
-
Solution: PulseAudio and Xdotool in a Bash Script
-
I built pamidi. We’ll break it down here, with the benefit of hindsight.
-
Dependencies
-
Two utilities are critical to the function of the script: xdotool and pacmd. Additionally, we presume you’re running SystemD with PulseAudio.
-
-
xdotool is used to get the current focused window. This lets us drastically simplify the UX of mapping the volume knobs to specific applications.
-
pacmd is used to change volume and toggle mute for PulseAudio streams.
-
-
Code Breakdown
-
Let’s take a look at the code. The source is annotated with comments, but we’ll just look at the code here.
-
Initialize the Service
-
initialize(){
    echo "Initializing"
    echo "Checking for xdotool"
    if ! hash xdotool &> /dev/null; then
        echo "xdotool could not be found, exiting"
        exit 2
    else
        echo "xdotool found"
    fi
    echo "Waiting for pulseaudio service to start..."
    while [[ $(systemctl --machine=joey@.host --user is-active --quiet pulseaudio) ]]; do
        echo "Pulseaudio service not started, waiting..."
        sleep 2
    done
    echo "Waiting for X-TOUCH MINI to be connected..."
    while [[ ! $(lsusb | grep "X-TOUCH MINI") ]]; do
        echo "X-TOUCH MINI not connected. Waiting..."
        sleep 2
    done
    col_1_app_pid=-1
    col_2_app_pid=-1
    col_3_app_pid=-1
    col_4_app_pid=-1
    col_5_app_pid=-1
    col_6_app_pid=-1
    col_7_app_pid=-1
    col_8_app_pid=-1
    assign_profile_1
    print_col_app_ids
    echo "Initialized pamidi"
    notify-send "Initialized pamidi"
}
-
-
First we check to ensure xdotool is installed. hash is a weird choice for checking the presence of a command. Would probably use which today.
-
Next we wait until we see that the PulseAudio SystemD unit’s status is “active”. But, uh… I’m not sure why I needed that --machine=joey@.host flag.
-
We wait until lsusb reports the X-TOUCH MINI as connected.
-
We set up our 8 variables for storing application PIDs.
-
We invoke a yet-to-be-implemented function assign_profile_1. It does nothing.
-
We print the PIDs bound to each knob to the console. And then we send an OS notification that the service is initialized.
We take two positional arguments for this function: application PID, and volume delta.
-
-
The use of volume delta is the primary differentiator between Mackie mode and standard mode. In Mackie mode, turning the knob returns a change in volume. Values from 0-63 represent -63 through -1 and values from 64-127 represent +0 through +63. We use this to set a volume_change variable.
-
-
-
In order to change volume, we need the sink ID matching the PID for the application we’ve bound to a particular knob. We do this in a very roundabout way.
-
-
We get a list of all PulseAudio sink-inputs (playback streams) with their detailed properties. We need the index (sink ID) and application process ID (our PID).
-
We do some pipe gymnastics to convert that to an array of tuples in the form <sink-id> <application-pid>. Then we iterate over that list to match the provided application PID.
-
-
-
Lastly, we use pactl set-sink-input-volume to change the volume.
-
-
$stream_id determines which sink-input is affected. Like 4.
-
$vol_change is a string of a signed integer with magnitude 0-63. Like -12 or +62
-
-
-
-
Note: In Standard mode, we use the same pactl command, but the volume argument is prepended with a sign to increment/decrement the volume, rather than set it.
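Putting those steps together, here’s a from-memory sketch of the Mackie-mode version of the function. It is not the verbatim source; the exact output parsing and whether the delta is passed as a percentage differ in the real script:

change_volume(){
    app_pid=$1   # PID bound to the knob
    data2=$2     # raw Mackie value: 0-63 is negative, 64-127 is positive
    if [ "$data2" -lt 64 ]; then
        vol_change="-$((64 - data2))"
    else
        vol_change="+$((data2 - 64))"
    fi
    # Walk the sink-input list and match the application PID to its stream ID
    while read -r stream_id pid; do
        if [ "$pid" = "$app_pid" ]; then
            pactl set-sink-input-volume "$stream_id" "${vol_change}%"
        fi
    done < <(pacmd list-sink-inputs | awk '/index:/ {id=$2} /application.process.id/ {gsub("\"","",$3); print id, $3}')
}

As the Future Work section admits, listing every sink-input on every knob tick is wasteful, but it keeps the lookup stateless.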
-
Toggle Mute, Mute On, and Mute Off
-
Instead of posting the full functions, which are highly repetitive, we’ll just look at how they differ from each other.
-
We follow the same process as in change volume to get the stream ID from the PID.
This function is not used anywhere. It requires that $output contain the raw response from pactl list-sink-inputs. It creates an array of tuples in the form <pid> <application binary>, and then prints the name of the application binary matching the PID passed to the function as the first positional argument.
This function is called when we press down on one of the knobs. It binds the currently focused application to that knob. Very nice UX, and relatively simple to implement. It takes the index of the knob pressed as its one positional argument.
-
-
Use xdotool to get the pid and name of the currently active window.
-
Assign the window PID to that knob, and send an OS notification to let the user know what happened.
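A from-memory sketch of that function (the real script may use eval or printf -v instead of declare -g for the indirect assignment):

bind_application(){
    knob=$1                                          # which knob (1-8) was pressed
    window_id=$(xdotool getactivewindow)             # X11 ID of the focused window
    window_pid=$(xdotool getwindowpid "$window_id")
    window_name=$(xdotool getwindowname "$window_id")
    declare -g "col_${knob}_app_pid=$window_pid"     # store the PID in the matching col_N_app_pid
    notify-send "pamidi" "Bound knob $knob to $window_name (PID $window_pid)"
}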
-
-
Main: Mackie and Standard
-
main_mackie(){
    aseqdump -p "X-TOUCH MINI" | \
    while IFS=" ," read src ev1 ev2 ch label1 data1 label2 data2 rest; do
        case "$ev1 $ev2 $data1 $data2" in
            # column 1
            "Note on 32"* ) bind_application 1 ;; # knob press
            "Note on 89"* ) toggle_mute $col_1_app_pid ;; # top button
            "Note on 87"* ) print_col_app_ids ;; # bottom button
            "Control change 16"* ) change_volume $col_1_app_pid $data2 ;; # knob turn

            # column 2
            "Note on 33"* ) bind_application 2 ;; # knob press
            "Note on 90"* ) toggle_mute $col_2_app_pid ;; # top button
            "Note on 88"* ) ;; # bottom button
            "Control change 17"* ) change_volume $col_2_app_pid $data2 ;; # knob turn

            # column 3
            "Note on 34"* ) bind_application 3 ;; # knob press
            "Note on 40"* ) toggle_mute $col_3_app_pid ;; # top button
            "Note on 91"* ) media_prev ;;
            "Control change 18"* ) change_volume $col_3_app_pid $data2 ;; # knob turn

            # column 4
            "Note on 35"* ) bind_application 4 ;; # knob press
            "Note on 41"* ) toggle_mute $col_4_app_pid ;; # top button
            "Note on 92"* ) media_next ;;
            "Control change 19"* ) change_volume $col_4_app_pid $data2 ;; # knob turn

            # column 5
            "Note on 36"* ) bind_application 5 ;; # knob press
            "Note on 42"* ) toggle_mute $col_5_app_pid ;; # top button
            "Note on 86"* ) ;;
            "Control change 20"* ) change_volume $col_5_app_pid $data2 ;; # knob turn

            # column 6
            "Note on 37"* ) bind_application 6 ;; # knob press
            "Note on 43"* ) toggle_mute $col_6_app_pid ;; # top button
            "Note on 93"* ) media_stop ;;
            "Control change 21"* ) change_volume $col_6_app_pid $data2 ;; # knob turn

            # column 7
            "Note on 38"* ) bind_application 7 ;; # knob press
            "Note on 44"* ) toggle_mute $col_7_app_pid ;; # top button
            "Note on 94"* ) media_play_pause ;;
            "Control change 22"* ) change_volume $col_7_app_pid $data2 ;; # knob turn

            # column 8
            "Note on 39"* ) bind_application 8 ;; # knob press
            "Note on 45"* ) toggle_mute $col_8_app_pid ;; # top button
            "Note on 95"* ) ;;
            "Control change 23"* ) change_volume $col_8_app_pid $data2 ;; # knob turn

            # layer a and b buttons
            "Note on 84"* ) assign_profile_1 ;;
            "Note on 85"* ) assign_profile_2 ;;
        esac
    done
}
-
This one took a lot of trial and error, and this function is where we would need to implement profiles for different devices.
-
-
We use aseqdump to attach to the ALSA output stream of the “X-TOUCH MINI” device (-p "X-TOUCH MINI").
-
We read each line in a while loop, and set variables according to the format used by the X-Touch Mini in aseqdump.
-
-
The sequence $ev1 $ev2 $data1 is used to determine which physical interaction was used. Its values look like “Note on 36” or “Control change 18”, which represent Knob 5 Press and Knob 3 Turn, respectively.
-
For knob turn interactions, we pass the $data2 value to the change_volume function, otherwise it is discarded.
-
-
-
-
Note: The difference between Mackie and standard here is the mapping between $data1 and the physical interaction. E.g. Knob 5 Press in Mackie mode sends “Note on 36”, and in standard mode it sends “Control change 13 127”.
-
Future Work
-
This script was amateurish, and today I don’t need the functionality it provides. It’s unlikely I will continue to work on it, but as an exercise, there are a few layers of improvements I would make:
-
-
Remove --machine=joey@.host from the PulseAudio service up check.
-
Improve the tragic state of optimization for the change volume functions. We do not need to get the entire list of running audio sinks every time we increment or decrement the volume.
-
Eliminate the repetitiveness of the change volume and mute functions.
-
Map interactions to ALSA sequence entries more programmatically (e.g. "Note on"* ) bind_application $(($data1 - 31)) ;;). This can apply to both Mackie and standard mode.
-
Modularize functions to make it more portable between input devices.
-
Rewrite in a proper programming language. Python, or Go, or Rust.
-
-
Conclusion
-
This was a fun project. I learned a bit about MIDI and ALSA, a bit about Bash and Systemd, and I built something useful.
-
It was received kindly, despite its vast room for improvement. And some folks are still encountering this need, so maybe it’s worth revisiting.
-
Today, I use a GoXLR Mini with GoXLR-on-Linux/GoXLR-utility to get much of the functionality I was wanting. It’s missing some things (like dynamically rebinding faders to applications), but has some nice features pamidi could never replicate, such as microphone audio processing.