#19 Delete legacy docs, extract useful content into other docs, delete homelab/docs/

Joey Hafner 2024-10-28 18:01:52 -07:00
parent e5109b1f9b
commit aa1527ccb5
No known key found for this signature in database
48 changed files with 264 additions and 1542 deletions

View File

@ -3,13 +3,17 @@ A monorepo for all my projects and dotfiles. Hosted on [my Gitea](https://gitea.
## Map of Contents
| Project | Summary | Path |
|:-------------------:|:-------:|:----:|
| homelab | Configuration and documentation for my homelab. | [`homelab/`](homelab/) |
| nix | Nix flake defining my PC & k3s cluster configurations | [`nix`](nix/) |
| Jafner.dev | Hugo static site configuration files for my [Jafner.dev](https://jafner.dev) blog. | [`blog/`](blog/) |
| razer-bat | Indicate Razer mouse battery level with the RGB LEDs on the dock. Less metal than it sounds. | [`projects/razer-bat/`](projects/razer-bat/) |
| 5etools-docker | Docker image to make self-hosting 5eTools a little bit better. | [`projects/5etools-docker/`](projects/5etools-docker/) |
| 5eHomebrew | 5eTools-compatible homebrew content by me. | [`projects/5ehomebrew/`](projects/5ehomebrew/) |
| archive | Old, abandoned, unmaintained projects. Fun to look back at. | [`archive/`](archive/) |
| Project | Summary |
|:----------------------:|:-------:|
| [dotfiles](/dotfiles/) | Configuration and documentation for my PCs. |
| [homelab](/homelab/) | Configuration and documentation for my homelab. |
| [projects](/projects/) | Self-contained projects in a variety of scripting and programming languages. |
| [sites](/sites/) | Static site files. |
| [.gitea/workflows](/.gitea/workflows/) & [.github/workflows](/.github/workflows/) | GitHub Actions workflows running on [Gitea](https://gitea.jafner.tools/Jafner/Jafner.net/actions) and [GitHub](https://github.com/Jafner/Jafner.net/actions), respectively. |
| [.sops](/.sops/) | Scripts and documentation implementing [sops](https://github.com/getsops/sops) to securely store secrets in this repo. |
## LICENSE: MIT License
> See [LICENSE](/LICENSE) for details.
## Contributing
Presently this project is a one-man operation with no external contributors. All contributions will be addressed in good faith on a best-effort basis.

View File

@ -15,3 +15,56 @@ This directory contains the files that compose my homelab.
| silver-hand | Documentation and Terraform configuration for the `silver-hand` local Kubernetes cluster | [`silver-hand`](/homelab/silver-hand/) |
| stacks | Maximally independent Docker compose files for various services | [`stacks`](/homelab/stacks/) |
| wizard | Documentation, configuration, and scripts for the `wizard` VyOS host | [`wizard`](/homelab/wizard/) |
## Configure a New Host
### NixOS
- Download the [NixOS ISO installer](https://nixos.org/download/#nixos-iso).
- Refer to the [NixOS Manual](https://nixos.org/manual/nixos/stable/) for further instructions.
## Security
These are the general security principles followed throughout this project:
- Never lean on security through obscurity.
- Minimize friction induced by security. Friction induces laziness, which inevitably circumvents the original security system.
- Understand that security practices cannot *eliminate* vulnerability of the system, only make it *too expensive* to attack.
- Tie less important secrets to more important secrets, but not vice-versa.
Further, we have some tool-specific guidelines.
### Securing SSH
Apply these guidelines when configuring the SSH server for a local host or VPS, or when provisioning a new SSH keypair:
- Generate one private key for each user-machine pair.
- Do not automate dissemination of pubkeys. Always install pubkeys manually.
- Disable password-based authentication.
- Disable root login.
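These guidelines translate to an `sshd_config` fragment like the following (a sketch; the drop-in path assumes your OpenSSH sources `/etc/ssh/sshd_config.d/*.conf`):
```
# /etc/ssh/sshd_config.d/hardening.conf
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin no
```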
### Docker Compose
To keep secrets out of `docker-compose.yml` files, we use the following [`env_file`](https://docs.docker.com/reference/compose-file/services/#env_file) snippet:
```yaml
env_file:
- path: ./<service>.secrets
required: true
```
We then write the required secrets in [dotenv format](https://www.dotenv.org/docs/security/env.html) to a file at `./<service>.secrets`. For example:
```dotenv
API_KEY=zj1vtmUNGIfHJBfYsDINr8AVN5on1Hy0
# ROOT_PASSWORD=changeme
ROOT_PASSWORD=0gavJVrsv89bmdDeJXAcI1eCvQ4Um8Hy
```
When these files are staged in git, our [.gitattributes](/.gitattributes) runs the `sops` filter against any file matching one of its described patterns.
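The actual patterns live in [.gitattributes](/.gitattributes); a hypothetical minimal setup might look like the following (the filter name and sops flags here are illustrative assumptions, not this repo's exact configuration):
```
# .gitattributes: encrypt any *.secrets file on stage via the "sops" clean filter
*.secrets filter=sops

# One-time local setup (illustrative):
#   git config filter.sops.clean 'sops --encrypt --input-type dotenv --output-type dotenv /dev/stdin'
```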
### Web Services
**If a service supports OAuth2**, we configure [Keycloak SSO]() for that service.
When feasible, we shift responsibility for authentication and authorization to Keycloak. This depends on each service implementing OAuth2/OpenID Connect.
**If a service does not support OAuth2**, but it does support authentication via the `X-Forwarded-User` header, we use [mesosphere/traefik-forward-auth](https://github.com/mesosphere/traefik-forward-auth) as a Traefik middleware. This middleware restricts access to the service regardless of whether the service understands the `X-Forwarded-User` header, which makes it useful for compatible multi-user applications *and* single-user applications.
**If a service should not be internet-accessible**, we use Traefik's [`ipWhiteList`](https://doc.traefik.io/traefik/middlewares/http/ipwhitelist/) middleware to restrict access to LAN IPs only.
**Otherwise**, some services *absolutely require* separate authentication (e.g. Plex, n8n). In such cases, we create user credentials as we would for any internet service, using our password manager.
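For the LAN-only case, the middleware can be declared and attached with container labels; a minimal sketch (the middleware name, router name, and subnet are placeholders):
```yaml
labels:
  # Define a middleware that only admits LAN source IPs
  - "traefik.http.middlewares.lan-only.ipwhitelist.sourcerange=192.168.1.0/24"
  # Attach it to the service's router
  - "traefik.http.routers.myservice.middlewares=lan-only@docker"
```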

View File

@ -1,4 +0,0 @@
# Contributing
Please open a new issue to clarify any questions.
As this repository is tied to my personal lab, I will likely not be accepting pull requests for code in this repository. However, pull requests for documentation are appreciated.

View File

@ -1,68 +0,0 @@
# Configure a New Host
## Prerequisites
- Fresh Debian 11+ installation on x86 hardware.
- SSH access to host.
## Create Admin User
1. Get su perms. Either via `sudo`, `su -`, or by logging in as the root user.
2. `adduser admin` to create the non-root admin user.
3. `usermod -aG sudo admin` to add the new user to the sudo group.
4. `sudo visudo` and append this line to the end of the file: `admin ALL=(ALL) NOPASSWD:ALL` to enable passwordless sudo.
After these steps, you can `sudo su admin` to switch to the new user account.
References:
- https://www.cyberciti.biz/faq/add-new-user-account-with-admin-access-on-linux/
- https://www.cyberciti.biz/faq/linux-unix-running-sudo-command-without-a-password/
## Set the Hostname
1. `sudo hostnamectl set-hostname <hostname>` to set the hostname.
2. `sudo nano /etc/hosts` and edit the old value for `127.0.1.1` to use the new hostname.
## Configure Secure SSH
1. `mkdir -p /home/admin/.ssh && echo "<insert pubkey here>" >> /home/admin/.ssh/authorized_keys` Add pubkey to authorized_keys. Make sure to place the correct SSH pubkey in the command before copying.
2. `sudo apt install libpam-google-authenticator` to install the Google 2FA PAM.
3. `google-authenticator` to configure the 2FA module. Use the following responses when prompted:
* Do you want authentication tokens to be time-based? `y`
* Do you want me to update your "/home/$USER/.google_authenticator" file? `y`
* Do you want to disallow multiple uses of the same authentication token? `y`
* Do you want to do so? `n` (refers to increasing time skew window)
* Do you want to enable rate-limiting? `y`. Then enter the TOTP secret key into your second authentication method and save the one-time backup recovery codes.
4. `sudo nano /etc/pam.d/sshd` to edit the PAM configuration, and add this line to the top of the file `auth sufficient pam_google_authenticator.so nullok`
5a. `sudo nano /etc/ssh/sshd_config` to open the SSH daemon config for editing. Ensure the following directives are set:
* `PubkeyAuthentication yes`
* `AuthenticationMethods publickey,keyboard-interactive`
* `PasswordAuthentication no`
* `ChallengeResponseAuthentication yes`
* `UsePAM yes`
5b. `echo $'PubkeyAuthentication yes\nAuthenticationMethods publickey,keyboard-interactive\nPasswordAuthentication no\nChallengeResponseAuthentication yes\nUsePAM yes' | sudo tee /etc/ssh/sshd_config.d/ssh.conf` to perform the above as a one-liner. Requires a version of OpenSSH/Linux that supports sourcing sshd config from the `/etc/ssh/sshd_config.d/*.conf` path.
6. `sudo systemctl restart sshd.service` to restart the SSH daemon.
## Install Basic Packages
1. `sudo apt install curl nano inxi git htop`
### Install Docker
1. `curl -fsSL https://get.docker.com | sudo sh` This is the most convenient and least safe way to do this. If this script is ever compromised, we'd be fucked.
2. `sudo systemctl enable docker` to enable the Docker service.
3. `sudo usermod -aG docker $USER` to add the current user (should be non-root admin) to docker group.
4. `logout` to relog and apply the new permissions.
## Clone the Homelab Repo
1. Create a new Gitlab personal access token for the device at [Personal Access Tokens](https://gitlab.jafner.net/-/profile/personal_access_tokens). Should be named like `warlock` and have the following scopes: `read_api`, `read_user`, `read_repository`.
2. `mkdir ~/homelab ~/data && cd ~/homelab/ && git init && git config core.sparseCheckout true && git config pull.ff only` to init the repository with sparse checkout enabled.
3. `git remote add -f origin https://<pat-name>:<pat-value>@gitlab.jafner.net/Jafner/homelab.git` to add the repo with authentication via read-only personal access token. NOTE: Make sure to replace `<pat-name>` with the name of the personal access token, and replace `<pat-value>` with the key for the personal access token.
4. `echo "$HOSTNAME/" > .git/info/sparse-checkout` to configure sparse checkout for the host.
5. `git checkout main` to switch to the main branch with the latest files.

View File

@ -1,27 +0,0 @@
```mermaid
graph TB;
Upstream["dns.google (8.8.8.8; 8.8.4.4)"]
Clients["Clients [192.168.1.0/24]"]
Router["VyOS Router [192.168.1.1]"]
PiHoles["PiHole [192.168.1.22,192.168.1.21]"]
BlackHole["Black Hole"]
Clients --"First connect"--> Router
Router --"Sends DHCP with DNS=192.168.1.22,192.168.1.21"--> Clients
Clients --"Subsequent requests"--> PiHoles
Router ----> PiHoles
PiHoles --"Blacklisted domains"--> BlackHole
PiHoles --"Valid requests"--> Upstream
```
Clients connecting to the local network for the first time will receive as part of the DHCP negotiation ([code 6](https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol#Information)) the domain name servers' addresses. These addresses will correspond to the IP addresses of the PiHole servers (currently RasPis at `192.168.1.22` and `192.168.1.21`).
From that point, clients will send simultaneous DNS requests to both Piholes and use the first response received. This means the PiHoles will be able to track requests per-client. However, this splits tracking data between the two servers, so it may be difficult to visualize a complete picture.
A client can be manually configured to request DNS resolution from the router, which will forward requests to the PiHoles.
DNS requests to the PiHole will be checked against the [configured adlists](https://pihole.jafner.net/groups-adlists.php). If matched, the request will be blocked. If a user is attempting to access a website that is blocked, the request should quickly resolve to a Domain Not Found error. It will look like this:
![Chrome](/docs/img/pihole_domain_blocked_chrome.png)
![Firefox](/docs/img/pihole_domain_blocked_firefox.png)
If the request does not match any adlists, it will be passed upstream to Google `8.8.8.8` (or backup `8.8.4.4`).
Presently, the PiHole does not cache any requests.
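To check resolution against a specific PiHole directly (assuming `dig` is available and the addresses above), you can query it explicitly; a blocked domain should return `0.0.0.0` under PiHole's default blocking mode (the blocked domain below is a placeholder):
```sh
dig +short jafner.dev @192.168.1.22            # a normal domain resolves via upstream
dig +short some-blocked.example @192.168.1.22  # a domain on the adlists returns 0.0.0.0
```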

View File

@ -1,34 +0,0 @@
```mermaid
flowchart TD
barbarian
druid
fighter
monk
wizard
cloudflare
cloudflare["Cloudflare DNS"] --DNS *.jafner.tools--> druid["Druid: High-uptime, low data services"]
cloudflare --DNS *.jafner.net----> wizard["Wizard: Routing with VyOS"]
wizard --Port forward :80,:443--> fighter["Fighter: Data-dependent services"]
barbarian["Barbarian: Primary NAS"] --Rsync backup--> monk["Monk: Backup NAS"]
druid --Docker--> 5eTools["5eTools: D&D 5th Edition Wiki"]
druid --Docker--> Gitea["Gitea: This Git server!"]
druid --Docker--> Uptime-Kuma["Uptime-Kuma: Synthetic monitoring and alerting"]
druid --Docker--> Vaultwarden["Vaultwarden: Self-hosted Bitwarden server"]
druid --Docker--> Wireguard["Wireguard: Performant VPN"]
fighter --Docker--> Autopirate["Autopirate: Stack of applications for downloading Linux ISOs"] <--SMB--> barbarian
fighter --Docker--> Calibre-web["Calibre-web: Ebook library frontend"] <--SMB--> barbarian
fighter --Docker--> Keycloak["Keycloak: SSO Provider"]
fighter --Docker--> Minecraft["Minecraft Servers"] <--iSCSI--> barbarian
fighter --Docker--> Grafana["Grafana, Prometheus, Uptime-Kuma"]
fighter --Docker--> Nextcloud["Nextcloud: Cloud drive and office suite"] <--iSCSI--> barbarian
fighter --Docker--> Plex["Plex: Media library frontend"] <--SMB--> barbarian
fighter --Docker--> Qbittorrent["Qbittorrent: Torrent client"] <--SMB--> barbarian
fighter --Docker--> Send["Send: Self-hosted Firefox Send"] <--iSCSI--> barbarian
fighter --Docker--> Stash["Stash: Linux ISO frontend"] <--SMB--> barbarian
fighter --Docker--> Unifi["Unifi controller"]
fighter --Docker--> Vandam["Manyfold: 3D Asset library manager"] <--SMB--> barbarian
fighter --Docker--> Wireguard2["Wireguard: Performant VPN"]
```

View File

@ -1,21 +0,0 @@
# Connect a service to the noreply@jafner.net service account
| Key | Value |
|:---:|:-----:|
| From Address | noreply@jafner.net |
| From Name | No Reply |
| Protocol | SMTP |
| Mail Server | smtp.gmail.com |
| Mail Server Port | 465 |
| Security | SSL (Implicit TLS) |
| SMTP Authentication | Yes |
| Username | noreply@jafner.net |
| Password | *Create a unique Application Password (see below)* |
## Create an Application Password
1. To get an Application Password, navigate to the [Google Account Console -> Security](https://myaccount.google.com/u/2/security), then click "App passwords".
2. Under the "Select device" drop-down menu, select "Other (custom name)" and type the name of the service that will use the password.
3. Copy the yellow-highlighted App password. Use it for the desired service.
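To sanity-check the SMTP settings above (implicit TLS on port 465), you can open a raw connection with `openssl`; this is just a connectivity check, not part of the Google setup:
```sh
# Expect the Gmail certificate chain, then a "220 smtp.gmail.com ESMTP" greeting
openssl s_client -connect smtp.gmail.com:465 -quiet
```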
## References
1. [Google Support - Check Gmail through other email platforms](https://support.google.com/mail/answer/7126229?hl=en#zippy=%2Cstep-check-that-imap-is-turned-on%2Cstep-change-smtp-other-settings-in-your-email-client)

View File

@ -1,33 +0,0 @@
# Goal
Spin up a Git server with a greater feature set than Gitea.
Specifically, I want:
- Integrated CI/CD. I would prefer a platform that comes with a 1st party CI/CD solution, rather than plugging in a 3rd party solution.
- Container/image registry. Building a locally-hosted registry for images enables better caching.
- Enterprise-competitive platform. Getting experience with a platform that competes with other enterprise SCM solutions is more valuable than something designed for a smaller scale.
# Plan
1. Create the host mount points for the docker volumes: `mkdir -p ~/docker_data/gitlab/data ~/docker_data/gitlab/logs ~/docker_config/gitlab/config`
2. Import the default GitLab configuration from [the docs](https://docs.gitlab.com/ee/install/docker.html#install-gitlab-using-docker-compose).
3. Customize the compose file:
1. `hostname: gitlab.jafner.net`
2. change the `external_url` under the `GITLAB_OMNIBUS_CONFIG` env var to `https://gitlab.jafner.net`
3. Add the `gitlab_rails['gitlab_shell_ssh_port'] = 2229` configuration line under `GITLAB_OMNIBUS_CONFIG` with a new SSH port
4. Remove http and https port bindings. Move host SSH port binding to a higher port.
5. Change the volume bindings to match my conventions (`DOCKER_DATA` instead of `GITLAB_HOME`)
6. Change the docker compose version to `'3.3'`
7. Add Traefik labels to enable TLS.
4. Run the file and test.
5. Troubleshoot issues.
6. GOTO 4.
7. Import Gitea repos
8. Move Gitea from `git.jafner.net` to `gitea.jafner.net`
9. Update Homer with new service locations
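Steps 3.1 through 3.7 might look roughly like this in the compose file (a sketch adapted from the upstream example; the image tag, port binding, and Traefik labels are illustrative):
```yaml
services:
  gitlab:
    image: gitlab/gitlab-ee:latest
    hostname: gitlab.jafner.net
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://gitlab.jafner.net'
        gitlab_rails['gitlab_shell_ssh_port'] = 2229
    ports:
      - "2229:22"   # host SSH binding moved to a higher port
    volumes:
      - $DOCKER_DATA/gitlab/config:/etc/gitlab
      - $DOCKER_DATA/gitlab/logs:/var/log/gitlab
      - $DOCKER_DATA/gitlab/data:/var/opt/gitlab
    labels:
      - "traefik.enable=true"   # plus TLS router labels per our Traefik conventions
```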
# References
1. [GitLab Docker images](https://docs.gitlab.com/ee/install/docker.html)
2. [GitLab SaaS vs Self-hosted](https://about.gitlab.com/handbook/marketing/strategic-marketing/dot-com-vs-self-managed/)
3. [Digital Ocean: How to Setup GitLab on a Digital Ocean Droplet](https://www.digitalocean.com/community/tutorials/how-to-setup-gitlab-a-self-hosted-github)

View File

@ -1,18 +0,0 @@
# Git Sparse-Checkout
We don't want every device that needs *any* part of the Jafner.net monorepo to get *all* of the monorepo. So we use [`git-sparse-checkout`](https://git-scm.com/docs/git-sparse-checkout) to pull only one or more subpaths when we clone.
Ensure that the device to be configured has an SSH pubkey with permission to pull/push to the repository.
```bash
mkdir ~/Jafner.net
cd ~/Jafner.net
git config --global init.defaultBranch main
git init
git config core.sparseCheckout true
git config core.fileMode false
git config pull.ff only
git config init.defaultBranch main
echo "homelab/$HOSTNAME/" >> .git/info/sparse-checkout
git remote add -f origin ssh://git@gitea.jafner.tools:2225/Jafner/Jafner.net.git
git pull
```

View File

@ -1,71 +0,0 @@
# NAS
The NAS is relied upon by many other hosts on the network, which need to be offlined before the NAS can be shut down.
1. Determine which service stacks rely on the NAS by running `grep -rnwli ~+ -e '/mnt/nas/media\|/mnt/torrenting\|/mnt/nas/calibre'` from the root of the `homelab` repo.
2. `docker-compose down` the stacks which rely on the NAS
3. `cat /etc/fstab` to get the list of mount points which rely on the NAS
4. For each NAS mount, run `sudo umount` for that share.
5. Offline the NAS. Press the physical power button on the NAS.
6. Perform necessary maintenance, then reboot the NAS.
7. After the NAS WebUI is available, SSH into the server and run `sudo mount -a`
8. Online the stacks affected by step 2.
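Steps 1 through 4 can be sketched as a script (untested; the grep pattern comes from step 1, and the fstab handling assumes all NAS shares are CIFS mounts):
```sh
# From the root of the homelab repo:
for compose in $(grep -rwli ~+ -e '/mnt/nas/media\|/mnt/torrenting\|/mnt/nas/calibre'); do
  (cd "$(dirname "$compose")" && docker-compose down)   # offline NAS-dependent stacks
done
for mountpoint in $(grep cifs /etc/fstab | awk '{print $2}'); do
  sudo umount "$mountpoint"   # release each NAS share
done
```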
# Server
1. Stop all Docker containers with `docker stop $(docker ps -aq)`.
2. Reboot the host with `sudo reboot now`.
3. When the host has finished booting, re-mount the NAS SMB shares defined in `/etc/fstab` with `sudo mount -a`
4. Start all Docker containers with `docker start $(docker ps -aq)`.
# Router
The router is relied upon by all clients on the network, so they all need to be offlined or prepared.
1. Offline the seedbox.
2. Offline the server.
3. Offline the NAS.
4. Run `shutdown`.
# PiHole
The PiHole is relied upon for DNS resolution for all devices on the network which have not manually configured another DNS resolver.
1. Log into `router` via SSH and run the following:
```
configure
delete service dhcp-server shared-network-name LAN1 subnet 192.168.1.0/24 dns-server 192.168.1.23
set service dhcp-server shared-network-name LAN1 subnet 192.168.1.0/24 dns-server 1.1.1.1
commit; save; exit
```
2. Perform necessary maintenance on the PiHole, then reboot it.
3. Switch back to the router and run the following:
```
configure
delete service dhcp-server shared-network-name LAN1 subnet 192.168.1.0/24 dns-server 1.1.1.1
set service dhcp-server shared-network-name LAN1 subnet 192.168.1.0/24 dns-server 192.168.1.23
commit; save; exit
```
4. Done.
# Full Lab
To offline the whole lab:
```sh
# Quote the remote commands so $(...) and ; are evaluated on the remote host, not locally
ssh joey@joey-server 'docker stop $(docker ps -aq)'
ssh joey@joey-server 'sudo shutdown now'
ssh joey@joey-seedbox 'docker stop $(docker ps -aq)'
ssh joey@joey-seedbox 'sudo shutdown now'
ssh root@joey-nas 'shutdown now'
ssh admin@192.168.1.1 'configure; delete system name-server 192.168.1.22; set system name-server 1.1.1.1; commit; save; exit'
ssh pi@pihole 'sudo shutdown now'
```
Perform necessary maintenance, then power hosts back on in the following order:
1. PiHole
2. NAS (ensure smb server is online)
3. Server
4. Seedbox
After all hosts are back on the network:
```sh
# Quote the remote commands so $(...) and ; are evaluated on the remote host, not locally
ssh admin@192.168.1.1 'configure; delete system name-server 1.1.1.1; set system name-server 192.168.1.22; commit; save; exit'
ssh joey@joey-server 'sudo mount -a'
ssh joey@joey-server 'docker start $(docker ps -aq)'
ssh joey@joey-seedbox 'sudo mount -a'
ssh joey@joey-seedbox 'docker start $(docker ps -aq)'
```

View File

@ -1,121 +0,0 @@
# Shutdown TL;DR
1. Shut down the server: `docker stop $(docker ps -aq) && sudo shutdown now`
2. Shut down the NAS: `shutdown -p now`
3. Shut down the router: `sudo shutdown now`
# About
"Rackdown" is AWS slang for turning a rack of hosts off and on again. In this case, the "rack" refers to practically all components of the DC. Server, NAS, disk shelf, switches, router, PiHole, modem, APs, and desktops. This doc will consolidate previous docs and provide an overall shutdown and reboot procedure.
# Overview (Dependency Graph)
```mermaid
flowchart TD;
CLN["CenturyLink node"]<--Depends-->ONT;
ONT<--Cat5e-->Modem[ISP Modem/Router];
Modem<--Cat5e-->Router[Ubiquiti EdgeRouter 10X];
Router<--Cat5e-->switch_homelab[NetGear 8-Port Switch for Homelab];
switch_homelab<--Cat6-->desktop_joey[Joey's Desktop];
switch_homelab<--Cat5-->desktop_bridget[Bridget's Desktop];
switch_homelab<--Cat6-->NAS;
NAS<--SFP+ DAC-->Desktop;
NAS<--SFP+ DAC-->Server;
switch_homelab<--Cat6-->Server;
switch_homelab<--Cat6-->Seedbox;
switch_homelab<--Cat5e-->Pihole;
Router<--Cat5e-->switch_basementtv[TP-Link 5-Port Switch for Basement TV];
switch_basementtv<--Cat6-->desktop_maddie[Maddie's Desktop];
switch_basementtv<--Cat5e-->client_tv_downstairs[Downstairs TV];
Router<--Cat6-->wap_basement[Ubiquiti Unifi U6-Lite];
wap_basement<--Wifi6 2.4/5GHz-->clients_wireless_basement[Basement Wireless Clients];
Router<--Cat6-->wap_upstairs[Ubiquiti Unifi UAP-AC-LR];
wap_upstairs<--Wifi5 2.4/5GHz-->clients_wireless_upstairs[Upstairs Wireless Clients];
Router<--Cat6-->desktop_mom[Mom's Desktop];
Router<--Cat6-->desktop_dad[Dad's Desktop];
Router<-->desktop_gus[Gus' Desktop];
```
# Per-Node Reboot Instructions
For each of these, it is assumed that all dependent nodes have already been shut down as necessary.
## Rebooting the ONT
1. Unplug the 6-pin power plug. Wait 15 seconds.
2. Plug the power plug back in. Wait for the top three lights to be solid green.
## Rebooting the modem (Zyxel C3000Z)
1. Unplug the barrel power plug. Wait 15 seconds.
2. Plug the power plug back in. Wait for the "Power" and "WAN/LAN" lights to be solid green (the WAN/LAN light might flicker; that's okay).
## Rebooting the Router (Ubiquiti EdgeRouter 10X)
1. Unplug the barrel power plug. Wait 15 seconds.
2. Plug the power plug back in. Wait for the indicator LED to be solid white.
## Server
### Shutdown
1. SSH into the router.
2. Reconfigure the router's DNS resolution:
```
configure
delete system name-server 192.168.1.23
set system name-server 1.1.1.1
delete service dhcp-server shared-network-name LAN1 subnet 192.168.1.0/24 dns-server 192.168.1.23
set service dhcp-server shared-network-name LAN1 subnet 192.168.1.0/24 dns-server 1.1.1.1
commit; save; exit
```
3. Shut down Minecraft servers: `cd ~/homelab/jafner-net/config/minecraft && for service in ./*.yml; do echo "===== SHUTTING DOWN $service =====" && docker-compose -f $service down; done`
4. Shut down remaining services: `for app in ~/homelab/jafner-net/config/*; do echo "===== SHUTTING DOWN $app =====" && cd $app && docker-compose down; done`
5. Shut down the host: `sudo shutdown now`. Wait 30 seconds. If the green power LED doesn't turn off, hold the power button until it does.
### Boot
6. Press the power button on the front of the chassis to begin booting. Take note of any POST beeps during this time. Wait for the host to be accessible via SSH.
7. Check current running docker containers
8. Confirm all SMB shares are mounted with `mount -t cifs`. If not mounted, run `mount -a` for all shares.
9. Start most services: `for app in ~/homelab/jafner-net/config/*; do echo "===== STARTING $app =====" && cd $app && docker-compose up -d; done`
10. Start Minecraft servers: `cd ~/homelab/jafner-net/config/minecraft && for service in ./*.yml; do echo "===== STARTING $service =====" && docker-compose -f $service up -d; done`
11. Reconfigure the router's DNS resolution:
```
configure
delete system name-server 1.1.1.1
set system name-server 192.168.1.23
delete service dhcp-server shared-network-name LAN1 subnet 192.168.1.0/24 dns-server 1.1.1.1
set service dhcp-server shared-network-name LAN1 subnet 192.168.1.0/24 dns-server 192.168.1.23
commit; save; exit
```
### Shut down NAS-dependent projects
Rather than shutting down on a per-container basis, we want to shut down an entire project if any of its containers depends on the NAS.
The [nas_down.sh](/server/scripts/nas_down.sh) script uses `docker-compose config` to determine whether a project is NAS-dependent and will shut down all NAS-dependent projects. This script is also weakly-idempotent (due to the nature of `docker-compose down`).
### Start up NAS-dependent projects
Rather than starting up on a per-container basis, we want to start up an entire project if any of its containers depends on the NAS.
The [nas_up.sh](/server/scripts/nas_up.sh) script uses `docker-compose config` to determine whether a project is NAS-dependent and will start up all NAS-dependent projects. This script is also weakly-idempotent (due to the nature of `docker-compose up -d`).
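The detection idea can be sketched with a plain text match over a rendered compose config (the function name and the `/mnt/nas` pattern here are hypothetical simplifications of what the real scripts check):
```shell
#!/bin/sh
# Succeeds if the given (already-rendered) compose config references a NAS path.
nas_dependent() {
  grep -q '/mnt/nas\|/mnt/torrenting' "$1"
}

# Report whether a config is NAS-dependent.
check_config() {
  if nas_dependent "$1"; then
    echo "NAS-dependent"
  else
    echo "independent"
  fi
}
```
In the real scripts, the input would come from `docker-compose config`, which resolves environment variables before the paths are matched.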
### List host-side mounts for loaded containers
Mostly useful during scripting, but potentially also for troubleshooting, this one-liner will print the host side of each volume mounted in a container.
`docker inspect --format '{{range .Mounts}}{{println .Source}}{{end}}' <container_name>`
You can run this for all containers with this loop:
`for container in $(docker ps -aq); do docker ps -aq --filter "id=$container" --format '{{.Names}}' && docker inspect --format '{{range .Mounts}}{{println .Source}}{{end}}' $container; done`
Note: this is meant to be human-readable, so it prints the container's name before the list of volume mounts.
### Recreate all Docker containers one-liner
```bash
STACKS_RESTARTED=0 && for app in ~/homelab/jafner-net/config/*; do echo "===== RECREATING $app =====" && cd $app && docker-compose up -d --force-recreate && STACKS_RESTARTED=$(($STACKS_RESTARTED + 1)); done && cd ~/homelab/jafner-net/config/minecraft && for service in ./*.yml; do echo "===== RECREATING $service =====" && docker-compose -f $service up -d --force-recreate && STACKS_RESTARTED=$(($STACKS_RESTARTED + 1)); done && echo "===== DONE (restarted $STACKS_RESTARTED stacks) ====="
```
#### Recreate based on list of containers
```bash
STACKS_RESTARTED=0 && for app in calibre-web homer jdownloader2 librespeed monitoring navidrome qbittorrent send stashapp traefik; do echo "===== RECREATING $app =====" && cd ~/homelab/jafner-net/config/$app && docker-compose up -d && STACKS_RESTARTED=$(($STACKS_RESTARTED + 1)); done && echo "===== DONE (restarted $STACKS_RESTARTED stacks) =====" && cd ~
```
## NAS
### Shutdown
1. Follow the instructions to [shut down NAS-dependent projects](#shut-down-nas-dependent-projects) on the server.
2. SSH into the NAS and run `shutdown -p now`. Wait 30 seconds. If the green power LED doesn't turn off, hold the power button until it does.
3. Unplug the power connections to the disk shelf.
### Boot
4. Plug power and SAS into the disk shelf. Wait for all disks to boot. About 2-3 minutes. Wait about 30 extra seconds to be safe.
5. Plug power, ethernet, and SAS into the NAS. Power on the NAS and wait for the SSH server to become responsive. This can take more than 5 minutes. Note: The WebUI will not be accessible at `https://nas.jafner.net` until the server is also booted. It is accessible at `http://joey-nas/ui/sessions/signin`.
6. Follow the instructions to [start up NAS-dependent projects](#start-up-nas-dependent-projects) on the server.

View File

@ -1,17 +0,0 @@
Debian (and derivatives)
```bash
sudo apt update && sudo apt upgrade && \
sudo apt install git docker docker-compose && \
sudo systemctl enable docker && \
sudo usermod -aG docker $USER && \
logout
```
Arch (and derivatives)
```bash
sudo pacman -Syu && \
sudo pacman -S git docker docker-compose && \
sudo systemctl enable docker && \
sudo usermod -aG docker $USER && \
logout
```

View File

@ -1,510 +0,0 @@
# Mapping My Internet Neighborhood
# Ping List
```
www.google.com
apple.com
youtube.com
microsoft.com
play.google.com
linkedin.com
support.google.com
www.blogger.com
wordpress.org
en.wikipedia.org
cloudflare.com
docs.google.com
mozilla.org
youtu.be
maps.google.com
adobe.com
whatsapp.com
drive.google.com
plus.google.com
googleusercontent.com
accounts.google.com
bp.blogspot.com
sites.google.com
europa.eu
uol.com.br
es.wikipedia.org
facebook.com
vimeo.com
vk.com
github.com
amazon.com
istockphoto.com
t.me
line.me
search.google.com
enable-javascript.com
issuu.com
cnn.com
bbc.com
live.com
google.com.br
www.yahoo.com
dan.com
google.de
globo.com
slideshare.net
nih.gov
files.wordpress.com
who.int
forbes.com
gstatic.com
bbc.co.uk
google.co.jp
gravatar.com
wikimedia.org
developers.google.com
dropbox.com
nytimes.com
creativecommons.org
jimdofree.com
ok.ru
imdb.com
google.es
dailymotion.com
theguardian.com
fr.wikipedia.org
brandbucket.com
mail.ru
paypal.com
tools.google.com
buydomains.com
policies.google.com
pt.wikipedia.org
reuters.com
feedburner.com
www.weebly.com
youronlinechoices.com
news.google.com
medium.com
goo.gl
sedo.com
google.it
afternic.com
opera.com
www.gov.uk
booking.com
researchgate.net
myspace.com
ytimg.com
google.co.uk
fandom.com
thesun.co.uk
wsj.com
ru.wikipedia.org
wikia.com
time.com
domainmarket.com
hatena.ne.jp
list-manage.com
bit.ly
independent.co.uk
huffpost.com
google.fr
w3.org
mail.google.com
telegraph.co.uk
pinterest.com
it.wikipedia.org
dailymail.co.uk
indiatimes.com
cbsnews.com
draft.blogger.com
cdc.gov
wa.me
amazon.co.jp
bloomberg.com
webmd.com
amazon.es
networkadvertising.org
netvibes.com
amazon.de
shutterstock.com
office.com
wp.com
abril.com.br
telegram.me
cpanel.net
photos.google.com
aliexpress.com
aboutads.info
myaccount.google.com
twitter.com
marketingplatform.google
picasaweb.google.com
mirror.co.uk
tinyurl.com
google.ru
estadao.com.br
elpais.com
cnet.com
android.com
foxnews.com
amazon.co.uk
un.org
plesk.com
mediafire.com
archive.org
namecheap.com
forms.gle
get.google.com
google.pl
soundcloud.com
4shared.com
tiktok.com
fb.com
businessinsider.com
nasa.gov
imageshack.us
shopify.com
washingtonpost.com
hugedomains.com
news.yahoo.com
msn.com
ig.com.br
terra.com.br
t.co
de.wikipedia.org
huffingtonpost.com
pixabay.com
yandex.ru
storage.googleapis.com
wired.com
scribd.com
usatoday.com
nature.com
change.org
picasa.google.com
zdf.de
lefigaro.fr
usnews.com
themeforest.net
mega.nz
imageshack.com
fb.me
arxiv.org
lemonde.fr
deezer.com
netflix.com
lg.com
mit.edu
php.net
disqus.com
outlook.com
news.com.au
pbs.org
www.livejournal.com
unicef.org
sciencedirect.com
pinterest.de
express.co.uk
ja.wikipedia.org
pl.wikipedia.org
thenai.org
sciencemag.org
cointernet.com.co
ftc.gov
skype.com
vkontakte.ru
engadget.com
canada.ca
biglobe.ne.jp
hollywoodreporter.com
aol.com
disney.com
eventbrite.com
wikihow.com
stanford.edu
rambler.ru
clickbank.net
twitch.tv
offset.com
nbcnews.com
naver.com
dreamstime.com
academia.edu
abc.net.au
smh.com.au
walmart.com
ssl-images-amazon.com
bandcamp.com
surveymonkey.com
m.wikipedia.org
discord.gg
search.yahoo.com
amazon.fr
yelp.com
doubleclick.net
espn.com
oup.com
gmail.com
xing.com
ietf.org
abcnews.go.com
hp.com
newyorker.com
wiley.com
www.wix.com
thetimes.co.uk
ipv4.google.com
secureserver.net
gnu.org
alicdn.com
francetvinfo.fr
photos1.blogger.com
20minutos.es
nginx.com
ria.ru
ebay.com
zoom.us
cbc.ca
guardian.co.uk
berkeley.edu
spotify.com
techcrunch.com
buzzfeed.com
britannica.com
unesco.org
yahoo.co.jp
lexpress.fr
xbox.com
nginx.org
www.wikipedia.org
psychologytoday.com
npr.org
kickstarter.com
liveinternet.ru
discord.com
icann.org
translate.google.com
ziddu.com
sfgate.com
as.com
google.nl
nydailynews.com
ca.gov
insider.com
sputniknews.com
addthis.com
www.gov.br
akamaihd.net
plos.org
target.com
whitehouse.gov
theatlantic.com
apache.org
samsung.com
worldbank.org
goodreads.com
rakuten.co.jp
urbandictionary.com
akamaized.net
xinhuanet.com
bloglovin.com
pinterest.fr
adssettings.google.com
chicagotribune.com
books.google.com
photobucket.com
www.canalblog.com
id.wikipedia.org
leparisien.fr
nationalgeographic.com
vice.com
amzn.to
qq.com
tripadvisor.com
oracle.com
ikea.com
detik.com
ea.com
redbull.com
cambridge.org
spiegel.de
bing.com
springer.com
privacyshield.gov
ibm.com
sapo.pt
prezi.com
metro.co.uk
rtve.es
timeweb.ru
hubspot.com
ggpht.com
cornell.edu
cnil.fr
gofundme.com
rt.com
cpanel.com
windows.net
netlify.app
newsweek.com
cnbc.com
ft.com
alexa.com
dw.com
abc.es
economist.com
godaddy.com
rapidshare.com
pexels.com
gooyaabitemplates.com
zendesk.com
addtoany.com
code.google.com
sciencedaily.com
mashable.com
e-monsite.com
finance.yahoo.com
huawei.com
sendspace.com
freepik.com
elmundo.es
instagram.com
unsplash.com
doi.org
quora.com
gizmodo.com
weibo.com
linktr.ee
harvard.edu
latimes.com
steampowered.com
clarin.com
nypost.com
www.over-blog.com
googleblog.com
yadi.sk
ted.com
theverge.com
instructables.com
playstation.com
ouest-france.fr
google.co.in
about.com
bp2.blogger.com
ovh.com
lavanguardia.com
google.ca
groups.google.com
mayoclinic.org
nokia.com
imgur.com
twimg.com
qz.com
wn.com
cisco.com
dictionary.com
variety.com
groups.yahoo.com
etsy.com
noaa.gov
excite.co.jp
investopedia.com
reg.ru
welt.de
amazon.in
canva.com
corriere.it
mozilla.com
ebay.de
tes.com
airbnb.com
rottentomatoes.com
seesaa.net
adweek.com
dropboxusercontent.com
faz.net
business.google.com
steamcommunity.com
focus.de
newscientist.com
about.me
marriott.com
prnewswire.com
ieee.org
windowsphone.com
goal.com
kakao.com
softonic.com
mysql.com
venturebeat.com
si.edu
ucoz.ru
google.com.tw
blog.fc2.com
usgs.gov
orange.fr
coursera.org
jhu.edu
oecd.org
ameba.jp
amazon.ca
impress.co.jp
narod.ru
ebay.co.uk
salesforce.com
bund.de
java.com
m.me
lonelyplanet.com
telegram.org
fifa.com
weather.com
dynadot.com
naver.jp
webnode.page
over-blog-kiwi.com
tabelog.com
000webhost.com
marketwatch.com
g.co
theconversation.com
mailchimp.com
pnas.org
feedproxy.google.com
bustle.com
espn.go.com
messenger.com
zdnet.com
ubuntu.com
deloitte.com
mystrikingly.com
hbr.org
lenta.ru
google.com.au
goo.ne.jp
sagepub.com
billboard.com
parallels.com
usc.edu
thoughtco.com
thedailybeast.com
redhat.com
intel.com
scientificamerican.com
psu.edu
usda.gov
ads.google.com
cbslocal.com
mynavi.jp
gfycat.com
thefreedictionary.com
sina.com.cn
ctvnews.ca
wiktionary.org
giphy.com
howstuffworks.com
scholastic.com
com.com
```
1. Determine which domains respond to a ping at all. It's possible that some of these randomly drop all 4 pings, but that should be a small number. To accomplish this, we'll ping each host 4 times and append the output to `ping.log`.
`for domain in $(cat domains.txt); do ping -c 4 "$domain" >> ping.log 2>&1; done`
This gives us [attach/ping.log](attach/ping.log)
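A sketch of the next step, assuming GNU ping's summary format: pull out the domains that got at least one reply. The two-domain heredoc here stands in for the real `ping.log`.

```sh
# List domains with at least one received reply.
# Assumes GNU ping's "--- <domain> ping statistics ---" / "X received" lines;
# the heredoc sample stands in for the real ping.log.
awk '/ping statistics/ { d = $2 }
     /packets transmitted/ { if ($4 > 0) print d }' <<'EOF'
--- com.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3065ms
--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
EOF
# prints: google.com
```

Point the same `awk` at the real log (`awk '...' ping.log > responsive.txt`) to build the responsive list.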
@ -1,40 +0,0 @@
# Full Network Diagram
```mermaid
flowchart TD;
Internet<--Symmetrical 1Gbit Fiber-->ONT;
ONT<--Cat5e-->Router;
Router<--Cat5e-->switch_homelab[NetGear 8-Port Switch for Homelab];
switch_homelab<--Cat6-->desktop_joey[Joey's Desktop];
switch_homelab<--Cat5-->desktop_bridget[Bridget's Desktop];
switch_homelab<--Cat6-->NAS;
NAS<--SFP+ DAC-->Desktop;
NAS<--SFP+ DAC-->Server;
switch_homelab<--Cat6-->Server;
switch_homelab<--Cat6-->Seedbox;
switch_homelab<--Cat5e-->Pihole;
Router<--Cat5e-->switch_basementtv[TP-Link 5-Port Switch for Basement TV];
switch_basementtv<--Cat6-->desktop_maddie[Maddie's Desktop];
switch_basementtv<--Cat5e-->client_tv_downstairs[Downstairs TV];
Router<--Cat6-->wap_basement[Ubiquiti Unifi U6-Lite];
wap_basement<--Wifi6 2.4/5GHz-->clients_wireless_basement[Basement Wireless Clients];
Router<--Cat6-->wap_upstairs[Ubiquiti Unifi UAP-AC-LR];
wap_upstairs<--Wifi5 2.4/5GHz-->clients_wireless_upstairs[Upstairs Wireless Clients];
Router<--Cat6-->desktop_mom[Mom's Desktop];
Router<--Cat6-->desktop_dad[Dad's Desktop];
Router<-->desktop_gus[Gus' Desktop];
```
# Router Interfaces
| Interface | Connected to |
|:---------:|:------------:|
| `eth0` | (Upstream) Zyxel C3000Z modem |
| `eth1` | Reserved for `192.168.2.1/24` |
| `eth2` | Homelab switch |
| `eth3` | Mom's office PC |
| `eth4` | Gus' PC |
| `eth5` | (Disconnected) Outlets behind upstairs couch |
| `eth6` | Maddie's office switch |
| `eth7` | Dad's office PC |
| `eth8` | (PoE, injected) Upstairs wireless AP |
| `eth9` | (PoE, native) Homelab wireless AP |
| `pppoe0` | PPPoE layer physically on `eth0` |
| `switch0` | Internal router switch |
@ -1,19 +0,0 @@
| Device | Max draw (W) | Cost/mo.* |
|:------:|:------------:|:--------:|
| **Networking** | **Total ~60W** | **$2.12** |
| ISP ONT | ? | ? |
| ISP Modem ([Zyxel c3000z](https://www.centurylink.com/content/dam/home/help/downloads/internet/c3000z-datasheet.pdf)) | [10.4W](https://www.centurylink.com/home/help/internet/modems-and-routers/modem-energy-efficiency.html) | $0.34 |
| Router ([Ubiquiti Edgerouter 10X](https://dl.ubnt.com/datasheets/edgemax/EdgeRouter_DS.pdf)) | 8W (excl. PoE) | $0.26 |
| AP ([Ubiquiti UAP-AC-LR](https://dl.ubnt.com/datasheets/unifi/UniFi_AC_APs_DS.pdf)) | 6.5W | $0.21 |
| AP ([Ubiquiti U6-Lite](https://dl.ui.com/ds/u6-lite_ds.pdf)) | 12W | $0.39 |
| Network Switch (Estimate) | 7W | $0.23 per switch |
| **Hosts** | **Total 310W idle / 520W load** | **$13.56** |
| PiHole | 3.8W | $0.12 |
| Server | 36W idle / 136W load | $1.18 / $1.99 / **$2.81** / $3.62 / $4.44 |
| Seedbox | 30W idle / 85W load | $0.98 / $1.43 / **$1.88** / $2.33 / $2.78 |
| NAS | 90W idle / 146W load | $2.94 / $3.40 / $3.85 / $4.31 / $4.77 |
| Disk shelf | ~45W empty / 150W current / ~213W full | $1.47 empty / $4.90 current / $6.96 full |
\* Devices with high variance calculated at intervals of 25% max load (0%, 25%, 50%, 75%, 100%)
## Math
1. Assuming ([\$0.045351/kWh](https://www.mytpu.org/wp-content/uploads/All-Schedules-2020_Emergency-Rate-Delay.pdf) * (30 days per month * 24 hours per day)) / 1000 W/kW = \$0.032653 per watt-month.
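The same arithmetic as a one-liner, using the server's 36 W idle draw from the table as the example (awk assumed available):

```sh
# dollars per month = watts * $0.045351/kWh * 720 h/mo / 1000 W/kW
awk -v watts=36 'BEGIN { printf "$%.2f/mo\n", watts * 0.045351 * 720 / 1000 }'
# prints: $1.18/mo
```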
@ -1,17 +0,0 @@
## From `iperf.fr` List
| Host | Port | Tested Bitrate |
|:----:|:----:|:--------------:|
| ping.online.net | 5200 | 175 Mbps |
| ping-90ms.online.net | 5200 | 173 Mbps |
| nl.iperf.014.fr | 10420 | 10.5 Mbps |
| speedtest.uztelecom.uz | 5200 | 45.9 Mbps |
| iperf.biznetnetworks.com | 5202 | 87.8 Mbps |
More here: [iperf.fr](https://iperf.fr/iperf-servers.php)
## From `masonr/yet-another-bench-script` List
| Host | Port | Tested Bitrate |
|:----:|:----:|:--------------:|
| la.speedtest.clouvider.net | 5200 | 626 Mbps |
More here: [Reddit - Homelab](https://www.reddit.com/r/homelab/comments/slojqr/any_good_public_iperf_servers/hvtkd6e/)
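To retest these lists later, a small loop can turn "host port" pairs into `iperf3` invocations. A sketch: the `servers.txt` name and the `-t 10` duration are assumptions, not part of the original notes.

```sh
# Print an iperf3 command for each "host port" pair; pipe the output to `sh` to run them.
printf '%s\n' 'ping.online.net 5200' 'la.speedtest.clouvider.net 5200' > servers.txt
while read -r host port; do
  echo "iperf3 -c $host -p $port -t 10"
done < servers.txt
```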
@ -1,11 +0,0 @@
## 1. Networking
1. Power cycle the ONT. Wait until the top three lights are green.
2. Power cycle the modem. Wait until the power, WAN, and Ethernet 1 lights are green.
3. Power cycle the router.
Switches and APs should not need power cycling. Once the indicator LED is solid white, everything should be back online.
## 2. Homelab
1. Power on the desktop or laptop.
2. Power on the NAS. The DS4243 will power itself on automatically. **Wait until the webui at [nas.jafner.net](http://nas.jafner.net) is responsive.**
3. Power on the Server. Once it is accessible, run a `sudo mount -a` to mount all network shares defined in `/etc/fstab`. Then run `docker start $(docker ps -aq)` to start all Docker containers. Note: Run `docker inspect -f '{{ .Mounts }}' $(docker ps -q)` to get a list of volumes for all running containers, useful for determining whether a container is reliant on a mounted directory.
@ -1,6 +0,0 @@
# Restart the Docker Daemon
Sometimes it may be necessary to restart the Docker daemon (for example to apply changes made in `/etc/docker/daemon.json`) and recreate all containers. Here's how:
1. Shut down and destroy all containers: `docker stop $(docker ps -aq) && docker rm $(docker ps -aq)`.
2. *Restart* (not reload) the Docker daemon: `sudo systemctl restart docker`.
3. Recreate all containers (to use the new default loki logging): `for app in ~/homelab/jafner-net/config/*; do cd $app && docker-compose up -d; done`
4. Manually boot Minecraft containers as appropriate: `cd ~/homelab/jafner-net/config/minecraft && for server in router vanilla bmcp; do docker-compose -f $server.yml up -d; done`
@ -1,15 +0,0 @@
# Secrets
Our repository contains as many configuration details as reasonable. But we must secure our secrets: passwords, API keys, encryption seeds, etc.
## Docker Env Vars
1. We store our Docker env vars in a file named after the service. For example `keycloak.env`.
2. We separate our secrets from non-secret env vars by placing them in a file with a similar name, but with `_secrets` appended to the service name. For example `keycloak_secrets.env`. These files exist only on the host for which they are necessary, and must be created manually on the host.
3. Our repository `.gitignore` excludes all files matching `*.secret`, and `*_secrets.env`.
Note: This makes secrets very fragile. Accidental deletion or other data loss can destroy the secret permanently.
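The corresponding `.gitignore` entries look like:

```
*.secret
*_secrets.env
```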
## Generating Secrets
We use the password manager's generator to create secrets with the desired parameters, preferring the following parameters:
- 64 characters
- Capital letters, lowercase letters, numbers, and standard symbols (`^*@#!&$%`)
If necessary, we reduce the character set by cutting out symbols before reducing string length.
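When the password manager isn't handy, something like this produces a secret with the same parameters. A sketch only: it assumes `/dev/urandom` and GNU coreutils are available.

```sh
# 64 random characters drawn from letters, numbers, and the standard symbols above
tr -dc 'A-Za-z0-9^*@#!&$%' < /dev/urandom | head -c 64; echo
```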
@ -1,170 +0,0 @@
# Security
## Host OS Initial Setup
For general-purpose hosts, we start from an up-to-date Debian base image. For appliances and application-specific hosts, we prefer distributions downstream of Debian for consistency.
### General Purpose Packages
Assuming a Debian base image, we install the following basic packages:
1. `curl` to facilitate web requests for debugging.
2. `nano` as preferred terminal text editor.
3. `inxi` to compile hardware info.
4. `git` to interact with homelab config repo.
5. `htop` to view primary host resources in real time.
### Installing Docker
There are two modes of running Docker: root and rootless.
Docker was built to run as root, and running as root is much more convenient. However, any potential vulnerabilities in Docker risk privilege escalation.
#### Installing Docker in Root mode (current, deprecated)
We use the convenient, insecure install script to install docker.
1. `curl -fsSL https://get.docker.com | sudo sh` to get and run the install script.
2. `sudo systemctl enable docker` to enable the Docker daemon service.
3. `sudo usermod -aG docker $USER` to add the current user (should be "admin") to the docker group.
4. `logout` to log out as the current user. Log back in to apply new perms.
5. `docker ps` should now return an empty table.
https://docs.docker.com/engine/install/debian/
#### Installing Docker in Rootless mode (preferred)
This is the preferred process, as rootless mode mitigates many potential vulnerabilities in the Docker application and daemon.
1. `sudo apt-get update && sudo apt-get install uidmap dbus-user-session fuse-overlayfs slirp4netns` to install the prerequisite packages to enable rootless mode.
2. Set up the Docker repository:
```sh
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
```
3. Install the Docker packages:
```sh
sudo apt-get install \
docker-ce \
docker-ce-cli \
containerd.io \
docker-buildx-plugin \
docker-compose-plugin \
docker-ce-rootless-extras
```
4. Run the rootless setup script with `dockerd-rootless-setuptool.sh install`
5. `systemctl --user start docker` to start the rootless docker daemon.
6. `systemctl --user enable docker && sudo loginctl enable-linger $(whoami)` to configure the rootless docker daemon to run at startup.
7. `export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock && docker context use rootless` to configure the client to connect to the socket.
Theoretically, this should work according to the Docker docs. But when I attempted to follow these steps I got the following error when attempting to create a basic nginx container:
```
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: unable to apply cgroup configuration: unable to start unit "docker-1c7f642e0716cf1a67c6a0c6ad4a1de3833eb82682ce62b219f423fa1014e227.scope" (properties [{Name:Description Value:"libcontainer container 1c7f642e0716cf1a67c6a0c6ad4a1de3833eb82682ce62b219f423fa1014e227"} {Name:Slice Value:"user.slice"} {Name:Delegate Value:true} {Name:PIDs Value:@au [39360]} {Name:MemoryAccounting Value:true} {Name:CPUAccounting Value:true} {Name:IOAccounting Value:true} {Name:TasksAccounting Value:true} {Name:DefaultDependencies Value:false}]): Permission denied: unknown.
```
https://docs.docker.com/engine/security/rootless/
## Linux User Management
We create a non-root user (usually called "admin") with a strong password and passwordless sudo.
On Debian-based systems, we take the following steps:
1. As root user, run `adduser admin` to create the non-root user called "admin".
2. As root user, run `usermod -aG sudo admin` to add the new "admin" user to the sudo group.
3. As root user, run `visudo` and append this line to the end of the file: `admin ALL=(ALL) NOPASSWD:ALL`.
4. Switch to the new user with `sudo su admin`.
5. As the new "admin" user, run `passwd` to create a new, strong password. Generate this password with the password manager and store it under the SSH Hosts folder.
https://www.cyberciti.biz/faq/add-new-user-account-with-admin-access-on-linux/
https://www.cyberciti.biz/faq/linux-unix-running-sudo-command-without-a-password/
https://www.cyberciti.biz/faq/linux-set-change-password-how-to/
## Securing SSH
For all hosts we want to take the standard steps to secure SSH access.
1. `mkdir /home/$USER/.ssh` to create the `~/.ssh` directory for the non-root user (usually "admin").
2. Copy your SSH public key to the clipboard, then `echo "<insert pubkey here>" >> /home/admin/.ssh/authorized_keys` to enable key-based SSH access to the user.
3. Install the authenticator libpam plugin package with `sudo apt install libpam-google-authenticator`
4. Run the authenticator setup with `google-authenticator` and use the following responses:
- Do you want authentication tokens to be time-based? `y`
- Do you want me to update your "/home/$USER/.google_authenticator" file? `y`
- Do you want to disallow multiple uses of the same authentication token? `y`
- Do you want to do so? `n` (refers to increasing time skew window)
- Do you want to enable rate-limiting? `y`
We enter our TOTP secret key into our second authentication method and save our one-time backup recovery codes.
5. Edit the `/etc/pam.d/sshd` file as sudo, and add this line to the top of the file `auth sufficient pam_google_authenticator.so nullok`.
6. Edit the `/etc/ssh/sshd_config` file as sudo, and ensure the following assertions exist:
- `PubkeyAuthentication yes` to enable authentication via pubkeys in `~/.ssh/authorized_keys`.
- `AuthenticationMethods publickey,keyboard-interactive` to allow both pubkey and the interactive 2FA prompt.
- `PasswordAuthentication no` to disable password-based authentication.
- `ChallengeResponseAuthentication yes` to enable 2FA interactive challenge.
- `UsePAM yes` to use the 2FA authenticator libpam module.
7. Restart the SSH daemon with `sudo systemctl restart sshd.service`.
Note: SSH root login will be disabled implicitly by requiring pubkey authentication and having no pubkeys listed in `/root/.ssh/authorized_keys`.
https://www.digitalocean.com/community/tutorials/how-to-set-up-multi-factor-authentication-for-ssh-on-ubuntu-16-04
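Collected in one place, the `sshd_config` assertions from step 6 amount to this fragment:

```
PubkeyAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
PasswordAuthentication no
ChallengeResponseAuthentication yes
UsePAM yes
```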
### Disabling 2FA
Some use cases (such as programmatic access) demand 2FA be disabled.
Some day we'll figure out how to allow specific keys to bypass the 2FA requirement. But until then,
1. Edit the file `/etc/pam.d/sshd` and comment out the line `auth sufficient pam_google_authenticator.so nullok`
2. Edit the file `/etc/ssh/sshd_config` and find the `AuthenticationMethods` configuration. Replace the value `publickey,keyboard-interactive` with `publickey`.
### SSH Key Management
The process for managing SSH keys should work as follows:
1. SSH access to hosts should be controlled via keys listed in `~/.ssh/authorized_keys`.
2. One key should map to one user on one device.
3. When authorizing a key, review existing authorized keys and remove as appropriate.
4. Device keys should be stored under the "SSH Keys" folder in the password manager. The pubkey should be the "password" for easy copying, and the private component should be added as an attachment.
## Patching and Updating
In the interest of proactively mitigating security risks, we try to keep packages up to date. We have two main concerns for patching: host packages, and docker images. Each of these have their own concerns and are handled separately.
### Host Packages via Unattended Upgrades
Since Debian 9, the `unattended-upgrades` and `apt-listchanges` packages have been installed by default.
1. Install the packages with `sudo apt-get install unattended-upgrades apt-listchanges`.
2. Create the default automatic upgrade config with `sudo dpkg-reconfigure -plow unattended-upgrades`
By default, we will get automatic upgrades for the distro version default and security channels (e.g. `bullseye` and `bullseye-security`) with the `Debian` and `Debian-Security` labels.
https://wiki.debian.org/UnattendedUpgrades
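On a stock Debian install, the `dpkg-reconfigure` step writes `/etc/apt/apt.conf.d/20auto-upgrades` with contents along these lines:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```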
### Debian Version Upgrade
When the time comes for a major version upgrade on a Debian system, we take the following steps as soon as realistic.
1. Update the current system with `sudo apt-get update && sudo apt-get upgrade && sudo apt-get full-upgrade`.
2. Switch the update channel for APT sources.
2a. Export the name of the new version codename to a variable with `NEW_VERSION_CODENAME=bookworm` (bookworm as an example).
2b. `. /etc/os-release && for file in /etc/apt/sources.list /etc/apt/sources.list.d/*; do sudo sed -i "s/$VERSION_CODENAME/$NEW_VERSION_CODENAME/g" "$file"; done` (sourcing `/etc/os-release` provides `$VERSION_CODENAME`; `-i` makes sed edit the files in place rather than just print them).
3. Clean out old packages and pull the new lists `sudo apt-get clean && sudo apt-get update`
4. Update to most recent versions of all packages for new channel with `sudo apt-get upgrade && sudo apt-get full-upgrade`
5. Clean out unnecessary packages with `sudo apt-get autoremove`.
6. Reboot the host to finalize changes with `sudo shutdown -r now`.
Note: If migrating from Debian versions <12 to versions >=12, add the following repos (in addition to `main`) after step 2a: `contrib non-free non-free-firmware`.
https://wiki.debian.org/DebianUpgrade
### Docker Images
As of now, we have no automated process or tooling for updating Docker images.
We usually update Docker images one stack at a time. For example, we'll update `calibre-web` on `fighter`:
1. Navigate to the directory of the stack. `cd ~/homelab/fighter/config/calibre-web`
2. Check the images and tags to be pulled with `docker-compose config | grep image`
3. Pull the latest version of the image tagged in the compose file `docker-compose pull`
4. Restart the containers to use the new images with `docker-compose up -d --force-recreate`
Note: We can update one image from a stack by specifying the name of the service. E.g. `docker-compose pull forwardauth`
@ -1,14 +0,0 @@
# Setting Up the Repository
1. Create a new Gitlab [Personal Access Token](https://gitlab.jafner.net/-/profile/personal_access_tokens) named after the host on which it will be used. It should have the scopes `read_api`, `read_user`, `read_repository`, and, optionally, `write_repository` if the host will be pushing commits back to the origin. Development hosts should have the `write_repository` permission. Note the *token name* and *token key* for step 6.
2. `mkdir ~/homelab ~/data && cd ~/homelab` Create the `~/homelab` and `~/data` directories. This should be under the `admin` user's home directory, or equivalent. *It should not be owned by root.*
3. `git init` Initialize the git repo. It should be empty at this point. We must init the repo empty in order to configure sparse checkout.
4. `git config core.sparseCheckout true && git config core.fileMode false && git config pull.ff only && git config init.defaultBranch main` Configure the repo to use sparse checkout and ignore file mode changes. Also configure default branch and pull behavior.
5. (Optional) `echo "$HOSTNAME/" > .git/info/sparse-checkout` Configure the repo to checkout only the files relevant to the host (e.g. fighter). Development hosts should not use this.
6. `git remote add -f origin https://<token name>:<token key>@gitlab.jafner.net/Jafner/homelab.git` Add the origin with authentication via personal access token and fetch. Remember to replace the placeholder token name and token key with the values from step 1.
7. `git checkout main` Checkout the main branch to fetch the latest files.
## Disabling Sparse Checkout
To disable sparse checkout, simply run `git sparse-checkout disable`.
With this, it can also be re-enabled with `git sparse-checkout init`.
You can use these two commands to toggle sparse checkout.
Per: https://stackoverflow.com/questions/36190800/how-to-disable-sparse-checkout-after-enabled
@ -1,29 +0,0 @@
# Basic Usage
First, install the program with `pip3 install spotdl`. It requires Python 3.6.1 or higher (`sudo apt install python3`) *and* FFmpeg 4.2 or higher (`sudo apt install ffmpeg`).
To download a track, album, or artist from Spotify, use:
`spotdl <url e.g. https://open.spotify.com/artist/3q7HBObVc0L8jNeTe5Gofh?si=fd6100828a764c3b>`
This is non-interactive and works programmatically.
## Using Docker Container
If the host has Docker, but you don't want to install any Python packages, you can run single commands with the Docker container with `docker run --rm -it -v "$(pwd):/data" coritsky/spotdl <url e.g. https://open.spotify.com/artist/3q7HBObVc0L8jNeTe5Gofh?si=fd6100828a764c3b>`.
# Music Library Integration
To make updating my library easier, each "Artist" folder has a file called `spot.txt` which contains only the Spotify URL for that artist. This makes it possible to run a loop similar to the following:
```sh
cd /path/to/music/library/artists
for artist in */; do
  # Run in a subshell so a failed download doesn't strand the loop in the wrong directory
  (
    cd "$artist" || exit
    # use spotdl if the host is already configured with spotdl, or you don't need the script to be portable
    # use docker run for better portability (within my lab) at the expense of overhead
    spotdl "$(cat spot.txt)"
    # docker run --rm -it -v "$(pwd):/data" coritsky/spotdl "$(cat spot.txt)"
  )
done
```
# Links
[coritsky/spotdl on Dockerhub](https://hub.docker.com/r/coritsky/spotdl)
[Spotdl on GitHub](https://github.com/spotDL/spotify-downloader/)
@ -1,15 +0,0 @@
# Homelab Tour
A tour of the services and configurations
## Hosts
For this repo we use traditional server configuration patterns, rather than Kubernetes.
## Services and Stacks
### Networking
### Volumes
### Env Vars
#### Secrets
@ -1,4 +0,0 @@
1. Edit the contents of `/etc/apt/sources.list` as sudo. Make it match the [default Debian 11 sources.list](https://wiki.debian.org/SourcesList#Example_sources.list), using the `contrib` and `non-free` additional components.
2. Run `sudo apt update && sudo apt upgrade`
3. Run `sudo apt full-upgrade`
4. Reboot.
@ -1,8 +0,0 @@
1. Update existing packages. Run `sudo apt-get update && sudo apt-get upgrade` to fetch and install the latest versions of existing packages from the Debian 11 release channel.
2. Reboot the system. Follow the appropriate shutdown procedure for the host.
3. Edit the `sources.list` file to point to the new release channels. Run `sudo nano /etc/apt/sources.list`, then replace the release channel names for bullseye with those for bookworm.
4. Update and upgrade packages minimally. Run `sudo apt update && sudo apt upgrade --without-new-pkgs`.
5. Fully upgrade the system. Run `sudo apt full-upgrade`.
6. Validate the SSHD config file. Run `sudo sshd -t`.
[CyberCiti.biz](https://www.cyberciti.biz/faq/update-upgrade-debian-11-to-debian-12-bookworm/)
@ -1,5 +0,0 @@
PING com.com (54.219.18.140) 56(84) bytes of data.
--- com.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3065ms
@ -1,7 +0,0 @@
foreach( $input in $args ) {
    # TrimEnd(".mkv .mp4") strips any of those *characters* from the end, not the extension;
    # ChangeExtension swaps the suffix correctly
    $output = [System.IO.Path]::ChangeExtension("$input", ".mp4")
    ffmpeg -i "$input" -c copy "$output"
    if ($?) {
        Remove-Item "$input"
    }
}
@ -1,19 +0,0 @@
foreach( $input in $args ) {
$extension = [System.IO.Path]::GetExtension("$input")
if ($extension -ne ".mp4") {
echo "Video must use mp4 container!"
pause
exit
}
Set-Location -Path ([System.IO.Path]::GetDirectoryName("$input"))
$output = [System.IO.Path]::GetDirectoryName("$input") + "\" + [System.IO.Path]::GetFileNameWithoutExtension("$input") + "-slow-mo" + [System.IO.Path]::GetExtension("$input")
echo $output
ffmpeg -i "$input" -map 0:v -c:v copy -bsf:v h264_mp4toannexb 'raw.h264'
if ($?) {
ffmpeg -fflags +genpts -r 60 -i raw.h264 -c:v copy -movflags faststart "$output"
}
Remove-Item raw.h264
}
pause
@ -1,56 +0,0 @@
Write-Host @"
Select a supported transcode profile:
1) CRF 21 (~19.9 Mb/s)
2) CRF 27 (~10.3 Mb/s)
3) 1080p CRF 21 (11.9 Mb/s)
4) 1080p CRF 27 (6.2 Mb/s)
5) 720p CRF 21 (6.3 Mb/s)
6) 720p CRF 27 (3.3 Mb/s)
"@
$profile = Read-Host -Prompt 'Select a profile [2]'
Switch ($profile) {
"" {
$profile = "CRF_27"
$ffmpeg_arguments='-metadata comment="x264 CRF 27" -movflags +faststart -c:v libx264 -preset slower -crf 27'.Split(" ")
}
"1" {
$profile = "CRF_21"
$ffmpeg_arguments='-metadata comment="x264 CRF 21" -movflags +faststart -c:v libx264 -preset slower -crf 21'.Split(" ")
}
"2" {
$profile = "CRF_27"
$ffmpeg_arguments='-metadata comment="x264 CRF 27" -movflags +faststart -c:v libx264 -preset slower -crf 27'.Split(" ")
}
"3" {
$profile = "1080p_CRF_21"
$ffmpeg_arguments='-metadata comment="x264 1080p CRF 21" -movflags +faststart -vf scale=1920:1080 -c:v libx264 -preset slower -crf 21'.Split(" ")
}
"4" {
$profile = "1080p_CRF_27"
$ffmpeg_arguments='-metadata comment="x264 1080p CRF 27" -movflags +faststart -vf scale=1920:1080 -c:v libx264 -preset slower -crf 27'.Split(" ")
}
"5" {
$profile = "720p_CRF_21"
$ffmpeg_arguments='-metadata comment="x264 720p CRF 21" -movflags +faststart -vf scale=1280:720 -c:v libx264 -preset slower -crf 21'.Split(" ")
}
"6" {
$profile = "720p_CRF_27"
$ffmpeg_arguments='-metadata comment="x264 720p CRF 27" -movflags +faststart -vf scale=1280:720 -c:v libx264 -preset slower -crf 27'.Split(" ")
}
Default {
echo "Is it that hard to just enter a number?"
pause
exit
}
}
foreach( $input in $args ) {
$output = [System.IO.Path]::GetDirectoryName("$input") + "\" + [System.IO.Path]::GetFileNameWithoutExtension("$input") + "-$profile" + [System.IO.Path]::GetExtension("$input")
ffmpeg -i "$input" $ffmpeg_arguments "$output"
}
pause
@ -1,173 +0,0 @@
# Quick Help
- Fighter connecting to Barbarian: `sudo iscsiadm --mode node --targetname "iqn.2020-03.net.jafner:fighter" --portal "192.168.1.10:3260" --login && sudo mount /dev/sdb1 /mnt/iscsi/barbarian`
- Fighter connecting to Paladin: `sudo iscsiadm --mode node --targetname "iqn.2020-03.net.jafner:fighter" --portal "192.168.1.12:3260" --login && sudo mount /dev/sdb1 /mnt/iscsi/paladin`
# NOTE: Adding or removing drives
> The drive letter of the iSCSI device will change (e.g. from `/dev/sde` to `/dev/sdb`) if drives are added or removed. This will cause the mount to fail.
To resolve:
0. Make sure all Docker stacks relying on the iSCSI drive are shut down.
1. Update the `fstab` entry. Edit the `/etc/fstab` file as root, and update the drive letter.
2. Re-mount the drive. Run `sudo mount -a`.
# Creating the Zvol and iSCSI share in TrueNAS Scale
1. Navigate to the dataset to use. From the TrueNAS Scale dashboard, open the navigation side panel. Navigate to "Datasets". Select the pool to use (`Tank`).
2. Create the Zvol to use. In the top-left, click "Add Zvol" ([Why not a dataset?](https://www.truenas.com/community/threads/dataset-over-zvol-or-vice-versa.45526/)). Name: `fighter`, Size for this zvol: `8 TiB`. Leave all other settings default.
3. Navigate to the iSCSI share creator. Navigate to "Shares". Open the "Block (iSCSI) Shares Targets" panel. (Optionally, set the base name per [RFC 3721 1.1](https://datatracker.ietf.org/doc/html/rfc3721.html#section-1.1) (`iqn.2020-04.net.jafner`)).
4. Create the iSCSI share. Click the "Wizard" button in the top-right.
a. Create or Choose Block Device. Name: `fighter`, Device: `zvol/Tank/fighter`, Sharing Platform: `Modern OS`.
b. Portal. Portal: `Create New`, Discovery Authentication Method: `NONE`, Discovery Authentication Group: `NONE`, Add listen: `0.0.0.0`.
c. Initiator. Leave blank to allow all hostnames and IPs to initiate. Optionally enter a list IP address(es) or hostname(s) to restrict access to the device.
d. Confirm. Review and Save.
5. Enable iSCSI service at startup. Navigate to System Settings -> Services. If it's not already running, enable the iSCSI service and check the box to "Start Automatically".
# Connecting to the iSCSI target
1. Install the `open-iscsi` package.
- `sudo apt-get install open-iscsi`
2. Get the list of available shares.
- `sudo iscsiadm --mode discovery --type sendtargets --portal 192.168.1.10`
- The IP for `--portal` is the IP of the NAS hosting the iSCSI share.
- In my case, this command returns `192.168.1.10:3260,1 iqn.2020-03.net.jafner:fighter`.
3. Open the iSCSI session.
## Hardware Reports with `inxi`
`inxi` is a script which employs a wide array of system information utilities to assemble a holistic system summary. Check out [the repository](https://github.com/smxi/inxi) for more information.
### Install `inxi`

## iSCSI
### Quick Help
- Fighter connecting to Barbarian: `sudo iscsiadm --mode node --targetname "iqn.2020-03.net.jafner:fighter" --portal "192.168.1.10:3260" --login && sudo mount /dev/sdb1 /mnt/iscsi/barbarian`
- Fighter connecting to Paladin: `sudo iscsiadm --mode node --targetname "iqn.2020-03.net.jafner:fighter" --portal "192.168.1.12:3260" --login && sudo mount /dev/sdb1 /mnt/iscsi/paladin`
### NOTE: Adding or removing drives
> The drive letter of the iSCSI device will change (e.g. from `/dev/sde` to `/dev/sdb`) if drives are added or removed. This will cause the mount to fail.
To resolve:
0. Make sure all Docker stacks relying on the iSCSI drive are shut down.
1. Update the `fstab` entry. Edit the `/etc/fstab` file as root, and update the drive letter.
2. Re-mount the drive. Run `sudo mount -a`.
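A more change-proof variant of the `fstab` entry pins the filesystem UUID instead of the drive letter (a sketch; the UUID shown is the one the mount unit later in this doc references — substitute whatever `sudo blkid /dev/sdb1` reports):

```
# /etc/fstab -- reference the partition by UUID so adding/removing drives can't break it
UUID=cf3a253c-e792-48b5-89a1-f91deb02b3be /mnt/iscsi ext4 _netdev 0 0
```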
### Creating the Zvol and iSCSI share in TrueNAS Scale
1. Navigate to the dataset to use. From the TrueNAS Scale dashboard, open the navigation side panel. Navigate to "Datasets". Select the pool to use (`Tank`).
2. Create the Zvol to use. In the top-left, click "Add Zvol" ([Why not a dataset?](https://www.truenas.com/community/threads/dataset-over-zvol-or-vice-versa.45526/)). Name: `fighter`, Size for this zvol: `8 TiB`. Leave all other settings default.
3. Navigate to the iSCSI share creator. Navigate to "Shares". Open the "Block (iSCSI) Shares Targets" panel. (Optionally, set the base name per [RFC 3721 1.1](https://datatracker.ietf.org/doc/html/rfc3721.html#section-1.1) (`iqn.2020-03.net.jafner`)).
4. Create the iSCSI share. Click the "Wizard" button in the top-right.
a. Create or Choose Block Device. Name: `fighter`, Device: `zvol/Tank/fighter`, Sharing Platform: `Modern OS`.
b. Portal. Portal: `Create New`, Discovery Authentication Method: `NONE`, Discovery Authentication Group: `NONE`, Add listen: `0.0.0.0`.
c. Initiator. Leave blank to allow all hostnames and IPs to initiate. Optionally, enter a list of IP addresses or hostnames to restrict access to the device.
d. Confirm. Review and Save.
5. Enable iSCSI service at startup. Navigate to System Settings -> Services. If it's not already running, enable the iSCSI service and check the box to "Start Automatically".
### Connecting to the iSCSI target
1. Install the `open-iscsi` package.
- `sudo apt-get install open-iscsi`
2. Get the list of available shares.
- `sudo iscsiadm --mode discovery --type sendtargets --portal 192.168.1.10`
- The IP for `--portal` is the IP of the NAS hosting the iSCSI share.
- In my case, this command returns `192.168.1.10:3260,1 iqn.2020-03.net.jafner:fighter`.
3. Open the iSCSI session.
- `sudo iscsiadm --mode node --targetname "iqn.2020-03.net.jafner:fighter" --portal "192.168.1.10:3260" --login`
- The name for `--targetname` is the iqn string including the share name.
- The address for `--portal` has both the IP and port used by the NAS hosting the iSCSI share.
4. Verify the session connected.
- `sudo iscsiadm --mode session --print=1`
- This should return the description of any active sessions.
[Debian.org](https://wiki.debian.org/SAN/iSCSI/open-iscsi).
### Initializing the iSCSI disk
1. Identify the device name of the new device with `sudo iscsiadm -m session -P 3 | grep "Attached scsi disk"`. In my case, `sdb`. [ServerFault](https://serverfault.com/questions/828401/how-can-i-determine-if-an-iscsi-device-is-a-mounted-linux-filesystem).
2. Partition and format the device. Run `sudo parted --script /dev/sdb "mklabel gpt" && sudo parted --script /dev/sdb "mkpart primary 0% 100%" && sudo mkfs.ext4 /dev/sdb1` [Server-world.info](https://www.server-world.info/en/note?os=Debian_11&p=iscsi&f=3).
3. Mount the new partition to a directory. Run `sudo mkdir /mnt/iscsi && sudo mount /dev/sdb1 /mnt/iscsi`, where `/dev/sdb1` is the newly created partition and `/mnt/iscsi` is the desired mount point.
4. Test the disk write speed of the new partition. Run `sudo dd if=/dev/zero of=/mnt/iscsi/temp.tmp bs=1M count=32768` to run a 32 GiB test write. [Cloudzy.com](https://cloudzy.com/blog/test-disk-speed-in-linux/).
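The device-name lookup in step 1 can be turned into a tiny helper that prints the partition path. This is a sketch: the sample line mimics the format `iscsiadm -m session -P 3` prints, and on a live host the real (root-only) command would be piped in instead.

```sh
# Extract the kernel device name from iscsiadm session output.
# In a line like "Attached scsi disk sdb ... State: running", field 4 is the device.
sample='Attached scsi disk sdb          State: running'
dev=$(printf '%s\n' "$sample" | awk '/Attached scsi disk/ {print $4}')
echo "disk: /dev/$dev, first partition: /dev/${dev}1"
# prints: disk: /dev/sdb, first partition: /dev/sdb1
```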
### Connecting and mounting the iSCSI share on boot
1. Get the full path of the share's configuration. It should be like `/etc/iscsi/nodes/<share iqn>/<share host address>/default`. In my case it was `/etc/iscsi/nodes/iqn.2020-03.net.jafner:fighter/192.168.1.10,3260,1/default`. [Debian.org](https://wiki.debian.org/SAN/iSCSI/open-iscsi).
2. Set the `node.startup` parameter to `automatic`. Run `sudo sed -i 's/node.startup = manual/node.startup = automatic/g' /etc/iscsi/nodes/iqn.2020-03.net.jafner:fighter/192.168.1.10,3260,1/default`.
3. Add the new mount to `/etc/fstab`. Run `sudo bash -c "echo '/dev/sdb1 /mnt/iscsi ext4 _netdev 0 0' >> /etc/fstab"`. [Adamsdesk.com](https://www.adamsdesk.com/posts/sudo-echo-permission-denied/), [StackExchange](https://unix.stackexchange.com/questions/195116/mount-iscsi-drive-at-boot-system-halts).
### How to Gracefully Terminate iSCSI Session
1. Ensure any Docker containers currently using the device are shut down. Run `for stack in /home/admin/homelab/fighter/config/*; do cd $stack && if $(docker-compose config | grep -q /mnt/iscsi); then echo "ISCSI-DEPENDENT: $stack"; fi ; done` to get the list of iSCSI-dependent stacks. Ensure all listed stacks are OK to shut down, then run `for stack in /home/admin/homelab/fighter/config/*; do cd $stack && if $(docker-compose config | grep -q /mnt/iscsi); then echo "SHUTTING DOWN $stack" && docker-compose down; fi ; done`.
2. Unmount the iSCSI device. Run `sudo umount /mnt/iscsi`.
3. Log out of the iSCSI session. Run `sudo iscsiadm --mode node --targetname "iqn.2020-03.net.jafner:fighter" --portal "192.168.1.10:3260" --logout`.
4. Shut down the host. Run `sudo shutdown now`.
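The stack-scanning loops in step 1 can be collected into a small function. This sketch greps each stack directory for the mount path directly instead of rendering `docker-compose config`, so it's a quick first pass rather than an exact equivalent (variable interpolation in compose files is not resolved):

```sh
# Print each stack directory under $1 whose files mention the iSCSI mount path.
list_iscsi_stacks() {
  local root=$1
  for stack in "$root"/*/; do
    # -r: recurse into the stack dir; -q: quiet; -s: skip unreadable/missing files
    if grep -rqs '/mnt/iscsi' "$stack"; then
      echo "ISCSI-DEPENDENT: $stack"
    fi
  done
}
list_iscsi_stacks /home/admin/homelab/fighter/config
```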
### Systemd-ifying the process
Remove the iSCSI mount from `/etc/fstab`, but otherwise most of the steps above should be fine. (Don't forget to install and enable the `iscsid.service` systemd unit).
#### Script for connecting to (and disconnecting from) iSCSI session
Each of these is a single command, but it's sometimes useful to keep them in standalone scripts.
[`connect-iscsi.sh`](../fighter/scripts/connect-iscsi.sh)
```sh
#!/bin/bash
iscsiadm --mode node --targetname iqn.2020-03.net.jafner:fighter --portal 192.168.1.10:3260 --login
```
[`disconnect-iscsi.sh`](../fighter/scripts/disconnect-iscsi.sh)
```sh
#!/bin/bash
iscsiadm --mode node --targetname iqn.2020-03.net.jafner:fighter --portal 192.168.1.10:3260 --logout
```
#### Systemd Unit for connecting iSCSI session
`/etc/systemd/system/connect-iscsi.service` with `root:root 644` permissions
```ini
[Unit]
Description=Connect iSCSI session
Requires=network-online.target
After=network-online.target
DefaultDependencies=no
[Service]
User=root
Group=root
Type=oneshot
RemainAfterExit=true
ExecStart=iscsiadm --mode node --targetname iqn.2020-03.net.jafner:fighter --portal 192.168.1.10:3260 --login
StandardOutput=journal
[Install]
WantedBy=multi-user.target
```
#### Systemd Unit for mounting the share
`/etc/systemd/system/mnt-nas-iscsi.mount` with `root:root 644` permissions
Note that the file name *must* be `mnt-nas-iscsi.mount`, because systemd derives a mount unit's name by escaping its `Where=` path (`/mnt/nas/iscsi` escapes to `mnt-nas-iscsi`).
[Docs](https://www.freedesktop.org/software/systemd/man/latest/systemd.mount.html)
```ini
[Unit]
Description="Mount iSCSI share /mnt/nas/iscsi"
After=connect-iscsi.service
DefaultDependencies=no
[Mount]
What=/dev/disk/by-uuid/cf3a253c-e792-48b5-89a1-f91deb02b3be
Where=/mnt/nas/iscsi
Type=ext4
StandardOutput=journal
[Install]
WantedBy=multi-user.target
```
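Rather than working out the escaped name by hand, `systemd-escape` can derive the required unit file name for any mount path (assuming systemd is installed, which it is wherever these units run):

```sh
# Ask systemd for the unit name that corresponds to a mount path.
systemd-escape -p --suffix=mount /mnt/nas/iscsi
# prints: mnt-nas-iscsi.mount
```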
#### Systemd Unit for automounting the share
`/etc/systemd/system/mnt-nas-iscsi.automount` with `root:root 644` permissions
Note that the file name *must* be `mnt-nas-iscsi.automount`, matching the escaped form of its `Where=` path (`/mnt/nas/iscsi` escapes to `mnt-nas-iscsi`).
[Docs](https://www.freedesktop.org/software/systemd/man/latest/systemd.mount.html)
```ini
[Unit]
Description="Automount iSCSI share /mnt/nas/iscsi"
Requires=network-online.target
After=network-online.target
[Automount]
Where=/mnt/nas/iscsi
[Install]
WantedBy=multi-user.target
```
#### Quick interactive one-liner to install these scripts
This opens each file for editing in nano under `/etc/systemd/system/`, then sets the correct ownership and permissions and enables the unit after the file has been written.
```sh
for file in /etc/systemd/system/connect-iscsi.service /etc/systemd/system/mnt-nas-iscsi.mount /etc/systemd/system/mnt-nas-iscsi.automount; do sudo nano $file && sudo chown root:root $file && sudo chmod 644 $file && sudo systemctl enable $(basename $file); done && sudo systemctl daemon-reload
```
After this, it's probably a good idea to reboot and confirm the session and mount come up cleanly.
#### Check statuses
- `sudo systemctl status connect-iscsi.service`
- `sudo systemctl status mnt-nas-iscsi.mount`
- `sudo systemctl status mnt-nas-iscsi.automount`
References:
- [Mount iSCSI drive at boot, system halts (StackExchange)](https://unix.stackexchange.com/questions/195116/mount-iscsi-drive-at-boot-system-halts)
- [f1linux/iscsi-automount: config-iscsi-storage.sh](https://github.com/f1linux/iscsi-automount/blob/master/config-iscsi-storage.sh)
- [f1linux/iscsi-automount: config-iscsi-storage-mounts.sh](https://github.com/f1linux/iscsi-automount/blob/master/config-iscsi-storage-mounts.sh)
#### Disabling all iSCSI units for debugging
During an extended outage of barbarian, we learned that, as configured, fighter will not boot while its iSCSI target is inaccessible. To resolve, we disabled the following systemd units:
```
iscsi.service
mnt-nas-iscsi.automount
mnt-nas-iscsi.mount
connect-iscsi.service
barbarian-wait-online.service
iscsid.service
```
One-liners:
- Disable: `for unit in iscsi.service mnt-nas-iscsi.automount mnt-nas-iscsi.mount connect-iscsi.service barbarian-wait-online.service iscsid.service; do systemctl disable $unit; done`
- Enable: `for unit in iscsi.service mnt-nas-iscsi.automount mnt-nas-iscsi.mount connect-iscsi.service barbarian-wait-online.service iscsid.service; do systemctl enable $unit; done`

# Configure SMTP Submission via ProtonMail
| Key | Value |
|:---:|:-----:|
| From Address | noreply@jafner.net |
| From Name | No Reply |
| Protocol | SMTP |
| Mail Server | smtp.protonmail.ch |
| Mail Server Port | 465 |
| Security | SSL (Implicit TLS) |
| SMTP Authentication | Yes |
| Username | noreply@jafner.net |
| Password | *Use a unique SMTP token (see below)* |
> Note: As of now, ProtonMail's SMTP submission feature is restricted to [Proton for Business](https://proton.me/business/plans), [Visionary](https://proton.me/support/proton-plans#proton-visionary), and [Family](https://proton.me/support/proton-plans#proton-family) plans. Additionally, new accounts must submit a support ticket articulating their use-case and domains to add in order to get SMTP submission enabled for their account.
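For reference, the table above maps onto an `msmtp` client configuration roughly like this (a sketch: the host assumes ProtonMail's documented `smtp.protonmail.ch` endpoint, and the `passwordeval` path is a placeholder for wherever the SMTP token is stored):

```
# ~/.msmtprc -- example submission config built from the settings above
account protonmail
host smtp.protonmail.ch
# port 465 is implicit TLS, so STARTTLS is disabled
port 465
tls on
tls_starttls off
auth on
user noreply@jafner.net
from noreply@jafner.net
# placeholder path; point this at wherever the SMTP token is stored
passwordeval cat ~/.config/msmtp/smtp-token
account default : protonmail
```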
## Create a Token
1. To get a token, navigate to the [ProtonMail Settings -> IMAP/SMTP](https://account.proton.me/u/0/mail/imap-smtp), then click "Generate token".
2. Set the "Token name" to the service that will be sending emails.
3. Set the "Email address" to "noreply@jafner.net".
## References
1. [ProtonMail Support - How to set up SMTP to use business applications or devices with Proton Mail](https://proton.me/support/smtp-submission)
