Braindump for March 2024 Rescue

Joey Hafner, 2024-03-30 17:54:10 -07:00
parent 494152be66, commit f8fee11980
6 changed files with 388 additions and 12 deletions

@@ -41,8 +41,19 @@ It might be possible to pull the part UUID from the `zpool status` command directly
# Replace Disk in Pool
Once the failed disk has been identified and physically replaced, you should know the old drive's UUID (via `zpool status`) and the new drive's device name (via `lsblk` and deduction).
Once the new drive is in place and you know its ID (e.g. `/dev/sdx`), run the following to begin the resilver process:
`zpool replace <pool> <part-uuid to be replaced> <device id of new drive>`
E.g. `zpool replace Media d50abb30-81fd-49c6-b22e-43fcee2022fe /dev/sdx`
This will begin a new resilver operation. Good luck!
https://docs.oracle.com/cd/E19253-01/819-5461/gazgd/index.html
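To confirm the resilver actually kicked off and watch its progress, polling `zpool status` works fine; a minimal sketch (pool name taken from the example above):
```sh
# The scan: line reports resilver progress and an estimated completion time.
zpool status Media | grep -A 2 "scan:"
```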
# Update Log
**Most recent first**
- *2024/03/12*: Replaced VLKXPS1V with VKH40L6X at Y6/X3
- *2024/02/28*: Replaced 2EKA92XX with VLKXPS1V at Y6/X3
- *2024/02/27*: Replaced VJG2T4YX with VJG282NX at Y2/X3

@@ -30,20 +30,24 @@ I carefully calculated that the number of times you live is once, so we're flyin
We identify which drives are in the wrong pool by running `sudo zpool status TEMP`, finding each part-uuid in the `/dev/disk/by-partuuid` directory, where it's symlinked to a standard Linux partition name (e.g. `/dev/sda1`). From there, we run `smartctl -a` against the device name and filter to get the serial number. Then we check each serial number against the table in [diskshelfmap](DISKSHELFMAP.md). I wrote a one-liner.
```sh
for id in $(sudo zpool status TEMP | grep -E "[0-9a-fA-F]{8}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{12} (ONLINE|DEGRADED)" | tr -s ' ' | cut -d' ' -f 2); do
echo -n "$id -> ";
ls -l /dev/disk/by-partuuid |\
grep $id |\
cut -d' ' -f 12 |\
cut -d'/' -f 3 |\
sed 's/^/\/dev\//' |\
xargs sudo smartctl -a |\
grep Serial |\
tr -s ' ' |\
cut -d' ' -f 3
done
```
```sh
for id in $(sudo zpool status TEMP | grep -E "[0-9a-fA-F]{8}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{12} (ONLINE|DEGRADED)" | tr -s ' ' | cut -d' ' -f 2); do echo -n "$id -> "; ls -l /dev/disk/by-partuuid | grep $id | cut -d' ' -f 12 | cut -d'/' -f 3 | sed 's/^/\/dev\//' | xargs sudo smartctl -a | grep Serial | tr -s ' ' | cut -d' ' -f 3; done
```
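For what it's worth, a newer `lsblk` (util-linux with the `PARTUUID` and `SERIAL` columns) can get most of the way there in one command; serials are reported on the disk rows and part-uuids on the partition rows, so you match them up visually:
```sh
# One-command view of partition part-uuids alongside parent-disk serials.
lsblk -o NAME,PARTUUID,SERIAL
```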
Output:
```
dad98d96-3cbe-469e-b262-b8416dfc72ec -> 2EKA92XX
```

@@ -225,3 +229,223 @@ We recreate our datasets with default settings and begin the copy back from `TEMP`
And we wait. Painfully. We can check in occasionally with `tail -f ~/copy.tmp`, and we should get an email notification when the command completes.
Compare disk usage and file count between directories:
```
CHECKPATH="Sub/Directory";
echo "Source: /mnt/TEMP/$CHECKPATH";
echo -n " Disk usage: " && sudo du -s /mnt/TEMP/$CHECKPATH;
echo -n " File count: " && sudo find /mnt/TEMP/$CHECKPATH -type f | wc -l;
echo "Dest: /mnt/Media/$CHECKPATH";
echo -n " Disk usage: " && sudo du -s /mnt/Media/$CHECKPATH;
echo -n " File count: " && sudo find /mnt/Media/$CHECKPATH -type f | wc -l
```
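If size and file count aren't reassuring enough, a checksum dry-run with rsync will name any files that differ; a sketch, reusing the same `$CHECKPATH`:
```sh
# -r recurse, -c compare by checksum, -n dry run: prints differing files,
# copies nothing.
sudo rsync -rcn --out-format="%n" "/mnt/TEMP/$CHECKPATH/" "/mnt/Media/$CHECKPATH/"
```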
Final output of `sudo zpool status Media TEMP`:
```
  pool: Media
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: resilvered 9.19M in 00:00:03 with 0 errors on Sun Mar 3 10:30:11 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        Media                                     DEGRADED     0     0     0
          raidz2-0                                DEGRADED     0     0     0
            a9df1c82-cc15-4971-8080-42056e6213dd  ONLINE       0     0     0
            8398ae95-9119-4dd6-ab3a-5c0af82f82f4  ONLINE       0     0     0
            44ae3ae0-e8f9-4dbc-95ba-e64f63ab7460  ONLINE       0     0     0
            eda6547f-9f25-4904-a5bd-8f8b4e36d859  ONLINE       0     0     0
            05241f52-542c-4c8c-8f20-d34d2878c41a  ONLINE       0     0     0
            38cd7315-e269-4acc-a05b-e81362a9ea39  ONLINE       0     0     0
            d50abb30-81fd-49c6-b22e-43fcee2022fe  FAULTED     23     0     0  too many errors
            90be0e9e-7af1-4930-9437-c36c24ea81c5  ONLINE       0     0     0
            29b36c4c-8ad2-4dcb-9b56-08f5458817d2  ONLINE       0     0     0
            d59a8281-618d-4bab-bd22-9f9f377baacf  ONLINE       0     0     0
            e0431a50-b5c6-459e-85bd-d648ec2c21d6  ONLINE       0     0     0
            cd4808a8-a137-4121-a5ff-4181faadee64  ONLINE       0     0     0

errors: No known data errors

  pool: TEMP
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 3.17M in 14:29:34 with 4 errors on Sun Mar 3 14:29:35 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        TEMP                                      DEGRADED     0     0     0
          raidz2-0                                DEGRADED  945K     0     0
            dad98d96-3cbe-469e-b262-b8416dfc72ec  ONLINE       0     0     0
            0fabfe6b-5305-4711-921c-926110df24b7  OFFLINE      0     0     0
            864e8c2d-0925-4018-b440-81807d3c5c9a  ONLINE      11     0     0
            d7b9a2ec-5f26-4649-a7fb-cb1ae953825e  ONLINE       0     0     0
            dd453900-d8c0-430d-bc1f-c022e62417ae  OFFLINE      0     0     0
            507666cd-91e2-4960-af02-b15899a22487  ONLINE       0     0     0
            5eaf90b6-0ad1-4ec0-a232-d704c93dae9a  ONLINE       0     0     0
            cf9cc737-a704-4bea-bcee-db2cfe4490b7  ONLINE       0     0     0
            50cc36be-001d-4e00-a0ca-e4b557bd6852  DEGRADED  366K     0     2  too many errors
            142d45e8-4f30-4492-b01f-f22cba129fee  DEGRADED  596K     0     6  too many errors
```
And we grab the serials of each of our drives to ensure the degraded drives don't get pulled back into the pool:
```sh
for id in $(sudo zpool status TEMP | grep -E "[0-9a-fA-F]{8}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{12} (ONLINE|DEGRADED|FAULTED)" | tr -s ' ' | cut -d' ' -f 2); do
echo -n "$id -> ";
ls -l /dev/disk/by-partuuid |\
grep $id |\
tr -s ' ' |\
cut -d' ' -f 11 |\
cut -d'/' -f 3 |\
sed 's/^/\/dev\//' |\
xargs sudo smartctl -a |\
grep Serial |\
tr -s ' ' |\
cut -d' ' -f 3
done
```
Which gives us:
```
dad98d96-3cbe-469e-b262-b8416dfc72ec -> 2EKA92XX
864e8c2d-0925-4018-b440-81807d3c5c9a -> VJG2T4YX
d7b9a2ec-5f26-4649-a7fb-cb1ae953825e -> VKH3XR2X
507666cd-91e2-4960-af02-b15899a22487 -> VJG1NP9X
5eaf90b6-0ad1-4ec0-a232-d704c93dae9a -> VKH40L6X
cf9cc737-a704-4bea-bcee-db2cfe4490b7 -> VJG2808X
50cc36be-001d-4e00-a0ca-e4b557bd6852 -> VKHNH0GX # Degraded
142d45e8-4f30-4492-b01f-f22cba129fee -> VKJWPAEX # Degraded
```
Then we hit the big red button again: `Storage -> TEMP -> Export/Disconnect`
- [X] Destroy data on this pool?
- [X] Delete configuration of shares that used this pool?
- [X] Confirm Export/Disconnect?
Cry a little bit, then hit the final Export/Disconnect button.
And we'll also grab the partuuid-to-serial mappings for the Media pool:
```
a9df1c82-cc15-4971-8080-42056e6213dd -> VJGJVTZX
8398ae95-9119-4dd6-ab3a-5c0af82f82f4 -> VKGW5YGX
44ae3ae0-e8f9-4dbc-95ba-e64f63ab7460 -> VJG282NX
eda6547f-9f25-4904-a5bd-8f8b4e36d859 -> 2EKA903X
05241f52-542c-4c8c-8f20-d34d2878c41a -> VJGRRG9X
38cd7315-e269-4acc-a05b-e81362a9ea39 -> VKH3Y3XX
90be0e9e-7af1-4930-9437-c36c24ea81c5 -> VJGR6TNX
29b36c4c-8ad2-4dcb-9b56-08f5458817d2 -> VJGJAS1X
d59a8281-618d-4bab-bd22-9f9f377baacf -> 2EKATR2X
e0431a50-b5c6-459e-85bd-d648ec2c21d6 -> VJGK56KX
cd4808a8-a137-4121-a5ff-4181faadee64 -> VJGJUWNX
```
We shut down the server, then the NAS.
After shutting down the NAS, I realize that I am stupid. My one-liner to convert partuuid to serial only grabs devices with the ONLINE or DEGRADED status, not FAULTED. Whatever. We can fix that later.
Next, we're going to formalize a few of the datasets we had in the Media pool:
- `Media/Media/3D Printing` -> `Media/3DPrinting`
- `Media/Media/Audio` -> `Media/Audio`
- `Media/Media/Images` -> `Media/Images`
- `Media/Media/Text` -> `Media/Text`
- `Media/Media/Video` -> `Media/Video`
We're basically pulling every type of Media up one directory.
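If the old locations are plain directories inside the top-level dataset (rather than datasets themselves), the promotion looks roughly like this; a sketch, with `3DPrinting` standing in for each type:
```sh
# Create the new dataset, then move the contents up one level.
# Assumes "3D Printing" was a plain directory, not a dataset (check `zfs list`).
sudo zfs create Media/3DPrinting
sudo rsync -a "/mnt/Media/Media/3D Printing/" /mnt/Media/3DPrinting/
# After verifying the copy, remove the old directory.
sudo rm -r "/mnt/Media/Media/3D Printing"
```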
### Configuring ACLs for New Datasets
Our hosts are configured to connect as the user `smbuser` with the group `smbuser`.
So when we create a Unix ACL for a new dataset, we configure as follows:
1. Owner -> User: `smbuser` with box checked for Apply User
2. Owner -> Group: `smbuser` with box checked for Apply Group
3. Check box for Apply permissions recursively. (Confirm and continue).
4. Leave access mode matrix as default (755).
5. Save.
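For reference, a rough CLI equivalent of those UI steps (mountpoint assumed for illustration):
```sh
# Recursively set owner/group to smbuser and access mode to 755,
# mirroring steps 1-4 above.
sudo chown -R smbuser:smbuser /mnt/Media/3DPrinting
sudo chmod -R 755 /mnt/Media/3DPrinting
```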
### Riding the Update Train
*choo choo*
It's been a while since I updated TrueNAS. This install was created a bit before TrueNAS existed, and has been upgraded once, from FreeNAS (BSD) to TrueNAS Scale (Linux).
Our installed version is TrueNAS-22.12.3. Latest stable is 23.10.2. Latest beta is 24.04.
According to the [upgrade paths](https://www.truenas.com/docs/truenasupgrades/#upgrade-paths) page, our upgrade path should go:
1. To `22.12.4.2`, the final patch of 22.12.
2. To `23.10.1.3`, the latest stable version of the Cobia update train.
From there, we have the choice to upgrade to the Dragonfish nightly build ([release notes](https://www.truenas.com/docs/scale/gettingstarted/scalereleasenotes/)).
### Setting up Rsync Backups
In order to connect to our backup NAS, we use the following parameters when configuring our Rsync tasks (we'll use the `HomeVideos` dataset for example; a roughly equivalent plain `rsync` invocation is sketched after this list):
- Path: `/mnt/Media/HomeVideos/` We use the trailing slash: it tells rsync to copy the *contents* of the directory rather than the directory itself, which matches how the destination is already laid out.
- Rsync Mode: `SSH`
- Connect using: `SSH private key stored in user's home directory` We have an SSH private key in the home directory of the `root` user.
- Remote Host: `admin@192.168.1.11`
- Remote SSH Port: `22`
- Remote Path: `/mnt/Backup/Backup/Media/Media/Video/HomeVideos` We have the data organized by the old dataset layout. Some day I'll fix that. Surely...
- User: `root`
- Direction: `Push`
- Description: `Backup: HomeVideos`
- Schedule: `Daily (0 0 * * *) At 00:00 (12:00 AM)`
- Recursive: `[X]`
- Enabled: `[X]`
- Times: `[X]`
- Compress: `[X]`
- Archive: `[X]`
- Delete: `[X]`
- Delay Updates: `[X]`
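As far as I can tell, that task boils down to roughly this invocation (a sketch; TrueNAS builds the actual command itself):
```sh
# -a archive (implies recursive + times), -z compress; push over SSH as root.
rsync -az --delete --delay-updates -e "ssh -p 22" \
    /mnt/Media/HomeVideos/ \
    admin@192.168.1.11:/mnt/Backup/Backup/Media/Media/Video/HomeVideos
```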
### Reorganizing our Media shares
In moving our datasets,
- `/mnt/Media/Media/Video/Movies` to `/mnt/Media/Movies`,
- `/mnt/Media/Media/Video/Shows` to `/mnt/Media/Shows`, and
- `/mnt/Media/Media/Audio/Music` to `/mnt/Media/Music`
we will need to reorganize some stuff and reconfigure anything dependent on those datasets. This includes:
- SMB shares (`Media` will need to be replaced with `Movies` and `Shows`)
- Snapshot tasks will need to be created for the datasets
- SMB client reconfiguration. Any host connecting to the old `Media` share expects a certain directory structure below it. We'll need to cope with that.
Below I document all uses of the `/mnt/nas/media` directory as absolute `host:container` mappings:
- Autopirate:
- Radarr: `/mnt/nas/media/Video/Movies:/movies`
- Sonarr: `/mnt/nas/media/Video/Shows:/shows`
- Bazarr: `/mnt/nas/media/Video/Movies:/movies`, `/mnt/nas/media/Video/Shows:/tv`
- Sabnzbd: `/mnt/nas/media/Video/Movies:/movies`, `/mnt/nas/media/Video/Shows:/shows`, `/mnt/nas/media/Audio/Music:/music`
- Tdarr: `/mnt/nas/media/Video/Movies:/movies`, `/mnt/nas/media/Video/Shows:/shows`
- Tdarr-node: `/mnt/nas/media/Video/Movies:/movies`, `/mnt/nas/media/Video/Shows:/shows`
- Jellyfin:
- Jellyfin: `/mnt/nas/media/Video/Movies:/data/movies`, `/mnt/nas/media/Video/Shows:/data/tvshows`
- Plex:
- Plex: `/mnt/nas/media/Video/Movies:/movies`, `/mnt/nas/media/Video/Shows:/shows`, `/mnt/nas/media/Audio/Music:/music`
We're gonna have to refactor all of these.
Most use `MEDIA_DIR=/mnt/nas/media` in their `.env` file as the baseline.
We'll need to replace that with `MOVIES_DIR=/mnt/nas/movies` and `SHOWS_DIR=/mnt/nas/shows`. Also `MUSIC_DIR=/mnt/nas/music` I guess.
Then we'll need to find the lines in each compose file which look like `${MEDIA_DIR}/Video/Movies` and `${MEDIA_DIR}/Video/Shows` and replace them with `${MOVIES_DIR}` and `${SHOWS_DIR}` respectively.
Also `${MEDIA_DIR}/Audio/Music` to `${MUSIC_DIR}`.
None of the container-side mappings should need to be changed.
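A hypothetical one-shot refactor over the compose files (the `config/*` layout comes from the iSCSI notes below; compose filenames are assumed, so review the diff before committing):
```sh
# Rewrite the old MEDIA_DIR-based mappings to the new per-type variables
# in every stack's compose file.
for f in /home/admin/homelab/fighter/config/*/docker-compose.yml; do
  sed -i \
    -e 's|${MEDIA_DIR}/Video/Movies|${MOVIES_DIR}|g' \
    -e 's|${MEDIA_DIR}/Video/Shows|${SHOWS_DIR}|g' \
    -e 's|${MEDIA_DIR}/Audio/Music|${MUSIC_DIR}|g' \
    "$f"
done
```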
### Replacing Yet Another Disk
The drive hosting part-uuid `d50abb30-81fd-49c6-b22e-43fcee2022fe` failed 7 SMART short tests in a row while we were moving our data around. Great.
So we get the disk ID from the part-uuid (we already knew it was `/dev/sdx` because of the email notifications I was getting spammed with during the move, but let's follow the exercise) with `ls -l /dev/disk/by-partuuid | grep d50abb30-81fd-49c6-b22e-43fcee2022fe`, which informs us that the partition label is `../../sdx2`. So we open the web UI, navigate to the Manage Disks panel of the Media pool, find our bad drive, make note of the serial number, and hit Offline. Once that's done, we check [diskshelfmap](./DISKSHELFMAP.md) to see where that drive was located. We physically remove the caddy from the shelf, then the drive from the caddy. Throw the new drive in, note its serial number, and document the swap in diskshelfmap. We wait a bit for the drive to be recognized. We run a quick sanity check on the new drive to make sure its SMART info looks good and the serial number matches (`smartctl -a /dev/sdx`), then we kick off the replacement and resilver with `zpool replace Media d50abb30-81fd-49c6-b22e-43fcee2022fe /dev/sdx`.
Now we wait like 2 days for the resilver to finish and we hope no other drives fail in the meantime.
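While we wait, it may be worth queuing a long SMART self-test on the new drive to shake out early failures; a sketch (device name assumed from above, and note both the test and the resilver will run slower side by side):
```sh
# Start a long self-test in the background on the new drive...
sudo smartctl -t long /dev/sdx
# ...and check the self-test log once it's had time to finish.
sudo smartctl -l selftest /dev/sdx
```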

@@ -40,4 +40,100 @@ To resolve:
1. Ensure any Docker containers currently using the device are shut down. Run `for stack in /home/admin/homelab/fighter/config/*; do cd $stack && if $(docker-compose config | grep -q /mnt/iscsi); then echo "ISCSI-DEPENDENT: $stack"; fi ; done` to get the list of iSCSI-dependent stacks. Ensure all listed stacks are OK to shut down, then run `for stack in /home/admin/homelab/fighter/config/*; do cd $stack && if $(docker-compose config | grep -q /mnt/iscsi); then echo "SHUTTING DOWN $stack" && docker-compose down; fi ; done` (expanded for readability below).
2. Unmount the iSCSI device. Run `sudo umount /mnt/iscsi`.
3. Log out of the iSCSI session. Run `sudo iscsiadm --mode node --targetname "iqn.2020-03.net.jafner:fighter" --portal "192.168.1.10:3260" --logout`.
4. Shut down the host. Run `sudo shutdown now`.
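The shutdown loop from step 1, expanded for readability (same logic, written as a plain `if` on the pipeline's exit status):
```sh
# Find every stack whose compose config references the iSCSI mount
# and bring it down.
for stack in /home/admin/homelab/fighter/config/*; do
  cd "$stack" || continue
  if docker-compose config | grep -q /mnt/iscsi; then
    echo "SHUTTING DOWN $stack"
    docker-compose down
  fi
done
```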
# Systemd-ifying the process
Remove the iSCSI mount from `/etc/fstab`, but otherwise most of the steps above should be fine. (Don't forget to install and enable the `iscsid.service` systemd unit).
### Script for connecting to (and disconnecting from) iSCSI session
Each of these is a single command, but sometimes it's useful to wrap it in a script.
[`connect-iscsi.sh`](../fighter/scripts/connect-iscsi.sh)
```sh
#!/bin/bash
iscsiadm --mode node --targetname iqn.2020-03.net.jafner:fighter --portal 192.168.1.10:3260 --login
```
[`disconnect-iscsi.sh`](../fighter/scripts/disconnect-iscsi.sh)
```sh
#!/bin/bash
iscsiadm --mode node --targetname iqn.2020-03.net.jafner:fighter --portal 192.168.1.10:3260 --logout
```
### Systemd Unit for connecting iSCSI session
`/etc/systemd/system/connect-iscsi.service` with `root:root 644` permissions
```ini
[Unit]
Description=Connect iSCSI session
Requires=network-online.target
#After=
DefaultDependencies=no

[Service]
User=root
Group=root
Type=oneshot
RemainAfterExit=true
ExecStart=iscsiadm --mode node --targetname iqn.2020-03.net.jafner:fighter --portal 192.168.1.10:3260 --login
StandardOutput=journal

[Install]
WantedBy=multi-user.target
```
### Systemd Unit for mounting the share
`/etc/systemd/system/mnt-nas-iscsi.mount` with `root:root 644` permissions
Note that the file name *must* be `mnt-nas-iscsi.mount` if its `Where=` parameter is `/mnt/nas/iscsi`: systemd requires a mount unit's name to match its mount path, as escaped by `systemd-escape`.
[Docs](https://www.freedesktop.org/software/systemd/man/latest/systemd.mount.html)
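`systemd-escape` will tell you the required unit name for any mount path; a quick check:
```sh
# Prints "mnt-nas-iscsi.mount": the unit name systemd expects for this path.
systemd-escape -p --suffix=mount /mnt/nas/iscsi
```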
```ini
[Unit]
Description="Mount iSCSI share /mnt/nas/iscsi"
After=connect-iscsi.service
DefaultDependencies=no

[Mount]
What=/dev/disk/by-uuid/cf3a253c-e792-48b5-89a1-f91deb02b3be
Where=/mnt/nas/iscsi
Type=ext4
StandardOutput=journal

[Install]
WantedBy=multi-user.target
```
### Systemd Unit for automounting the share
`/etc/systemd/system/mnt-nas-iscsi.automount` with `root:root 644` permissions
Note that the file name *must* be `mnt-nas-iscsi.automount` if its `Where=` parameter is `/mnt/nas/iscsi`; the same path-escaping rule as the mount unit applies.
[Docs](https://www.freedesktop.org/software/systemd/man/latest/systemd.mount.html)
```ini
[Unit]
Description="Automount iSCSI share /mnt/nas/iscsi"
Requires=network-online.target
#After=

[Automount]
Where=/mnt/nas/iscsi

[Install]
WantedBy=multi-user.target
```
### Quick interactive one-liner to install these scripts
This will open each file for editing in nano under the path `/etc/systemd/system/` and apply the correct permissions to the file after it has been written.
```sh
for file in /etc/systemd/system/connect-iscsi.service /etc/systemd/system/mnt-nas-iscsi.mount /etc/systemd/system/mnt-nas-iscsi.automount; do sudo nano $file && sudo chown root:root $file && sudo chmod 644 $file && sudo systemctl enable $(basename $file); done && sudo systemctl daemon-reload
```
After this, it's probably a good idea to reboot and confirm everything comes back up from scratch.
### Check statuses
- `sudo systemctl status connect-iscsi.service`
- `sudo systemctl status mnt-nas-iscsi.mount`
- `sudo systemctl status mnt-nas-iscsi.automount`
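A quick functional test of the automount: simply accessing the path should trigger the mount.
```sh
# Listing the path triggers the automount; the mount unit should then
# report "active (mounted)".
ls /mnt/nas/iscsi
systemctl status mnt-nas-iscsi.mount --no-pager
```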
https://unix.stackexchange.com/questions/195116/mount-iscsi-drive-at-boot-system-halts
https://github.com/f1linux/iscsi-automount/blob/master/config-iscsi-storage.sh
https://github.com/f1linux/iscsi-automount/blob/master/config-iscsi-storage-mounts.sh

@@ -0,0 +1,12 @@
A oneshot unit that holds back `network-online.target` until the NAS at 192.168.1.10 answers a ping:
```ini
[Unit]
DefaultDependencies=no
After=nss-lookup.target
Before=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=sh -c 'until ping -c 1 192.168.1.10; do sleep 0.5; done'

[Install]
WantedBy=network-online.target
```

fighter/fstab.txt (new file)

@@ -0,0 +1,29 @@
```
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda2 during installation
UUID=c8df72c4-2827-4697-af92-e245fe9ea5cf / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sda1 during installation
UUID=306E-07E4 /boot/efi vfat umask=0077 0 1
# iscsi block share at /mnt/nas/iscsi
#UUID=cf3a253c-e792-48b5-89a1-f91deb02b3be /mnt/nas/iscsi ext4 _netdev 0 1
# Older entries using x-systemd.requires (superseded by the _netdev entries below):
#//192.168.1.10/Movies /mnt/nas/movies cifs defaults,credentials=/home/admin/.smbcred,uid=1000,gid=1000,x-systemd.requires=network-online.target 0 0
#//192.168.1.10/Shows /mnt/nas/shows cifs defaults,credentials=/home/admin/.smbcred,uid=1000,gid=1000,x-systemd.requires=network-online.target 0 0
#//192.168.1.10/Music /mnt/nas/music cifs defaults,credentials=/home/admin/.smbcred,uid=1000,gid=1000,x-systemd.requires=network-online.target 0 0
#//192.168.1.10/3DPrinting /mnt/nas/3DPrinting cifs defaults,credentials=/home/admin/.smbcred,uid=1000,gid=1000,x-systemd.requires=network-online.target 0 0
#//192.168.1.10/Text/Calibre /mnt/nas/calibre-web cifs defaults,credentials=/home/admin/.smbcred,uid=1000,gid=1000,x-systemd.requires=network-online.target 0 0
#//192.168.1.10/Torrenting /mnt/nas/torrenting cifs defaults,credentials=/home/admin/.smbcred,uid=1000,gid=1000,x-systemd.requires=network-online.target 0 0
#//192.168.1.10/AV /mnt/nas/av cifs defaults,credentials=/home/admin/.smbcred,uid=1000,gid=1000,x-systemd.requires=network-online.target 0 0
//192.168.1.10/Movies /mnt/nas/movies cifs credentials=/home/admin/.smbcred,uid=1000,gid=1000,_netdev 0 0
//192.168.1.10/Shows /mnt/nas/shows cifs credentials=/home/admin/.smbcred,uid=1000,gid=1000,_netdev 0 0
//192.168.1.10/Music /mnt/nas/music cifs credentials=/home/admin/.smbcred,uid=1000,gid=1000,_netdev 0 0
//192.168.1.10/3DPrinting /mnt/nas/3DPrinting cifs credentials=/home/admin/.smbcred,uid=1000,gid=1000,_netdev 0 0
//192.168.1.10/Text/Calibre /mnt/nas/calibre-web cifs credentials=/home/admin/.smbcred,uid=1000,gid=1000,_netdev 0 0
//192.168.1.10/Torrenting /mnt/nas/torrenting cifs credentials=/home/admin/.smbcred,uid=1000,gid=1000,_netdev 0 0
//192.168.1.10/AV /mnt/nas/av cifs credentials=/home/admin/.smbcred,uid=1000,gid=1000,_netdev 0 0
```

@@ -0,0 +1,4 @@
A small helper script that installs the tracked fstab over the live one:
```sh
#!/bin/bash
cat ~/homelab/fighter/fstab.txt | sudo tee /etc/fstab > /dev/null
sudo systemctl daemon-reload
```
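After running it, a quick sanity check that the new table actually mounts (assumes the mountpoint directories already exist):
```sh
# Mount everything in the new fstab, then spot-check one share.
sudo mount -a
findmnt /mnt/nas/movies
```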