Update documentation related to new host: Paladin.

- Update inxi reports for NAS hosts and rename them to `hardware.txt`. Include the command used at the top of each file.
- Document data safety systems for Paladin.
- Update names for DHCP static maps in VyOS to use new hostnames (joey-nas -> barbarian, joey-nas2 -> monk, joey-nas3 -> paladin).
Joey Hafner 2024-07-12 13:25:04 -07:00
parent 651940ce8d
commit 7a64732c9b
8 changed files with 308 additions and 158 deletions

barbarian/hardware.txt Normal file

@@ -0,0 +1,30 @@
# ./inxi -CDGmMNPS --dmidecode
System: Host: barbarian Kernel: 6.6.32-production+truenas x86_64 bits: 64 Console: tty pts/0
Distro: Debian GNU/Linux 12 (bookworm)
Machine: Type: Desktop Mobo: Gigabyte model: X99-SLI-CF v: x.x serial: N/A UEFI: American Megatrends v: F24a rev: 5.6
date: 01/11/2018
Memory: RAM: total: 62.65 GiB used: 2.48 GiB (4.0%)
Array-1: capacity: 512 GiB note: check slots: 8 EC: None
Device-1: DIMM_A1 size: 8 GiB speed: 2133 MT/s
Device-2: DIMM_A2 size: 8 GiB speed: 2133 MT/s
Device-3: DIMM_B1 size: 8 GiB speed: 2133 MT/s
Device-4: DIMM_B2 size: 8 GiB speed: 2133 MT/s
Device-5: DIMM_C1 size: 8 GiB speed: 2133 MT/s
Device-6: DIMM_C2 size: 8 GiB speed: 2133 MT/s
Device-7: DIMM_D1 size: 8 GiB speed: 2133 MT/s
Device-8: DIMM_D2 size: 8 GiB speed: 2133 MT/s
CPU: Info: 6-Core model: Intel Core i7-5930K bits: 64 type: MT MCP cache: L2: 15 MiB
Speed: 1200 MHz min/max: 1200/3700 MHz Core speeds (MHz): 1: 1200 2: 1200 3: 1200 4: 1200 5: 1200 6: 1200 7: 2191
8: 1200 9: 1200 10: 1200 11: 1200 12: 1200
Graphics: Device-1: NVIDIA GK208B [GeForce GT 710] driver: N/A
Display: server: No display server data found. Headless machine? tty: 154x70
Message: Unable to show advanced data. Required tool glxinfo missing.
Network: Device-1: Intel Ethernet I218-V driver: e1000e
Drives: Local Storage: total: raw: 74.53 GiB usable: 146.26 GiB used: 3.95 GiB (2.7%)
ID-1: /dev/sda vendor: Intel model: SSDSCKGW080A4 size: 74.53 GiB
Partition: ID-1: / size: 30.12 GiB used: 163.8 MiB (0.5%) fs: zfs logical: freenas-boot/ROOT/24.04.2
ID-2: /home size: 29.96 GiB used: 128 KiB (0.0%) fs: zfs logical: freenas-boot/ROOT/24.04.2/home
ID-3: /opt size: 30.03 GiB used: 72.1 MiB (0.2%) fs: zfs logical: freenas-boot/ROOT/24.04.2/opt
ID-4: /usr size: 31.85 GiB used: 1.89 GiB (5.9%) fs: zfs logical: freenas-boot/ROOT/24.04.2/usr
ID-5: /var size: 29.98 GiB used: 19.9 MiB (0.1%) fs: zfs logical: freenas-boot/ROOT/24.04.2/var
ID-6: /var/log size: 30.04 GiB used: 85.9 MiB (0.3%) fs: zfs logical: freenas-boot/ROOT/24.04.2/var/log


@@ -1,67 +0,0 @@
System:
Host: barbarian Kernel: 6.6.29-production+truenas arch: x86_64 bits: 64 Console: pty pts/2
Distro: Debian GNU/Linux 12 (bookworm)
Machine:
Type: Desktop Mobo: Gigabyte model: X99-SLI-CF v: x.x serial: N/A UEFI: American Megatrends
v: F24a date: 01/11/2018
Memory:
System RAM: total: 64 GiB available: 62.65 GiB used: 12.81 GiB (20.4%)
Array-1: capacity: 512 GiB note: check slots: 8 modules: 8 EC: None
Device-1: DIMM_A1 type: DDR4 size: 8 GiB speed: 2133 MT/s
Device-2: DIMM_A2 type: DDR4 size: 8 GiB speed: 2133 MT/s
Device-3: DIMM_B1 type: DDR4 size: 8 GiB speed: 2133 MT/s
Device-4: DIMM_B2 type: DDR4 size: 8 GiB speed: 2133 MT/s
Device-5: DIMM_C1 type: DDR4 size: 8 GiB speed: 2133 MT/s
Device-6: DIMM_C2 type: DDR4 size: 8 GiB speed: 2133 MT/s
Device-7: DIMM_D1 type: DDR4 size: 8 GiB speed: 2133 MT/s
Device-8: DIMM_D2 type: DDR4 size: 8 GiB speed: 2133 MT/s
CPU:
Info: 6-core model: Intel Core i7-5930K bits: 64 type: MT MCP cache: L2: 1.5 MiB
Speed (MHz): avg: 1316 min/max: 1200/3700 cores: 1: 1200 2: 1200 3: 1200 4: 1200 5: 1200
6: 1200 7: 1200 8: 1200 9: 1200 10: 1200 11: 2600 12: 1200
Graphics:
Device-1: NVIDIA GK208B [GeForce GT 710] driver: N/A
Display: server: No display server data found. Headless machine? tty: 154x70
API: N/A Message: No API data available in console. Headless machine?
Network:
Device-1: Intel Ethernet I218-V driver: e1000e
Device-2: Mellanox MT26448 [ConnectX EN 10GigE PCIe 2.0 5GT/s] driver: mlx4_core
Drives:
Local Storage: total: raw: 174.73 TiB usable: 139.03 TiB used: 59.12 TiB (42.5%)
ID-1: /dev/sda vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-2: /dev/sdb vendor: Hitachi model: HUH728080AL4200 size: 7.28 TiB
ID-3: /dev/sdc vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-4: /dev/sdd vendor: Hitachi model: HUH728080AL5200 size: 7.28 TiB
ID-5: /dev/sde vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-6: /dev/sdf vendor: Hitachi model: HUH728080AL5200 size: 7.28 TiB
ID-7: /dev/sdg vendor: Hitachi model: HUH728080AL4200 size: 7.28 TiB
ID-8: /dev/sdh vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-9: /dev/sdi vendor: Hitachi model: HUH728080AL5200 size: 7.28 TiB
ID-10: /dev/sdj vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-11: /dev/sdk vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-12: /dev/sdl vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-13: /dev/sdm vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-14: /dev/sdn vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-15: /dev/sdo vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-16: /dev/sdp vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-17: /dev/sdq vendor: Sun Microsystems model: H7280A520SUN8.0T size: 7.28 TiB
ID-18: /dev/sdr vendor: Hitachi model: HUH728080AL4200 size: 7.28 TiB
ID-19: /dev/sds vendor: Hitachi model: HUH728080AL4200 size: 7.28 TiB
ID-20: /dev/sdt vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-21: /dev/sdu vendor: Sun Microsystems model: H7280A520SUN8.0T size: 7.28 TiB
ID-22: /dev/sdv vendor: Sun Microsystems model: H7280A520SUN8.0T size: 7.28 TiB
ID-23: /dev/sdw vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-24: /dev/sdx vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-25: /dev/sdy vendor: Intel model: SSDSCKGW080A4 size: 74.53 GiB
Partition:
ID-1: / size: 32.28 GiB used: 163.6 MiB (0.5%) fs: zfs logical: freenas-boot/ROOT/24.04.1.1
ID-2: /home size: 32.12 GiB used: 128 KiB (0.0%) fs: zfs
logical: freenas-boot/ROOT/24.04.1.1/home
ID-3: /opt size: 32.19 GiB used: 72.1 MiB (0.2%) fs: zfs
logical: freenas-boot/ROOT/24.04.1.1/opt
ID-4: /usr size: 34.01 GiB used: 1.89 GiB (5.6%) fs: zfs
logical: freenas-boot/ROOT/24.04.1.1/usr
ID-5: /var size: 32.14 GiB used: 19.6 MiB (0.1%) fs: zfs
logical: freenas-boot/ROOT/24.04.1.1/var
ID-6: /var/log size: 32.2 GiB used: 88.8 MiB (0.3%) fs: zfs
logical: freenas-boot/ROOT/24.04.1.1/var/log

monk/hardware.txt Normal file

@@ -0,0 +1,53 @@
$ sudo ./inxi -CDGmMNPS --dmidecode
System:
Host: monk Kernel: 6.6.32-production+truenas arch: x86_64 bits: 64
Console: pty pts/0 Distro: Debian GNU/Linux 12 (bookworm)
Machine:
Type: Desktop Mobo: Gigabyte model: Z77X-UD5H serial: N/A
UEFI: American Megatrends v: F16j rev: 4.6 date: 11/14/2017
Memory:
System RAM: total: 16 GiB available: 15.52 GiB used: 3.42 GiB (22.0%)
igpu: 64 MiB
Array-1: capacity: 32 GiB slots: 4 modules: 2 EC: None
Device-1: ChannelB-DIMM1 type: no module installed
Device-2: ChannelA-DIMM1 type: no module installed
Device-3: ChannelB-DIMM0 type: DDR3 size: 8 GiB speed: 1600 MT/s
Device-4: ChannelA-DIMM0 type: DDR3 size: 8 GiB speed: 1600 MT/s
CPU:
Info: quad core model: Intel Core i5-3570K bits: 64 type: MCP cache:
L2: 1024 KiB
Speed (MHz): avg: 1602 min/max: 1600/3800 cores: 1: 1602 2: 1602
3: 1602 4: 1602
Graphics:
Device-1: Intel IvyBridge GT2 [HD Graphics 4000] driver: i915 v: kernel
Display: server: No display server data found. Headless machine?
tty: 75x68
API: N/A Message: No API data available in console. Headless machine?
Network:
Device-1: Intel 82579V Gigabit Network driver: e1000e
Device-2: Qualcomm Atheros AR8151 v2.0 Gigabit Ethernet driver: atl1c
Drives:
Local Storage: total: raw: 29.55 TiB usable: 15.27 TiB
used: 272.68 GiB (1.7%)
ID-1: /dev/sda vendor: HGST (Hitachi) model: HUH728080ALE604
size: 7.28 TiB
ID-2: /dev/sdb vendor: SanDisk model: SDSSDX480GG25 size: 447.13 GiB
ID-3: /dev/sdc vendor: HGST (Hitachi) model: HUH728080ALE604
size: 7.28 TiB
ID-4: /dev/sdd vendor: HGST (Hitachi) model: HUH728080ALE604
size: 7.28 TiB
ID-5: /dev/sde vendor: HGST (Hitachi) model: HUH728080ALE604
size: 7.28 TiB
Partition:
ID-1: / size: 416.36 GiB used: 164.1 MiB (0.0%) fs: zfs
logical: boot-pool/ROOT/24.04.2
ID-2: /home size: 416.21 GiB used: 768 KiB (0.0%) fs: zfs
logical: boot-pool/ROOT/24.04.2/home
ID-3: /opt size: 416.28 GiB used: 74.1 MiB (0.0%) fs: zfs
logical: boot-pool/ROOT/24.04.2/opt
ID-4: /usr size: 418.33 GiB used: 2.12 GiB (0.5%) fs: zfs
logical: boot-pool/ROOT/24.04.2/usr
ID-5: /var size: 416.24 GiB used: 31.4 MiB (0.0%) fs: zfs
logical: boot-pool/ROOT/24.04.2/var
ID-6: /var/log size: 416.25 GiB used: 49.5 MiB (0.0%) fs: zfs
logical: boot-pool/ROOT/24.04.2/var/log


@@ -1,41 +0,0 @@
System:
Host: monk Kernel: 6.6.29-production+truenas arch: x86_64 bits: 64 Console: pty pts/0
Distro: Debian GNU/Linux 12 (bookworm)
Machine:
Type: Desktop Mobo: Gigabyte model: Z77X-UD5H serial: N/A UEFI: American Megatrends v: F16j
date: 11/14/2017
Memory:
System RAM: total: 16 GiB available: 15.52 GiB used: 12 GiB (77.3%) igpu: 64 MiB
Array-1: capacity: 32 GiB slots: 4 modules: 2 EC: None
Device-1: ChannelB-DIMM1 type: no module installed
Device-2: ChannelA-DIMM1 type: no module installed
Device-3: ChannelB-DIMM0 type: DDR3 size: 8 GiB speed: 1600 MT/s
Device-4: ChannelA-DIMM0 type: DDR3 size: 8 GiB speed: 1600 MT/s
CPU:
Info: quad core model: Intel Core i5-3570K bits: 64 type: MCP cache: L2: 1024 KiB
Speed (MHz): avg: 2563 min/max: 1600/3800 cores: 1: 2683 2: 2414 3: 2668 4: 2488
Graphics:
Device-1: Intel IvyBridge GT2 [HD Graphics 4000] driver: i915 v: kernel
Display: server: No display server data found. Headless machine? tty: 154x70
API: N/A Message: No API data available in console. Headless machine?
Network:
Device-1: Intel 82579V Gigabit Network driver: e1000e
Device-2: Qualcomm Atheros AR8151 v2.0 Gigabit Ethernet driver: atl1c
Drives:
Local Storage: total: raw: 29.55 TiB usable: 15.27 TiB used: 9.42 TiB (61.7%)
ID-1: /dev/sda vendor: HGST (Hitachi) model: HUH728080ALE604 size: 7.28 TiB
ID-2: /dev/sdb vendor: SanDisk model: SDSSDX480GG25 size: 447.13 GiB
ID-3: /dev/sdc vendor: HGST (Hitachi) model: HUH728080ALE604 size: 7.28 TiB
ID-4: /dev/sdd vendor: HGST (Hitachi) model: HUH728080ALE604 size: 7.28 TiB
ID-5: /dev/sde vendor: HGST (Hitachi) model: HUH728080ALE604 size: 7.28 TiB
Partition:
ID-1: / size: 418.77 GiB used: 164 MiB (0.0%) fs: zfs logical: boot-pool/ROOT/24.04.1.1
ID-2: /home size: 418.61 GiB used: 768 KiB (0.0%) fs: zfs
logical: boot-pool/ROOT/24.04.1.1/home
ID-3: /opt size: 418.68 GiB used: 74.1 MiB (0.0%) fs: zfs
logical: boot-pool/ROOT/24.04.1.1/opt
ID-4: /usr size: 420.73 GiB used: 2.12 GiB (0.5%) fs: zfs
logical: boot-pool/ROOT/24.04.1.1/usr
ID-5: /var size: 418.64 GiB used: 32 MiB (0.0%) fs: zfs logical: boot-pool/ROOT/24.04.1.1/var
ID-6: /var/log size: 418.65 GiB used: 43.6 MiB (0.0%) fs: zfs
logical: boot-pool/ROOT/24.04.1.1/var/log

paladin/DATA SAFETY.md Normal file

@@ -0,0 +1,150 @@
## TrueNAS Data Safety
### Scheduled Jobs
- Daily snapshot of all datasets at midnight.
- Daily short SMART test of all disks at 11:00 PM.
- Daily ZFS replication for all configured datasets at 01:00 AM.
- Weekly rsync push tasks for all configured datasets at 01:00 AM on Sunday.
- Weekly check of the scrub age threshold at 03:00 AM on Sunday. A scrub runs only if the previous scrub was more than 35 days ago.
### Scrub Tasks
- boot-pool: `every 7 days`.
- Media: `0 3 * * 7`, "At 03:00 on Sunday." Threshold Days: 34.
- Tank: `0 3 * * 7`, "At 03:00 on Sunday." Threshold Days: 34.
With a 34-day threshold, each pool is scrubbed roughly once every 5 weeks, and only ever at 3 AM on a Sunday. A scrub is a read-intensive operation for every disk in the pool, so we prefer not to induce undue stress outside that window.
> Note: Why is the boot pool different?
> TrueNAS Scale treats the boot pool significantly differently from data pools. Rather than being configured under Data Protection -> Scrub Tasks, scrub rules for the boot pool are much more limited and are configured under Boot -> Stats/Settings.
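To see when a pool was last scrubbed (and sanity-check the threshold logic), you can inspect the pool's scan status from the shell. A minimal check, using `Tank` as an example pool name:
```shell
# Show the last scrub date/result for the Tank pool.
# The "scan:" line reports e.g. "scrub repaired 0B in ... with 0 errors on Sun ...".
zpool status Tank | grep -A 2 'scan:'
```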
### Snapshotting
Each dataset is configured with a Periodic Snapshot Task with the following parameters:
- Snapshot Lifetime: 2 WEEK
- Naming Scheme: `auto-%Y-%m-%d_%H-%M`
- Schedule: Daily, `0 0 * * *`, "At 00:00 (12:00 AM)"
- Recursive: False.
- Allow taking empty snapshots: True.
- Enabled: True.
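To confirm snapshots are being created on this schedule, list them for a dataset; the dataset name below is an example:
```shell
# List snapshots for one dataset tree, oldest first; names follow auto-%Y-%m-%d_%H-%M.
zfs list -t snapshot -o name,creation -s creation -r Tank/AppData
```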
### Rsync Tasks
> Note: Deprecated.
> These tasks have been disabled as we've moved to ZFS replication.
A subset of our datasets is configured to rsync to Monk, our backup server:
- Media/HomeVideos
- Media/Recordings
- Media/Images
- Tank/Text
- Tank/Archive
- Tank/AppData
> Note: Why not ZFS replication from the start?
> Legacy. These backups began as Rsync tasks, and migrating was a significant lift; they have since been superseded by the ZFS replication tasks described below.
Each of our Rsync tasks is configured with the following parameters:
#### Source
- Path: `/mnt/Path/To/Dataset/` **Trailing `/` is critical.**
- User: `admin`
- Direction: Push
- Description:
#### Remote
- Rsync Mode: `SSH`
- Connect using `SSH private key stored in user's home directory`
- Remote Host: `admin@192.168.1.11`
- Remote SSH Port: `22`
- Remote Path: This is very touchy and unintuitive. See the mapping table below.
##### Rsync Local-to-Remote Dataset Path Mapping
| Local Path | Path on Monk |
|:-:|:-:|
| `/mnt/Media/HomeVideos/` | `/mnt/Backup/Backup/Media/Media/Video/HomeVideos` |
| `/mnt/Media/Recordings/` | `/mnt/Backup/Backup/Media/Media/Video/Recordings` |
| `/mnt/Media/Images/` | `/mnt/Backup/Backup/Media/Media/Images` |
| `/mnt/Tank/Text/` | `/mnt/Backup/Backup/Tank/Text` |
| `/mnt/Tank/Archive/` | `/mnt/Backup/Backup/Tank/Archive` |
| `/mnt/Tank/AppData/` | `/mnt/Backup/Backup/Tank/AppData` |
Validate that the path is correct by running `rsync -arz -v --dry-run $local_path admin@192.168.1.11:$remote_path`. If `sending incremental file list` is followed by a blank line and then the summary (like `sent N bytes, received M bytes, XY bytes/sec`), then you're golden.
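For example, a dry-run check for the `Tank/Text` mapping from the table above might look like this (byte counts are illustrative):
```shell
# Dry-run only; nothing is transferred. An empty file list means source and
# destination already match.
rsync -arz -v --dry-run /mnt/Tank/Text/ admin@192.168.1.11:/mnt/Backup/Backup/Tank/Text
# sending incremental file list
#
# sent 1,204 bytes  received 19 bytes  815.33 bytes/sec
# total size is 8,421,376  speedup is 6,885.83
```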
#### Schedule
- Schedule: `0 0 * * 0` "On Sundays at 00:00 (12:00 AM)"
- Recursive: True
- Enabled: True
> Note: Test then enable
> Rsync jobs should be tested manually with supervision before enabling for automated recurrence.
### ZFS Replication
- What and Where
- Source Location: On this System.
- Source: Check boxes for each of the following datasets:
- `/mnt/Media/HomeVideos`
- `/mnt/Media/Recordings`
- `/mnt/Media/Images`
- `/mnt/Tank/Text`
- `/mnt/Tank/Archive`
- `/mnt/Tank/AppData`
- Recursive: False.
- Replicate Custom Snapshots: False.
- SSH Transfer Security: Encryption (This encrypts traffic in flight, not at rest on destination.)
- Use Sudo For ZFS Commands: True.
- Destination Location: On a Different System.
- SSH Connection: admin@monk (See Note below.)
- Destination: `Backup/Backup`
- Encryption: False.
- Task Name: `Backup Non-Reproducible Datasets`
- When
- Replication Schedule: Run On a Schedule
- Schedule: Daily at 01:00 AM
- Destination Snapshot Lifetime: Same as Source
> Note: SSH Connection with non-root remote user
> For ZFS replication over SSH to work properly, the user on the remote system needs superuser permissions. To get superuser permissions in a non-interactive context like a replication task, the remote user needs the "Allow all sudo commands with no password" option set to True.
> On the remote system, navigate to Credentials -> Local Users -> `admin` -> Edit -> Authentication. Then set "Allow all sudo commands" and "Allow all sudo commands with no password" to True.
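As a rough mental model (not the exact commands the TrueNAS middleware runs), each daily replication amounts to an incremental `zfs send` piped over SSH into `sudo zfs recv` on monk. A sketch for one dataset, with illustrative snapshot names and a destination child dataset assumed from the `Backup/Backup` target above:
```shell
# Send only the changes between yesterday's and today's auto snapshots of Tank/AppData
# and receive them into the backup pool on monk (192.168.1.11).
zfs send -i Tank/AppData@auto-2024-07-11_00-00 Tank/AppData@auto-2024-07-12_00-00 \
  | ssh admin@192.168.1.11 sudo zfs recv -F Backup/Backup/AppData
```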
#### More Options
- Times: True
- Compress: True
- Archive: True
- Delete: True
- Quiet: False
- Preserve Permissions: False
- Preserve Extended Attributes: False
- Delay Updates: True
### S.M.A.R.T. Tests
- SHORT test for All Disks at 11:00 PM daily.
- This is scheduled so that it is unlikely to overlap with the midnight snapshot tasks.
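To spot-check the results of these scheduled tests on a single disk (device name is an example), query the SMART self-test log from the shell:
```shell
# Show the history of SMART self-tests for one disk; run as root.
smartctl -l selftest /dev/sda
```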
## Configuring an SSH Connection to Remote TrueNAS System
1. Generate a keypair for the local system.
1. Credentials -> Backup Credentials -> SSH Keypairs -> Add.
2. Name the keypair like `<localuser>@<localhostname>` (e.g. `admin@paladin`).
3. If a keypair already exists for this host (e.g. if generated manually via the CLI; see the sketch after this list), copy the private and public keys into their respective fields here. Otherwise, click Generate Keypair.
4. Click Save.
2. Configure the SSH Connection.
1. Credentials -> Backup Credentials -> SSH Connections -> Add.
2. Name the connection like `<remoteuser>@<remotehostname>` (e.g. `admin@monk`). Note: my systems all use `admin` as the username; if the remote system used a name like `monkadmin`, you would use `monkadmin` here.
3. Setup Method: Manual
4. Authentication:
1. Host: `192.168.1.11`
2. Port: `22`
3. Username: `admin`
4. Private Key: `admin@paladin` (the keypair generated in step 1.)
5. Remote Host Key: Click "Discover Remote Host Key"
6. Connect Timeout (seconds): `2`
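If you generate the keypair manually on the CLI instead of letting the UI do it (step 1.3 above), a minimal sketch (key type, path, and comment are just examples):
```shell
# Generate an ed25519 keypair to paste into the SSH Keypairs form.
ssh-keygen -t ed25519 -C "admin@paladin" -f ~/.ssh/admin_paladin
# Private key: ~/.ssh/admin_paladin   Public key: ~/.ssh/admin_paladin.pub
# The public key must also be added to the remote user's authorized_keys on monk.

# "Discover Remote Host Key" is roughly equivalent to:
ssh-keyscan -p 22 192.168.1.11
```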
## Restore from Backup
TODO:
- Document procedure for restoring one file from the most recent backup (an interim sketch follows this list).
- Document procedure for restoring one dataset from most recent backup.
- Document procedure for restoring many files from most recent backup.
- Document procedure for restoring one file from older backup.
- Document procedure for restoring one dataset from older backup.
- Document procedure for restoring many datasets from older backup.
- Build automation for regularly restoring from backup.
- Chaos engineering?
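Until those procedures are written up, a minimal sketch for restoring a single file from the most recent local snapshot, using ZFS's hidden `.zfs/snapshot` directory (dataset, snapshot, and file names are examples):
```shell
# Every dataset exposes read-only copies of its snapshots under .zfs/snapshot.
ls /mnt/Tank/Text/.zfs/snapshot/
# Copy the wanted file out of the latest auto snapshot back into the live dataset.
cp -a "/mnt/Tank/Text/.zfs/snapshot/auto-2024-07-12_00-00/notes/todo.md" /mnt/Tank/Text/notes/todo.md
```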

paladin/hardware.txt Normal file

@@ -0,0 +1,71 @@
$ ./inxi -CDGmMNPS --dmidecode
System:
Host: paladin Kernel: 6.6.32-production+truenas arch: x86_64 bits: 64
Console: pty pts/0 Distro: Debian GNU/Linux 12 (bookworm)
Machine:
Permissions: Unable to run dmidecode. Root privileges required.
Memory:
System RAM: total: 128 GiB available: 125.7 GiB used: 116.56 GiB (92.7%)
Message: For most reliable report, use superuser + dmidecode.
Report: arrays: 2 capacity: N/A installed: N/A slots: 8 active: N/A type: N/A
eec: Multi-bit ECC
Array-1: capacity: N/A installed: N/A slots: 4 modules: 0 EC: Multi-bit ECC
Device-1: DIMM_A1 type: no module installed
Device-2: DIMM_A2 type: no module installed
Device-3: DIMM_B1 type: no module installed
Device-4: DIMM_B2 type: no module installed
Array-2: capacity: N/A installed: N/A slots: 4 modules: 0 EC: Multi-bit ECC
Device-1: DIMM_C1 type: no module installed
Device-2: DIMM_C2 type: no module installed
Device-3: DIMM_D1 type: no module installed
Device-4: DIMM_D2 type: no module installed
CPU:
Info: 12-core model: Intel Xeon E5-2680 v3 bits: 64 type: MT MCP cache: L2: 3 MiB
Speed (MHz): avg: 1328 min/max: 1200/3300 cores: 1: 1197 2: 1197 3: 1720 4: 1685 5: 1197
6: 1197 7: 1197 8: 1537 9: 1792 10: 1197 11: 1197 12: 1197 13: 1197 14: 1198 15: 1200 16: 1489
17: 1200 18: 1197 19: 1197 20: 1203 21: 2095 22: 1200 23: 1197 24: 1197
Graphics:
Device-1: ASPEED Graphics Family driver: ast v: kernel
Display: server: No display server data found. Headless machine? tty: 154x70
resolution: 1600x900
API: N/A Message: No API data available in console. Headless machine?
Network:
Device-1: Mellanox MT26448 [ConnectX EN 10GigE PCIe 2.0 5GT/s] driver: mlx4_core
Device-2: Intel I210 Gigabit Network driver: igb
Device-3: Intel I210 Gigabit Network driver: igb
Drives:
Local Storage: total: raw: 174.8 TiB usable: 109.95 TiB used: 59.1 TiB (53.8%)
ID-1: /dev/sda vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-2: /dev/sdb vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-3: /dev/sdc vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-4: /dev/sdd vendor: Hitachi model: HUH728080AL5200 size: 7.28 TiB
ID-5: /dev/sde vendor: Hitachi model: HUH728080AL5200 size: 7.28 TiB
ID-6: /dev/sdf vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-7: /dev/sdg vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-8: /dev/sdh vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-9: /dev/sdi vendor: Hitachi model: HUH728080AL4200 size: 7.28 TiB
ID-10: /dev/sdj vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-11: /dev/sdk vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-12: /dev/sdl vendor: Sun Microsystems model: H7280A520SUN8.0T size: 7.28 TiB
ID-13: /dev/sdm vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-14: /dev/sdn vendor: Sun Microsystems model: H7280A520SUN8.0T size: 7.28 TiB
ID-15: /dev/sdo vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-16: /dev/sdp vendor: Sun Microsystems model: H7280A520SUN8.0T size: 7.28 TiB
ID-17: /dev/sdq vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-18: /dev/sdr vendor: Intel model: SSDSC2CT080A4 size: 74.53 GiB
ID-19: /dev/sds vendor: Intel model: SSDSC2CT080A4 size: 74.53 GiB
ID-20: /dev/sdt vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-21: /dev/sdu vendor: Hitachi model: HUH728080AL5200 size: 7.28 TiB
ID-22: /dev/sdv vendor: Hitachi model: HUH728080AL4200 size: 7.28 TiB
ID-23: /dev/sdw vendor: Hitachi model: HUH728080AL4200 size: 7.28 TiB
ID-24: /dev/sdx vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
ID-25: /dev/sdy vendor: Hitachi model: HUH728080AL4200 size: 7.28 TiB
ID-26: /dev/sdz vendor: Hitachi model: HUH72808CLAR8000 size: 7.28 TiB
Partition:
ID-1: / size: 53.94 GiB used: 164.1 MiB (0.3%) fs: zfs logical: boot-pool/ROOT/24.04.2
ID-2: /home size: 53.79 GiB used: 896 KiB (0.0%) fs: zfs logical: boot-pool/ROOT/24.04.2/home
ID-3: /opt size: 53.86 GiB used: 74.2 MiB (0.1%) fs: zfs logical: boot-pool/ROOT/24.04.2/opt
ID-4: /usr size: 55.91 GiB used: 2.12 GiB (3.8%) fs: zfs logical: boot-pool/ROOT/24.04.2/usr
ID-5: /var size: 53.82 GiB used: 31.6 MiB (0.1%) fs: zfs logical: boot-pool/ROOT/24.04.2/var
ID-6: /var/log size: 53.79 GiB used: 8 MiB (0.0%) fs: zfs
logical: boot-pool/ROOT/24.04.2/var/log


@@ -1,46 +0,0 @@
System:
Host: paladin Kernel: 6.6.32-production+truenas arch: x86_64 bits: 64
Console: pty pts/0 Distro: Debian GNU/Linux 12 (bookworm)
Machine:
Type: Server Mobo: ASUSTeK model: Z10PA-U8 Series v: Rev 1.xx serial: <superuser required>
UEFI-[Legacy]: American Megatrends v: 0601 date: 06/02/2015
Memory:
System RAM: total: 128 GiB available: 125.7 GiB used: 3.1 GiB (2.5%)
Message: For most reliable report, use superuser + dmidecode.
Report: arrays: 2 capacity: N/A installed: N/A slots: 8 active: N/A type: N/A
eec: Multi-bit ECC
Array-1: capacity: N/A installed: N/A slots: 4 modules: 0 EC: Multi-bit ECC
Device-1: DIMM_A1 type: no module installed
Device-2: DIMM_A2 type: no module installed
Device-3: DIMM_B1 type: no module installed
Device-4: DIMM_B2 type: no module installed
Array-2: capacity: N/A installed: N/A slots: 4 modules: 0 EC: Multi-bit ECC
Device-1: DIMM_C1 type: no module installed
Device-2: DIMM_C2 type: no module installed
Device-3: DIMM_D1 type: no module installed
Device-4: DIMM_D2 type: no module installed
CPU:
Info: 12-core model: Intel Xeon E5-2680 v3 bits: 64 type: MT MCP cache: L2: 3 MiB
Speed (MHz): avg: 1645 min/max: 1200/3300 cores: 1: 2400 2: 2100 3: 1200 4: 1200 5: 2100
6: 1200 7: 1200 8: 1200 9: 1900 10: 3300 11: 1200 12: 2200 13: 2100 14: 1200 15: 1200 16: 1200
17: 2100 18: 1200 19: 1200 20: 1200 21: 1200 22: 1200 23: 1200 24: 3300
Graphics:
Device-1: ASPEED Graphics Family driver: ast v: kernel
Display: server: No display server data found. Headless machine? tty: 154x70
resolution: 1600x900
API: N/A Message: No API data available in console. Headless machine?
Network:
Device-1: Intel I210 Gigabit Network driver: igb
Device-2: Intel I210 Gigabit Network driver: igb
Drives:
Local Storage: total: raw: 149.06 GiB usable: 73.19 GiB used: 2.41 GiB (3.3%)
ID-1: /dev/sda vendor: Intel model: SSDSC2CT080A4 size: 74.53 GiB
ID-2: /dev/sdb vendor: Intel model: SSDSC2CT080A4 size: 74.53 GiB
Partition:
ID-1: / size: 53.95 GiB used: 164.1 MiB (0.3%) fs: zfs logical: boot-pool/ROOT/24.04.2
ID-2: /home size: 53.79 GiB used: 768 KiB (0.0%) fs: zfs logical: boot-pool/ROOT/24.04.2/home
ID-3: /opt size: 53.87 GiB used: 74.2 MiB (0.1%) fs: zfs logical: boot-pool/ROOT/24.04.2/opt
ID-4: /usr size: 55.92 GiB used: 2.12 GiB (3.8%) fs: zfs logical: boot-pool/ROOT/24.04.2/usr
ID-5: /var size: 53.82 GiB used: 31 MiB (0.1%) fs: zfs logical: boot-pool/ROOT/24.04.2/var
ID-6: /var/log size: 53.79 GiB used: 1.6 MiB (0.0%) fs: zfs
logical: boot-pool/ROOT/24.04.2/var/log


@@ -11,10 +11,10 @@ set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-map
 set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping UAP-AC-LR mac-address '18:e8:29:50:f7:5b'
 set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping joey-desktop ip-address '192.168.1.100'
 set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping joey-desktop mac-address '04:92:26:DA:BA:C5'
-set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping joey-nas ip-address '192.168.1.10'
-set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping joey-nas mac-address '40:8d:5c:52:41:89'
-set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping joey-nas2 ip-address '192.168.1.11'
-set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping joey-nas2 mac-address '90:2b:34:37:ce:ea'
+set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping barbarian ip-address '192.168.1.10'
+set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping barbarian mac-address '40:8d:5c:52:41:89'
+set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping monk ip-address '192.168.1.11'
+set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping monk mac-address '90:2b:34:37:ce:ea'
 set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping paladin ip-address '192.168.1.12'
 set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping paladin mac-address '30:5a:3a:76:80:8f'
 set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping tasmota-toes-day ip-address '192.168.1.50'