Set up and document iSCSI device for fighter #119

Closed
opened 2024-02-10 22:38:52 -08:00 by Jafner · 6 comments
Owner

Because we've been running some of our production data on a SATA SSD RAID-0 array, I've finally decided that I don't need to wait for data loss before moving to a more resilient setup.

  - [x] Create a Zvol in TrueNAS for fighter to use
  - [x] Mount and format the device on fighter
  - [x] Configure fighter to mount the device on boot

Additional tasks:

  - [ ] Find out what happens if the iSCSI device falls out from under fighter.
  - [x] Document the steps taken to create the target (with specific settings).
  - [x] Document the steps taken to mount the device.
Author
Owner

Creating the Zvol and iSCSI share in TrueNAS Scale

  1. Navigate to the dataset to use. From the TrueNAS Scale dashboard, open the navigation side panel. Navigate to "Datasets". Select the pool to use (Tank).
  2. Create the Zvol to use. In the top-left, click "Add Zvol" ([Why not a dataset?](https://www.truenas.com/community/threads/dataset-over-zvol-or-vice-versa.45526/)). Name: fighter, Size for this zvol: 8 TiB. Leave all other settings default.
  3. Navigate to the iSCSI share creator. Navigate to "Shares". Open the "Block (iSCSI) Shares Targets" panel. (Optionally, set the base name per [RFC 3721 1.1](https://datatracker.ietf.org/doc/html/rfc3721.html#section-1.1) (iqn.2020-04.net.jafner)).
  4. Create the iSCSI share. Click the "Wizard" button in the top-right.
    a. Create or Choose Block Device. Name: fighter, Device: zvol/Tank/fighter, Sharing Platform: Modern OS.
    b. Portal. Portal: Create New, Discovery Authentication Method: NONE, Discovery Authentication Group: NONE, Add listen: 0.0.0.0.
    c. Initiator. Leave blank to allow all hostnames and IPs to initiate. Optionally, enter a list of IP addresses or hostnames to restrict access to the device.
    d. Confirm. Review and Save.
  5. Enable iSCSI service at startup. Navigate to System Settings -> Services. If it's not already running, enable the iSCSI service and check the box to "Start Automatically".

Done!
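
For reference, the zvol itself can also be created or inspected from the TrueNAS Scale shell with ordinary ZFS commands. This is only a sketch under the same assumptions as above (pool Tank, zvol fighter, 8 TiB); it does not replace the Wizard, which also sets up the extent, portal, and target.

```bash
# Sketch only: plain ZFS commands run from the TrueNAS Scale shell.
# Create an 8 TiB sparse zvol (-s), equivalent to the "Add Zvol" step above.
zfs create -s -V 8T Tank/fighter

# Confirm the zvol exists and check its size and space usage.
zfs list -t volume -o name,volsize,used,referenced Tank/fighter
```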

Author
Owner

Connecting to the iSCSI Share in Debian 12

  1. Install the open-iscsi package with sudo apt-get install open-iscsi.
  2. Get the list of available shares from the NAS with sudo iscsiadm --mode discovery --type sendtargets --portal 192.168.1.10 where the IP for --portal is the IP of the NAS hosting the iSCSI share. In my case, this returns 192.168.1.10:3260,1 iqn.2020-03.net.jafner:fighter.
  3. Open the iSCSI session. Run sudo iscsiadm --mode node --targetname "iqn.2020-03.net.jafner:fighter" --portal "192.168.1.10:3260" --login, where --targetname is the IQN string including the share name and --portal has both the IP and port used by the NAS hosting the iSCSI share. Verify the session connected with sudo iscsiadm --mode session --print=1, which should return a description of any active sessions. [Debian.org](https://wiki.debian.org/SAN/iSCSI/open-iscsi).
  4. Format the newly-added block device.
    a. Identify the device name of the new device with sudo iscsiadm -m session -P 3 | grep "Attached scsi disk". In my case, sde. [ServerFault](https://serverfault.com/questions/828401/how-can-i-determine-if-an-iscsi-device-is-a-mounted-linux-filesystem).
    b. Partition and format the device. Run sudo parted --script /dev/sde "mklabel gpt" && sudo parted --script /dev/sde "mkpart primary 0% 100%" && sudo mkfs.ext4 /dev/sde1. [Server-world.info](https://www.server-world.info/en/note?os=Debian_11&p=iscsi&f=3).
    c. Mount the new partition to a directory. Run sudo mkdir /mnt/iscsi && sudo mount /dev/sde1 /mnt/iscsi. Where the path /dev/sde1 is the newly-created partition and the path /mnt/iscsi is the path to which you want it mounted.
    d. Test the disk write speed of the new partition. Run sudo dd if=/dev/zero of=/mnt/iscsi/temp.tmp bs=1M count=32768 to run a 32 GiB test write. [Cloudzy.com](https://cloudzy.com/blog/test-disk-speed-in-linux/).
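
The steps above can be collapsed into a single script for reuse. This is only a sketch built from the exact commands in this comment, assuming the same values (NAS at 192.168.1.10:3260, target iqn.2020-03.net.jafner:fighter, device appearing as /dev/sde); the partition and format steps are destructive and only belong on first setup.

```bash
#!/usr/bin/env bash
# Sketch of steps 2-4 above. Values are the ones used in this issue; verify
# the device name before running, since the format step is destructive.
set -euo pipefail

PORTAL="192.168.1.10:3260"
TARGET="iqn.2020-03.net.jafner:fighter"
DEV="/dev/sde"        # confirm with: iscsiadm -m session -P 3 | grep "Attached scsi disk"
MOUNTPOINT="/mnt/iscsi"

# Discover and log in to the target.
sudo iscsiadm --mode discovery --type sendtargets --portal "${PORTAL%%:*}"
sudo iscsiadm --mode node --targetname "$TARGET" --portal "$PORTAL" --login

# First-time setup only: partition, format, and mount the new block device.
sudo parted --script "$DEV" "mklabel gpt"
sudo parted --script "$DEV" "mkpart primary 0% 100%"
sudo mkfs.ext4 "${DEV}1"
sudo mkdir -p "$MOUNTPOINT"
sudo mount "${DEV}1" "$MOUNTPOINT"
```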
Author
Owner

Connecting and mounting the iSCSI share on boot

  1. Get the full path of the share's configuration. It should be like /etc/iscsi/nodes/<share iqn>/<share host address>/default. In my case it was /etc/iscsi/nodes/iqn.2020-03.net.jafner:fighter/192.168.1.10,3260,1/default. [Debian.org](https://wiki.debian.org/SAN/iSCSI/open-iscsi).
  2. Set the node.startup parameter to automatic. Run sudo sed -i 's/node.startup = manual/node.startup = automatic/g' /etc/iscsi/nodes/iqn.2020-03.net.jafner:fighter/192.168.1.10,3260,1/default.
  3. Add the new mount to /etc/fstab. Run sudo bash -c "echo '/dev/sde1 /mnt/iscsi ext4 _netdev 0 0' >> /etc/fstab". [Adamsdesk.com](https://www.adamsdesk.com/posts/sudo-echo-permission-denied/), [StackExchange](https://unix.stackexchange.com/questions/195116/mount-iscsi-drive-at-boot-system-halts).
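
One caveat not covered above: the kernel does not guarantee the target comes back as /dev/sde on every boot, so the fstab entry can point at the wrong disk if device names shuffle. A more robust variant (an alternative sketch, not what was actually used here) is to reference the partition by UUID; the UUID below is a placeholder.

```bash
# Look up the real UUID of the iSCSI partition.
sudo blkid /dev/sde1

# fstab entry keyed on UUID instead of the device name; _netdev still defers
# mounting until the network (and the iSCSI session) is available.
echo 'UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/iscsi ext4 _netdev 0 0' | sudo tee -a /etc/fstab
```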
Jafner referenced this issue from a commit 2024-02-11 00:24:56 -08:00
Author
Owner

Alright, we've successfully built a test instance of Nextcloud with the iSCSI device as its data partition.

Next step is to move the current instance over. Just gotta be attentive to permissions.
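
For the copy itself (not documented in this issue), something along these lines preserves ownership, permissions, ACLs, and extended attributes, which is what Nextcloud tends to be picky about; the source and destination paths here are purely illustrative.

```bash
# Illustrative sketch: copy the old data directory onto the iSCSI mount while
# preserving owners, modes, hard links (-H), ACLs (-A), and xattrs (-X).
sudo rsync -aHAX --info=progress2 /mnt/md0/nextcloud/ /mnt/iscsi/nextcloud/
```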

Author
Owner

We've now moved everything except Autopirate off of md0 and onto the iSCSI mount.

  - [x] Nextcloud
  - [x] Stash
  - [x] Minecraft
  - [x] Autopirate
Author
Owner

How to Gracefully Terminate iSCSI Session

  1. Ensure any Docker containers currently using the device are shut down. Run for stack in /home/admin/homelab/fighter/config/*; do cd "$stack" && if docker-compose config | grep -q /mnt/iscsi; then echo "ISCSI-DEPENDENT: $stack"; fi ; done to get the list of iSCSI-dependent stacks. Ensure all listed stacks are OK to shut down, then run for stack in /home/admin/homelab/fighter/config/*; do cd "$stack" && if docker-compose config | grep -q /mnt/iscsi; then echo "SHUTTING DOWN $stack" && docker-compose down; fi ; done.
  2. Unmount the iSCSI device. Run sudo umount /mnt/iscsi.
  3. Log out of the iSCSI session. Run sudo iscsiadm --mode node --targetname "iqn.2020-03.net.jafner:fighter" --portal "192.168.1.10:3260" --logout.
  4. Shut down the host. Run sudo shutdown now.
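
The same sequence can live in a small helper script so it isn't retyped before each host shutdown. This is a sketch assembled from the commands above (a hypothetical script, with the final shutdown left as a manual step):

```bash
#!/usr/bin/env bash
# Sketch of steps 1-3 above: stop iSCSI-dependent stacks, unmount, log out.
set -euo pipefail

for stack in /home/admin/homelab/fighter/config/*; do
    if (cd "$stack" && docker-compose config | grep -q /mnt/iscsi); then
        echo "SHUTTING DOWN $stack"
        (cd "$stack" && docker-compose down)
    fi
done

sudo umount /mnt/iscsi
sudo iscsiadm --mode node \
    --targetname "iqn.2020-03.net.jafner:fighter" \
    --portal "192.168.1.10:3260" --logout
```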