Hi all,
I need to expose an iSCSI disk to be used as the main disk in a VM. Because I am pretty new to this, I would like to ask for some tips and good practices to avoid rookie mistakes that can really hurt performance or availability.
What are the common things I should take into account before deploying everything?
Thanks in advance
Since we don’t know what server or VM tech you’re using, the advice will be pretty generic. For self-hosting you can likely get away with your iSCSI traffic sharing the LAN interface with your usual VM traffic, but if you need high throughput you will want iSCSI-optimized NICs and jumbo frames turned on (an MTU of 9000 is the standard here). This requires a switch that supports jumbo frames as well.
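If you do go the jumbo-frame route, here is a minimal sketch of what that looks like on a Linux host; the interface name eth1 and the target address are placeholders, and every hop in the iSCSI path (host NIC, switch, target) has to agree on the MTU:

    # Check the current MTU on the iSCSI-facing NIC (eth1 is a placeholder)
    ip link show eth1

    # Raise it to 9000 for the running session
    ip link set dev eth1 mtu 9000

    # Confirm full-size frames reach the target without fragmenting
    # (8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header)
    ping -M do -s 8972 192.168.50.10

To make the MTU stick across reboots you would also set it in whatever your distro uses for persistent network config (/etc/network/interfaces, netplan, NetworkManager, etc.).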
For Windows, I find the iSCSI support to be very lacking. Every time I have used it I have had sporadic loss of connectivity, failures to mount on boot, and other issues. I would avoid it.
For ESXi you can map an iSCSI LUN as a datastore and create VMDKs on top. This works the same way as actual FC LUNs or NFS mounts, and I have had no issues with reliability. There’s also RDM (raw device mapping), which mounts the iSCSI LUN directly as a disk of the VM. If you’re using vSphere I would advise against this, as you lose the ability to vMotion or use DRS.
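For reference, a rough sketch of the datastore route from the ESXi shell, assuming the software iSCSI initiator; the adapter name vmhba65 and the portal address are placeholders, and most people do this through the vSphere client instead:

    # Enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true

    # Point it at the target portal (adapter name and address are placeholders)
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.50.10:3260

    # Rescan so the LUN shows up and can be formatted as a VMFS datastore
    esxcli storage core adapter rescan --all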
Thanks for the comment, I will check it out, but performance should not be an issue. In the end it is a personal self-hosted service.
Not presenting LUNs directly to VMs is the best practice. The only time you should do it is if you’re absolutely forced to because you’re doing something that doesn’t support better shared-disk options.
Then you recommend mounting them via the hypervisor?
I was certainly planning to use it in the VM itself…
The problem with external LUNs is that they’re out of the control of the hypervisor. You can run into situations where migration events cause access issues. If you can have the data presented through the hypervisor, it will lead to fewer potential issues. Object storage or NFS are also good options, if available.
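As an example of presenting storage through the hypervisor rather than to the guest, and assuming Proxmox (which comes up later in the thread), registering an NFS export once lets you carve virtual disks out of it that stay under the hypervisor’s control; the storage ID, server address, and export path below are made up:

    # Register an NFS export as a Proxmox storage backend (all names are placeholders)
    pvesm add nfs nas-nfs --server 192.168.50.10 --export /export/vmstore --content images

    # Virtual disks created on "nas-nfs" are managed by Proxmox,
    # so migrations and backups keep working the usual way.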
Is there a specific reason to mount the LUN directly as opposed to creating a virtual disk? Performance, maybe?
More like, if you wanted the storage under the LUN to be shared through the VM. Essentially, mount the LUN into the VM and then run NFS/SMB from the VM as a NAS. Works out pretty well since with a little bit of trickery you can have a NAS that is also HA (assuming the storage pool doesn’t go down).
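A rough sketch of that pattern, assuming a Linux VM with open-iscsi and nfs-kernel-server installed; the portal address, IQN, device name, and export path are all placeholders:

    # Discover the target and log in from inside the VM
    iscsiadm -m discovery -t sendtargets -p 192.168.50.10
    iscsiadm -m node -T iqn.2024-03.example.nas:share -p 192.168.50.10 --login

    # Reconnect automatically on reboot
    iscsiadm -m node -T iqn.2024-03.example.nas:share -p 192.168.50.10 \
        --op update -n node.startup -v automatic

    # Format (first time only), mount, and re-export over NFS
    mkfs.ext4 /dev/sdb            # double-check the device name before formatting
    mount /dev/sdb /srv/share
    echo "/srv/share 192.168.50.0/24(rw,sync,no_subtree_check)" >> /etc/exports
    exportfs -ra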
With that said, I’m very interested too.
Unless I completely misunderstood your question.
Unless you are forced to use the same network interface, always use a dedicated NIC, and a VLAN when possible.
Like others mentioned, if the VM is on a hypervisor that can present the disk for you, you should try that.
An example would be a NAS box with two interfaces: use the second one for iSCSI and connect it to a switch port on a separate VLAN. Connect a second NIC on something like Proxmox to the iSCSI VLAN, add the remote disk in Proxmox from the iSCSI NAS, and then add the disk to the VM.
This idea applies across all the different tech stacks.
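To make that concrete on the Proxmox side, here is a hedged sketch; the interface name, VLAN number, addresses, storage ID, and target IQN are all assumptions:

    # /etc/network/interfaces snippet: put the second NIC on the iSCSI VLAN
    # (ens19 and VLAN 50 are placeholders)
    auto ens19.50
    iface ens19.50 inet static
            address 10.50.0.5/24
            mtu 9000

    # Register the iSCSI target as Proxmox storage; disks can then be
    # attached to VMs from this storage in the GUI
    pvesm add iscsi nas-iscsi --portal 10.50.0.10 --target iqn.2024-03.example.nas:vmstore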
Thanks, I will keep that in mind, but I have consumer-grade hardware and I am afraid VLANs are not possible on my switch.
In any case, thanks for the bunch of tips.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
ESXi: VMware’s virtual machine hypervisor
HA: High Availability
NAS: Network-Attached Storage