CTL is very SCSI specific, but of course when I wrote it in 2003, it was the Copan Target Layer and ran on Linux, CAM only supported SCSI, and I only had vague hopes of getting CTL into FreeBSD one day.
Scott and Alexander have some good points.
A few thoughts:
1. Most if not all HBAs that support NVMeoF will also support SCSI. (Chelsio, Qlogic, Emulex, and Mellanox support both). Whatever we do (refactored multi-protocol CTL or separate stacks), we’ll want to allow users to run NVMeoF and SCSI target and initiator at the same time. If you go the separate target stack route, you can of course have separate peripheral driver code to connect to CAM. (I’m assuming you would still want to go through CAM…)
2. From a user standpoint, it might be nice to have a single configuration and management interface…but that could potentially make the thing more unwieldy. I guess whatever we do, we’ll want it to be well thought out.
3. It would be nice to have functionality like CTL that allows an internally-visible NVMe target implementation. We’ve got some NVMe device emulation in Bhyve, but this would be more generic, and could be used to provide storage to Bhyve VMs, or useful to test new NVMe initiator code without extra hardware or cranking up a VM.
4. As Alexander pointed out, NVMe’s ordering requirements are not as complex as SCSI’s; see sys/cam/ctl/ctl_ser_table.c and the OOA queue for an illustration of the SCSI complexity (the contrast is roughly sketched below this list). NVMe also allows for multiple queues and namespaces (which I suppose are like multiple SCSI LUNs). Performance, mainly low latency, will probably be a primary design goal. A separate stack might make that easier, although if you did it through CTL, you would split SCSI and NVMe off in the peripheral driver code (scsi_ctl.c), and the two codepaths probably wouldn’t come back together until you got to the block or ramdisk backend.
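To make the ordering point in item 4 a bit more concrete, here is a toy sketch of the difference. None of this is real CTL code; the table and names are made up purely for illustration:

/*
 * Toy illustration only; not the actual CTL structures.  The real SCSI
 * rules live in sys/cam/ctl/ctl_ser_table.c and are far more detailed.
 */
#include <stddef.h>
#include <stdint.h>

enum toy_action { TOY_PASS, TOY_BLOCK };

/*
 * SCSI: each new command has to be checked against every command already
 * on the per-LUN "Order Of Arrival" (OOA) queue, using a table indexed by
 * (new command class, pending command class).
 */
enum toy_cmd_class { TOY_READ, TOY_WRITE, TOY_RESERVE, TOY_NCLASSES };

static const enum toy_action
toy_ser_table[TOY_NCLASSES][TOY_NCLASSES] = {
	/* pending:         READ       WRITE      RESERVE   */
	/* new READ    */ { TOY_PASS,  TOY_PASS,  TOY_BLOCK },
	/* new WRITE   */ { TOY_PASS,  TOY_PASS,  TOY_BLOCK },
	/* new RESERVE */ { TOY_BLOCK, TOY_BLOCK, TOY_BLOCK },
};

struct toy_scsi_cmd {
	enum toy_cmd_class	 cls;
	struct toy_scsi_cmd	*next_ooa;	/* next older command on the OOA queue */
};

static enum toy_action
toy_scsi_check(const struct toy_scsi_cmd *new_cmd,
    const struct toy_scsi_cmd *ooa_head)
{
	const struct toy_scsi_cmd *c;

	/* O(pending commands) work per submission, under the per-LUN lock. */
	for (c = ooa_head; c != NULL; c = c->next_ooa) {
		if (toy_ser_table[new_cmd->cls][c->cls] != TOY_PASS)
			return (TOY_BLOCK);
	}
	return (TOY_PASS);
}

/*
 * NVMe: commands within a submission queue carry no ordering guarantees
 * (fused operations aside), so a queue pair can dispatch each command as
 * it arrives, with only per-queue state and locking.
 */
struct toy_nvme_cmd {
	uint8_t	opc;		/* opcode; no cross-command dependencies */
};

static enum toy_action
toy_nvme_check(const struct toy_nvme_cmd *cmd)
{
	(void)cmd;
	return (TOY_PASS);	/* nothing to serialize against */
}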
I don’t think it must be done one way or the other. There are some tradeoffs.
I’m glad you’re getting paid to work on it; an NVMe target is a feature we need in FreeBSD, and I’m sure you’ll do a good job with it. :)
Ken
—
Ken Merry
I feel that if we subtracted SCSI out of CTL, there would not be much left, aside from some very basic interfaces. And even those might benefit from taking different approaches, given NVMe's multiple queues, more relaxed request ordering semantics, etc. Recent NVMe specifications have pumped in many things to be on par with SCSI, but I am not sure it is similar enough to keep common code from turning into a huge mess. Though I haven't looked at what Linux did on that front and how good an idea it was there.
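For illustration, what would be left is probably not much more than an I/O descriptor and a completion callback between frontends and backends. A purely hypothetical sketch; none of these names exist in CTL or in any proposed code:

/*
 * Hypothetical sketch only: a minimal protocol-agnostic contract between
 * a frontend (SCSI or NVMe transport) and a backend (block device,
 * ramdisk, ...).  None of these names exist in CTL today.
 */
#include <sys/types.h>
#include <sys/uio.h>
#include <stdbool.h>
#include <stdint.h>

struct tgt_io;

typedef void tgt_io_done_t(struct tgt_io *io, int error);

/* One data-moving request, already stripped of protocol specifics. */
struct tgt_io {
	bool		 write;		/* false = read from the backend */
	uint64_t	 offset;	/* byte offset into the LUN/namespace */
	size_t		 length;	/* total transfer length */
	struct iovec	*iov;		/* scatter/gather list */
	int		 iovcnt;
	tgt_io_done_t	*done;		/* frontend's completion callback */
	void		*frontend_priv;	/* e.g. the SCSI CTIO or NVMe command */
};

/* What each backend would implement. */
struct tgt_backend_ops {
	int	(*submit)(void *be_softc, struct tgt_io *io);
	int	(*flush)(void *be_softc, tgt_io_done_t *done, void *priv);
};

Everything protocol-specific (command ordering, task management, reservations, LUN or namespace reporting) would stay on the frontend side of such an interface, which is why so little would actually be shared.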
Post by Scott Long
CTL stands for “CAM Target Layer”, but yes, it’s a Periph and it’s deeply tied to the SCSI protocol, even if it’s mostly transport agnostic. I guess the answer to your question depends on the scope of your contract. It would be ideal to refactor CTL into protocol-specific sub-modules, but that might take a significant amount of work, and might not be all that satisfying at the end. I’d probably just copy CTL into a new, independent module, start replacing SCSI protocol idioms with NVMe ones, and along the way look for low-hanging fruit that can be refactored into a common library.
Scott
Post by John Baldwin
One of the things I will be working on in the near future is NVMe over fabrics
support, and specifically over TCP as Chelsio NICs include NVMe offload support
(I think PDU encap/decap similar to the cxgbei driver for iSCSI offload).
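(For reference, every NVMe/TCP PDU starts with an 8-byte common header, so
the encap/decap the hardware would offload is essentially framing and
unframing structures like the one below. This is a simplified sketch based
on the NVMe/TCP transport spec, not code from any driver.)

/*
 * Simplified sketch of the NVMe/TCP common PDU header, based on the
 * NVMe/TCP transport spec; not taken from any driver.
 */
#include <stdint.h>

struct nvme_tcp_common_hdr {
	uint8_t		pdu_type;	/* ICReq, CapsuleCmd, H2CData, ... */
	uint8_t		flags;		/* e.g. header/data digest present */
	uint8_t		hlen;		/* length of the PDU header */
	uint8_t		pdo;		/* data offset within the PDU */
	uint32_t	plen;		/* total PDU length (little-endian) */
} __attribute__((__packed__));

/* A few of the PDU type values from the spec. */
#define	NVME_TCP_PDU_ICREQ		0x00
#define	NVME_TCP_PDU_ICRESP		0x01
#define	NVME_TCP_PDU_CAPSULE_CMD	0x04
#define	NVME_TCP_PDU_CAPSULE_RESP	0x05
#define	NVME_TCP_PDU_H2C_DATA		0x06
#define	NVME_TCP_PDU_C2H_DATA		0x07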
A question I have about this is whether it makes sense for NVMeoF target
support to make use of ctl. From what I can see, the code in ctl today
seems to be very SCSI specific, both in the kernel and in the userland
ctld, unlike the Linux target code, which appears to support both NVMeoF
and iSCSI in its ctld equivalent. Is the intention for there to be a
cleaner separation here, and if so, do you have any thoughts on what the
design would look like? Or should NVMeoF just be its own thing, separate
from ctl and ctld?
--
John Baldwin
--
Alexander Motin
--