
NVMe-over-TCP offered by Lightbits costs five times less than NVMe-over-FC and RoCE


Lightbits builds NVMe-over-TCP SAN clusters for Linux servers using Intel cards that accelerate network processing, delivering millions of IOPS for public and private cloud storage

    • By Yann Serra, LeMagIT
    • Antony Adshead, Storage Editor
    • August 15, 2022 15:15

      NVMe-over-TCP is five times cheaper than an equivalent NVMe-over-RoCE-on-Ethernet solution. That is the promise of Lightbits LightOS, which lets customers build a flash-based SAN storage cluster on commodity hardware with Intel NICs.

      To demonstrate the system, Lightbits configured LightOS on a three-node cluster using Intel Ethernet 100Gbps E810-CQDA2 cards at a press conference attended by Computer Weekly’s French sister publication, LeMagIT. The demonstration showed performance equivalent to NVMe-over-Fibre Channel or RoCE/Ethernet, both of which cost far more.

      NVMe-over-TCP runs on a standard Ethernet network, using common switches and server NICs. NVMe-over-Fibre Channel and NVMe-over-RoCE, by contrast, require expensive hardware but guarantee fast transfer rates. Their performance comes from avoiding the TCP protocol, whose packet processing takes time, dragging down transmission rates and slowing access. The benefit of the Intel Ethernet card is that it offloads part of this protocol processing to mitigate the effect.
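      To give a concrete sense of the client side, here is a minimal sketch of attaching an NVMe/TCP volume from a Linux host with the standard nvme-cli tool. The address, port and NQN below are placeholder values, not those of a real LightOS cluster, which publishes its own discovery endpoint.

```python
import subprocess

# Placeholder target details -- a real NVMe/TCP target (such as a LightOS
# cluster) advertises its own discovery address and NVMe Qualified Name.
TARGET_ADDR = "192.0.2.10"
TARGET_PORT = "4420"  # conventional NVMe/TCP port
TARGET_NQN = "nqn.2016-01.com.example:subsys1"

def attach_nvme_tcp_volume() -> None:
    """Attach a remote NVMe/TCP namespace as a local block device.

    Needs only the stock nvme-cli package and the nvme-tcp kernel module,
    i.e. plain Ethernet -- no Fibre Channel HBA or RDMA-capable NIC.
    """
    # List the subsystems the target exports over TCP.
    subprocess.run(
        ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )
    # Connect; the namespace then shows up as /dev/nvmeXnY like a local SSD.
    subprocess.run(
        ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR,
         "-s", TARGET_PORT, "-n", TARGET_NQN],
        check=True,
    )

if __name__ == "__main__":
    attach_nvme_tcp_volume()
```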

      “Our promise is that we can deliver a high-performance SAN on low-cost hardware,” said Kam Eshghi, Lightbits’ head of strategy. “We don’t sell an appliance that requires proprietary hardware. We provide a system that you can install on an available server and run on your network.”

    Cheaper private cloud storage

    Lightbits’ demo comprised 24 Linux servers, each equipped with a dual-port 25Gbps Ethernet card and each accessing 10 shared volumes on the cluster. Observed performance of the storage cluster reached 14 million IOPS and 53GBps on reads, 6 million IOPS and 23GBps on writes, and 8.4 million IOPS and 32GBps on mixed workloads.
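    As a rough plausibility check on those figures (assuming an even spread across clients, which the demo description does not state), each server sits well within its dual-port 25Gbps NIC:

```python
# Back-of-the-envelope check of the read figures, assuming an even spread.
SERVERS = 24
CLUSTER_READ_GBPS = 53.0   # gigabytes per second across the cluster
NIC_GBITS = 2 * 25         # dual-port 25Gbps Ethernet per server

per_server_gbytes = CLUSTER_READ_GBPS / SERVERS    # ~2.2 GB/s per client
per_server_gbits = per_server_gbytes * 8           # ~17.7 Gbit/s on the wire
utilisation = per_server_gbits / NIC_GBITS         # ~35% of NIC capacity

print(f"{per_server_gbytes:.1f} GB/s per server -> {per_server_gbits:.1f} "
      f"Gbit/s, {utilisation:.0%} of each dual-port NIC")
```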

    According to Eshghi, these performance levels are similar to those of NVMe SSDs installed directly in the server. The only downside is higher latency, but that amounts to 200 or 300 microseconds compared with 100 microseconds.

    “At this scale, the difference is negligible,” Eshghi said. “What matters to applications is latency below one millisecond.”

    In addition to inexpensive connectivity, LightOS offers features commonly found in products from major storage array manufacturers. These include managing SSDs as storage pools with hot-swappable drives, intelligently rebalancing data to reduce wear rates, and dynamic replication to avoid data loss in the event of unplanned downtime.

    “Lightbits presents up to 64,000 logical volumes to upstream servers,” said Abel Gordon, Lightbits’ lead system architect. “To present our cluster as a SAN to servers, we have a vCenter plugin, a Cinder driver for OpenStack and a CSI driver for Kubernetes.”
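    As a hedged sketch of the Kubernetes route Gordon describes: once a CSI driver is installed, exposing the cluster to pods comes down to registering a StorageClass that names the driver. The provisioner string and parameter below are illustrative placeholders, not Lightbits’ documented values.

```python
from kubernetes import client, config

# Illustrative placeholder -- the real provisioner name comes from the
# vendor's CSI driver documentation.
PROVISIONER = "csi.lightos.example.com"

def register_storage_class() -> None:
    """Register a StorageClass so PVCs provision volumes from the SAN."""
    config.load_kube_config()  # use load_incluster_config() inside a pod
    sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="nvme-tcp-example"),
        provisioner=PROVISIONER,
        parameters={"replica-count": "3"},  # hypothetical driver parameter
        reclaim_policy="Delete",
        volume_binding_mode="Immediate",
    )
    client.StorageV1Api().create_storage_class(sc)

if __name__ == "__main__":
    register_storage_class()
```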

    “We don’t yet support Windows servers,” Gordon said. “Our goal is to be an alternative solution for public and private cloud operators that sell virtual machines or containers as a service.”

    To this end, LightOS provides a management console that can assign different performance and capacity limits to different users, or serve different enterprise customers in public cloud scenarios. It also integrates Prometheus monitoring and Grafana dashboards.

    Working closely with Intel

    In another demo, a similar hardware cluster was shown, but running the open source Ceph object storage without the Intel NIC optimizations.

    In this demo, 12 Linux servers each ran 8 containers in Kubernetes, all accessing the storage cluster simultaneously. On mixed reads and writes, the Ceph deployment achieved about 4GBps, compared with about 20GBps for the Lightbits version using TLC (higher-performance) flash, or 15GBps using large-capacity QLC drives. Ceph is the storage Red Hat recommends for building private clouds.
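    Taken at face value (the test conditions beyond mixed reads and writes are not detailed), the quoted throughputs put the gap at roughly four to five times:

```python
# Ratio of the quoted mixed-workload throughputs, taken at face value.
CEPH_GBPS = 4.0          # Ceph on comparable hardware
LIGHTOS_TLC_GBPS = 20.0  # LightOS with higher-performance TLC flash
LIGHTOS_QLC_GBPS = 15.0  # LightOS with large-capacity QLC flash

print(f"TLC vs Ceph: {LIGHTOS_TLC_GBPS / CEPH_GBPS:.1f}x")  # 5.0x
print(f"QLC vs Ceph: {LIGHTOS_QLC_GBPS / CEPH_GBPS:.1f}x")  # 3.8x
```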

    “Lightbits’ close relationship with Intel enables it to optimize LightOS for the latest Intel products,” said Gary McCulley of Intel’s Data Center Product Group. “In practice, if you put the system on the latest generation of servers, you automatically get better storage performance than with the previous generation of processors and chips.”

    Intel promotes its latest components to integrators through turnkey server concepts. One of them is a 1U server with 10 hot-swap NVMe SSDs, two latest-generation Xeon processors and a new 800-series Ethernet card. Having designed this system with storage workloads in mind, Intel chose to run LightOS on it for testing.

    Intel’s 800-series Ethernet cards do not fully offload network protocol decoding, unlike the FPGA-based SmartNIC 500X or the future Mount Evans NIC, a DPU-type accelerator card that Intel calls an IPU.

    On the 800 series, the controller only accelerates the ordering of packets into queues, so that accesses from multiple servers do not bottleneck on one another. Intel calls this pre-IPU processing ADQ (Application Device Queues).

    However, McCulley promised that integration between LightOS and IPU-equipped cards is in the works, though more as a proof of concept than a fully developed product. Intel appears keen to commercialize its IPU-based NICs as NVMe-over-RoCE cards, which would make for a more expensive solution than the one Lightbits offers.
