ConnectX Ethernet Adapter Cards for OCP Spec 3.0


10/25/40/50/100/200 GbE Ethernet Adapter Cards in the Open Compute Project Spec 3.0 Form Factor

(For illustration only. Actual products may vary.)

Ethernet adapter cards in the OCP 3.0 form factor support speeds from 10 to 200 GbE. Combining leading features with best-in-class efficiency, Mellanox OCP 3.0 cards enable the highest data center performance and scale.

Mellanox 10, 25, 40, 50, 100 and 200 GbE adapter cards deliver industry-leading connectivity for performance-driven server and storage applications. Offering high bandwidth coupled with ultra-low latency, ConnectX adapter cards enable faster access and real-time response.

As part of its OCP offering, Mellanox provides a variety of OCP 3.0 adapter cards that deliver best-in-class performance and efficient computing through advanced acceleration and offload capabilities. These capabilities free up valuable CPU cycles for other tasks while increasing data center performance, scalability and efficiency. They include:

RDMA over Converged Ethernet (RoCE)
NVMe-over-Fabrics (NVMe-oF)
Virtual switch offloads (e.g., OVS offload) leveraging ASAP2 - Accelerated Switching and Packet Processing
GPUDirect communication acceleration
Mellanox Multi-Host for connecting multiple compute or storage hosts to a single interconnect adapter
Mellanox Socket Direct technology for improving the performance of multi-socket servers
Enhanced security solutions

Complete End-to-End Networking

ConnectX OCP adapter cards are part of Mellanox's 10, 25, 40, 50, 100 and 200 GbE end-to-end portfolio for data centers, which also includes switches, application acceleration packages, and cabling, delivering a unique price-performance value proposition for network and storage solutions. With Mellanox, IT managers can be assured of the highest performance, reliability and most efficient network fabric at the lowest cost for the best return on investment.

In addition, Mellanox NEO-Host management software greatly simplifies host network provisioning, monitoring and diagnostics with ConnectX OCP 3.0 cards, providing the agility and efficiency for scalability and future growth. Featuring an intuitive graphical user interface (GUI), NEO-Host provides in-depth visibility and host networking control. NEO-Host also integrates with Mellanox NEO, Mellanox's end-to-end data-center orchestration and management platform.

Open Compute Project Spec 3.0

The OCP NIC 3.0 specification extends the capabilities of the OCP NIC 2.0 design specification. OCP 3.0 defines a different form factor and connector style than OCP 2.0. The OCP 3.0 specification defines two basic card sizes: Small Form Factor (SFF) and Large Form Factor (LFF). Mellanox OCP 3.0 NICs are currently supported in SFF.**

** Future designs may utilize LFF to allow for additional PCIe lanes and/or Ethernet ports.

OCP 3.0 also provides additional board real estate, thermal capacity, electrical interfaces, network interfaces, host configuration and management. OCP 3.0 also introduces a new mating technique that simplifies FRU installation and removal and reduces overall downtime. The table below shows key comparisons between the OCP 2.0 and 3.0 specs. For more details, please refer to the Open Compute Project (OCP) website.

ConnectX Ethernet Adapter Benefits

Open Data Center Committee (ODCC) compatible
Supports the latest OCP NIC specifications
All platforms: x86, Power, Arm, compute and storage
Industry-leading performance
TCP/IP and RDMA - for I/O consolidation
SR-IOV virtualization technology: VM protection and QoS
Cutting-edge performance in virtualized Overlay Networks
Increased Virtual Machine (VM) count per server ratio

TARGET APPLICATIONS

Data center virtualization
Compute and storage platforms for public & private clouds
HPC, Machine Learning, AI, Big Data, and more
Clustered databases and high-throughput data warehousing
Latency-sensitive financial analysis and high frequency trading
Media & Entertainment
Telco platforms

Feature | OCP Spec 2.0 | OCP Spec 3.0
Dimensions | Non-rectangular (8000 mm2) | SFF: 76x115 mm (8740 mm2)
PCIe Lanes | Up to x16 | SFF: Up to x16
Maximum Power Capability | Up to ... for a PCIe x8 card; up to ... for a PCIe x16 card | SFF: Up to 80 W
Baseband Connector Type | Mezzanine (B2B) | Edge (... pitch)
Network Interfaces | Up to 2 SFP side-by-side or 2 QSFP belly-to-belly | Up to two QSFP in SFF, side-by-side
Expansion Direction | N/A | Side
Installation in Chassis | Parallel to front or rear panel | Perpendicular to front/rear panel
Hot Swap | No | Yes (pending server support)
Mellanox Multi-Host | Up to 4 hosts | Up to 4 hosts in SFF or 8 hosts in LFF
Host Management Interfaces | RBT, SMBus | RBT, SMBus, PCIe
Host Management Protocols | Not standard | DSP0267, DSP0248

Ethernet OCP 3.0 Adapter Cards - Specs & Part Numbers

The part-number table lists, for each configuration, the network speed, interface type, PCIe interface, ConnectX generation, Mellanox Multi-Host / Socket Direct support (a), crypto option (b) and default OPN (c). Configurations range from 2-port PCIe x8 ConnectX-4 cards to 1- and 2-port PCIe x16 ConnectX-5, ConnectX-6 and ConnectX-6 Dx cards. Default OPNs include MCX623432AC-ADAB, MCX623436AC-CDAB, MCX623435AC-VDAB and MCX613436A-VDAI; for other configurations, contact Mellanox.

(a) When using Mellanox Multi-Host or Mellanox Socket Direct in virtualization or dual-port use cases, some restrictions may apply. For further details, contact Mellanox Customer Support.
(b) Crypto-enabled cards also support secure boot and secure firmware update.
(c) The last digit of each OPN suffix indicates the OPN's default bracket option: B = Pull tab; ... = Thumbscrew; I = Internal Lock; E = Ejector Latch. For other bracket types, contact Mellanox.

Note: ConnectX-5 Ex is an enhanced-performance version that supports PCIe Gen4 and higher. Additional flavors with Mellanox Multi-Host, Mellanox Socket Direct, or crypto-disabled/enabled options are available; contact Mellanox for details.

Virtualization and Virtual Switching

Mellanox ConnectX Ethernet adapters provide comprehensive support for virtualized data centers with Single-Root I/O Virtualization (SR-IOV), allowing dedicated adapter resources and guaranteed isolation and protection for the virtual machines within the server. I/O virtualization gives data center managers better server utilization and LAN and SAN unification, while reducing cost, power and cable complexity.
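As an illustration of how SR-IOV is typically enabled on such an adapter, here is a minimal sketch, assuming a Linux host with the mlx5 driver loaded and SR-IOV already enabled in the adapter firmware and system BIOS; the interface name and VF count are placeholders:

```c
/* Minimal sketch: request SR-IOV virtual functions through the standard
 * Linux sysfs attribute. "eth0" and the count of 8 are placeholders. */
#include <stdio.h>

int main(void)
{
    const char *attr = "/sys/class/net/eth0/device/sriov_numvfs";
    FILE *f = fopen(attr, "w");
    if (!f) { perror(attr); return 1; }
    fprintf(f, "8\n");   /* ask the kernel to create 8 virtual functions */
    fclose(f);
    return 0;
}
```

Each resulting VF appears to the host as an independent PCIe function that can be assigned to a virtual machine with its own dedicated adapter resources.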

Further, virtual machines running in a server traditionally use multilayer virtual switch capabilities, such as Open vSwitch (OVS). Mellanox's ASAP2 - Accelerated Switching and Packet Processing technology allows the offloading of any implementation of a virtual switch or virtual router by handling the data plane in the NIC hardware, while keeping the control plane unmodified. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.

RDMA over Converged Ethernet (RoCE)

Mellanox RoCE does not require any network configuration, allowing for seamless deployment and efficient data transfers with very low latencies over Ethernet networks, a key factor in maximizing a cluster's ability to process data instantaneously. With the increasing use of fast and distributed storage, data centers have reached the point of yet another disruptive change, making RoCE a must in today's data centers.
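RoCE is consumed through the standard RDMA verbs API. The following minimal sketch (assuming libibverbs is installed; the device index and port number are illustrative) opens the first RDMA device and checks that its port reports the Ethernet link layer, meaning RDMA traffic on it runs as RoCE:

```c
/* Minimal libibverbs sketch: enumerate RDMA devices and report whether
 * port 1 of the first device uses the Ethernet link layer (RoCE). */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(list[0]);
    struct ibv_port_attr port;
    if (!ctx || ibv_query_port(ctx, 1, &port)) { fprintf(stderr, "query failed\n"); return 1; }

    printf("%s: link layer %s, state %s\n",
           ibv_get_device_name(list[0]),
           port.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet (RoCE)" : "InfiniBand",
           ibv_port_state_str(port.state));

    ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}
```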

Flexible Multi-Host Technology

Innovative Mellanox Multi-Host technology provides high flexibility and major savings in building next-generation, scalable, high-performance data centers. Mellanox Multi-Host connects multiple compute or storage hosts to a single interconnect adapter, separating the adapter's PCIe interface into multiple independent PCIe interfaces, without any performance degradation.

Security - From Zero Trust to Hero Trust

In an era where privacy of information is key and zero trust is the rule, Mellanox ConnectX OCP adapters offer a range of advanced built-in capabilities that bring security down to the end-points with unprecedented performance, and offer options for AES-XTS block-level data-at-rest encryption/decryption offload starting from ConnectX-6. Additionally, ConnectX-6 Dx includes IPsec and TLS data-in-motion inline encryption/decryption offload. ConnectX-6 Dx also enables a hardware-based L4 firewall, which offloads stateful connection tracking.

Mellanox ConnectX OCP adapters support secure firmware update, ensuring that only authentic images produced by Mellanox can be installed, regardless of whether the installation happens from the host, the network, or a BMC. For an added level of security, ConnectX-6 Dx uses an embedded Hardware Root-of-Trust (RoT) to implement secure boot.
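One common way such TLS inline offload is consumed on Linux is through kernel TLS (kTLS): the application completes the TLS handshake in user space and hands the negotiated AES-GCM key material to the kernel socket, and a capable adapter and driver can then encrypt records in hardware. The sketch below is a minimal, hedged example of that handoff; the fallback constant definitions and dummy key material are assumptions for illustration only.

```c
/* Minimal kTLS sketch: attach the "tls" upper-layer protocol to a connected
 * TCP socket and install a TX key. In real code the key, salt, IV and record
 * sequence come from the TLS handshake (e.g. via a user-space TLS library). */
#include <string.h>
#include <sys/socket.h>
#include <netinet/tcp.h>
#include <linux/tls.h>

#ifndef SOL_TLS
#define SOL_TLS 282
#endif
#ifndef TCP_ULP
#define TCP_ULP 31
#endif

int enable_ktls_tx(int sock)
{
    struct tls12_crypto_info_aes_gcm_128 ci;
    memset(&ci, 0, sizeof(ci));
    ci.info.version = TLS_1_2_VERSION;
    ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
    /* Dummy key material for illustration only. */
    memset(ci.key, 0x11, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
    memset(ci.salt, 0x22, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
    memset(ci.iv, 0x33, TLS_CIPHER_AES_GCM_128_IV_SIZE);
    memset(ci.rec_seq, 0x44, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

    if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
        return -1;
    if (setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci)) < 0)
        return -1;
    return 0;   /* subsequent send() data is emitted as TLS records */
}
```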

Storage

Mellanox adapters support a rich variety of storage protocols and enable partners to build hyperconverged platforms where the compute and storage nodes are co-located and share the same infrastructure. Leveraging RDMA, Mellanox adapters enhance numerous storage protocols, such as iSCSI over RDMA (iSER), NFS RDMA, and SMB Direct, to name a few. Moreover, ConnectX adapters also offer NVMe-oF protocols and offloads, enhancing the utilization of NVMe-based storage. Another storage-related hardware offload is the Signature Handover mechanism, based on an advanced T-10/DIF implementation.
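To make the NVMe-oF usage concrete, the sketch below (an illustration only, assuming a Linux target with the nvmet and nvmet-rdma modules loaded; the NQN, block device and address are placeholders) exports a local NVMe namespace over RDMA through the kernel's nvmet configfs interface:

```c
/* Hypothetical sketch: configure a Linux NVMe-oF target over RDMA by
 * creating entries in the nvmet configfs hierarchy. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_attr(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); exit(1); }
    fputs(val, f);
    fclose(f);
}

int main(void)
{
    const char *subsys = "/sys/kernel/config/nvmet/subsystems/nqn.2023-01.io.example:nvme1";
    const char *port   = "/sys/kernel/config/nvmet/ports/1";
    char path[512];

    mkdir(subsys, 0755);                                      /* create subsystem */
    snprintf(path, sizeof(path), "%s/attr_allow_any_host", subsys);
    write_attr(path, "1");

    snprintf(path, sizeof(path), "%s/namespaces/1", subsys);  /* add namespace 1 */
    mkdir(path, 0755);
    snprintf(path, sizeof(path), "%s/namespaces/1/device_path", subsys);
    write_attr(path, "/dev/nvme0n1");
    snprintf(path, sizeof(path), "%s/namespaces/1/enable", subsys);
    write_attr(path, "1");

    mkdir(port, 0755);                                        /* RDMA transport port */
    snprintf(path, sizeof(path), "%s/addr_trtype", port);
    write_attr(path, "rdma");
    snprintf(path, sizeof(path), "%s/addr_adrfam", port);
    write_attr(path, "ipv4");
    snprintf(path, sizeof(path), "%s/addr_traddr", port);
    write_attr(path, "192.168.1.10");
    snprintf(path, sizeof(path), "%s/addr_trsvcid", port);
    write_attr(path, "4420");

    snprintf(path, sizeof(path), "%s/subsystems/nqn.2023-01.io.example:nvme1", port);
    symlink(subsys, path);                                    /* bind subsystem to port */
    return 0;
}
```

An initiator on another host would then discover and connect to this target with standard tooling such as nvme-cli (nvme connect -t rdma ...), with the RDMA data path carried by RoCE on the adapter.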

Host Management

Mellanox host management sideband implementations enable remote monitoring and control through RBT, MCTP over SMBus, and MCTP over PCIe to a Baseboard Management Controller (BMC), supporting both the NC-SI and PLDM management protocols over these interfaces. Mellanox OCP 3.0 adapters support these protocols to offer numerous host management features, such as PLDM for Firmware Update, network boot in the UEFI driver, UEFI secure boot, and more.

Enhancing Machine Learning Application Performance

Mellanox adapters with built-in advanced acceleration and RDMA capabilities deliver best-in-class latency, bandwidth and message rates, together with lower CPU utilization. Mellanox PeerDirect technology with NVIDIA GPUDirect RDMA enables adapters to communicate peer-to-peer directly with GPU memory, without any interruption to CPU operations. Mellanox adapters also deliver the highest scalability, efficiency, and performance for a wide variety of applications, including bioscience, media and entertainment, automotive design, computational fluid dynamics and manufacturing, weather research and forecasting, as well as oil and gas industry modeling. Thus, Mellanox adapters are the best NICs for machine learning applications.
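As a sketch of the GPUDirect RDMA pattern (assuming CUDA and libibverbs are installed and the nvidia-peermem, formerly nv_peer_mem, kernel module is loaded; the device choice and buffer size are illustrative), a buffer allocated in GPU memory can be registered directly with the adapter, letting the NIC DMA to and from GPU memory without staging through host RAM:

```c
/* Sketch: allocate GPU memory with the CUDA runtime and register it for RDMA
 * with ibv_reg_mr, so the NIC can read/write it directly (GPUDirect RDMA). */
#include <stdio.h>
#include <infiniband/verbs.h>
#include <cuda_runtime.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) return 1;
    struct ibv_context *ctx = ibv_open_device(list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    void *gpu_buf = NULL;
    size_t len = 1 << 20;                       /* 1 MiB buffer on the GPU */
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) return 1;

    /* Register GPU memory; requires the peer-memory module to pin it. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }
    printf("registered GPU buffer: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}
```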

Mellanox Socket Direct

Mellanox Socket Direct technology brings improved performance to multi-socket servers by enabling direct access from each CPU in a multi-socket server to the network through its dedicated PCIe interface. With this type of configuration, each CPU connects directly to the network; this enables the interconnect to bypass the QPI (UPI) link and the other CPU, optimizing performance and improving latency. CPU utilization improves as each CPU handles only its own traffic, and not the traffic from the other CPU. Mellanox's OCP 3.0 cards include native support for Socket Direct technology for multi-socket servers and can support up to 8 sockets.

Software Support

All Mellanox adapter cards are supported by a full suite of drivers for major Linux distributions, Microsoft Windows, VMware vSphere and FreeBSD. Drivers are also available inbox in major Linux distributions, Windows and VMware.

Form Factors

In addition to the OCP Spec 3.0 cards, Mellanox adapter cards are available in other form factors to meet data centers' specific needs, including OCP Specification 2.0 Type 1 and Type 2 mezzanine adapter form factors, designed to mate into OCP servers.

