
ConnectX-5 EN Card: 100Gb/s Ethernet Adapter Card





PRODUCT BRIEF

Intelligent RDMA-enabled network adapter card with advanced application offload capabilities for High-Performance Computing, Cloud and Storage platforms.

ConnectX-5 EN supports two ports of 100Gb Ethernet connectivity, while delivering low, sub-600ns latency, extremely high message rates, and PCIe switch and NVMe over Fabric (NVMe-oF) offloads, providing the highest performance and most flexible solution for the most demanding applications and markets: Machine Learning, Data Analytics, and more.

HIGHLIGHTS

NEW FEATURES
- Tag matching and rendezvous offloads
- Adaptive routing on reliable transport
- Burst buffer offloads for background checkpointing
- NVMe over Fabric (NVMe-oF) offloads
- Back-end switch elimination by host chaining
- Embedded PCIe switch
- Enhanced vSwitch/vRouter offloads
- Flexible pipeline
- RoCE for overlay networks
- PCIe Gen 4 support

BENEFITS
- Up to 100Gb/s connectivity per port
- Industry-leading throughput, low latency, low CPU utilization and high message rate
- Maximizes data center ROI with Multi-Host technology
- Innovative rack design for storage and Machine Learning based on Host Chaining technology
- Smart interconnect for x86, Power, Arm, and GPU-based compute and storage platforms
- Advanced storage capabilities including NVMe over Fabric offloads
- Intelligent network adapter supporting flexible pipeline programmability
- Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)
- Enabler for efficient service chaining capabilities
- Efficient I/O consolidation, lowering data center costs and complexity

HPC ENVIRONMENTS

ConnectX-5 delivers high bandwidth, low latency, and high computation efficiency for high-performance, data-intensive and scalable compute and storage platforms.

ConnectX-5 offers enhancements to HPC infrastructures by providing MPI and SHMEM/PGAS and Rendezvous Tag Matching offload, hardware support for out-of-order RDMA Write and Read operations, as well as additional Network Atomic and PCIe Atomic operations support.

ConnectX-5 EN utilizes RoCE (RDMA over Converged Ethernet) technology, delivering low latency and high performance. ConnectX-5 enhances RDMA network capabilities by completing the Switch Adaptive-Routing capabilities and supporting data delivered out of order, while maintaining ordered completion semantics, providing multipath reliability and efficient support for all network topologies, including DragonFly and DragonFly+.

ConnectX-5 also supports Burst Buffer offload for background checkpointing without interfering with the main CPU operations, and the innovative transport service Dynamic Connected Transport (DCT) to ensure extreme scalability for compute and storage systems.
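For illustration of how applications consume these RDMA capabilities, the sketch below posts a one-sided RDMA Write through the standard libibverbs API and polls for its ordered completion. It assumes an already-connected queue pair and an out-of-band exchange of the remote buffer address and rkey; the function and variable names are ours, not from this brief.

```c
/* Minimal sketch: posting a one-sided RDMA Write on an already-connected
 * queue pair (e.g., over RoCE). Assumes the QP, CQ, and memory region were
 * set up earlier and the peer's buffer address and rkey were exchanged
 * out of band. Compile with -libverbs. */
#include <stdint.h>
#include <stdio.h>
#include <infiniband/verbs.h>

int rdma_write_once(struct ibv_qp *qp, struct ibv_cq *cq,
                    struct ibv_mr *mr, void *local_buf, size_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,   /* local source buffer */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,               /* from ibv_reg_mr() */
    };
    struct ibv_send_wr wr = {0}, *bad_wr = NULL;
    wr.wr_id               = 0x1234;
    wr.opcode              = IBV_WR_RDMA_WRITE;   /* one-sided write */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;   /* request a completion */
    wr.wr.rdma.remote_addr = remote_addr;         /* peer's virtual address */
    wr.wr.rdma.rkey        = rkey;                /* peer's MR key */

    if (ibv_post_send(qp, &wr, &bad_wr))
        return -1;

    /* Busy-poll the completion queue. Completions are reported in order
     * even if the fabric delivered data out of order, which is the
     * "ordered completion semantics" the brief describes. */
    struct ibv_wc wc;
    int n;
    while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
        ;
    if (n < 0 || wc.status != IBV_WC_SUCCESS) {
        fprintf(stderr, "RDMA write failed: %s\n",
                n < 0 ? "poll error" : ibv_wc_status_str(wc.status));
        return -1;
    }
    return 0;
}
```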

STORAGE ENVIRONMENTS

NVMe storage devices are gaining popularity, offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improved performance and lower latency.

Moreover, the embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols can leverage RoCE for high-performance storage access.
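As a hedged illustration of the host side of NVMe-oF over RDMA: on Linux, a host connects to a target by writing a connect string to the kernel's fabrics control device, the same mechanism the `nvme connect` utility uses underneath. The transport address, port, and subsystem NQN below are placeholders, not values from this brief.

```c
/* Minimal sketch: connecting a Linux host to an NVMe-oF target over RDMA
 * by writing a connect string to the kernel's fabrics control device.
 * Requires the nvme-rdma module loaded and root privileges. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder target: address, port, and NQN must match your setup. */
    const char *args = "transport=rdma,traddr=192.168.1.10,"
                       "trsvcid=4420,nqn=nqn.2018-01.example:subsys0";

    int fd = open("/dev/nvme-fabrics", O_RDWR);
    if (fd < 0) { perror("open /dev/nvme-fabrics"); return 1; }

    if (write(fd, args, strlen(args)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    /* On success the kernel reports the new controller instance. */
    char reply[256] = {0};
    if (read(fd, reply, sizeof(reply) - 1) > 0)
        printf("connected: %s\n", reply);  /* e.g. "instance=0,cntlid=1,..." */

    close(fd);
    return 0;
}
```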

A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

[Figure: NVMe-oF target offload. A block device or native application reaches local NVMe devices through the NVMe transport layer, or remote NVMe devices over RDMA via a fabric initiator and fabric target; the traditional SCSI/iSCSI/iSER path is shown alongside. Data-path I/O commands and data fetches are offloaded, while the control path (init, login, etc.) stays in target software. For illustration only; actual products may vary.]

ConnectX-5 enables an innovative storage rack design, Host Chaining, by which different servers can interconnect directly without involving the Top of the Rack (ToR) switch. Alternatively, the Multi-Host technology that was first introduced with ConnectX-4 can be used. Mellanox Multi-Host technology, when enabled, allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple, independent interfaces. With the various new rack design alternatives, ConnectX-5 lowers the total cost of ownership (TCO) in the data center by reducing CAPEX (cables, NICs, and switch port expenses), and by reducing OPEX through cutting down on switch port management and overall power usage.

[Figure: Host Chaining for Storage Backend, eliminating the backend switch, compared with traditional storage connectivity.]

CLOUD AND WEB 2.0 ENVIRONMENTS

Cloud and Web 2.0 customers that are developing their platforms on Software Defined Network (SDN) environments are leveraging the Operating System Virtual-Switching capabilities of their servers to enable maximum flexibility. Open V-Switch (OVS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. A virtual switch traditionally resides in the hypervisor, and switching is based on twelve-tuple matching on flows (a sketch of such a flow key appears after the SR-IOV example below). The virtual switch or virtual router software-based solution is CPU intensive, affecting system performance and preventing full utilization of the available bandwidth.

Mellanox Accelerated Switching And Packet Processing (ASAP2) Direct technology offloads the vSwitch/vRouter by handling the data plane in the NIC hardware while keeping the control plane unmodified. As a result, vSwitch/vRouter performance is significantly higher without the associated CPU load. The vSwitch/vRouter offload functions supported by ConnectX-5 include encapsulation and de-capsulation of Overlay Network headers (for example, VXLAN, NVGRE, MPLS, GENEVE, and NSH), stateless offloads of inner packets, packet header re-writes enabling NAT functionality, and more.

Moreover, the intelligent ConnectX-5 flexible pipeline capabilities, which include a flexible parser and flexible match-action tables, can be programmed, enabling hardware offloads for future protocols.

ConnectX-5 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. Moreover, with ConnectX-5 Network Function Virtualization (NFV), a VM can be used as a virtual appliance. With full data-path operation offloads, as well as hairpin hardware capability and service chaining, data can be handled by the Virtual Appliance with minimum CPU utilization.

With these capabilities, data center administrators benefit from better server utilization while reducing cost, power, and cable complexity, allowing more Virtual Appliances, Virtual Machines and more tenants on the same hardware.

[Figure: Para-Virtualized vSwitch (hypervisor vSwitch between VMs and the NIC) compared with SR-IOV (Virtual Functions (VFs) and the Physical Function (PF) exposed directly to VMs through the NIC eSwitch).]
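As a hedged illustration of the SR-IOV mode pictured above: on a Linux host, Virtual Functions are typically instantiated through the kernel's standard sysfs interface. The interface name and VF count below are placeholders, not values from this brief.

```c
/* Minimal sketch: instantiating SR-IOV Virtual Functions on a Linux host
 * through the standard sysfs interface. "eth0" is a placeholder for the
 * ConnectX PF netdev; requires root and SR-IOV enabled in firmware/BIOS. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return 1; }

    /* Ask the PF driver to spawn 4 VFs; each appears as its own PCIe
     * function that can be passed through to a VM. If VFs already exist,
     * the count must be reset to 0 before writing a new value. */
    fprintf(f, "4\n");
    if (fclose(f) != 0) { perror("write sriov_numvfs"); return 1; }
    return 0;
}
```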

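For the twelve-tuple matching mentioned in the cloud section above, the sketch below shows the kind of flow key a virtual switch matches per packet. The field set is an illustrative assumption modeled on common vSwitch keys, not the adapter's documented hardware format.

```c
/* Illustrative only: a typical flow key for the "twelve-tuple" matching a
 * virtual switch performs per packet. ASAP2 moves lookups on keys like
 * this from hypervisor software into the NIC hardware. */
#include <stdint.h>

struct flow_key {
    uint32_t in_port;     /*  1. ingress (virtual) port        */
    uint8_t  eth_src[6];  /*  2. source MAC                    */
    uint8_t  eth_dst[6];  /*  3. destination MAC               */
    uint16_t eth_type;    /*  4. EtherType                     */
    uint16_t vlan_tci;    /*  5. VLAN id and priority          */
    uint32_t ipv4_src;    /*  6. source IP                     */
    uint32_t ipv4_dst;    /*  7. destination IP                */
    uint8_t  ip_proto;    /*  8. L4 protocol (TCP/UDP/...)     */
    uint8_t  ip_tos;      /*  9. DSCP/ECN                      */
    uint8_t  ip_ttl;      /* 10. TTL                           */
    uint16_t l4_src;      /* 11. L4 source port                */
    uint16_t l4_dst;      /* 12. L4 destination port           */
};
```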
HOST MANAGEMENT

Mellanox host management and control capabilities include NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe - Baseboard Management Controller (BMC) interface - as well as PLDM for Monitor and Control (DSP0248) and PLDM for Firmware Update (DSP0267).

COMPATIBILITY

PCI Express Interface
- PCIe Gen 4
- PCIe Gen 3.0, 1.1 and 2.0 compatible
- 2.5, 5.0, 8, 16 GT/s link rate
- Auto-negotiates to x16, x8, x4, x2, or x1 lanes
- PCIe Atomic
- TLP (Transaction Layer Packet) Processing Hints (TPH)
- Embedded PCIe switch: up to 8 bifurcations
- PCIe switch Downstream Port Containment (DPC) enablement for PCIe hot-plug
- Access Control Service (ACS) for peer-to-peer secure communication
- Advance Error Reporting (AER)
- Process Address Space ID (PASID), Address Translation Services (ATS)
- IBM CAPI v2 support (Coherent Accelerator Processor Interface)
- Support for MSI/MSI-X mechanisms

Operating Systems/Distributions
- RHEL/CentOS
- Windows
- FreeBSD
- VMware
- OpenFabrics Enterprise Distribution (OFED)
- OpenFabrics Windows Distribution (WinOF-2)

Connectivity
- Interoperability with Ethernet switches (up to 100GbE)
- Passive copper cable with ESD protection
- Powered connectors for optical and active cable support
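For illustration, the negotiated link rate and width from the PCI Express list above can be checked at runtime on a Linux host through sysfs; the PCI address below is a placeholder (find the real one with lspci).

```c
/* Minimal sketch: reading the negotiated PCIe link rate and width for the
 * adapter from Linux sysfs. The device address 0000:3b:00.0 is a
 * placeholder for the card's actual bus/device/function. */
#include <stdio.h>

static void show(const char *attr)
{
    char path[128], val[64];
    snprintf(path, sizeof(path),
             "/sys/bus/pci/devices/0000:3b:00.0/%s", attr);
    FILE *f = fopen(path, "r");
    if (f && fgets(val, sizeof(val), f))
        printf("%-20s %s", attr, val);
    if (f)
        fclose(f);
}

int main(void)
{
    show("current_link_speed");  /* e.g. "8.0 GT/s" or "16.0 GT/s" */
    show("current_link_width");  /* "16" expected for full bandwidth */
    return 0;
}
```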

FEATURES

Ethernet
- 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
- IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
- IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet, supporting all FEC modes
- IEEE 802.3ba 40 Gigabit Ethernet
- IEEE 802.3ae 10 Gigabit Ethernet
- IEEE 802.3az Energy Efficient Ethernet
- IEEE 802.3ap based auto-negotiation and KR startup
- IEEE 802.3ad, 802.1AX Link Aggregation
- IEEE 802.1Q, 802.1P VLAN tags and priority
- IEEE 802.1Qau (QCN) Congestion Notification
- IEEE 802.1Qaz (ETS)
- IEEE 802.1Qbb (PFC)

Enhanced Features
- Dynamically Connected Transport (DCT)
- Enhanced Atomic operations
- Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
- On demand paging (ODP)
- MPI Tag Matching
- Rendezvous protocol offload
- Out-of-order RDMA supporting Adaptive Routing
- Burst buffer offload
- In-Network Memory registration-free RDMA memory access

Storage Offloads
- NVMe over Fabric offloads for target machine
- Erasure Coding offload: offloading Reed-Solomon calculations
- T10 DIF: signature handover operation at wire speed, for ingress and egress traffic
- Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

HPC Software Libraries
- Open MPI, IBM PE, OSU MPI (MVAPICH/2), Intel MPI
- Platform MPI, UPC, Open SHMEM
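For illustration of the MPI Tag Matching and Rendezvous protocol offloads listed above: MPI matches point-to-point messages by (source, tag, communicator), and the adapter can perform that matching in hardware. The sketch below is ordinary MPI, buildable with any of the listed libraries, and needs no changes to benefit from the offload.

```c
/* Minimal sketch: MPI messages are matched by tag, not by arrival order;
 * this matching is exactly what the adapter's tag-matching offload
 * accelerates. Build with mpicc, run with e.g. `mpirun -np 2 ./tagmatch`. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int a = 111, b = 222;
        /* Send two messages with distinct tags. */
        MPI_Send(&a, 1, MPI_INT, 1, /*tag=*/10, MPI_COMM_WORLD);
        MPI_Send(&b, 1, MPI_INT, 1, /*tag=*/20, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int x = 0, y = 0;
        /* Receive in the opposite order: the library (or the NIC, with
         * the offload) matches each receive to the right message by tag. */
        MPI_Recv(&y, 1, MPI_INT, 0, 20, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&x, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("tag 10 -> %d, tag 20 -> %d\n", x, y);
    }

    MPI_Finalize();
    return 0;
}
```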

CPU Offloads
- RDMA over Converged Ethernet (RoCE)
- TCP/UDP/IP stateless offload
- LSO, LRO, checksum offload
- RSS (also on encapsulated packets), TSS, HDS, VLAN and MPLS tag insertion/stripping, Receive flow steering

Overlay Networks
- RoCE over Overlay Networks
- Stateless offloads for overlay network tunneling protocols
- Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

Hardware-Based I/O Virtualization
- Single Root IOV
- Address translation and protection

Remote Boot
- Remote boot over Ethernet
- Remote boot over iSCSI

Management and Control
- NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe - Baseboard Management Controller interface
- PLDM for Monitor and Control (DSP0248)
- PLDM for Firmware Update (DSP0267)
- SDN management interface for managing the eSwitch
- I2C interface for device control and configuration
- General Purpose I/O pins
- SPI interface to Flash
- JTAG IEEE 1149.1 and IEEE 1149.6

