
ConnectX-4 Lx EN


ETHERNET ADAPTER CARDS PRODUCT BRIEF

10/25/40/50 Gigabit Ethernet adapter cards supporting RDMA, Overlay Networks Encapsulation/Decapsulation and more.

ConnectX-4 Lx EN Network Controller with 10/25/40/50Gb/s Ethernet connectivity addresses virtualized infrastructure challenges, delivering best-in-class performance to various demanding markets and applications. Providing true hardware-based I/O isolation with unmatched scalability and efficiency, it achieves the most cost-effective and flexible solution for Web 2.0, Cloud, data analytics, database, and storage platforms.

With the exponential increase in the usage of data and the creation of new applications, the demand for the highest throughput, lowest latency, virtualization and sophisticated data acceleration engines continues to rise. ConnectX-4 Lx EN enables data centers to leverage the world's leading interconnect adapter to increase operational efficiency, improve server utilization and maximize application productivity, while reducing total cost of ownership (TCO).

ConnectX-4 Lx EN provides an unmatched combination of 10, 25, 40, and 50GbE bandwidth, sub-microsecond latency and a 75 million packets per second message rate. It includes native hardware support for RDMA over Converged Ethernet, Ethernet stateless offload engines, Overlay Networks, and GPUDirect Technology.

HIGHLIGHTS

BENEFITS
- Highest performing boards for applications requiring high bandwidth, low latency and high message rate
- Industry leading throughput and latency for Web 2.0, Cloud and Big Data applications
- Smart interconnect for x86, Power, ARM, and GPU-based compute and storage platforms
- Cutting-edge performance in virtualized overlay networks
- Efficient I/O consolidation, lowering data center costs and complexity
- Virtualization acceleration
- Power efficiency

KEY FEATURES
- 10/25/40/50Gb/s speeds
- Single and dual-port options available
- Erasure Coding offload
- Virtualization
- Low latency RDMA over Converged Ethernet
- CPU offloading of transport operations
- Application offloading
- Mellanox PeerDirect communication acceleration
- Hardware offloads for NVGRE, VXLAN and GENEVE encapsulated traffic
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- RoHS-R6

I/O Virtualization

ConnectX-4 Lx EN SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 Lx EN gives data center administrators better server utilization while reducing cost, power, and cable complexity, allowing more virtual machines and more tenants on the same hardware.
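As an illustration of how SR-IOV virtual functions are typically brought up on a Linux host, the sketch below writes to the kernel's standard sriov_numvfs sysfs attribute. The interface name enp1s0f0 and the VF count of 4 are illustrative placeholders, not values from this brief; in practice provisioning is usually handled by the Mellanox driver tools or by orchestration software.

    /* Minimal sketch: enable SR-IOV virtual functions on a Linux host by
     * writing to the standard PCI sysfs attribute. Assumes the mlx5 driver
     * is loaded and the interface name below matches your adapter port.
     * Build: gcc -o enable_vfs enable_vfs.c   (run as root)
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Hypothetical port name and VF count; adjust for your system. */
        const char *path = "/sys/class/net/enp1s0f0/device/sriov_numvfs";
        int num_vfs = 4;

        FILE *f = fopen(path, "w");
        if (!f) {
            perror("open sriov_numvfs");
            return EXIT_FAILURE;
        }
        /* The kernel creates num_vfs virtual functions, each of which can be
         * passed through to a VM and appears to the guest as its own NIC. */
        fprintf(f, "%d\n", num_vfs);
        if (fclose(f) != 0) {
            perror("write sriov_numvfs");
            return EXIT_FAILURE;
        }
        printf("Requested %d virtual functions via %s\n", num_vfs, path);
        return EXIT_SUCCESS;
    }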

Overlay Networks

In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-4 Lx EN effectively addresses this by providing advanced NVGRE, VXLAN and GENEVE hardware offloading engines that encapsulate and de-capsulate the overlay protocol headers, enabling the traditional offloads to be performed on the encapsulated traffic for these and other tunneling protocols (GENEVE, MPLS, QinQ, and so on). With ConnectX-4 Lx EN, data center operators can achieve native performance in the new network architecture.
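For context on what the encapsulation/decapsulation engines operate on, the sketch below lays out the 8-byte VXLAN header (RFC 7348) that wraps each tenant frame inside an outer UDP packet with destination port 4789. This is only an illustration of the wire format the NIC parses so that checksum, LSO and RSS can be applied to the inner packet; it is not driver or firmware code.

    /* Sketch of the VXLAN header (RFC 7348) that ConnectX-4 Lx EN
     * encapsulates and decapsulates in hardware. The outer packet is
     * Ethernet/IP/UDP with destination port 4789; the inner, tenant-visible
     * frame follows this 8-byte header.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct vxlan_hdr {
        uint8_t flags;        /* 0x08 when a valid VNI is present */
        uint8_t reserved1[3];
        uint8_t vni[3];       /* 24-bit Virtual Network Identifier */
        uint8_t reserved2;
    };

    /* Fill in a VXLAN header for a given tenant network ID. */
    static void vxlan_hdr_init(struct vxlan_hdr *h, uint32_t vni)
    {
        memset(h, 0, sizeof(*h));
        h->flags  = 0x08;                 /* I flag: VNI field is valid */
        h->vni[0] = (vni >> 16) & 0xff;   /* 24 bits, network byte order */
        h->vni[1] = (vni >> 8)  & 0xff;
        h->vni[2] = vni & 0xff;
    }

    int main(void)
    {
        struct vxlan_hdr h;
        vxlan_hdr_init(&h, 5001);         /* example tenant network */
        printf("VXLAN header size: %zu bytes, VNI %u\n", sizeof(h),
               (unsigned)((h.vni[0] << 16) | (h.vni[1] << 8) | h.vni[2]));
        return 0;
    }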

High Speed Ethernet Adapter

ConnectX-4 Lx EN offers the most cost-effective Ethernet adapter solution for 10, 25, 40 and 50Gb/s Ethernet speeds, enabling seamless networking, clustering, or storage. The adapter reduces application runtime, and offers the flexibility and scalability to make infrastructure run as efficiently and productively as possible.

RDMA over Converged Ethernet (RoCE)

ConnectX-4 Lx EN supports RoCE specifications, delivering low-latency and high performance over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-4 Lx EN advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
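RoCE on ConnectX adapters is programmed through the standard libibverbs API, so resource setup looks the same as on InfiniBand. The sketch below only opens the first RDMA-capable device, allocates a protection domain and registers a buffer; queue pair creation, address exchange and posting work requests are omitted. Device selection and the buffer size are illustrative assumptions, not values from this brief.

    /* Minimal RoCE/verbs setup sketch: open an RDMA-capable device,
     * allocate a protection domain and register a memory region that the
     * adapter can later read/write directly, bypassing the host CPU.
     * Build: gcc -o roce_sketch roce_sketch.c -libverbs
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        /* Open the first device; real code would pick the right port/GID. */
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        if (!pd) { fprintf(stderr, "ibv_alloc_pd failed\n"); return 1; }

        size_t len = 4096;                    /* illustrative buffer size */
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) { perror("ibv_reg_mr"); return 1; }
        printf("registered %zu bytes on %s, lkey=0x%x rkey=0x%x\n",
               len, ibv_get_device_name(devs[0]), mr->lkey, mr->rkey);

        /* Next steps (not shown): create CQ and QP, exchange addresses,
         * post RDMA read/write work requests. */
        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }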

Distributed RAID

ConnectX-4 Lx EN delivers advanced Erasure Coding offloading capability, enabling distributed Redundant Array of Inexpensive Disks (RAID), a data storage technology that combines multiple disk drive components into a logical unit for the purposes of data redundancy and performance improvement. ConnectX-4 Lx EN's Reed-Solomon capability introduces redundant block calculations which, together with RDMA, achieve high-performance and reliable storage access.
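To make the erasure-coding idea concrete, the sketch below computes RAID-6-style P and Q parity over GF(2^8), a common two-parity instance of Reed-Solomon coding in which any two lost data or parity blocks can be reconstructed. This is a software illustration of the arithmetic only; on ConnectX-4 Lx EN the redundant-block calculation is performed by the adapter's offload engine and is configured through the driver, not through application code like this.

    /* Illustration of Reed-Solomon style parity over GF(2^8): compute
     * RAID-6 P (XOR) and Q (weighted) parity for a stripe of data blocks.
     * Reduction polynomial: x^8 + x^4 + x^3 + x^2 + 1 (0x11d).
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint8_t gf_mul(uint8_t a, uint8_t b)
    {
        uint8_t p = 0;
        while (b) {
            if (b & 1)
                p ^= a;
            uint8_t carry = a & 0x80;
            a <<= 1;
            if (carry)
                a ^= 0x1d;                 /* reduce modulo 0x11d */
            b >>= 1;
        }
        return p;
    }

    static uint8_t gf_pow2(unsigned i)     /* g^i for generator g = 2 */
    {
        uint8_t r = 1;
        while (i--)
            r = gf_mul(r, 2);
        return r;
    }

    #define NDATA 4
    #define BLOCK 8                        /* illustrative tiny block size */

    int main(void)
    {
        uint8_t data[NDATA][BLOCK], p[BLOCK], q[BLOCK];

        /* Fill the data blocks with arbitrary example bytes. */
        for (int d = 0; d < NDATA; d++)
            for (int b = 0; b < BLOCK; b++)
                data[d][b] = (uint8_t)(d * 17 + b);

        memset(p, 0, sizeof(p));
        memset(q, 0, sizeof(q));
        for (int d = 0; d < NDATA; d++) {
            uint8_t coeff = gf_pow2(d);            /* Q weight for block d */
            for (int b = 0; b < BLOCK; b++) {
                p[b] ^= data[d][b];                /* P = XOR of data blocks */
                q[b] ^= gf_mul(coeff, data[d][b]); /* Q = sum of g^d * data  */
            }
        }
        printf("P[0]=0x%02x Q[0]=0x%02x\n", p[0], q[0]);
        return 0;
    }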

Mellanox PeerDirect

PeerDirect communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 Lx EN advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
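One way to see what PeerDirect/GPUDirect removes is the memory-registration step: with a peer-memory kernel module loaded alongside the mlx5 driver (for example NVIDIA's nvidia-peermem), a GPU buffer can be registered with the verbs API directly, so RDMA transfers target GPU memory without an intermediate copy through host memory. The sketch below shows only that registration step on an assumed CUDA-capable system; the rest of the RDMA setup is omitted.

    /* Sketch: register GPU memory for RDMA so the NIC can DMA to/from it
     * directly (PeerDirect / GPUDirect RDMA). Assumes a peer-memory kernel
     * module (e.g. nvidia-peermem) is loaded alongside the mlx5 driver.
     * Build (illustrative): gcc gpu_reg.c -libverbs -lcudart \
     *   -I/usr/local/cuda/include -L/usr/local/cuda/lib64
     */
    #include <stdio.h>
    #include <cuda_runtime.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        if (!pd) { fprintf(stderr, "ibv_alloc_pd failed\n"); return 1; }

        void *gpu_buf = NULL;
        size_t len = 1 << 20;                    /* 1 MiB example buffer */
        if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
            fprintf(stderr, "cudaMalloc failed\n");
            return 1;
        }

        /* With peer-memory support, the device address can be registered
         * like ordinary host memory; subsequent RDMA reads/writes move data
         * straight between the wire and GPU memory, skipping host copies. */
        struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) { perror("ibv_reg_mr on GPU memory"); return 1; }
        printf("GPU buffer registered, rkey=0x%x\n", mr->rkey);

        ibv_dereg_mr(mr);
        cudaFree(gpu_buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }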

Storage Acceleration

Storage applications will see improved performance with the higher bandwidth ConnectX-4 Lx EN delivers. Moreover, standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

Software Support

All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, and Citrix XenServer. ConnectX-4 Lx EN supports various management interfaces and has a rich set of tools for configuration and management across operating systems.

COMPATIBILITY

PCI EXPRESS INTERFACE
- PCIe Gen 3.0 compliant, 1.1 and 2.0 compatible
- 2.5, 5.0, or 8.0GT/s link rate x8
- Auto-negotiates to x8, x4, x2, or x1
- Support for MSI/MSI-X mechanisms

CONNECTIVITY
- Interoperable with 10/25/40/50Gb Ethernet switches
- Passive copper cable with ESD protection
- Powered connectors for optical and active cable support

OPERATING SYSTEMS/DISTRIBUTIONS*
- RHEL/CentOS
- Windows
- FreeBSD
- VMware
- OpenFabrics Enterprise Distribution (OFED)
- OpenFabrics Windows Distribution (WinOF-2)

FEATURES SUMMARY*

ETHERNET
- 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
- IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
- 25G Ethernet Consortium 25, 50 Gigabit Ethernet
- IEEE 802.3ba 40 Gigabit Ethernet
- IEEE 802.3ae 10 Gigabit Ethernet
- IEEE 802.3az Energy Efficient Ethernet
- IEEE 802.3ap based auto-negotiation and KR startup
- Proprietary Ethernet protocols (20/40GBASE-R2, 50GBASE-R4)
- IEEE 802.3ad, 802.1AX Link Aggregation
- IEEE 802.1Q, 802.1P VLAN tags and priority
- IEEE 802.1Qau (QCN) Congestion Notification
- IEEE 802.1Qaz (ETS)
- IEEE 802.1Qbb (PFC)
- IEEE 802.1Qbg
- IEEE 1588v2
- Jumbo frame support

CPU OFFLOADS
- RDMA over Converged Ethernet (RoCE)
- TCP/UDP/IP stateless offload
- LSO, LRO, checksum offload
- RSS (can be done on encapsulated packet), TSS, HDS, VLAN insertion/stripping, Receive flow steering
- Intelligent interrupt coalescence

ENHANCED FEATURES
- Hardware-based reliable transport
- Collective operations offloads
- Vector collective operations offloads
- PeerDirect RDMA (aka GPUDirect) communication acceleration
- 64/66 encoding
- Hardware-based reliable multicast
- Extended Reliable Connected transport (XRC)
- Dynamically Connected transport (DCT)
- Enhanced Atomic operations
- Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
- On demand paging (ODP), registration-free RDMA memory access

STORAGE OFFLOADS
- RAID offload: erasure coding (Reed-Solomon) offload

OVERLAY NETWORKS
- Stateless offloads for overlay networks and tunneling protocols
- Hardware offload of encapsulation and decapsulation of NVGRE and VXLAN overlay networks

HARDWARE-BASED I/O VIRTUALIZATION
- Single Root IOV
- Multi-function per port
- Address translation and protection
- Multiple queues per virtual machine
- Enhanced QoS for vNICs
- VMware NetQueue support

VIRTUALIZATION
- SR-IOV: Up to 512 Virtual Functions
- SR-IOV: Up to 16 Physical Functions per host
- Virtualizing Physical Functions on a physical port
- SR-IOV on every Physical Function
- 1K ingress and egress QoS levels
- Guaranteed QoS for VMs

REMOTE BOOT
- Remote boot over Ethernet
- Remote boot over iSCSI
- PXE and UEFI

PROTOCOL SUPPORT
- OpenMPI, IBM PE, OSU MPI (MVAPICH/2), Intel MPI
- Platform MPI, UPC, Open SHMEM
- TCP/UDP, MPLS, VXLAN, NVGRE, GENEVE
- iSER, NFS RDMA, SMB Direct
- uDAPL

MANAGEMENT AND CONTROL INTERFACES
- NC-SI, MCTP over SMBus and MCTP over PCIe, Baseboard Management Controller interface
- SDN management interface for managing the eSwitch
- I2C interface for device control and configuration
- General Purpose I/O pins
- SPI interface to Flash
- JTAG IEEE 1149.1 and IEEE 1149.6

* This section describes hardware features and capabilities. Please refer to the driver release notes for feature availability.

ORDERING PART NUMBERS
MCX4111A-XCAT: ConnectX-4 Lx EN network interface card, 10GbE single-port SFP+, PCIe 3.0 x8, tall bracket, RoHS R6
MCX4121A-XCAT: ConnectX-4 Lx EN network interface card, 10GbE dual-port SFP+, PCIe 3.0 x8, tall bracket, RoHS R6
MCX4111A-ACAT: ConnectX-4 Lx EN network interface card, 25GbE single-port SFP28, PCIe 3.0 x8, tall bracket, RoHS R6
MCX4121A-ACAT: ConnectX-4 Lx EN network interface card, 25GbE dual-port SFP28, PCIe 3.0 x8, tall bracket, RoHS R6
MCX4131A-BCAT: ConnectX-4 Lx EN network interface card, 40GbE single-port QSFP28, PCIe 3.0 x8, tall bracket, RoHS R6
MCX4131A-GCAT: ConnectX-4 Lx EN network interface card, 50GbE single-port QSFP28, PCIe 3.0 x8, tall bracket, RoHS R6

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085

Tel: 408-970-3400  Fax: 408-970-3403

Copyright 2015. Mellanox Technologies. All rights reserved. Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, MLNX-OS, PhyX, SwitchX, UFM, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, CoolBox, FabricIT, Mellanox Federal Systems, Mellanox Software Defined Storage, MetroX, Open Ethernet, ScalableHPC and Unbreakable-Link are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. MLNX-67-11

