Intelligent ConnectX-5 adapter cards, the newest additions to the Mellanox Smart Interconnect suite supporting Co-Design and In-Network Compute, introduce new acceleration engines for maximizing High Performance, Web 2.0, Cloud, Data Analytics and Storage platforms. ConnectX-5 with Virtual Protocol Interconnect® supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, sub-600-nanosecond latency, and a very high message rate, plus PCIe switch and NVMe over Fabrics offloads, providing the highest-performance and most flexible solution for the most demanding applications and markets.

ConnectX-5 enables higher HPC performance with new Message Passing Interface (MPI) offloads, such as MPI Tag Matching and MPI AlltoAll operations, advanced dynamic routing, and new capabilities for performing various data algorithms.

Moreover, ConnectX-5 Accelerated Switching and Packet Processing (ASAP2™) technology enhances the offloading of virtual switches, such as Open vSwitch (OVS), resulting in significantly higher data transfer performance without overloading the CPU. Together with native RDMA and RoCE support, ConnectX-5 dramatically improves Cloud and NFV platform efficiency.

Mellanox also offers the ConnectX-5 Socket Direct™ card, which enables a 100Gb/s transmission rate for servers without x16 PCIe slots. The adapter's 16-lane PCIe bus is split into two 8-lane buses: one is accessible through a PCIe x8 edge connector, and the other through an x8 parallel connector to an Auxiliary PCIe Connection Card, with the two cards connected by a dedicated harness. The Socket Direct card also improves performance by giving each CPU in a dual-socket server direct access to the network through its own dedicated PCIe x8 interface.
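To make the MPI offloads mentioned above concrete, the following is a minimal C sketch of the two operations named in the text: tagged point-to-point messaging, whose tag matching ConnectX-5 can perform in the adapter, and an MPI_Alltoall exchange. The code itself is standard MPI and contains nothing ConnectX-5 specific; whether the hardware offload is actually used depends on the MPI library and interconnect stack it runs over, which is assumed here, not shown.

```c
/* Minimal MPI sketch: tagged send/receive and an all-to-all exchange.
 * Build with an MPI compiler wrapper, e.g. mpicc, and run with mpirun.
 * Any tag-matching or collective offload is applied by the underlying
 * MPI/interconnect stack, not by anything in this code. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Tagged point-to-point: rank 0 sends an integer with tag 42 to rank 1.
     * Hardware tag matching lets the adapter match the incoming message
     * against the posted receive instead of the host CPU doing it. */
    const int TAG = 42;
    if (size >= 2) {
        if (rank == 0) {
            int payload = 1234;
            MPI_Send(&payload, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int received = 0;
            MPI_Recv(&received, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d with tag %d\n", received, TAG);
        }
    }

    /* All-to-all: every rank sends one integer to every other rank. */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank * 100 + i;   /* value destined for rank i */
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```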