Lenovo Flex System IB6131 InfiniBand Switch
Work Faster and More Efficiently
The Flex System IB6131 InfiniBand Switch is designed to offer the performance you need to support clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications, helping to reduce task completion time and lower the cost per operation. The switch supports 40 Gbps QDR InfiniBand and can be upgraded to 56 Gbps FDR InfiniBand.
The Flex System IB6131 InfiniBand Switch installs in the Flex System chassis, providing a high-bandwidth, low-latency fabric for Enterprise Data Centers (EDC), high-performance computing (HPC), and embedded environments. Used together with IB6132 InfiniBand QDR and FDR dual-port mezzanine I/O adapters, the switch delivers significant performance improvements, reducing completion time and lowering the cost per operation.
Powered for HPC and Finance
- Ultra-high performance with full bisectional bandwidth at both Fourteen Data Rate (FDR) and Quad Data Rate (QDR) speeds
- Up to 18 uplink ports for 14 internal server ports, allowing high-speed throughput with zero oversubscription (uplink capacity meets or exceeds the aggregate server-facing bandwidth, so the fabric is non-blocking)
- Suited for clients running InfiniBand infrastructure in High Performance Computing and Financial Services
- When operating at FDR speed, less than 170 nanoseconds of measured node-to-node latency, nearly half the typical QDR InfiniBand latency
- Forward Error Correction (FEC) for link resiliency
- Low power consumption
- Scales to larger node counts to create a low-latency clustered solution with fewer packet hops
Rapid Response Time
As transaction volumes rise, your existing compute and storage clustering interconnects may have trouble keeping up. Yet response time matters: you must keep pace with the competition, and new regulations demand real-time risk analysis. You need to be able to scale your network and storage capabilities to meet the demands of your applications.
High Performance for the Most Challenging Tasks
The Flex System™ IB6131 InfiniBand Switch is designed to offer the performance you need to support clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications, reducing task completion time and lowering cost per operation. Virtual Protocol Interconnect also simplifies system development by serving multiple fabrics with one hardware design.
Computing Power and Efficiency
This switch is designed for low latency, high bandwidth, and computing efficiency in performance-driven server and storage clustering applications. Combined with the InfiniBand FDR adapter, your organization can achieve efficient computing by offloading protocol processing and data movement overhead, such as RDMA and Send/Receive semantics, from the CPU, leaving more processor power for the application.
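The following is a minimal sketch of that offload model using the libibverbs API from rdma-core, the standard user-space RDMA interface for Mellanox-based adapters; it is a generic illustration, not a Lenovo-specific procedure, and the device choice, buffer size, and access flags are assumptions. Once a buffer is registered with the HCA, the adapter can move its contents by DMA without consuming host CPU cycles.

```c
/* Sketch: register memory with an InfiniBand HCA via libibverbs so the
 * adapter, not the CPU, performs data movement (RDMA kernel bypass).
 * Build (assuming rdma-core is installed): cc offload_sketch.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]); /* first HCA (assumption) */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);              /* protection domain */

    /* Register a 4 KiB buffer: the HCA pins and maps it for DMA, so later
     * Send/Receive or RDMA read/write operations bypass the host CPU. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }

    printf("registered %zu bytes on %s (lkey=0x%x rkey=0x%x)\n",
           len, ibv_get_device_name(devs[0]), mr->lkey, mr->rkey);

    /* A full transfer would now create completion queues and queue pairs,
     * exchange the rkey with a peer, and post work requests. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```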
Tech Specs
Internal ports:
- Fourteen internal ports that can operate at 40 Gbps QDR or 56 Gbps FDR. An optional Feature-on-Demand (FoD) upgrade is required to enable ports to operate at 56 Gbps. FDR operation requires the IB6132 FDR InfiniBand Adapter (90Y3454).
- One internal 1 GbE port connected to the chassis management module.
External ports:
- Eighteen QSFP ports auto-sensing 10 Gbps, 20 Gbps, or 40 Gbps QDR (or 56 Gbps FDR with the optional upgrade), supporting QSFP copper direct-attach cables (DACs). DAC cables are not included and must be purchased separately.
- One RS-232 serial port (mini-USB connector) that provides an additional means to configure the switch module.
- One external Ethernet port with RJ-45 connector for switch configuration and management.
The InfiniBand QDR and FDR switches based on Mellanox technology are unmanaged switches:
- No embedded subnet manager.
- The switch requires subnet management from an external source, such as OpenSM running on a host in the fabric.
InfiniBand Trade Association (IBTA): 1.3 and 1.2.1 compliant.
PowerPC®-based MLNX-OS management.
InfiniBand: Auto-negotiation of 10 Gbps, 20 Gbps, 40 Gbps, or 56 Gbps (the sketch after this specification list shows how a host can read the negotiated rate).
Mellanox Quality of Service (QoS): Nine InfiniBand virtual lanes for all ports (eight data lanes and one management lane).
Management: Baseboard, performance, and device management agents for full InfiniBand in-band management.
Switching Performance: Simultaneous wire-speed any port to any port.
Addressing: Up to 48,000 unicast and 16,000 multicast addresses per subnet.
Switching Capacity: 2 Tbps for FDR and 1.44 Tbps for QDR
Standards Supported: IBTA (InfiniBand Trade Association) 1.3 compliant
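To complement the auto-negotiation entry above, here is a minimal sketch, assuming a Linux host with rdma-core (libibverbs) installed, of reading the negotiated link width and speed from port 1 of the first HCA; the device and port choices are illustrative. The encodings follow the IBTA convention, so a 4x FDR link works out to 4 × 14.0625 ≈ 56 Gbps.

```c
/* Sketch: read the auto-negotiated InfiniBand link rate from a host HCA
 * using libibverbs. Assumes the first device and port 1 (illustrative).
 * Build: cc link_rate.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) return 1;

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_port_attr attr;
    if (ibv_query_port(ctx, 1, &attr)) return 1;

    /* IBTA encodings: active_width 1=1x, 2=4x, 4=8x, 8=12x lanes */
    int lanes = attr.active_width == 1 ? 1 :
                attr.active_width == 2 ? 4 :
                attr.active_width == 4 ? 8 : 12;

    /* active_speed 1=SDR 2.5, 2=DDR 5.0, 4=QDR 10.0, 16=FDR 14.0625 Gbps/lane */
    double gbps = attr.active_speed == 1 ? 2.5 :
                  attr.active_speed == 2 ? 5.0 :
                  attr.active_speed == 4 ? 10.0 :
                  attr.active_speed == 16 ? 14.0625 : 0.0;

    printf("negotiated link: %dx lanes at %.4f Gbps/lane = %.1f Gbps\n",
           lanes, gbps, lanes * gbps);

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

On a 4x QDR link this prints 40.0 Gbps; on a 4x FDR link, about 56.3 Gbps.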
Supported InfiniBand I/O Adapter Cards:
- Flex System IB6132 2-port FDR InfiniBand Adapter
- Flex System IB6132 2-port QDR InfiniBand Adapter
- Flex System IB6132D 2-port FDR InfiniBand Adapter
Network Cabling Requirements:
InfiniBand:
- 1 m, 3 m, or 5 m InfiniBand QDR or 3 m InfiniBand FDR copper QSFP cables listed in the Supported cables section.
- Other IBTA compliant QSFP cables
External Ethernet RJ45 management port:
- Unshielded Twisted Pair (UTP) Category 6
- UTP Category 5e (100 meters (328.1 ft) maximum)
- UTP Category 5 (100 meters (328.1 ft) maximum)
RS-232 serial cable:
- Console cable DB9-to-mini-USB or RJ45-to-mini-USB (nonstandard use of USB connector) that comes with optional Flex System Management Serial Access Cable, 90Y9338
Warranty: There is a 1-year, customer-replaceable unit (CRU) limited warranty. When installed in a chassis, these switches assume your system’s base warranty and any Lenovo Services upgrade.
Physical Specifications:
These are the approximate dimensions and weight of the switch:
- Height: 30 mm (1.2 inches)
- Width: 401 mm (15.8 inches)
- Depth: 317 mm (12.5 inches)
- Weight: 3.7 kg (8.1 lb)
Shipping dimensions and weight (approximate):
- Height: 114 mm (4.5 in)
- Width: 508 mm (20.0 in)
- Depth: 432 mm (17.0 in)
- Weight: 4.1 kg (9.1 lb)