Family 9080+04 IBM Power E1080 Enterprise server
IBM United States Sales Manual
Revised: June 14, 2022
Product life cycle dates
| Type Model | Announced | Available | Marketing Withdrawn | Service Discontinued |
|---|---|---|---|---|
| 9080-HEX | 2021-09-08 | 2021-09-17 | - | - |
Abstract
Power E1080
Clients need applications and data to be enterprise-grade everywhere without adding complexity and cost. The Power E1080 is the newest addition to IBM Power, the industry's best-in-class server platform for security and reliability. The Power E1080 introduces the essential enterprise hybrid cloud platform--uniquely architected to help you securely and efficiently scale core operational and AI applications in a hybrid cloud. The Power E1080 simplifies end-to-end encryption and brings AI where your data resides for faster insights. This helps enable greater workload flexibility and agility while accomplishing more work. The Power E1080 can help you:
- Respond faster to business demands with unmatched performance for efficient scaling and consistent pay-for-use consumption across public and private clouds
- Protect data from core to cloud using full memory encryption at the processor level to support end-to-end security across public and private clouds without impacting performance
- Streamline insights and automation by running AI inferencing directly where your operational data resides
- Maximize availability and reliability with built-in advanced recovery and self-healing for infrastructure redundancy and disaster recovery in IBM Cloud
Power E1080 brings AI to where your operational data resides
You can drive business insights faster, meet service level agreements (SLAs), and eliminate security risk associated with data movement by bringing AI to where your data resides.
Each Power10 processor single chip module (SCM) contains two memory controllers. Four 10-core 3.65 - 3.90 GHz (max), four 12-core 3.6 - 4.15 GHz (max), or four 15-core 3.55 - 4.00 GHz (max) SCMs are used in each system node, providing 40 cores to a 160-core system (#EDP2), 48 cores to a 192-core system (#EDP3), or 60 cores to a 240-core system (#EDP4)(1). As few as 16 cores or up to 100% of the cores in the system can be activated. Increments of one core at a time are available through built-in Capacity Upgrade on Demand (CUoD) functions up to the full capacity of the system.
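To make the core arithmetic above concrete, the following minimal Python sketch (illustrative only, not an IBM configurator; the feature codes and core counts are taken from the text) computes installed cores for a processor feature and node count and checks the CUoD activation range of 16 cores up to full system capacity.

```python
# Illustrative sketch of Power E1080 core counts and the CUoD activation range.
# Not an IBM configurator; the values come from the text above.

CORES_PER_SCM = {"EDP2": 10, "EDP3": 12, "EDP4": 15}  # cores per Power10 SCM
SCMS_PER_NODE = 4
MIN_ACTIVATIONS = 16  # minimum permanent core activations per system


def installed_cores(feature: str, nodes: int) -> int:
    """Total physical cores for one to four system nodes of one processor feature."""
    if nodes not in range(1, 5):
        raise ValueError("A Power E1080 has one to four system nodes")
    return CORES_PER_SCM[feature] * SCMS_PER_NODE * nodes


def valid_activation(feature: str, nodes: int, activated: int) -> bool:
    """CUoD allows from 16 activated cores up to 100% of installed cores."""
    return MIN_ACTIVATIONS <= activated <= installed_cores(feature, nodes)


if __name__ == "__main__":
    print(installed_cores("EDP2", 4))       # 160-core system
    print(installed_cores("EDP3", 4))       # 192-core system
    print(installed_cores("EDP4", 4))       # 240-core system
    print(valid_activation("EDP4", 1, 16))  # True: the 16-core minimum
```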
The system control unit provides redundant FSPs, the operator panel, and the Vital Product Data (VPD).
An optional external DVD can be attached with a USB cable when a USB adapter is installed in a node.
The memory supported in this server consists of the next-generation differential dual inline memory modules (DIMMs) implemented by Power, called DDIMMs, which use DDR4 DRAM memory.
Power E1080 memory options are available as 128 GB (#EMC1), 256 GB (#EMC2), 512 GB (#EMC3), and 1024 GB (#EMC4) memory features. Each memory feature provides four DDIMMs. Each system node supports a maximum of 16 memory features and up to 64 DDIMM slots. Using 1024 GB DDIMM features yields a maximum of 16 TB per node. A two-node system has a maximum of 32 memory features and 32 TB capacity. A four-node system has a maximum of 64 TB capacity. Minimum memory activations of 50% of the installed capacity are required.
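As a worked example of the memory figures above, this short Python sketch (illustrative only, using the feature sizes quoted in the text) totals physical capacity for a per-node mix of memory features and derives the 50% minimum activation requirement.

```python
# Illustrative memory-capacity arithmetic for the Power E1080 (not an IBM tool).
# Each memory feature supplies four DDIMMs; sizes in GB are from the text above.

FEATURE_GB = {"EMC1": 128, "EMC2": 256, "EMC3": 512, "EMC4": 1024}
MAX_FEATURES_PER_NODE = 16  # 64 DDIMM slots / 4 DDIMMs per feature


def node_capacity_gb(features: dict[str, int]) -> int:
    """Physical capacity of one system node given {feature: quantity}."""
    if sum(features.values()) > MAX_FEATURES_PER_NODE:
        raise ValueError("A system node holds at most 16 memory features")
    return sum(FEATURE_GB[f] * qty for f, qty in features.items())


def minimum_activation_gb(total_gb: int) -> float:
    """At least 50% of installed capacity must carry memory activations."""
    return total_gb * 0.5


if __name__ == "__main__":
    # A node fully populated with 1024 GB features reaches 16 TB.
    full_node = node_capacity_gb({"EMC4": 16})
    print(full_node, full_node * 4)              # 16384 GB per node, 65536 GB (64 TB) for four nodes
    print(minimum_activation_gb(full_node * 4))  # 32768 GB of activations required
```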
The 19-inch PCIe 4U I/O expansion drawer (#EMX0) provides 12 additional slots for PCIe adapters. Up to four PCIe I/O expansion drawers can be attached per system node. For example, a two-node system can have a maximum of eight PCIe I/O expansion drawers for a total of 96 PCIe slots in the I/O drawers, leaving no PCIe slots available for other adapters in the system nodes.
Direct attached storage is supported with the EXP24SX SFF Gen2-bay drawer (#ESLS), an expansion drawer with 24 2.5-inch form-factor SAS bays.
IBM Power Private Cloud Solution with Dynamic Capacity
The Power Private Cloud Solution with Dynamic Capacity is an infrastructure offering that enables you to take advantage of cloud agility and economics while getting the same business continuity and flexibility that you already enjoy from Power servers. The Power Private Cloud Solution offers:
- Cost optimization with pay-for-use pricing
- Automated and consistent IT management with Red Hat Ansible for Power
- IBM Proactive Support for Power systems services
- IBM Systems Lab Services Assessment and implementation assistance
Both Elastic and Shared Utility Capacity options are now available on all Power E1080 systems.
Elastic Capacity on Power E1080 systems enables clients to deploy pay-for-use consumption of processor, memory and supported operating systems, by the day, across a collection of Power E1080 and Power E980 systems.
Shared Utility Capacity on Power E1080 systems provides enhanced multisystem resource sharing and by-the-minute tracking and consumption of compute resources across a collection of systems within a Power Enterprise Pool (2.0). It delivers a complete range of flexibility to tailor initial system configurations with the right mix of purchased and pay-for-use consumption of processor, memory, and software. Clients with existing Power Enterprise Pools of Power E980 systems can simply add Power E1080 systems into their pool and migrate to them at the rate and pace of their choosing, as any Power E980 and Power E1080 server may seamlessly interoperate and share compute resources within the same pool.
A Power Private Cloud Solution infrastructure consolidated onto Power E1080 systems has the potential to greatly simplify system management so IT teams can focus on optimizing their business results instead of moving resources around within their data center.
Shared Utility Capacity resources are easily tracked by virtual machine (VM) and monitored by an IBM Cloud Management Console (CMC), which integrates with local Hardware Management Consoles (HMC) to manage the pool and track resource use by system and VM, by the minute, across a pool.
You no longer need to worry about overprovisioning capacity on each system to support growth, as all available processor and memory on all systems in a pool are activated and available for use.
Base Capacity for processor, memory, and supported operating system entitlement resources is purchased on each Power E980 and Power E1080 system and is then aggregated across a defined pool of systems for consumption monitoring.
Metered Capacity is the additional installed processor and memory resource above each system's Base Capacity. It is activated and made available for immediate use when a pool is started, then monitored by the minute by a CMC.
Metered resource usage is charged only for minutes exceeding the pool's aggregate Base resources, and usage charges are debited in real-time against your purchased Capacity Credits (5819-CRD) on account.
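The following Python sketch is a simplified, hypothetical model of the by-the-minute metering described above: each minute of aggregate pool usage is compared against the pool's Base Capacity, and only the excess is debited against purchased Capacity Credits. The rate and credit unit are placeholders, not IBM pricing.

```python
# Hypothetical model of Shared Utility Capacity metering (not IBM's billing logic).
# Per-minute pool usage above the aggregate Base Capacity is debited from credits.

def metered_charges(base_cores: float, usage_per_minute: list[float],
                    credits_per_core_minute: float) -> float:
    """Return Capacity Credits consumed over the sampled minutes."""
    overage = sum(max(0.0, used - base_cores) for used in usage_per_minute)
    return overage * credits_per_core_minute


if __name__ == "__main__":
    # Pool with 100 base cores; three sampled minutes of aggregate usage.
    samples = [90.0, 120.0, 150.0]  # cores in use during each sampled minute
    print(metered_charges(100.0, samples, credits_per_core_minute=0.01))
    # Only the 20 + 50 core-minutes above base are charged -> 0.7 credits
```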
IBM offers a Private Cloud Capacity Assessment and Implementation Service performed by Systems Lab Services professionals, which can be preselected at time of purchase or requested for qualifying Power E1080 servers.
Power to Cloud services
To assist clients with their move to the cloud, IBM is bundling 10,000 points with every Power E1080 server purchase that can be redeemed for onsite cloud deployment services. For additional details, see the IBM Power to Cloud Reward Program website. For those clients looking to create their own private cloud, expert services are available for cloud provisioning and automation with IBM Cloud PowerVC Manager with a heavy focus on creating and supporting a DevOps cloud implementation.
For those clients looking for a hybrid cloud solution, Design for Hybrid Cloud Workshop services are available to help you produce best-of-breed applications using IBM API Connect and IBM Cloud with IBM Power.
To learn more about all the new cloud capabilities that come with the Power E1080 server, see the IBM Power Enterprise Cloud Index website.
CMC for Power
The CMC is a cloud-native platform that provides apps that give powerful insights into your Power infrastructure across data centers and geographies. With no additional software or infrastructure setup, you can get single pane of glass views of your inventory, software levels, and resource capacity utilization, as well as launch-in-context of your on-premises software, such as IBM PowerVC and IBM PowerHA.
Power E1080 server Power10 hardware components
- Ten-, twelve-, or fifteen-core processors
- Up to 240 Power10 processor cores in one to four system nodes; up to 64 TB of 2933 MHz, DDR4 DRAM memory, and six PCIe Gen4 x16 or PCIe Gen5 x8 and two PCIe Gen5 x8 I/O expansion slots per system node enclosure, with a maximum of 32 per system
- Redundant clocking in each system node
- Four non-volatile memory express (NVMe) drive bays per system node for boot purposes
- System control unit, providing redundant system master FSP and support for the operations panel, the system VPD, and external attached DVD
- 19-inch PCIe Gen3 4U I/O expansion drawer and PCIe fan-out modules, supporting a maximum of 192 PCIe slots and four I/O expansion drawers per node.
- PCIe Gen1, Gen2, Gen3, Gen4, and Gen5 adapter cards supported in the system node, and PCIe Gen1, Gen2, Gen3, and Gen4 adapter cards supported in I/O expansion drawer
- EXP24SX SFF drawer with 24 2.5-inch form-factor SAS bays
- Dynamic LPAR support for adjusting workload placement of processor and memory resources
- CoD for processors and memory to help respond more rapidly and seamlessly to changing business requirements and growth
- Active Memory Expansion (AME) that is optimized onto the processor chip
- Active Memory Mirroring (AMM) to enhance resilience by mirroring critical memory used by the PowerVM hypervisor.
- Power Enterprise Pools that support unsurpassed enterprise flexibility for workload balancing and system maintenance
Note: (1) EDP4 is not available to order in China.
Model abstract 9080-HEX
The Power E1080 server provides the following underlying Power10 hardware components:
- The most powerful and scalable server in the IBM Power portfolio:
- Up to 240 Power10 technology-based processor cores
- Up to 64 TB memory
- Up to 32 PCIe Gen4 x16 / PCIe Gen5 x8 slots in system nodes
- Up to 192 PCIe Gen3 slots with expansion drawers
- Over 4,000 directly attached SAS disks or solid-state drives (SSDs)
- Up to 1,000 VMs (LPARs) per system
- System control unit, providing redundant system master Flexible Service Processor (FSP)
The Power E1080 supports:
- IBM AIX, IBM i, and Linux environments
- Capacity on demand (CoD) processor and memory options
- IBM Power System Private Cloud Solution with Dynamic Capacity
Highlights
The IBM Power E1080, the most powerful and scalable server in the IBM Power portfolio, provides the following underlying hardware components:
- Up to 240 Power10 technology-based processor cores
- Up to 64 TB memory
- Up to 32 Peripheral Component Interconnect Express (PCIe) Gen5 slots in system nodes
- Up to 192 PCIe Gen3 slots with expansion drawers
- Over 4,000 directly attached serial-attached SCSI (SAS) disks or solid-state drives (SSDs)
- Up to 1,000 virtual machines (VMs) per system
- System control unit, providing redundant system master Flexible Service Processor (FSP)
The Power E1080 supports:
- IBM AIX, IBM i, and Linux environments
- Capacity on demand (CoD) processor and memory options
- IBM Power System Private Cloud Solution with Dynamic Capacity
Description
Security, operational efficiency, and real-time intelligence to respond quickly to market changes are now nonnegotiable for IT. In an always-on environment of constant change, you need to automate and accelerate critical operations, while ensuring 24/7 availability and staying ahead of cyberthreats. You need applications and data to be enterprise-grade everywhere, but without adding complexity and cost.
Power servers are already the most reliable and secure in their class. Today, the new Power E1080 extends that leadership and introduces the essential enterprise hybrid cloud platform--uniquely architected to help you securely and efficiently scale core operational and AI applications anywhere in a hybrid cloud. Now you can encrypt all data simply without management overhead or performance impact and drive insights faster with AI at the point of data. You can also gain workload deployment flexibility and agility with a single hybrid cloud currency while doing more work.
Power E1080 feature summary
The following features are available on the Power E1080 server:
- One to four 5U system nodes
- The Power E1080 server will support three and four system nodes by December 10, 2021.
- One 2U system control unit
- One to four processor features per system with four single-chip modules (SCMs) per feature:
- 3.65 - 3.90 GHz, 40-core Power10 processor (#EDP2)
- 3.6 - 4.15 GHz, 48-core Power10 processor (#EDP3)
- 3.55 - 4.00 GHz, 60-core Power10 processor (#EDP4)(1)
- CoD processor core activation features available on a per-core basis
- 64 DDIMM slots per system node
- DDR4 DDIMM memory cards:
- 128 GB (4 x 32 GB), (#EMC1)
- 256 GB (4 x 64 GB), (#EMC2)
- 512 GB (4 x 128 GB), (#EMC3)
- 1024 GB (4 x 256 GB), (#EMC4)
- CoD memory activation features include:
- 100 GB Mobile Memory Activations (#EDAB)
- 500 GB Mobile Memory Activations (#EMBK)
- AME optimized onto the processor chip (#EM8F)
- Six PCIe Gen4 x16 or PCIe Gen5 x8 and two PCIe Gen5 x8 I/O low-profile expansion slots per system node (maximum 32 in a 4-node system)
- One USB port to support external attached DVD when a USB adapter is installed in a node
- Redundant hot-swap AC power supplies in each system node drawer
- Two HMC ports in the system control unit
- Optional PCIe I/O expansion drawer with PCIe slots:
- Zero to four drawers per system node drawer (#EMX0).
- Each I/O drawer holds one or two 6-slot PCIe fan-out modules (#EMXH).
- Each fan-out module attaches to the system node through a PCIe optical cable adapter (#EJ24).
System nodes
Each 5 EIA or 5U system node of the server has four air-cooled SCMs optimized for performance and scalability. The Power E1080 SCMs can have ten, twelve, or fifteen Power10 cores running at up to 4.15 GHz, with simultaneous multithreading that executes up to eight threads per core. Each SCM has dual memory controllers to deliver up to 409 GBps of peak memory bandwidth per socket, or 1636 GBps per node. Using PCIe Gen5 I/O controllers, which are also integrated onto each SCM to further reduce latency, up to 576 GBps of peak I/O bandwidth is available per node. This system bandwidth helps provide maximum processor performance, enabling applications to run faster and be more responsive.
Each system node has 64 DDIMM slots and can support up to 16 TB of DDR4 memory. Thus, a four-node server can have up to 64 TB of memory. The system node has four internal NVMe U.2 (2.5-in. 7 mm form factor) SSDs. Each SSD is driven from a x4 PCIe Gen4 connection. Each system node has eight low-profile PCIe slots, of which six are Gen4 x16 or Gen5 x8 and two are Gen5 x8. Thus, a four-node server can have up to 32 PCIe slots. PCIe expansion units can optionally expand the number of PCIe slots on the server.
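To make the per-node and per-system maximums above concrete, here is a small illustrative Python calculation; the values are copied from the text and this is not an IBM sizing tool.

```python
# Illustrative per-node and four-node maximums for the Power E1080,
# using the figures quoted in the text above (not an IBM sizing tool).

NODES_MAX = 4
MEMORY_TB_PER_NODE = 16          # 64 DDIMM slots populated with 1024 GB features
PCIE_SLOTS_PER_NODE = 8          # six Gen4 x16 / Gen5 x8 plus two Gen5 x8
MEM_BW_GBPS_PER_SOCKET = 409     # peak memory bandwidth per SCM
SOCKETS_PER_NODE = 4
IO_BW_GBPS_PER_NODE = 576        # peak PCIe I/O bandwidth per node

print("Max memory:", MEMORY_TB_PER_NODE * NODES_MAX, "TB")                       # 64 TB
print("Max node PCIe slots:", PCIE_SLOTS_PER_NODE * NODES_MAX)                   # 32
print("Memory BW per node:", MEM_BW_GBPS_PER_SOCKET * SOCKETS_PER_NODE, "GBps")  # 1636
print("I/O BW for four nodes:", IO_BW_GBPS_PER_NODE * NODES_MAX, "GBps")         # 2304
```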
A system node is ordered using a processor feature. Each processor feature will deliver a set of four identical SCMs in one system node. All processor features in the system must be identical. Cable features are required to connect system node drawers to the system control unit and to other system nodes.
Processor core activations
Each Power E1080 server requires a minimum of sixteen permanent processor core activations, using either static activations or Linux on Power activations. This minimum is per system, not per node. The rest of the cores can be permanently or temporarily activated or remain inactive until needed. The activations are not specific to hardware cores, SCMs, or nodes. They are known to the system as a total number of activations of different types and are used or assigned by the Power hypervisor appropriately.
A variety of activations fit different usage and pricing options. Static activations are permanent and support any type of application environment on this server. Mobile activations are ordered against a specific server, but they can be moved to any server within the Power Enterprise Pool and can support any type of application.
| 60-core (#EDP4)(2) | 48-core (#EDP3) | 40-core (#EDP2) |
|---|---|---|
| 1-core static activation (#EDPD)(2) | 1-core static activation (#EDPC) | 1-core static activation (#EDPB) |
| 1-core Power Linux (#ELCM)(2) | 1-core Power Linux (#ELCQ) | 1-core Power Linux (#ELCL) |
Note: (2) Features EDP4, EDPD, and ELCM are not available to order in China.
Memory
Differential DIMMs (DDIMMs) are extremely high-performance, high-reliability, intelligent DRAM memory modules. They employ DDR4 technology to provide this performance. DDIMMs are placed in DDIMM slots in the system node.
Each system node has 64 memory DDIMM slots, and at least half of the memory slots are always physically filled. Sixteen DDIMM slots are local to each of the four SCMs in the server, but SCMs and their cores have access to all the other memory in the server. When filling the remaining memory slots of an SCM, DDIMMs must be added in quantities of four. Thus, the DDIMM slots of each SCM are from 50% to 100% filled. The system node (four SCMs) can have 32, 36, 40, 44, 48, 52, 56, 60, or 64 DDIMMs physically installed (quad plugging rules).
To assist with the quad plugging rules, four DDIMMs are ordered using one memory feature number. Select from the 128 GB feature EMC1 (4 x 32 GB DDR4), 256 GB feature EMC2 (4 x 64 GB DDR4), 512 GB feature EMC3 (4 x 128 GB DDR4), or 1024 GB feature EMC4 (4 x 256 GB DDR4).
All DDIMMs on the same SCM must be identical, so if an SCM uses eight DDIMMs, both memory features on that SCM must be identical. A different SCM in the same system node can use a different memory feature. For example, one system node could technically use 128 GB, 256 GB, 512 GB, and 1024 GB memory features.
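A minimal sketch of the plugging rules just described, assuming the quad increments and per-SCM-identical constraints from the text; this is illustrative only, not IBM's configurator logic.

```python
# Illustrative check of the DDIMM plugging rules described above (not an IBM tool).
# Per SCM: 16 slots, filled in quads, at least half populated, all DDIMMs identical.

def scm_plugging_ok(dimm_sizes_gb: list[int]) -> bool:
    """Validate the DDIMMs plugged behind one SCM."""
    count = len(dimm_sizes_gb)
    if count < 8 or count > 16 or count % 4 != 0:
        return False                      # 8, 12, or 16 DDIMMs per SCM
    return len(set(dimm_sizes_gb)) == 1   # all DDIMMs on an SCM are identical


def node_plugging_ok(scms: list[list[int]]) -> bool:
    """A system node has four SCMs; each SCM may use a different feature size."""
    return len(scms) == 4 and all(scm_plugging_ok(s) for s in scms)


if __name__ == "__main__":
    node = [[32] * 8, [64] * 16, [128] * 12, [256] * 16]  # mixed features, valid
    print(node_plugging_ok(node))                          # True
    print(scm_plugging_ok([32] * 4 + [64] * 4))            # False: mixed sizes on one SCM
```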
To provide more flexible pricing, memory activations are ordered separately from the physical memory and can be permanent or temporary. Activation features can be used on DDR4 memory features and used on any size memory feature. Activations are not specific to a DDIMM, but they are known as a total quantity to the server. The Power hypervisor determines what physical memory to use.
Memory activation features are:
- 1 GB Memory Activations (#EMAZ), static
- 100 GB Memory Activations (#EMQZ), static
- 100 GB Mobile Memory Activations (#EDAB)
- 500 GB Memory Activations for Power Linux (#ELME)
A minimum of 50% of the total physical memory capacity of a server must have permanent memory activations ordered for that server. For example, a server with a total of 8 TB of physical memory must have at least 4 TB of permanent memory activations ordered for that server. These activations can be static, mobile, or Linux on Power. At least 25% must be static activations or Linux on Power activations. For example, a server with a total of 8 TB physical memory must have at least 2 TB of static activations or Linux on Power activations. The 50% minimum cannot be fulfilled using mobile activations ordered on a different server.
The minimum activations ordered with MES orders of additional physical memory features will depend on the existing total installed physical memory capacity and the existing total installed memory activation features. If you already have installed more than 50% activations for your existing system, then you can order less than 50% activations for the MES ordered memory. The resulting configuration after the MES order of physical memory and any MES activations must meet the same 50% and 25% minimum rules above.
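The 50% and 25% rules above reduce to a simple check on the resulting configuration, including after an MES order. The sketch below is illustrative only; capacities are in GB and the activations are assumed to be ordered on this server.

```python
# Illustrative check of the memory activation minimums described above (not an IBM tool).

def activation_minimums_ok(installed_gb: int, static_or_linux_gb: int,
                           mobile_gb: int) -> bool:
    """Permanent activations >= 50% of installed; static/Linux >= 25% of installed."""
    permanent = static_or_linux_gb + mobile_gb
    return (permanent >= 0.5 * installed_gb
            and static_or_linux_gb >= 0.25 * installed_gb)


if __name__ == "__main__":
    # 8 TB installed: needs >= 4 TB permanent activations, >= 2 TB static/Linux.
    print(activation_minimums_ok(8192, static_or_linux_gb=2048, mobile_gb=2048))  # True
    print(activation_minimums_ok(8192, static_or_linux_gb=1024, mobile_gb=3072))  # False
```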
For the best possible performance, it is generally recommended that memory be installed evenly across all system node drawers and all SCMs in the system. Balancing memory across the installed system planar cards enables memory access in a consistent manner and typically results in better performance for your configuration.
Though maximum memory bandwidth is achieved by filling all the memory slots, plans for future memory additions should be considered when deciding which memory feature size to use at the time of initial system order.
The AME is an option that can increase the effective memory capacity of the system. See the AME information later in this section.
Power Enterprise Pools with Mobile and Shared Utility Capacity
Power Enterprise Pools 2.0 is the strategic, automated resource sharing technology designed for Power E980 and E1080 systems.
The following capabilities are designed to provide a smooth migration path from Power E980 systems to Power E1080 systems within the same Power Enterprise Pool:
- Capacity Credits for Power (5819-CRD) may be applied to a Power Enterprise Pool (2.0) containing a combination of Power E1080 and E980 systems. Metered Capacity resources consumed will be debited at the same rate for both E1080 and E980 systems.
- For each Base Activation feature purchased new on a Power E1080 server that is replacing a Power E980 server in the same Power Enterprise Pool, up to three (3) Base Activation features may be exchanged from the E980 server for three (3) new, corresponding Power E1080 Base Activation features, at no additional charge, when the Power E980 system is being removed from the pool.
- To support customers migrating to Power E1080 from Power E980 systems in a Power Enterprise Pool (1.0) with Mobile Capacity, a one- time migration is being enabled to allow a quantity of Power9 Mobile Processor and Memory activation features, purchased initially on a Power E980 system, to migrate and convert to similar features on a Power E1080 system, within the same Power Enterprise Pool, at no additional charge, when the Power E980 system is being removed from the pool.
- All offers of exchange of Base Capacity features and migration of Mobile Capacity features are designed to be executed via the Entitled Systems Support portal, and are subject to its availability within a country.
System control unit
The 2U system control unit provides redundant system master FSP. Additionally, it contains the operator panel and the system VPD. One system control unit is required for each server. A unique feature number is not used to order the system control unit. One is shipped with each Power E1080 server. Two FSPs in the system control unit are ordered using two EDFP features. All system nodes connect to the system control unit using the cable features EFCH, EFCE, EFCF, and EFCG.
The system control unit is powered from the system nodes. UPIC cables provide redundant power to the system control unit. In a single-node system, two UPIC cables are attached to system node 1. In a two-node, three-node, or four-node system, one UPIC cable attaches to system node 1 and one UPIC cable attaches to system node 2. They are ordered with feature EFCH. Only one UPIC cable is enough to power the system control unit, and the other is in place for redundancy.
System node PCIe slots
- Each system node enclosure provides excellent configuration flexibility and expandability with eight half-length, low-profile (half-high) PCIe Gen5 slots. The slots are labeled C0 through C7. C0, C1, C2, C5, C6, and C7 are x16, and C3 and C4 are x8.
- These PCIe slots can be used for either low-profile PCIe adapters or for attaching a PCIe I/O drawer.
- A blind swap cassette (BSC) is used to house the low-profile adapters that go into these slots. The server is shipped with a full set of BSCs, even if the BSCs are empty. A feature number to order additional low-profile BSCs is not required or announced.
- If additional PCIe slots beyond the system node slots are required, a system node x16 slot is used to attach a six-slot expansion module in the I/O drawer. An I/O drawer holds two expansion modules that are attached to any two x16 PCIe slots in the same system node or in different system nodes.
- PCIe Gen1, Gen2, Gen3, Gen4, and Gen5 adapter cards are supported in these Gen5 slots. The set of PCIe adapters that are supported is found in the Sales Manual, identified by feature number.
- Concurrent repair and add/removal of PCIe adapter cards is done by HMC-guided menus or by operating system support utilities.
- The system nodes sense which PCIe adapters are installed in their PCIe slots; if an adapter requires higher levels of cooling, the system automatically speeds up the fans to increase airflow across the PCIe adapters.
PCIe I/O expansion drawer
The 19-inch PCIe 4U I/O expansion drawer (#EMX0) provides slots to hold PCIe adapters that cannot be placed into a system node. The PCIe I/O expansion drawer (#EMX0) and two PCIe fan-out modules (#EMXH) provide 12 PCIe I/O full-length, full-height slots. One fan-out module provides six PCIe slots labeled C1 through C6. The C1 and C4 are x16 slots, and C2, C3, C5, and C6 are x8 slots.
PCIe Gen1, Gen2, Gen3, and Gen4 full-high adapter cards are supported. The set of full-high PCIe adapters that are supported is found in the Sales Manual, identified by feature number. See the PCI Adapter Placement manual for the 9080-HEX for details and rules associated with specific adapters supported and their supported placement in x8 or x16 slots.
Up to four PCIe I/O drawers per node can be attached to the Power E1080 server. Using two 6-slot fan-out modules per drawer provides a maximum of 48 PCIe slots per system node. With two system nodes, up to 96 PCIe slots (8 I/O drawers) are supported. With a 4-node Power E1080 server, up to 192 PCIe slots (16 I/O drawers) are supported.
Additional PCIe I/O drawer configuration flexibility is provided to the Power E1080 servers. Zero, one, two, three, or four PCIe I/O drawers can be attached per system node. As an alternative, a half drawer that consists of just one PCIe fan-out module in the I/O drawer is also supported, enabling a lower-cost configuration if fewer PCIe slots are required. Thus, a system node supports the following half-drawer options: one half drawer, two half drawers, three half drawers, or four half drawers. Because there is a maximum of four feature EMX0 drawers per node, a single system node cannot have more than four half drawers. A server with more system nodes can support more half drawers, up to four per node. A system can also mix half drawers and full PCIe I/O drawers. The maximum of four PCIe drawers per system node applies whether a full or half PCIe drawer.
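The drawer and slot limits in the preceding paragraphs reduce to simple arithmetic. The following illustrative Python sketch counts the PCIe slots provided by a mix of full and half I/O drawers attached to one node; it is not an IBM configurator.

```python
# Illustrative PCIe slot counting for EMX0 I/O expansion drawers (not an IBM tool).
# A full drawer has two 6-slot fan-out modules; a half drawer has one.

MAX_DRAWERS_PER_NODE = 4
SLOTS_PER_FANOUT = 6


def drawer_slots(full_drawers: int, half_drawers: int) -> int:
    """PCIe slots contributed by the I/O drawers attached to one system node."""
    if full_drawers + half_drawers > MAX_DRAWERS_PER_NODE:
        raise ValueError("At most four EMX0 drawers per system node")
    return full_drawers * 2 * SLOTS_PER_FANOUT + half_drawers * SLOTS_PER_FANOUT


if __name__ == "__main__":
    print(drawer_slots(4, 0))       # 48 slots: one node, four full drawers
    print(drawer_slots(4, 0) * 4)   # 192 slots: four-node system maximum
    print(drawer_slots(2, 2))       # 36 slots: mixed full and half drawers
```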
PCIe drawers can be concurrently added to the server at a later time. The drawer being added can have either one or two fan-out modules. Note that adding a second fan-out module to a half-full drawer does require scheduled downtime.
PCIe I/O drawer attachment and cabling
- A PCIe x16 to optical CXP converter adapter (#EJ24) and 2.0 M (#ECCR), 10.0 M (#ECCY), or 20 M (#ECCZ) CXP 12X Active Optical Cables (AOC) connect the system node to a PCIe fan-out module in the I/O expansion drawer. One ECCR, ECCY, or ECCZ feature ships two AOC cables from IBM.
- The two AOC cables connect to two CXP ports on the fan-out module and to two CXP ports on the feature EJ24 adapter. The top port of the fan-out module must be cabled to the top port of the feature EJ24 adapter. Likewise, the bottom two ports must be cabled together.
- It is recommended but not required that one I/O drawer be attached to two different system nodes in the same server (one drawer module attached to one system node and the other drawer module attached to a different system node). This can help provide cabling for higher availability configurations.
- It is generally recommended that any attached PCIe I/O expansion drawer be located in the same rack as the Power10 server for ease of service, but expansion drawers can be installed in separate racks if the application or other rack content requires it. If you are attaching a large number of cables, such as SAS cables or CAT5/CAT6 Ethernet cables, to a PCIe I/O drawer, then it is generally better to place that feature EMX0 drawer in a separate rack for better cable management.
Limitation: When this cable is ordered with a system in a rack specifying IBM Plant integration, IBM Manufacturing will ship SAS cables longer than three meters in a separate box and not attempt to place the cable in the rack. This is because the longer SAS cable is probably used to attach to an EXP24S drawer in a different rack.
- Concurrent repair and add/removal of PCIe adapter cards is done by HMC-guided menus or by operating system support utilities.
- A BSC is used to house the full-high adapters that go into these slots. The BSC is the same BSC as used with 12X attached I/O drawers (#5802, #5803, #5877, #5873) of the previous-generation server. The drawer is shipped with a full set of BSCs, even if the BSCs are empty. A feature to order additional full-high BSCs is not required or announced.
- A maximum of 16 EXP24SX drawers per PCIe I/O drawer (#EMX0) is allowed to enable SAS cables to be properly handled by the feature EMX0 cable management bracket.
EXP24SX disk/SSD drawer
- Direct attached storage is supported with the EXP24SX SFF Gen2-bay drawer (#ESLS), an expansion drawer with 24 2.5-inch form-factor SAS bays.
- The Power E1080 server supports up to 4,032 drives with a maximum of 168 EXP24SX drawers. The maximum of 16 EXP24SX drawers per PCIe I/O drawer due to cabling considerations remains unchanged.
- The EXP24SX SFF Gen2-bay drawer (#ESLS) is an expansion drawer with 24 2.5-inch form-factor SAS bays. Slot filler panels are included for empty bays when initially shipped. The EXP24SX supports up to 24 hot-swap SFF-2 SAS HDDs or SSDs. It uses only two EIA of space in a 19-inch rack. The EXP24SX includes redundant AC power supplies and uses two power cords.
- With AIX, Linux, and VIOS, you can order the EXP24SX with four sets of 6 bays, two sets of 12 bays, or one set of 24 bays (mode 4, 2, or 1). With IBM i, you can order the EXP24SX as one set of 24 bays (mode 1). Mode setting is done by IBM Manufacturing; there is no feature code to change the mode after the drawer is shipped from IBM. If a mode change is required later, a skilled, technically qualified person should follow the special documented procedures. Improperly changing modes can potentially destroy existing RAID sets, prevent access to existing data, or allow other partitions to access another partition's existing data. Hire an expert to assist if you are not familiar with this type of reconfiguration work.
- The EXP24SX SAS ports are attached to a SAS PCIe adapter or pair of adapters using SAS YO or X cables.
- To maximize configuration flexibility and space utilization, the system node does not have integrated SAS bays or integrated SAS controllers. PCIe adapters and the EXP24SX can be used to provide direct access storage.
- To further reduce possible single points of failure, EXP24SX configuration rules consistent with previous Power servers are used. IBM i configurations require the drives to be protected (RAID or mirroring). Protecting the drives is highly recommended, but not required for other operating systems. All Power operating system environments that are using SAS adapters with write cache require the cache to be protected by using pairs of adapters.
- It is recommended for SAS cabling ease that the EXP24SX drawer be located in the same rack in which the PCIe adapter is located. However, it is often a good availability practice to split a SAS adapter pair across two PCIe drawers/nodes for availability and that may make the SAS cabling ease recommendation difficult or impossible to implement.
- HDDs and SSDs that were previously located in POWER8 system units or in feature 5802 or 5803 12X-attached I/O drawers (SFF-1 bays) can be retrayed and placed in EXP24S drawers. See feature conversions previously announced on the POWER8 servers. Ordering a conversion ships an SFF-2 tray or carriage onto which you can place your existing drive after removing it from the existing SFF-1 tray/carriage. The order also changes the feature number so that IBM configuration tools can better interpret what is required.
- A maximum of 16 EXP24SX drawers per PCIe drawer (#EMX0) is allowed to enable SAS cables to be properly handled by the feature EMX0 cable management bracket.
- A maximum of 999 SSDs can be ordered within a single order (initial or MES) of the system.
DVD and boot devices
A device capable of reading a DVD can be attached to the system to perform operating system installation, maintenance, problem determination, and service actions such as maintaining system firmware and I/O microcode at their latest levels. Alternatively, the system can be attached to a network with software such as AIX NIM server or Linux Install Manager configured to perform these functions. The following boot sources are supported:
1. Disk or SSD located in an EXP24SX drawer attached to a PCIe adapter
2. A network through LAN adapters
3. A SAN attached to Fibre Channel (FC) or FC over Ethernet adapters and indicated to the server by the 0837 specify feature
- Assuming option 1 above, the minimum system configuration requires at least one SAS disk drive in the system for AIX and Linux and two for IBM i. If you are using option 3 above, a disk or SSD drive is not required.
- For IBM i, a DVD drive must be available on the server when required.
- A DVD can optionally be in the system control unit, or one or more DVDs can be located in an external enclosure such as a 7226-1U3 multimedia drawer.
Racks
The Power E1080 server is designed to fit a standard 19-inch rack. IBM Development has tested and certified the system in the IBM Enterprise rack (7965-S42). You can choose to place the server in other racks if you are confident those racks have the strength, rigidity, depth, and hole pattern characteristics required. You should work with IBM Service to determine the appropriateness of other racks.
It is highly recommended that the Power E1080 server be ordered with an IBM 42U enterprise rack (7965-S42). An initial system order is placed in a 7965-S42 rack. This is done to ease and speed client installation, provide a more complete and higher quality environment for IBM Manufacturing system assembly and testing, and provide a more complete shipping package.
The 7965-S42 is a two-meter enterprise rack that provides 42U or 42 EIA of space. Clients who don't want this rack can remove it from the order, and IBM Manufacturing will then remove the server from the rack after testing and ship the server in separate packages without a rack. Use the factory-deracking feature ER21 on the order to do this.
Front door options supported with Power E1080 system nodes for the 42U slim enterprise rack (7965-S42) are the front acoustic door (#ECRA), the high-end appearance front door (#ECRF), and the cost-effective plain front door (#ECRM).
Recommendation: The 7965-S42 has optimized cable routing, so all 42U may be populated with equipment.
The 7965-S42 rack does not need 2U on either the top or bottom for cable egress.
The system control unit is located below system node 1, with system node 1 on top of it, system node 2 on top of that, and so on.
With the two-meter 7965-S42, a rear rack extension of 12.7 cm (5 in.) (#ECRK) provides space to hold cables on the side of the rack and keep the center area clear for cooling and service access.
Recommendation: Include the above extension when more than approximately 16 I/O cables per side are present or may be added in the future, when using the short-length, thinner SAS cables, or when using thinner I/O cables, such as Ethernet. If you use longer-length, thicker SAS cables, fewer cables will fit within the rack.
SAS cables are most commonly found with multiple EXP24SX SAS drawers (#ESLS) driven by multiple PCIe SAS adapters. For this reason, it is good practice to keep multiple EXP24SX drawers in the same rack as the PCIe I/O drawer or in a separate rack close to the PCIe I/O drawer, using shorter, thinner SAS cables. The feature ECRK extension can be good to use even with smaller numbers of cables because it enhances the ease of cable management with the extra space it provides.
Multiple service personnel are required to manually remove or insert a system node drawer into a rack, given its dimensions, weight, and content.
Recommendation: To avoid any delay in service, obtain an optional lift tool (#EB2Z). One feature EB2Z lift tool can be shared among many servers and I/O drawers. The EB2Z lift tool provides a hand crank to lift and position up to 159 kg (350 lb). The EB2Z lift tool is 1.12 meters x 0.62 meters (44 in. x 24.5 in.). Note that a single system node can weigh up to 86.2 kg (190 lb).
A lighter, lower-cost alternative is the feature EB3Z(3) lift tool with the feature EB4Z(3) angled shelf kit for the lift tool. The EB3Z lift tool provides a hand crank to lift and position a server of up to 400 lb. Note that a single system node can weigh up to 86.2 kg (190 lb).
Note: (3) Features EB3Z and EB4Z are not available to order in Albania, Bahrain, Bulgaria, Croatia, Egypt, Greece, Jordan, Kuwait, Kosovo, Montenegro, Morocco, Oman, UAE, Qatar, Saudi Arabia, Serbia, Slovakia, Slovenia, Taiwan, and Ukraine.
Reliability, Availability, and Serviceability
PCIe I/O expansion drawer and racks
IBM Manufacturing can factory-integrate the PCIe I/O expansion drawer (#EMX0) with new server orders. Because expansion drawers complicate the access to vertical PDUs if located at the same height, IBM recommends accommodating PDUs horizontally on racks that have one or more PCIe I/O expansion drawers. Following this recommendation, IBM Manufacturing will always assemble the integrated rack configuration with horizontally mounted PDUs unless CSRP (#0469) is on the order. When specifying CSRP, you must provide the locations where the PCIe I/O expansion drawers should be placed and avoid locating them adjacent to vertical PDU locations EIA 6 through 16 and 21 through 31.
Additional PCIe I/O drawers (#EMX0) for an already installed server can be MES ordered with or without a rack. When you want IBM Manufacturing to place these MES I/O drawers into a rack and ship them together (factory integration), then the racks should be ordered as features on the same order as the I/O drawers. Regardless of the rack-integrated system to which the PCIe I/O expansion drawer is attached to, if the expansion drawer is ordered as factory-integrated, the PDUs in the rack will be defaulted to be placed horizontally to enhance cable management. Vertical PDUs can be used only if CSRP (#0469) is on the order.
After the rack with expansion drawers is delivered, you may rearrange the PDUs from horizontal to vertical. However, the IBM configurator tools will continue to assume the PDUs are placed horizontally for the matter of calculating the free space still available in the rack for additional future orders.
Power distribution units (PDUs)
- A Power E1080 server that is factory integrated into an IBM rack uses horizontal PDUs located in the EIA drawer space of the rack instead of the typical vertical PDUs found in the side pockets of a rack. This is done to aid cable routing. Each horizontal PDU occupies 1U. Vertically mounting the PDUs to save rack space can cause cable routing challenges and interfere with optimal service access.
- When mounting the horizontal PDUs, it is a good practice to place them almost at the top or almost at the bottom of the rack, leaving 2U or more of space at the very top or very bottom open for cable management. Mounting a horizontal PDU in the middle of the rack is generally not optimal for cable management.
- Two possible PDU ratings are supported: 60A (orderable in most countries) and 30A.
- The 60A PDU supports four system node power supplies plus one I/O expansion drawer, or eight I/O expansion drawers.
- The 30A PDU supports two system node power supplies plus one I/O expansion drawer, or four I/O expansion drawers.
- Rack-integrated system orders require at least two of either feature 7109, 7188, or 7196.
- Intelligent PDU with universal UTG0247 connector (#7109) is for an intelligent AC power distribution unit (PDU+) that enables users to monitor the amount of power being used by the devices that are plugged in to this PDU+. This AC power distribution unit provides 12 C13 power outlets. It receives power through a UTG0247 connector. It can be used for many different countries and applications by varying the PDU-to-wall power cord, which must be ordered separately. Each PDU requires one PDU-to-wall power cord. Supported power cords include the following features: 6489, 6491, 6492, 6653, 6654, 6655, 6656, 6657, 6658, or 6667.
- The PDU (#7188) mounts in a 19-inch rack and provides 12 C13 power outlets. The PDU has six 16A circuit breakers, with two power outlets per circuit breaker. System units and expansion units must use a power cord with a C14 plug to connect to the feature 7188. One of the following line cords must be used to distribute power from a wall outlet to the feature 7188: feature 6489, 6491, 6492, 6653, 6654, 6655, 6656, 6657, 6658, or 6667.
- The three-phase PDU (#7196) provides six C19 power outlets and is rated up to 48A. It has a 4.3 m (14 ft) fixed power cord to attach to the power source (IEC309 60A plug (3P+G)). A separate to-the-wall power cord is not required or orderable. Use the power cord 2.8 m (9.2 ft), drawer to wall/IBM PDU, (250V/10A) (#6665) to connect devices to this PDU. These power cords are different than the ones used on the feature 7188 and 7109 PDUs. Supported countries for the feature 7196 PDU are Antigua and Barbuda, Aruba, Bahamas, Barbados, Belize, Bermuda, Bolivia, Brazil, Canada, Cayman Islands, Colombia, Costa Rica, Dominican Republic, Ecuador, El Salvador, Guam, Guatemala, Haiti, Honduras, Indonesia, Jamaica, Japan, Mexico, Netherlands Antilles, Nicaragua, Panama, Peru, Puerto Rico, Surinam, Taiwan, Trinidad and Tobago, United States, and Venezuela.
System node power
- Four AC power supplies provide 2 + 2 redundant power for enhanced system availability. A system node is designed to continue functioning with just two working power supplies. A failed power supply can be hot swapped but must remain in the system until the replacement power supply is available for exchange.
- Four AC power cords are used for each system node (one per power supply) and are ordered using the AC Power Channel feature (#EMXA). The channel carries power from the rear of the system node to the hot-swap power supplies located in the front of the system node, where they are more accessible for service.
System control unit power
- The system control unit is powered from the system nodes. UPIC cables provide redundant power to the system control unit. In a single node system two UPIC cables are attached to system node 1. In a two-node, three-node, or four-node system, one UPIC cable attaches to system node 1 and one UPIC cable attaches to system node 2. They are ordered with feature EFCA. Only one UPIC cable is enough to power the system control unit, and the other is in place for redundancy.
Concurrent maintenance or hot-plug options
The following options are maintenance or hot-plug capable:
- EXP24S SAS storage enclosure drawer.
- Drives in the EXP24S storage enclosure drawer.
- NVMe U.2 drives.
- PCI extender cards, optical PCIe link IO expansion card.
- PCIe I/O adapters.
- PCIe I/O drawers.
- PCIe to USB conversion card.
- External SMP cables.
- System node AC power supplies: Two functional power supplies must remain installed at all times while the system is operating.
- System node fans.
- System control unit fans.
- System control unit operations panel.
- Time of Day battery.
- UPIC interface card in SCU.
- UPIC power cables from system node to system control unit.
If the system boot device or system console is attached using an I/O adapter feature, that adapter may not be hot-plugged if a nonredundant topology has been implemented.
You can access hot-plug procedures in the product documentation at IBM Documentation website.
Active Memory Expansion
AME is an innovative technology supporting the AIX operating system that helps enable the effective maximum memory capacity to be larger than the true physical memory maximum. Compression and decompression of memory content can enable memory expansion up to 100% or more. This can enable a partition to do significantly more work or support more users with the same physical amount of memory. Similarly, it can enable a server to run more partitions and do more work for the same physical amount of memory.
AME uses CPU resource to compress and decompress the memory contents. This trade-off of memory capacity for processor cycles can be an excellent choice, but the degree of expansion varies depending on how compressible the memory content is. It also depends on having adequate spare CPU capacity available for this compression and decompression.
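As a rough illustration of the capacity-for-CPU trade-off, the sketch below computes the effective memory capacity for a given expansion factor. The factor itself depends on how compressible the data is and is estimated by the AIX planning tool, not by this illustrative code.

```python
# Illustrative Active Memory Expansion arithmetic (not the AIX planning tool).
# Effective capacity = physical memory * expansion factor chosen for the partition.

def effective_memory_gb(physical_gb: float, expansion_factor: float) -> float:
    """E.g. a 1.5 factor makes 100 GB of real memory appear as 150 GB."""
    return physical_gb * expansion_factor


if __name__ == "__main__":
    print(effective_memory_gb(100.0, 1.5))  # 150.0 GB effective
    print(effective_memory_gb(100.0, 2.0))  # 200.0 GB, i.e. 100% expansion
```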
The Power E1080 includes a hardware accelerator designed to boost AME efficiency while using less Power10 core resource. You have a great deal of control over AME usage. Each individual AIX partition can turn AME on or off. Control parameters set the amount of expansion desired in each partition to help control the amount of CPU used by the AME function. An IPL is required for the specific partition that is turning memory expansion on or off. When turned on, monitoring capabilities are available in standard AIX performance tools, such as lparstat, vmstat, topas, and svmon.
A planning tool is included with AIX, enabling you to sample actual workloads and estimate both how expandable the partition's memory is and how much CPU resource is needed. Any Power model can run the planning tool. In addition, a one-time, 60-day trial of AME is available to enable more exact memory expansion and CPU measurements. You can request the trial using the Capacity on Demand web page.
AME is enabled by the chargeable hardware feature EM8F, which can be ordered with the initial order of the system or as an MES order. A software key is provided when the enablement feature is ordered, which is applied to the system node. An IPL is not required to enable the system node. The key is specific to an individual system and is permanent. It cannot be moved to a different server.
The additional CPU resource used to expand memory is part of the CPU resource assigned to the AIX partition running AME. Normal licensing requirements apply.
Active Memory Mirroring
Active Memory Mirroring (AMM) is available to enhance resilience by mirroring critical memory used by the PowerVM hypervisor, so that it can continue operating in the event of a memory failure.
IBM i operating system
For clients loading the IBM i operating system, the four-digit numeric QPRCFEAT value is generally the same as the four-digit numeric feature number for the processors used in the system. The Power E1080 processor features are an exception to this rule. For the Power E1080:
| Feature | Description |
|---|---|
| EDP2 | Processor (3.65 - 3.9 GHz 40-core node) - QPRCFEAT value for the system is EDP2. |
| EDP3 | Processor (3.6 - 4.15 GHz 48-core node) - QPRCFEAT value for the system is EDP3. |
| EDP4(2) | Processor (3.55 - 4.0 GHz 60-core node) - QPRCFEAT value for the system is EDP4. |
The Power E1080 is in IBM i software tier P30.
If the 5250 Enterprise Enablement function is to be used on the server, order one or more feature ED2Z or order the full system 5250 enablement feature ED30. Feature ED2Z provides one processor core's worth of 5250 capacity, which can be spread across multiple physical processor cores or multiple partitions.
Note: (2) Features EDP4, EDPD, and ELCM are not available to order in China.
Capacity Backup for IBM i
The Capacity Backup (CBU) designation can help meet your requirements for a second system to use for backup, high availability, and disaster recovery. It enables you to temporarily transfer IBM i processor license entitlements and 5250 Enterprise Enablement entitlements purchased for a primary machine to a secondary CBU-designated system. Temporarily transferring these resources instead of purchasing them for your secondary system may result in significant savings. Processor activations cannot be transferred as part of this CBU offering, however programs such as Power Enterprise Pools are available for moving or sharing processor activations.
The CBU specify feature number 4891 is available only as part of a new server purchase of a 9080-HEX. Certain system prerequisites must be met, and system registration and approval are required before the CBU specify feature can be applied on a new server. A used system that has an existing CBU feature cannot be registered. The only way to attain a CBU feature that can be registered is with a plant order.
Standard IBM i terms and conditions do not allow either IBM i processor license entitlements or 5250 Enterprise Enablement entitlements to be transferred permanently or temporarily. These entitlements remain with the machine on which they were ordered. When you register the association between your primary and on-order CBU system on the CBU registration website, you must agree to certain terms and conditions regarding the temporary transfer.
After a CBU system designation is approved and the system is installed, you can temporarily move your IBM i processor license entitlements and 5250 Enterprise Enablement entitlements from the primary system to the CBU system. The CBU system can then better support fail-over and role swapping for a full range of test, disaster recovery, and high availability scenarios. Temporary entitlement transfer means that the entitlement is transferred from the primary system to the CBU system and may remain in use on the CBU system as long as the registered primary and CBU systems are in deployment for the high availability or disaster recovery operation. The primary system for a Power E1080 server can be any of the following:
- 9080-HEX
- 9080-M9S
These systems have IBM i software licenses with an IBM i P30 software tier, or higher. The primary machine must be in the same enterprise as the CBU system.
Before you can temporarily transfer IBM i processor license entitlements from the registered primary system, you must have more than one IBM i processor license on the primary machine and at least one IBM i processor license on the CBU server. An activated processor must be available on the CBU server to use the transferred entitlement. You may then transfer any IBM i processor entitlements above the minimum of one entitlement (more may be required depending on the replication technology), assuming the total IBM i workload on the primary system does not require the IBM i entitlement you would like to transfer during the time of the transfer. During this temporary transfer, you may see IBM i license "Out of Compliance" warning messages from the CBU system. Such messages that arise because of the temporarily transferred IBM i entitlements may be ignored.
Before you can temporarily transfer 5250 entitlements, you must have more than one 5250 Enterprise Enablement entitlement on the primary server and at least one 5250 Enterprise Enablement entitlement on the CBU system. You may then transfer the entitlements that are not required on the primary server during the time of transfer and that are above the minimum of one entitlement.
For example, if you have a 64-core Power E980 as your primary system with twenty IBM i processor license entitlements (nineteen above the minimum) and two 5250 Enterprise Enablement entitlements (one above the minimum), you can temporarily transfer up to nineteen IBM i entitlements and one 5250 Enterprise Enablement.
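The worked example above follows a simple "above the minimum of one" rule; here is an illustrative sketch of that arithmetic, not an IBM licensing tool.

```python
# Illustrative CBU temporary-transfer arithmetic (not an IBM licensing tool).
# Entitlements above the minimum of one retained on the primary system may be transferred.

def transferable(primary_entitlements: int, minimum_retained: int = 1) -> int:
    """How many entitlements can be temporarily moved to the CBU system."""
    return max(0, primary_entitlements - minimum_retained)


if __name__ == "__main__":
    print(transferable(20))  # 19 IBM i processor license entitlements
    print(transferable(2))   # 1 5250 Enterprise Enablement entitlement
```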
If your primary or CBU machine is sold or discontinued from use, any temporary entitlement transfers must be returned to the machine on which they were originally acquired. For CBU registration and further information, see the Capacity Backup website.
Reliability
The reliability of systems starts with components, devices, and subsystems that are designed to be highly reliable. During the design and development process, subsystems go through rigorous verification and integration testing processes. During system manufacturing, systems go through a thorough testing process to ensure product quality.
Power E1080 system RAS
The Power E1080 comes with dual line cord redundancy along with n+1 power supply redundancy and n+1 fan rotor redundancy. Power supplies and fans are concurrently maintainable.
The system servi