HP APOLLO 2000 User Manual

HP Apollo 2000 System  
User Guide  
Abstract  
This document is for the person who installs, administers, and troubleshoots servers and storage systems. HP assumes you are qualified in the  
servicing of computer equipment and trained in recognizing hazards in products with hazardous energy levels.  
Part Number: 797871-001  
March 2015  
Edition: 1  
HP Apollo 2000 System  
Introduction  
The HP Apollo 2000 System consists of a chassis and nodes.  
Chassis  
HP Apollo r2200 Chassis (12 low-profile LFF hot-plug drives)  
HP Apollo r2600 Chassis (24 SFF hot-plug drives)  
Nodes  
HP ProLiant XL170r Gen9 Server Nodes (1U)  
HP ProLiant XL190r Gen9 Server Nodes (2U)  
One chassis can support a maximum of:  
Four 1U nodes  
Two 1U nodes and one 2U node  
Two 2U nodes  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
Component identification  
Chassis front panel components  
HP Apollo r2200 Chassis  
1. Left bezel ear
2. Low-profile LFF hot-plug drives
3. Right bezel ear
4. Chassis serial label pull tab
HP Apollo r2600 Chassis  
1. Left bezel ear
2. SFF HP SmartDrives
3. Right bezel ear
4. Chassis serial label pull tab
Chassis front panel LEDs and buttons  
1. Power On/Standby button and system power LED (Node 1)*
   Solid green = System on
   Flashing green = Performing power on sequence
   Solid amber = System in standby
   Off = No power present**
2. Power On/Standby button and system power LED (Node 2)*
   Solid green = System on
   Flashing green = Performing power on sequence
   Solid amber = System in standby
   Off = No power present**
3. Health LED (Node 2)*
   Solid green = Normal
   Flashing amber = System degraded
   Flashing red = System critical†
4. Health LED (Node 1)*
   Solid green = Normal
   Flashing amber = System degraded
   Flashing red = System critical†
5. Health LED (Node 3)*
   Solid green = Normal
   Flashing amber = System degraded
   Flashing red = System critical†
6. Health LED (Node 4)*
   Solid green = Normal
   Flashing amber = System degraded
   Flashing red = System critical†
7. Power On/Standby button and system power LED (Node 4)*
   Solid green = System on
   Flashing green = Performing power on sequence
   Solid amber = System in standby
   Off = No power present**
8. UID button/LED*
   Solid blue = Activated
   Flashing blue:
     1 Hz/cycle per sec = Remote management or firmware upgrade in progress
     4 Hz/cycle per sec = iLO manual soft reboot sequence initiated
     8 Hz/cycle per sec = iLO manual hard reboot sequence in progress
   Off = Deactivated
9. Power On/Standby button and system power LED (Node 3)*
   Solid green = System on
   Flashing green = Performing power on sequence
   Solid amber = System in standby
   Off = No power present**
* When the LEDs described in this table flash simultaneously, a power fault has occurred. For more information, see  
"Power fault LEDs (on page 16)."  
** Facility power is not present, power cord is not attached, no power supplies are installed, power supply failure has  
occurred, or the front I/O cable is disconnected.  
† If the health LED indicates a degraded or critical state, review the system IML or use iLO to review the system health  
status.  
Chassis rear panel components  
Four 1U nodes  
1. Node 4
2. Node 3
3. RCM module
4. Power Supply 2
5. Power Supply 1
6. Node 2
7. Node 1
Two 2U nodes  
1. Node 3
2. RCM module
3. Power Supply 2
4. Power Supply 1
5. Node 1
Chassis rear panel LEDs  
1. Power supply 2 LED
   Solid green = Normal
   Off = One or more of the following conditions exists:
     Power is unavailable
     Power supply failed
     Power supply is in standby mode
     Power supply error
2. Power supply 1 LED
   Solid green = Normal
   Off = One or more of the following conditions exists:
     Power is unavailable
     Power supply failed
     Power supply is in standby mode
     Power supply error
Node rear panel components  
1U node rear panel components  
1. Node serial number and iLO label pull tab
2. SUV connector
3. USB 3.0 connector
4. Dedicated iLO port (optional)
5. NIC connector 1
6. NIC connector 2
2U node rear panel components  
1. Node serial number and iLO label pull tab
2. SUV connector
3. USB 3.0 connector
4. Dedicated iLO port (optional)
5. NIC connector 1
6. NIC connector 2
Node rear panel LEDs and buttons  
1U node  
1. Power button/LED*
   Solid green = System on
   Flashing green (1 Hz/cycle per sec) = Performing power on sequence
   Solid amber = System in standby
   Off = No power present**
2. UID button/LED*
   Solid blue = Activated
   Flashing blue:
     1 Hz/cycle per sec = Remote management or firmware upgrade in progress
     4 Hz/cycle per sec = iLO manual soft reboot sequence initiated
     8 Hz/cycle per sec = iLO manual hard reboot sequence in progress
   Off = Deactivated
3. Health LED*
   Solid green = Normal
   Flashing green (1 Hz/cycle per sec) = iLO is rebooting
   Flashing amber = System degraded
   Flashing red (1 Hz/cycle per sec) = System critical†
4. Do not remove LED
   Flashing white (1 Hz/cycle per sec) = Do not remove the node. Removing the node may terminate the current operation and cause data loss.
   Off = The node can be removed.
5. iLO activity LED
   Green or flashing green = Network activity
   Off = No network activity
6. iLO link LED
   Green = Linked to network
   Off = No network connection
7. NIC link LED*
   Green = Linked to network
   Off = No network connection
8. NIC activity LED*
   Green or flashing green = Network activity
   Off = No network activity
* When the LEDs described in this table flash simultaneously, a power fault has occurred. For more information, see  
"Power fault LEDs (on page 16)."  
** Facility power is not present, power cord is not attached, no power supplies are installed, power supply failure has  
occurred, or the front I/O cable is disconnected.  
† If the health LED indicates a degraded or critical state, review the system IML or use iLO to review the system health  
status.  
2U node  
1. Power button/LED*
   Solid green = System on
   Flashing green = Performing power on sequence
   Solid amber = System in standby
   Off = No power present**
2. UID button/LED*
   Solid blue = Activated
   Flashing blue:
     1 Hz/cycle per sec = Remote management or firmware upgrade in progress
     4 Hz/cycle per sec = iLO manual soft reboot sequence initiated
     8 Hz/cycle per sec = iLO manual hard reboot sequence in progress
   Off = Deactivated
3. Health LED*
   Solid green = Normal
   Flashing amber = System degraded
   Flashing red = System critical†
4. Do not remove LED
   Flashing white (1 Hz/cycle per sec) = Do not remove the node. Removing the node may terminate the current operation and cause data loss.
   Off = The node can be removed.
5. iLO activity LED
   Green or flashing green = Network activity
   Off = No network activity
6. iLO link LED
   Green = Linked to network
   Off = No network connection
7. NIC link LED*
   Green = Linked to network
   Off = No network connection
8. NIC activity LED*
   Green or flashing green = Network activity
   Off = No network activity
* When the LEDs described in this table flash simultaneously, a power fault has occurred. For more information, see  
"Power fault LEDs (on page 16)."  
** Facility power is not present, power cord is not attached, no power supplies are installed, power supply failure has  
occurred, or the front I/O cable is disconnected.  
† If the health LED indicates a degraded or critical state, review the system IML or use iLO to review the system health  
status.  
Power fault LEDs  
The following table lists the power fault LEDs and the subsystems that are affected. Not all power
faults are used by all servers.
1 flash = System board
2 flashes = Processor
3 flashes = Memory
4 flashes = Riser board PCIe slots
5 flashes = FlexibleLOM
6 flashes = Removable HP Flexible Smart Array controller/Smart SAS HBA controller
7 flashes = System board PCIe slots
8 flashes = Power backplane or storage backplane
9 flashes = Power supply
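If you log power faults from several chassis, it can help to keep the flash-count mapping above in a lookup structure. The following Python sketch only restates the table; the helper name is hypothetical and not part of any HP tool.

# Illustrative lookup of the power fault LED flash counts listed above.
POWER_FAULT_SUBSYSTEM = {
    1: "System board",
    2: "Processor",
    3: "Memory",
    4: "Riser board PCIe slots",
    5: "FlexibleLOM",
    6: "Removable HP Flexible Smart Array controller/Smart SAS HBA controller",
    7: "System board PCIe slots",
    8: "Power backplane or storage backplane",
    9: "Power supply",
}

def subsystem_for_flashes(count):
    """Return the subsystem implicated by an observed flash count (hypothetical helper)."""
    return POWER_FAULT_SUBSYSTEM.get(count, "Unknown (not all power faults are used by all servers)")

print(subsystem_for_flashes(4))  # Riser board PCIe slots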
System board components  
NOTE: HP ProLiant XL170r and XL190r Gen9 Server Nodes share the same system board.  
1. Bayonet board slot
2. DIMMs for processor 2
3. DIMMs for processor 1
4. PCIe x40 riser board connector*
5. System maintenance switch
6. Mini-SAS connector 1 (SATA x4)
7. Internal USB 3.0 connector
8. Mini-SAS connector 2 (SATA x4)
9. PCIe x24 riser board connector*
10. Dedicated iLO port connector
11. NMI header
12. PCIe x16 riser board connector*
13. microSD slot
14. System battery
15. M.2 SSD riser connector
16. TPM connector
17. Processor 1
18. Processor 2
* For more information on the riser board slots supported by the onboard PCI riser connectors, see "PCIe riser board slot  
definitions (on page 27)."  
System maintenance switch  
S1 (default Off): Off = iLO security is enabled. On = iLO security is disabled.
S2 (default Off): Off = System configuration can be changed. On = System configuration is locked.
S3 (default Off): Reserved
S4 (default Off): Reserved
S5 (default Off): Off = Power-on password is enabled. On = Power-on password is disabled.
S6 (default Off): Off = No function. On = ROM reads system configuration as invalid.
S7 (default Off): Off = Set default boot mode to UEFI. On = Set default boot mode to legacy.
S8 through S12: Reserved
To access the redundant ROM, set S1, S5, and S6 to on.  
When the system maintenance switch position 6 is set to the On position, the system is prepared to erase all  
system configuration settings from both CMOS and NVRAM.  
CAUTION: Clearing CMOS and/or NVRAM deletes configuration information. Be sure to  
properly configure the server or data loss could occur.  
IMPORTANT: Before using the S7 switch to change to Legacy BIOS Boot Mode, be sure the HP  
Dynamic Smart Array B140i Controller is disabled. Do not use the B140i controller when the  
server is in Legacy BIOS Boot Mode.  
NMI functionality  
An NMI crash dump creates a crash dump log before resetting a system which is not responding.  
Crash dump log analysis is an essential part of diagnosing reliability problems, such as failures of operating  
systems, device drivers, and applications. Many crashes freeze a system, and the only available action for  
administrators is to restart the system. Resetting the system erases any information which could support  
problem analysis, but the NMI feature preserves that information by performing a memory dump before a  
system reset.  
To force the system to invoke the NMI handler and generate a crash dump log, do one of the following:  
Use the iLO Virtual NMI feature.  
Short the NMI header ("System board components" on page 16).  
For more information, see the HP website (http://www.hp.com/support/NMI).  
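The iLO Virtual NMI feature can also be driven remotely. The following is a minimal Python sketch, assuming the node's iLO exposes the industry-standard Redfish ComputerSystem.Reset action and that the account used has the Virtual Power and Reset privilege; the host name and credentials are placeholders, not values from this guide.

# Minimal sketch: request an NMI through iLO over the standard Redfish API.
# verify=False is only acceptable with the default self-signed iLO certificate in a lab.
import requests

ILO_HOST = "https://ilo-node1.example.net"   # placeholder iLO address
AUTH = ("Administrator", "password")         # placeholder credentials

resp = requests.post(
    f"{ILO_HOST}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset/",
    json={"ResetType": "Nmi"},               # ask the platform to assert an NMI
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
print("NMI requested; review the crash dump log after the OS resets.")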
DIMM slot locations  
DIMM slots are numbered sequentially (1 through 8) for each processor. The supported AMP modes use the  
letter assignments for population guidelines.  
NOTE: The arrow indicates the front of the chassis.  
Fan locations  
Drive numbering  
CAUTION: To prevent improper cooling and thermal damage, do not operate the chassis unless  
all bays are populated with a component or a blank.  
NOTE: A storage cable option must be installed in a node for the node to correspond to drives in  
the chassis.  
HP Apollo r2200 Chassis drive numbering  
One 1U node corresponds to a maximum of three low-profile LFF hot-plug drives:  
Node 1 corresponds to drives 1-1 through 1-3.  
Node 2 corresponds to drives 2-1 through 2-3.  
Node 3 corresponds to drives 3-1 through 3-3.  
Node 4 corresponds to drives 4-1 through 4-3.  
One 2U node corresponds to a maximum of six low-profile LFF hot-plug drives:  
Node 1 corresponds to drives 1-1 through 2-3.  
Node 3 corresponds to drives 3-1 through 4-3.  
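The r2200 numbering above follows a simple pattern, restated in the Python sketch below; the helper function is hypothetical and only mirrors the mapping for scripting or inventory purposes.

# Illustrative sketch of HP Apollo r2200 drive numbering (labels are "box-bay").
def r2200_drives(node, node_height_u):
    """Return the drive labels that correspond to a node (hypothetical helper)."""
    if node_height_u == 1:
        boxes = [node]            # a 1U node owns one box of three LFF drives
    else:
        boxes = [node, node + 1]  # a 2U node (position 1 or 3) owns two boxes
    return ["%d-%d" % (box, bay) for box in boxes for bay in range(1, 4)]

print(r2200_drives(2, 1))  # ['2-1', '2-2', '2-3']
print(r2200_drives(3, 2))  # ['3-1', '3-2', '3-3', '4-1', '4-2', '4-3']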
HP Apollo r2600 Chassis drive numbering  
One 1U node corresponds to a maximum of six SFF HP SmartDrives.  
Node 1 corresponds to drives 1-1 through 1-6.  
Node 2 corresponds to drives 2-1 through 2-6.  
Node 3 corresponds to drives 3-1 through 3-6.  
Node 4 corresponds to drives 4-1 through 4-6.  
If a P840 Smart Array controller is installed, one 2U node corresponds to a maximum of twelve SFF HP  
SmartDrives.  
Node 1 corresponds to drives 1-1 through 2-6.  
Node 3 corresponds to drives 3-1 through 4-6.  
One 2U node corresponds to a maximum of eight SFF HP SmartDrives if using the HP Dynamic Smart Array  
B140i Controller, HP H240 Host Bus Adapter, or HP P440 Smart Array Controller.  
Node 1 corresponds to drives 1-1, 1-2, 1-4, 1-5, 2-1, 2-2, 2-3 and 2-5.  
Node 3 corresponds to drives 3-1, 3-2, 3-3, 3-5, 4-1, 4-2, 4-4 and 4-5.  
For more information on installing a storage controller, see "Controller options (on page 96)."  
M.2 SATA SSD bay numbering  
Bay 9  
Bay 10  
Hot-plug drive LED definitions  
HP SmartDrive LED definitions  
HP SmartDrives are the latest HP drive technology, and they are supported beginning with ProLiant Gen8  
servers and server blades. The HP SmartDrive is not supported on earlier generation servers and server  
blades. Identify an HP SmartDrive by its carrier, shown in the following illustration.  
When a drive is configured as a part of an array and connected to a powered-up controller, the drive LEDs  
indicate the condition of the drive.  
1. Locate
   Solid blue = The drive is being identified by a host application.
   Flashing blue = The drive carrier firmware is being updated or requires an update.
2. Activity ring
   Rotating green = Drive activity
   Off = No drive activity
3. Do not remove
   Solid white = Do not remove the drive. Removing the drive causes one or more of the logical drives to fail.
   Off = Removing the drive does not cause a logical drive to fail.
4. Drive status
   Solid green = The drive is a member of one or more logical drives.
   Flashing green = The drive is rebuilding or performing a RAID migration, strip size migration, capacity expansion, or logical drive extension, or is erasing.
   Flashing amber/green = The drive is a member of one or more logical drives and predicts the drive will fail.
   Flashing amber = The drive is not configured and predicts the drive will fail.
   Solid amber = The drive has failed.
   Off = The drive is not configured by a RAID controller.
The blue Locate LED is behind the release lever and is visible when illuminated.  
IMPORTANT: The HP Dynamic Smart Array B140i Controller is only available in UEFI Boot Mode.  
It cannot be enabled in Legacy BIOS Boot Mode. If the B140i controller is disabled, drives  
connected to the system board Mini-SAS connectors operate in AHCI or Legacy mode. Under this  
condition:  
The drives cannot be a part of a hardware RAID or a logical drive.  
The Locate, Drive status, and Do not remove LEDs of the affected drives are disabled.  
Use BIOS/Platform Configuration (RBSU) in the UEFI System Utilities ("HP UEFI System Utilities" on
page 149) to enable or disable the B140i controller (System Configuration → BIOS/Platform
Configuration (RBSU) → System Options → SATA Controller Options → Embedded SATA
Configuration).
Low-profile LFF hot-plug drive LED definitions  
1. Fault/UID LED (amber/blue)
2. Online/Activity LED (green)

Online/Activity LED (green) | Fault/UID LED (amber/blue) | Definition
On, off, or flashing | Alternating amber and blue | The drive has failed, or a predictive failure alert has been received for this drive; it also has been selected by a management application.
On, off, or flashing | Steadily blue | The drive is operating normally, and it has been selected by a management application.
On | Amber, flashing (1 Hz) | A predictive failure alert has been received for this drive. Replace the drive as soon as possible.
On | Off | The drive is online, but it is not active currently.
Flashing (1 Hz) | Amber, flashing (1 Hz) | Do not remove the drive. Removing a drive may terminate the current operation and cause data loss. The drive is part of an array that is undergoing capacity expansion or stripe migration, but a predictive failure alert has been received for this drive. To minimize the risk of data loss, do not replace the drive until the expansion or migration is complete.
Flashing (1 Hz) | Off | Do not remove the drive. Removing a drive may terminate the current operation and cause data loss. The drive is rebuilding, erasing, or it is part of an array that is undergoing capacity expansion or stripe migration.
Flashing (4 Hz) | Amber, flashing (1 Hz) | The drive is active, but a predictive failure alert has been received for this drive. Replace the drive as soon as possible.
Flashing (4 Hz) | Off | The drive is active, and it is operating normally.
Off | Steadily amber | A critical fault condition has been identified for this drive, and the controller has placed it offline. Replace the drive as soon as possible.
Off | Amber, flashing (1 Hz) | A predictive failure alert has been received for this drive. Replace the drive as soon as possible.
Off | Off | The drive is offline, a spare, or not configured as part of an array.
RCM module components  
1. iLO connector
2. HP APM 2.0 connector
3. iLO connector
IMPORTANT: Use either the HP APM port or an iLO port to connect to a network. Having both  
ports connected at the same time results in a loopback condition.  
IMPORTANT: Do not connect both iLO ports to the network at the same time. Only one iLO port  
can be connected to the network, while the other iLO port can be used only as a connection to a  
second enclosure. Having both ports connected at the same time results in a loopback condition.  
RCM module LEDs  
1. iLO activity LED
   Green or flashing green = Network activity
   Off = No network activity
2. iLO link LED
   Green = Linked to network
   Off = No network connection
3. iLO link LED
   Green = Linked to network
   Off = No network connection
4. iLO activity LED
   Green or flashing green = Network activity
   Off = No network activity
PCIe riser board slot definitions  
Single-slot left PCI riser cage assembly  
Form factor: Low-profile PCIe card
Slot number: 1
Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 1
Single-slot 1U node right PCI riser cage assembly  
Form factor: Low-profile PCIe NIC card
Slot number: 2
Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 2
FlexibleLOM 1U node riser cage assembly  
Form factor: FlexibleLOM
Slot number: FlexibleLOM slot
Slot description: PCIe3 x8 for Processor 1
Single-slot 2U node PCI riser cage assembly  
Form factor: Low-profile PCIe card
Slot number: 1
Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 1
FlexibleLOM 2U node riser cage assembly  
1. Form factor: FlexibleLOM
   Slot number: FlexibleLOM slot
   Slot description: PCIe3 x8 for Processor 1
2. Form factor: Storage controller or graphic card
   Slot number: 2
   Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 1
Three-slot PCI riser cage assembly  
1. Form factor: Storage controller or graphic card
   Slot number: 3
   Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 1
2. Form factor: Low-profile PCIe NIC card
   Slot number: 2
   Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 2
3. Form factor: Graphic card
   Slot number: 4
   Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 2
Three-slot GPU-direct PCI riser cage assembly  
1. Form factor: Storage controller or graphic card
   Slot number: 3
   Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 2
2. Form factor: Low-profile PCIe NIC card
   Slot number: 2
   Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 2
3. Form factor: Graphic card
   Slot number: 4
   Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 2
Operations  
Power up the nodes  
The SL/XL Chassis Firmware initiates an automatic power-up sequence when the nodes are installed. If the  
default setting is changed, use one of the following methods to power up each node:  
Use a virtual power button selection through iLO.  
Press and release the Power On/Standby button.  
When the node goes from the standby mode to the full power mode, the node power LED changes from  
amber to green.  
For more information about iLO, see the HP website (http://www.hp.com/go/ilo).  
Power down the system  
IMPORTANT: When the nodes are in standby mode, auxiliary power is still being provided to  
the system.  
1. Power down the node (on page 31).
2. Disconnect the power cords from the power supplies.
Power down the node  
Before powering down the node for any upgrade or maintenance procedures, perform a backup of critical  
server data and programs.  
IMPORTANT: When the node is in standby mode, auxiliary power is still being provided to the  
system.  
To power down the node, use one of the following methods:  
Press and release the Power On/Standby button.  
This method initiates a controlled shutdown of applications and the OS before the node enters standby  
mode.  
Press and hold the Power On/Standby button for more than 4 seconds to force the node to enter  
standby mode.  
This method forces the node to enter standby mode without properly exiting applications and the OS.  
If an application stops responding, you can use this method to force a shutdown.  
Use a virtual power button selection through iLO.  
This method initiates a controlled remote shutdown of applications and the OS before the node enters  
standby mode.  
Before proceeding, verify the node is in standby mode by observing that the system power LED is amber.  
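The virtual power button method can also be scripted. The following is a minimal Python sketch, assuming the node's iLO exposes the standard Redfish API; the address, credentials, and the exact ResetType values supported by your iLO firmware are assumptions to verify against the ComputerSystem resource.

# Minimal sketch: check power state and press the virtual power button via iLO Redfish.
# "PushPowerButton" behaves like a momentary press, which initiates a controlled
# shutdown when the OS is running. Host and credentials are placeholders.
import requests

ILO_HOST = "https://ilo-node1.example.net"   # placeholder iLO address
AUTH = ("Administrator", "password")         # placeholder credentials

system = requests.get(f"{ILO_HOST}/redfish/v1/Systems/1/", auth=AUTH, verify=False).json()
print("Current power state:", system.get("PowerState"))

requests.post(
    f"{ILO_HOST}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset/",
    json={"ResetType": "PushPowerButton"},
    auth=AUTH,
    verify=False,
).raise_for_status()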
Remove the node from the chassis  
CAUTION: To avoid damage to the node, always support the bottom of the node when removing  
it from the chassis.  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis:
   a. Loosen the thumbscrew.
   b. Pull back the handle and remove the node.
1U node  
2U node  
CAUTION: To avoid damage to the device, do not use the removal handle to carry it.  
4. Place the node on a flat, level surface.
Remove the RCM module  
To remove the component:  
1. Power down the system (on page 31).
2. Access the product rear panel.
3. Disconnect all cables from the RCM module.
4. Remove the RCM module.
Remove the power supply  
To remove the component:  
1. Power down the system (on page 31).
2. Access the product rear panel.
3. If installed, remove the RCM module (on page 33).
4. Release the power cord from the relief strap.
5. Remove all power:
   a. Disconnect the power cord from the power source.
   b. Disconnect the power cord from the chassis.
6. Remove the power supply.
Remove the chassis from the rack  
WARNING: The chassis is very heavy. To reduce the risk of personal injury or damage to the  
equipment:  
Observe local occupational health and safety requirements and guidelines for manual  
material handling.  
Remove all installed components from the chassis before installing or moving the chassis.  
Use caution and get help to lift and stabilize the chassis during installation or removal,  
especially when the chassis is not fastened to the rack.  
WARNING: To reduce the risk of personal injury or damage to the equipment, you must  
adequately support the chassis during installation and removal.  
WARNING: Always use at least two people to lift the chassis into the rack. If the chassis is being  
loaded into the rack above chest level, a third person must assist with aligning the chassis with the  
rails while the other two people support the weight of the chassis.  
1. Power down the system (on page 31).
2. Disconnect all peripheral cables from the nodes and chassis.
IMPORTANT: Label the drives before removing them. The drives must be returned to their  
original locations.  
3. Remove all nodes from the chassis ("Remove the node from the chassis" on page 32).
4. If installed, remove the security bezel (on page 35).
5. Remove all drives ("Removing the drive" on page 35).
6. If installed, remove the RCM module (on page 33).
7. Remove all power supplies ("Remove the power supply" on page 33).
8. Loosen the thumbscrews and extend the chassis from the rack.
9. Remove the chassis from the rack.
For more information, see the documentation that ships with the rack mounting option.  
10. Place the chassis on a flat surface.  
Remove the security bezel  
To access the front panel components, unlock and then remove the security bezel.  
Removing the drive  
CAUTION: For proper cooling, do not operate the node without the access panel, baffles,  
expansion slot covers, or blanks installed. If the server supports hot-plug components, minimize  
the amount of time the access panel is open.  
1. If installed, remove the security bezel (on page 35).
2. Remove the drive:
   o SFF HP SmartDrive
   o Low-profile LFF hot-plug drive
Remove the chassis access panel  
1. Power down the system (on page 31).
2. Disconnect all peripheral cables from the nodes and chassis.
3. Remove all nodes from the chassis ("Remove the node from the chassis" on page 32).
4. If installed, remove the security bezel (on page 35).
5. Remove all drives ("Removing the drive" on page 35).
6. If installed, remove the RCM module (on page 33).
7. Remove all power supplies ("Remove the power supply" on page 33).
8. Remove the chassis from the rack (on page 34).
9. Unlock the access panel latch using the T-15 Torx screwdriver and release the access panel latch.
10. Slide the access panel back about 1.5 cm (0.5 in).
11. Lift and remove the access panel.  
Install the chassis access panel  
1. Install the chassis access panel:
   a. Place the access panel, align it with the pin on the chassis, and slide it towards the front of the server.
   b. Lock the access panel latch using the T-15 Torx screwdriver.
2. Install the chassis into the rack ("Installing the chassis into the rack" on page 59).
3. Install all nodes, drives and power supplies ("Chassis component installation" on page 60).
4. If removed, install the security bezel ("Security bezel option" on page 64).
5. If removed, install the RCM module ("Rack control management (RCM) module" on page 67).
6. Connect all peripheral cables to the nodes and chassis.
7. Power up the nodes (on page 31).
Remove the 1U left rear I/O blank  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. Remove the 1U left rear I/O blank.
Install the 1U left rear I/O blank  
1. Install the 1U left rear I/O blank.
2. Install the node into the chassis.
3. Connect all peripheral cables to the node.
4. Power up the node ("Power up the nodes" on page 31).
Remove the 1U right rear I/O blank  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. Do one of the following:
   o Remove the 1U left rear I/O blank (on page 38).
   o Remove the single-slot left PCI riser cage assembly (on page 48).
6. Remove the 1U right rear I/O blank.
Install the 1U right rear I/O blank  
1. Install the 1U right rear I/O blank.
2. Do one of the following:
   o Install the 1U left rear I/O blank (on page 38).
   o Install the single-slot left PCI riser cage assembly ("Single-slot left PCI riser cage assembly option" on page 85).
3. Install the node into the chassis ("Installing a node into the chassis" on page 60).
4. Connect all peripheral cables to the node.
5. Power up the node ("Power up the nodes" on page 31).
Remove the 2U rear I/O blank  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. Remove the 2U rear I/O blank.
Install the 2U node rear I/O blank  
1. Install the 2U rear I/O blank.
2. Install the node into the chassis ("Installing a node into the chassis" on page 60).
3. Connect all peripheral cables to the node.
4. Power up the node ("Power up the nodes" on page 31).
Remove the air baffle  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. If installed in a 2U node, remove the FlexibleLOM 2U node riser cage assembly ("FlexibleLOM 2U node riser cage assembly" on page 52).
6. If installed in a 2U node, remove the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assemblies" on page 53).
7. Remove the air baffle:
   o 1U air baffle
   o 2U air baffle
Install the air baffle  
1. Install the air baffle:
   o 1U air baffle
   o 2U air baffle
2. Install any removed PCI riser cage assemblies ("PCI riser cage assembly options" on page 84).
3. Install the node into the chassis ("Installing a node into the chassis" on page 60).
4. Connect all peripheral cables to the node.
5. Power up the node ("Power up the nodes" on page 31).
Remove the bayonet board assembly  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. If installed in a 2U node, remove the FlexibleLOM 2U node riser cage assembly ("FlexibleLOM 2U node riser cage assembly" on page 52).
6. If installed in a 2U node, remove the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assemblies" on page 53).
7. If a graphic card/coprocessor power cable is installed, disconnect it from the bayonet board.
8. If a B140i SATA cable is installed, disconnect it from the connectors on the system board.
9. Remove the bayonet board assembly from the node:
   o 1U bayonet board assembly
   o 2U bayonet board assembly
Install the bayonet board assembly  
1. Install the bayonet board assembly into the node:
   o 1U bayonet board assembly
   o 2U bayonet board assembly
2. If any SATA or Mini-SAS cables are installed, secure the cables under the thin plastic covers along the side of the node tray.
3. If removed, connect the B140i SATA cable to the connectors on the system board.
4. If a graphic card/coprocessor power cable was removed, connect it to the bayonet board.
5. If removed, install the FlexibleLOM 2U node riser cage assembly ("FlexibleLOM 2U node riser cage assembly option" on page 92).
6. If removed, install the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assembly options" on page 93).
7. Install the node into the chassis ("Installing a node into the chassis" on page 60).
8. Connect all peripheral cables to the node.
9. Power up the node ("Power up the nodes" on page 31).
Remove the bayonet board bracket  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. If installed in a 2U node, remove the FlexibleLOM 2U node riser cage assembly ("FlexibleLOM 2U node riser cage assembly" on page 52).
6. If installed in a 2U node, remove the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assemblies" on page 53).
7. If a graphic card/coprocessor power cable is installed, disconnect it from the bayonet board.
8. If a B140i SATA cable is installed, disconnect it from the connectors on the system board.
9. Remove the bayonet board assembly from the node ("Remove the bayonet board assembly" on page 43).
10. Remove the bayonet board bracket from the bayonet board:
    o 1U bayonet board bracket
    o 2U bayonet board bracket
Install the bayonet board bracket  
NOTE: If a storage cable is connected to the 2U bayonet board, route the cable under the  
padding before installing the 2U bayonet board bracket.  
1. Install the bayonet board bracket onto the bayonet board:
   o 1U bayonet board bracket
   o 2U bayonet board bracket
2. Install the bayonet board assembly into the node ("Install the bayonet board assembly" on page 44).
3. If any SATA or Mini-SAS cables are installed, secure the cables under the thin plastic covers along the side of the node tray.
4. If removed, connect the B140i SATA cable to the connectors on the system board.
5. If a graphic card/coprocessor power cable was removed, connect it to the bayonet board.
6. If removed, install the FlexibleLOM 2U node riser cage assembly ("FlexibleLOM 2U node riser cage assembly option" on page 92) or the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assembly options" on page 93).
7. Install the node into the chassis ("Installing a node into the chassis" on page 60).
8. Connect all peripheral cables to the node.
9. Power up the node ("Power up the nodes" on page 31).
Remove the PCI riser cage assembly  
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the  
internal system components to cool before touching them.  
CAUTION: To prevent damage to the server or expansion boards, power down the server, and  
disconnect all power cords before removing or installing the PCI riser cage.  
Single-slot left PCI riser cage assembly  
To remove the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. In a 2U node, remove the three-slot riser cage assembly ("Three-slot PCI riser cage assemblies" on page 53).
6. Remove the single-slot left PCI riser cage assembly:
   o 1U node
   o 2U node
CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all  
PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots  
have either an expansion slot cover or an expansion board installed.  
Single-slot 1U node right PCI riser cage assembly  
To remove the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Do one of the following:
   a. Remove the 1U left rear I/O blank (on page 38).
   b. Remove the single-slot left PCI riser cage assembly (on page 48).
5. Remove the single-slot 1U node right PCI riser cage assembly.
CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all  
PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots  
have either an expansion slot cover or an expansion board installed.  
FlexibleLOM 1U node riser cage assembly  
To remove the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Do one of the following:
   a. Remove the 1U left rear I/O blank (on page 38).
   b. Remove the single-slot left PCI riser cage assembly (on page 48).
5. Remove the FlexibleLOM 1U node riser cage assembly.
CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all  
PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots  
have either an expansion slot cover or an expansion board installed.  
Single-slot 2U node PCI riser cage assembly  
To remove the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. Remove the FlexibleLOM 2U node riser cage assembly (on page 52).
6. Remove the single-slot 2U node PCI riser cage assembly.
CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all  
PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots  
have either an expansion slot cover or an expansion board installed.  
FlexibleLOM 2U node riser cage assembly  
To remove the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. Remove the FlexibleLOM 2U node riser cage assembly.
Three-slot PCI riser cage assemblies  
NOTE: The three-slot PCI riser cage assembly and the three-slot GPU-direct PCI riser cage
assembly share the same riser cage but have a different riser board. For more information on the
riser board slot specifications, see "PCIe riser board slot definitions (on page 27)."
To remove the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. Remove the three-slot riser cage assembly.
CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all  
PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots  
have either an expansion slot cover or an expansion board installed.  
Setup  
Installation overview  
To set up and install the HP Apollo 2000 System:  
1. Set up and install the rack. For more information, see the documentation that ships with the rack.
2. Prepare the chassis ("Preparing the chassis" on page 58).
3. Install any hardware options into the chassis and nodes ("Hardware options installation" on page 64).
NOTE: Install the chassis into the rack before installing drives, power supplies, the RCM module,  
or nodes.  
4. Install the chassis into the rack ("Installing the chassis into the rack" on page 59).
5. Install all nodes, drives and power supplies ("Chassis component installation" on page 60).
6. Power up the chassis ("Powering up the chassis" on page 62).
7. Install an operating system ("Installing the operating system" on page 62).
8. Install the system software ("Installing the system software" on page 63).
9. Register the server ("Registering the server" on page 63).
Optional services  
Delivered by experienced, certified engineers, HP Care Pack services help you keep your servers up and  
running with support packages tailored specifically for HP ProLiant systems. HP Care Packs let you integrate  
both hardware and software support into a single package. A number of service level options are available  
to meet your needs.  
HP Care Pack Services offer upgraded service levels to expand your standard product warranty with  
easy-to-buy, easy-to-use support packages that help you make the most of your server investments. Some of  
the Care Pack services are:  
Hardware support
  o 6-Hour Call-to-Repair
  o 4-Hour 24x7 Same Day
  o 4-Hour Same Business Day
Software support
  o Microsoft®
  o Linux
  o HP ProLiant Essentials (HP SIM and RDP)
  o VMware
Integrated hardware and software support
  o Critical Service
  o Proactive 24
  o Support Plus
  o Support Plus 24
Startup and implementation services for both hardware and software
For more information on HP Care Pack Services, see the HP website.
Optimum environment  
When installing the server, select a location that meets the environmental standards described in this section.  
Space and airflow requirements  
To allow for servicing and adequate airflow, observe the following space and airflow requirements when  
deciding where to install a rack:  
Leave a minimum clearance of 85.09 cm (33.5 in) in front of the rack.  
Leave a minimum clearance of 76.2 cm (30 in) behind the rack.  
Leave a minimum clearance of 121.9 cm (48 in) from the back of the rack to the back of another rack  
or row of racks.  
HP nodes draw in cool air through the front door and expel warm air through the rear door. Therefore, the  
front and rear rack doors must be adequately ventilated to allow ambient room air to enter the cabinet, and  
the rear door must be adequately ventilated to allow the warm air to escape from the cabinet.  
CAUTION: To prevent improper cooling and damage to the equipment, do not block the  
ventilation openings.  
When vertical space in the rack is not filled by a server or rack component, the gaps between the  
components cause changes in airflow through the rack and across the servers. Cover all gaps with blanking  
panels to maintain proper airflow.  
CAUTION: Always use blanking panels to fill empty vertical spaces in the rack. This arrangement  
ensures proper airflow. Using a rack without blanking panels results in improper cooling that can  
lead to thermal damage.  
The 9000 and 10000 Series Racks provide proper server cooling from flow-through perforations in the front  
and rear doors that provide 64 percent open area for ventilation.  
CAUTION: When using a Compaq branded 7000 series rack, install the high airflow rack door  
insert (PN 327281-B21 for 42U rack, PN 157847-B21 for 22U rack) to provide proper  
front-to-back airflow and cooling.  
CAUTION: If a third-party rack is used, observe the following additional requirements to ensure  
adequate airflow and to prevent damage to the equipment:  
Front and rear doors—If the 42U rack includes closing front and rear doors, you must allow  
5,350 sq cm (830 sq in) of holes evenly distributed from top to bottom to permit adequate  
airflow (equivalent to the required 64 percent open area for ventilation).  
Side—The clearance between the installed rack component and the side panels of the rack  
must be a minimum of 7 cm (2.75 in).  
Temperature requirements  
To ensure continued safe and reliable equipment operation, install or position the system in a well-ventilated,  
climate-controlled environment.  
The maximum recommended ambient operating temperature (TMRA) for most server products is 35°C  
(95°F). The temperature in the room where the rack is located must not exceed 35°C (95°F).  
CAUTION: To reduce the risk of damage to the equipment when installing third-party options:  
Do not permit optional equipment to impede airflow around the server or to increase the  
internal rack temperature beyond the maximum allowable limits.  
Do not exceed the manufacturer’s TMRA.  
Power requirements  
Installation of this equipment must comply with local and regional electrical regulations governing the  
installation of information technology equipment by licensed electricians. This equipment is designed to  
operate in installations covered by NFPA 70, 1999 Edition (National Electric Code) and NFPA-75, 1992  
(code for Protection of Electronic Computer/Data Processing Equipment). For electrical power ratings on  
options, refer to the product rating label or the user documentation supplied with that option.  
WARNING: To reduce the risk of personal injury, fire, or damage to the equipment, do not  
overload the AC supply branch circuit that provides power to the rack. Consult the electrical  
authority having jurisdiction over wiring and installation requirements of your facility.  
CAUTION: Protect the server from power fluctuations and temporary interruptions with a  
regulating uninterruptible power supply. This device protects the hardware from damage caused  
by power surges and voltage spikes and keeps the system in operation during a power failure.  
When installing more than one server, you might need to use additional power distribution devices to safely  
provide power to all devices. Observe the following guidelines:  
Balance the server power load between available AC supply branch circuits.  
Do not allow the overall system AC current load to exceed 80% of the branch circuit AC current rating.  
Do not use common power outlet strips for this equipment.  
Provide a separate electrical circuit for the server.  
For more information on the hot-plug power supply and calculators to determine server power consumption
in various system configurations, see the HP Power Advisor website.
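As a worked example of the 80% guideline above, the short Python calculation below uses made-up values; use HP Power Advisor for real configuration sizing.

# Worked example of the 80% branch-circuit rule (illustrative values only).
branch_circuit_rating_a = 30                  # hypothetical 30 A AC branch circuit
allowed_load_a = 0.80 * branch_circuit_rating_a
per_chassis_load_a = 5.5                      # hypothetical per-chassis AC load

max_chassis_on_circuit = int(allowed_load_a // per_chassis_load_a)
print("Allowed continuous load: %.1f A" % allowed_load_a)              # 24.0 A
print("Chassis per circuit at this load: %d" % max_chassis_on_circuit) # 4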
Electrical grounding requirements  
The server must be grounded properly for proper operation and safety. In the United States, you must install  
the equipment in accordance with NFPA 70, 1999 Edition (National Electric Code), Article 250, as well as  
any local and regional building codes. In Canada, you must install the equipment in accordance with  
Canadian Standards Association, CSA C22.1, Canadian Electrical Code. In all other countries, you must  
install the equipment in accordance with any regional or national electrical wiring codes, such as the  
International Electrotechnical Commission (IEC) Code 364, parts 1 through 7. Furthermore, you must be sure  
that all power distribution devices used in the installation, such as branch wiring and receptacles, are listed  
or certified grounding-type devices.  
Because of the high ground-leakage currents associated with multiple servers connected to the same power  
source, HP recommends the use of a PDU that is either permanently wired to the building’s branch circuit or  
includes a nondetachable cord that is wired to an industrial-style plug. NEMA locking-style plugs or those  
complying with IEC 60309 are considered suitable for this purpose. Using common power outlet strips for  
the server is not recommended.  
Server warnings and cautions  
WARNING: This server is very heavy. To reduce the risk of personal injury or damage to the  
equipment:  
Observe local occupational health and safety requirements and guidelines for manual  
material handling.  
Get help to lift and stabilize the product during installation or removal, especially when the  
product is not fastened to the rails. HP recommends that a minimum of two people are required  
for all rack server installations. A third person may be required to help align the server if the  
server is installed higher than chest level.  
Use caution when installing the server or removing the server from the rack; it is unstable when  
not fastened to the rails.  
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the  
internal system components to cool before touching them.  
WARNING: To reduce the risk of personal injury, electric shock, or damage to the equipment,  
remove the power cord to remove power from the server. The front panel Power On/Standby  
button does not completely shut off system power. Portions of the power supply and some internal  
circuitry remain active until AC power is removed.  
CAUTION: Protect the server from power fluctuations and temporary interruptions with a  
regulating uninterruptible power supply. This device protects the hardware from damage caused  
by power surges and voltage spikes and keeps the system in operation during a power failure.  
CAUTION: Do not operate the server for long periods with the access panel open or removed.  
Operating the server in this manner results in improper airflow and improper cooling that can  
lead to thermal damage.  
Rack warnings  
WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:  
The leveling jacks are extended to the floor.  
The full weight of the rack rests on the leveling jacks.  
The stabilizing feet are attached to the rack if it is a single-rack installation.  
The racks are coupled together in multiple-rack installations.  
Only one component is extended at a time. A rack may become unstable if more than one  
component is extended for any reason.  
WARNING: To reduce the risk of personal injury or equipment damage when unloading a rack:  
At least two people are needed to safely unload the rack from the pallet. An empty 42U rack  
can weigh as much as 115 kg (253 lb), can stand more than 2.1 m (7 ft) tall, and might  
become unstable when being moved on its casters.  
Never stand in front of the rack when it is rolling down the ramp from the pallet. Always handle  
the rack from both sides.  
WARNING: To reduce the risk of personal injury or damage to the equipment, adequately  
stabilize the rack before extending a component outside the rack. Extend only one component at  
a time. A rack may become unstable if more than one component is extended.  
WARNING: When installing a server in a telco rack, be sure that the rack frame is adequately  
secured at the top and bottom to the building structure.  
Identifying the contents of the server shipping carton  
Unpack the server shipping carton and locate the materials and documentation necessary for installing the  
server. All the rack mounting hardware necessary for installing the server into the rack is included with the  
rack or the server.  
The contents of the server shipping carton include:  
Server  
Power cord  
Rack rail hook-and-loop strap  
Rack mounting hardware kit  
Printed setup documentation  
In addition to the supplied items, you might need:  
T-25 Torx screwdriver (to loosen the shipping screws located inside the node quick-release latch rack  
ears)  
T-10/T-15 Torx screwdriver  
Flathead screwdriver (to remove the knockout on the dedicated iLO connector opening)  
Hardware options  
Preparing the chassis  
Before installing the chassis into the rack, you must remove the nodes, the drives, and the power supplies.  
Because a fully populated chassis is heavy, removing these components facilitates moving and installing the  
chassis.  
1. Remove all nodes from the chassis ("Remove the node from the chassis" on page 32).
2. Remove all drives ("Removing the drive" on page 35).
3. Remove the power supply (on page 33).
Installing hardware options  
Install any hardware options before initializing the server. For options installation information, see the option  
documentation. For server-specific information, see "Hardware options installation (on page 64)."  
Installing the chassis into the rack  
WARNING: Always use at least two people to lift the chassis into the rack. If the chassis is being  
loaded into the rack above chest level, a third person must assist with aligning the chassis with the  
rails while the other two people support the weight of the chassis.  
WARNING: The chassis is very heavy. To reduce the risk of personal injury or damage to the  
equipment:  
Observe local occupational health and safety requirements and guidelines for manual  
material handling.  
Remove all installed components from the chassis before installing or moving the chassis.  
Use caution and get help to lift and stabilize the chassis during installation or removal,  
especially when the chassis is not fastened to the rack.  
WARNING: To avoid risk of personal injury or damage to the equipment, do not stack anything  
on top of rail-mounted equipment or use it as a work surface when extended from the rack.  
CAUTION: Always plan the rack installation so that the heaviest item is on the bottom of the rack.  
Install the heaviest item first, and continue to populate the rack from the bottom to the top.  
The chassis requires installation in a rack. To install the rack rails, see the Quick Deploy Rail System  
Installation Instructions that ship with the rack hardware kit.  
You can install up to twenty-one chassis in a 42U, 1200 mm deep rack. If you are installing more than one  
chassis, install the first chassis in the bottom of the rack, and then install additional chassis by moving up the  
rack with each subsequent chassis. Plan the rack installation carefully, because changing the location of  
installed components might be difficult.  
WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:  
The rack is bolted to the floor using the concrete anchor kit.  
The leveling feet extend to the floor.  
The full weight of the rack rests on the leveling feet.  
The racks are coupled together in multiple rack installations.  
Only one component is extended at a time. If more than one component is extended, a rack  
might become unstable.  
WARNING: To reduce the risk of personal injury or equipment damage, be sure that the rack is  
adequately stabilized before installing the chassis.  
CAUTION: Be sure to keep the product parallel to the floor when installing the chassis. Tilting the  
product up or down could result in damage to the slides.  
Install the chassis into the rack and tighten the thumbscrews.  
Chassis component installation  
Installing a node into the chassis  
1U node  
2U node  
Installing a drive  
1. Remove the drive blank ("Removing a drive blank" on page 65).
2. Install the drives ("Drive options" on page 64).
Installing the power supplies  
CAUTION: Do not mix power supplies with different efficiency and wattage in the chassis. Install  
only one type of power supply in a single chassis.  
1. If installing a second power supply, remove the power supply blank.
2. Slide the power supplies into the power supply bays until they click into place.
3. If needed, install an RCM module ("Rack control management (RCM) module" on page 67).
4. Connect all power cords and secure them with the strain release straps.
Powering up the chassis  
Connect the AC or DC power cables, depending on the power configuration.  
When the circuit breakers are powered, the chassis and HP Advanced Power Manager have power. By  
default, each installed component also powers up. Examine the HP Advanced Power Manager for any errors  
which may prevent installed components from powering up.  
HP Advanced Power Manager (optional)  
To install, configure, and access HP APM, see the HP Advanced Power Manager User Guide on the HP website.
Powering on and selecting boot options in UEFI Boot Mode
On servers operating in UEFI Boot Mode, the boot controller and boot order are set automatically.  
1. Press the Power On/Standby button.
2. During the initial boot:
   o To modify the server configuration ROM default settings, press the F9 key in the HP ProLiant POST screen to enter the UEFI System Utilities screen. By default, the System Utilities menus are in the English language.
   o If you do not need to modify the server configuration and are ready to install the system software, press the F10 key to access Intelligent Provisioning.
For more information on automatic configuration, see the UEFI documentation on the HP website.
Installing the operating system  
To operate properly, the node must have a supported operating system installed. For the latest information on  
operating system support, see the HP website (http://www.hp.com/go/supportos).  
IMPORTANT: HP ProLiant XL servers do not support operating system installation with Intelligent  
Provisioning, but do support the maintenance features. For more information, see the Performing  
Maintenance section of the HP Intelligent Provisioning User Guide and online help.  
To install an operating system on the node, use one of the following methods:  
Manual installation—Insert the operating system CD into the USB-attached DVD-ROM drive (user  
provided) and reboot the node. You must download the HP Service Pack for ProLiant from the SPP  
download site (http://www.hp.com/go/spp/download) and create SPP media so that you can install  
the drivers.  
Remote deployment installation—Use Insight Control server provisioning for an automated solution to  
remotely deploy an operating system.  
For additional system software and firmware updates, download the HP Service Pack for ProLiant from the HP  
website (http://www.hp.com/go/spp/download). Software and firmware should be updated before using  
the node for the first time, unless any installed software or components require an older version.  
For more information on using these installation methods, see the HP website (http://www.hp.com/go/ilo).  
Installing the system software  
To access and configure Intelligent Provisioning on a single node:  
1. Access Intelligent Provisioning by rebooting the server and pressing F10.
2. The first time you log into Intelligent Provisioning, follow the steps to set preferences and activate Intelligent Provisioning.
3. From the Home screen, click Perform Maintenance, and then click Firmware Update.
4. Ensure the latest drivers are available for installation. Select Intelligent Provisioning Software from the list of firmware, and click Update. If the check box is not selected, the latest drivers are already installed.
Registering the server  
To experience quicker service and more efficient support, register the product at the HP Product Registration website.
Hardware options installation  
Introduction  
If more than one option is being installed, read the installation instructions for all the hardware options and  
identify similar steps to streamline the installation process.  
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the  
internal system components to cool before touching them.  
CAUTION: To prevent damage to electrical components, properly ground the server before  
beginning any installation procedure. Improper grounding can cause electrostatic discharge.  
Security bezel option  
The security bezel helps prevent unauthorized physical access to the front panel components. Install the  
security bezel and then lock it with the key provided with the kit.  
Drive options  
The embedded HP Dynamic Smart Array B140i Controller only supports SATA devices. For SAS drive  
installation, install an HP Host Bus Adapter or an HP Smart Array Controller board option.  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
Removing a drive blank  
1. If installed, remove the security bezel (on page 35).
2. Remove the drive blank.
Installing a hot-plug drive  
WARNING: To reduce the risk of injury from electric shock, do not install more than one drive  
carrier at a time.  
The chassis can support up to 12 drives in an LFF configuration and up to 24 drives in an SFF configuration.  
To install the component:  
1. If installed, remove the security bezel (on page 35).
2. Remove the drive blank ("Removing a drive blank" on page 65).
3. Prepare the drive.
   o SFF HP SmartDrive
   o Low-profile LFF hot-plug drive
4. Install the drive:
   o SFF HP SmartDrive
   o Low-profile LFF hot-plug drive
5. Determine the status of the drive from the drive LED definitions ("HP SmartDrive LED definitions").
6. If removed, install the security bezel ("Security bezel option" on page 64).
To configure arrays, see the HP Smart Storage Administrator User Guide on the HP website.
Node blank  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
Install the node blank into the left side of the server chassis.  
Install the node blank into the right side of the server chassis.  
Rack control management (RCM) module  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the nodes ("Power down the node" on page 31).
2. Disconnect each power cord from the power source.
3. Remove the power supply relief strap from the handle on the bottom power supply.
4. Install the rack control management module onto the bottom power supply.
5. Reconnect all power:
   a. Connect each power cord to the power source.
   b. Connect the power cord to the chassis.
6. Power up the nodes (on page 31).
IMPORTANT: Use either the HP APM port or an iLO port to connect to a network. Having both  
ports connected at the same time results in a loopback condition.  
IMPORTANT: Do not connect both iLO ports to the network at the same time. Only one iLO port  
can be connected to the network, while the other iLO port can be used only as a connection to a  
second enclosure. Having both ports connected at the same time results in a loopback condition.  
7. If using the RCM module iLO ports to connect the chassis to a network, connect all cables to the RCM module and the network. Multiple chassis can be connected to the same network.
NOTE: Arrow indicates connection to the network.  
8. If installing HP APM, see the HP Advanced Power Manager User Guide on the HP website.
RCM 2.0 to 1.0 adapter cable  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the system ("Power down the system" on page 31).
2. Disconnect each power cord from the power source.
3. Install the rack control management module ("Rack control management (RCM) module" on page 67).
4. Connect the RCM 2.0 to 1.0 adapter cable to the RCM module.
5. Reconnect all power:
   a. Connect each power cord to the power source.
   b. Connect the power cord to the chassis.
6. Power up the nodes (on page 31).
7. To install, configure, and access HP APM, see the HP Advanced Power Manager User Guide on the HP website.
Redundant fan option  
Fan population guidelines  
To provide sufficient airflow to the system if a fan fails, the server supports redundant fans.  
Configuration    Fan bay 1   Fan bay 2   Fan bay 3   Fan bay 4   Fan bay 5   Fan bay 6   Fan bay 7   Fan bay 8
Non-redundant    Fan         Fan         Fan         Fan         Empty       Empty       Empty       Empty
Redundant        Fan         Fan         Fan         Fan         Fan         Fan         Fan         Fan
In a redundant fan mode:
o If one fan fails, the system continues to operate without redundancy. This condition is indicated by a flashing amber Health LED.
o If two fans fail, the system shuts down.
The minimum fan requirement for this server to power on is four fans (fans 1, 2, 3, and 4).  
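The population and redundancy rules above can be summarized in a short check. The following Python sketch is illustrative only (the bay constants and function name are assumptions, not HP software):

# Illustrative sketch of the fan rules above: bays 1-4 are the power-on
# minimum, and all eight bays must be populated for redundant fan mode.
REQUIRED_BAYS = {1, 2, 3, 4}
ALL_BAYS = set(range(1, 9))

def fan_configuration(populated_bays):
    """Classify a set of populated fan bays per the table above."""
    bays = set(populated_bays)
    if not REQUIRED_BAYS <= bays:
        return "invalid: fans 1, 2, 3, and 4 are required to power on"
    return "redundant" if bays == ALL_BAYS else "non-redundant"

print(fan_configuration({1, 2, 3, 4}))    # non-redundant
print(fan_configuration(range(1, 9)))     # redundant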
Installing the fan option  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the system (on page 31).
2. Disconnect all peripheral cables from the nodes and chassis.
3. Remove the node from the chassis (on page 32).
4. If installed, remove the security bezel (on page 35).
5. Remove all drives ("Removing the drive" on page 35).
6. If installed, remove the RCM module (on page 33).
7. Remove all power supplies ("Remove the power supply" on page 33).
8. Remove the chassis from the rack (on page 34).
9. Remove the access panel ("Remove the chassis access panel" on page 36).
10. Install the redundant fans in the left and right fan cages.  
11. Connect the fan cables to the power connectors.  
12. Install the access panel ("Install the chassis access panel" on page 37).
13. Install the chassis into the rack ("Installing the chassis into the rack" on page 59).  
14. If removed, install the security bezel ("Security bezel option" on page 64).  
15. Install all nodes, drives and power supplies ("Chassis component installation" on page 60).  
16. Reconnect all power:  
a. Connect each power cord to the power source.  
b. Connect the power cord to the chassis.  
17. Connect all peripheral cables to the nodes.  
18. Power up the nodes (on page 31).  
Memory options  
IMPORTANT: This node does not support mixing LRDIMMs or RDIMMs. Attempting to mix any  
combination of these DIMMs can cause the server to halt during BIOS initialization.  
The memory subsystem in this node can support LRDIMMs and RDIMMs:  
RDIMMs offer address parity protection.  
LRDIMMs support higher densities than single- and dual-rank RDIMMs, and higher speeds than  
quad-rank RDIMMs. This support enables you to install more high capacity DIMMs, resulting in higher  
system capacities and higher bandwidth.  
All types are referred to as DIMMs when the information applies to all types. When specified as LRDIMM or  
RDIMM, the information applies to that type only. All memory installed in the node must be the same type.  
The server supports the following RDIMM and LRDIMM speeds:  
Single- and dual-rank PC4-2133 (DDR4-2133) RDIMMs and LRDIMMs operating at up to 2133 MT/s  
Speed and capacity  
DIMM type   DIMM rank     DIMM capacity   Native speed (MT/s)
RDIMM       Single-rank   4 GB            2133
RDIMM       Single-rank   8 GB            2133
RDIMM       Dual-rank     8 GB            2133
RDIMM       Dual-rank     16 GB           2133
LRDIMM      Dual-rank     16 GB           2133
RDIMM       Dual-rank     32 GB           2133
LRDIMM      Quad-rank     32 GB           2133
Populated DIMM speed (MT/s)
DIMM type   DIMM rank     1 DIMM per channel   2 DIMMs per channel
RDIMM       Single-rank   2133                 2133
RDIMM       Dual-rank     2133                 2133
LRDIMM      Dual-rank     2133                 2133
LRDIMM      Quad-rank     2133                 2133
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
HP SmartMemory  
HP SmartMemory authenticates and unlocks certain features available only on HP Qualified memory and  
verifies whether installed memory has passed HP qualification and test processes. Qualified memory is  
performance-tuned for HP ProLiant and BladeSystem servers and provides future enhanced support through  
HP Active Health and manageability software.  
Memory subsystem architecture  
The memory subsystem in this node is divided into channels. Each processor supports four channels, and  
each channel supports two DIMM slots, as shown in the following table.  
Channel   Population order   Slot number
1         A, E               8, 7
2         B, F               6, 5
3         C, G               1, 2
4         D, H               3, 4
For the location of the slot numbers, see "DIMM slot locations (on page 19)."  
This multi-channel architecture provides enhanced performance in Advanced ECC mode. This architecture  
also enables Online Spare Memory mode.  
DIMM slots in this server are identified by number and by letter. Letters identify the population order. Slot  
numbers indicate the DIMM slot ID for spare replacement.  
Single-, dual-, and quad-rank DIMMs  
To understand and configure memory protection modes properly, an understanding of single-, dual-, and  
quad-rank DIMMs is helpful. Some DIMM configuration requirements are based on these classifications.  
A single-rank DIMM has one set of memory chips that is accessed while writing to or reading from the  
memory. A dual-rank DIMM is similar to having two single-rank DIMMs on the same module, with only one  
rank accessible at a time. A quad-rank DIMM is, effectively, two dual-rank DIMMs on the same module. Only  
one rank is accessible at a time. The node memory control subsystem selects the proper rank within the DIMM  
when writing to or reading from the DIMM.  
Dual- and quad-rank DIMMs provide the greatest capacity with the existing memory technology. For  
example, if current DRAM technology supports 8-GB single-rank DIMMs, a dual-rank DIMM would be 16  
GB, and a quad-rank DIMM would be 32 GB.  
LRDIMMs are labeled as quad-rank DIMMs. There are four ranks of DRAM on the DIMM, but the LRDIMM  
buffer creates an abstraction that allows the DIMM to appear as a dual-rank DIMM to the system. The  
LRDIMM buffer isolates the electrical loading of the DRAM from the system to allow for faster operation. This  
allows higher memory operating speed compared to quad-rank RDIMMs.  
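The capacity scaling described above is simple multiplication. As a worked example (an illustrative sketch only; the function name is not HP tooling):

# Illustrative sketch: total DIMM capacity is the per-rank capacity times
# the number of ranks, using the 8-GB single-rank example from the text.
def module_capacity_gb(per_rank_gb, ranks):
    return per_rank_gb * ranks

print(module_capacity_gb(8, 1))   # single-rank -> 8 GB
print(module_capacity_gb(8, 2))   # dual-rank   -> 16 GB
print(module_capacity_gb(8, 4))   # quad-rank   -> 32 GB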
DIMM identification  
To determine DIMM characteristics, use the label attached to the DIMM and the following illustration and  
table.  
Item   Description            Definition
1      Capacity               4 GB, 8 GB, 16 GB, or 32 GB
2      Rank                   1R = Single-rank, 2R = Dual-rank, 4R = Quad-rank
3      Data width             x4 = 4-bit, x8 = 8-bit
4      Memory generation      DDR4
5      Maximum memory speed   2133 MT/s
6      CAS latency            P = 15
7      DIMM type              R = RDIMM (registered), L = LRDIMM (load reduced)
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
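As an illustration of how the label fields above combine, the following Python sketch decodes an assumed label string such as "8GB 2Rx8 PC4-2133P-R" (the helper and the example label format are assumptions for illustration, not an HP utility):

import re

# Illustrative sketch: pull the capacity, rank, data width, and DIMM type
# fields described in the table above from an assumed label string.
def decode_dimm_label(label):
    rank_width = re.search(r"(\d)Rx(\d)", label)       # e.g. "2Rx8"
    return {
        "capacity": re.search(r"(\d+)GB", label).group(1) + " GB",
        "ranks": int(rank_width.group(1)),              # 1, 2, or 4
        "data_width_bits": int(rank_width.group(2)),    # x4 or x8
        "type": "RDIMM" if label.rstrip()[-1] == "R" else "LRDIMM",
    }

print(decode_dimm_label("8GB 2Rx8 PC4-2133P-R"))
# {'capacity': '8 GB', 'ranks': 2, 'data_width_bits': 8, 'type': 'RDIMM'}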
Memory configurations  
To optimize node availability, the node supports the following AMP modes:  
Advanced ECC—Provides up to 4-bit error correction and enhanced performance over Lockstep mode.  
This mode is the default option for this node.  
Online spare memory—Provides protection against failing or degraded DIMMs. Certain memory is  
reserved as spare, and automatic failover to spare memory occurs when the system detects a DIMM that  
is degrading. This allows DIMMs that have a higher probability of receiving an uncorrectable memory  
error (which would result in system downtime) to be removed from operation.  
Advanced Memory Protection options are configured in the BIOS/Platform Configuration (RBSU). If the  
requested AMP mode is not supported by the installed DIMM configuration, the node boots in Advanced  
ECC mode. For more information, see the HP UEFI System Utilities User Guide for HP ProLiant Gen9 Servers on the HP website.
Maximum capacity  
DIMM type   DIMM rank            One processor   Two processors
RDIMM       Single-rank (4 GB)   32 GB           64 GB
RDIMM       Single-rank (8 GB)   64 GB           128 GB
RDIMM       Dual-rank (8 GB)     64 GB           128 GB
RDIMM       Dual-rank (16 GB)    128 GB          256 GB
LRDIMM      Dual-rank (16 GB)    128 GB          256 GB
RDIMM       Dual-rank (32 GB)    256 GB          512 GB
LRDIMM      Quad-rank (32 GB)    256 GB          512 GB
For the latest memory configuration information, see the QuickSpecs on the HP website (http://www.hp.com/go/qs).
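The maximums in the table follow from the slot count: four channels per processor with two DIMM slots each gives eight DIMMs per processor. A minimal Python sketch of that arithmetic (illustrative only; the names are assumptions):

# Illustrative sketch: maximum memory = DIMM capacity x 8 slots per processor
# x number of processors, matching the table above.
CHANNELS_PER_PROCESSOR = 4
SLOTS_PER_CHANNEL = 2

def max_memory_gb(dimm_capacity_gb, processors):
    slots = CHANNELS_PER_PROCESSOR * SLOTS_PER_CHANNEL * processors
    return dimm_capacity_gb * slots

print(max_memory_gb(32, 1))   # 256 GB with one processor
print(max_memory_gb(32, 2))   # 512 GB with two processors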
Advanced ECC memory configuration  
Advanced ECC memory is the default memory protection mode for this node. Standard ECC can correct  
single-bit memory errors and detect multi-bit memory errors. When multi-bit errors are detected using  
Standard ECC, the error is signaled to the node and causes the node to halt.  
Advanced ECC protects the node against some multi-bit memory errors. Advanced ECC can correct both  
single-bit memory errors and 4-bit memory errors if all failed bits are on the same DRAM device on the DIMM.  
Advanced ECC provides additional protection over Standard ECC because it is possible to correct certain  
memory errors that would otherwise be uncorrected and result in a node failure. Using HP Advanced  
Memory Error Detection technology, the node provides notification when a DIMM is degrading and has a  
higher probability of uncorrectable memory error.  
Online Spare memory configuration  
Online spare memory provides protection against degraded DIMMs by reducing the likelihood of  
uncorrected memory errors. This protection is available without any operating system support.  
Online spare memory protection dedicates one rank of each memory channel for use as spare memory. The  
remaining ranks are available for OS and application use. If correctable memory errors occur at a rate  
higher than a specific threshold on any of the non-spare ranks, the node automatically copies the memory  
contents of the degraded rank to the online spare rank. The node then deactivates the failing rank and  
automatically switches over to the online spare rank.  
General DIMM slot population guidelines  
Observe the following guidelines for all AMP modes:  
Install DIMMs only if the corresponding processor is installed.  
When two processors are installed, balance the DIMMs across the two processors.  
White DIMM slots denote the first slot of a channel (Ch 1-A, Ch 2-B, Ch 3-C, Ch 4-D).
Do not mix RDIMMs and LRDIMMs.  
When one processor is installed, install DIMMs in sequential alphabetic order: A, B, C, D, E, F, and so  
forth.  
When two processors are installed, install the DIMMs in sequential alphabetic order balanced between  
the two processors: P1-A, P2-A, P1-B, P2-B, P1-C, P2-C, and so forth.  
When single-rank, dual-rank, and quad-rank DIMMs are populated for two DIMMs per channel or three  
DIMMs per channel, always populate the higher number rank DIMM first (starting from the farthest slot).  
For example, first quad-rank DIMM, then dual-rank DIMM, and then lastly single-rank DIMM.  
DIMMs should be populated starting farthest from the processor on each channel.  
For DIMM spare replacement, install the DIMMs per slot number as instructed by the system software.  
For more information about node memory, see the HP website (http://www.hp.com/go/memory).  
DIMM speeds are supported as indicated in the following table.  
Populated slots (per channel)   Rank                           Speeds supported (MT/s)
1                               Single-, dual-, or quad-rank   2133
2                               Single- or dual-rank           2133
Advanced ECC population guidelines  
For Advanced ECC mode configurations, observe the following guidelines:  
Observe the general DIMM slot population guidelines.  
DIMMs may be installed individually.  
Online spare population guidelines  
For Online Spare memory mode configurations, observe the following guidelines:  
Observe the general DIMM slot population guidelines.  
Each channel must have a valid online spare configuration.  
Each channel can have a different valid online spare configuration.  
Each populated channel must have a spare rank. A single dual-rank DIMM is not a valid configuration.  
Population order  
For memory configurations with a single processor or multiple processors, DIMMs must be populated  
sequentially in alphabetical order (A through H).  
After installing the DIMMs, use the BIOS/Platform Configuration (RBSU) in the UEFI System Utilities to  
configure supported AMP modes.  
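As an illustration of the alphabetical, processor-balanced order described above, the following Python sketch (hypothetical helper, not HP software) lists the order in which DIMMs would be placed for a given DIMM count:

# Illustrative sketch: generate the DIMM population order described above.
# With one processor the order is P1-A, P1-B, ...; with two processors the
# DIMMs alternate between processors: P1-A, P2-A, P1-B, P2-B, ...
from itertools import product

LETTERS = "ABCDEFGH"   # population-order letters shown in the channel table

def population_order(processors, dimm_count):
    if processors == 1:
        order = [f"P1-{letter}" for letter in LETTERS]
    else:
        order = [f"P{cpu}-{letter}" for letter, cpu in product(LETTERS, (1, 2))]
    return order[:dimm_count]

print(population_order(1, 4))   # ['P1-A', 'P1-B', 'P1-C', 'P1-D']
print(population_order(2, 4))   # ['P1-A', 'P2-A', 'P1-B', 'P2-B']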
Installing a DIMM  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. If installed in a 2U node, remove the FlexibleLOM 2U node riser cage assembly ("FlexibleLOM 2U node riser cage assembly" on page 52).
6. If installed in a 2U node, remove the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assemblies" on page 53).
7. Remove the air baffle (on page 41).
8. Open the DIMM slot latches.
9. Install the DIMM.
10. Install the air baffle (on page 42).  
11. Install any removed PCI riser cage assemblies ("PCI riser cage assembly options" on page 84).  
12. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
13. Connect all peripheral cables to the nodes.  
14. Power up the node ("Power up the nodes" on page 31).  
Storage cable options  
B140i 1U node SATA cable  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Remove the bayonet board assembly from the node ("Remove the bayonet board assembly").
6. Remove the bayonet board bracket from the bayonet board ("Remove the bayonet board bracket" on page 46).
7. Connect the SATA cable to the system board and bayonet board.
8. Install the bayonet board bracket onto the bayonet board ("Install the bayonet board bracket" on page 47).
9. Route and secure the cable under the thin plastic covers.
10. Install the bayonet board assembly into the node ("Install the bayonet board assembly" on page 44).
11. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
12. Connect all peripheral cables to the nodes.  
13. Power up the node ("Power up the nodes" on page 31).  
B140i 2U node SATA cable  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. If installed, remove the FlexibleLOM 2U node riser cage assembly ("FlexibleLOM 2U node riser cage assembly" on page 52).
6. If installed, remove the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assemblies" on page 53).
7. If a graphic card/coprocessor power cable is installed, disconnect it from the bayonet board.
8. Remove the bayonet board assembly from the node ("Remove the bayonet board assembly").
9. Remove the bayonet board bracket from the bayonet board ("Remove the bayonet board bracket" on page 46).
10. Connect the SATA cable to the system board and bayonet board.
11. Route the cable under the padding on the 2U bayonet board and install the bayonet board bracket onto the bayonet board ("Install the bayonet board bracket" on page 47).
12. Route and secure the cable under the thin plastic covers.  
13. Install the bayonet board assembly into the node ("Install the bayonet board assembly" on page 44).  
14. If removed, connect the graphic card/ coprocessor cable to the bayonet board.  
15. Install any removed PCI riser cage assemblies ("PCI riser cage assembly options" on page 84).  
16. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
17. Connect all peripheral cables to the nodes.  
18. Power up the node ("Power up the nodes" on page 31).  
Mini-SAS H240 1U node cable option  
In a 1U node, the HP H240 host bus adapter can only be installed in the single-slot left PCI riser cage  
assembly.  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Do one of the following:
   o Remove the 1U left rear I/O blank (on page 38).
   o Remove the single-slot left PCI riser cage assembly (on page 48).
6. If a B140i SATA cable is installed, disconnect it from the connectors on the system board.
7. Remove the bayonet board assembly from the node ("Remove the bayonet board assembly").
8. Remove the bayonet board bracket from the bayonet board ("Remove the bayonet board bracket" on page 46).
9. If installed, disconnect and remove the B140i 1U node SATA cable.
10. Remove the PCI slot blank.  
11. Install the host bus adapter into the riser cage assembly and secure it with one T-10 screw.  
12. Connect the split ends of the Mini-SAS Y-cable to the host bus adapter.  
13. Connect the common end of the Mini-SAS Y-cable to the bayonet board.  
14. Install the bayonet board bracket onto the bayonet board ("Install the bayonet board bracket" on page 47).
15. Route and secure the cable under the thin plastic covers.  
16. Install the bayonet board assembly into the node ("Install the bayonet board assembly" on page 44).  
17. Install the single-slot left PCI riser cage assembly ("Single-slot left PCI riser cage assembly option" on  
page 85).  
18. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
19. Connect all peripheral cables to the nodes.  
20. Power up the node ("Power up the nodes" on page 31).  
Mini-SAS H240 2U node cable option  
In a 2U node, the HP H240 host bus adapter can only be installed in the single-slot left PCI riser cage  
assembly or the single-slot 2U node PCI riser cage assembly.  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Do one of the following:
   o Remove the 2U rear I/O blank (on page 40).
   o Remove the single-slot 2U node PCI riser cage assembly (on page 51).
   o Remove the single-slot left PCI riser cage assembly (on page 48).
6. If a B140i SATA cable is installed, disconnect it from the connectors on the system board.
7. Remove the bayonet board assembly from the node ("Remove the bayonet board assembly").
8. Remove the bayonet board bracket from the bayonet board ("Remove the bayonet board bracket" on page 46).
9. If installed, disconnect and remove the B140i 2U node SATA cable.
10. Remove the riser slot blank from the riser cage.
11. Install the host bus adapter into the riser cage assembly and secure it with one T-10 screw.  
12. Connect the Mini-SAS cable to the host bus adapter.  
13. Connect the opposite ends of the cable assembly to the bayonet board.  
14. Route the cable under the padding on the 2U bayonet board and install the bayonet board bracket onto the bayonet board ("Install the bayonet board bracket" on page 47).
15. Route and secure the cable under the thin plastic covers.  
16. Install the bayonet board assembly into the node ("Install the bayonet board assembly" on page 44).  
17. Install the PCI riser cage assemblies ("PCI riser cage assembly options" on page 84).  
18. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
19. Connect all peripheral cables to the nodes.  
20. Power up the node ("Power up the nodes" on page 31).  
Mini-SAS P440/P840 cable option  
In a 1U node, the HP P440 Smart Array controller must be installed in the single-slot left PCI riser cage  
assembly.  
To install an HP P840 Smart Array controller in a 2U node, two P440/P840 Mini-SAS cable options are  
required. The HP P840 Smart Array controller can only be installed in slot 2 of FlexibleLOM 2U node riser  
cage assembly or slot 3 of a three-slot PCI riser cage assembly. For more information on the riser board slot  
specifications, see "PCIe riser board slot definitions (on page 27)."  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. If installing an HP P440 Smart Array controller, do one of the following:
   o Remove the 1U left rear I/O blank (on page 38).
   o Remove the single-slot left PCI riser cage assembly (on page 48).
6. If installing an HP P840 Smart Array controller, do one of the following:
   o Remove the 2U rear I/O blank (on page 40).
   o Remove the FlexibleLOM 2U node riser cage assembly (on page 52).
   o Remove the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assemblies" on page 53).
7. If a B140i SATA cable is installed, disconnect it from the connectors on the system board.
8. Remove the bayonet board assembly from the node ("Remove the bayonet board assembly").
9. Remove the bayonet board bracket from the bayonet board ("Remove the bayonet board bracket" on page 46).
10. If installed, disconnect and remove the B140i 1U node SATA cable or the B140i 2U node SATA cable.  
11. Remove the PCI slot blank.  
12. Install the HP P440 Smart Array controller or HP P840 Smart Array controller into the riser cage assembly and secure it with one T-10 screw ("Controller options" on page 96).
13. Connect the Mini-SAS cable to the Smart Storage controller and the bayonet board.  
HP P840 Smart Array controller in a 2U node  
14. In a 1U node, do the following:  
a. Install the bayonet board bracket onto the bayonet board ("Install the bayonet board bracket" on  
page 47).  
b. Route and secure the cable under the thin plastic covers.  
15. In a 2U node, do the following:  
a. Route the cables under the padding on the 2U bayonet board and install the bayonet board bracket  
onto the bayonet board ("Install the bayonet board bracket" on page 47).  
b. Route and secure the cables under the thin plastic covers.  
16. Install the bayonet board assembly into the node ("Install the bayonet board assembly" on page 44).  
17. Install the PCI riser cage assembly ("PCI riser cage assembly options" on page 84).
18. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
19. Connect all peripheral cables to the node.  
20. Power up the node ("Power up the nodes" on page 31).  
Mini-SAS P440 2U node cable option  
In a 2U node, the HP P440 Smart Array controller can only be installed in the single-slot left PCI riser cage  
assembly or the single-slot 2U node PCI riser cage assembly.  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Do one of the following:
   o Remove the 2U rear I/O blank (on page 40).
   o Remove the FlexibleLOM 2U node riser cage assembly (on page 52).
   o Remove the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assemblies" on page 53).
6. Remove the bayonet board assembly from the node ("Remove the bayonet board assembly").
7. Remove the bayonet board bracket from the bayonet board ("Remove the bayonet board bracket" on page 46).
8. If installed, disconnect and remove the B140i 2U node SATA cable.
9. Remove the PCI slot blank.
10. Install the HP P440 Smart Array controller into the riser cage assembly and secure it with one T-10 screw.
11. Connect the Mini-SAS cable to the Smart Storage controller and the bayonet board.  
12. Route the cable under the padding on the 2U bayonet board and install the bayonet board bracket onto the bayonet board ("Install the bayonet board bracket" on page 47).
13. Route and secure the cable under the thin plastic covers.  
14. Install the bayonet board assembly into the node ("Install the bayonet board assembly" on page 44).  
15. Install the PCI riser cage assembly.  
16. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
17. Connect all peripheral cables to the nodes.  
18. Power up the node ("Power up the nodes" on page 31).  
PCI riser cage assembly options  
Each node supports two PCI riser cage assembly options. A second processor is required to support  
installation of the single-slot 1U node right PCI riser cage assembly or a three-slot PCI riser cage assembly.  
For more information on the riser board slot specifications, see PCIe riser board slot definitions (on page 27).  
In a 2U node, a three-slot PCI riser cage assembly must be installed with the single-slot left PCI riser cage  
assembly. The FlexibleLOM 2U riser cage assembly must be installed with the single-slot 2U node PCI riser  
cage assembly.  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
Single-slot left PCI riser cage assembly option  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Do one of the following:
   o Remove the 1U left rear I/O blank (on page 38).
   o Remove the 2U rear I/O blank (on page 40).
6. If you are installing an expansion board, remove the PCI blank.
7. Install any optional expansion boards.
8. Connect all necessary internal cabling to the expansion board. For more information on these cabling requirements, see the documentation that ships with the option.
9. In a 1U node, install the single-slot left PCI riser cage assembly and then secure it with three T-10 screws.
10. In a 2U node, do the following:  
a. Install the single-slot left PCI riser cage assembly and then secure it with two T-10 screws.  
b. Install the three-slot riser cage assembly ("Three-slot PCI riser cage assembly options" on page 93).  
IMPORTANT: If the PCIe riser cage assembly is not seated properly, then the server does not  
power up.  
CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all  
PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots  
have either an expansion slot cover or an expansion board installed.  
11. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
12. Connect all peripheral cables to the nodes.  
13. Power up the node ("Power up the nodes" on page 31).  
Single-slot 1U node right PCI riser cage assembly option  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Do one of the following:
   o Remove the 1U left rear I/O blank (on page 38).
   o Remove the single-slot left PCI riser cage assembly (on page 48).
6. Remove the 1U right rear I/O blank (on page 39).
7. If you are installing an expansion board, remove the PCI blank.
8. Install any optional expansion boards into the PCI riser cage assembly.
9. Connect all necessary internal cabling to the expansion board. For more information on these cabling requirements, see the documentation that ships with the option.
10. Install the PCI riser cage assembly.  
11. Do one of the following:
   o Install the 1U left rear I/O blank (on page 38).
   o Install the single-slot left PCI riser cage assembly ("Single-slot left PCI riser cage assembly option" on page 85).
CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all  
PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots  
have either an expansion slot cover or an expansion board installed.  
IMPORTANT: If the PCIe riser cage assembly is not seated properly, then the server does not  
power up.  
12. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
13. Connect all peripheral cables to the nodes.  
14. Power up the node ("Power up the nodes" on page 31).  
Single-slot 2U node PCI riser cage assembly option  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Remove the 2U rear I/O blank (on page 40).
6. If you are installing an expansion board, remove the PCI blank.
7. Install any optional expansion boards.
8. Connect all necessary internal cabling to the expansion board. For more information on these cabling requirements, see the documentation that ships with the option.
9. Do the following:
a. Install the single-slot 2U node PCI riser cage assembly and secure it with two T-10 screws.  
b. Install the FlexibleLOM 2U node riser cage assembly and secure it with five T-10 screws.  
CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all  
PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots  
have either an expansion slot cover or an expansion board installed.  
IMPORTANT: If the PCIe riser cage assembly is not seated properly, then the server does not  
power up.  
10. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
11. Connect all peripheral cables to the nodes.  
12. Power up the node ("Power up the nodes" on page 31).  
FlexibleLOM 1U node riser cage assembly option  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Do one of the following:
   o Remove the 1U left rear I/O blank (on page 38).
   o Remove the single-slot left PCI riser cage assembly (on page 48).
6. Remove the 1U right rear I/O blank (on page 39).
7. Remove the PCI blank.
8. Install the FlexibleLOM adapter.
9. Install the FlexibleLOM riser cage assembly.
10. Do one of the following:
   o Install the 1U left rear I/O blank (on page 38).
   o Install the single-slot left PCI riser cage assembly ("Single-slot left PCI riser cage assembly option" on page 85).
CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all  
PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots  
have either an expansion slot cover or an expansion board installed.  
IMPORTANT: If the PCIe riser cage assembly is not seated properly, then the server does not  
power up.  
11. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
12. Connect all peripheral cables to the nodes.  
13. Power up the node ("Power up the nodes" on page 31).  
FlexibleLOM 2U node riser cage assembly option  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Remove the 2U rear I/O blank ("Remove the 2U rear I/O blank" on page 40).
6. Remove the PCI blank.
7. Install the FlexibleLOM adapter.
8. Do the following:
a. Install the single-slot 2U node PCI riser cage assembly and secure it with two T-10 screws.  
b. Install the FlexibleLOM 2U node riser cage assembly and secure it with five T-10 screws.  
CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all  
PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots  
have either an expansion slot cover or an expansion board installed.  
IMPORTANT: If the PCIe riser cage assembly is not seated properly, then the server does not  
power up.  
9.  
Install the node into the chassis ("Installing a node into the chassis" on page 60).  
10. Connect all peripheral cables to the nodes.  
11. Power up the node ("Power up the nodes" on page 31).  
Three-slot PCI riser cage assembly options  
NOTE: The three-slot PCI riser cage assembly and the three-slot GPU-direct PCI riser cage assembly share the same riser cage but have a different riser board. For more information on the riser board slot specifications, see "PCIe riser board slot definitions (on page 27)."
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Remove the 2U rear I/O blank (on page 40).
6. Install the single-slot left PCI riser cage assembly and then secure it with two T-10 screws.
7. If installing an expansion board, do the following:
   a. Remove the riser cage bracket.
   b. Select the appropriate PCIe slot and remove any PCI blanks.
8. Install any optional expansion boards.
9. Connect all necessary internal cables to the expansion board. For more information on these cabling requirements, see the documentation that ships with the option.
10. Install the riser cage bracket.  
11. Install the three-slot riser cage assembly and then secure it with six T-10 screws.  
CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all  
PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots  
have either an expansion slot cover or an expansion board installed.  
IMPORTANT: If the PCIe riser cage assembly is not seated properly, then the server does not  
power up.  
12. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
13. Connect all peripheral cables to the nodes.  
14. Power up the node ("Power up the nodes" on page 31).  
Controller options  
The node ships with an embedded HP Dynamic Smart Array B140i Controller. For more information about  
the controller and its features, see the HP Dynamic Smart Array B140i RAID Controller User Guide on the HP website.
Upgrade options exist for an integrated array controller. For a list of supported options, see the product  
QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To configure arrays, see the HP Smart Storage Administrator User Guide on the HP website.
The node supports FBWC. FBWC consists of a cache module and an HP Smart Storage Battery Pack. The  
DDR cache module buffers and stores data being written by an integrated Gen9 P-series Smart Array  
Controller.  
CAUTION: The cache module connector does not use the industry-standard DDR3 mini-DIMMs.  
Do not use the controller with cache modules designed for other controller models, because the  
controller can malfunction and you can lose data. Also, do not transfer this cache module to an  
unsupported controller model, because you can lose data.  
CAUTION: To prevent a node malfunction or damage to the equipment, do not add or remove  
the battery pack while an array capacity expansion, RAID level migration, or stripe size migration  
is in progress.  
CAUTION: After the node is powered down, wait for 30 seconds, and then check the amber LED  
before unplugging the cable from the cache module. If the amber LED flashes after 30 seconds,  
do not remove the cable from the cache module. The cache module is backing up data. Data will  
be lost if the cable is detached when the amber LED is still flashing.  
Storage controller installation guidelines  
To maintain optimal thermal conditions and efficiency, HP recommends the following guidelines:  
Install one storage controller per node.  
Install the HP H240 host bus adapter in the single-slot left PCI riser cage assembly or the single-slot 2U  
node PCI riser cage assembly.  
Install the HP P440 Smart Array controller in the single-slot left PCI riser cage assembly or the single-slot  
2U node PCI riser cage assembly.  
Install the HP P840 Smart Array controller in slot 2 of the FlexibleLOM 2U node riser cage assembly or  
slot 3 of a three-slot PCI riser cage assembly.  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
Installing the storage controller and FBWC module options  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. Open the latch on the controller.
6. Connect the cache module backup power cable to the module.
7. Install the cache module on the storage controller.
8. Remove the PCI riser cage ("Remove the PCI riser cage assembly" on page 48).
9. Select the appropriate PCIe slot and remove any PCI blanks.
10. If you installed a cache module on the storage controller, connect the cache module backup power  
cable to the riser board ("FBWC module cabling" on page 140).  
11. Install the storage controller into the riser cage assembly and secure it to the riser cage with one T-10  
screw.  
12. Connect all necessary internal cables to the storage controller. For internal drive cabling information,  
see "Storage cabling (on page 137)."  
13. Install the PCI riser cage ("PCI riser cage assembly options" on page 84).  
14. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
15. Connect all peripheral cables to the nodes.  
16. Power up the node ("Power up the nodes" on page 31).  
For more information about the integrated storage controller and its features, select the relevant user  
documentation on the HP website (http://www.hp.com/go/smartstorage/docs).  
To configure arrays, see the HP Smart Storage Administrator User Guide on the HP website.
Installing the HP Smart Storage Battery  
To install the component:  
1. Power down the system (on page 31).
2. Disconnect all peripheral cables from the nodes and chassis.
3. Remove all nodes from the chassis ("Remove the node from the chassis" on page 32).
4. If installed, remove the security bezel (on page 35).
5. Remove all drives ("Removing the drive" on page 35).
6. If installed, remove the RCM module (on page 33).
7. Remove all power supplies ("Remove the power supply" on page 33).
8. Remove the chassis from the rack (on page 34).
9. Remove the access panel ("Remove the chassis access panel" on page 36).
10. Remove the HP Smart Storage Battery holder.  
11. Route the cable through the holder and install the HP Smart Storage Battery.
12. Connect the HP Smart Storage Battery cable to the power distribution board.
13. Install the HP Smart Storage Battery holder into the chassis.  
14. Install the access panel ("Install the chassis access panel" on page 37).  
15. Install the chassis into the rack ("Installing the chassis into the rack" on page 59).  
16. Install all nodes, drives and power supplies ("Chassis component installation" on page 60).  
17. If removed, install the security bezel ("Security bezel option" on page 64).  
18. If removed, install the RCM module ("Rack control management (RCM) module" on page 67).
19. Connect all peripheral cables to the nodes and chassis.  
20. Power up the nodes (on page 31).  
Graphic card options  
Graphic card/coprocessor power setting switch  
Before installing a graphic card/coprocessor option, set the graphic card/coprocessor power setting switch  
to the correct settings based on the power consumption of the graphic card/coprocessor. The switch is  
located on the 2U bayonet board.  
Switches 1 and 2 correspond to graphic card 1/coprocessor 1  
Switches 3 and 4 correspond to graphic card 2/coprocessor 2  
Item                                  Switch   150W   225W/235W   300W   No graphic card/coprocessor installed (default)
1 - First graphic card/coprocessor    1        OFF    ON          ON     OFF
                                      2        ON     OFF         ON     OFF
2 - Second graphic card/coprocessor   3        OFF    ON          ON     OFF
                                      4        ON     OFF         ON     OFF
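The switch positions in the table can be expressed as a simple lookup. The following Python sketch mirrors the table above and is illustrative only (the dictionary and function names are assumptions, not HP tooling):

# Illustrative sketch: switch positions for each graphic card/coprocessor
# power rating, mirroring the table above. Switches 1/2 control card 1 and
# switches 3/4 control card 2.
SWITCH_POSITIONS = {                 # (first switch, second switch)
    "150W": ("OFF", "ON"),
    "225W/235W": ("ON", "OFF"),
    "300W": ("ON", "ON"),
    "none (default)": ("OFF", "OFF"),
}

def switch_settings(card_number, power_rating):
    """Return {switch number: position} for card 1 (switches 1-2) or card 2 (3-4)."""
    first_switch = 1 if card_number == 1 else 3
    positions = SWITCH_POSITIONS[power_rating]
    return {first_switch: positions[0], first_switch + 1: positions[1]}

print(switch_settings(1, "300W"))   # {1: 'ON', 2: 'ON'}
print(switch_settings(2, "150W"))   # {3: 'OFF', 4: 'ON'}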
Single graphic card/ coprocessor power cable option  
This power cable is for use in the FlexibleLOM 2U node riser cage assembly only.  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Remove the FlexibleLOM 2U node riser cage assembly (on page 52).
6. Set the graphic card power setting switch to the correct settings based on the power consumption of the graphic card.
For more information, see the documentation that ships with the graphic card option.
7. If installing a half-height graphic card, remove the middle PCI blank only.
8. If installing a full-height graphic card, remove the middle and top PCI blanks.
9. Connect the power cable to the connector on the riser board.
10. Install the graphic card into the PCI riser cage assembly.  
11. If installing an NVIDIA Tesla K40 GPU, connect the 2-pin graphic card adapter cable to the graphic  
card and the riser board.  
12. Connect the power cable to the graphic card.  
13. Install the FlexibleLOM 2U node riser cage assembly and then secure it with five T-10 screws.  
14. Connect the power cable to the bayonet board.  
15. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
16. Connect all peripheral cables to the nodes.  
17. Power up the node ("Power up the nodes" on page 31).  
Dual graphic card/ coprocessor power cable option  
This power cable is for use in the three-slot PCI riser cage assembly and three-slot GPU-direct PCI riser cage  
assembly.  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Remove the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assemblies" on page 53).
6. Set the graphic card power setting switch to the correct settings based on the power consumption of the graphic card.
For more information, see the documentation that ships with the graphic card option.
7. Remove the riser cage bracket.
8. If installing a half-height graphic card, remove the middle PCI blank only.
9. If installing a full-height graphic card, remove the middle and top PCI blanks.
10. Turn the riser cage assembly over and lay it along the right side of the node.  
11. Connect the power cable to the first graphic card.  
12. Install the first graphic card in the front of the riser cage assembly.  
13. Install the second graphic card into the rear of the riser cage assembly.  
14. Connect the power cable to the second graphic card.  
15. If installing two NVIDIA Tesla K40 GPUs, connect the 2-pin graphic card adapter cables to the graphic  
cards and the riser board.  
16. Connect the power cable to the bayonet board.  
17. Install the riser cage blank.  
18. Install the three-slot riser cage assembly and then secure it with six T-10 screws ("Three-slot PCI riser cage assembly options" on page 93).
19. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
20. Connect all peripheral cables to the nodes.  
21. Power up the node ("Power up the nodes" on page 31).  
NVIDIA Tesla K40 12GB Module Enablement Kit  
The enablement kit is for the following configurations:  
Installing one K40 12GB module in the FlexibleLOM 2U node PCI riser cage assembly (on page 107)  
Installing two K40 12GB modules in a three-slot PCI riser cage assembly or three-slot GPU-direct PCI  
riser cage assembly (on page 110)  
Installing one K40 12GB module in the FlexibleLOM 2U node PCI riser cage  
assembly  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Remove the FlexibleLOM 2U node PCI riser cage assembly ("FlexibleLOM 2U node riser cage assembly" on page 52).
6. Set the graphic card/coprocessor power setting switch to the correct settings (225W/235W) based on the power consumption of the graphic card/coprocessor ("Graphic card/coprocessor power setting switch").
For more information, see the documentation that ships with the graphic card/coprocessor option.
7. Remove the two top PCI blanks from the riser cage assembly.
8. Connect the single graphic card/coprocessor power cable to the connector on the riser board.
9. Install the graphic card into the PCI riser cage assembly.
10. Connect the power cable to the graphic card.  
11. Connect the 2-pin graphic card adapter cable.  
12. Install the FlexibleLOM 2U node riser cage assembly and then secure it with five T-10 screws.  
13. Connect the power cable to the bayonet board.  
14. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
15. Connect all peripheral cables to the nodes.  
16. Power up the node ("Power up the nodes" on page 31).  
Installing two K40 12GB modules in a three-slot PCI riser cage assembly or three-slot  
GPU-direct PCI riser cage assembly  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Remove the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assemblies" on page 53).
6. Set the graphic card/coprocessor power setting switch to the correct settings (225W/235W) based on the power consumption of the graphic card/coprocessor.
For more information, see the documentation that ships with the graphic card/coprocessor option.
7. Remove the riser cage bracket.
8. Remove the two top PCI blanks from the riser cage assembly.
9. Turn the riser cage assembly over and lay it along the right side of the node.
10. Remove the existing rear support brackets from the first and second graphic cards.  
11. Install the rear support bracket onto the first graphic card.  
12. Install the first graphic card into the front of the PCI riser cage assembly.  
13. Connect the power cable to the first graphic card.  
14. Remove the existing front I/O bracket from the second graphic card.  
15. Install the rear and front support brackets onto the second graphic card:  
a. Secure the rear support bracket with three T-10 screws.  
b. Secure the front support bracket with three M2.5 screws.  
16. Install the second graphic card.  
17. Connect the dual graphic card/coprocessor power cable to the graphic cards.  
18. Connect the 2-pin graphic card adapter cables to the graphic cards and the riser board.  
19. Install the riser cage bracket.  
20. Connect the power cable to the bayonet board.  
21. Install the three-slot riser cage assembly and then secure it with six T-10 screws ("Three-slot PCI riser cage assemblies").
22. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
23. Connect all peripheral cables to the nodes.  
24. Power up the node ("Power up the nodes" on page 31).  
Intel Coprocessor Enablement Kit  
The enablement kit is for the following configurations:  
Installing one Intel coprocessor in the FlexibleLOM 2U node PCI riser cage assembly (on page 115)  
Installing two Intel coprocessors in a three-slot PCI riser cage assembly or a three-slot GPU-direct PCI riser cage assembly
Installing one Intel coprocessor in the FlexibleLOM 2U node PCI riser cage assembly  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Remove the FlexibleLOM 2U node PCI riser cage assembly ("FlexibleLOM 2U node riser cage assembly").
6. Set the graphic card/coprocessor power setting switch to the correct settings based on the power consumption of the coprocessor.
   For more information, see the documentation that ships with the coprocessor option.
7. Remove the two top PCI blanks from the riser cage assembly.
8. Connect the single graphic card power cable to the connector on the riser board.
9. Connect the power cable to the coprocessor.
10. Install the coprocessor into the PCI riser cage assembly.
11. Install the FlexibleLOM 2U node riser cage assembly and then secure it with five T-10 screws  
12. Connect the power cable to the bayonet board.  
13. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
14. Connect all peripheral cables to the nodes.  
15. Power up the node ("Power up the nodes" on page 31).  
Installing two Intel coprocessors in a three-slot PCI riser cage assembly  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Remove the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assemblies" on page 53).
6. Set the graphic card/coprocessor power setting switch to the correct settings based on the power consumption of the coprocessor.
   For more information, see the documentation that ships with the coprocessor option.
7. Remove the riser cage bracket.
8. Remove the two top PCI blanks from the riser cage assembly.
9. Turn the riser cage assembly over and lay it along the right side of the node.
10. Remove the existing rear support brackets from the first and second coprocessors.  
11. Install one support bracket onto the rear of the first coprocessor.  
12. Install the first coprocessor into the front of the PCI riser cage assembly.  
13. Connect the dual graphic card/coprocessor power cable to the first coprocessor.  
14. Remove the existing front I/O bracket from the second coprocessor.  
15. Install two support brackets onto the second coprocessor.
16. Install the second coprocessor into the PCI riser cage assembly.  
17. Connect the dual graphic card power cable to the second coprocessor.  
18. Install the riser cage bracket.  
19. Connect the power cable to the bayonet board.  
20. Install the three-slot riser cage assembly and then secure it with six T-10 screws ("Three-slot PCI riser cage assemblies").
21. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
22. Connect all peripheral cables to the nodes.  
23. Power up the node ("Power up the nodes" on page 31).  
M.2 SATA SSD enablement board  
The M.2 SATA SSD enablement board can only be installed on the single-slot left PCI riser cage assembly  
and the single-slot 2U node PCI riser cage assembly.  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. Do one of the following:
   a. Remove the single-slot left PCI riser cage assembly (on page 48).
   b. Remove the single-slot 2U node PCI riser cage assembly (on page 51).
6. If installed, remove the storage controller.
7. Install the enablement board on the PCI riser cage assembly, and then secure it with a T-15 screw.
   o Single-slot left PCI riser cage assembly
   o Single-slot 2U node PCI riser cage assembly
8. If removed, install the storage controller.
9. Install any removed PCI riser cage assemblies ("PCI riser cage assembly options" on page 84).
10. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
11. Connect all peripheral cables to the nodes.  
12. Power up the node ("Power up the nodes" on page 31).  
Processor and heatsink  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the  
internal system components to cool before touching them.  
CAUTION: To avoid damage to the processor and system board, only authorized personnel  
should attempt to replace or install the processor in this node.  
CAUTION: To prevent possible node malfunction and damage to the equipment, multiprocessor  
configurations must contain processors with the same part number.  
CAUTION: The heatsink thermal interface media is not reusable and must be replaced if the  
heatsink is removed from the processor after it has been installed.  
IMPORTANT: Processor socket 1 must be populated at all times or the node does not function.  
5. If installed in a 2U node, remove the FlexibleLOM 2U node riser cage assembly ("FlexibleLOM 2U node riser cage assembly").
6. If installed in a 2U node, remove the three-slot PCI riser cage assembly ("Three-slot PCI riser cage assemblies").
7. Remove the air baffle (on page 41).
8. Open each of the processor locking levers in the order indicated in the following illustration, and then open the processor retaining bracket.
9. Remove the clear processor socket cover. Retain the processor socket cover for future use.
CAUTION: THE PINS ON THE SYSTEM BOARD ARE VERY FRAGILE AND EASILY DAMAGED. To  
avoid damage to the system board, do not touch the processor or the processor socket contacts.  
10. Install the processor. Verify that the processor is fully seated in the processor retaining bracket by  
visually inspecting the processor installation guides on either side of the processor. THE PINS ON THE  
SYSTEM BOARD ARE VERY FRAGILE AND EASILY DAMAGED.  
11. Close the processor retaining bracket. When the processor is installed properly inside the processor  
retaining bracket, the processor retaining bracket clears the flange on the front of the socket.  
CAUTION: Do not press down on the processor. Pressing down on the processor may cause  
damage to the processor socket and the system board. Press only in the area indicated on the  
processor retaining bracket.  
CAUTION: Close and hold down the processor cover socket while closing the processor locking  
levers. The levers should close without resistance. Forcing the levers closed can damage the  
processor and socket, requiring system board replacement.  
12. Press and hold the processor retaining bracket in place, and then close each processor locking lever.  
Press only in the area indicated on the processor retaining bracket.  
CAUTION: Always use a new heatsink when replacing processors. Failure to use new  
components can cause damage to the processor.  
13. Remove the thermal interface protective cover from the heatsink.  
CAUTION: Heatsink retaining screws should be tightened or loosened in diagonally opposite  
pairs (in an "X" pattern). Do not overtighten the screws as this can damage the board, connectors,  
or screws. Use the wrench supplied with the system to reduce the possibility of overtightening the  
screws.  
14. Install the heatsink:  
a. Position the heatsink on the processor backplate.  
b. Tighten one pair of diagonally opposite screws halfway, and then tighten the other pair of screws.  
c. Finish the installation by completely tightening the screws in the same sequence.  
15. Install the air baffle (on page 42).  
16. Install any removed PCI riser cage assemblies ("PCI riser cage assembly options" on page 84).  
17. Install the node into the chassis ("Installing a node into the chassis" on page 60).  
18. Connect all peripheral cables to the nodes.  
19. Power up the node ("Power up the nodes" on page 31).  
Dedicated iLO management port module option  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
To install the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the server node from the chassis ("Remove the node from the chassis" on page 32).
4. Place the node on a flat, level surface.
5. Remove any installed PCI riser cage assemblies ("Remove the PCI riser cage assembly" on page 48).
6. Remove all rear I/O blanks:
   o Remove the 1U left rear I/O blank (on page 38).
   o Remove the 1U right rear I/O blank (on page 39).
   o Remove the 2U rear I/O blank (on page 40).
7. Remove the knockout.
   a. Insert a flat screwdriver into the knockout.
   b. Twist and pull to remove the knockout from the node.
8. Install the dedicated iLO management port card into the node.
9. If removed, install all rear I/O blanks:
   o Install the 1U left rear I/O blank (on page 38)
   o Install the 1U right rear I/O blank (on page 40)
   o Install the 2U rear I/O blank ("Install the 2U node rear I/O blank" on page 41)
10. Install any removed PCI riser cage assemblies ("PCI riser cage assembly options" on page 84).  
11. Install the node into the chassis.  
12. Connect all peripheral cables to the nodes.  
13. Power up the node ("Power up the nodes" on page 31).  
Enabling the dedicated iLO management module  
To enable the dedicated iLO management module:  
1. During the server startup sequence after installing the module, press F9 in the POST screen.
   The System Utilities screen appears.
2. Select System Configuration > iLO 4 Configuration Utility.
   The iLO 4 Configuration Utility screen appears.
3. Select Network Options, and then press Enter.
   The Network Options screen appears.
4. Set the Network Interface Adapter field to ON, and then press Enter.
5. Press F10 to save your changes.
   A message prompt to confirm the iLO settings reset appears.
6. Press Enter to reboot the iLO settings.
7. Press Esc until the main menu is displayed.
8. Select Reboot the System to exit the utility and resume the boot process.
The IP address of the enabled dedicated iLO connector appears on the POST screen on the subsequent boot-up. Access the Network Options screen again to view this IP address for later reference.
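If you script node provisioning, you can confirm that the newly enabled dedicated iLO connector is reachable once its IP address is known. The short Python sketch below simply opens a TCP connection to the iLO HTTPS port; the address shown is a placeholder rather than a value from this guide, and the check assumes your management network allows direct access to port 443.

   # Minimal reachability check for a newly enabled dedicated iLO port.
   # The IP address below is a placeholder; substitute the address shown
   # on the POST screen or in the Network Options screen.
   import socket

   ILO_ADDRESS = "192.0.2.10"   # example/documentation address, not from this guide
   ILO_HTTPS_PORT = 443

   def ilo_reachable(host, port, timeout=5.0):
       """Return True if a TCP connection to the iLO web server succeeds."""
       try:
           with socket.create_connection((host, port), timeout=timeout):
               return True
       except OSError:
           return False

   if __name__ == "__main__":
       state = "reachable" if ilo_reachable(ILO_ADDRESS, ILO_HTTPS_PORT) else "not reachable"
       print("Dedicated iLO port at %s is %s" % (ILO_ADDRESS, state))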
HP Trusted Platform Module option  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
Use these instructions to install and enable a TPM on a supported node. This procedure includes three  
sections:  
1. Installing the Trusted Platform Module board (on page 130).
2. Retaining the recovery key/password (on page 131).
3. Enabling the Trusted Platform Module (on page 131).
Enabling the TPM requires accessing BIOS/Platform Configuration (RBSU) in HP UEFI System Utilities  
(on page 149).  
TPM installation requires the use of drive encryption technology, such as the Microsoft Windows BitLocker  
Drive Encryption feature. For more information on BitLocker, see the Microsoft website  
CAUTION: Always observe the guidelines in this document. Failure to follow these guidelines  
can cause hardware damage or halt data access.  
When installing or replacing a TPM, observe the following guidelines:  
Do not remove an installed TPM. Once installed, the TPM becomes a permanent part of the system  
board.  
When installing or replacing hardware, HP service providers cannot enable the TPM or the encryption  
technology. For security reasons, only the customer can enable these features.  
When returning a system board for service replacement, do not remove the TPM from the system board.  
When requested, HP Service provides a TPM with the spare system board.  
Any attempt to remove an installed TPM from the system board breaks or disfigures the TPM security  
rivet. Upon locating a broken or disfigured rivet on an installed TPM, administrators should consider the  
system compromised and take appropriate measures to ensure the integrity of the system data.  
When using BitLocker, always retain the recovery key/password. The recovery key/password is  
required to enter Recovery Mode after BitLocker detects a possible compromise of system integrity.  
HP is not liable for blocked data access caused by improper TPM use. For operating instructions, see the  
encryption technology feature documentation provided by the operating system.  
Installing the Trusted Platform Module board  
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the  
internal system components to cool before touching them.  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. Remove any installed PCI riser cage assemblies ("Remove the PCI riser cage assembly" on page 48).
CAUTION: Any attempt to remove an installed TPM from the system board breaks or disfigures  
the TPM security rivet. Upon locating a broken or disfigured rivet on an installed TPM,  
administrators should consider the system compromised and take appropriate measures to ensure  
the integrity of the system data.  
6. Install the TPM board. Press down on the connector to seat the board ("System board components" on page 16).
7. Install the TPM security rivet by pressing the rivet firmly into the system board.
8. Install any removed PCI riser cage assemblies ("PCI riser cage assembly options" on page 84).
9. Install the node into the chassis.
10. Connect all peripheral cables to the nodes.  
11. Power up the node ("Power up the nodes" on page 31).  
Retaining the recovery key/password  
The recovery key/password is generated during BitLocker setup, and can be saved and printed after  
BitLocker is enabled. When using BitLocker, always retain the recovery key/password. The recovery  
key/password is required to enter Recovery Mode after BitLocker detects a possible compromise of system  
integrity.  
To help ensure maximum security, observe the following guidelines when retaining the recovery  
key/password:  
Always store the recovery key/password in multiple locations.  
Always store copies of the recovery key/password away from the node.  
Do not save the recovery key/password on the encrypted hard drive.  
Enabling the Trusted Platform Module  
1. During the node startup sequence, press the F9 key to access System Utilities.
2. From the System Utilities screen, select System Configuration > BIOS/Platform Configuration (RBSU) > Server Security.
3. Select Trusted Platform Module Options and press the Enter key.
4. Select Enabled to enable the TPM and BIOS secure startup. The TPM is fully functional in this mode.
5. Press the F10 key to save your selection.
6. When prompted to save the change in System Utilities, press the Y key.
7. Press the ESC key to exit System Utilities. Then, press the Enter key when prompted to reboot the node.
The node then reboots a second time without user input. During this reboot, the TPM setting becomes  
effective.  
You can now enable TPM functionality in the OS, such as Microsoft Windows BitLocker or measured boot.  
CAUTION: When a TPM is installed and enabled on the node, data access is locked if you fail  
to follow the proper procedures for updating the system or option firmware, replacing the system  
board, replacing a hard drive, or modifying OS application TPM settings.  
For more information on firmware updates and hardware procedures, see the HP Trusted Platform Module  
Best Practices White Paper on the HP website (http://www.hp.com/support).  
For more information on adjusting TPM usage in BitLocker, see the Microsoft website  
Cabling  
Chassis cabling  
Front I/O cabling  
Item  Description
1     Left front I/O cable
2     Right front I/O cable
Drive backplane power cabling  
HP Apollo r2600 Chassis  
Item  Description
1     Power cable for Node 1 and Node 2
2     Power cable for drives
3     Power cable for Node 3 and Node 4
4     PDB pass-through cable
HP Apollo r2200 Chassis  
Item  Description
1     Power cable for Node 1 and Node 2
2     Power cable for drives
3     Power cable for Node 3 and Node 4
4     PDB pass-through cable
RCM 2.0 cabling  
Fan power cabling  
Fan cabling  
Item  Description
1     Fan 1 cable
2     Fan 2 cable
3     Fan 3 cable
4     Fan 4 cable
5     Fan 5 cable
6     Fan 6 cable
7     Fan 7 cable
8     Fan 8 cable
HP Smart Storage Battery cabling  
Node cabling  
Storage cabling  
B140i 1U node SATA cabling  
B140i 2U node SATA cabling  
Mini-SAS H240 1U node cabling  
Mini-SAS H240 2U node cabling  
Mini-SAS P440 2U node cabling  
Mini-SAS P440/P840 cabling  
HP P440 Smart Array controller installed in a 1U node  
HP P840 Smart Array controller installed in a 2U node  
Graphic card/coprocessor cabling
NOTE: Graphic card/coprocessor cabling may vary slightly depending on the type of graphic card/coprocessor installed.
Single graphic card/coprocessor power cabling
Dual graphic card/coprocessor power cabling
2-pin graphic card adapter cabling (for NVIDIA K40 GPUs only)  
FlexibleLOM 2U node riser cage assembly  
Three-slot PCI riser cage assembly and three-slot GPU-direct PCI riser cage assembly  
FBWC module cabling  
The FBWC solution is a separately purchased option. This node only supports FBWC module installation  
when an HP Smart Array P-Series controller is installed.  
Depending on the controller option installed, the actual storage controller connectors might look different  
from what is shown in this section.  
HP P440 Smart Array controller in a single-slot left PCI riser cage assembly  
HP P440 Smart Array controller in a single-slot 2U node PCI riser cage assembly  
HP P840 Smart Array controller in a FlexibleLOM 2U node riser cage assembly  
HP P840 Smart Array controller in a three-slot PCI riser cage assembly  
HP P840 Smart Array controller in a three-slot GPU-direct PCI riser cage assembly  
Software and configuration utilities  
Server mode  
The software and configuration utilities presented in this section operate in online mode, offline mode, or in  
both modes.  
Software or configuration utility                      Server mode
HP iLO (on page 143)                                   Online and Offline
Active Health System (on page 144)                     Online and Offline
HP RESTful API support for HP iLO (on page 145)        Online and Offline
Integrated Management Log (on page 145)                Online and Offline
HP Insight Remote Support (on page 146)                Online
HP Insight Online (on page 146)                        Online
Intelligent Provisioning (on page 146)                 Offline
HP Insight Diagnostics (on page 147)                   Online and Offline
Erase Utility (on page 147)                            Offline
Scripting Toolkit for Windows and Linux (on page 148)  Online
HP Service Pack for ProLiant (on page 148)             Online and Offline
HP Smart Update Manager (on page 148)                  Online and Offline
HP UEFI System Utilities (on page 149)                 Offline
HP Smart Storage Administrator (on page 152)           Online and Offline
FWUPDATE utility (on page 154)                         Offline
Product QuickSpecs  
For more information about product features, specifications, options, configurations, and compatibility, see  
the product QuickSpecs on the HP website (http://www.hp.com/go/qs).  
HP iLO  
The iLO subsystem is a standard component of HP ProLiant servers that simplifies initial node setup, server  
health monitoring, power and thermal optimization, and remote server administration. The iLO subsystem  
includes an intelligent microprocessor, secure memory, and a dedicated network interface. This design  
makes iLO independent of the host server and its operating system.  
iLO enables and manages the Active Health System (on page 144) and also features Agentless  
Management. All key internal subsystems are monitored by iLO. If enabled, SNMP alerts are sent directly by  
iLO regardless of the host operating system or even if no host operating system is installed.  
Embedded remote support software is available on HP ProLiant Gen8 and later servers with iLO 4, regardless  
of the operating system software and without installing OS agents on the server.  
Using iLO, you can do the following:  
Access a high-performance and secure Integrated Remote Console to the server from anywhere in the  
world if you have a network connection to the server.  
Use the shared .NET Integrated Remote Console to collaborate with up to four server administrators.  
Remotely mount high-performance Virtual Media devices to the node.  
Securely and remotely control the power state of the managed node.  
Implement true Agentless Management with SNMP alerts from HP iLO, regardless of the state of the host  
server.  
Download the Active Health System log.  
Register for HP Insight Remote Support.  
Use iLO Federation to manage multiple servers from one system running the iLO web interface.  
Use Virtual Power and Virtual Media from the GUI, the CLI, or the iLO scripting toolkit for many tasks,  
including the automation of deployment and provisioning.  
Control iLO by using a remote management tool.  
For more information about iLO features, see the iLO documentation on the HP website (http://www.hp.com/go/ilo/docs).
The HP iLO 4 hardware and firmware features and functionality, such as NAND size and embedded user  
partition, vary depending on the node model. For a complete list of supported features and functionality, see  
the HP iLO 4 QuickSpecs on the HP website  
Active Health System  
HP Active Health System provides the following features:  
Combined diagnostics tools/scanners  
Always on, continuous monitoring for increased stability and shorter downtimes  
Rich configuration history  
Health and service alerts  
Easy export and upload to Service and Support  
The HP Active Health System monitors and records changes in the server hardware and system configuration.  
The Active Health System assists in diagnosing problems and delivering rapid resolution if server failures  
occur.  
The Active Health System collects the following types of data:  
Server model  
Serial number  
Processor model and speed  
Storage capacity and speed  
Memory capacity and speed  
Firmware/BIOS  
HP Active Health System does not collect information about Active Health System users' operations, finances,  
customers, employees, partners, or data center, such as IP addresses, host names, user names, and  
passwords. HP Active Health System does not parse or change operating system data from third-party error  
event log activities, such as content created or passed through by the operating system.  
The data that is collected is managed according to the HP Data Privacy policy. For more information, see the HP website.
The Active Health System, in conjunction with the system monitoring provided by Agentless Management or  
SNMP Pass-thru, provides continuous monitoring of hardware and configuration changes, system status, and  
service alerts for various server components.  
The Agentless Management Service is available in the SPP, which can be downloaded from the HP website  
(http://www.hp.com/go/spp/download). The Active Health System log can be downloaded manually from  
iLO or HP Intelligent Provisioning and sent to HP.  
For more information, see the following documents:  
HP iLO User Guide on the HP website (http://www.hp.com/go/ilo/docs)  
HP Intelligent Provisioning User Guide on the HP website  
HP RESTful API support for HP iLO  
HP iLO 4 firmware version 2.00 and later includes the HP RESTful API. The HP RESTful API is a management  
interface that server management tools can use to perform configuration, inventory, and monitoring of an HP  
ProLiant server via iLO. A REST client sends HTTPS operations to the iLO web server to GET and PATCH  
JSON-formatted data, and to configure supported iLO and server settings, such as the UEFI BIOS settings.  
HP iLO 4 supports the HP RESTful API with HP ProLiant Gen8 and later servers. For more information about  
the HP RESTful API, see the HP website (http://www.hp.com/support/restfulinterface/docs).  
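As an illustration of the interface described above, the following Python sketch issues a single HTTPS GET against an iLO 4 system resource and prints a few fields from the returned JSON. It assumes the third-party requests library is installed; the hostname, credentials, and the /rest/v1/Systems/1 resource path are examples to adapt to your environment, and certificate verification is disabled here only because many iLO processors ship with self-signed certificates.

   # Read basic system information from iLO 4 via the HP RESTful API.
   # Hostname and credentials are placeholders for this sketch.
   import requests

   ILO_HOST = "ilo-example.local"          # placeholder iLO hostname or IP
   AUTH = ("Administrator", "password")    # placeholder credentials

   resp = requests.get(
       "https://%s/rest/v1/Systems/1" % ILO_HOST,
       auth=AUTH,
       verify=False,   # many iLOs use self-signed certificates; enable verification in production
       timeout=30,
   )
   resp.raise_for_status()
   system = resp.json()

   # Print a few commonly present properties; exact keys can vary by firmware version.
   for key in ("Model", "SerialNumber", "Power"):
       print(key, ":", system.get(key))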
Integrated Management Log  
The IML records hundreds of events and stores them in an easy-to-view form. The IML timestamps each event  
with 1-minute granularity.  
You can view recorded events in the IML in several ways, including the following:  
From within HP SIM  
From within HP UEFI System Utilities (on page 149)  
From within the Embedded UEFI shell (on page 151)  
From within operating system-specific IML viewers:  
o For Windows: IML Viewer
o For Linux: IML Viewer Application
From within the iLO web interface  
From within HP Insight Diagnostics (on page 147)  
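In addition to the viewers listed above, the IML can be read programmatically. The Python sketch below reuses the HP RESTful API described earlier in this section; treat the /rest/v1/Systems/1/LogServices/IML/Entries path as an assumption to verify against your iLO firmware's resource tree, and the hostname and credentials as placeholders.

   # List Integrated Management Log entries through the iLO RESTful API.
   # The resource path and credentials below are assumptions/placeholders.
   import requests

   ILO_HOST = "ilo-example.local"
   AUTH = ("Administrator", "password")
   IML_PATH = "/rest/v1/Systems/1/LogServices/IML/Entries"   # verify against your iLO resource tree

   resp = requests.get("https://%s%s" % (ILO_HOST, IML_PATH),
                       auth=AUTH, verify=False, timeout=30)
   resp.raise_for_status()

   for entry in resp.json().get("Items", []):
       # Severity, Created, and Message are typical IML entry properties.
       print(entry.get("Severity"), entry.get("Created"), entry.get("Message"))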
HP Insight Remote Support  
HP strongly recommends that you register your device for remote support to enable enhanced delivery of  
your HP Warranty, HP Care Pack Service, or HP contractual support agreement. HP Insight Remote Support  
supplements your monitoring continuously to ensure maximum system availability by providing intelligent  
event diagnosis, and automatic, secure submission of hardware event notifications to HP, which will initiate  
a fast and accurate resolution, based on your product’s service level. Notifications can be sent to your  
authorized HP Channel Partner for onsite service, if configured and available in your country.  
For more information, see HP Insight Remote Support and Insight Online Setup Guide for ProLiant Servers  
and BladeSystem c-Class Enclosures on the HP website  
(http://www.hp.com/go/insightremotesupport/docs). HP Insight Remote Support is available as part of HP  
Warranty, HP Care Pack Service, or HP contractual support agreement.  
HP Insight Remote Support central connect  
When you use the embedded Remote Support functionality with HP ProLiant Gen8 and later server models  
and HP BladeSystem c-Class enclosures, you can register a node or chassis to communicate to HP through an  
HP Insight Remote Support centralized Hosting Device in your local environment. All configuration and  
service event information is routed through the Hosting Device. This information can be viewed by using the  
local HP Insight Remote Support user interface or the web-based view in HP Insight Online.  
For more information, see HP Insight Remote Support Release Notes on the HP website  
HP Insight Online direct connect  
When you use the embedded Remote Support functionality with HP ProLiant Gen8 and later server models  
and HP BladeSystem c-Class enclosures, you can register a node or chassis to communicate directly to HP  
Insight Online without the need to set up an HP Insight Remote Support centralized Hosting Device in your  
local environment. HP Insight Online will be your primary interface for remote support information.  
For more information, see the product documentation on the HP website  
HP Insight Online  
HP Insight Online is a capability of the HP Support Center portal. Combined with HP Insight Remote Support  
central connect or HP Insight Online direct connect, it automatically aggregates device health, asset, and  
support information with contract and warranty information, and then secures it in a single, personalized  
dashboard that is viewable from anywhere at any time. The dashboard organizes your IT and service data  
to help you understand and respond to that information more quickly. With specific authorization from you,  
an authorized HP Channel Partner can also view your IT environment remotely using HP Insight Online.  
For more information about using HP Insight Online, see the HP Insight Online User’s Guide on the HP website.
Intelligent Provisioning  
Intelligent Provisioning is a single-server deployment tool embedded in HP ProLiant Gen8 and later servers  
that simplifies HP ProLiant server setup, providing a reliable and consistent way to deploy HP ProLiant server  
configurations:  
Intelligent Provisioning assists with the OS installation process by preparing the system for installing  
"off-the-shelf" and HP branded versions of operating system software and integrating optimized HP  
ProLiant server support software.  
Intelligent Provisioning provides maintenance-related tasks using the Perform Maintenance window.  
Intelligent Provisioning provides installation help for Microsoft Windows, Red Hat and SUSE Linux, and  
VMware operating systems. For specific OS support, see the HP Intelligent Provisioning Release Notes  
For more information about Intelligent Provisioning software, see the HP website  
(http://www.hp.com/go/intelligentprovisioning). For Intelligent Provisioning recovery media downloads,  
see the Resources tab on the HP website (http://www.hp.com/go/ilo). For consolidated drive and firmware  
update packages, see the HP Smart Update: Server Firmware and Driver Updates page on the HP website  
HP Insight Diagnostics  
HP Insight Diagnostics is a proactive node management tool, available in both offline and online versions,  
that provides diagnostics and troubleshooting capabilities to assist IT administrators who verify node  
installations, troubleshoot problems, and perform repair validation.  
HP Insight Diagnostics Offline Edition performs various in-depth system and component testing while the OS  
is not running. To run this utility, boot the node using Intelligent Provisioning (on page 146).  
HP Insight Diagnostics Online Edition is a web-based application that captures system configuration and  
other related data needed for effective node management. Available in Microsoft Windows and Linux  
versions, the utility helps to ensure proper system operation.  
For more information or to download the utility, see the HP website (http://www.hp.com/servers/diags). HP  
Insight Diagnostics Online Edition is also available in the SPP ("HP Service Pack for ProLiant" on page 148).  
HP Insight Diagnostics survey functionality  
HP Insight Diagnostics (on page 147) provides survey functionality that gathers critical hardware and  
software information on ProLiant nodes.  
This functionality supports operating systems that are supported by the node. For operating systems  
supported by the node, see the HP website (http://www.hp.com/go/supportos).  
If a significant change occurs between data-gathering intervals, the survey function marks the previous  
information and overwrites the survey data files to reflect the latest changes in the configuration.  
Survey functionality is installed with every Intelligent Provisioning-assisted HP Insight Diagnostics installation,  
or it can be installed through the SPP ("HP Service Pack for ProLiant" on page 148).  
Erase Utility  
CAUTION: Perform a backup before running the Erase Utility. The utility sets the system to its  
original factory state, deletes the current hardware configuration information, including array  
setup and disk partitioning, and erases all connected hard drives completely. Before using this  
utility, see the instructions in the HP Intelligent Provisioning User Guide.  
Use the Erase Utility to erase drives and Active Health System logs, and to reset UEFI System Utilities settings.  
Run the Erase Utility if you must erase the system for the following reasons:  
You want to install a new operating system on a node with an existing operating system.  
You encounter an error when completing the steps of a factory-installed operating system installation.  
To access the Erase Utility, click the Perform Maintenance icon from the Intelligent Provisioning home screen,  
and then select Erase.  
For more information about the Erase Utility, see the HP Intelligent Provisioning User Guide on the HP website  
Scripting Toolkit for Windows and Linux  
The Scripting Toolkit for Windows and Linux is a server deployment product that delivers an unattended  
automated installation for high-volume server deployments. The Scripting Toolkit is designed to support  
ProLiant BL, ML, DL, SL, and XL servers. The toolkit includes a modular set of utilities and important  
documentation that describes how to apply these tools to build an automated server deployment process.  
The Scripting Toolkit provides a flexible way to create standard server configuration scripts. These scripts are  
used to automate many of the manual steps in the server configuration process. This automated server  
configuration process cuts time from each deployment, making it possible to scale rapid, high-volume server  
deployments.  
For more information, and to download the Scripting Toolkit, see the HP website  
HP Service Pack for ProLiant  
SPP is a comprehensive systems software (drivers and firmware) solution delivered as a single package with  
major server releases. This solution uses HP SUM as the deployment tool and is tested on all supported HP  
ProLiant servers including HP ProLiant Gen8 and later servers.  
SPP can be used in an online mode on a Windows or Linux hosted operating system, or in an offline mode  
where the server is booted to an operating system included on the ISO file so that the server can be updated  
automatically with no user interaction or updated in interactive mode.  
For more information or to download SPP, see one of the following pages on the HP website:  
HP Service Pack for ProLiant download page (http://www.hp.com/go/spp)  
HP Smart Update: Server Firmware and Driver Updates page (http://www.hp.com/go/SmartUpdate)  
HP Smart Update Manager  
HP SUM is a product used to install and update firmware, drivers, and systems software on HP ProLiant  
servers. HP SUM provides a GUI and a command-line scriptable interface for deployment of systems software  
for single or one-to-many HP ProLiant servers and network-based targets, such as iLOs, OAs, and VC Ethernet  
and Fibre Channel modules.  
For more information about HP SUM, see the product page on the HP website  
To download HP SUM, see the HP website (http://www.hp.com/go/hpsum/download).  
To access the HP Smart Update Manager User Guide, see the HP SUM Information Library  
HP UEFI System Utilities  
The HP UEFI System Utilities is embedded in the system ROM. The UEFI System Utilities enable you to perform  
a wide range of configuration activities, including:  
Configuring system devices and installed options  
Enabling and disabling system features  
Displaying system information  
Selecting the primary boot controller  
Configuring memory options  
Selecting a language  
Launching other pre-boot environments such as the Embedded UEFI Shell and Intelligent Provisioning  
For more information on the HP UEFI System Utilities, see the HP UEFI System Utilities User Guide for HP  
ProLiant Gen9 Servers on the HP website (http://www.hp.com/go/ProLiantUEFI/docs).  
Scan the QR code located at the bottom of the screen to access mobile-ready online help for the UEFI System  
Utilities and UEFI Shell. For on-screen help, press F1.  
Using HP UEFI System Utilities  
To use the System Utilities, use the following keys.  
Action                                               Key
Access System Utilities                              F9 during server POST
Navigate menus                                       Up and Down arrows
Select items                                         Enter
Save selections                                      F10
Access Help for a highlighted configuration option*  F1
*Scan the QR code on the screen to access online help for the UEFI System Utilities and UEFI Shell.  
Default configuration settings are applied to the server at one of the following times:  
Upon the first system power-up  
After defaults have been restored  
Default configuration settings are sufficient for typical server operations; however, you can modify  
configuration settings as needed. The system prompts you for access to the System Utilities each time the  
system is powered up.  
Flexible boot control  
This feature enables you to do the following:  
Add Boot Options  
o Browse all FAT16 and FAT32 file systems.
o Select an X64 UEFI application with an .EFI extension to add as a new UEFI boot option, such as an OS boot loader or other UEFI application.
The new boot option is appended to the boot order list. When you select a file, you are prompted  
to enter the boot option description (which is then displayed in the Boot menu), as well as any  
optional data to be passed to an .EFI application.  
Boot to System Utilities  
After pre-POST, the boot options screen appears. During this time, you can access the System Utilities  
by pressing the F9 key.  
Choose between supported modes: Legacy BIOS Boot Mode or UEFI Boot Mode  
IMPORTANT: If the default boot mode settings are different than the user defined settings, the  
system may not boot the OS installation if the defaults are restored. To avoid this issue, use the  
User Defined Defaults feature in UEFI System Utilities to override the factory default settings.  
For more information, see the HP UEFI System Utilities User Guide for HP ProLiant Gen9 Servers on the HP website (http://www.hp.com/go/ProLiantUEFI/docs).
Restoring and customizing configuration settings  
You can reset all configuration settings to the factory default settings, or you can restore system default  
configuration settings, which are used instead of the factory default settings.  
You can also configure default settings as necessary, and then save the configuration as the custom default  
configuration. When the system loads the default settings, it uses the custom default settings instead of the  
factory defaults.  
Secure Boot configuration  
Secure Boot is integrated in the UEFI specification on which the HP implementation of UEFI is based. Secure  
Boot is completely implemented in the BIOS and does not require special hardware. It ensures that each  
component launched during the boot process is digitally signed and that the signature is validated against a  
set of trusted certificates embedded in the UEFI BIOS. Secure Boot validates the software identity of the  
following components in the boot process:  
UEFI drivers loaded from PCIe cards  
UEFI drivers loaded from mass storage devices  
Pre-boot UEFI shell applications  
OS UEFI boot loaders  
Once enabled, only firmware components and operating systems with boot loaders that have an appropriate  
digital signature can execute during the boot process. Only operating systems that support Secure Boot and  
have an EFI boot loader signed with one of the authorized keys can boot when Secure Boot is enabled. For  
more information about supported operating systems, see the HP UEFI System Utilities and Shell Release Notes.
A physically present user can customize the certificates embedded in the UEFI BIOS by adding/removing  
their own certificates.  
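The check that Secure Boot performs on each boot component can be pictured with the short, purely conceptual Python sketch below. It is not how the UEFI BIOS is implemented: real firmware validates X.509/Authenticode signatures, whereas this sketch stands in for that with an HMAC so the control flow (verify every component against the embedded trusted keys, refuse to execute anything unsigned or signed with an unknown key) is easy to follow.

   # Conceptual model of the Secure Boot decision, not the firmware implementation.
   # Real UEFI Secure Boot verifies X.509/Authenticode signatures; HMAC is used
   # here only as a stand-in for "signature made with a trusted key".
   import hmac
   import hashlib

   TRUSTED_KEYS = {b"platform-vendor-key", b"os-vendor-key"}   # models certificates embedded in the UEFI BIOS

   def sign(component: bytes, key: bytes) -> bytes:
       return hmac.new(key, component, hashlib.sha256).digest()

   def verify(component: bytes, signature: bytes) -> bool:
       """A component may execute only if its signature checks out against a trusted key."""
       return any(hmac.compare_digest(signature, sign(component, key)) for key in TRUSTED_KEYS)

   boot_components = [
       (b"uefi-driver-from-pcie-card", sign(b"uefi-driver-from-pcie-card", b"platform-vendor-key")),
       (b"os-boot-loader", sign(b"os-boot-loader", b"os-vendor-key")),
       (b"unsigned-shell-app", b"\x00" * 32),   # no valid signature: blocked when Secure Boot is enabled
   ]

   for component, signature in boot_components:
       status = "executed" if verify(component, signature) else "blocked"
       print(component.decode(), "->", status)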
Embedded UEFI shell  
The system BIOS in all HP ProLiant Gen9 servers includes an Embedded UEFI Shell in the ROM. The UEFI  
Shell environment provides an API, a command line prompt, and a set of CLIs that allow scripting, file  
manipulation, and system information. These features enhance the capabilities of the UEFI System Utilities.  
For more information, see the following documents:  
HP UEFI Shell User Guide for HP ProLiant Gen9 Servers on the HP website  
UEFI Shell Specification on the UEFI website (http://www.uefi.org/specifications)  
Embedded Diagnostics option  
The system BIOS in all HP ProLiant Gen9 servers includes an Embedded Diagnostics option in the ROM. The  
Embedded Diagnostics option can run comprehensive diagnostics of the server hardware, including  
processors, memory, drives, and other server components.  
For more information on the Embedded Diagnostics option, see the HP UEFI System Utilities User Guide for  
HP ProLiant Gen9 Servers on the HP website (http://www.hp.com/go/ProLiantUEFI/docs).  
HP RESTful API support for UEFI  
HP ProLiant Gen9 servers include support for a UEFI compliant System BIOS, along with UEFI System Utilities  
and Embedded UEFI Shell pre-boot environments. HP ProLiant Gen9 servers also support configuring the  
UEFI BIOS settings using the HP RESTful API, a management interface that server management tools can use  
to perform configuration, inventory, and monitoring of an HP ProLiant server. A REST client uses HTTPS  
operations to configure supported server settings, such as UEFI BIOS settings.  
For more information about the HP RESTful API and the HP RESTful Interface Tool, see the HP website (http://www.hp.com/support/restfulinterface/docs).
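To make the description above concrete, the Python sketch below sends a single PATCH to the pending BIOS settings resource. The /rest/v1/Systems/1/Bios/Settings path is the pending-settings location used by iLO 4 firmware that supports the HP RESTful API, the AdminName attribute is used here only as a harmless example, and the hostname and credentials are placeholders; staged BIOS changes take effect on the next reboot.

   # Stage a BIOS setting change through the HP RESTful API (applied on next reboot).
   # Host, credentials, and the attribute used are placeholders/assumptions for this sketch.
   import requests

   ILO_HOST = "ilo-example.local"
   AUTH = ("Administrator", "password")
   BIOS_SETTINGS_PATH = "/rest/v1/Systems/1/Bios/Settings"   # pending BIOS settings resource

   resp = requests.patch(
       "https://%s%s" % (ILO_HOST, BIOS_SETTINGS_PATH),
       json={"AdminName": "Example Admin"},   # example attribute; verify names against your BIOS resource
       auth=AUTH,
       verify=False,
       timeout=30,
   )
   resp.raise_for_status()
   print("PATCH accepted; the change is applied at the next reboot:", resp.status_code)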
Re-entering the server serial number and product ID  
After you replace the system board, you must re-enter the node serial number and the product ID.  
1. During the node startup sequence, press the F9 key to access UEFI System Utilities.
2. Select System Configuration > BIOS/Platform Configuration (RBSU) > Advanced Options > Advanced System ROM Options > Serial Number, and then press the Enter key.
3. Enter the serial number and press the Enter key. The following message appears:
   The serial number should only be modified by qualified service personnel. This value should always match the serial number located on the chassis.
4. Press the Enter key to clear the warning.
5. Enter the serial number and press the Enter key.
6. Select Product ID. The following warning appears:
   Warning: The Product ID should ONLY be modified by qualified service personnel. This value should always match the Product ID located on the chassis.
7. Enter the product ID and press the Enter key.
8. Press the F10 key to confirm exiting System Utilities. The node automatically reboots.
Utilities and features  
HP Smart Storage Administrator  
HP SSA is a configuration and management tool for HP Smart Array controllers. Starting with HP ProLiant  
Gen8 servers, HP SSA replaces ACU with an enhanced GUI and additional configuration features.  
HP SSA exists in three interface formats: the HP SSA GUI, the HP SSA CLI, and HP SSA Scripting. Although  
all formats provide support for configuration tasks, some of the advanced tasks are available in only one  
format.  
Some HP SSA features include the following:  
Supports online array capacity expansion, logical drive extension, assignment of online spares, and  
RAID or stripe size migration  
Suggests the optimal configuration for an unconfigured system  
Provides diagnostic and SmartSSD Wear Gauge functionality on the Diagnostics tab  
For supported controllers, provides access to additional features.  
For more information about HP SSA, see the HP website (http://www.hp.com/go/hpssa).  
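Alongside the GUI, the HP SSA CLI can be driven from scripts. The Python sketch below simply shells out to the hpssacli binary to list the current controller configuration; it assumes the CLI is installed and on the PATH, and the command form may differ between HP SSA releases, so treat it as a sketch rather than a supported interface.

   # Query Smart Array configuration through the HP SSA CLI from a script.
   # Assumes the hpssacli binary is installed and on PATH; adjust for your HP SSA release.
   import subprocess

   def show_controller_config():
       result = subprocess.run(
           ["hpssacli", "ctrl", "all", "show", "config"],
           capture_output=True, text=True, check=True,
       )
       return result.stdout

   if __name__ == "__main__":
       print(show_controller_config())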
Automatic Server Recovery  
ASR is a feature that causes the system to restart when a catastrophic operating system error occurs, such as  
a blue screen, ABEND, or panic. A system fail-safe timer, the ASR timer, starts when the System Management  
driver, also known as the Health Driver, is loaded. When the operating system is functioning properly, the  
system periodically resets the timer. However, when the operating system fails, the timer expires and restarts  
the server.  
ASR increases server availability by restarting the server within a specified time after a system hang. You can  
disable ASR from the System Management Homepage or through UEFI System Utilities.  
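The interaction between the Health Driver and the ASR timer can be illustrated with the small, purely conceptual Python sketch below; it is not HP's implementation, just a software model of a fail-safe timer that triggers a restart when the operating system stops resetting it within the configured interval.

   # Conceptual model of the ASR fail-safe timer, not HP's implementation.
   import time

   class AsrTimer:
       def __init__(self, timeout_seconds):
           self.timeout = timeout_seconds
           self.last_reset = time.monotonic()

       def pet(self):
           """Called periodically by a healthy operating system (the Health Driver)."""
           self.last_reset = time.monotonic()

       def expired(self):
           """True when the OS has stopped resetting the timer, e.g. after a hang."""
           return time.monotonic() - self.last_reset > self.timeout

   def restart_server():
       print("ASR timer expired: restarting the server")

   if __name__ == "__main__":
       asr = AsrTimer(timeout_seconds=1.0)   # real ASR timeouts are minutes, shortened for the demo
       asr.pet()                             # a healthy OS keeps resetting the timer
       time.sleep(1.5)                       # simulate an OS hang: no further resets
       if asr.expired():
           restart_server()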
USB support  
HP nodes support both USB 2.0 ports and USB 3.0 ports. Both types of ports support installing all types of  
USB devices (USB 1.0, USB 2.0, and USB 3.0), but may run at lower speeds in specific situations:  
USB 3.0 capable devices operate at USB 2.0 speeds when installed in a USB 2.0 port.  
When the node is configured for UEFI Boot Mode, HP provides legacy USB support in the pre-boot  
environment prior to the operating system loading for USB 1.0, USB 2.0, and USB 3.0 speeds.
When the node is configured for Legacy BIOS Boot Mode, HP provides legacy USB support in the  
pre-boot environment prior to the operating system loading for USB 1.0 and USB 2.0 speeds. While  
USB 3.0 ports can be used with all devices in Legacy BIOS Boot Mode, they are not available at USB  
3.0 speeds in the pre-boot environment. Standard USB support (USB support from within the operating  
system) is provided by the OS through the appropriate USB device drivers. Support for USB 3.0 varies  
by operating system.  
For maximum compatibility of USB 3.0 devices with all operating systems, HP provides a configuration  
setting for USB 3.0 Mode. Auto is the default setting. This setting impacts USB 3.0 devices when connected  
to USB 3.0 ports in the following manner:  
Auto (default)—If configured in Auto Mode, USB 3.0 capable devices operate at USB 2.0 speeds in the  
pre-boot environment and during boot. When a USB 3.0 capable OS USB driver loads, USB 3.0  
devices transition to USB 3.0 speeds. This mode provides compatibility with operating systems that do  
not support USB 3.0 while still allowing USB 3.0 devices to operate at USB 3.0 speeds with state-of-the-art operating systems.
Enabled—If Enabled, USB 3.0 capable devices operate at USB 3.0 speeds at all times (including the  
pre-boot environment) when in UEFI Boot Mode. This mode should not be used with operating systems  
that do not support USB 3.0. If operating in Legacy BIOS Boot Mode, the USB 3.0 ports cannot function
in the pre-boot environment and are not bootable.  
Disabled—If configured for Disabled, USB 3.0 capable devices function at USB 2.0 speeds at all times.  
The pre-OS behavior of the USB ports is configurable in System Utilities, so that the user can change the  
default operation of the USB ports. For more information, see the HP UEFI System Utilities User Guide for HP  
ProLiant Gen9 Servers on the HP website (http://www.hp.com/go/ProLiantUEFI/docs).  
External USB functionality  
HP provides external USB support to enable local connection of USB devices for node administration,  
configuration, and diagnostic procedures.  
For additional security, external USB functionality can be disabled through USB options in UEFI System  
Utilities.  
Redundant ROM support  
The node enables you to upgrade or configure the ROM safely with redundant ROM support. The node has  
a single ROM that acts as two separate ROM images. In the standard implementation, one side of the ROM  
contains the current ROM program version, while the other side of the ROM contains a backup version.  
NOTE: The server ships with the same version programmed on each side of the ROM.  
Safety and security benefits  
When you flash the system ROM, ROMPaq writes over the backup ROM and saves the current ROM as a  
backup, enabling you to switch easily to the alternate ROM version if the new ROM becomes corrupted for  
any reason. This feature protects the existing ROM version, even if you experience a power failure while  
flashing the ROM.  
Keeping the system current  
Access to HP Support Materials  
Access to some updates for HP ProLiant Servers may require product entitlement when accessed through the  
HP Support Center support portal. HP recommends that you have an HP Passport set up with relevant  
entitlements. For more information, see the HP website  
Updating firmware or System ROM  
Multiple methods exist to update the firmware or System ROM:  
HP Service Pack for ProLiant (on page 148)  
FWUPDATE utility (on page 154)  
FWUpdate command from within the Embedded UEFI shell (on page 154)  
Firmware Update application in System Utilities (on page 155)  
Online Flash components (on page 155)  
Product entitlement is required to perform updates. For more information, see "Access to HP Support  
Materials (on page 153)."  
FWUPDATE utility  
The FWUPDATE utility enables you to upgrade the system firmware (BIOS).  
To use the utility to upgrade the firmware:  
1. Download the FWUPDATE flash component from the HP website (http://www.hp.com/go/hpsc).
2. Save the FWUPDATE flash components to a USB key.
3. Set the boot order so the USB key will boot first using one of the following options:
   o Configure the boot order so the USB key is the first bootable device.
   o Press F11 (Boot Menu) when prompted during system boot to access the One-Time Boot Menu. This menu allows you to select the boot device for a specific boot and does not modify the boot order configuration settings.
4. Insert the USB key into an available USB port.
5. Boot the system.
The FWUPDATE utility checks the system and provides a choice (if more than one exists) of available  
firmware revisions.  
To download the flash components, see the HP website (http://www.hp.com/go/hpsc).  
For more information about the One-Time Boot Menu, see the HP UEFI System Utilities User Guide for HP  
ProLiant Gen9 Servers on the HP website (http://www.hp.com/go/ProLiantUEFI/docs).  
FWUpdate command from within the Embedded UEFI Shell  
For systems configured in either boot mode, update the firmware:  
1. Access the System ROM Flash Binary component for your node from the HP Support Center (http://www.hp.com/go/hpsc). When searching for the component, always select OS Independent to locate the binary file.
2. Copy the binary file to a USB media or iLO virtual media.
3. Attach the media to the node.
4. Boot to Embedded Shell.
5. To obtain the assigned file system volume for the USB key, enter map -r. For more information about accessing a file system from the shell, see the HP UEFI Shell User Guide for HP ProLiant Gen9 Servers on the HP website (http://www.hp.com/go/ProLiantUEFI/docs).
6. Change to the file system that contains the System ROM Flash Binary component for your node. Enter one of the fsx file systems available, such as fs0 or fs1, and press Enter.
7. Use the cd command to change from the current directory to the directory that contains the binary file.
8. Enter fwupdate -d BIOS -f <filename> to flash the system ROM.
   For help on the FWUPDATE command, enter the command:
   help fwupdate -b
9. Reboot the node. A reboot is required after the firmware update for the updates to take effect and for hardware stability to be maintained.
For more information about the commands used in this procedure, see the HP UEFI Shell User Guide for HP  
ProLiant Gen9 Servers on the HP website (http://www.hp.com/go/ProLiantUEFI/docs).  
Firmware Update application in System Utilities  
For systems configured in either boot mode, update the firmware:  
1. Access the System ROM Flash Binary component for your node from the HP Support Center (http://www.hp.com/go/hpsc). When searching for the component, always select OS Independent to find the component.
2. Copy the binary file to a USB media or iLO virtual media.
3. Attach the media to the node.
4. During POST, press F9 to enter System Utilities.
5. Select Embedded Applications > Firmware Update > System ROM > Select Firmware File.
6. Select the device containing the flash file.
7. Select the flash file. This step may take a few moments to complete.
8. Select Start firmware update and allow the process to complete.
9. Reboot the node. A reboot is required after the firmware update for the updates to take effect and for hardware stability to be maintained.
Online Flash components  
This component provides updated system firmware that can be installed directly on supported Operating  
Systems. Additionally, when used in conjunction with HP SUM ("HP Smart Update Manager" on page 148),  
this Smart Component allows the user to update firmware on remote servers from a central location. This  
remote deployment capability eliminates the need for the user to be physically present at the server to  
perform a firmware update.  
Drivers  
IMPORTANT: Always perform a backup before installing or updating device drivers.  
The node includes new hardware that may not have driver support on all OS installation media.  
If you are installing an Intelligent Provisioning-supported OS, use Intelligent Provisioning (on page 146) and  
its Configure and Install feature to install the OS and latest supported drivers.  
If you do not use Intelligent Provisioning to install an OS, drivers for some of the new hardware are required.  
These drivers, as well as other option drivers, ROM images, and value-add software can be downloaded as  
part of an SPP.  
If you are installing drivers from SPP, be sure that you are using the latest SPP version that your node supports.  
To verify that your node is using the latest supported version and for more information about SPP, see the HP website.
To locate the drivers for a particular server, go to the HP website (http://www.hp.com/go/hpsc) and click  
on Drivers, Software & Firmware. Then, enter your product name in the Find an HP product field and click  
Go.  
Software and firmware  
Software and firmware should be updated before using the server for the first time, unless any installed  
software or components require an older version.  
For system software and firmware updates, use one of the following sources:  
Download the SPP ("HP Service Pack for ProLiant" on page 148) from the HP Service Pack for ProLiant download page (http://www.hp.com/go/spp).
Download individual drivers, firmware, or other systems software components from the node product  
page in the HP Support Center (http://www.hp.com/go/hpsc).  
Operating System Version Support  
For information about specific versions of a supported operating system, refer to the operating system support matrix on the HP website.
Version control  
The VCRM and VCA are web-enabled Insight Management Agent tools that HP SIM uses to schedule
software update tasks for the entire enterprise.
VCRM manages the repository for SPP. Administrators can view the SPP contents or configure VCRM to  
automatically update the repository with internet downloads of the latest software and firmware from  
HP.  
VCA compares installed software versions on the node with updates available in the VCRM managed  
repository. Administrators configure VCA to point to a repository managed by VCRM.  
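To make the comparison that VCA performs easier to picture, here is a minimal Python sketch. It is illustrative only: the dictionaries, component names, version tuples, and the find_outdated helper are hypothetical and do not represent the VCA implementation or any HP interface.

def find_outdated(installed, repository):
    """Return components whose repository version is newer than the installed one.

    installed and repository map component names to version tuples,
    for example {"System ROM": (2015, 3, 5)}. Purely illustrative.
    """
    outdated = {}
    for name, installed_version in installed.items():
        available = repository.get(name)
        if available is not None and available > installed_version:
            outdated[name] = (installed_version, available)
    return outdated

# Example with made-up component names and versions:
installed = {"System ROM": (2015, 1, 28), "iLO 4 firmware": (2, 10)}
repository = {"System ROM": (2015, 3, 5), "iLO 4 firmware": (2, 10)}
print(find_outdated(installed, repository))
# {'System ROM': ((2015, 1, 28), (2015, 3, 5))}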
For more information about version control tools, see the HP Systems Insight Manager User Guide, the HP
Version Control Agent User Guide, and the HP Version Control Repository Manager User Guide on the HP website:
1. Select HP Insight Management from the available options in Products and Solutions.
2. Select HP Version Control from the available options in HP Insight Management.
3. Download the latest document.
HP operating systems and virtualization software support for  
ProLiant servers  
For information about specific versions of a supported operating system, see the HP website.
HP Technology Service Portfolio  
Connect to HP for assistance on the journey to the new style of IT. HP Technology Services delivers confidence  
and reduces risk to help you realize agility and stability in your IT infrastructure.  
Utilize our consulting expertise in private and hybrid cloud computing, big data and mobility requirements, data center infrastructure improvements, and better use of today's server, storage, and networking technology. For more information, see the HP website (http://www.hp.com/services/consulting).
Our support portfolio covers services for HP server, storage and networking hardware and software plus the  
leading industry standard operating systems. Let us work proactively with you to prevent problems. Our  
flexible choices of hardware and software support coverage windows and response times help resolve  
problems faster, reduce unplanned outages and free your staff for more important tasks. For more  
information, see the HP website (http://www.hp.com/services/support).  
Tap into our knowledge, expertise, innovation and world-class services to achieve better results. Access and  
apply technology in new ways to optimize your operations and you’ll be positioned for success.  
Change control and proactive notification  
HP offers Change Control and Proactive Notification to notify customers 30 to 60 days in advance of  
upcoming hardware and software changes on HP commercial products.  
For more information, refer to the HP website (http://www.hp.com/go/pcn).  
System battery  
If the node no longer automatically displays the correct date and time, then replace the battery that provides  
power to the real-time clock. Under normal use, battery life is 5 to 10 years.  
WARNING: The computer contains an internal lithium manganese dioxide battery, a vanadium
pentoxide battery, or an alkaline battery pack. A risk of fire and burns exists if the battery pack is not
properly handled. To reduce the risk of personal injury:
Do not attempt to recharge the battery.  
Do not expose the battery to temperatures higher than 60°C (140°F).  
Do not disassemble, crush, puncture, short external contacts, or dispose of in fire or water.  
Replace only with the spare designated for this product.  
To remove the component:  
1. Power down the node (on page 31).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. Remove any installed PCI riser cage assemblies ("Remove the PCI riser cage assembly" on page 48).
6. Locate the battery on the system board ("System board components" on page 16).
7. If the system battery is secured by a metal tab, do the following:
a. Use your finger or a small flat-bladed, nonconductive tool to press the metal tab. This will partially
release the battery from the socket.
b. Remove the battery.
IMPORTANT: Replacing the system board battery resets the system ROM to its default  
configuration. After replacing the battery, reconfigure the system through RBSU.  
To replace the component, reverse the removal procedure.  
For more information about battery replacement or proper disposal, contact an authorized reseller or an  
authorized service provider.  
Troubleshooting  
Troubleshooting resources  
The HP ProLiant Gen9 Troubleshooting Guide, Volume I: Troubleshooting provides procedures for resolving
common problems and comprehensive courses of action for fault isolation and identification, issue resolution,
and software maintenance on ProLiant servers and server blades. To view the guide, select a language on the HP website.
The HP ProLiant Gen9 Troubleshooting Guide, Volume II: Error Messages provides a list of error messages
and information to assist with interpreting and resolving error messages on ProLiant servers and server
blades. To view the guide, select a language on the HP website.
Regulatory information  
Safety and regulatory compliance  
For safety, environmental, and regulatory information, see Safety and Compliance Information for Server,  
Storage, Power, Networking, and Rack Products, available at the HP website.
Belarus Kazakhstan Russia marking  
Manufacturer  
Hewlett-Packard Company, Address: 3000 Hanover Street, Palo Alto, California 94304, U.S.  
Local representative information (Russian)  
HP Russia  
HP Belarus  
HP Kazakhstan  
Local representative information (Kazakh)  
Manufacturing date  
The manufacturing date is defined by the serial number (HP serial number format for this product):  
CCSYWWZZZZ  
Valid date formats include the following:  
YWW, where Y indicates the year counting from within each new decade, with 2000 as the starting
point. For example, 238: 2 for 2002 and 38 for the week of September 9. In addition, 2010 is
indicated by 0, 2011 by 1, 2012 by 2, 2013 by 3, and so forth.
YYWW, where YY indicates the year, using a base year of 2000. For example, 0238: 02 for 2002 and
38 for the week of September 9.
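As an illustration of the formats above, the following Python sketch decodes the date digits of a serial number in the CCSYWWZZZZ layout. The helper names and the sample serial are hypothetical, and the decade for the single-digit Y form must be known from other context, so it is passed in as an explicit assumption.

def decode_yww(serial, decade_base):
    """Decode the YWW digits of a CCSYWWZZZZ serial number.

    decade_base is the first year of the decade the product was built in
    (2000, 2010, and so on); the single year digit is counted from it.
    """
    year_digit = int(serial[3])
    week = int(serial[4:6])
    return decade_base + year_digit, week

def decode_yyww(yyww):
    """Decode the four-digit YYWW form, base year 2000."""
    return 2000 + int(yyww[:2]), int(yyww[2:])

# Examples using the figures quoted above ("CCS238ZZZZ" is a made-up serial):
print(decode_yww("CCS238ZZZZ", decade_base=2000))  # (2002, 38)
print(decode_yyww("0238"))                         # (2002, 38)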
Turkey RoHS material content declaration  
Ukraine RoHS material content declaration  
Warranty information  
HP ProLiant and X86 Servers and Options (http://www.hp.com/support/ProLiantServers-Warranties)  
Electrostatic discharge  
Preventing electrostatic discharge  
To prevent damaging the system, be aware of the precautions you need to follow when setting up the system  
or handling parts. A discharge of static electricity from a finger or other conductor may damage system  
boards or other static-sensitive devices. This type of damage may reduce the life expectancy of the device.  
To prevent electrostatic damage:  
Avoid hand contact by transporting and storing products in static-safe containers.  
Keep electrostatic-sensitive parts in their containers until they arrive at static-free workstations.  
Place parts on a grounded surface before removing them from their containers.  
Avoid touching pins, leads, or circuitry.  
Always be properly grounded when touching a static-sensitive component or assembly.  
Grounding methods to prevent electrostatic discharge  
Several methods are used for grounding. Use one or more of the following methods when handling or  
installing electrostatic-sensitive parts:  
Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist  
straps are flexible straps with a minimum of 1 megohm ±10 percent resistance in the ground cords. To  
provide proper ground, wear the strap snug against the skin.  
Use heel straps, toe straps, or boot straps at standing workstations. Wear the straps on both feet when  
standing on conductive floors or dissipating floor mats.  
Use conductive field service tools.  
Use a portable field service kit with a folding static-dissipating work mat.  
If you do not have any of the suggested equipment for proper grounding, have an authorized reseller install  
the part.  
For more information on static electricity or assistance with product installation, contact an authorized  
reseller.  
Specifications  
Environmental specifications  
Temperature range*
Operating: 10°C to 35°C (50°F to 95°F)
Nonoperating: -30°C to 60°C (-22°F to 140°F)
Relative humidity (noncondensing)
Operating: Minimum to be the higher (more moisture) of -12°C (10.4°F) dew point or 8% relative humidity; maximum to be 24°C (75.2°F) dew point or 90% relative humidity
Nonoperating: 5% to 95%, 38.7°C (101.7°F) maximum wet bulb temperature
* All temperature ratings shown are for sea level. An altitude derating of 1.0°C per 304.8 m (1.8°F per  
1000 ft) to 3048 m (10,000 ft) is applicable. No direct sunlight allowed. Maximum rate of change is 20°C  
per hour (36°F per hour). The upper limit and rate of change might be limited by the type and number of  
options installed.  
For certain approved hardware configurations, the supported system inlet temperature range is extended:  
5°C to 10°C (41°F to 50°F) and 35°C to 40°C (95°F to 104°F) at sea level with an altitude derating of  
1.0°C per every 175 m (1.8°F per every 574 ft) above 900 m (2953 ft) to a maximum of 3048 m  
(10,000 ft).  
40°C to 45°C (104°F to 113°F) at sea level with an altitude derating of 1.0°C per every 125 m (1.8°F  
per every 410 ft) above 900 m (2953 ft) to a maximum of 3048 m (10,000 ft).  
The approved hardware configurations for this system are listed on the HP website.
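The altitude derating quoted above lends itself to a quick calculation. The following Python sketch is a minimal illustration, assuming the 1.0°C per 304.8 m derating of the standard 10°C to 35°C operating range applies from sea level up to the 3048 m limit; the function name is hypothetical, and the QuickSpecs remain the authoritative source.

def max_inlet_temp_c(altitude_m, base_limit_c=35.0):
    """Approximate derated upper inlet temperature for the standard range.

    Applies 1.0 degC of derating per 304.8 m (1.8 degF per 1000 ft) of
    altitude, valid up to 3048 m (10,000 ft). Illustrative only.
    """
    if altitude_m > 3048:
        raise ValueError("altitude exceeds the supported 3048 m (10,000 ft) maximum")
    return base_limit_c - max(altitude_m, 0.0) / 304.8

# Example: at 1500 m the standard 35 degC limit derates to about 30.1 degC.
print(round(max_inlet_temp_c(1500), 1))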
Mechanical specifications  
HP Apollo r2200 Chassis (12 LFF)
Dimensions
Height: 8.73 cm (3.44 in)
Depth: 86.33 cm (33.40 in)
Width: 44.80 cm (17.64 in)
Weight (with nodes removed)
Weight (maximum): 25.37 kg (55.94 lb)
Weight (minimum): 11.94 kg (26.37 lb)
HP Apollo r2600 Chassis (24 SFF)
Dimensions
Height: 8.73 cm (3.44 in)
Depth: 82.27 cm (32.40 in)
Width: 44.80 cm (17.64 in)
Weight (with nodes removed)
Weight (maximum): 23.45 kg (51.70 lb)
Weight (minimum): 9.86 kg (21.74 lb)
HP ProLiant XL170r Gen9 Server Node (1U)
Dimensions
Height: 4.13 cm (1.63 in)
Depth: 64.15 cm (25.26 in)
Width: 17.95 cm (7.07 in)
Weight
Weight (maximum): 1.73 kg (3.82 lb)
Weight (minimum): 1.67 kg (3.69 lb)
HP ProLiant XL190r Gen9 Server Node (2U)
Dimensions
Height: 8.36 cm (3.30 in)
Depth: 69.15 cm (27.23 in)
Width: 17.95 cm (7.07 in)
Weight
Weight (maximum): 6.47 kg (14.27 lb)
Weight (minimum): 4.73 kg (10.43 lb)
Power supply specifications  
Depending on installed options, the node is configured with one of the following power supplies:  
HP 800W Flex Slot Titanium Hot Plug Power Supply Kit – 96% efficiency  
HP 800W Flex Slot Platinum Hot Plug Power Supply Kit – 94% efficiency  
HP 800W Flex Slot Universal Hot Plug Power Supply Kit – 94% efficiency  
HP 800W Flex Slot -48VDC Hot Plug Power Supply Kit – 94% efficiency  
HP 1400W Flex Slot Platinum Plus Hot Plug Power Supply Kit – 94% efficiency  
For detailed power supply specifications, see the QuickSpecs on the HP website (http://www.hp.com/go/qs).
Hot-plug power supply calculations  
For hot-plug power supply specifications and calculators to determine electrical and heat loading for the node, see the HP website.
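For a rough sense of electrical and heat loading, the following Python sketch estimates input power and dissipated heat from an output load and the rated efficiencies listed above. The function names are hypothetical, the percentages are nameplate efficiencies rather than measured values at a specific load point, and the HP calculators remain the authoritative sizing tools.

def input_power_w(output_w, efficiency):
    """Input power drawn to deliver a given output load at a rated
    efficiency (for example 0.96 for the 800 W Titanium supply above)."""
    return output_w / efficiency

def heat_load_w(output_w, efficiency):
    """Power dissipated as heat inside the supply at that load."""
    return input_power_w(output_w, efficiency) - output_w

# Example: delivering 800 W at 96% efficiency draws about 833 W and
# dissipates roughly 33 W; at 94% efficiency it draws about 851 W.
print(round(input_power_w(800, 0.96)), round(heat_load_w(800, 0.96)))
print(round(input_power_w(800, 0.94)), round(heat_load_w(800, 0.94)))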
Support and other resources  
Before you contact HP  
Be sure to have the following information available before you call HP:  
Active Health System log (HP ProLiant Gen8 or later products)  
Download and have available an Active Health System log for 7 days before the failure was detected.  
For more information, see the HP iLO 4 User Guide or HP Intelligent Provisioning User Guide on the HP website.
Onboard Administrator SHOW ALL report (for HP BladeSystem products only)
For more information on obtaining the Onboard Administrator SHOW ALL report, see the HP website.
Technical support registration number (if applicable)  
Product serial number  
Product model name and number  
Product identification number  
Applicable error messages  
Add-on boards or hardware  
Third-party hardware or software  
Operating system type and revision level  
HP contact information  
For United States and worldwide contact information, see the Contact HP website.
In the United States:  
To contact HP by phone, call 1-800-334-5144. For continuous quality improvement, calls may be  
recorded or monitored.  
If you have purchased a Care Pack (service upgrade), see the Support & Drivers website  
(http://www8.hp.com/us/en/support-drivers.html). If the problem cannot be resolved at the website,  
call 1-800-633-3600. For more information about Care Packs, see the HP website.
Acronyms and abbreviations  
ABEND  
abnormal end  
ACU  
Array Configuration Utility  
ADM  
Advanced Data Mirroring  
AMP  
Advanced Memory Protection  
ASHRAE  
American Society of Heating, Refrigerating and Air-Conditioning Engineers  
ASR  
Automatic Server Recovery  
CSA  
Canadian Standards Association  
CSR  
Customer Self Repair  
DDR  
double data rate  
DPC  
DIMMs per channel  
EAC  
EuroAsian Economic Commission  
FBWC  
flash-backed write cache  
GPU  
graphics processing unit  
HP APM  
HP Advanced Power Manager  
HP SIM  
HP Systems Insight Manager  
HP SSA  
HP Smart Storage Administrator  
HP SUM  
HP Smart Update Manager  
IEC  
International Electrotechnical Commission  
iLO  
Integrated Lights-Out  
IML  
Integrated Management Log  
ISO  
International Organization for Standardization  
LFF  
large form factor  
LOM  
LAN on Motherboard  
LRDIMM  
load reduced dual in-line memory module  
NMI  
nonmaskable interrupt  
NVRAM  
nonvolatile random access memory
OA  
Onboard Administrator  
PCIe  
Peripheral Component Interconnect Express  
PDU  
power distribution unit  
POST  
Power-On Self Test  
RBSU  
ROM-Based Setup Utility  
RCM  
Rack control management  
RDIMM  
registered dual in-line memory module  
RDP  
Remote Desktop Protocol  
RPS  
redundant power supply  
SAS  
serial attached SCSI  
SATA  
serial ATA  
SFF  
small form factor  
SIM  
Systems Insight Manager  
SPP  
HP Service Pack for ProLiant  
SUV  
serial, USB, video  
TPM  
Trusted Platform Module  
UEFI  
Unified Extensible Firmware Interface  
UID  
unit identification  
USB  
universal serial bus  
VCA  
Version Control Agent  
VCRM  
Version Control Repository Manager  
VM  
Virtual Machine  
Documentation feedback  
HP is committed to providing documentation that meets your needs. To help us improve the documentation,  
send any errors, suggestions, or comments to Documentation Feedback (mailto:[email protected]).  
Include the document title and part number, version number, or the URL when submitting your feedback.  
Index  
A
D
access panel 36  
diagnosing problems 160  
diagnostic tools 143, 147, 151, 152  
diagnostics utility 147  
DIMM installation guidelines 74  
DIMM slot locations 19  
Active Health System 143, 144  
ACU (Array Configuration Utility) 152  
Advanced ECC memory 75, 76  
Advanced ECC support 75  
airflow requirements 56  
DIMMs, single-, dual-, and quad-rank 74
documentation 172  
documentation feedback 172  
drive numbering 19  
drivers 155  
Array Configuration Utility (ACU) 152  
ASR (Automatic Server Recovery) 152  
authorized reseller 163, 167  
Automatic Server Recovery (ASR) 152  
B
E
battery replacement notice 161  
Belarus Kazakhstan Russia marking 161  
BIOS upgrade 143  
boot options 149, 151  
BSMI notice 161  
electrical grounding requirements 56  
electrostatic discharge 163  
environmental requirements 55  
Erase Utility 143, 147  
buttons 9  
buttons, front panel 9  
F
FBWC module 97  
firmware 153, 156  
front panel components 9  
front panel LEDs 10  
C
Cable guard 133  
cables 133  
cabling 133  
cabling, front LED 133  
G
grounding methods 163  
grounding requirements 56, 163  
cache module 140  
cautions 163  
Change Control 149, 152, 157  
chassis components 9, 10, 11, 12, 16  
components 9  
components, identification 9, 10, 11, 12, 13, 14,  
configuration of system 143  
connectors 9  
contacting HP 167  
crash dump analysis 18  
customer self repair (CSR) 167  
H
hardware options 64  
hardware options installation 64  
health driver 152  
heatsink 123  
HP Advanced Power Manager (HP APM) 62  
HP Care Pack Services 54, 157  
HP contact information 167  
HP iLO 143  
HP Insight Diagnostics 147  
HP Insight Online 143, 146  
HP Insight Remote Support software 146, 157  
HP RESTful API 145, 151  
HP Service Pack for ProLiant 143, 147, 148  
HP Smart Storage Battery 99  
P
PCI riser cage 84  
phone numbers 167  
population guidelines, Advanced ECC 76  
power requirements 56, 166  
power supply 166  
HP Smart Update Manager overview 143, 148  
HP SSA (HP Smart Storage Administrator) 143, 152  
I
powering down 31  
iLO (Integrated Lights-Out) 143, 144, 145  
IML (Integrated Management Log) 143, 145  
Insight Diagnostics 147, 153  
processor 123  
Product ID 151  
installation services 54  
installation, server options 64  
Q
QuickSpecs 143  
installing hardware 64  
installing operating system 62  
R
Integrated Lights-Out (iLO) 143, 145  
Integrated Management Log (IML) 145  
Intelligent Provisioning 63, 143, 146, 147, 149  
internal USB connector 152  
Rack Control Management (RCM) module 67
rack installation 54  
rack warnings 57  
rear panel components 11, 13  
rear panel LEDs 12, 14  
L
redundant ROM 153  
registering the server 63  
LEDs, drive 22  
LEDs, power supply 16  
LEDs, troubleshooting 160  
regulatory compliance notices 161  
removing node from chassis 32  
requirements, power 56, 166  
requirements, temperature 56  
ROM-Based Setup Utility (RBSU) 149  
ROMPaq utility 153  
M
M.2 SATA SSD enablement board 121  
memory 74, 76  
memory configurations 76  
memory subsystem architecture 74  
memory, Advanced ECC 75  
memory, configuring 75, 76  
memory, online spare 76  
S
safety considerations 153, 161, 163  
safety information 153, 161  
scripted installation 148  
scripting toolkit 143, 148  
security bezel, installing 64  
security bezel, removing 35  
serial number 151  
server features and options 64  
Smart Update Manager 143, 148  
specifications, environmental 164  
support 167  
support and other resources 167  
supported operating systems 156, 157  
system battery 161, 163  
N
NIC connectors 13  
NMI functionality 18  
NMI header 18  
O
online spare memory 76, 77  
operating system installation 157  
operating systems 156, 157  
operations 31  
optimum environment 55  
options installation 64  
system board components 16  
System Erase Utility 147  
T
technical support 157, 167  
telephone numbers 167  
temperature requirements 56, 164  
TPM (Trusted Platform Module) 129, 131  
troubleshooting 160  
troubleshooting resources 160  
Trusted Platform Module (TPM) 129, 131  
U
updating the system ROM 153, 154, 155  
USB support 152  
utilities, deployment 143, 148  
V
ventilation 55  
Virtualization option 157  
W
warnings 57  