Servers
Here you can find all the server information.
Overview
Server Room:
The keys for the server room are in the secretary office, in the locker behind the door, in a small wooden box. They are labeled with “Server Raum”. Please turn off the lights and lock the door when leaving the server room.
Server Name | Type | Service Tag | OS | iDrac-IP | Chair-IP | Host Name | Note |
---|---|---|---|---|---|---|---|
Devimg01 | Dell PowerEdge R530 | 31ZMQ92 | Ubuntu 14.04.3 | 10.0.0.2 | 131.159.24.137 | devimg01-cm | Development and Image Server |
Testbed01 | Dell PowerEdge R530 | 303SQ92 | Ubuntu 14.04.3 | 10.100.0.1 | 131.159.24.142 | testbed01-cm | Testbed for RIFE project |
Testbed02 | Dell PowerEdge R530 | 304MQ92 | Ubuntu 14.04.3 | 10.100.0.2 | 131.159.24.150 | testbed02-cm | Testbed for LLCM / SSICLOPS |
Net01 | Dell PowerEdge R730 | 31VSQ92 | FreeBSD 10.2 + Ubuntu 14.04.3 | 10.200.0.1 | 131.159.24.163 | net01-cm | Networking / Cloud / Performance Tests |
Net02 | Dell PowerEdge R730 | 31XPQ92 | FreeBSD 10.2 + Ubuntu 14.04.3 | 10.200.0.2 | 131.159.24.151 | net02-cm | Networking / Cloud / Performance Tests |
Net03 | Dell PowerEdge R730 | CWKTQ92 | FreeBSD 10.2 + Ubuntu 14.04.3 | 10.200.0.3 | 131.159.24.166 | net03-cm | Networking / Cloud / Performance Tests |
Net04 | Dell PowerEdge R730 | 1JXFQK2 | Ubuntu 16.04 | 10.200.0.4 | - | net04.cm | SSICLOPS Project - FPGA Offloading Tests |
Net05 | Dell PowerEdge R730 | 1JW8QK2 | Ubuntu 16.04 | 10.200.0.5 | - | net05.cm | SSICLOPS Project - FPGA Offloading Tests |
Net06 | Dell PowerEdge R750 | 60Q52V3 | Ubuntu 22.04.01 | 10.30.0.1 | 131.159.25.63 | net-06 | |
Net07 | Dell PowerEdge R750 | B44GZT3 | Ubuntu 22.04.01 | 10.30.0.2 | 131.159.25.64 | net-07 | |
Sim01 | Dell PowerEdge R730 | 49P50D2 | Ubuntu 16.04 LTS Server | 10.250.0.1 | 131.159.24.15 | sim01-cm | Simulation Server |
Emu01 | Dell PowerEdge R630 | DWX40D2 | Ubuntu 16.04 LTS Desktop | 10.150.0.1 | 131.159.24.18 | emu01-cm | Emulation Server |
Emu02 | Dell PowerEdge R730 | 9DG49F2 | - | 10.150.0.2 | 131.159.24.21 | emu02-cm | Server Dell S4048 Switch Controller |
FX2-1 | Dell PowerEdge FX2 Chassis | 49WZ8F2 | Firmware 1.3 | 10.150.10.1 | - | emu-fx-1 | Chassis Controller Management Emu03-Emu06 |
Emu03 | Dell PowerEdge FC630 | 49L19F2 | Ubuntu 16.04 LTS Server | 10.150.0.3 | 131.159.24.20 | emu03-cm | Emulation Servercluster |
Emu04 | Dell PowerEdge FC630 | 49M09F2 | Ubuntu 16.04 LTS Server | 10.150.0.4 | - | emu04-cm | Emulation Servercluster |
Emu05 | Dell PowerEdge FC630 | 49M59F2 | Ubuntu 16.04 LTS Server | 10.150.0.5 | - | emu05-cm | Emulation Servercluster |
Emu06 | Dell PowerEdge FC630 | 49MZ8F2 | Ubuntu 16.04 LTS Server | 10.150.0.6 | - | emu06-cm | Emulation Servercluster |
FX2-2 | Dell PowerEdge FX2 Chassis | 49LZ8F2 | Firmware 1.3 | 10.150.10.2 | - | emu-fx-2 | Chassis Controller Management Emu07-Emu10 |
Emu07 | Dell PowerEdge FC630 | 49R69F2 | Ubuntu 16.04 LTS Server | 10.150.0.7 | - | emu07-cm | Emulation Servercluster |
Emu08 | Dell PowerEdge FC630 | 49S29F2 | Ubuntu 16.04 LTS Server | 10.150.0.8 | - | emu08-cm | Emulation Servercluster |
Emu09 | Dell PowerEdge FC630 | 49S69F2 | Ubuntu 16.04 LTS Server | 10.150.0.9 | - | emu09-cm | Emulation Servercluster |
Emu10 | Dell PowerEdge FC630 | 49T19F2 | Ubuntu 16.04 LTS Server | 10.150.0.10 | - | emu10-cm | Emulation Servercluster |
FX2-3 | Dell PowerEdge FX2 Chassis | 48M59F2 | Firmware 1.3 | 10.150.10.3 | - | emu-fx-3 | Chassis Controller Management Emu11-Emu14 |
Emu11 | Dell PowerEdge FC630 | 48359F2 | Ubuntu 16.04 LTS Server | 10.150.0.11 | - | emu11-cm | Emulation Servercluster |
Emu12 | Dell PowerEdge FC630 | 48529F2 | Ubuntu 16.04 LTS Server | 10.150.0.12 | - | emu12-cm | Emulation Servercluster |
Emu13 | Dell PowerEdge FC630 | 486Y8F2 | Ubuntu 16.04 LTS Server | 10.150.0.13 | - | emu13-cm | Emulation Servercluster |
Emu14 | Dell PowerEdge FC630 | 487Z8F2 | Ubuntu 16.04 LTS Server | 10.150.0.14 | - | emu14-cm | Emulation Servercluster |
Mon01 | Dell PowerEdge R430 | 6L0T7J2 | Ubuntu 16.04 LTS Server | 10.25.0.1 | - | mon01-cm | Storage Monitoring Server |
Sto01 | Dell PowerEdge R730xd | 6KKN7J2 | Ubuntu 16.04 LTS Server | 10.50.0.1 | - | sto01-cm | Storage Server |
Sto02 | Dell PowerEdge R730xd | 6KLQ7J2 | Ubuntu 16.04 LTS Server | 10.50.0.2 | - | sto02-cm | Storage Server |
cmp01 | Dell PowerEdge R6525 | 88WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.1 | 131.159.25.22 | cmp-01 | Compute Server |
cmp02 | Dell PowerEdge R6525 | 68WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.2 | 131.159.25.21 | cmp-02 | Compute Server |
cmp03 | Dell PowerEdge R6525 | 58WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.3 | 131.159.25.23 | cmp-03 | Compute Server |
cmp04 | Dell PowerEdge R6525 | 48WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.4 | 131.159.25.24 | cmp-04 | Compute Server |
cmp05 | Dell PowerEdge R6525 | 78WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.5 | 131.159.25.25 | cmp-05 | Compute Server |
cmp06 | Dell PowerEdge R6525 | 98WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.6 | 131.159.25.26 | cmp-06 | Compute Server |
gpu01 | Dell PowerEdge R7525 | GV8BYJ3 | Ubuntu 20.04.05 LTS Server | 10.20.0.1 | 131.159.25.18 | gpu-01 | GPU Server |
gpu02 | Dell PowerEdge R7525 | FV8BYJ3 | Ubuntu 20.04.05 LTS Server | 10.20.0.2 | 131.159.25.19 | gpu-02 | GPU Server |
Server-List
More specific information about each server.
DevImg01
Purpose: Development server and storage of operating system images for the other servers
Operating System | Chair-IP | iDrac-IP | Server |
---|---|---|---|
Ubuntu 14.04.3 | 131.159.24.137 | 10.0.0.2 | Dell PowerEdge R530 |
Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage |
---|---|---|
125GB | 16 (32) | 4x4TB = 16TB with RAID5 (one virtual disk with 11TB) |
iDRAC
- MAC: 14:18:77:45:AE:53
- IP: 10.0.0.2, Subnet: 255.0.0.0, Gateway: 0.0.0.0, No DNS
Network Interface em1
- Name: devimg01.cm.in.tum.de
- Device: Embedded NIC.1-1-1, MAC: 14:18:77:45:AE:4F
- IP: 131.159.24.137
Network interface 10 Gigabit
- Device: NIC.Slot.3-1-1, Operating system device: p3p1
- IP: 10.0.0.1, Subnet: 255.255.0.0
- Do not use 255.0.0.0; otherwise the LRZ backup node is unreachable
Testbed
Purpose: Testbed for the RIFE (testbed01) and LLCM/SSICLOPS (testbed02) projects. We have two testbed servers that are completely identical.
Name | Operating System | Chair-IP | iDrac-IP | Server | Notes |
---|---|---|---|---|---|
Testbed01-cm | Ubuntu 14.04.3 | 131.159.24.142 | 10.100.0.1 | Dell PowerEdge R530 | RIFE project |
Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage |
---|---|---|
105GB | 16 (32) | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) |
Management interface (Dell iDRAC)
- MAC: 14:18:77:45:A1:CB
- IP: 10.100.0.1, Subnet: 255.0.0.0, Gateway: 0.0.0.0, No DNS
Network interface em1
- Name: testbed01.cm.in.tum.de
- Device: Embedded NIC.1-1-1, MAC: 14:18:77:45:A1:C7
- IP: 131.159.24.142
Network interface 10 Gigabit
- Device: NIC.Slot.2-1-1, Operating system device: p2p1
- IP: 10.0.0.2, Subnet: 255.255.0.0
Virtualization with two machines
Name | Operating System | Chair-IP | iDrac-IP | Server | Notes |
---|---|---|---|---|---|
Testbed02-cm | Ubuntu 14.04.3 | 131.159.24.150 | 10.100.0.2 | Dell PowerEdge R530 | LLCM/SSICLOPS project |
Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage |
---|---|---|
111GB | 16 (32) | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) |
Management interface (Dell iDRAC)
- MAC: 14:18:77:45:AA:8D
- IP: 10.100.0.2, Subnet: 255.0.0.0, Gateway: 0.0.0.0, No DNS
Network interface em1
- Name: testbed02.cm.in.tum.de
- Device: Embedded NIC.1-1-1, MAC: 14:18:77:45:AA:89
- IP: 131.159.24.150
Network interface 10 Gigabit
- Device: NIC.Slot.2-1-1, Operating system device: p2p1
- IP: 10.0.0.3, Subnet: 255.255.0.0
Net
The first three net servers have an external graphics card for offloading experiments and tests.
To check whether the graphics card is installed, use the following command:
sudo lspci -v | grep "S7000" -A 17 -B 2
There are three servers for network and performance tests.
Name | Operating System | Chair-IP | iDrac-IP | Server | Storage | Notes |
---|---|---|---|---|---|---|
Net01-cm | Ubuntu 14.04.3 / FreeBSD 10.2 | 131.159.24.163 | 10.200.0.1 | Dell PowerEdge R730 | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) | |
Net02-cm | Ubuntu 14.04.3 / FreeBSD 10.2 | 131.159.24.151 | 10.200.0.2 | Dell PowerEdge R730 | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) | |
Net03-cm | Ubuntu 14.04.3 / FreeBSD 10.2 | 131.159.24.166 | 10.200.0.3 | Dell PowerEdge R730 | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) |
Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage |
---|---|---|
125GB | 12 (24) | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) |
The next two net servers have FPGA network cards installed:
These are the specifications of the two servers dedicated to offloading tests.
Name | Operating System | Service Tag | iDrac-IP | Server | Storage | Notes |
---|---|---|---|---|---|---|
Net04.cm | Ubuntu 16.04 (MAAS) | 1JXFQK2 | 10.200.0.4 | Dell PowerEdge R730 | - | Installed with MAAS |
Net05.cm | Ubuntu 16.04 (MAAS) | 1JW8QK2 | 10.200.0.5 | Dell PowerEdge R730 | - | Installed with MAAS |
Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage |
---|---|---|
64GB | 12 | 600GB data storage (virtual disk RAID-1) - 120GB SSD system storage (virtual disk RAID-0) |
Both servers have two 120GB SSDs. The first thought was to create a RAID-1 from the SSDs, but as mentioned in several discussions, there is a very high chance that both SSDs in a RAID-1 fail at the same time. So one SSD is currently left unused. If the system SSD fails, we can create a new virtual disk (iDrac) with the remaining SSD and do a quick reinstall with MAAS.
Sim
Purpose: Simulation server
Name | Operating System | Chair-IP | iDrac-IP | Server | Notes |
---|---|---|---|---|---|
Sim01-cm | Ubuntu 16.04 Server | 131.159.24.15 | 10.250.0.1 | Dell PowerEdge R730 |
Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage |
---|---|---|
251GB | 16 (32) | 4x560GB = 2.2TB with RAID1 (one virtual disk with 1.1TB) |
Management interface (Dell iDRAC)
- MAC: 14:18:77:5F:C2:6D
- IP: 10.250.0.1, Subnet: 255.0.0.0, Gateway: 0.0.0.0, No DNS
Network interface eno3
- Name: sim01.cm.in.tum.de
- Device: Embedded NIC.1-3-1, MAC: 24:6E:96:13:9E:5C
- IP: 131.159.24.15
Emu
Purpose: Emulation server
Emu01
Name | Operating System | Chair-IP | iDrac-IP | Server |
---|---|---|---|---|
Emu01-cm | Ubuntu 16.04 Desktop | 131.159.24.18 | 10.150.0.1 | Dell PowerEdge R630 |
Memory (RAM) | CPU Cores (Hyper-threading) | Storage |
---|---|---|
123GB | 32 (64) | 4x280GB = 1120GB with RAID5 (one virtual disk with 840GB) |
Management interface (Dell iDRAC)
- MAC: 64:00:6A:C4:4B:84
- IP: 10.150.0.1, Subnet: 255.0.0.0, Gateway: 0.0.0.0, No DNS
Network interface eno3
- Name: emu01.cm.in.tum.de
- Device: Embedded NIC.1-3-1, MAC: 24:6E:96:12:B2:74
- IP: 131.159.24.18
Emu02
Name | Operating System | Chair-IP | iDrac-IP | Server |
---|---|---|---|---|
Emu02-cm | Ubuntu 16.04.1 Server | 131.159.24.21 | 10.150.0.2 | Dell PowerEdge R730 |
Memory (RAM) | CPU Cores (Hyper-threading) | Storage |
---|---|---|
128GB | 16 (32) | RAID1 2×1.8TB (one virtual 200GB, one virtual 1.6 TB) |
NIC Slot 2: Intel(R) 10G 2P X520 Adapter
NIC 2 Slot 1 Partition 1 - enp4s0f0
- Network: Brocade Switch - Server
- Mac-Address: A0:36:9F:B8:BE:30
- IP-Address: 10.0.0.4
NIC 2 Slot 2 Partition 1 - enp4s0f1
- Network: Brocade Switch - Chair
- Mac-Address: A0:36:9F:B8:BE:32
- IP-Address: 131.159.24.21
NIC 1: Broadcom Gigabit Ethernet BCM5720
NIC 1 Port 1 Partition 1
- Network: Dell S4048 Switch Controller
- Mac-Address: 18:66:DA:54:4C:A4
- IP-Address: 10.20.0.1
FX Cluster NODES
Configuration for each FX Node FC630
Name | Operating System | Server |
---|---|---|
Emu<nn>-cm | Ubuntu 14.04.5 LTS Server | Dell PowerEdge FC630 |
Memory (RAM) | CPU Cores (Hyper-threading) | Storage |
---|---|---|
768GB (24x32GB) | 20 (40) | 3 x 446GB SSD RAID5 (one virtual 893 GB SSD) |
Emu03 - Outdated (Xen)
Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1
- Network: Dell Switch
- Mac-Address: 24:6E:96:1C:CD:C0
- IP-Address: 10.0.1.1
NIC 1 Port 2 Partition 1
- Network: Dell Switch
- Mac-Address: 24:6E:96:1C:CD:C2
- IP-Address: 10.0.1.2
NIC 1 Port 3 Partition 1
- Network: Brocade Switch - Server
- Mac-Address: 24:6E:96:1C:CD:C4
- IP-Address: 10.0.0.5
NIC 1 Port 4 Partition 1
- Network: Brocade Switch - Chair
- Mac-Address: 24:6E:96:1C:CD:C6
- IP-Address: 131.159.24.20
Emu04 - Outdated (Xen)
Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1
- Network: Dell Switch
- Mac-Address: 24:6E:96:1C:CD:A0
- IP-Address: 10.0.1.3
NIC 1 Port 2 Partition 1
- Network: Dell Switch
- Mac-Address: 24:6E:96:1C:CD:A2
- IP-Address: 10.0.1.4
NIC 1 Port 3 Partition 1
- Network: Brocade Switch - Server
- Mac-Address: 24:6E:96:1C:CD:A4
- IP-Address: 10.0.0.6
NIC 1 Port 4 Partition 1
- Network: Brocade Switch - Chair
- Mac-Address: 24:6E:96:1C:CD:A6
- IP-Address: 131.159.24.
Emu05 - Emu14
The remaining FX nodes (Emu05 to Emu14) use the same integrated NIC 1 (Intel(R) 10GbE 4P X710-k bNDC) and the same port layout as Emu03/Emu04:
- NIC 1 Port 1 and Port 2, Partition 1: Dell Switch (10.0.1.x)
- NIC 1 Port 3 Partition 1: Brocade Switch - Server (10.0.0.x)
- NIC 1 Port 4 Partition 1: Brocade Switch - Chair (131.159.24.x)
The individual MAC addresses and the final IP octets have not been recorded yet.
Social Computing
There are also some servers in the chair server room which are equipped with graphics cards, e.g. for machine learning purposes.
Hostname | IP Address | MAC Address |
---|---|---|
social1.cm.in.tum.de | 131.159.24.12 | 38:60:77:6a:c6:db |
social2.cm.in.tum.de | 131.159.24.134 | c8:60:00:c7:7F:7e |
social3.cm.in.tum.de | 131.159.24.238 | 10:bf:48:e2:a7:39 |
social4.cm.in.tum.de | 131.159.24.13 | b8:ca:3a:82:d6:6f |
social5.cm.in.tum.de | 131.159.24.171 | - |
social6.cm.in.tum.de | 131.159.24.184 | - |
Network Setup
This section describes the network setup.
- Making server ready for access via the network
- Find MAC Address in the iDrac
If you go to the server room, take notes on the network configuration and how the cables are patched on the back side. Each port has a small name or number next to it.
- There is always one iDrac interface - the rbg-network group should patch this interface into our server management network (VLan132, accessible from vmott2)
- There is always one cable to the chair network - the rbg-network group should patch this one into our chair network (131.159.24.0/23); this cable is often connected to the integrated network card
- The Integrated Network Device/Card has four Ports
- On the iDrac webinterface they are called NIC.Integrated - with each Port behind it (e.g. NIC.Integrated.1-1-1, NIC.Integrated.1-2-1, etc.)
- From these four ports, sometimes the first two ports are replaced with a better network card
- Very often the chair network cable is plugged into the third Integrated Network Port
- Any additional network cards or slots have other names in the iDrac interface
After you have found out which port the chair network cable connects to, you can simply look up the MAC address on the iDrac webinterface.
Find MAC Address
Another way to check whether the link is up and to find the MAC address of the device is to connect directly via ssh to the server management interface. For this step the iDrac interface address needs to be set and ping should work.
- Connect to vmott2
ssh paulth@cm-mgmt.in.tum.de
- From there connect to the desired iDrac interface (see the table in the Overview chapter)
# example with sim01
ssh root@10.250.0.1
# the password is the same as for the webinterface - see password safe
- Now you can use different commands to check the interfaces
Command | Note |
---|---|
racadm getsysinfo | Full system report |
racadm hwinventory nic | Show all network interfaces |
racadm ifconfig | Shows all up and running network interfaces |
racadm nicstatistics <interface> | Use the interface name from the hwinventory command. Only works when the server is running and/or the operating system is booted |
With the nicstatistics command you can find out whether the link is up, but this only works while the server is running.
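A minimal example session (a sketch; the interface name NIC.Integrated.1-3-1 is an assumption - take the real name from the hwinventory output):
# on the iDrac (e.g. ssh root@10.250.0.1)
racadm hwinventory nic
# pick an interface name from the output, e.g. NIC.Integrated.1-3-1 (assumed here)
racadm nicstatistics NIC.Integrated.1-3-1
# the output contains a link status line - "Up" means cable and switch port work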
Network VLAN Tagging
This section describes how to set up VLAN tagging on the servers.
sudo apt install vlan
sudo modprobe 8021q
sudo apt install bridge-utils
sudo vim /etc/network/interfaces

# content of interfaces file (only ubuntu < 18.04)
#--------------------------
# real hardware interface
auto eno1
iface eno1 inet dhcp

# vlan interface
auto eno1.83
iface eno1.83 inet manual
    vlan-raw-device eno1
    vlan_id 83

# bridge interface to vlan
auto chair
iface chair inet dhcp
    bridge_ports eno1.83
    bridge_fd 15
#-------------------------

sudo ifup eno1
sudo ifup eno1.83
sudo ifup chair
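To verify the result (a quick check, assuming the interface names above; brctl comes from bridge-utils):
# show the VLAN id of the tagged interface
ip -d link show eno1.83
# show the bridge and its member port
brctl show chair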
Server Administration - iDRAC
Every server has at least one management network interface. We have our own server management VLAN to administrate the servers. There is one central gateway where admins can log in and from there reach the server management interfaces, via the second network interface of the gateway.
- VLAN address space: 10.0.0.0/8
- RBG intern VLAN name: VLan132 il11_management
- Gateway: vmott2 (131.159.24.136)
- Name: cm-mgmt.informatik.tu-muenchen.de
- Login: ssh
Procedure to administrate servers, check status, install new os, etc:
- New server only: Go to the server room and set a static IP address via the buttons on the server
- Choose one IP from the Management Network: 10.0.0.0/8
- Subnet-Mask: 255.0.0.0
- Gateway: not necessary → 0.0.0.0 (0.0.0.0 is not accepted by iDrac; choose 10.0.0.0 instead)
- DNS: off
- Log in on vmott2 via ssh
ssh paulth@cm-mgmt
- Ping the static iDrac interface address; if there is no response:
- wrong settings on server (ip, subnet-mask, dns)
- wrong VLAN on the iDrac interface → ask the rbg network group whether they can patch the management interface into our management network (VLan 132).
- Log off from vmott2, now you can open a SOCKS5 proxy connection to vmott2, from there you have access to the iDrac webinterface of all servers
ssh -ND 8080 paulth@cm-mgmt.in.tum.de
- Type the command into your terminal; if the cursor stops after your password the connection is successful
- Configure the proxy in your browser. Firefox: Preferences → Advanced → Network → Connection: Settings → Manual Proxy - ONLY SOCKS Host: localhost, Port: 8080 → OK
- Now you can reach every server via
https://<iDrac-interface-ip-address>
# example (sim01)
https://10.250.0.1
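You can also test the proxy from the command line first (a sketch; curl's socks5h scheme resolves through the tunnel, -k skips the self-signed certificate check):
# test the SOCKS tunnel against sim01's iDrac
curl -k --proxy socks5h://localhost:8080 https://10.250.0.1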
- The login is root and the password can be found in our password safe
- OTHER POSSIBILITY: Log off from vmott2 and make an ssh tunnel to access a specific iDrac webinterface
sudo ssh -L 443:SERVER-IP:443 -L 5900:SERVER-IP:5900 -L 5901:SERVER-IP:5901 USER@cm-mgmt.in.tum.de
devimg01 | sudo ssh -L 443:10.0.0.2:443 -L 5900:10.0.0.2:5900 -L 5901:10.0.0.2:5901 paulth@cm-mgmt.in.tum.de |
testbed01 | sudo ssh -L 443:10.100.0.1:443 -L 5900:10.100.0.1:5900 -L 5901:10.100.0.1:5901 paulth@cm-mgmt.in.tum.de |
testbed02 | sudo ssh -L 443:10.100.0.2:443 -L 5900:10.100.0.2:5900 -L 5901:10.100.0.2:5901 paulth@cm-mgmt.in.tum.de |
net01 | sudo ssh -L 443:10.200.0.1:443 -L 5900:10.200.0.1:5900 -L 5901:10.200.0.1:5901 paulth@cm-mgmt.in.tum.de |
net02 | sudo ssh -L 443:10.200.0.2:443 -L 5900:10.200.0.2:5900 -L 5901:10.200.0.2:5901 paulth@cm-mgmt.in.tum.de |
net03 | sudo ssh -L 443:10.200.0.3:443 -L 5900:10.200.0.3:5900 -L 5901:10.200.0.3:5901 paulth@cm-mgmt.in.tum.de |
sim01 | sudo ssh -L 443:10.250.0.1:443 -L 5900:10.250.0.1:5900 -L 5901:10.250.0.1:5901 paulth@cm-mgmt.in.tum.de |
emu01 | sudo ssh -L 443:10.150.0.1:443 -L 5900:10.150.0.1:5900 -L 5901:10.150.0.1:5901 paulth@cm-mgmt.in.tum.de |
After that you can open the iDrac web interface by typing: https://localhost into your web browser.
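The tunnels need sudo because they bind the privileged local ports 443, 5900 and 5901. If you only need the webinterface, an unprivileged local port works as well (a sketch; the local port 8443 is an arbitrary choice):
# no sudo needed - local port above 1024
ssh -L 8443:10.250.0.1:443 paulth@cm-mgmt.in.tum.de
# then browse to https://localhost:8443
Note that the Java virtual console still expects ports 5900/5901, so this shortcut covers the webinterface only.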
Remote racadm
The racadm tool can be used in two ways:
- SSH into the idrac
- Install the racadm package on a Dell server and use it locally for this server (local) or to control remote idrac interfaces (remote)
The racadm package throws errors if installed on a non-Dell server, but the racadm binary is successfully downloaded to /opt/dell/srvadmin/bin/idracadm7. Even though it is named idracadm7, it also works for iDrac8 and is the newest version (9.3.0). Use the following commands to set up a working remote racadm environment on a non-Dell server. The commands work for Ubuntu 18.04 only - check for current versions:
sudo su
echo "deb http://linux.dell.com/repo/community/openmanage/930/bionic bionic main" > /etc/apt/sources.list.d/linux.dell.com.sources.list
gpg --keyserver-options http-proxy=http://proxy.in.tum.de:8080 --keyserver pool.sks-keyservers.net --recv-key 1285491434D8786F
gpg -a --export 1285491434D8786F | sudo apt-key add -
sudo apt update
# install packages - libssl required for ssl connection to idrac
sudo apt install libssl-dev srvadmin-idracadm8
sudo cp /opt/dell/srvadmin/bin/idracadm7 /root/
sudo ln -sf /root/idracadm7 /usr/bin/racadm
# remove broken package installation
sudo apt purge srvadmin-base srvadmin-hapi srvadmin-idracadm7
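Once installed, racadm can address a remote iDrac directly (a sketch using sim01's iDrac address; the password placeholder stands for the entry in the password safe):
# query a remote iDrac over the management network
racadm -r 10.250.0.1 -u root -p 'PASSWORD' getsysinfo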
Server OS Installation
MAAS
- MAAS can install the operating system with a few clicks
- Log in to the dashboard: http://mon01-cm.informatik.tu-muenchen.de:5240/MAAS
- Restart the server with PXE enabled on the network interface that is connected to the Brocade switch and sits in the internal network (vlan_133)
- The server should show up on the “Nodes” tab with a randomly selected name
- To Commission the server (prepare it) and finally Deploy it (boot and install the OS), the power settings have to be set
Power Configuration
- To shut down and start servers, a new user is created on the iDrac
- TODO user creation commands iDRAC
- After the user is created power settings have to be set in MAAS
- In order to reach the iDrac, a route has to be established on mon01:
ip route add 10.200.0.0/24 via 131.159.24.136
- On vmott2 a NAT needs to be configured to forward packets from MAAS to the corresponding server
# uncomment in /etc/sysctl.conf
net.ipv4.ip_forward = 1
# iptables NAT - rewrite all incoming packets from MAAS to the internal vmott2 interface
sudo iptables -t nat -A POSTROUTING -s 131.159.24.39 -j SNAT --to-source 10.0.0.1
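To check that forwarding and the NAT rule are active on vmott2:
# confirm forwarding is enabled (should print 1)
sysctl net.ipv4.ip_forward
# list the NAT rule with packet counters
sudo iptables -t nat -L POSTROUTING -v -n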
Alternative Installation
If the server has the Enterprise license, the image can be mounted virtually from the iDrac interface. With the Express or any other license, a boot stick/CD needs to be prepared and mounted manually down in the server room.
- Log in on the iDrac webinterface -more information
- Virtual console → Start virtual console (next to options) → a Java application is downloaded and executed; make sure that you have a Java runtime environment installed and activated for browser content.
- In the console window → Virtual Device → Connect virtual device → console window again → Virtual Device → Assign DVD/CD → choose the desired image → Assign device
- After choosing the image you have to set the boot option: Console window → Next start → virtual DVD/CD/ISO
- Now you have to restart/power on the server: Console window → Power → System on
OS Installation
- After mounting the boot image and choosing the image as next boot option, restart the server over the iDrac interface
- Now the normal operating system installation dialog should show up
- Configure the general operating system settings as shown below
- After that go to the next section - OS configuration.
ubuntu-16.04-server-amd64
- Language: English
- Install Ubuntu Server
- Select a language: English - English
- Country, territory or area: other→Europe→Germany
- Country to base default locale settings: United States - en_US.UTF-8
- Keyboard Layout: choose as you wish, most of the time English(US)
- Choose network interface: select the interface of the configured chair network interface, more information in the server administration chapter
- Autoconfiguration
- Choose hostname: server name / rbg-hostname (default if autoconfiguration was successful) without the "-cm" suffix, example → sim01-cm = sim01
- User Full name: i11
- User account: i11
- Password: min. 12 characters with numbers, upper- and lowercase letters and special characters; save it in the password safe, more in chapter Passwords
- Encrypt home directory: No (Depends on purpose)
- Setting up clock: Automatically, if not choose timezone
- Time zone: Europe/Berlin
- Try to unmount disks that are in use: yes
- Partition disks: Manual
- Partition menu: go down to the disk with free space→create a new partition→size:max→type:primary
- Partition settings: use as: ext4, mount point: /, bootable flag: off → Done setting up the partition
- Finish partitioning and write changes to disk
- No Swap, continue without swap
- Write changes to disk: Yes
- Installing the system: automatic
- System upgrades: Install security updates automatically
- Software: System Utilities + Openssh Server (choose with space, enter to confirm)
- After installation unmount iso image and restart server
ubuntu-16.04-desktop-amd64
- Language: English
- Install Ubuntu
- Installation Type: Erase disk and install Ubuntu + Use LVM with the new Ubuntu installation
- Where are you: Berlin
- Keyboard Layout: English(US) - English(US)
- Settings: Your name: i11, Your computer's name: <server-name> (e.g. emu01), Username: i11, Password: min. 12 characters with numbers, upper- and lowercase letters and special characters; save it in the password safe, more in chapter Passwords
- After installation unmount iso image and restart server
Operating system configuration
This chapter describes the configuration and integration of a Linux server into our chair environment. The steps should be done in the following order:
- Configure autofs for automatic filesystem mounts
- LDAP authentication
NAS Share + Home automount
- Install autofs
sudo apt-get install autofs
- Edit /etc/auto.master, add the line:
/- /etc/auto.direct
- Create the file /etc/auto.direct:
/home -fstype=nfs,defaults nasil11.informatik.tu-muenchen.de:/srv/il11/home_il11
/share -fstype=nfs,defaults nasil11.informatik.tu-muenchen.de:/srv/il11/share_il11
- If necessary move existing home out of the way
sudo mv /home /home.old
- Reload autofs
sudo service autofs restart
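A quick way to confirm the automount works (the first access triggers the NFS mount):
# accessing the path triggers the mount
ls /share
# the NFS mounts should now be listed
mount | grep nfs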
LDAP authentication
- Install LDAP packages
sudo apt-get install nslcd ldap-utils
- Edit /etc/nslcd.conf, replace the respective lines:
uri ldaps://ldapswitch.in.tum.de:636
base ou=IN,o=TUM,c=de
map passwd homeDirectory "/home/$uid"
- Edit /etc/nsswitch.conf:
passwd: files ldap
group: files ldap
shadow: files ldap
- Update the nslcd service
sudo update-rc.d nslcd enable
- Automatically create home folders when logging in for the first time, edit /etc/pam.d/common-session:
session required pam_mkhomedir.so skel=/etc/skel umask=0022
- Add an empty .Xauthority to the /etc/skel directory
- Restrict access to groups: create a file /etc/login.group.allowed (0644) and fill in the groups:
i11
il11admin
- To restrict access to those groups edit /etc/pam.d/common-auth, add at the top of the file:
auth required pam_listfile.so onerr=fail item=group sense=allow file=/etc/login.group.allowed
- On authentication failure, fall back to pam_unix → first search LDAP, then locally; edit /etc/pam.d/common-auth:
auth sufficient pam_ldap.so try_first_pass
auth sufficient pam_unix.so nullok use_first_pass
- Edit the sudoers file /etc/sudoers for sudo access:
# Members of the LDAP group il11admin get root privileges
%il11admin ALL=(ALL) ALL
- Allow password change from all servers:
- Edit /etc/pam.d/common-session:
session optional pam_ldap.so
- Edit /etc/pam.d/common-auth, remove the use_authtok (already done above):
auth sufficient pam_unix.so nullok use_first_pass
- In the last step restart all authentication services and hope for the best ;)
# it will ask if you want to override the local changes -> choose no
sudo pam-auth-update
sudo service nscd stop
sudo service nslcd restart
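To verify that LDAP lookups work (someldapuser is a placeholder - use any LDAP account of the chair):
# should print the LDAP user with the mapped home directory
getent passwd someldapuser
# shows uid, primary group and LDAP groups
id someldapuser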
Here are all the files that were changed during the process, listed with their content:
Autofs
/etc/auto.direct
/share -fstype=nfs,defaults nasil11.informatik.tu-muenchen.de:/srv/il11/share_il11
# virtual machine
/home_i11 -fstype=nfs,defaults nasil11.informatik.tu-muenchen.de:/srv/il11/home_il11
# server
/home -fstype=nfs,defaults nasil11.informatik.tu-muenchen.de:/srv/il11/home_il11
/etc/auto.master
# Sample auto.master file
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# For details of the format look at autofs(5).
#
/- /etc/auto.direct
#
# NOTE: mounts done from a hosts map will be mounted with the
# "nosuid" and "nodev" options unless the "suid" and "dev"
# options are explicitly given.
#
#/net -hosts
#
# Include /etc/auto.master.d/*.autofs
#
+dir:/etc/auto.master.d
#
# Include central master map if it can be found using
# nsswitch sources.
#
# Note that if there are entries for /net or /misc (as
# above) in the included master map any keys that are the
# same will not be seen as the first read key seen takes
# precedence.
#
+auto.master
/mount /etc/auto_mount -nosuid,noquota
Ldap
/etc/login.group.allowed
root il11admin
/etc/nslcd.conf
# /etc/nslcd.conf
# nslcd configuration file. See nslcd.conf(5)
# for details.

# The user and group nslcd should run as.
uid nslcd
gid nslcd

# The location at which the LDAP server(s) should be reachable.
uri ldap://ldapswitch.informatik.tu-muenchen.de

# The search base that will be used for all queries.
base ou=IN,o=TUM,c=de
map passwd homeDirectory "/home_i11/$uid"

# The LDAP protocol version to use.
#ldap_version 3
...
/etc/nsswitch.conf
# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.

passwd: files ldap
group: files ldap
shadow: files ldap

hosts: files dns
networks: files

protocols: db files
services: db files
ethers: db files
rpc: db files

netgroup: nis
/etc/pam.d/common-session
...
# here are the per-package modules (the "Primary" block)
session [default=1] pam_permit.so
# here's the fallback if no module succeeds
session requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
session required pam_permit.so
# The pam_umask module will set the umask according to the system default in
# /etc/login.defs and user settings, solving the problem of different
# umask settings with different shells, display managers, remote sessions etc.
# See "man pam_umask".
session optional pam_umask.so
# and here are more per-package modules (the "Additional" block)
session required pam_unix.so
session [success=ok default=ignore] pam_ldap.so minimum_uid=1000
session optional pam_systemd.so
session required pam_mkhomedir.so skel=/etc/skel umask=0022
# end of pam-auth-update config
/etc/pam.d/common-auth
...
# here are the per-package modules (the "Primary" block)
auth required pam_listfile.so onerr=fail item=group sense=allow file=/etc/login.group.allowed
auth sufficient pam_ldap.so try_first_pass
auth sufficient pam_unix.so nullok use_first_pass
# here's the fallback if no module succeeds
auth requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth required pam_permit.so
# and here are more per-package modules (the "Additional" block)
auth optional pam_cap.so
# end of pam-auth-update config
/etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root ALL=(ALL:ALL) ALL

# Members of the LDAP group: il11admin get root privileges
%il11admin ALL=(ALL) ALL

# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL

# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL

# See sudoers(5) for more information on "#include" directives:

#includedir /etc/sudoers.d
%root ALL=(ALL) NOPASSWD: ALL
/etc/pam.d/common-password
....
# here are the per-package modules (the "Primary" block)
password [success=2 default=ignore] pam_unix.so obscure sha512
password [success=1 default=ignore] pam_ldap.so minimum_uid=1000 try_first_pass
# here's the fallback if no module succeeds
password requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
password required pam_permit.so
# and here are more per-package modules (the "Additional" block)
# end of pam-auth-update config
LDAP authentication with local caching
- Install ldap-utils and sssd:
# apt install ldap-utils sssd
- Edit the configuration file for sssd in /etc/sssd/sssd.conf:
[sssd]
config_file_version = 2
services = nss, pam
domains = LDAP

[domain/LDAP]
cache_credentials = true
enumerate = false
id_provider = ldap
auth_provider = ldap
ldap_uri = ldaps://ldap.in.tum.de:636
ldap_search_base = ou=IN,o=TUM,c=DE
ldap_network_timeout = 2
entry_cache_timeout = 7776000
- Change the file's permission to 600, otherwise sssd will fail to start:
# chmod 600 /etc/sssd/sssd.conf
- Disable NSCD caching for passwd, group and netgroup, as it would interfere with sssd caching. Change the following lines in /etc/nscd.conf:
enable-cache passwd no
enable-cache group no
enable-cache netgroup no
- Configure NSS to get user and group information from sssd. To do this append sss to the passwd, group, shadow and sudoers lines in /etc/nsswitch.conf:
passwd: files sss
group: files sss
shadow: files sss
sudoers: files sss
- Put any groups you want to be able to log in via LDAP in /etc/login.group.allowed. Do the same thing with individual users in /etc/login.user.allowed. Make sure both files exist, even if one of them may be empty.
# echo il11 >> /etc/login.group.allowed
# touch /etc/login.user.allowed
- Change the permission of each file to 0644:
# chmod 0644 /etc/login.group.allowed
# chmod 0644 /etc/login.user.allowed
- Configure PAM to allow LDAP login
- First edit /etc/pam.d/common-auth:
auth sufficient pam_unix.so nullok
auth sufficient pam_sss.so use_first_pass
auth requisite pam_deny.so
- To restrict users who are allowed to log in via ssh, add the following two lines to /etc/pam.d/sshd immediately before common-account is included:
account [success=1 new_authtok_reqd=1 default=ignore] pam_listfile.so onerr=fail item=group sense=allow file=/etc/login.group.allowed
account required pam_listfile.so onerr=fail item=user sense=allow file=/etc/login.user.allowed
- Certain users should be granted sudo privileges upon login. For those create entries in /etc/security/group.conf:
# members of il11admin are always granted sudo access
*;*;%il11admin;Al0000-2400;adm
# For other users create extra entries
*;*;exampleuser;Al0000-2400;adm
- In order for these rules to apply add the following line to /etc/pam.d/sshd and /etc/pam.d/login before common-auth is included:
auth optional pam_group.so
- Home directories should be created if not present. Add the following to /etc/pam.d/common-session:
session required pam_mkhomedir.so skel=/etc/skel umask=0022
- Also create an empty .Xauthority in /etc/skel:
# touch /etc/skel/.Xauthority
- Finally update PAM (when asked if you wish to overwrite local changes choose no), restart NSCD and start sssd:
# pam-auth-update
# service nscd restart
# service sssd start
- LDAP login is now enabled
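To confirm that sssd answers lookups and the cache is in place (a rough check; someldapuser is a placeholder and sss_cache ships with sssd):
# lookups now go through sssd
getent passwd someldapuser
# invalidate all cached entries if you need a fresh lookup
sudo sss_cache -E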
Filesystem Setup
- Create a data folder under / with permission 777
sudo mkdir /data && sudo chmod 777 /data
NTP TimeServer
Fail2Ban
Fail2Ban is an intrusion prevention system. It bans IP addresses after too many failed login attempts.
- Install the fail2ban package
sudo apt-get install fail2ban
- Copy the template configuration file
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
- Edit the configuration file and adjust settings
sudo vim /etc/fail2ban/jail.local
bantime = 3600 # adjust value
...
[sshd]
enabled = true # add line
- Restart fail2ban Service
sudo service fail2ban restart
- If desired, e-mail notification can be enabled by setting parameters in the configuration file; make sure that sendmail is installed
sudo vim /etc/fail2ban/jail.local
destemail = root@mailschlichter.informatik.tu-muenchen.de
...
action = %(action_mwl)s
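To check that the jail is active and see current bans (fail2ban-client is part of the fail2ban package):
# overview of all jails
sudo fail2ban-client status
# details and banned IPs for the ssh jail
sudo fail2ban-client status sshd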
Automatic Security Updates
Other
- Show system information on ssh login, install landscape-common
sudo apt-get install landscape-common
Passwords
For secure password storage we maintain a KeePass file where all the passwords are stored. Ask the old/current administrator for the most recent password file.
- There is an admin user for every server called i11
- The password is the same for every server purpose - e.g. passwords are the same for all simulation servers, all emulation servers and so on
Other
Chair Network
- Network: 131.159.24.0/23
- Subnet mask: 255.255.254.0
- Broadcast: 131.159.25.255
- RBG-Network intern VLAN: VLan83
Default Login iDrac
A new server has default login credentials for the iDrac interface:
- Login: root
- Password: calvin
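These defaults should be changed right after setup. A sketch via racadm (the assumption here is that the built-in root user sits at user index 2, which is the usual iDrac default - verify before writing):
# change the password of iDrac user index 2 (usually root)
racadm set iDRAC.Users.2.Password 'NEW_PASSWORD'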
Service Tag Numbers
Server | Service Tag | License |
---|---|---|
Devimg01 | 31ZMQ92 | Enterprise |
Testbed01 | 303SQ92 | Enterprise |
Testbed02 | 304MQ92 | Enterprise |
Net01 | 31VSQ92 | Enterprise |
Net02 | 31XPQ92 | Enterprise |
Net03 | CWKTQ92 | Enterprise |
Sim01 | 49P50D2 | Enterprise |
Emu01 | DWX40D2 | Enterprise |
Naming Scheme
- Domain Names: <host>.cm.in.tum.de
- Storage (secure storage, integrated or separate, and computation)
- st01 – st<nn>
- Development
- devimg01 – devimg<nn>
- Simulation
- sim01 – sim<nn>
- Testbed (RIFE, SSICLOPS)
- testbed01 – testbed<nn>
- Network Performance tests
- net01 – net<nn>
- Emulation (VMs..)
- emu01 – emu<nn>
TUM Tag + Server Label
A few weeks after the server hardware order, the bill should arrive at the secretary's office together with a TUM label. This label should be put on the respective server, in the same position as on the servers already labeled in the server room. Together with the label, put a paper with the server name (e.g. emu01) on each server: print the server name (e.g. sim01.cm.in.tum.de) on white paper (LibreOffice: DejaVu Sans, 12) and fix it with some sellotape.
- Put TUM label on server, same position as on the other servers
- Print white paper with server name + cm.in.tum.de (DejaVu Sans, 12) and fix it on the server next to TUM label
Atschlichter3 Shutdown
Commands to boot atschlichter3 when GRUB is corrupted and a restart ends up in a minimal GRUB shell.
Commands:
- set root=md0
- linux /vmlinuz root=/dev/md0
- initrd /initrd.img
- boot
The server should boot and is reachable via ssh.
Server Update
The Dell servers can be booted with a platform-specific bootable ISO which updates all server components automatically. It is recommended to do this once in a while, as updating the firmware can also resolve hardware issues (e.g. RAM problems).
Ansible (semi-automatic):
- Download all bootable ISO images from the Dell support website (see manual process) and save them on the idrac-gtw:/srv/idrac-img/download/ (vmott22)
- Log in on AWX and execute the “infra - iDrac Update ISOs” playbook
- After that make sure nobody is using the server in question and execute the "infra - iDrac Tasks" playbook with the limit set to the server and "Update iDrac" set to true.
- Server will be offline during the update
Manual process:
- Download the server-model-specific ISO from this website
- OR go to this website, enter the server's Service Tag, choose the category "Systemverwaltung" (systems management), search for "Platform Specific Bootable ISO" and download it
- Start the server iDrac management tool
- Mount the ISO image as a virtual CD, select Virtual CD as boot parameter and reboot the server
- Server should automatically boot into the ISO file and start the updates (can take up to 1 hour or more)
- After the updates are finished, boot the server into the same ISO again; due to firmware dependencies the update should run twice, as sometimes not all updates can be applied in the first run
- After running the ISO twice, the ISO can be unmounted and the server rebooted from the hard drive
- Done