Here you can find all the server information.
Server Room:
The keys for the server room are in the secretary's office, in the locker behind the door, in a small wooden box. They are labeled "Server Raum" (server room). Please turn off the lights and lock the door when leaving the server room.
Server Name | Type | Service Tag | OS | iDrac-IP | Chair-IP | Host Name | Note |
---|---|---|---|---|---|---|---|
Devimg01 | Dell PowerEdge R530 | 31ZMQ92 | Ubuntu 14.04.3 | 10.0.0.2 | 131.159.24.137 | devimg01-cm | Development and Image Server |
Testbed01 | Dell PowerEdge R530 | 303SQ92 | Ubuntu 14.04.3 | 10.100.0.1 | 131.159.24.142 | testbed01-cm | Testbed for RIFE project |
Testbed02 | Dell PowerEdge R530 | 304MQ92 | Ubuntu 14.04.3 | 10.100.0.2 | 131.159.24.150 | testbed02-cm | Testbed for LLCM / SSICLOPS |
Net01 | Dell PowerEdge R730 | 31VSQ92 | FreeBSD 10.2 + Ubuntu 14.04.3 | 10.200.0.1 | 131.159.24.163 | net01-cm | Networking / Cloud / Performance Tests |
Net02 | Dell PowerEdge R730 | 31XPQ92 | FreeBSD 10.2 + Ubuntu 14.04.3 | 10.200.0.2 | 131.159.24.151 | net02-cm | Networking / Cloud / Performance Tests |
Net03 | Dell PowerEdge R730 | CWKTQ92 | FreeBSD 10.2 + Ubuntu 14.04.3 | 10.200.0.3 | 131.159.24.166 | net03-cm | Networking / Cloud / Performance Tests |
Net04 | Dell PowerEdge R730 | 1JXFQK2 | Ubuntu 16.04 | 10.200.0.4 | - | net04.cm | SSICLOPS Project - FPGA Offloading Tests |
Net05 | Dell PowerEdge R730 | 1JW8QK2 | Ubuntu 16.04 | 10.200.0.5 | - | net05.cm | SSICLOPS Project - FPGA Offloading Tests |
Net06 | Dell PowerEdge R750 | 60Q52V3 | Ubuntu 22.04.01 | 10.30.0.1 | 131.159.25.63 | net-06 | |
Net07 | Dell PowerEdge R750 | B44GZT3 | Ubuntu 22.04.01 | 10.30.0.2 | 131.159.25.64 | net-07 | |
Sim01 | Dell PowerEdge R730 | 49P50D2 | Ubuntu 16.04 LTS Server | 10.250.0.1 | 131.159.24.15 | sim01-cm | Simulation Server |
Emu01 | Dell PowerEdge R630 | DWX40D2 | Ubuntu 16.04 LTS Desktop | 10.150.0.1 | 131.159.24.18 | emu01-cm | Emulation Server |
Emu02 | Dell PowerEdge R730 | 9DG49F2 | - | 10.150.0.2 | 131.159.24.21 | emu02-cm | Server Dell S4048 Switch Controller |
FX2-1 | Dell PowerEdge FX2 Chassis | 49WZ8F2 | Firmware 1.3 | 10.150.10.1 | - | emu-fx-1 | Chassis Controller Management Emu03-Emu06 |
Emu03 | Dell PowerEdge FC630 | 49L19F2 | Ubuntu 16.04 LTS Server | 10.150.0.3 | 131.159.24.20 | emu03-cm | Emulation Servercluster |
Emu04 | Dell PowerEdge FC630 | 49M09F2 | Ubuntu 16.04 LTS Server | 10.150.0.4 | - | emu04-cm | Emulation Servercluster |
Emu05 | Dell PowerEdge FC630 | 49M59F2 | Ubuntu 16.04 LTS Server | 10.150.0.5 | - | emu05-cm | Emulation Servercluster |
Emu06 | Dell PowerEdge FC630 | 49MZ8F2 | Ubuntu 16.04 LTS Server | 10.150.0.6 | - | emu06-cm | Emulation Servercluster |
FX2-2 | Dell PowerEdge FX2 Chassis | 49LZ8F2 | Firmware 1.3 | 10.150.10.2 | - | emu-fx-2 | Chassis Controller Management Emu07-Emu10 |
Emu07 | Dell PowerEdge FC630 | 49R69F2 | Ubuntu 16.04 LTS Server | 10.150.0.7 | - | emu07-cm | Emulation Servercluster |
Emu08 | Dell PowerEdge FC630 | 49S29F2 | Ubuntu 16.04 LTS Server | 10.150.0.8 | - | emu08-cm | Emulation Servercluster |
Emu09 | Dell PowerEdge FC630 | 49S69F2 | Ubuntu 16.04 LTS Server | 10.150.0.9 | - | emu09-cm | Emulation Servercluster |
Emu10 | Dell PowerEdge FC630 | 49T19F2 | Ubuntu 16.04 LTS Server | 10.150.0.10 | - | emu10-cm | Emulation Servercluster |
FX2-3 | Dell PowerEdge FX2 Chassis | 48M59F2 | Firmware 1.3 | 10.150.10.3 | - | emu-fx-3 | Chassis Controller Management Emu11-Emu14 |
Emu11 | Dell PowerEdge FC630 | 48359F2 | Ubuntu 16.04 LTS Server | 10.150.0.11 | - | emu11-cm | Emulation Servercluster |
Emu12 | Dell PowerEdge FC630 | 48529F2 | Ubuntu 16.04 LTS Server | 10.150.0.12 | - | emu12-cm | Emulation Servercluster |
Emu13 | Dell PowerEdge FC630 | 486Y8F2 | Ubuntu 16.04 LTS Server | 10.150.0.13 | - | emu13-cm | Emulation Servercluster |
Emu14 | Dell PowerEdge FC630 | 487Z8F2 | Ubuntu 16.04 LTS Server | 10.150.0.14 | - | emu14-cm | Emulation Servercluster |
Mon01 | Dell PowerEdge R430 | 6L0T7J2 | Ubuntu 16.04 LTS Server | 10.25.0.1 | - | mon01-cm | Storage Monitoring Server |
Sto01 | Dell PowerEdge R730xd | 6KKN7J2 | Ubuntu 16.04 LTS Server | 10.50.0.1 | - | sto01-cm | Storage Server |
Sto02 | Dell PowerEdge R730xd | 6KLQ7J2 | Ubuntu 16.04 LTS Server | 10.50.0.2 | - | sto02-cm | Storage Server |
cmp01 | Dell PowerEdge R6525 | 88WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.1 | 131.159.25.22 | cmp-01 | Compute Server |
cmp02 | Dell PowerEdge R6525 | 68WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.2 | 131.159.25.21 | cmp-02 | Compute Server |
cmp03 | Dell PowerEdge R6525 | 58WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.3 | 131.159.25.23 | cmp-03 | Compute Server |
cmp04 | Dell PowerEdge R6525 | 48WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.4 | 131.159.25.24 | cmp-04 | Compute Server |
cmp05 | Dell PowerEdge R6525 | 78WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.5 | 131.159.25.25 | cmp-05 | Compute Server |
cmp06 | Dell PowerEdge R6525 | 98WS6J3 | Ubuntu 20.04.03 LTS Server | 10.10.0.6 | 131.159.25.26 | cmp-06 | Compute Server |
gpu01 | Dell PowerEdge R7525 | GV8BYJ3 | Ubuntu 20.04.05 LTS Server | 10.20.0.1 | 131.159.25.18 | gpu-01 | GPU Server |
gpu02 | Dell PowerEdge R7525 | FV8BYJ3 | Ubuntu 20.04.05 LTS Server | 10.20.0.2 | 131.159.25.19 | gpu-02 | GPU Server |
More specific information about each server is listed below.
Purpose: Development server and operating system images for the other servers
Operating System | Chair-IP | iDrac-IP | Server |
---|---|---|---|
Ubuntu 14.04.3 | 131.159.24.137 | 10.0.0.2 | Dell PowerEdge R530 |
Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage |
---|---|---|
125GB | 16 (32) | 4x4TB = 16TB with RAID5 (one virtual disk with 11TB) |
Management interface (Dell iDRAC)
Network interface em1
Network interface 10 Gigabit
Purpose: Testbed for the RIFE (testbed01) and LLCM/SSICLOPS (testbed02) projects. We have two testbed servers that are completely identical.
Name | Operating System | Chair-IP | iDrac-IP | Server | Notes |
---|---|---|---|---|---|
Testbed01-cm | Ubuntu 14.04.3 | 131.159.24.142 | 10.100.0.1 | Dell PowerEdge R530 | RIFE project |
Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage |
---|---|---|
105GB | 16 (32) | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) |
Management interface (Dell iDRAC)
Network interface em1
Network interface 10 Gigabit
Virtualization with two machines
Name | Operating System | Chair-IP | iDrac-IP | Server | Notes |
---|---|---|---|---|---|
Testbed02-cm | Ubuntu 14.04.3 | 131.159.24.150 | 10.100.0.2 | Dell PowerEdge R530 | LLCM/SSICLOPS project |
Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage |
---|---|---|
111GB | 16 (32) | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) |
Management interface (Dell iDRAC)
Network interface em1
Network interface 10 Gigabit
The first three Net servers have an external graphics card for offloading experiments/tests. All three have an S7000 graphics card installed.
To check whether the graphics card is installed, use the following command:
sudo lspci -v | grep "S7000" -A 17 -B 2
There are three servers for network and performance tests.
Name | Operating System | Chair-IP | iDrac-IP | Server | Storage | Notes |
---|---|---|---|---|---|---|
Net01-cm | Ubuntu 14.04.3 / FreeBSD 10.2 | 131.159.24.163 | 10.200.0.1 | Dell PowerEdge R730 | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) | |
Net02-cm | Ubuntu 14.04.3 / FreeBSD 10.2 | 131.159.24.151 | 10.200.0.2 | Dell PowerEdge R730 | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) | |
Net03-cm | Ubuntu 14.04.3 / FreeBSD 10.2 | 131.159.24.166 | 10.200.0.3 | Dell PowerEdge R730 | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) | |
Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage |
---|---|---|
125GB | 12 (24) | 2x2TB = 4TB with RAID1 (one virtual disk with 1.7TB) |
The next two Net servers have FPGA network cards installed:
These are the specifications of the two servers dedicated to offloading tests.
Name | Operating System | Service Tag | iDrac-IP | Server | Storage | Notes |
---|---|---|---|---|---|---|
Net04.cm | Ubuntu 16.04 (MAAS) | 1JXFQK2 | 10.200.0.4 | Dell PowerEdge R730 | - | Installed with MAAS |
Net05.cm | Ubuntu 16.04 (MAAS) | 1JW8QK2 | 10.200.0.5 | Dell PowerEdge R730 | - | Installed with MAAS |
Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage |
---|---|---|
64GB | 12 | 600GB data storage (virtual disk RAID-1) - 120GB SSD system storage (virtual disk RAID-0) |
Both servers have two 120GB SSDs. The first thought was to create a RAID-1 with the SSDs, but as mentioned in several discussions there is a very high chance that both SSDs fail at the same time in a RAID-1. So one SSD is currently left unused. If the system SSD fails, we can create a new virtual disk (via iDRAC) with the remaining SSD and do a quick reinstall with MAAS.
Purpose: Simulation server
Name | Operating System | Chair-IP | iDrac-IP | Server | Notes |
---|---|---|---|---|---|
Sim01-cm | Ubuntu 16.04 Server | 131.159.24.15 | 10.250.0.1 | Dell PowerEdge R730 |
Memory (RAM) | Real CPU Cores (Hyper-threading) | Storage |
---|---|---|
251GB | 16 (32) | 4x560GB = 2.2TB with RAID1 (one virtual disk with 1.1TB) |
Management interface (Dell iDRAC)
Network interface eno3
Purpose: Emulation server
Name | Operating System | Chair-IP | iDrac-IP | Server |
---|---|---|---|---|
Emu01-cm | Ubuntu 16.04 Desktop | 131.159.24.18 | 10.150.0.1 | Dell PowerEdge R630 |
Memory (RAM) | CPU Cores (Hyper-threading) | Storage |
---|---|---|
123GB | 32 (64) | 4x280GB=1080GB RAID5 (one virtual disk with 840GB) |
Management interface (Dell iDRAC)
Network interface eno3
Name | Operating System | Chair-IP | iDrac-IP | Server |
---|---|---|---|---|
Emu02-cm | Ubuntu 16.04.1 Server | 131.159.24.21 | 10.150.0.2 | Dell PowerEdge R730 |
Memory (RAM) | CPU Cores (Hyper-threading) | Storage |
---|---|---|
128GB | 16 (32) | RAID1 2×1.8TB (one virtual 200GB, one virtual 1.6 TB) |
NIC Slot 2: Intel(R) 10G 2P X520 Adapter
NIC 2 Slot 1 Partition 1 - enp4s0f0
NIC 2 Slot 2 Partition 1 - enp4s0f1
NIC 1: Broadcom Gigabit Ethernet BCM5720
NIC 1 Port 1 Partition 1
Configuration for each FX Node FC630
Name | Operating System | Server |
---|---|---|
Emu<nn>-cm | Ubuntu 14.04.5 LTS Server | Dell PowerEdge FC630 |
Memory (RAM) | CPU Cores (Hyper-threading) | Storage |
---|---|---|
768GB (24x32GB) | 20 (40) | 3 x 446GB SSD RAID5 (one virtual 893 GB SSD) |
Integrated NIC 1: Intel(R) 10GbE 4P X710-k bNDC
NIC 1 Port 1 Partition 1
NIC 1 Port 2 Partition 1
NIC 1 Port 3 Partition 1
NIC 1 Port 4 Partition 1
There are also some servers in the chair server room that are equipped with graphics cards, e.g. for machine learning purposes.
Hostname | IP Address | MAC Address |
---|---|---|
social1.cm.in.tum.de | 131.159.24.12 | 38:60:77:6a:c6:db |
social2.cm.in.tum.de | 131.159.24.134 | c8:60:00:c7:7F:7e |
social3.cm.in.tum.de | 131.159.24.238 | 10:bf:48:e2:a7:39 |
social4.cm.in.tum.de | 131.159.24.13 | b8:ca:3a:82:d6:6f |
social5.cm.in.tum.de | 131.159.24.171 | - |
social6.cm.in.tum.de | 131.159.24.184 | - |
This section describes the network setup.
When you go to the server room, take notes on the network configuration, i.e. how the cables are patched on the back side. Each port has a small name or number next to it.
Once you have found out to which port the chair network cable connects, you can simply look up the MAC address in the iDRAC web interface.
Another way to check whether the link is up and to find the MAC address of the device is to connect directly via SSH to the server management interface. For this step the iDRAC interface address needs to be set and reachable via ping.
ssh paulth@cm-mgmt.in.tum.de
# example with sim01
ssh root@10.250.0.1
# password is the same as for the web interface - see password safe
command | Note |
---|---|
racadm getsysinfo | Full system report |
racadm hwinventory nic | Show all network interfaces |
racadm ifconfig | Shows all up and running network interfaces |
racadm nicstatistics <interface> | Use the interface from the hwinventory command. Only works when the server is running and the operating system is booted |
With the nicstatistics command you can find out whether the link is up, but this only works while the server is running.
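For example, to check the link status of a specific NIC you can first list the NIC FQDDs and then query one of them (a sketch; the FQDD shown is only an example, the real names come from the hwinventory output):
# example with sim01's iDRAC
ssh root@10.250.0.1
racadm hwinventory nic                      # lists the NIC FQDDs, e.g. NIC.Integrated.1-1-1
racadm nicstatistics NIC.Integrated.1-1-1   # link status, MAC address and packet counters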
This section describes how to set up VLAN tagging on the servers.
sudo apt install vlan
sudo modprobe 8021q
sudo apt install bridge-utils
sudo vim /etc/network/interfaces

# content of interfaces file (only Ubuntu < 18.04)
#--------------------------
# real hardware interface
auto eno1
iface eno1 inet dhcp

# vlan interface
auto eno1.83
iface eno1.83 inet manual
vlan-raw-device eno1
vlan_id 83

# bridge interface to vlan
auto chair
iface chair inet dhcp
bridge_ports eno1.83
bridge_fd 15
#-------------------------

sudo ifup eno1
sudo ifup eno1.83
sudo ifup chair
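On newer Ubuntu releases (18.04 and later, e.g. net06/net07 or the cmp/gpu servers), /etc/network/interfaces is replaced by netplan. A roughly equivalent VLAN-plus-bridge configuration could look like the following sketch; the interface name, VLAN ID and file name are assumptions carried over from the example above:
# /etc/netplan/60-vlan-chair.yaml (example file name)
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: true
  vlans:
    eno1.83:
      id: 83
      link: eno1
  bridges:
    chair:
      interfaces: [eno1.83]
      dhcp4: true
Apply it with sudo netplan apply.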
Every server has at least one management network interface. We have our own server management VLAN to administrate the servers. There is one central gateway where admins can log in; from there the server management interfaces are reachable via the gateway's second network interface.
Procedure to administrate servers, check status, install a new OS, etc.:
- Log in to the management gateway: ssh paulth@cm-mgmt
- Or open a SOCKS proxy through the gateway: ssh -ND 8080 paulth@cm-mgmt.in.tum.de
- Open the iDRAC web interface: https://<iDrac-interface-ip-address> # example (sim01): https://10.250.0.1
- The username is root and the password can be found in our password safe.
Alternatively, forward the iDRAC web and console ports through the gateway:
sudo ssh -L 443:SERVER-IP:443 -L 5900:SERVER-IP:5900 -L 5901:SERVER-IP:5901 USER@cm-mgmt.in.tum.de
Server | Command |
---|---|
devimg01 | sudo ssh -L 443:10.0.0.2:443 -L 5900:10.0.0.2:5900 -L 5901:10.0.0.2:5901 paulth@cm-mgmt.in.tum.de |
testbed01 | sudo ssh -L 443:10.100.0.1:443 -L 5900:10.100.0.1:5900 -L 5901:10.100.0.1:5901 paulth@cm-mgmt.in.tum.de |
testbed02 | sudo ssh -L 443:10.100.0.2:443 -L 5900:10.100.0.2:5900 -L 5901:10.100.0.2:5901 paulth@cm-mgmt.in.tum.de |
net01 | sudo ssh -L 443:10.200.0.1:443 -L 5900:10.200.0.1:5900 -L 5901:10.200.0.1:5901 paulth@cm-mgmt.in.tum.de |
net02 | sudo ssh -L 443:10.200.0.2:443 -L 5900:10.200.0.2:5900 -L 5901:10.200.0.2:5901 paulth@cm-mgmt.in.tum.de |
net03 | sudo ssh -L 443:10.200.0.3:443 -L 5900:10.200.0.3:5900 -L 5901:10.200.0.3:5901 paulth@cm-mgmt.in.tum.de |
sim01 | sudo ssh -L 443:10.250.0.1:443 -L 5900:10.250.0.1:5900 -L 5901:10.250.0.1:5901 paulth@cm-mgmt.in.tum.de |
emu01 | sudo ssh -L 443:10.150.0.1:443 -L 5900:10.150.0.1:5900 -L 5901:10.150.0.1:5901 paulth@cm-mgmt.in.tum.de |
After that you can open the iDRAC web interface by typing https://localhost into your web browser.
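To quickly verify a tunnel without a browser, curl can be pointed at the forwarded port or at the SOCKS proxy (a sketch; the iDRAC IP is sim01's from the table above, -k skips the self-signed certificate check):
# via the -L port forward
curl -k https://localhost
# via the -ND 8080 SOCKS proxy
curl -k --socks5-hostname localhost:8080 https://10.250.0.1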
The racadm tool can be used in two ways: directly on the iDRAC (via SSH, as shown above) or remotely from another host with the racadm package installed.
The racadm package throws errors if installed on a non-Dell server, but the racadm binary is still downloaded successfully to /opt/dell/srvadmin/bin/idracadm7. Even though it is named idracadm7, it also works for iDRAC8 and is the newest version (9.3.0). Use the following commands to set up a working remote racadm environment on a non-Dell server. The commands work for Ubuntu 18.04 only; have a look for current versions:
sudo su
echo "deb http://linux.dell.com/repo/community/openmanage/930/bionic bionic main" > /etc/apt/sources.list.d/linux.dell.com.sources.list
gpg --keyserver-options http-proxy=http://proxy.in.tum.de:8080 --keyserver pool.sks-keyservers.net --recv-key 1285491434D8786F
gpg -a --export 1285491434D8786F | sudo apt-key add -
sudo apt update
# install packages - libssl required for ssl connection to idrac
sudo apt install libssl-dev srvadmin-idracadm8
sudo cp /opt/dell/srvadmin/bin/idracadm7 /root/
sudo ln -sf /root/idracadm7 /usr/bin/racadm
# remove broken package installation
sudo apt purge srvadmin-base srvadmin-hapi srvadmin-idracadm7
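Once the remote racadm is set up, commands can be run against an iDRAC over the network, for example (the IP is sim01's iDRAC from the table above, the password is in the password safe):
racadm -r 10.250.0.1 -u root -p '<password>' getsysinfo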
ip route add 10.200.0.0/24 via 131.159.24.136
# uncomment in /etc/sysctl.conf
net.ipv4.ip_forward = 1

# iptables NAT - rewrite all incoming packets from MAAS to internal vmott2 interface
sudo iptables -t nat -A POSTROUTING -s 131.159.24.39 -j SNAT --to-source 10.0.0.1
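After editing /etc/sysctl.conf and adding the NAT rule, the changes can be applied and checked like this:
sudo sysctl -p                              # reload /etc/sysctl.conf and enable ip_forward
sudo iptables -t nat -L POSTROUTING -n -v   # the SNAT rule for 131.159.24.39 should be listed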
If the server has an Enterprise license, the image can be mounted virtually from the iDRAC interface. With the Express or any other license, a boot stick/CD needs to be prepared and mounted manually down in the server room.
This chapter describes the configuration and integration of a Linux server into our chair environment. The steps should be done in the following order:
sudo apt-get install autofs
Add the following line to /etc/auto.master:
/- /etc/auto.direct
Create /etc/auto.direct with the following content:
/home -fstype=nfs,defaults nasil11.informatik.tu-muenchen.de:/srv/il11/home_il11
/share -fstype=nfs,defaults nasil11.informatik.tu-muenchen.de:/srv/il11/share_il11
sudo mv /home /home.old
sudo service autofs restart
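A quick way to check that the automounts work after the restart (a sketch):
ls /home /share    # accessing the paths triggers the automount
mount | grep nfs   # the nasil11 exports should now be listed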
sudo apt-get install nslcd ldap-utils
In /etc/nslcd.conf, replace the respective lines:
uri ldaps://ldapswitch.in.tum.de:636
base ou=IN,o=TUM,c=de
map passwd homeDirectory "/home/$uid"
In /etc/nsswitch.conf:
passwd: files ldap
group: files ldap
shadow: files ldap
sudo update-rc.d nslcd enable
In /etc/pam.d/common-session, add (this creates home directories at login from the /etc/skel directory):
session required pam_mkhomedir.so skel=/etc/skel umask=0022
Create /etc/login.group.allowed (0644) and fill in the groups i11 and il11admin.
In /etc/pam.d/common-auth, add to the top of the file:
auth required pam_listfile.so onerr=fail item=group sense=allow file=/etc/login.group.allowed
In /etc/pam.d/common-auth:
auth sufficient pam_ldap.so try_first_pass
auth sufficient pam_unix.so nullok use_first_pass
In /etc/sudoers:
# Members of the LDAP group il11admin get root privileges
%il11admin ALL=(ALL) ALL
In /etc/pam.d/common-session:
session optional pam_ldap.so
In /etc/pam.d/common-auth, remove use_authtok (already done above):
auth sufficient pam_unix.so nullok use_first_pass
# it will ask if you want to override the local changes -> choose no
sudo pam-auth-update
sudo service nscd stop
sudo service nslcd restart
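To verify that LDAP users are now resolved through NSS and nslcd, something like the following can be used; the username is a placeholder:
getent passwd <ldap-username>   # placeholder username; should return the LDAP entry
id <ldap-username>              # shows the LDAP groups of the user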
Here are all the files, with their content, that were changed during the process:
Autofs
/share -fstype=nfs,defaults nasil11.informatik.tu-muenchen.de:/srv/il11/share_il11
# virtual machine
/home_i11 -fstype=nfs,defaults nasil11.informatik.tu-muenchen.de:/srv/il11/home_il11
# server
/home -fstype=nfs,defaults nasil11.informatik.tu-muenchen.de:/srv/il11/home_il11
# Sample auto.master file
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# For details of the format look at autofs(5).
#
/- /etc/auto.direct
#
# NOTE: mounts done from a hosts map will be mounted with the
#       "nosuid" and "nodev" options unless the "suid" and "dev"
#       options are explicitly given.
#
#/net -hosts
#
# Include /etc/auto.master.d/*.autofs
#
+dir:/etc/auto.master.d
#
# Include central master map if it can be found using
# nsswitch sources.
#
# Note that if there are entries for /net or /misc (as
# above) in the included master map any keys that are the
# same will not be seen as the first read key seen takes
# precedence.
#
+auto.master
/mount /etc/auto_mount -nosuid,noquota
Ldap
root il11admin
# /etc/nslcd.conf
# nslcd configuration file. See nslcd.conf(5)
# for details.

# The user and group nslcd should run as.
uid nslcd
gid nslcd

# The location at which the LDAP server(s) should be reachable.
uri ldap://ldapswitch.informatik.tu-muenchen.de

# The search base that will be used for all queries.
base ou=IN,o=TUM,c=de
map passwd homeDirectory "/home_i11/$uid"

# The LDAP protocol version to use.
#ldap_version 3
...
# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.

passwd:     files ldap
group:      files ldap
shadow:     files ldap

hosts:      files dns
networks:   files

protocols:  db files
services:   db files
ethers:     db files
rpc:        db files

netgroup:   nis
...
# here are the per-package modules (the "Primary" block)
session [default=1] pam_permit.so
# here's the fallback if no module succeeds
session requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
session required pam_permit.so
# The pam_umask module will set the umask according to the system default in
# /etc/login.defs and user settings, solving the problem of different
# umask settings with different shells, display managers, remote sessions etc.
# See "man pam_umask".
session optional pam_umask.so
# and here are more per-package modules (the "Additional" block)
session required pam_unix.so
session [success=ok default=ignore] pam_ldap.so minimum_uid=1000
session optional pam_systemd.so
session required pam_mkhomedir.so skel=/etc/skel umask=0022
# end of pam-auth-update config
...
# here are the per-package modules (the "Primary" block)
auth required pam_listfile.so onerr=fail item=group sense=allow file=/etc/login.group.allowed
auth sufficient pam_ldap.so try_first_pass
auth sufficient pam_unix.so nullok use_first_pass
# here's the fallback if no module succeeds
auth requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth required pam_permit.so
# and here are more per-package modules (the "Additional" block)
auth optional pam_cap.so
# end of pam-auth-update config
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root ALL=(ALL:ALL) ALL

# Members of the LDAP group: il11admin get root privileges
%il11admin ALL=(ALL) ALL

# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL

# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL

# See sudoers(5) for more information on "#include" directives:

#includedir /etc/sudoers.d
%root ALL=(ALL) NOPASSWD: ALL
...
# here are the per-package modules (the "Primary" block)
password [success=2 default=ignore] pam_unix.so obscure sha512
password [success=1 default=ignore] pam_ldap.so minimum_uid=1000 try_first_pass
# here's the fallback if no module succeeds
password requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
password required pam_permit.so
# and here are more per-package modules (the "Additional" block)
# end of pam-auth-update config
# apt install ldap-utils sssd
Create /etc/sssd/sssd.conf:
[sssd]
config_file_version = 2
services = nss, pam
domains = LDAP

[domain/LDAP]
cache_credentials = true
enumerate = false
id_provider = ldap
auth_provider = ldap
ldap_uri = ldaps://ldap.in.tum.de:636
ldap_search_base = ou=IN,o=TUM,c=DE
ldap_network_timeout = 2
entry_cache_timeout = 7776000
The file permissions must be 600, otherwise sssd will fail to start:
# chmod 600 /etc/sssd/sssd.conf
Disable nscd caching for passwd, group and netgroup, as it would interfere with sssd caching. Change the following lines in /etc/nscd.conf:
enable-cache passwd no
enable-cache group no
enable-cache netgroup no
Add sss to the passwd, group, shadow and sudoers lines in /etc/nsswitch.conf:
passwd: files sss
group: files sss
shadow: files sss
sudoers: files sss
Add the allowed groups to /etc/login.group.allowed. Do the same thing with individual users in /etc/login.user.allowed. Make sure both files exist, even if one of them may be empty.
# echo il11 >> /etc/login.group.allowed
# touch /etc/login.user.allowed
Set the permissions of both files to 0644:
# chmod 0644 /etc/login.group.allowed
# chmod 0644 /etc/login.user.allowed
In /etc/pam.d/common-auth:
auth sufficient pam_unix.so nullok
auth sufficient pam_sss.so use_first_pass
auth requisite pam_deny.so
In /etc/pam.d/sshd, immediately before common-account is included:
account [success=1 new_authtok_reqd=1 default=ignore] pam_listfile.so onerr=fail item=group sense=allow file=/etc/login.group.allowed
account required pam_listfile.so onerr=fail item=user sense=allow file=/etc/login.user.allowed
In /etc/security/group.conf:
# members of il11admin are always granted sudo access
*;*;%il11admin;Al0000-2400;adm
# For other users create extra entries
*;*;exampleuser;Al0000-2400;adm
In /etc/pam.d/sshd and /etc/pam.d/login, before common-auth is included:
auth optional pam_group.so
In /etc/pam.d/common-session:
session required pam_mkhomedir.so skel=/etc/skel umask=0022
Create an empty .Xauthority in /etc/skel:
# touch /etc/skel/.Xauthority
# pam-auth-update
# service nscd restart
# service sssd start
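After restarting the services, the sssd-based lookups and the home directory creation can be checked in the same way (the username is a placeholder):
getent passwd <ldap-username>   # placeholder username; should now be answered by the sss module
su - <ldap-username>            # test login; pam_mkhomedir should create the home directory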
sudo mkdir /data && sudo chmod 777 /data
Fail2Ban is an intrusion prevention system. It bans IP addresses after too many failed login attempts.
sudo apt-get install fail2ban
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo vim /etc/fail2ban/jail.local
bantime=3600  # adjust value
...
[sshd]
enabled=true  # add line
sudo service fail2ban restart
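To confirm that the sshd jail is active and to inspect current bans:
sudo fail2ban-client status        # lists all active jails
sudo fail2ban-client status sshd   # shows currently banned IPs for the sshd jail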
sudo vim /etc/fail2ban/jail.local
destemail = root@mailschlichter.informatik.tu-muenchen.de
...
action = %(action_mwl)s
- Show system information on ssh login, install landscape-common
sudo apt-get install landscape-common
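landscape-common provides the system summary shown at SSH login; it can also be printed on demand:
landscape-sysinfo   # prints the same system summary manually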
For secure password storage we maintain a KeePass file where all the passwords are stored. Ask the previous/current administrator for the most recent password file.
A new server ships with default login credentials for the iDRAC interface. The iDRAC licenses of our servers:
Server | Service Tag | License |
---|---|---|
Devimg01 | 31ZMQ92 | Enterprise |
Testbed01 | 303SQ92 | Enterprise |
Testbed02 | 304MQ92 | Enterprise |
Net01 | 31VSQ92 | Enterprise |
Net02 | 31XPQ92 | Enterprise |
Net03 | CWKTQ92 | Enterprise |
Sim01 | 49P50D2 | Enterprise |
Emu01 | DWX40D2 | Enterprise |
A few weeks after the server hardware order, the bill should arrive at the secretary's office together with a TUM label. This label should be put on the respective server, in the same position as on the servers already labeled in the server room. Together with the label, put a paper with the server name (e.g. emu01) on each server. Print the server name (e.g. sim01.cm.in.tum.de) on white paper (LibreOffice: DejaVu Sans, 12) and fix it with some sellotape.
Commands to restart atschlichter3 when GRUB is corrupted and the restart ends up in a minimal bash environment.
Commands:
The server should then boot and be reachable via SSH.
The Dell servers can be booted with a platform-specific bootable ISO which updates all server components automatically. It is recommended to do this once in a while, as hardware issues (e.g. RAM problems) can also be resolved by updating the firmware.
Ansible (semi-automatic):
Manual process: