This procedure describes how to use a Restore Roll to upgrade or reconfigure your existing Rocks cluster.
Restoring/upgrading from Rocks 6 to Rocks 7 involves a slightly different approach than Rocks 5 to Rocks 6 or Rocks 6 to Rocks 6. The changes take into account administrative differences as well as installer differences. These changes are noted in this document.
Let's create a Restore Roll for your frontend. This roll will contain site-specific info that will be used to quickly reconfigure your frontend (see the section below for details).
# cd /export/site-roll/rocks/src/roll/restore
# make roll
The above command will output a roll ISO image with a name of the form hostname-restore-date-0.arch.disk1.iso. For example, on an x86_64-based frontend with the FQDN of rocks-85.sdsc.edu, the roll will be named:
rocks-85.sdsc.edu-restore-2017.11.16-0.x86_64.disk1.iso
Burn your restore roll ISO image to a CD.
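If your frontend has a CD burner, a minimal command-line sketch follows; the wodim tool (the cdrecord replacement on CentOS-style systems) and the /dev/sr0 device path are assumptions about your system and may differ:

# wodim -v dev=/dev/sr0 rocks-85.sdsc.edu-restore-2017.11.16-0.x86_64.disk1.iso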
Copy your restore roll ISO image to another machine for safekeeping.
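For example, a sketch using scp to a hypothetical host named backup.example.org, recording a checksum first so the copy can be verified later:

# sha256sum rocks-85.sdsc.edu-restore-2017.11.16-0.x86_64.disk1.iso
# scp rocks-85.sdsc.edu-restore-2017.11.16-0.x86_64.disk1.iso backup.example.org:/backups/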
Reinstall the frontend by putting the Rocks Boot CD in the CD tray (generally, this is the Kernel/Boot Roll) and reboot the frontend.
See the full instructions below for the 6 to 7 upgrade. The Restore Roll is used after your Rocks 7 frontend has been installed.
Prior to upgrading to 7, you need to retain a copy of your restore roll ISO image created using the process outlined in Section Upgrade Frontend.
Build your frontend as a fresh installation. See the section Install Frontend. However, do not reformat the partition that holds user home areas. This is usually named /export/home.
After installation of the frontend, you will need to copy the restore ISO image created in Section Upgrade Frontend to your newly installed system. For the rest of this section, we will use the example ISO image name rocks-85.sdsc.edu-restore-2017.11.17-0.x86_64.disk1.iso
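For example, again assuming the hypothetical backup.example.org host used for safekeeping above:

# scp backup.example.org:/backups/rocks-85.sdsc.edu-restore-2017.11.17-0.x86_64.disk1.iso /root/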
After your normal frontend installation, there are some "fixups" that need to be applied to your restore roll. Once applied, the roll needs to be added to your frontend and then "run". Finally, after the roll is run, some restored host keys need to be placed back into the Rocks database.
The detailed instructions now follow. First, apply the fixups by repacking the restore roll:
# repack-roll rocks-85.sdsc.edu-restore-2017.11.17-0.x86_64.disk1.iso
This will create a copy of your restore roll with a .repacked.iso ending. See the following screen output:
# ls *iso
rocks-85.sdsc.edu-restore-2017.11.17-0.x86_64.disk1.iso
rocks-85.sdsc.edu-restore-2017.11.17-0.x86_64.disk1.repacked.iso
Next, add and enable the repacked roll, then rebuild the distribution:

# rocks add roll rocks-85.sdsc.edu-restore-2017.11.17-0.x86_64.disk1.repacked.iso
# rocks enable roll rocks-85.sdsc.edu-restore
# (cd /export/rocks/install; rocks create distro)
Now run the roll:

# rocks run roll rocks-85.sdsc.edu-restore | sh
You will see many warnings or errors; these are normal. The restore roll will not overwrite attributes that were entered/created during the frontend installation process. Some typical errors are shown in the screen below:
Error: attribute "Kickstart_PublicHostname" exists {attr} {value} [attr=string] [value=string]
Error: attribute "Kickstart_PublicAddress" exists {attr} {value} [attr=string] [value=string]
...
Error: membership "Ethernet Switch" already exists {appliance} [distribution=string] [graph=string] [membership=string] [node=string] [os=string] [public=bool]
...
Error: route exists {host} {address} {gateway} [netmask=string]
Finally, place the restored host keys back into the Rocks database:

# restore-keys
Removing host keys from Rocks DB and re-adding local...
Synching configuration and forcing make in /var/411 ...
# su - root
# exit
Then set all compute nodes to reinstall on their next boot and reboot them:

# rocks set host boot compute action=install
# rocks run host compute "shutdown -r now"
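If you want to confirm the boot action before rebooting, the standard 'rocks list host boot' command shows it; using the appliance name compute as the host argument is an assumption here:

# rocks list host boot compute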
If you are following the Rocks 5 to 6 or 6 to 6 procedure, then after rebooting the frontend from the Boot CD, at the boot: prompt type:
build
At this point, the installation follows the same steps as a normal frontend installation (See the section: Install Frontend Rocks 6) -- with two exceptions:
On the first user-input screen (the screen that asks for 'local' and 'network' rolls), be sure to supply the Restore Roll that you just created.
You will be forced to manually partition your frontend's root disk.
You must reformat your / partition, your /var partition and your /boot partition (if it exists). Also, be sure to assign the mountpoint of /export to the partition that contains the users' home areas. Do NOT erase or format this partition, or you will lose the user home directories. Generally, this is the largest partition on the first disk.
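As an illustration only, a hypothetical single-disk layout might look like the following; the device names and ordering are assumptions and will differ on your hardware:

/dev/sda1  /boot    reformat
/dev/sda2  swap     reformat
/dev/sda3  /        reformat
/dev/sda4  /var     reformat
/dev/sda5  /export  users' home areas -- do NOT format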
After your frontend completes its installation, the last step is to force a re-installation of all of your compute nodes. The following will force a PXE (network install) reboot of all your compute nodes.
# ssh-agent $SHELL
# ssh-add
# rocks run host compute '/boot/kickstart/cluster-kickstart-pxe'
By default, the Restore Roll contains two sets of files (system files and user files), plus optional user scripts. The system files are listed in the 'FILES' directive in the file: /export/site-roll/rocks/src/roll/restore/src/system-files/version.mk.
FILES = /etc/passwd /etc/shadow /etc/gshadow /etc/group \
        /etc/exports /etc/auto.home /etc/motd
The user files are listed in the 'FILES' directive in the file: /export/site-roll/rocks/src/roll/restore/version.mk.
FILES += /etc/X11/xorg.conf |
If you have other files you'd like saved and restored, then append them to the 'FILES' directive in the file /export/site-roll/rocks/src/roll/restore/version.mk, then rebuild the restore roll.
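For example, to also preserve a hypothetical configuration file /etc/my-app.conf (the file name is purely illustrative), append it and rebuild:

FILES += /etc/my-app.conf

# cd /export/site-roll/rocks/src/roll/restore
# make roll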
If you'd like to add your own post sections, you can add the name of the script to the 'SCRIPTS' directive of the /export/site-roll/rocks/src/roll/restore/version.mk file.
SCRIPTS += /share/apps/myscript.sh /share/apps/myscript2.py |
This will add the shell script /share/apps/myscript.sh and the python script /share/apps/myscript2.py to the post section of the restore-user-files.xml file.
If you'd like to run the script in "nochroot" mode, add

#nochroot

as the first comment in your script file.
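For example, a minimal sketch of a user script that requests nochroot mode (the script body is illustrative):

#!/bin/bash
#nochroot
echo "running outside the chroot"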
All the files under /export/rocks/install/site-profiles are saved and restored. So, any user modifications that are added via the XML node method will be preserved.
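For example, a site customization stored at a path such as /export/rocks/install/site-profiles/7.0/nodes/extend-compute.xml (the version directory depends on your release) is carried over automatically.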
The networking info for all node interfaces (e.g., the frontend, compute nodes, NAS appliances, etc.) is saved and restored. This is accomplished via the 'rocks dump' command.
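If you'd like to preview what gets captured, you can run the dump by hand; treat the exact subcommand as a sketch, since availability varies by Rocks version:

# rocks dump host interface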