Building your own root image is necessary if you want to use a distribution other than Red Hat 7.2. Otherwise, feel free to skip this section.
These instructions describe how to build a Red Hat 7.2 image. At the end of this section is a brief discussion of how other distributions might differ. Building a root image for another distribution is left as an exercise for the reader.
Download the Red Hat 7.2 root image from the User-Mode Linux (UML) project. As with the root image you downloaded in Section 2.1, it is over 150MB.
Extract the image.
host$ bunzip2 -c root_fs.rh72.pristine.bz2 >root_fs.ext2
Loopback mount the image.
host$ su
host# mkdir /mnt.ext2
host# mount root_fs.ext2 /mnt.ext2 -o loop,ro
Make a blank GFS root image. You also need to create an accompanying lock table image. Be sure you've added support for GFS to your host system by following the instructions in Section 4.5.
host# dd of=root_cidev bs=1024 seek=4096 count=0
host# dd of=root_fs bs=1024 seek=2097152 count=0
host# chmod a+w root_cidev root_fs
host# losetup /dev/loop1 root_cidev
host# losetup /dev/loop2 root_fs
Enter the following pool information into a file named pool0cidev.cf.
poolname pool0cidev
subpools 1
subpool 0 0 1 gfs_data
pooldevice 0 0 /dev/loop1 0
Enter the following pool information into a file named pool0.cf.
poolname pool0
subpools 1
subpool 0 0 1 gfs_data
pooldevice 0 0 /dev/loop2 0
Write the pool information to the loopback devices.
host# ptool pool0cidev.cf
host# ptool pool0.cf
Create the pool devices.
host# passemble
Enter the following lock table into a file named gfscf.cf.
datadev: /dev/pool/pool0
cidev: /dev/pool/pool0cidev
lockdev: 192.168.50.101:15697
cbport: 3001
timeout: 30
STOMITH: NUN
name: none
node: 192.168.50.1 1 SM: none
node: 192.168.50.2 2 SM: none
node: 192.168.50.3 3 SM: none
node: 192.168.50.4 4 SM: none
node: 192.168.50.5 5 SM: none
node: 192.168.50.6 6 SM: none
node: 192.168.50.7 7 SM: none
node: 192.168.50.8 8 SM: none
node: 192.168.50.9 9 SM: none
node: 192.168.50.10 10 SM: none
node: 192.168.50.11 11 SM: none
node: 192.168.50.12 12 SM: none
node: 192.168.50.13 13 SM: none
node: 192.168.50.14 14 SM: none
node: 192.168.50.15 15 SM: none
Write the lock table to the cidev pool device.
host# gfsconf -c gfscf.cf
Format the root disk image.
host# mkfs_gfs -p memexp -t /dev/pool/pool0cidev -j 15 -J 32 -i /dev/pool/pool0
Mount the root image.
host# mount -t gfs -o hostdata=192.168.50.1 /dev/pool/pool0 /mnt
Copy the ext2 root to the GFS image.
host# cp -a /mnt.ext2/. /mnt
Clean up.
host# umount /mnt.ext2
host# rmdir /mnt.ext2
host# Ctrl-D
host$ rm root_fs.ext2
Cluster Tools source code is available as official release tarballs and through CVS. The CVS repository contains the latest, bleeding-edge code. It can be less stable than the official release, but it has features and bugfixes that the release does not have.
The latest release can be found at the top of the Cluster-Tools section of this release list. At the time of this writing, the latest release is 0.6.5.
Download the latest release. Extract it.
host$ tar jxvf ~/cluster-tools-0.6.5.tar.bz2
Follow these instructions to do a CVS checkout of the latest Cluster Tools code. The modulename is cluster-tools.
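An anonymous checkout typically looks like the example below. The CVS host and repository path shown here are placeholders based on the usual SourceForge layout, so use the values given in the linked instructions.
host$ cvs -d:pserver:anonymous@cvs.ci-linux.sourceforge.net:/cvsroot/ci-linux login
host$ cvs -z3 -d:pserver:anonymous@cvs.ci-linux.sourceforge.net:/cvsroot/ci-linux checkout cluster-tools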
To do a developer checkout, you must be a CI developer. If you are interested in becoming a developer, read Section 8.3 and Section 8.4.
Install the Cluster Tools onto the new root image.
host$ su
host# cd cluster-tools
host# make install_ssi_redhat UML_ROOT=/mnt
If you built a kernel, as described in Section 4, then follow the instructions in Section 4.4 and Section 4.7 to install kernel and GFS modules onto your new root.
Otherwise, mount the old root image and copy the modules directory from /mnt/lib/modules. Then remount the new root image and copy the modules into it.
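For example, here is a sketch that uses a second mount point so the new GFS root can stay mounted on /mnt. The old image's filename is a placeholder; substitute whatever you called the image from Section 2.1.
host# mkdir /mnt.old
host# mount root_fs.old /mnt.old -o loop,ro
host# cp -a /mnt.old/lib/modules/. /mnt/lib/modules/
host# umount /mnt.old
host# rmdir /mnt.old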
Remake the ubd devices. At some point, the UML team switched the device numbering scheme: it used to be 98,1 for /dev/ubd/1, 98,2 for /dev/ubd/2, and so on; now it is 98,16 for /dev/ubd/1, 98,32 for /dev/ubd/2, and so on.
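A sketch of recreating the nodes under the new scheme, assuming the image keeps its nodes in /dev/ubd and you want devices 0 through 3 (check how many nodes your image actually has before removing them):
host# rm -f /mnt/dev/ubd/*
host# mknod /mnt/dev/ubd/0 b 98 0
host# mknod /mnt/dev/ubd/1 b 98 16
host# mknod /mnt/dev/ubd/2 b 98 32
host# mknod /mnt/dev/ubd/3 b 98 48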
Comment and uncomment the appropriate lines in /mnt/etc/inittab.ssi. Search for the phrase 'For UML' to see which lines to change. Basically, you should disable the DHCP daemon, and change the getty to use tty0 rather than tty1.
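Purely as an illustration of the getty change (the real lines are marked 'For UML' in the file itself and their exact form may differ), the edit amounts to disabling the tty1 getty and enabling one on tty0, roughly like this, along with commenting out the entry that starts the DHCP daemon:
#1:2345:respawn:/sbin/mingetty tty1
0:2345:respawn:/sbin/mingetty tty0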
You may want to strip down the operating system so that it boots quicker. For the prepackaged root image, I removed the following files.
/etc/rc3.d/S25netfs
/etc/rc3.d/S50snmpd
/etc/rc3.d/S55named
/etc/rc3.d/S55sshd
/etc/rc3.d/S56xinetd
/etc/rc3.d/S80sendmail
/etc/rc3.d/S85gpm
/etc/rc3.d/S85httpd
/etc/rc3.d/S90crond
/etc/rc3.d/S90squid
/etc/rc3.d/S90xfs
/etc/rc3.d/S91smb
/etc/rc3.d/S95innd
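For example, to remove the same links from the new root image:
host# cd /mnt/etc/rc3.d
host# rm S25netfs S50snmpd S55named S55sshd S56xinetd S80sendmail S85gpm S85httpd S90crond S90squid S90xfs S91smb S95innd
These are symlinks into /etc/init.d, so removing them only keeps the services from starting at boot; the packages themselves stay installed.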
You might also want to copy dbdemo and its associated alphabet file into /root/dbdemo. This lets you run the demo described in Section 3.1.
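For example, assuming dbdemo and its alphabet file are in your current directory (their actual names and location depend on where you built or downloaded them):
host# mkdir -p /mnt/root/dbdemo
host# cp dbdemo alphabet /mnt/root/dbdemo/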
Unmount the new root image and clean up.
host# umount /mnt
host# passemble -r all
host# losetup -d /dev/loop1
host# losetup -d /dev/loop2
Cluster Tools has make rules for Caldera and Debian, in addition to Red Hat. Respectively, the rules are install_ssi_caldera and install_ssi_debian.
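For example, on a Debian-based root image the install step shown earlier would become:
host# make install_ssi_debian UML_ROOT=/mnt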
The main difference between the distributions is the /etc/inittab.ssi installed. It is the inittab used by the clusterized init.ssi program. It is based on the distribution's /etc/inittab, but has some cluster-specific enhancements that are recognized by init.ssi.
There is also some logic in the /etc/rc.d/rc.nodeup script to detect which distribution it's on. This script is run whenever a node joins the cluster, and it needs to do different things for different distributions.
Finally, there are some modifications to the networking scripts to prevent them from tromping on the cluster interconnect configuration. They're a short-term hack, and they've only been implemented for Red Hat so far. The modified files are /etc/sysconfig/network-scripts/ifcfg-eth0 and /etc/sysconfig/network-scripts/network-functions.