Bring up a three-node cluster with ssi-start. Log in to all three consoles as root. The initial password is root, but you'll be forced to change it the first time you log in.
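A first login on each console looks roughly like the following; the exact wording of the password-change prompts depends on the distribution, so treat this only as orientation.

node1 login: root
Password:
You are required to change your password immediately.
New password:
Retype new password: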
The following demos should familiarize you with what an SSI cluster can do.
Start dbdemo on node 1.
node1# cd ~/dbdemo
node1# ./dbdemo alphabet
The dbdemo program "processes" records from the file given as an argument. In this case, it's alphabet, which contains the ICAO alphabet used by aviators. For each record, dbdemo writes the data to its terminal device and spins in a busy loop for a second to simulate an intensive calculation.
The dbdemo program is also listening on its terminal device for certain command keys.
Table 1. Command Keys for dbdemo
Key   | Description
------|------------------------------------------------------------
1-9   | Move to that node and continue with the next record.
Enter | Periodically move to a random node until you press a key.
q     | Quit.
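To make the record-processing loop concrete, here is a minimal shell sketch of roughly what dbdemo does with each record. It is an illustration only, not the real program: dbdemo is written in C (the source is mentioned below), also handles the command keys in Table 1, and uses OpenSSI's process-migration facilities, none of which appears in this sketch.

#!/bin/sh
# Sketch only: read records one at a time, print each to the terminal,
# then burn roughly a second of CPU to simulate an intensive calculation.
file=${1:?usage: $0 recordfile}
while read -r record; do
    echo "$record"
    end=$(( $(date +%s) + 1 ))
    while [ "$(date +%s)" -lt "$end" ]; do :; done
done < "$file"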
Move dbdemo to different nodes. Note that it continues to send output to the console where it was started, and that it continues to respond to keypresses from that console. This demonstrates that although the process is running on another node, it can remotely read and write the device it had open.
Also note that when a process moves, it preserves its file offsets. After moving, dbdemo continues processing records from alphabet as if nothing had happened.
To confirm that the process moved to a new node, get its PID and use where_pid. You can do this on any node.
node3# ps -ef | grep dbdemo
node3# where_pid <pid>
2
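While dbdemo is in its random-migration mode (press Enter), you can watch it hop between nodes with a simple loop. This is just a convenience sketch; it assumes pgrep is available and that where_pid takes a bare PID as shown above. A node number should print every couple of seconds, changing as the process migrates; press Ctrl-C to stop.

node3# PID=$(pgrep dbdemo)
node3# while sleep 2; do where_pid $PID; done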
If you like, you can download the source for dbdemo. It's also available as a tarball in the /root/dbdemo directory.
From node 1's console, start up vi on node 2. The onnode command uses the SSI kernel's rexec system call to remotely execute vi.
node1# onnode 2 vi /tmp/newfile
Confirm that it's on node 2 with where_pid. You need to get its PID first.
node3# ps -ef | grep vi
node3# where_pid <pid>
2
Type some text and save your work. On node 3, cat the file to see the contents. This demonstrates the single root file system.
node3# cat /tmp/newfile
some text
From node 3, kill the vi session running on node 2. You should see node 1's console return to its shell prompt.
node3# kill <pid>
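onnode is not limited to vi; any command can be started on any node this way. As a quick hypothetical example, the following starts a sleep on node 3 and asks where it landed. It assumes the shell's $! reports the clusterwide PID of the remotely executed command; if in doubt, find the PID with ps as above.

node1# onnode 3 sleep 300 &
node1# where_pid $!
3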
Make a FIFO on the shared root.
node1# mkfifo /fifo
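Before writing anything to it, you can confirm from another node that the FIFO is already visible on the shared root; ls -F marks FIFOs with a trailing |.

node2# ls -F /fifo
/fifo|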
echo something into the FIFO on node 1. Note that the echo blocks until a reader opens the other end of the FIFO, which happens in the next step.
node1# echo something >/fifo
cat the FIFO on node 2.
node2# cat /fifo
something
This demonstrates that FIFOs are clusterwide and remotely accessible.
On node 3, write "Hello World" to the console of node 1.
node3# echo "Hello World" >/devfs/node1/console |
This shows that devices can be accessed remotely from anywhere in the cluster. Eventually, the node-specific subdirectories of /devfs will be merged into a single device tree that can be mounted on /dev without confusing non-cluster-aware applications.
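In the meantime, each node's devices can be browsed under its own subdirectory. Assuming the three-node layout used in this demo, the listing looks like this:

node1# ls /devfs
node1  node2  node3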