Setting up a cluster for cloudfs
Hypervisor/Host:
- kkeithle.usersys.redhat.com
Brick node guests on kkeithle.usersys.redhat.com:
- cloudfs-node01
- cloudfs-node02
- cloudfs-node03
- cloudfs-node04
Client node guest(s) on kkeithle.usersys.redhat.com:
N.B. everything is running F14.
Back-end storage:
- 40 5 GB LUNs on QLogic Fibre Channel, provisioned as SCSI disks, 10 per brick node. N.B. the size and number of LUNs are arbitrary.
cloudfs git repo, and prebuilt gluster and cloudfs packages:
- cloudfs git repo is here
- RHEL6 RPMs are here (gluster RPMs work on F14, cloudfs RPM doesn't)
- F15 RPMs are here (These work on F14 too.)
the nitty gritty:
- install the RPMs on all brick nodes and client nodes
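- a minimal sketch of the install, assuming the prebuilt gluster and cloudfs RPMs have already been downloaded to the current directory (the package file names here are illustrative): sudo yum --nogpgcheck localinstall glusterfs*.rpm cloudfs*.rpm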
- make filesystems on the LUNs and mount them
- for lun in /dev/sd? ; do sudo mkfs.ext4 $lun; done
- for dev in /dev/sd? ; do sudo mkdir -p /bricks/`basename $dev`; done
- for dev in /dev/sd? ; do sudo mount $dev /bricks/`basename $dev`; done
- optionally, make /etc/fstab entries for the mounts (a sketch follows)
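- a sketch of those fstab entries, assuming every /dev/sd? device should stay mounted under /bricks (sudo tee -a is used because a plain sudo echo with >> would redirect without root privileges; review /etc/fstab afterwards): for dev in /dev/sd? ; do echo "$dev /bricks/`basename $dev` ext4 defaults 0 0" | sudo tee -a /etc/fstab; done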
- set up ssh to allow ssh/scp between nodes w/o prompting for passwords
- on each brick node: sudo mkdir -p /root/.ssh; sudo chmod 0700 /root/.ssh
- on the principal node (as root): ssh-keygen -t rsa; press 'Enter' when prompted for a passphrase
- copy the public and private keys to the other brick nodes: scp .ssh/id_rsa .ssh/id_rsa.pub root@<brick>:.ssh/
- copy the public key to the other brick nodes' authorized hosts: scp .ssh/id_rsa.pub root@<brick>:.ssh/authorized_keys
- chcon the authorized_keys file on each of the nodes: ssh root@<brick> chcon -t ssh_home_t .ssh/authorized_keys
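- the three per-node steps above can be rolled into one loop on the principal node; a sketch assuming cloudfs-node01 is the principal and the other brick nodes are reachable by the names listed above: for brick in cloudfs-node02 cloudfs-node03 cloudfs-node04 ; do scp .ssh/id_rsa .ssh/id_rsa.pub root@$brick:.ssh/; scp .ssh/id_rsa.pub root@$brick:.ssh/authorized_keys; ssh root@$brick chcon -t ssh_home_t .ssh/authorized_keys; done
- verify with ssh root@cloudfs-node02 hostname; it should not prompt for a password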
- set up apache on the principal brick node
- check that httpd RPM is installed: rpm -qa | grep httpd
- install mod_ssl: sudo yum install mod_ssl
- N.B. by default mod_ssl uses the cert in /etc/pki/tls/certs/localhost.crt (a.k.a. /etc/ssl/certs/localhost.crt). The CN in this cert is the hostname when the system was installed, so if you change the hostname along the way then this will no longer match. You can generate your own cert and change /etc/httpd/conf.d/ssl.conf to point at it.
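- if you do regenerate the cert, a minimal sketch using openssl that writes to the stock mod_ssl key/cert paths, so ssl.conf needs no edits (-days is arbitrary): sudo openssl req -new -x509 -nodes -days 365 -subj "/CN=`hostname`" -keyout /etc/pki/tls/private/localhost.key -out /etc/pki/tls/certs/localhost.crt
- restart httpd afterwards if it is already running: sudo service httpd restart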
- edit /etc/sudoers to give user apache the necessary privs
- Around line 57 add: Defaults:apache !requiretty
- Near the end add: %apache ALL= NOPASSWD: /usr/sbin/gluster, /usr/bin/ssh, /usr/bin/cloudfs, /bin/mkdir, /bin/cp
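- a quick sanity check of the sudoers edits (a sketch; gluster must already be installed): sudo -u apache sudo -n /usr/sbin/gluster peer status
- if the sudoers entries are wrong, that command complains that a password is required instead of running gluster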
- open ports in the firewall using the firewall admin utility:
- on the principal node only, open http and https (Trusted Services)
- on all nodes, open ssh (Trusted Services)
- on all nodes, open port 111 (sunrpc) tcp and udp (Other Ports)
- on all nodes, open ports 24007-240xx¹ tcp (Other Ports)
- ¹24009 + number of bricks, e.g., 24013 for a four node cluster
- on all nodes, open ports 38465-384xx² tcp (Other Ports)
- ²38465 + number of bricks, e.g. 38469 for a four node cluster
- alternatively disable the firewall
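- if you would rather script this than click through the firewall admin utility, a sketch using raw iptables rules (run on every node; the port ranges match the four-node example above, widen them if you add bricks):
- sudo iptables -I INPUT -p tcp --dport 111 -j ACCEPT; sudo iptables -I INPUT -p udp --dport 111 -j ACCEPT
- sudo iptables -I INPUT -p tcp --dport 24007:24013 -j ACCEPT
- sudo iptables -I INPUT -p tcp --dport 38465:38469 -j ACCEPT
- on the principal node also: sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT; sudo iptables -I INPUT -p tcp --dport 443 -j ACCEPT
- then save the rules on each node: sudo service iptables save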
- install fixed content and Python CGI scripts in /var/www/...
- download the tarball from here or here
- install files: cd /var/www; sudo tar xpf ~/Downloads/varwww.tar
- chcon (for SELinux): sudo chcon -t httpd_unconfined_script_exec_t /var/www/cgi-bin/*provision
- install cloudfs admin and tenant password files
- create admin password file: sudo sh -c 'echo "admin redhat" > /var/lib/glusterd/cloudfs.passwd' (a plain sudo echo with a redirect would fail, because the redirect happens without root privileges)
- create tenant password file: sudo touch /var/lib/glusterd/cloudfs.tenants
- chown: sudo chown apache /var/lib/glusterd/cloudfs.*
- chgrp: sudo chgrp apache /var/lib/glusterd/cloudfs.*
- chcon (for SELinux): sudo chcon -t httpd_sys_content_t /var/lib/glusterd/cloudfs.passwd
- chcon (for SELinux): sudo chcon -t httpd_sys_rw_content_t /var/lib/glusterd/cloudfs.tenants
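- double-check ownership and labels before moving on: ls -lZ /var/lib/glusterd/cloudfs.*
- both files should be owned by apache:apache, with cloudfs.passwd labelled httpd_sys_content_t and cloudfs.tenants labelled httpd_sys_rw_content_t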
- set up gluster
- open browser window to principal node
- enter admin/redhat for username and password
- click on 'Initialize Cluster'
- enter node name or IP address, press 'Add Node'
- repeat for remaining nodes
- press 'Done'
- click on 'Tenant Management'
- click on 'Add Tenant'
- press 'Done'
- click on 'Provision Storage'
- check checkboxes for one or more volumes
- select the radio button for 'Plain', 'Replicated', or 'Striped'
- Enter 'Replica or Stripe count' if desired
- Enter a name for the 'Volume ID'
- press 'Provision'
- press 'Confirm'
- right-click on the link, save the file
- scp the file to the client
- ssh to the client
- on the client, enter `glusterfs -f <filename>` at the shell prompt
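- glusterfs needs a mount point as its final argument; a sketch assuming /mnt/cloudfs (the path is arbitrary): sudo mkdir -p /mnt/cloudfs; sudo glusterfs -f <filename> /mnt/cloudfs
- verify the mount with df -hT /mnt/cloudfs; the Type column should show fuse.glusterfs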
- Treat yourself to a beer.