
Saturday, October 15, 2011

CIS(theta) 2011-2012 - Public Keys! - Meeting IV

The following is a summary of what we've accomplished so far with the 2011-2012 CIS(theta) team. The Shadowfax Cluster is coming along quite well. We have a nice base OS in 64bit Ubuntu 11.04 Natty Narwhal Desktop on top of our AMD dualcore Athlons and gigE LAN. The Unity Desktop isn't that different from the Gnome Desktop we've been using these past few years on Fedora and Ubuntu. Natty is proving very user friendly and easy to maintain! This past week we installed openSSH on a testbed of 4 Linux Boxes and we enabled public-key authenticated ssh as detailed below. Next meeting, we'll install openMPI and we'll use flops.f to stress our mini 8-core cluster! BTW, we need public keys so openMPI can scatter/gather cluster jobs without the overhead of logging into each node as needed. We created a new user common to all nodes called "jobs" in honor of Steve Jobs. The cluster user can simply log into one node and be logged into all nodes at once! We started step 4 as listed below by installing openSSH and enabling public-key authentication. We'll finish step 4 next time by installing and testing openMPI. Maybe we'll rename it step 5?
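Once those keys are in place, a quick sanity check is to ssh to each node as the "jobs" user and make sure no password prompt appears. Here's a minimal sketch, assuming our lab's 10.5.129.x addresses for the 4-node testbed; swap in your own node names or IPs:

    # Sketch: verify passwordless ssh for the common "jobs" cluster user.
    # The four addresses below are assumptions based on our lab's IP range.
    for node in 10.5.129.1 10.5.129.2 10.5.129.3 10.5.129.4; do
        ssh jobs@$node hostname    # should print the node's name with no password prompt
    done

If any node still asks for a password, its authorized_keys file (see step 7 below) isn't in place yet.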
InstantCluster Step 1: Infrastructure
Make sure your cores have enough ventilation. The room has to have powerful air conditioning too. These two factors may seem trivial, but they become crucial when running the entire cluster for extended periods of time! Also, you need enough electrical power, preferably with the cabling out of the way, to run all cores simultaneously. Don't forget to keep all your Ethernet cabling out of the way too. We have CAT6E cables to support our gigE Ethernet cards and switches. We are lucky that this step was taken care of for us already!

InstantCluster Step 2: Hardware
You need up-to-date Ethernet switches, Ethernet cards, and cores, as well as plenty of RAM in each Linux box. As stated above, our gigE LAN and switches were already set up for us. Also, we have 64bit dual-core AMD Athlons, and our HP boxes have 750 MB of RAM. I'd rather have 1 or 2 GB of RAM, but that will have to wait for an upgrade!

InstantCluster Step 3: Firmware
We wasted way too much time last year trying out all kinds of Linux distros looking for a good 64bit base for our cluster. This year we spent way too much time testing out different liveCD distros. Recently, we downgraded from 64bit Ubuntu 10.04 Desktop edition to the 32bit version on our Linux partitions. 64bit gives us access to more RAM and a larger maxint, but it was proving to be a pain to maintain. Just to name one problem, the JRE and Flash were hard to install and update for Firefox. Last year we tried Fedora, Rocks, Oscar, CentOS, Scientific Linux and, finally, Ubuntu. 32bit Ubuntu has proven very easy to use and maintain, so I think we'll stick with it for the cluster! We've done this several times over the years using everything from Slackware and KNOPPIX to Fedora and Ubuntu!

InstantCluster Step 4: Software Stack
On top of Ubuntu we need to add openSSH, public-key authentication and openMPI. In step 6 we can discuss an application to scatter/gather over the cluster, whether it be graphical (fractals, povray, blender, openGL, animations) or number crunching (a C++ or python app for Mersenne Primes or Beal's Conjecture). So, what follows is a summary of what we did to get up to public-key authentication. This summary is based on the http://cs.calvin.edu/curriculum/cs/374/MPI/ link listed below. First, we installed openSSH-server from http://packages.ubuntu.com using the proxy server, then:
  1. If you have no .ssh directory in your home directory, ssh to some other machine in the lab; then Ctrl-d to close the connection, creating .ssh and some related files. 
  2. From your home directory, make .ssh secure by entering:
    chmod 700 .ssh
  3. Next, make .ssh your working directory by entering:
    cd .ssh
  4. To list/view the contents of the directory, enter:
    ls -a [we used ls -l]
  5. To generate your public and private keys, enter:
    ssh-keygen -t rsa
    The first prompt is for the name of the file in which your private key will be stored; press Enter to accept the default name (id_rsa). The next two prompts are for the passphrase you want, and since we are trying to avoid entering passwords, just press Enter at both prompts, returning you to the system prompt.
  6. To compare the previous output of ls and see what new files have been created, enter:
    ls -a [we used ls -l]
    You should see id_rsa containing your private key, and id_rsa.pub containing your public key.
  7. To make your public key the only thing needed for you to ssh to a different machine, enter:
    cat id_rsa.pub >> authorized_keys
    [The Linux boxes on our LAN, soon to be cluster, have IPs ranging from 10.5.129.1 to 
    10.5.129.24. So, we copied each node's id_rsa.pub file to temp01-temp24 and uploaded 
    these files via ssh to the teacher station. Then we just ran cat tempnn >> authorized_keys 
    for each temp file to generate one master authorized_keys file for all nodes that we could 
    just download to each node's .ssh dir. A sketch of this step appears right after the list.]
  8. [optional] To make it so that only you can read or write the file containing your private key, enter:
    chmod 600 id_rsa
  9. [optional] To make it so that only you can read or write the file containing your authorized keys, enter:
    chmod 600 authorized_keys
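For the record, here is roughly what that aggregation in step 7 looks like as a script run on the teacher station. This is only a sketch of what we did largely by hand: the temp01-temp24 filenames and the 10.5.129.x addresses come from our lab, and the "jobs" account is the common cluster user mentioned above.

    # Sketch: combine the collected public keys (temp01 ... temp24) into one
    # master authorized_keys file, then push it back out to every node.
    cd ~/.ssh
    rm -f authorized_keys
    for n in $(seq -w 1 24); do
        cat temp$n >> authorized_keys
    done
    chmod 600 authorized_keys
    # copy the master file down to each node's .ssh dir (adjust IPs for your LAN)
    for n in $(seq 1 24); do
        scp authorized_keys jobs@10.5.129.$n:.ssh/authorized_keys
    done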
    ===================================================
    What we are researching II 
    (look what other people are doing with MPI):
    MPI intro, nice!
    Sample MPI code
    http://www.cc.gatech.edu/projects/ihpcl/mpi.html
    ===================================================
    What we are researching I 
    (look what this school did in the 80s and 90s): 
    Thomas Jefferson High courses
    Thomas Jefferson High paper
    Thomas Jefferson High ftp
    Thomas Jefferson High teacher
    http://www.tjhsst.edu/~rlatimer/
    ===================================================
    Today's Topic:
    CIS(theta) 2011-2012 - Public Keys! - Meeting IV
    Today's Attendance:
    CIS(theta) 2011-2012: GeorgeA, GrahamS, KennyK, LucasE
    Today's Reading:
    Chapter 2 of Building Parallel Programs (BPP) using clusters and Parallel Java
    ===================================================
    Well, that's all folks, enjoy!

Saturday, October 2, 2010

CIS(theta) Meeting II (2010-2011) - Burn Fest!!


Aim: 
Burn Fest!!


Attending: 
DavidG, HerbertK, JayW, JoshG, RyanH


Reading: Building Parallel Programs, Chapter2


Research 3 (TJHSST):
http://www.tjhsst.edu/~dhyatt/supercomp/index.html
http://www.tjhsst.edu/~rlatimer/compsys/compsys2001.html




This week we reviewed the reading from Chapter 1 and the links from Research 1.  We talked about setting up pelicanHPC in the near future via PXEboot to run some jobs using Octave, MPI and MPITB.  This week's reading is Chapter 2 and Research 2.  I put up Research 3 as it is related to TJHSST, but you can skip those links for now.





The next version of Ubuntu is coming soon
Then we talked about our 64bit Ubuntu install as described here: http://shadowfaxrant.blogspot.com/2010/06/so-many-linux-distros-so-little-time.html whereby all the PCs in our lab are installed identically (24 student clients, 1 teacher client and 2 servers) via the Ubuntu 10.04 live install CD (dualboot with WinXP, making Linux the primary boot partition in grub2).  I added the JRE to the teacher and student stations so we could do 3D graphs on SAGE.  I also added VLC to the teacher station so I could display my crazy videos.



We discussed adding flash (for youtube viewing), WINE (for VTI in math class), handbrake (to edit DVDs for youtube upload) and xournal (for writing notes in class with my wireless tablet) to the teacher station.  We will also be installing vsftp and openssh-server on the servers. 
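If we end up doing those installs from the command line instead of the Software Center, it should boil down to something like the sketch below. The exact package names (especially for flash and handbrake on Ubuntu 10.04) are assumptions on my part and need to be double-checked against http://packages.ubuntu.com first:

    # Sketch: teacher-station extras (package names are assumptions, verify first)
    sudo apt-get update
    sudo apt-get install flashplugin-installer wine xournal handbrake
    # the server boxes get the ftp and ssh daemons
    sudo apt-get install vsftpd openssh-server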




Note: student and teacher PCs are 64bit dualcore AMD Athlons and the new servers are 64bit quadcore Intel Xeons.  So, the "geek squad" has its hands full with helping me do all this in the near future!







Meanwhile, the "geek squad" burned, labeled and cased 50 Ubuntu 32bit liveCDs that I can now give out in my classes for my new students to work from home in the same environment as they do in class.  BTW, we made a mini field trip to the Math Office to get some CDs where we found 10x100 CD and 15x50 DVD spindles!  Thanx guys!


Happy Clustering,

Saturday, March 6, 2010

Rocks Cluster Distro Rox!


It's time for a bit of introspection.  We have not gotten much running on the new 64bit dual-core AMD Athlon based cluster.  So, here's what I'm thinking about.  I'm talking to the powers-that-be at my school about: 



(1) installing an all-Linux environment (http://www.rocksclusters.org) for my parallel programming class and other programming classes
(2) modernizing my intro programming class to use "Mathematics for the Digital Age" and SAGE to teach discrete math and programming 
(3) running a Calculus Research Lab using SAGE to teach Calculus using computers (every other day, like a science lab, in addition to Calculus class in a PC classroom) 


I've seen LittleFe (get it? not "big iron"); it's actually based on BCCD, which I've used (1.0 based on openMosix, 2.0 based on MPICH, 3.0 based on openMPI). I'm not interested in building the hardware, however. I want to make use of the dualcore 64bit AMD Athlons we have! BTW, we've also used clusterKNOPPIX/parallelKNOPPIX/Quantian based on openMosix as well as pelicanHPC based on LAM/MPI, MPITB and Octave. I'd like to emulate the pelicanHPC model in a permanently installed cluster.


I have a dedicated ftp server (to share files with my students) and sftp server (for students to save their work) based on Slackware. So all they really need the Linux desktop for is to anonymous ftp or ssh with a password into one of those servers and do their work there. I don't use WIMxP for anything.... So, maybe it's time to nuke the WIMxP partition and set up a Rocks Cluster!


OK, I think my Computing Independent Study students have more programming than hardware experience, so I'd like to leverage that (although they did help me reinstall my classroom with Fedora). So, "hiding the details" of setting up the cluster is OK at this point. I'd like to focus on parallel programming. I read some of the beginner's dox and have the following concerns: 


MASTER NODE: 
(1) Does the master node have to be dedicated or can it be dual boot? My dept likes WimpDoze for some reason. So, all our PCs have WIMxP on hda1 and 64bit Fedora 12 on hda2. 
ANSWER: Yes, it is dedicated. In fact, the install process is simplified to such an extent that nuking the partition table is automatic! 
(2) Does eth0 have to be on the private network? We have always had the internet (public network) on eth0 and the cluster (private network) on eth1. 
ANSWER: Yes, so we'll just have to switch the ethernet cables around.  


WORKER/COMPUTE NODES: 
(3) When installing the compute nodes, can I specify a partition? When the compute node PXE boots, Rocks gets installed to the hdd, right? Or does the cluster just run in RAM on the compute nodes? 
ANSWER: Nope, the compute nodes are installed to hdd from the master node via PXE and the partition tables are nuked again!
(4) When installing the compute nodes, can I boot from CD/DVD as with the master node? PXE boot has had issues in my lab (conflicting DHCP servers?). 
ANSWER: Yes, but PXE boot is a time saver, so let's see if we can do it that way.
BOTH NODES: 
(5) Is the resulting installation usable as a desktop for everyday tasks when the cluster is not in use? I teach AP Computer Science with Fedora as the desktop and slackware running my ftp (for sharing files with my students) and sftp (for students to save their work) servers. 
ANSWER: Yes, you get CentOS, a stable FLOSS version of RHEL that's even better than Fedora.  


BTW, we've been playing with clusters for a while: 
Colossus: 486 PCs + ethernet + PVM 
Guardian: Pentium I&II PCs + ethernet + openMosix
Centauri: Pentium III&IV PCs + fast ethernet + openMosix 
Shadowfax: 64bit AMD Athlon PCs + gigE + ??? 

We got the latest hardware upgrade last school year. At that point we didn't know what to do with it all. So, we installed Debian and wrote bash scripts to scatter/gather povray jobs via public-key authenticated ssh. This year, we started by installing 64bit Fedora 11 (then Fedora 12) and trying out openMPI. We are having a bear of a time getting openMPI to work over our public-key authenticated ssh. Then I read about Rocks and found that it has a similar goal. Rocks is based, in part, on RHEL and openMPI! 
Here's some nice dox: http://www.rocksclusters.org/rocksapalooza/2006/lab-mpi.pdf 
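To give a flavor of what those Debian-era scatter/gather scripts did, here is a stripped-down sketch. The node names, scene file and band split are placeholders for illustration, not the originals; the idea is simply to fire off one POV-Ray band per node over passwordless ssh and then copy the pieces back:

    # Sketch: scatter a POV-Ray render across nodes via ssh, then gather the parts.
    # Node names and the scene file are hypothetical placeholders.
    NODES="node01 node02 node03 node04"
    ROWS=480
    i=0
    for node in $NODES; do
        start=$(( i * ROWS / 4 + 1 ))
        end=$(( (i + 1) * ROWS / 4 ))
        # each node renders one horizontal band of the image in the background
        ssh $node "povray +Iscene.pov +Opart$i.png +W640 +H$ROWS +SR$start +ER$end" &
        i=$(( i + 1 ))
    done
    wait
    # gather the partial images back to this machine
    i=0
    for node in $NODES; do
        scp $node:part$i.png .
        i=$(( i + 1 ))
    done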





Wednesday, December 23, 2009

Shadowfax is now the newest openMPI Cluster on the block!


That's right, you heard it here first!  My students have finally gotten the cluster up and running a hello world program.  Thanx go to the whole team who helped install 64-bit Fedora 11 (ArthurD, DevinB, SteveB and JeremyA).  I really like the Gnome version of the Fedora CD we installed.  However, I was surprised to note that GCC is missing.  It's pretty easy to install and update Fedora apps using yum, so we'll have to look into installing GCC.
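On Fedora that should just be a quick yum one-liner, something like this (a sketch; we haven't actually run it on the nodes yet):

    # Sketch: pull in the C and C++ compilers on Fedora 11 via yum
    sudo yum install gcc gcc-c++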
So, now we have 25 dualcore 64-bit AMD Athlons running at 2GHz per core.  We need to run some benchmarks to see how fast Shadowfax is.  I was looking at LINPACK and MTT for this.  Some data sets on there show a 1.2GHz Athlon yielding 2.4GFlops, which works out to about 2 flops per clock cycle.  Scaling linearly with clock speed, will one of our 2GHz cores yield 4GFlops, and a dualcore node 8GFlops?
I'm especially proud of JeremyA for all the time, research and hard work he put in to fleshing out the online howtos (http://www.knoppix.net/forum/viewtopic.php?t=28933 and http://dvbmonkey.wordpress.com/2009/02/27/getting-started-with-open-mpi-on-fedora) as a number of scripts and programs: ftp://centauri.baldwinschools.net/MPI_Cluster
These scripts automate the whole process from installing the openMPI stack, to setting up public key authentication, to compiling and executing a parallel program!  Good Job! 
For more info on openMPI, surf on over to: http://www.open-mpi.org   
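For reference, once the openMPI stack and keys are in place, running something across the nodes boils down to a hostfile plus mpirun, roughly like this sketch (the node names and hello.c are placeholders, not our actual scripts):

    # Sketch: a minimal openMPI hostfile and launch; node names are placeholders.
    printf 'node01 slots=2\nnode02 slots=2\n' > hostfile
    mpirun -np 4 --hostfile hostfile hostname    # quick test, no program needed
    mpicc hello.c -o hello                       # compile an MPI program
    mpirun -np 4 --hostfile hostfile ./hello     # run it across both nodes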

Happy Clustering,

Saturday, November 14, 2009

Meeting V

Today's Aim: Install Fest!
Tonight's Reading: Building Parallel Programs, Chapter 5
This Week's Research: PVM and MPI environments
Attending Tues: JeremyA, SteveB, DevinB (fedora install fest)
Attending Thurs: JeremyA, SteveB, DevinB, ArthurD (bzflag stuff)
 
This Tuesday we are finally reinstalling the Linux Partitions on all the PC clients in our PC Classroom.  We have 64-bit AMD Athlon dualcores, so we are using the Fedora 11 64-bit liveCD to do the reinstall over the KNOPPIX 5.3.1 32-bit liveDVD installation we currently have.  

We will also have a make-up meeting this Thursday when we will burn a class set of the Fedora 11 Games liveDVD for our BZFlag LAN Party the day before turkey day!



Happy Clustering,