My last post for 2016.
It's hot here in Sydney so everything is off apart from the Raspberry Pi's, which don't seem to mind the hot weather. I need to find somewhere else to put the computers. Somewhere air conditioned.
To recap this year, the farm has grown a bit in size and therefore in compute capability. I have a lot of old hardware to sell off to make some room and fund future upgrades. The farm at the end of this year consists of:
2 x i7-5820k (6 core/12 thread) machines with a GTX1060 each
8 x i7-6700 (4 core/8 thread) machines
2 x i3-6100T with a GTX970 each
9 x Raspberry Pi3's
I am also looking at moving away from Windows due to ongoing issues with Windows Update and Microsoft's insistence on maintaining my computers for me. I will probably move the Intel GPU part of the farm across to Linux first.
I also want to get the AlphaServer going. I have had it running before but got caught by the hobbyist license running out each year; I have renewed it twice. There is a project to port OpenVMS to the x86 architecture, which could offer another alternative to Windows.
Next years "to do" list:
1. Get air conditioned location for computers
2. Sell off old hardware
3. Move away from Windows
4. Probable multi-core CPU upgrades (Ryzen)
5. Probable networking upgrades
6. Get BOINC-MPI going
7. Get AlphaServer going
8. More GPU upgrades
As you can see quite a few things to get done next year. All the best for 2017.
31 December 2016
23 December 2016
Add jessie-backports repository for Rpi
As I write this, Raspbian on the Raspberry Pi is on the Jessie release. This is what is known as a long-term release, so while there will be security and bug fixes, no new versions of software are added. If you want later versions of software you can compile your own, use a testing/unstable release, or add the jessie-backports repository.
Unfortunately the Raspberry Pi Foundation doesn't have a jessie-backports repository. Debian, however, maintains one. Debian supports ARMv7 and later, so this will work for the Raspberry Pi2 and Pi3. Don't use this on the Pi Zero, B+ or older (ARMv6) Raspberry Pis.
I use the command line for accessing my Pi's so these commands are done in a terminal window.
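If you're not sure which release or architecture a particular Pi is running, these two commands will tell you (armv7l means ARMv7, i.e. a Pi2 or Pi3, while the older ARMv6 models report armv6l):
cat /etc/os-release
uname -m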
Add to sources.list
The first step is to add jessie-backports to our sources.list file so that apt can include it. To do this we need to edit the file and add an extra entry. Type the following commands:
cd /etc/apt
sudo nano sources.list
We need to add the following line after the Jessie entry:
deb http://httpredir.debian.org/debian jessie-backports main contrib non-free
Save the file with Ctrl-O and, when prompted for the filename, press the Enter key. To exit, press Ctrl-X. That is, hold the Control key down and press the letter.
The httpredir entry is a special entry that tells apt to fetch the files from the fastest available Debian mirror. The jessie-backports part tells it which release to use.
Add gpg keys
Now that we've added jessie-backports we need to get the latest list of package versions. Type the following command:
sudo apt-get update
This will give two error messages about missing gpg keys. We'll need to add them with the following commands. Check that the keys given in the error messages match the ones below, and if not, replace the key values with the ones from the error messages.
gpg --keyserver pgpkeys.mit.edu --recv-key 8B48AD6246925553
gpg -a --export 8B48AD6246925553 | sudo apt-key add -
gpg --keyserver pgpkeys.mit.edu --recv-key 7638D0442B90D010
gpg -a --export 7638D0442B90D010 | sudo apt-key add -
Don't forget the hyphen on the end of the apt-key add lines. Now let's try that again.
sudo apt-get update
It should work this time and will check more repositories. I have occasionally had errors where apt is unable to fetch the files, but retrying the command a few minutes later works fine.
To tell apt we want a version from the jessie-backports repository we need to include -t jessie-backports in the install command, otherwise it simply picks the version from the jessie repository. For example:
sudo apt-get install -t jessie-backports boinc-client
This tells apt we want to install the boinc-client package from jessie-backports.
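If you want to confirm which versions are on offer before installing, apt can show you; for example:
apt-cache policy boinc-client
This lists the installed version (if any), the candidate, and the versions available from jessie and jessie-backports, so you can check the backports entry is actually being picked up.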
18 December 2016
18th of December
Farm Status
Intel GPUs
Finishing off a burst of Einstein gravity wave work
Nvidia GPUs
Two running Seti work
Raspberry Pis
Running Einstein BRP4 work
Other news
The two GTX1060's I ordered arrived. Unfortunately they don't have DVI-I output so I can't use my normal DVI to VGA adaptors. I bought a pair of DisplayPort to VGA adaptors from my usual PC shop. They turned out to have a mini-DisplayPort plug, while the graphics cards have full-sized DisplayPort sockets. I've ordered a couple of adaptors off eBay while I take the other ones back to the shop for a refund.
AMD gave an update on the Zen CPU, which is now being called Ryzen (pronounced Rye-zen). They are expecting to release it in Q1 2017.
I have also ordered a bunch of Cat5e network cables for the Pi cluster. I will add the Pi2's, which are gathering dust, to the cluster as well. They seem happy to run even when it's 38 degrees.
09 December 2016
9th of December
Farm status
Intel GPUs
All running Einstein gravity wave work
Nvidia GPUs
Two running Einstein gravity wave work
Raspberry Pis
All running Einstein BRP4 work
Other news
As mentioned above just about everything is running Einstein gravity wave work. It's a CPU-only application. Tasks take from 12 to 16 hours depending on the frequency of the work unit. The weather has been mostly hot, with a cooler day or two here and there when I have the entire farm going. The Intel GPU machines currently have 64 tasks running and the Nvidia GPU machines have 24.
I've been watching the EVGA 6Gb GTX1060's (as a replacement for both the GTX750Ti's and GTX970's). Nobody seems to have any in stock; even EVGA is out of stock. Once they become available I will try and get some to replace the existing GPU's.
There's supposed to be an official launch of AMD's Zen CPU in the middle of December. I will look at replacing my existing 6 core/12 thread machines (which use 140 watts) with the Zen 6 core/12 thread part that is said to only use 95 watts and is faster.
23 November 2016
23rd of November
Farm Status
Intel GPUs
Been off for a week, now running Seti
Nvidia GPUs
Off
Raspberry Pis
Running Einstein
Other news
Weather has been hot so everything apart from the Raspberry Pis has been off.
I got a letter from the electricity provider saying they are going to install a "smart meter" sometime between the 28th of November and the 9th of December. That's about as specific as they get. That means the farm will have to be off. This is of course to save them money (they don't need to come out and read the meter) and allows them to charge higher rates depending on the time of day.
I spent a bit of time last week scouting around for a small industrial unit to rent, to put the farm in. Somewhere well out of Sydney to keep the cost down. Unfortunately not too much success apart from one I looked at a few months ago. It would need a fit-out as it's just bare walls and a concrete floor at the moment. Ideally what I'd like is a cluster of machines in a server room type of setup.
The seven 1Tb SSHD's that I ordered unfortunately couldn't be delivered yesterday because nobody was at home to receive them. I will have to collect them from the post office tomorrow. They are destined to replace the 500Gb hard disks in the Intel GPU machines.
14 November 2016
14th of November
Farm Status
Intel GPUs
Running Seti work
Nvidia GPUs
Off
Raspberry Pis
Running Einstein BRP4 work
Other news
It's been hot for the last few days so the farm has been off apart from the Raspberry Pis. It's forecast to be cool (22-23 degrees C) for the next few days so the Intel GPUs are off and running.
I ordered 7 more SSHD's for the Intel GPU machines. There are 8 machines in the Intel GPU cluster. One has an SSD plus hard disk so I am not upgrading it.
I'm expecting the Einstein project will start to process their Multi Directed Gravity Wave Search soon. They released a few thousand work units as a "tuning" run which we finished off a few weeks ago. There should be the production run of the data starting any day now. It's a CPU-only app so I'll probably throw the Intel GPUs at it (which is why I upgraded their memory to 16Gb last month).
I'm still holding off buying GPU replacements given the reports of EVGA cards catching fire. My current thinking is that a GTX1060 will probably be a good upgrade from the GTX970's. It should be around the same speed but use a lot less electricity.
08 November 2016
Pi Drive
As you can see my Pi Drives arrived.
One thing to be aware of is that it's just a bare drive. You will also need some form of enclosure to protect the drive. Here is one of the enclosures they also sell for the Pi Drive. They have a few different types but I went for one that sits under the official Rpi case.
You get a miniature torx screwdriver to attach the screws and the rubber grommets to the drive. The other two pieces next to the screwdriver handle are for the LED which is a surface mount on the rear of the drive.
And here is the drive with its USB connector on the back. It's a 2.5" 500Gb drive that has been modified to work with the Rpi (i.e. slower spin-up time and capacity reduced to 314Gb).
And here is a side view after putting it into the case.
And a top view. You can see the plastic ridge that the official case sits within. The dimples are where the rubber feet on the bottom of the Pi case sit.
Here's the cable that connects them all up. It too is an optional extra.
Here is one of my modified Pi cases with a fan on top, sitting on top of the drive enclosure.
You can see the copper heatsink directly under the fan to keep the Pi cool. The cable on the top is part of the fan cabling that connects to the GPIO pins for power.
To move the root partition see http://markjatboinc.blogspot.com.au/2016/10/moving-rpi-root-partition.html which I posted in October. The Pi still boots off the SD card but the root drive is now on the Pi Drive.
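If you ever want to double-check which device a Pi is actually using for root after the move, the kernel command line it booted with shows it (just a quick sanity check on my part):
cat /proc/cmdline
The root= entry should point at the Pi Drive's partition (/dev/sda1 in my case) rather than /dev/mmcblk0p2.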
It cost me USD 98 for the two drives, enclosures and cables. The freight forwarding added another USD 50 to the price. But then one can't buy them in Australia.
It all worked out of the box. There are small parts such as the screws and the light diffuser for the LED which can be difficult to handle if you have big fingers. My only complaint about it so far is it's only 288Gb when formatted as ext4. They have a 1Tb drive available; it costs more of course but I haven't tried it.
07 November 2016
7th of November
Farm Status
Intel GPUs
Half running Seti work overnight
Nvidia GPUs
Two running GPUgrid work overnight
Raspberry Pis
All running Einstein BRP4 work
SSHD install
I installed the SSHD's into the GPUgrid crunchers. On the first one I cloned the original disk without problems. The second one, however, had Secure Boot enabled thanks to Microsoft, so I ended up resetting the BIOS and doing a clean install of Windows onto it. The Windows Experience rating for them was unchanged from the original hard disks they replaced (manufactured in 2007); however, they are noticeably quicker.
The third SSHD went into a Linux machine and I had to reinstall Linux on it. When I first powered it up the disk caused a short circuit. The metal plate the drive mounts on was in contact with the SATA power cable. After rearranging the drive placement it was off and running.
GPU replacements
There are reports of the EVGA GTX1070 and GTX1080 catching fire. EVGA are correcting the issues with a BIOS update and applying thermal pads around the power circuitry. Just as well I haven't replaced my GTX970's yet.
While I had the GPUgrid machines apart I did try swapping the dual GTX750Ti's into one. Unfortunately it seems the i3-6100T CPU doesn't have enough PCIe lanes and will run one card at x16 speed and the other at x4. I swapped them back with the GTX970.
GPUgrid have started testing a Pascal version of their acemd science app. Initial reports say that it works fine. It's using CUDA version 8 so up-to-date drivers are required.
One configuration that I'm thinking of for the GPU crunchers is to have the i3-6100T machines with a single GTX1060 in each. That would replace the GTX970's that are currently in them.
30 October 2016
30th of October
Farm Status
Intel GPUs
Running Seti and Seti beta overnight
Nvidia GPUs
Off
Raspberry Pis
Running Einstein BRP4 work
Other news
I finished the memory upgrades for the Intel GPU machines. They now have 16Gb of memory.
I've ordered 3 of the WD 1Tb SSHD drives (Solid State Hybrid Drives). I will be putting two of them into the Nvidia machines to see how they go. If things work out I will order more to replace the drives in the Intel GPU machines.
The Pi Drives have been shipped from the WD store in America. I haven't heard from the freight forwarding service yet.
I have reimaged another two Pi's this weekend as they seem to be getting a lot of inconclusive results, although it might not be the Pi's fault. A lot of them seem to be paired with Intel GPUs, which we know produce invalid results.
I'm looking at swapping the graphics cards around in the Nvidia machines. I have dual GTX750Ti's in the 6 core/12 thread machines and a single GTX970 in the i3 machines. The 6 core machines can only run a single slot at PCIe x16 and the second slot runs at x4. The i3 machines are able to run dual PCIe x16 slots.
22 October 2016
22nd of October
Farm status
Intel GPUs
Three running CPDN work. The rest are running Seti
Nvidia GPUs
Two running GPUgrid work
Raspberry Pis
All running Einstein work
Other news
For the Raspberry Pis I moved the root partition across from the MicroSD card onto an external hard disk on the one Pi that I was using to test the process. See the blog post prior to this for the process I used.
After successfully moving the root drive on the Pi I decided to order a couple of PiDrives, enclosures and cables from the States. It's the first time I've tried using one of the freight-forwarding companies so it will be interesting to see if they actually turn up and how long they take. It's cost USD 98.80 so far. I have to pay the freight forwarder as well and on top of that there is the exchange rate. It's certainly not a cheap option but then PiDrives aren't available in Australia.
The memory upgrades for the Intel GPU machines arrived during the week. I have already upgraded 5 of them. The other 3 are running CPDN climate models so I will wait until the climate models finish before upgrading them.
The GTX1050 and GTX1050Ti were officially released, however most suppliers have run out of stock. Looking at the EVGA ones specifically, they don't have a VGA connector, which is fairly normal these days, but they don't recommend using an adaptor either. That's an issue for me as my KVM's only have VGA connections.
Moving a Rpi root partition
The Raspberry Pi uses a MicroSD card as its primary storage. Unfortunately it is also prone to corruption. The following is the process I've used to move the root partition from the SD card onto a USB storage device.
I first tried it on a Maxtor OneTouch II external hard disk that I had sitting around. It gave me an error when I tried to write the partition table to the disk, so it may not work on all devices. I also tried it on a SanDisk Cruzer Switch (thumb drive) and lastly on a Seagate Expansion external hard disk. The last two worked fine. As they say, your mileage might vary.
The Pi isn't very good at powering USB devices like hard disks, so use a powered hard disk (i.e. one that comes with its own power pack) or you could use the PiDrive adaptor cable that supplies power to both the Pi and the external hard disk. Thumb drives should be fine as they require minimal power.
I can't take credit for the instructions; I got them from a Raspberry Pi forum post from 2013. It can be found at https://www.raspberrypi.org/forums/viewtopic.php?f=29&t=44177
The instructions below are done in a terminal window.
Start-up Pi
Start off with a clean Jessie or Jessie Lite image on the micro-SD card. Boot up and change a few settings via Raspi-Config (sudo raspi-config). The HDD or thumb drive should not be plugged into the Pi at this point.
- Change password
- Change locale as appropriate
- Change the timezone as appropriate
- Change memory split to 16Mb
- Exit and reboot
Find the device name
Plug in the HDD or thumb drive and type “tail /var/log/messages” (without the quotes). The log should look something like this:
Oct 19 20:43:53 pie39 kernel: [512215.189155] usb 1-1.2: SerialNumber: L2069VYG
Oct 19 20:43:53 pie39 kernel: [512215.190065] usb-storage 1-1.2:1.0: USB Mass Storage device detected
Oct 19 20:43:53 pie39 kernel: [512215.208124] scsi host0: usb-storage 1-1.2:1.0
Oct 19 20:43:54 pie39 kernel: [512216.272762] scsi 0:0:0:0: Direct-Access Maxtor OneTouch II 023g PQ: 0 ANSI: 4
Oct 19 20:43:54 pie39 kernel: [512216.282521] sd 0:0:0:0: Attached scsi generic sg0 type 0
Oct 19 20:43:54 pie39 kernel: [512216.328166] sd 0:0:0:0: [sda] 195813072 512-byte logical blocks: (100 GB/93.4 GiB)
Oct 19 20:43:54 pie39 kernel: [512216.383210] sd 0:0:0:0: [sda] Write Protect is off
Oct 19 20:43:54 pie39 kernel: [512216.438164] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Oct 19 20:43:55 pie39 kernel: [512216.629457] sda: sda1
Oct 19 20:43:55 pie39 kernel: [512216.849474] sd 0:0:0:0: [sda] Attached SCSI disk
We can see that device sda has been assigned. If you got another device name (eg sdb) then change the commands below to refer to /dev/sdb instead.
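If the log has already scrolled past, another way to double-check which name the drive was given is to list the block devices:
lsblk -o NAME,SIZE,MODEL
The external drive should appear as sda (or sdb, etc.) with its size and model next to it, alongside the mmcblk0 entries for the SD card.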
Partition disk
Start fdisk so we can see the partitions and create one as needed. Type “sudo fdisk /dev/sda”. The command p will display the partitions:
Welcome to fdisk (util-linux 2.25.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): p
Disk /dev/sda: 93.4 GiB, 100256292864 bytes, 195813072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0b2461bf
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 195809279 195807232 93.4G 7 HPFS/NTFS/exFAT
Let's delete it (d command) and create a new primary partition (n command). We'll use the defaults for everything. Once done we'll write it to disk (w command). If you need to exit without writing it to disk then use the q command.
My Seagate Expansion had 4 partitions. They show up as extra lines like the one above, with /dev/sda2, /dev/sda3 and /dev/sda4 under the Device column. I had to repeat the delete command for each partition.
Command (m for help): d
Selected partition 1
Partition 1 has been deleted.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (1-4, default 1):
First sector (2048-195813071, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-195813071, default 195813071):
Created a new partition 1 of type 'Linux' and of size 93.4 GiB
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
If you get an error message after that ioctl line then you know your device is not compatible, like my OneTouch II.
Format disk
We now need to format the partition in the ext4 format with the following command “sudo mke2fs -t ext4 -L OneTouchII /dev/sda1”. The word after the -L parameter is the volume label. You might want to name it something other than OneTouchII.
mke2fs 1.42.12 (29-Aug-2014)
/dev/sda1 contains a ext4 file system labelled 'OneTouchII'
last mounted on Sat Oct 1 22:35:25 2016
Proceed anyway? (y,n) y
Creating filesystem with 24476378 4k blocks and 6119424 inodes
Filesystem UUID: d335ad29-9708-478a-80aa-9dabf0e3cdeb
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Let's mount the disk by typing “sudo mount /dev/sda1 /mnt”
Check that we can see it by typing “df -h” and we should get something like this:
Filesystem Size Used Avail Use% Mounted on
/dev/root 15G 1.2G 13G 9% /
devtmpfs 483M 0 483M 0% /dev
tmpfs 487M 0 487M 0% /dev/shm
tmpfs 487M 6.5M 481M 2% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 487M 0 487M 0% /sys/fs/cgroup
/dev/mmcblk0p1 63M 21M 43M 33% /boot
/dev/sda1 92G 60M 88G 1% /mnt
Copy root across
Let's get rsync installed so we can copy all the files. Type “sudo apt-get install rsync”. This should install the rsync package.
Copy the root partition across by typing “sudo rsync -axv / /mnt”. This can take around 15 minutes.
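If you're nervous about the copy, rsync also has a dry-run flag you can use first (an optional extra, not part of the original instructions):
sudo rsync -axvn / /mnt
The -n makes it list what would be copied without writing anything; drop it to do the real copy.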
Point root to the HDD
Back up the cmdline.txt file by typing “sudo cp /boot/cmdline.txt /boot/cmdline.orig”
Display it by typing "cat /boot/cmdline.txt". It should look like this:
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait
Edit the file (sudo nano /boot/cmdline.txt), change the root= entry to our new partition and add a rootdelay of 5 seconds. After we're done it should look like this:
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/sda1 rootfstype=ext4 elevator=deadline rootwait rootdelay=5
Edit /mnt/etc/fstab (sudo nano /mnt/etc/fstab) and add the following:
/dev/sda1 / ext4 defaults,noatime 0 1
And then comment out the memory card entry (put a # symbol at the front)
#/dev/mmcblk0p2 / ext4 defaults,noatime 0 1
It should now look like:
proc /proc proc defaults 0 0
/dev/sda1 / ext4 defaults,noatime 0 1
/dev/mmcblk0p1 /boot vfat defaults 0 2
#/dev/mmcblk0p2 / ext4 defaults,noatime 0 1
# a swapfile is not a swap partition, so no using swapon|off from here on, use dphys-swapfile swap[on|off] for that
Reboot (sudo reboot)
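After the reboot it's worth confirming the Pi really is running from the new partition. df can be a bit misleading here because it reports the root device as /dev/root, so lsblk is the clearer check:
lsblk
The sda1 line should show / as its mountpoint and mmcblk0p2 should no longer be mounted anywhere. If it still is, re-check the cmdline.txt and fstab changes above.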
16 October 2016
16th of October
Farm status
Intel GPUs
Running Seti overnight and some Seti-beta on the iGPU
Nvidia GPUs
Did a burst of Seti on two of them during the week, otherwise off
Raspberry Pis
Running Einstein BRP4 work
Other news
Einstein have started a "Tuning run" of their new Gravity Wave search. From initial reports work units use up to 1.5Gb of memory each. The 4 core/8 thread i7 machines only have 8Gb. I have ordered memory upgrades for 8 of them.
Two of the Raspberry Pi3's were throwing validation errors on the work they completed so I reimaged and reinstalled them. Fortunately it doesn't take too long to rebuild them. The Rpi is prone to memory card corruption from time to time. I've read that it doesn't happen if you use some other media for the root partition (eg an external HDD or USB memory stick).
I was running a test Seti multi-beam app as the developers are looking at why the Intel iGPU's throw so many errors. They did some code changes which made things worse. They are looking at trying to disable the optimisations that get applied by the OpenCL compiler.
09 October 2016
9th of October
Farm Status
Intel GPUs
Running Seti work overnight
Nvidia GPUs
Off
Raspberry Pis
Running Einstein BRP4 work
Pi news
For the last few weeks I have been running the optimised Raspberry Pi app and having all sorts of issues. I have taken all 9 of the Pi3's back to Jessie (from Stretch) because tasks take around a third longer under Stretch.
The in-place app has issues, however the out-of-place app seems to work fine. The drawback with the out-of-place app is it needs more memory and so I am limited to running 3 tasks at a time to fit within the 1Gb of memory the Pi3 has.
Stock app: approx. 42k seconds per task x 4
In-place app: approx. 23.7k seconds per task x 4
Out-of-place app: approx. 18.8k seconds per task x 3
The author of the optimised app is looking into the issues with the in-place version. Hopefully we'll get it working and it will be the app of choice.
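For a rough sense of what that means in throughput, here's some quick shell arithmetic using the times above (assuming the Pi crunches around the clock):
# completed tasks per day = concurrent tasks * 86400 / seconds per task
echo $(( 4 * 86400 / 42000 ))   # stock app: ~8 tasks a day
echo $(( 4 * 86400 / 23700 ))   # in-place: ~14 tasks a day
echo $(( 3 * 86400 / 18800 ))   # out-of-place: ~13 tasks a day
So if the in-place app can be made reliable it edges out the out-of-place one, simply because it fits four concurrent tasks.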
In other Pi-related news our Raspberry Pi team quietly crossed the magic number of 3.14 million credits. I was hoping we might have got a mention on the official Raspberry Pi blog or Magpi (their magazine) but we haven't so far.
I am also trying to sketch together a design for a Pi rack as I am annoyed with the Pi's being scattered around one corner of my computer room. The idea is to get the drawings done in some CAD software that can then be used by a 3D printer to make most of the parts. More on this once I've made some progress. I do have rough hand-drawn ones but need to get it done properly.
Non-Pi news
GPUgrid is having issues with providing sufficient tasks to keep all the crunchers busy. They are also awaiting the release of the CUDA 8.5 toolkit. The CUDA 8.0 toolkit was released about a week ago so I am not sure how long it will take before 8.5 becomes publicly available.
CPDN had a bunch of new Weather at Home 2 models but they seem to have all disappeared at the moment, as is typical with them - lots of tasks, then none.
I'm still trying to get Seti credits up to the same level as Einstein and Asteroids (35 million and 34.5 million respectively). Unfortunately Seti doesn't give as much credit as the other two projects so it takes longer to get to the same level. Seti has been giving out a lot of noisy work units recently that finish with an overflow error in 3 seconds flat.
The weather is getting increasingly warm and so it's harder to run the farm 24/7. At the moment I am running work overnight only.
18 September 2016
18th of September
Farm Status
Intel GPUs
Running Seti work overnight
Nvidia GPUs
Off
Raspberry Pis
Running Einstein BRP4 work
Rpi news
One of the users over at Einstein has optimised the BRP4 app to run in almost half the time that the project-supplied app takes for the same work. It's for the Pi3 only.
Since the 3rd of September I have noticed tasks on all the Pi3's were taking longer. Some updates in Debian Stretch seem to have made them take 63k seconds instead of 42k seconds. I am now in the process of going back to Jessie-Lite and then putting the optimised app on them. The optimised app reduces the time by almost half depending on which of the two flavours you run. There is one using an In-place FFT that uses less memory (approx. 137Mb per task) and takes around 24k seconds. There is also an Out-of-place FFT version that uses more memory (approx. 201Mb per task) and takes around 21k seconds. As I don't run a GUI on them I can just squeeze 4 of the out-of-place into the 1Gb memory that the Pi3 has.
I tried putting the optimised app on Pi #9 and it kept locking up. Strangely enough if I upgrade to Stretch it works, but tasks take about 30k seconds. I think this Pi3 is faulty as it had some issues when I first got it. It seems okay to run the project supplied app so I will just leave it running that for the moment. I have updated #1 and #2 which seem to be working fine. The remainder of the Pi's are running down their cache so I can put the optimised app on them.
Cleaning
As the Intel GPU machines have been running pretty constantly I did a round of cleaning dust filters. I have four of the older model Fractal Design ARC Midi cases where you have to remove the entire front panel to clean the dust filter. The newer ones have a clip-on filter which is much easier to remove. I found one machine had a dead fan. I didn't have a spare 140mm fan so I took the rear fan off, swapped it to the front and put a Noctua 120mm fan on the back. It looks like I might need to buy some spare 140mm fans as replacements.
30 August 2016
30th of August
Farm status
Intel GPUs
Running Seti overnight. Three finishing off CPDN tasks
Nvidia GPUs
Running Seti overnight on two machines.
Raspberry Pis
Running Einstein BRP4 work
Other farm news
The two machines with the mATX motherboards have now been transplanted from mATX cases into ATX cases which allows for a 2nd GPU to be installed later. If the rumours are correct Nvidia is expected to announce availability of the GTX 1050 in September.
The weather has warmed up so I am now running machines overnight even though winter hasn't officially ended.
Some of the Intel GPU machines have been running Weather at Home 2 climate models which have been taking as long as 220 hours depending on the region that the work unit covers. I still have a few left to complete before I can apply updates to the machines. CPDN work units usually don't like being interrupted so I try and finish off any that are running before applying updates.
Einstein Intel OpenCL testing
Einstein got some information from Intel regarding their OpenCL not working and have adjusted some tolerances in their validator. Intel were saying that their implementation of the fused multiply-add instructions is more accurate than before.
I ran a batch of OpenCL tasks on two machines using the Intel GPU, with mixed results. Some work unit types (BRP4) mostly validated while others (BRP6) all failed to validate. The BRP6's were taking around 8 hours per work unit to process. One of the machines fetched way too much work, which I ended up having to abort after a few days of running in high-priority mode.
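As an aside, rather than clicking through the BOINC Manager this sort of clean-up can also be done with the boinccmd command line tool; roughly (the placeholders are whatever project URL and task names apply):
boinccmd --project <project_url> nomorework
boinccmd --task <project_url> <task_name> abort
The first stops the client fetching any more work from that project; the second aborts an individual task.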
Farm configuration
The Intel GPU part of the farm consists of 8 machines with i7-6700 CPU's and HD Graphics 530 built in.
There are 4 Nvidia GPU machines. Two of them have i7-5820K CPU's with a pair of GTX750Ti cards. The other two have i3-6100T CPU's with a single GTX970 each.
There are 9 Raspberry Pi3's.
20 August 2016
20th of August
Farm status
Intel GPUs
Half running Seti and the other half running both CPDN and Seti
Nvidia GPUs
Two off at the shop
Raspberry Pis
All running Einstein BRP4 work
Farm updates
The two new dedicated GPU crunchers are in the shop. They are going from a Fractal Designs ARC Mini (mATX) case to an ARC Midi (ATX) case. That should allow me to use the bottom PCIe slot in them without hitting the power supply. I looked at getting the Asus Z170 Pro motherboard but only the gaming version seems to be available so I took the cheaper option and went with moving the existing mATX motherboards into a bigger case.
While dropping the two GPU crunchers off at the shop this morning I picked up the remaining i7-6700, which has now been set up. It's running a bit of each project to get some numbers up. I will switch it over to Seti once it finishes off its Einstein work. This brings the Intel GPU part of the cluster up to 8 machines.
While this is going on there are a bunch of updates for the Raspberry Pi's so I am backing them up and then updating. The updates don't take too long but backing up the SD card takes a while.
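For anyone wondering why it takes a while: backing up a card is basically imaging the whole thing. On a Linux box with a card reader it's something along the lines of the following (substitute the card reader's actual device for sdX, and be careful, dd will happily overwrite the wrong disk):
sudo dd if=/dev/sdX of=pi-backup.img bs=4M
Restoring is the same command with if= and of= swapped, and with cards being several gigabytes it's never quick.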
Future upgrades
As I mentioned in my last post I am awaiting some movement in the GPU crunching space, and for AMD to show what they've got available. A couple of interesting message threads I've been reading lately concern a 40 core CPU cruncher using dual Xeon v4 processors. It looks like it could be a replacement for my two i7-5820K machines. All I can do at the moment is wait.
One thing I am looking at doing is replacing the disks in the various crunchers. Some of them have an Intel SSD plus a hard disk and the remainder have hard disks of varying sizes. I prefer to have things standardised. The hard disks that went into the dedicated GPU crunchers are 320Gb drives manufactured in 2008 that I have reused from previous machines. They work fine but are rather slow.
14 August 2016
14th of August
Farm Status
Intel GPUs
Half (3) running Seti and the other half running CPDN plus Seti
Nvidia GPUs
One running GPUgrid overnight.
Raspberry Pis
All running Einstein BRP4 work
Project update - Einstein
They were off for the week while upgrading their web site. It's still got a few issues, but it's running and the backend BOINC components are also running. In fact the Raspberry Pi's didn't run out of work, so I have reduced their cache settings back to a more reasonable level.
Hardware upgrades
I got the two GPU crunchers last weekend. They are both i3 based systems whose purpose is to run a graphics card or two. I'm not happy with them. Sure they work okay, well one does. One has a faulty case fan so until it's replaced I can't run the GPU in it. The problem is they are mATX boards in a Fractal Designs ARC Mini case and there isn't enough room to put a 2nd GPU in. The case is designed for motherboards up to mATX size. The motherboard has a 2nd PCIe slot, but it's so close to the edge of the board that a double-slot card won't fit because the power supply is in the way.
I have ordered a couple more ARC Midi cases and Z170 Pro motherboards (they're ATX size) and will cannibalise the mATX machines for parts.
I picked up one of the two i7-6700's on the weekend. It's up and running. It did some Einstein and some Asteroids work as a burn-in and is now running Seti work. This brings the Intel GPU part of the farm up to 7 machines.
Wish list
I am waiting on a Pascal-based replacement for the GTX750Ti to be announced by Nvidia. I have no idea whether they'll call it a GTX1050Ti, or whether it will be as good as the 750Ti.
I am also waiting for AMD Zen to be released. They sound like they'll make good multi-core CPU crunchers, but we'll have to wait and see. The teasers we've been given by AMD indicate they'll have an 8 core/16 thread CPU that uses a maximum of 95 watts. We don't know yet what its performance will be like.
31 July 2016
31st of July
Farm status
Intel GPUs
All running Seti work
Nvidia GPUs
The two remaining ones are running Seti work
Raspberry Pis
All running Einstein BRP4 work
Other news
As mentioned in my last post, Einstein are taking their web site off-line from the 1st to the 5th of August so they can upgrade it. The work unit processing hopefully won't be off-line for as long, but just in case I have increased the cache settings on the Raspberry Pis.
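If you want to do the same from the command line rather than through the web preferences, one way (a sketch only, assuming the Debian/Raspbian boinc-client package, which keeps its data in /var/lib/boinc-client) is to put the settings in a global_prefs_override.xml and get the client to re-read it:
sudo nano /var/lib/boinc-client/global_prefs_override.xml
with contents along these lines, where the two values are the "store at least" and "store up to an additional" days of work:
<global_preferences>
<work_buf_min_days>4</work_buf_min_days>
<work_buf_additional_days>0.5</work_buf_additional_days>
</global_preferences>
then tell the client to pick it up:
boinccmd --read_global_prefs_override
Depending on how the client is locked down you may need to run that from the data directory or supply the RPC password, and drop the values back down once the outage is over.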
I have transitioned all the machines from Asteroids work onto Seti. Asteroids has reached, and now passed, my Einstein credit of 35.6 million, which was my target. The switch to Seti is to bring its credit up to the same level.
As of Friday (the 28th) the computer shop still didn't have the motherboards needed to build the 4 machines on order. That is two more Intel GPU machines (mainly used for CPU work) and two GPU crunchers. I've dropped off various bits that are getting reused from the older machines such as power supplies, hard disks and a pair of GTX970's to go into them. I'll probably get all 4 next weekend now.
Microsoft pushed the Intel beta graphics driver 4474 for the HD Graphics 530 out via Windows Update in the last week as a required update. All my Intel GPU machines now have it. It contains a fix for people having issues with 1080p versus 1080i screen modes, which didn't affect me. It doesn't fix their OpenCL issue.
24 July 2016
24th of July
Farm Status
Intel GPUs
All six crunching Asteroids
Nvidia GPUs
Two remaining are running Asteroids
Raspberry Pis
All nine crunching Einstein
Project news - Einstein
They are going to be taking the project off-line from the 1st of August to the 5th of August. This is so they can convert to their new format website that they have been running on Albert for quite some time. They probably won't need the back-end servers off-line for so long.
Farm news
Given the length of time Einstein will be off-line I have already increased the cache on the Raspberry Pis so they have enough work to ride out the outage. I also took the opportunity to update all of them to the latest version of Raspbian.
I am number 10 in Asteroids' top participant list (by recent average credit). This is despite being down two machines that were retired last week. I am waiting for the replacement machines to be assembled by the computer shop. Hopefully they'll be ready for next weekend. I also need to chase the shop as to what is happening with the replacement GPU crunchers.
My internet speed is rather slow these days. I had a service call with the phone company last week. They found two faults, one on the incoming line and one between the incoming line and a second communications cupboard. They swapped the lines over but that doesn't seem to have helped.
I am considering swapping internet provider to one that offers ADSL 2+ annex M support, thus improving upload speed but sacrificing some download speed. I have ordered a new ADSL router for this but it has yet to arrive.
16 July 2016
16th of July
Farm status
Intel GPUs
All running Asteroids
Nvidia GPUs
One running Asteroids, one off and the other two in pieces (see below)
Raspberry Pis
All running Einstein BRP4 work
Machine shuffling
As mentioned above, two of the Nvidia GPU machines are in pieces. They've been decommissioned while I get a couple more of the i7-6700's. Actually cannibalised is probably a better description. I've also got a couple of i5's on order that will become two new Nvidia GPU machines. The computer shop is still awaiting the Asus motherboards.
After this the Intel GPU part of the farm will consist of 8 x i7-6700's. The idea is to have CPU crunchers (the i7-6700's) and GPU crunchers with one or two GPU's each. I still need to get the dual GTX750Ti cards into dedicated GPU crunchers too, but the GTX970's that came out of the two machines will be done first. Oh and I have a bunch of i7-3770 based motherboards looking for a home.
The newer Nvidia 1070 and 1080 cards are too expensive to consider at the moment, but I will look at replacing the GTX970's once the price is more reasonable.
07 July 2016
7th of July
Farm status
Intel GPUs
All crunching Asteroids work
Nvidia GPUs
One crunching Asteroids work. The others are off.
Raspberry Pis
All crunching Einstein work
Other news
I'm in the top 20 participants list at Asteroids and I am not even running all the machines.
I was running GPUgrid work earlier in the week, but it's been raining for most of the week so they've been off.
The two i7-6700's that I ordered a couple of weeks ago are awaiting motherboards. Apparently ASUS managed to sell everything in Australia by June 30th (our end of financial year) so the motherboards got back-ordered. They aren't expected to be in stock for a couple of weeks now. I am supplying the hard disks so will be taking the i7-3770's out of service for parts.
The Raspberry Pi's don't seem to like running off the Anker Powerport 6. I had one flashing its power LED and another that went off completely. I plugged both into the official 2A power adapters, which seemed to fix their problems.
I was looking at a dual-Xeon machine, but after reading some news about AMD Zen processors I will wait to see what happens at the end of this year. AMD are targeting the extreme processor market (I have two i7 extreme processors). They will have 8 cores/16 threads and lots of PCIe lanes. We will have to see how they perform.
25 June 2016
25th of June
Farm status
Intel GPUs
Finishing off Einstein O1 search and switching to Asteroids and CPDN work
Nvidia GPUs
Just finished the Einstein O1 search, now running GPUgrid work
Raspberry Pis
Running Einstein BRP4 work
Project news - Einstein
Their O1 search has pretty much ended. It will take a couple of weeks for the remaining work to be returned. The only work units available now are resends of failed work units. Hopefully they won't take too long to get the next search running in a different frequency range.
Hardware purchases
I've ordered a couple more i7-6700's to replace the two i7-3770's. The i7-3770's each have a GTX970 which will need to go into a GPU-based machine.
For the GPU-based machines I've selected motherboards and they'll have a low power i3 CPU to run them. I still need to find a suitable case. The Fractal Designs ARC Midi cases that I normally use have been discontinued and are proving hard to find.
Nvidia released GTX1080 and GTX1070 graphics cards recently so once the prices drop a bit and supply becomes plentiful I may look at replacing the GTX970's. They don't work on GPUgrid yet so there's no hurry.
Intel GPU drivers
They've released a new driver 4464 for the HD Graphics 530. Unfortunately they still haven't got their OpenCL working so I can't use the Intel GPU's for anything useful. Despite opening a ticket with Intel in February there doesn't appear to be any progress in fixing their drivers.
19 June 2016
19th of June
Farm status
The farm is divided into 3 types of computers: the Intel GPUs, which have a GPU embedded in the main processor; the Nvidia GPUs, which have Nvidia graphics cards added; and the Raspberry Pis. The GPUs can be used for calculations in addition to the CPU, however not all projects have apps that can use the GPU.
Intel GPUs
All running Einstein O1 search (CPU only)
Nvidia GPUs
Two running Einstein O1 search (CPU only)
Raspberry Pis
All running Einstein BRP4 search
Project news - Einstein
Now that we've got most of the way through the O1 search they have found that the data is quite noisy and so are planning on cleaning it up a bit before the next search starts. The next search will scan different frequencies.
The O1 search had smaller work units available for the slower computers. There were fewer of these smaller work units and they've now all been processed. Only full-size work units are left, and they have been opened up to all computers as we near the end of this search.
Pi news
This week I've been running through the Pi's and updating them. There were quite a few updates in Raspbian Stretch and BOINC 7.6.33 also came out. I've been doing them one at a time so as to minimise the impact on their output. They usually get a new computer ID so I need to finish off any work in progress before upgrading them.
I have put the possibly faulty Pi back in service as number 9. I don't have a case for it so it's just sitting on top of a cardboard box at the moment. I started off with a clean Jessie lite image on a new SD card and upgraded it so it's the same as all the others.
As part of the tidy-up of the Pi's I got a 5 port 40 watt USB charger. I have plugged 4 Pi's into it at the moment and they seem to be working fine. If all 5 ports are used it can only supply 8 watts to each device, and the recommended supply is 10 watts, so it probably can't handle 5 at once. It's just a cheap no-name one that the computer shop had. I will give it a few days to see how reliable it is. I need to get more USB A to Micro-USB B cables if I am going to use these to replace the individual power adapters.
Future upgrades
I was in the computer shop on the weekend and started off looking at a dual-Xeon build, however getting parts was one problem (the motherboard wasn't available in Australia). The case was going to be another problem as it was an SSI EEB sized board. Lastly, dissipating 180 watts or more of heat from the case was going to be an issue. In the end I gave up on that idea.
I got a quote for another 6 core/12 thread machine using the Asus X99-A II motherboard. It wasn't as expensive as I thought and I already have two like it. However I think I might just get a couple more i7-6700 machines as they are fairly energy efficient and don't produce too much heat. They'll replace the two remaining i7-3770 machines that also have GTX970 graphics cards installed. I will then need to find a couple of fairly low-powered machines to drive graphics cards.
11 June 2016
11th of June
Farm status
Intel GPUs
All running Einstein O1 search
Nvidia GPUs
All running Einstein O1 search
Raspberry Pis
All running Einstein BRP4 search
Pi news
Got 2 more Pi3's this week.
Added number 8 to the farm. I went to copy a backup from another one and write it to the new SD card, only to have Win32DiskImager complain the new SD card was smaller. They are all SanDisk Ultra 16Gb cards. I ended up clean installing it, starting with Jessie-lite, upgrading it to Stretch and then updating the kernel, which gave me 4.4.12.
I swapped out number 7 thinking it was faulty. I went to upgrade the kernel and bricked it. Apparently it wasn't faulty. So another clean install from Jessie-lite, an upgrade to Stretch and a kernel update, which is now up to 4.4.13.
Tonight I tried upgrading Stretch on number 1 and bricked it too, so it looks like I am going to have to go through each one of them, clean install Jessie-lite and upgrade them.
Other news
I reorganised the shelving so I could get the proxy server and some of the Pi's off the floor. That bit is done.
I still need to find a better solution for the Pi's. There are a few issues with having so many of them. The first one is around power cables/adapters. I currently have 8 and need a couple of power boards to plug them all in. The second issue is how to house them. Currently I have each one in an official case. Most of the multi-Pi builds seem to stack them using something like the Beaglebone approach, which is simply a piece of Perspex in the shape of a bone and a bunch of metal stand-offs. I could put all of the Pi's in a row on some Perspex and then have another layer above with holes to mount the 40mm fans, held up by metal stand-offs. It's just an idea at the moment.
04 June 2016
4th of June
Farm status
Intel GPUs
All crunching Einstein O1 search
Nvidia GPUs
The 6 core/12 thread machines have been doing the Einstein O1 search until today. Currently they're off.
Raspberry Pis
All crunching Einstein BRP4 work
Pi news
The Pi cases finally arrived this week. They are the official Pi3 case and apart from the LED windows being on the opposite side they're identical to the official Pi2 case. I would have thought they would have a cut-out for the WiFi but they don't.
Pi number 7 seems to be faulty. It's been crashing every couple of days. I've even taken to rebooting it regularly to try and prevent it locking up. I updated the firmware/kernels on all the other Pi's today, however on number 7 it bricked and I had to restore the SD card image. I'll order another couple of Pi3's. One will be the replacement and the other will become number 8. I've only got two spare heat sinks so that will use them up.
And this is what the Pi part of the farm looks like:
As you can see it's rather messy. I have chrome shelving racks that the big machines sit on, and that's the legs you can see (with some spare ones lying across the front). There is another Pi3 out of shot sitting on one of the shelves.
Other news
There is a bit of chatter on the forums about the new Nvidia 1080 cards, which from early reports don't seem to be a great advance, at least from the number crunching perspective. It's still early days and apps may need updating to take advantage of them. The 1070 is due out later this month and we're not sure when the lower-end cards will be released.
We've had a bit of wind and rain for the last couple of days, but it's set to clear up tomorrow. That is the reason why the 6 core crunchers are off at the moment. I have to keep the loft window closed and it gets too hot with them running.
I have been doing a bit of research into getting a multi-CPU machine. I am tinkering with the idea of a dual CPU motherboard but the 180 watts of heat to dissipate for a pair of Xeon E5-2640v3 CPU's is going to be a problem. Then there is the price tag.
24 May 2016
24th of May
Farm Status
Intel GPUs
All 6 i7-6700's are running Einstein O1 search (CPU only)
Nvidia GPUs
Both 6 core/12 thread machines are running Einstein O1 search (CPU only)
Raspberry Pis
All 7 Pi3's are running Einstein BRP4 search
Status
Pretty much everything that can run it is doing the Einstein O1 search, apart from two machines. It was warmer earlier in the week so they were only running overnight. The weather has turned cooler so they're running 24/7 at the moment.
I had one Intel GPU machine process some Climate Prediction Weather at Home 2 tasks. They took around 220 hours to complete. Not exactly quick. In comparison the Einstein O1 tasks take 12 to 13.5 hours depending on which machine is running them.
I tried Intel's beta video driver 4444 on one of the i7-6700's last week to see if they have finally fixed their OpenCL errors. Nope. I raised a bug with Intel in November 2015. I was looking at a 16 core Xeon cruncher, but it and some other purchases are on hold until Intel fix their bugs.
Pi stuff
The Raspberry Pi's have reached a recent average credit of 3000 after a couple of weeks running 24/7. I have had to replace an SD card in one Pi as it's been getting slower and slower. Hopefully the new SD card will resolve the issue.
I can't update Raspbian Stretch. Every time I do, the Pi loses the network interface after rebooting and I have to reimage the SD card. I reported it two weeks ago in the Raspberry Pi forums.
I am still waiting on the Raspberry Pi cases and Noctua fans. The cases have been on back-order for a while now. The computer shop didn't order the fans so I had to remind them this week. Three of the Pi3's are naked and just sitting on the cardboard boxes they came in. I am thinking of getting another Pi3 to add to the farm.
14 May 2016
14th of May
Farm Status
Intel GPUs
Running Einstein O1 search overnight. One machine running 4 Weather at home tasks (70% done after 170 hours).
Nvidia GPUs
The 6 core/12 thread machines are running Einstein O1 search overnight.
Raspberry Pis
Running Einstein BRP4 work 24/7.
BOINC testing
We've had 7.6.32 for a little while. I finally put it onto the Windows machines today. I am still waiting for it to come through for the Raspberry Pi's, which are on .31 at the moment. The main change is to allow for multiple download servers, so if a download fails it tries the next server on the list.
Parallella's and Pi2's
As I mentioned a couple of posts back I was considering retiring them. They were removed from the farm. The Parallella's had a nice powder-coating of dust due to the fan on top blowing air straight in so I had to give them a clean. Currently they're off.
The Pi2's were also removed from the farm and I have used them for the Beowulf cluster that I wrote the tutorial for. Currently they're off.
Other stuff
With the above Pi3 changes the Pi part of the farm consists of 7 Raspberry Pi3's. Due to the number of power boards and power adapters I am looking at the Anker USB hubs to rationalise the power side of things.
Nvidia has a new GPU chip called Pascal coming soon. There will be different versions of it. We're expecting the GTX1080 as a replacement for the GTX980 at the end of May. There is also a GTX1070 due out in June. If I'm replacing any of the GPU cards I would probably get rid of my GTX970's first and then look at upgrading the GTX750Ti's, but we will have to see how they perform.
08 May 2016
Raspberry Pi Beowulf Cluster
A few weekends ago I spent a bit of time trying to make sense of the various instructions for setting up a Beowulf cluster using Raspberry Pi's. What I have below is the steps I took with a bit of trial and error to get it going.
With Beowulf you have a Head or Master node that is used to control the compute nodes. You'll need a minimum of 2 Pi's: one for the head node and one as a compute node. You can have as many compute nodes as you wish. In this example I am just doing a single compute node cluster.
Parts
a. Raspberry Pi's: one for each compute node you want, plus 1 for the head node
b. microSD cards: one per node (minimum 4Gb)
c. Network cables: one per node
d. Power adapters/cables: one per node
If you're not comfortable using the Linux command line then this isn't the best project for you as there is no GUI when using SSH.
I have a Windows computer that I use to access the Pi's via SSH and it has an SD card writer. The software I use is PuTTY for accessing the Pi's and Win32DiskImager to read/write images to the SD cards.
As I only did two nodes I updated each one from the Jessie release of Raspbian to the Stretch release. If you are doing a larger number of nodes you might want to write Jessie-Lite onto the SD card, get it upgraded to Stretch and then take a copy of that image and use it for the other nodes.
Create SD card image
1. Download the Raspbian image and unpack it. I started with the Jessie Lite version from March 2016 as it was the latest available version and doesn't come with too much extra stuff.
2. Write the Raspbian image to the microSD card.
3. Insert the microSD card into the Pi, plug all the other bits in and power it up.
4. At this point I have a Pi called "raspberrypi" on my network and the router has automatically given it an IP address of 192.168.0.150. I need to give it a different name from the default and a fixed address. I can see it via my router and assign a specific IP address there; I am setting the router up to use 192.168.0.100. When the Pi is rebooted it will get this new IP address. (If your router can't reserve addresses, see the note at the end of this step.)
Login to the Pi over SSH. The default user is "pi" and the password is "raspberry" (without the quotes). At the command prompt run raspi-config by typing "sudo raspi-config".
- Expand the filesystem
- Change the user password
- Change the name of the Pi
- Change the memory split (I usually set it to 16)
- Set the locale
- Set the timezone
And reboot
For the first one I called it HeadNode as it will be the head of the cluster.
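As mentioned in step 4, if your router can't reserve addresses you can instead give the Pi a static address itself. This is a sketch only, assuming the dhcpcd setup that Raspbian Jessie and later use: add something like the following to the end of /etc/dhcpcd.conf (adjust the addresses to suit your network) and reboot.
interface eth0
static ip_address=192.168.0.100/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1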
5. Login to the Pi again using your new password and we can now update it. Edit /etc/apt/sources.list to point to the stretch release (change the word jessie to stretch). I use nano but there are other text editors. Comment out all the lines in /etc/apt/sources.list.d/raspi.list by putting a # symbol in the first column.
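If you'd rather not edit the two files by hand, the same changes can be made with sed. This is a sketch that assumes the stock file locations:
sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list
sudo sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/raspi.list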
6. Type "sudo apt-get update" and it should fetch the latest list of programs. This next bit takes some time, maybe an hour or two. Type "sudo apt-get dist-upgrade -y" to upgrade everything to the latest versions from the Raspbian repository to the stretch release. Once done you can reboot it.
7. Write the Jessie-Lite image to another microSD card. Insert it into the next Pi. This one is going to be our compute node. Power it up and repeat step 4. For this one I have called it ComputeNode1. Again I have assigned a specific IP address on the router as 192.168.0.101. Update it as per points 5 and 6..
7. At this point we should have one Pi called HeadNode with an IP address of 192.168.0.100 and one called ComputeNode1 with an IP address of 192.168.0.101.
8. Login to the head node and we'll need to provide the names of the other machines on the network we want to use. We need to edit the /etc/hosts file so type in "sudo nano hosts" and we need to add the IP addresses of the compute nodes.
Remove the 127.0.1.1 HeadNode (or ComputeNode1) line.
Add a line for each one at the end that has the IP address and the hostname. Add:
192.168.0.100 HeadNode
192.168.0.101 ComputeNode1
This way each machine will know the IP address of the others. Now let's check the connectivity by pinging each one. Type "ping ComputeNode1" and it should say "64 bytes from ComputeNode1 (192.168.0.101)" and a response time. Press Ctrl-C to stop it.
10. Login to ComputeNode1 and repeat the hosts file edit and ping test.
Setup NFS share
1. On headnode we'll create a shared directory that all nodes can access. We start by installing the NFS server software by typing "sudo apt-get install nfs-kernel-server". Enable the services by typing "sudo update-rc.d rpcbind enable && sudo update-rc.d nfs-common enable" and then "sudo reboot".
2. Let's create the directory and set the owner to user pi. Type "sudo mkdir /mirror". Then "sudo chown -R pi:pi /mirror".
3. We now need to export it so the other nodes can see it. Type "sudo nano /etc/exports" to edit the file. At the end we need to add a line that reads "/mirror ComputeNode1(rw,sync,no_subtree_check)".
4. Restart the NFS server by typing "sudo service nfs-kernel-server restart". Export the details by typing "sudo exportfs -a" and check it's exporting by typing "sudo exportfs", which should list the details from /etc/exports.
5. Over to computenode1 and we'll set it up now. On computenode1 we need to create a mount point and set the owner to user pi, type "sudo mkdir /mirror" followed by "sudo chown -R pi:pi /mirror".
6. Do a "showmount -e headnode" command. It should show the export list. If it gives an error then the rpcbind service isn't starting automatically. This seems to be a bug in Jessie and is resolved in Stretch, which is why we updated.
7. Mount the drive by typing "sudo mount headnode:/mirror /mirror". Now let's check it worked by doing a "df -h" command; it should be listed. To check permissions type "touch /mirror/test.txt". Go back to headnode and let's see if we can see the file by listing the directory: type "ls -lh /mirror", which should show our test.txt file.
8. On computenode1 we want it to mount automatically at start-up instead of doing it manually. Unmount it by typing "sudo umount /mirror". Edit the fstab file by typing "sudo nano /etc/fstab" and add the following line: "headnode:/mirror /mirror nfs". To test, do a "sudo mount -a" command.
It seems that the mount sometimes fails on the computenode, especially if headnode hasn't booted up first, so you may need to do the mount command manually. In other tutorials I have seen use of autofs, which will mount the directory when it's first accessed. I won't go into details here.
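Another option, which I haven't tested here, is to add NFS retry options to the fstab entry so the mount goes into the background and keeps retrying if headnode is slow to come up, something like:
headnode:/mirror /mirror nfs defaults,bg,retry=5 0 0
The bg option backgrounds the mount attempt if the first try fails and retry is how long (in minutes) to keep trying.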
Setup password-less SSH
1. Generate an ssh key to allow password-less login by typing "ssh-keygen -t rsa" and when prompted for a file location and passphrase just press enter.
2. Copy the generated public key to the other nodes by typing "cat ~/.ssh/id_rsa.pub | ssh pi@<IP address> 'cat >> .ssh/authorized_keys'" where <IP address> is the IP address of the other node(s). (See the note after these steps for a shortcut.)
3. SSH into the other machine manually by typing "ssh" followed by the node name (for example "ssh ComputeNode1") and see if it will let you log on without having to type in your password.
Repeat for each node.
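As a shortcut, if ssh-copy-id is installed (it comes with the openssh client on Raspbian) it will do the key copying from step 2 in one go, for example from headnode:
ssh-copy-id pi@ComputeNode1
It asks for the pi password once and appends the key to authorized_keys on the other node for you.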
Install MPICH
1. On both machines we'll need MPICH, so type in "sudo apt-get install mpich". To make sure it installed correctly type "which mpiexec" and "which mpirun".
2. On HeadNode change directory to our shared one by typing "cd /mirror".
3. Create a file listing all our compute nodes. Type "nano /mirror/machinefile" and add the following:
computenode1:4 # spawn 4 processes on computenode1
headnode:2 # spawn 2 processes on headnode
This says ComputeNode1 can run 4 tasks (at a time) and HeadNode can run 2. As you add more compute nodes repeat the computenode lines with the correct names and number of tasks allowed. You can have different machines so a Raspberry Pi B or B+ would only execute 1 task and Pi2's and Pi3's could execute 4 tasks at a time.
If you want a node to run only one task at a time then omit the colon and number. If it's listed in the machinefile then it's assumed to be able to run at least one task.
4. Let's create a simple program called mpi_hello, so on headnode type "nano mpi_hello.c" and paste the following in:
#include < stdio.h >
#include < mpi.h >
int main(int argc, char** argv) {
int myrank, nprocs;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
printf("Hello from processor %d of %d\n", myrank, nprocs);
MPI_Finalize();
return 0;
}
5. Compile it by typing "mpicc mpi_hello.c -o mpi_hello".
6. Run it by typing "mpirun -n 4 -f machinefile -wdir /mirror ./mpi_hello". The number following -n tells it how many processes to run and the machinefile is the list of machines we created above. If it works we should get something like this as output:
Hello from processor 0 of 4
Hello from processor 1 of 4
Hello from processor 2 of 4
Hello from processor 3 of 4
Try different numbers after -n. For example -n 6 says to run 6 tasks, which, if we allowed headnode to run tasks, would all run at the same time. If we specify more than we have CPU cores then they will run one after the other. If you allow headnode to run tasks you will notice they complete quicker there than on the compute node.
The "-wdir /mirror" tells it the working directory. If you get errors check that its mounted and that all nodes have access. All the nodes need to be able to access it.
Some other suggestions
1. Use an external hard disk for additional disk space. WD make a PiDrive designed for the Raspberry Pi, but any USB hard disk that has its own power source should work.
2. There is a program called ClusterSSH that can be used to login to all the nodes at once and repeat the commands on each node. This can make maintenance a whole lot easier with multiple nodes.
3. Use a powered USB hub to power the Pi's and peripherals instead of using lots of power adapters.
30 April 2016
30th of April
It's all about the Pi's again this week.
I have ordered three more Pi3's which will be used to replace the remaining Pi2 and the two Parallella's that are still crunching. That will bring the Pi part of the cluster up to 7 and should make maintenance a little easier with only 1 flavour of Linux to deal with.
I have joined a team called Raspberry Pi over at the Einstein project under the handle PorkyPies.
Beowulf Cluster
I used the now redundant Pi2's for medical, I mean computer, experiments. I was trying to follow a tutorial about setting up a Pi cluster. A true Beowulf cluster. Not what I am running at the moment, which is called a CoW (Cluster of Workstations). In a Beowulf cluster you use off-the-shelf hardware with a head (or master) node that issues tasks to the compute nodes, typically using MPI (Message Passing Interface) to communicate.
The tutorial I was using was this one: Build-a-compact-4-node-raspberry-pi-cluster
There are other tutorials but most seem dated or were for other flavours of Linux, so I felt this one was closest to what I wanted to try. There are a few things in there that I didn't want to do, like using a USB hub to power it (I have plenty of power adapters), so I skipped that bit; they also added a Blinkstick, which I wasn't going to do either. I didn't need a fancy case either; just two Pi2's should be enough to try it out.
Following the tutorial using the March 2016 release of Jessie lite was a failure. It seems the rpcbind service fails to start up. Having manually worked around that I got the ssh keys setup and mpich installed and while I could run a task on the head node I couldn't get it to use the compute node.
I had a bit more success using Stretch which at least seems to have fixed rpcbind not starting up, but still can't get it to run tasks on the compute node. At this point I am stuck.
19 April 2016
More Pi3 stuff
The heatsinks turned up. They can be ordered directly from Enzotech (www.enzotechnology.com) if you can't source them locally. These are a set of 8 called the BMR-C1 designed for RAM chips on graphics cards. But that means you can do 8 Pi's with the one set :-)
They are 14x14x14mm with the pins/fins arranged in a 5x5 grid. They are made from forged copper which is a better heat conductor than aluminium and they come with thermal tape applied to the base and ready to stick onto the Raspberry Pi SoC.
They also make lots of other sizes in case you want to cool other chips such as the USB chip.
I have now replaced the B+ with one of the Pi2's. The remaining Pi2's have been replaced with Pi3's. I still need to cut holes in the tops of the cases and mount the 40mm fans but that can wait until the weekend. The desk fan can keep them cool for the moment.
Given the speed advantage the Pi3 now has over the Parallella I am thinking of swapping out the Parallella's as well. Unfortunately the Epiphany chip on the Parallella sits idle.
The ARM part of the farm currently consists of:
1 x Pi2
4 x Pi3
2 x Parallella
If anyone wants the old ones I am happy to give them away but you'll need to pay for postage. I still have some of the original B's looking for a new home too.
16 April 2016
Mod my Pi Case
The case is the "official" one as mentioned in my previous post.
The fan is a Noctua NF-A4x10-5v (40mm x 10mm - 5 volts). Noise-wise they are rated at 17dB. In fact it's so quiet that you have to look at it to see if it's actually running. They come with a 6 year warranty. I've been using them in my PC's for years and have not had one fail yet, but this is the first time I have used a 40mm one.
The fan kit has 4 mounting screws, a 2 to 3 pin adaptor, an extension cable, an Omni-join connector set and the rubber anti-vibration mounts. The fan has a 3 pin plug.
The end result...
And here it is running
I had a USB to 3 pin fan header which I used at first. After finding out the correct GPIO pins I used the supplied 2 to 3 pin adaptor and simply plugged the 2 pin end onto the GPIO. Now with the Pi3 running flat out it's coming in with temps around 55-56 degrees C (room temp is 27 degrees).
To find out how hot it is, open a terminal window and issue the command "vcgencmd measure_temp".
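It prints a single line such as temp=55.8'C (that quote character is how it writes the degree symbol); the exact figure will obviously vary with load and room temperature.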
03 April 2016
3rd of April
Farm status
Intel GPUs
All running Einstein overnight
Nvidia GPUs
The 6 core/12 thread machines are running Einstein overnight when the weather allows
Parallellas and Pis
Running Einstein work around the clock
The Einstein O1 search is running on the faster machines. They expect the first run to take 100 days to complete. After that they will do other searches using the same data.
Pi case mods
The official Raspberry Pi cases arrived a week ahead of schedule. While the Pi3 is the same size as the Pi2 and the earlier B+ there is one difference that the case doesn't allow for. The LED's on the Pi3 are on the opposite side. The official Pi case has a couple of holes covered by clear plastic but they are on the wrong side for the Pi3 so I will need to drill a couple of holes.
The top of the official Pi case
The underside of the top.
Top next to the rest of the case so you can see how it clips together.
My original idea was to mount a 60mm fan on the top of the case. Unfortunately there is a ridge of plastic on the underside of the top piece that is exactly where the screws would go.
Plan B is to use a 40mm fan instead. I have ordered four Noctua 40mm 5 volt fans and already purchased a 38mm drill bit to cut the hole.
Lastly the copper heatsinks are proving difficult to find. I had one set left over from the Pi2's which also fit the Pi3. The larger SoC heatsink is 13x13x13mm in size and there is a slightly smaller one for the USB/network chip. There is one seller on eBay who sells a kit of two heatsinks with a miniature fan, however I don't want their fan. The other couple of companies I have purchased from previously don't seem to have any stock.
There are smaller (height) copper heatsinks available but they don't have enough surface area to be of any use. There are also plenty of aluminium ones but aluminium isn't as efficient as copper.
Noctua have a C shaped PC cooler (NH-C14) that would be ideal for these if only they could shrink it to fit the Pi3.