15 March 2015

15th of March

Farm news
We're crunching overnight at the moment. Not all of the machines, as it's still not cool enough overnight, but some of them are getting work.

The Pis and the Parallellas are still running constantly too.

This week saw a bunch of Windows updates, further complicated by the fact that most of the Windows machines have been off for the last month, so there was a heap of updates to download and apply.

I installed the GTX970s into the GPUgrid crunchers and have run a "short" work unit on each machine (they take around 2 hours). The GTX670s will go up on eBay soon.

BOINC testing
We got 7.4.42 for Windows to play with. No major changes, just some bug fixes.

Project news - Asteroids
They've been having issues with work units failing to download (a server-side issue), and the guy who runs the project is working in another city, so it's been left alone. It was also getting a certificate error, but he has managed to fix that. It's out of work at the moment and we're waiting on more.

Project news - CPDN
They've been restricting their work units to different computers: the ANZ climate models, which I used to run, are now restricted to Mac computers only. The EU climate models I can get on the Windows machines, and they take around 9 hours. The remaining work units are restricted to Linux hosts.

01 March 2015

1st of March

Farm news
Everything is off except the Raspberry Pis and Parallellas. Today got up to 32 degrees, which is basically how hot the room with the computers gets. I really need to look for an alternative (air-conditioned) location for the computers.

Raspberry Pis and wisdom
Much like the last fortnight, I have been concentrating on tuning the Pi2s to get the most out of them. That involved the fftw wisdom files, which tell fftw which function choices are quicker. The Einstein app that runs on the Pis and Parallellas will use a wisdom file if it's there.

I ordered some more copper heat sinks as I only had 2 sets but 3 Pi2s. I also got some USB cables with power switches, as otherwise the only way to power a Pi off is to unplug it. The power cables arrived two weeks ago and the extra heat sinks last week.

The timings I am getting for the Einstein BRP4 tasks are around 16.17 hours for a Pi2 with wisdom, and 13.83 hours with wisdom and overclocking to 1GHz. In contrast, the Bs and the B+ take around 31.5 hours. I have generated a wisdom file for the B+ as well (all the Bs have been retired) and will have to wait for some results to see if it helps.
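For reference, overclocking a Pi2 to 1GHz is just a few lines in /boot/config.txt. A minimal sketch, assuming the same values as raspi-config's "Pi2" overclock preset (your board may need different over_voltage to stay stable):

```
# /boot/config.txt - Pi2 overclock sketch (raspi-config "Pi2" preset values)
arm_freq=1000
core_freq=500
sdram_freq=500
over_voltage=2
```

A reboot is needed for the settings to take effect, and decent cooling is a must at these clocks.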

Generating a wisdom file can be tricky because the Einstein app has been compiled against fftw 3.3.2, and the wisdom file has to match the version of fftw. Debian Wheezy has 3.3.2, Debian Jessie has 3.3.4 and Ubuntu Trusty has 3.3.3. For the B+ I had to put Wheezy on an SD card, boot the Pi from it, generate the wisdom, and then copy it over to Jessie.
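The generation step looks something like the following. This is a sketch using the fftwf-wisdom tool from fftw's single-precision build; the transform spec (rof4194304, a real out-of-place forward transform of 2^22 points) and the destination path are assumptions for illustration, not confirmed details of the BRP4 app:

```shell
# Generate single-precision wisdom for one transform size.
# Run this on the OS whose fftw matches the app (Wheezy's 3.3.2 here).
#   -v : verbose progress
#   -o : output file
fftwf-wisdom -v -o wisdom.dat rof4194304

# Copy the result to wherever the Einstein app looks for it
# (path below is a guess - check the app's documentation/forums).
cp wisdom.dat /var/lib/boinc-client/projects/einstein.phys.uwm.edu/
```

The more exhaustive planner modes (-x) take much longer to run but can find faster plans.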

Pi surgery
I also did a bit of surgery on one of the cases to see if it helps with cooling, as you can see below.

The idea was the same as the older B model Pi where I mounted a fan on top; however, I thought I would try with just the grill first and see how it goes. The Pi with this case is running about 5 degrees (C) hotter than the ones running without their top on. The case design is curved, so it's difficult to mount a fan.

15 February 2015

15th of February

Farm news
It's still hot, so nothing much is running except the Parallellas and the Raspberry Pis.

The GTX970s arrived, but I haven't even opened the parcel they came in as I have been concentrating on the Pis. When it cools down a bit I might get them running.

Raspberry Pis
It was a bit of a surprise that the Raspberry Pi Foundation released the Pi2. Last week I ordered first one and then another pair of Pi2s. I spent quite a bit of time trying to get them to work until I swapped the power supply. It seems the dual USB chargers that work fine with the B and B+ are not what the Pi2 wants. Once I worked that out, it was a straightforward process to get them going.

Unlike the B+, they now have a quad-core ARM SoC (System on a Chip). On top of that they are ARMv7 and have 1GB of memory. They cost the same as the B+ and have the same board layout. Even Microsoft is looking at running Windows 10 on them. Currently they run the same Debian as the B+, but with an updated bootloader and kernel. There is even an Ubuntu Snappy available for them.

I have 2 up and running at the moment. I have been running the Bs down and retiring them as they complete their last task. I will probably keep the B+ for a bit as I am using it for some Seti app testing. I'm still waiting on a case for the second pair as they were out of stock at the time. I will probably get different cases as I need to add a fan; the little case offered on element14 isn't suitable for one (the case is thin plastic and curved). Most B+ cases have holes for the Pi camera module, which doesn't help either.

I have completed some Einstein work so far, with the Neon apps being slower than the Parallellas by about an hour, but I need to try a few more to get better timings. Once that's done there is an fftw plan that I want to try to see if it makes any difference, and then finally I will look at overclocking them once I have a better cooling solution.

30 January 2015

30th of January

Farm news
The weather has been hot, so not much has been running again. The last few days were wet and cooler, so I managed to get some of the farm crunching. Overnight I managed to have one of the GPUgrid crunchers processing their "long" work units (they ran out of short ones). I also ran a few CPDN work units (they take 60+ hours).

I have ordered a pair of EVGA GTX970 cards that will go into the GPUgrid crunchers. While my existing GTX670s can still do the work, the work units keep getting bigger and are taking longer to complete. The new cards should use less power and produce less heat. I will put the old ones up on eBay once they've been swapped out.

Speaking of bigger work units, GPUgrid did a special batch of "very long" work units that were expected to take 24 hours on the fastest of cards ("long" work units take 8 to 12 hours). A lot failed at the end because the project forgot to increase the maximum allowed file size: they create 185MB result files. The project has since corrected the limit, and a number of users did a workaround.

Seti App testing
Claggy has managed to get a Seti Multi-beam app compiled for the Raspberry Pi and the Parallella.

In early testing we found that the current 7.28 version would run but failed to validate, so he has gone back to an earlier 7.0 version. I have done a couple of normal work units (they take around 66 hours on the Pi) and then got a VLAR (very low angle range) work unit that took 157 hours on the Pi. These at least validate.

The Parallella is faster, but I had trouble transferring the program across. It seems the SD card partitions aren't mountable out of the box, as there are no device nodes for them. After posting a message on the Seti forums, it seems one has to do the following:

sudo mknod -m 660 mmcblk0 b 179 0
sudo mknod -m 660 mmcblk0p1 b 179 1
sudo mknod -m 660 mmcblk0p2 b 179 2

In a directory of your choosing. Suggested places are /media or /dev. After that one can mount the SD card partitions:

cd /media
sudo mkdir boot
sudo mount /media/mmcblk0p1 /media/boot
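To save redoing the mount after each reboot, an /etc/fstab entry could be added. A sketch only, assuming the device node was created in /media as above and that the boot partition is FAT (typical for SD card boot partitions):

```
# /etc/fstab - mount the SD card's boot partition at /media/boot
/media/mmcblk0p1  /media/boot  vfat  defaults  0  0
```

With that in place a plain "sudo mount /media/boot" (or a reboot) does the job.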

One of my Parallellas currently has 2 VLAR tasks that have been going for the last 70 hours, and I think they have another 20 hours left to completion.

A big thank you goes out to Claggy for the app, which we hope we can optimise once we've found a version that validates.