06 February 2016

6th of February

Farm status
Intel GPUs
Running down their cache of Seti work

Nvidia GPUs

Parallellas and Pis
One Pi2 doing Seti Beta, the rest doing Einstein BRP4 work

Project news - Einstein
It seems Debian released an updated ca-certificates package that broke connectivity to the project for people running the Jessie release (the Pis included). They removed a certificate that the project needs to validate against.

Fortunately I haven't updated my B+, which is still running Jessie, so it can still connect to the project. The Stretch release doesn't have this issue as it has an updated libcurl.

Project news - Seti
They have now got a Multi Beam v8 app that will run on the Pi and Pi2. I haven't tried it on the B+, but it says it's for the armv6l architecture. This is over on the beta project. I've run a few work units through without problems so far. Elapsed times have ranged from 1,000 seconds (the task overflowed its results) to 100,000 seconds. The average, depending on the angle range, seems to be around 92,000 to 100,000 seconds.
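To put those elapsed times in perspective, here's a rough throughput estimate. It's only a sketch: the 92,000-100,000 second run times come from the post, and I'm assuming all four of the Pi2's cores crunch one task each in parallel.

```python
# Rough throughput estimate for a quad-core Pi2 running the v8 app.
# Assumes all four cores each crunch one task at a time, using the
# 92,000-100,000 second elapsed times quoted above.
CORES = 4
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def tasks_per_day(secs_per_task, cores=CORES):
    # Each core finishes (seconds in a day / task length) tasks per day
    return cores * SECONDS_PER_DAY / secs_per_task

for secs in (92_000, 100_000):
    print(f"{secs} s/task -> {tasks_per_day(secs):.1f} tasks/day")
```

So at those times a Pi2 gets through roughly 3.5 to 3.8 tasks a day.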

There is an optimised version of the app available, compiled to make use of the Pi2's armv7l CPU features, but I haven't tried it yet.

Other stuff
I have been using one of the older i7-3770 machines to test Linux functionality and to get BOINC running on it. That's fine for CPU tasks and all is well, but I also want to be able to use the Intel HD graphics. I installed Beignet on it, however the Seti beta app crashed when testing. We think this may be due to the fact that it's using GCC v5.0 while the apps are compiled with an older GCC that isn't compatible. Further testing to be carried out. I couldn't get any OpenCL work from Einstein (but plenty of CPU tasks were available).

After getting this working I will try and get the Nvidia drivers running under Linux. If it works out then I could get the Intel GPU and Nvidia GPU machines running Linux rather than Windows.

I have also trialled the Intel HD Graphics 530 drivers under Windows and they produce invalid results. I've tried 3 different driver versions and logged a bug with Intel. They told me to deal with the OEM. So much for Intel support.

14 January 2016

14th of January

Farm status
Everything is off. Temps are in the 30s without any help from the computers :-)

Project news - Seti
On New Year's Day they released their v8 multi-beam app. Data for it isn't compatible with the previous v7 app, and no data is being split for the old app, so you have to update.

They have only released a CPU app and an AMD GPU app at this point. The other GPU apps are currently being tested on the beta site. I have attached a couple of different machines to the beta project to help in the testing. There aren't any optimised apps yet.

28 December 2015

28th of December 2015

Farm status
Intel GPU machines: Running Seti

Nvidia GPU machines: Two running GPUgrid work. The other two were running Seti yesterday

Parallellas and Pis: Running Einstein BRP4 work

Farm recap
The "farm" currently consists of 3 groups of machines.

Intel GPU machines
6 x Intel Core i7-6700 computers (4 cores/8 threads) with 8GB of memory. They have Intel HD Graphics 530 built-in.

Nvidia GPU machines
2 x Intel Core i7-5860 computers (6 cores/12 threads) with 16GB of memory. They have two GTX750Ti graphics cards each.
2 x Intel Core i7-3770 computers (4 cores/8 threads) with 8GB of memory, with a GTX970 graphics card in each.

Parallellas and Pis
2 x Parallellas (dual-core ARM CPU with 1GB of memory).
3 x Raspberry Pi2s (quad-core ARM CPU with 1GB of memory).
1 x Raspberry Pi B+ (single-core ARM CPU with 512MB of memory).
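For the record, the list above adds up as follows. Just a quick tally; the machine counts and thread figures are taken straight from the list.

```python
# (number of machines, CPU threads per machine) for each group above
groups = [
    (6, 8),   # i7-6700: 4 cores / 8 threads
    (2, 12),  # i7-5860: 6 cores / 12 threads
    (2, 8),   # i7-3770: 4 cores / 8 threads
    (2, 2),   # Parallella: dual-core ARM
    (3, 4),   # Pi2: quad-core ARM
    (1, 1),   # Pi B+: single core
]
total_machines = sum(count for count, _ in groups)
total_threads = sum(count * threads for count, threads in groups)
print(f"{total_machines} machines, {total_threads} CPU threads")
```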

BOINC testing
We got 7.6.22 earlier in the week. I have it installed on the Parallellas and the Windows machines. I am waiting for it to get into Debian Stretch (takes about a week) before I can install it on the Pis. The main changes are around OpenCL detection and closing open files.

Raspberry Pi updates
Debian Stretch introduced a new libc6 (C runtime library) last week. It broke the Einstein BRP4 1.06 app that is sent to NEON-incapable hosts like my B+. I ended up going back to the official Jessie release to get it going again.

I have an ongoing issue with Debian Stretch: the std*.log files that BOINC creates are no longer being updated. This seems to have started some time back around BOINC 7.6.12 (October 2015), but I don't think it's BOINC, rather some other changes that Debian are making. I say that because the Windows machines and the Parallellas don't have this issue. I did email the Debian BOINC maintainers, however they can't see anything in the BOINC code that would cause it.

End of year
The end of 2015 is almost upon us. Reflecting on the year, the farm has been updated to the most recent hardware available. Hopefully next year we will see some useful (from a cruncher's perspective) changes to BOINC.

I mentioned in my last post that an MPI-capable BOINC would be useful for those who run farms and clusters. Another one I had hoped for was the Superhost idea. I may need to help fund development of one or both of them to get things happening.

Let's see what next year brings.

12 December 2015

12 December 2015

Farm status
Parallellas and Pis are crunching Einstein work

The Intel GPU machines are crunching Seti work overnight, only because it's too hot during the day.

The Nvidia GPU machines are off.

As you'd know from reading my blog, I have a "farm" of machines that currently run BOINC to do the task scheduling. I had an idea: rather than running BOINC on each machine, why not run it on one machine and have it use MPI to communicate with the others and run tasks that way? If other people are interested in this idea then let's talk.

To that end I was following the instructions from the University of Southampton for setting up an MPI cluster using my redundant Raspberry Pi model Bs. The details about their Lego supercomputer can be found here. In the end it was still compiling the MPI software after some hours, despite the fact that I had overclocked the Pi in question to 1GHz. I gave up and went to bed. I will have to kick it off again when I have plenty of time to waste, seeing as it takes so long on the old model B.

The idea is that one would replace the BOINC API calls with MPI calls. Each app runs stand-alone, which most science apps seem to be written to do anyway, and then just passes the result files back for the BOINC client to handle. A better solution would be to have both sets of code so that the app can work via either MPI or the BOINC API.

The BOINC client would need to be updated to check the status of worker machines and give tasks to the worker machines as needed. Exactly what a real compute cluster does, only BOINC is doing the scheduling and still handling the file transfers, etc.
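That division of labour can be sketched in a few lines. This is an illustration only: a thread pool stands in for the MPI worker machines, run_task stands in for a stand-alone science app, and all the names are made up. A real version would use MPI_Send/MPI_Recv (or the mpi4py bindings) between hosts.

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(task):
    # Stand-in for a stand-alone science app: takes a work unit and
    # returns (task id, result file name, result) for the client to handle.
    task_id, data = task
    return (task_id, f"result_{task_id}.out", sum(data))  # pretend computation

def schedule(tasks, workers=4):
    # The "BOINC client" side: hand tasks out to workers as they free up
    # and collect the results. In the real idea each worker would be a
    # separate machine reached via MPI rather than a thread.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_task, tasks))

tasks = [(i, list(range(i + 1))) for i in range(4)]
for task_id, result_file, value in schedule(tasks):
    print(task_id, result_file, value)
```

The point is the shape of it: the apps know nothing about BOINC, they just compute and hand result files back, while one scheduler keeps every worker busy and still deals with the project servers and file transfers.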