Farm status
Intel GPU machines: Running Seti
Nvidia GPU machines: Two running GPUgrid work. The other two were running Seti yesterday
Parallellas and Pis: Running Einstein BRP4 work
Farm recap
The "farm" currently consists of 3 groups of machines.
Intel GPU machines
6 x Intel Core i7-6700 computers (4 cores/8 threads) with 8GB of memory. They have Intel HD Graphics 530 built in.
Nvidia GPU machines
2 x Intel Core i7-5860 computers (6 cores/12 threads) with 16GB of memory. They each have two GTX 750 Ti graphics cards.
2 x Intel Core i7-3770 computers (4 cores/8 threads) with 8GB of memory and a GTX 970 graphics card in each.
Parallellas and Pis
2 x Parallellas (dual-core ARM CPU with 1GB of memory).
3 x Raspberry Pi 2s (quad-core ARM CPU with 1GB of memory).
1 x Raspberry Pi B+ (single-core ARM CPU with 512MB of memory).
BOINC testing
BOINC 7.6.22 came out earlier in the week. I have it installed on the Parallellas and the Windows machines. I am waiting for it to get into Debian Stretch (that usually takes about a week) before I can install it on the Pis. The main changes are around OpenCL detection and closing open files.
Raspberry Pi updates
Debian Stretch introduced a new libc6 (C runtime library) last week. It broke the Einstein BRP4 1.06 app that is sent to NEON-incapable hosts like my B+. I ended up going back to the official Jessie release to get it going again.
I have an ongoing issue with Debian Stretch: the std*.log files that BOINC creates are no longer being updated. This seems to have started around BOINC 7.6.12 (October 2015), but I don't think it's BOINC; more likely some other change Debian made. I say that because the Windows machines and the Parallellas don't have this issue. I did email the Debian BOINC maintainers, but they can't see anything in the BOINC code that would cause it.
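A quick way to spot the problem is to compare each log file's modification time against the clock. This is just an illustrative sketch (the log directory path and the std*.log naming are assumptions; adjust both for your install):

```python
import glob
import os
import time

def stale_logs(log_dir, max_age=3600):
    """Return (path, age_in_seconds) for any std*.log file in log_dir
    that has not been modified within the last max_age seconds."""
    now = time.time()
    stale = []
    for path in glob.glob(os.path.join(log_dir, "std*.log")):
        age = now - os.path.getmtime(path)
        if age > max_age:
            stale.append((path, int(age)))
    return stale
```

On a healthy host this returns an empty list, since a running client touches its logs regularly; on my Stretch Pis the files just sit there and show up as stale.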
End of year
The end of 2015 is almost upon us. Looking back, the farm has been updated to the most recent hardware available. Hopefully next year will bring some useful (from a cruncher's perspective) changes to BOINC.
I mentioned in my last post that an MPI-capable BOINC would be useful for those who run farms and clusters. Another thing I had hoped for was the Superhost idea. I may need to help fund development of one or both of them to get things happening.
Let's see what next year brings.
MarkJ
28 December 2015
12 December 2015
Farm status
Parallellas and Pis are crunching Einstein work
The Intel GPU machines are crunching Seti work overnight, only because it's too hot during the day.
The Nvidia GPU machines are off.
BOINC and MPI
As you'd know from reading my blog, I have a "farm" of machines that currently run BOINC to do the task scheduling. I had an idea: rather than running BOINC on each machine, why not run it on one machine and have it use MPI to communicate with the others and run tasks that way? If other people are interested in this idea then let's talk.
To that end I was following the instructions from the University of Southampton for setting up an MPI cluster using my redundant Raspberry Pi B models. The details of their Lego supercomputer can be found here. In the end it was still compiling the MPI software after some hours, despite the fact that I had overclocked the Pi in question to 1GHz. I gave up and went to bed. I will have to kick it off again when I have plenty of time to spare, seeing as it takes so long on the old B model.
The idea is to replace the BOINC API calls with MPI calls. Each app runs stand-alone, which most science apps seem to be written to do anyway, and just passes the result files back for the BOINC client to handle. A better solution would be to keep both sets of code so the app can work via either MPI or the BOINC API.
The BOINC client would need to be updated to check the status of worker machines and hand out tasks to them as needed; exactly what a real compute cluster does, except BOINC is doing the scheduling and still handling the file transfers, etc.
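As a rough illustration of that coordinator/worker split, here is a toy Python sketch. It uses multiprocessing queues where a real port would use MPI sends and receives (for example via mpi4py), and the "science" work is a stand-in sum; all names here are made up for the example:

```python
from multiprocessing import Process, Queue

def worker(task_q, result_q):
    """Stand-alone science app: pull a task, crunch it, send the result back.
    In an MPI version this loop would use MPI_Recv/MPI_Send instead of queues."""
    while True:
        task = task_q.get()
        if task is None:                     # sentinel: coordinator says shut down
            break
        task_id, data = task
        result_q.put((task_id, sum(data)))   # stand-in for the real crunching

def coordinator(tasks, n_workers=2):
    """Plays the BOINC client's role: hand out tasks, collect the results."""
    task_q, result_q = Queue(), Queue()
    procs = [Process(target=worker, args=(task_q, result_q))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for t in tasks:
        task_q.put(t)
    for _ in procs:
        task_q.put(None)                     # one shutdown sentinel per worker
    results = dict(result_q.get() for _ in tasks)
    for p in procs:
        p.join()
    return results
```

For example, coordinator([(0, [1, 2, 3]), (1, [4, 5])]) returns {0: 6, 1: 9}. The point of the sketch is the division of labour: the coordinator owns scheduling and result collection, while the workers only ever see one task at a time, which is exactly where BOINC's per-machine client would be replaced.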