Mapping a large parameter space

Using the Multiple Run Fitter, praxis, etc.

Post by rcalinjageman »

Nice to see the forum back up.

I've been working on a project to map a large parameter space for a CPG network (9 parameters with 5 values each, i.e. 5^9 = 1,953,125 simulation runs).
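
To make the scale concrete, here is a rough Python sketch of how a grid like that can be enumerated and split into work units; the parameter names, values, and chunk size are just placeholders, not the ones from the model:

    import itertools

    # Placeholder grid: 9 parameters, 5 candidate values each
    # (5 ** 9 = 1,953,125 combinations).
    params = {"p%d" % i: [0.25, 0.5, 1.0, 2.0, 4.0] for i in range(1, 10)}
    names = sorted(params)

    chunk_size = 1000  # simulation runs per work unit handed to one client
    for run_id, values in enumerate(itertools.product(*(params[n] for n in names))):
        assignment = dict(zip(names, values))
        unit_id = run_id // chunk_size
        # ... append `assignment` to the parameter file for work unit `unit_id` ...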

In the process, I've developed a rudimentary client/server application for farming out the work. The client is a Windows screen saver written in Visual Basic. Once installed (along with Neuron) on a client machine, it starts during idle time, downloads models and work assignments from the server, starts up Neuron and loads it with the assignment, and uploads the result files when a unit is done. When the computer leaves the idle state, the client unceremoniously kills the Neuron process, but marks its place so it can resume later.
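
The real client is Visual Basic, but its idle-time loop amounts to something like the following Python sketch; the server address, page names, and file names are made up for illustration:

    # Illustrative sketch of the client's idle-time loop; the actual client is a
    # Visual Basic screen saver.  All URLs and file names here are hypothetical.
    import subprocess
    import urllib.request

    SERVER = "http://example.org/cpg"                    # placeholder server address

    def get(path, dest):
        urllib.request.urlretrieve(SERVER + path, dest)

    def post(path, fname):
        with open(fname, "rb") as f:
            urllib.request.urlopen(urllib.request.Request(SERVER + path, data=f.read()))

    get("/assignment.php?client=42", "assignment.txt")   # fetch a work unit
    get("/model.php?client=42", "model.hoc")             # fetch the model files

    # Run Neuron on the assignment; a wrapper script reads assignment.txt.
    # If the machine leaves the idle state, the real client kills this process
    # and remembers where it was so the unit can be resumed later.
    subprocess.run(["nrniv", "-nogui", "run_assignment.hoc"])

    post("/upload.php?client=42", "results.txt")         # return the results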

The server is just a set of PHP/MySQL pages--it assigns work, deploys models, receives uploaded files, and has some limited admin functions (e.g. kicking a client).
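
Purely to illustrate the bookkeeping (the real thing is PHP/MySQL), the assignment table and the two operations the server performs boil down to something like this; table and column names are made up:

    # Sketch of the server-side bookkeeping; the actual server is PHP/MySQL.
    import sqlite3

    db = sqlite3.connect("units.db")
    db.execute("""CREATE TABLE IF NOT EXISTS units (
                      unit_id INTEGER PRIMARY KEY,
                      status  TEXT DEFAULT 'unassigned',  -- unassigned / assigned / done
                      client  TEXT)""")

    def assign_next(client):
        """Hand the next unassigned work unit to a client."""
        row = db.execute("SELECT unit_id FROM units "
                         "WHERE status = 'unassigned' LIMIT 1").fetchone()
        if row:
            db.execute("UPDATE units SET status = 'assigned', client = ? "
                       "WHERE unit_id = ?", (client, row[0]))
            db.commit()
        return row[0] if row else None

    def mark_done(unit_id):
        """Record that a client has uploaded results for this unit."""
        db.execute("UPDATE units SET status = 'done' WHERE unit_id = ?", (unit_id,))
        db.commit()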

Neuron interacts with the client in very rudimentary ways--it picks up work assignments and signals when it has completed an assignment. However, I've written a set of object classes for Neuron that abstract the parameter assignments, so you can basically plug any existing Neuron model into the client and it will be able to systematically vary parameters.
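
The actual wrapper classes are written in hoc; purely as a sketch of the idea in Python (the file names and example parameter are hypothetical), the wrapper just reads an assignment of name = value pairs and applies it to whatever model has been loaded before running:

    # Sketch of the wrapper idea: apply a downloaded parameter assignment to an
    # existing model.  Model file, assignment format, and parameters are made up.
    from neuron import h

    h.load_file("stdrun.hoc")
    h.load_file("model.hoc")          # placeholder for the user's existing model

    def apply_assignment(fname):
        """Each line of fname is a hoc assignment, e.g. 'gleak_scale = 0.5'."""
        with open(fname) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    h(line)           # hand the statement to the hoc interpreter

    apply_assignment("assignment.txt")
    h.tstop = 1000                    # ms; placeholder run length
    h.run()
    # ... record and write whatever result file the unit is supposed to return ...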

I've designed the client only for Windows because this is the only pool of commonly available, underutilized clients I could imagine. The server should run on any AMP setup (Apache + MySQL + PHP). I briefly looked into tying Neuron into BOINC, but didn't have a clue. Also, the goal (to me at least) wouldn't be to get everyone on the internet running your model, just 10-50 machines to help return your results 10-50x faster.

Here are my questions:

1) What is already out there for distributed computing in Neuron, and how easy or hard is it to implement? I have no experience using Neuron on a workstation cluster: how hard is that to set up, how well does it scale, what types of problems is it good for, and how much modification does an existing Neuron model need?

2) Given what is out there, would the client/server approach I'm working on be a useful resource to anyone? What I have so far is completely bare-bones. Would it be worth polishing?



Thanks for any feedback here or to rcalinjageman@gsu.edu,


Bob

Post by Guest »

For clusters, I'd only mention
http://www.neuron.yale.edu/neuron/stati ... arcon.html
which should scale to several hundred machines before the master eventually gets overloaded by worker results.
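
In rough outline, the bulletin-board (master/worker) style described on that page looks like the following from Python; the job function, job count, and launch line are only placeholders:

    # Sketch of the ParallelContext bulletin-board (master/worker) style.
    # Launch with something like:  mpiexec -n 8 nrniv -mpi -python bbs_sketch.py
    from neuron import h

    pc = h.ParallelContext()

    def run_one_job(job_id):
        # Placeholder for "apply parameter set job_id, run the model, and
        # return something small for the master to collect".
        return (job_id, job_id * job_id)

    pc.runworker()                  # workers stop here and execute submitted jobs

    for j in range(20):
        pc.submit(run_one_job, j)   # post jobs to the bulletin board

    results = {}
    while pc.working():             # master collects (and also runs) jobs
        job_id, value = pc.pyret()  # Python return value of the finished job
        results[job_id] = value

    pc.done()                       # shut the workers down
    print(len(results), "jobs collected")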

I think the BOINC style, along with your homegrown system, is very effective.
It just requires a bit of sophistication to set up a web site for managing a simulation.
It would be nice to have a HowTo describing how you did it.