NEURON on GPUs?

Posted: Wed Jul 06, 2016 12:18 am
by catubc
Hi everyone

Is this project MIA? Last link I found dates back to 2014:

http://bitbucket.org/nrnhines/nrngpu

Just getting a bit impatient with single-node jobs sitting in cluster queues for many hours... Fantasizing about taking matters into my own hands :)

Thanks!
catubc

Re: NEURON on GPUs?

Posted: Thu Jul 07, 2016 3:42 pm
by hines
The original attempt was abandoned due to the difficulty of managing two very different code bases. The present attempt is in the context of CoreNEURON, which will become a plugin to NEURON but presently requires NEURON to write a model data file that is then read by CoreNEURON and simulated (with a roughly 7-fold memory savings). All computations, including the tree solver and NET_RECEIVE, are now on the GPU; only spike exchange is handled by the CPU. It works very nicely for fixed-step, spike-coupled networks, and gap junctions are coming very soon. At that point it will be released as open source, hopefully by the end of the summer. For large, memory-bandwidth-limited models, the speedup over a single core is about a factor of 10.
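
In practice the two-step workflow looks roughly like the following minimal sketch (pc.nrnbbcore_write and the coreneuron_exec invocation reflect the prototype's names and flags, and may change before the open-source release):

    from neuron import h
    h.load_file("stdrun.hoc")
    pc = h.ParallelContext()

    # ... build the network model here (cells, synapses, NetCons) ...

    h.stdinit()

    # Step 1: NEURON writes the model data files to a directory.
    # CoreNEURON's far more compact in-memory representation of these
    # data is where the roughly 7-fold memory savings comes from.
    pc.nrnbbcore_write("coredat")

    # Step 2 (from the shell): CoreNEURON reads the files back and
    # simulates, with everything except spike exchange on the GPU, e.g.
    #   coreneuron_exec -d coredat -e 100 --gpu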

Re: NEURON on GPUs?

Posted: Sat Jul 23, 2016 4:05 pm
by catubc
Thanks Michael.

It sounds like this builds in part on the Blue Brain work you've already done. The speedup claim is a bit unclear, though. Will an arbitrarily sized network speed up 10x on a GPU with ~500-1000 cores?

More practically, higher-end GPUs now have 3000-3500 cores, so will it be possible to assign each cell of a 3000-cell network to one of the ~3000 cores? Even if the speedup only scales as sqrt(#cores), that would be a factor of 10-50, so we could all run small networks on our desktops. Would be amazing!
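
(Back-of-envelope for that sqrt(#cores) guess, just a toy calculation to show where 10-50x comes from, not a measurement:)

    import math

    # Hypothetical scaling law: speedup ~ sqrt(number of GPU cores used)
    for cores in (500, 1000, 3000):
        print(cores, round(math.sqrt(cores)))  # -> 22, 32, 55: roughly 10-50x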

Thanks for the amazing work (also to Ted!).

catubc

Re: NEURON on GPUs?

Posted: Mon Jul 25, 2016 5:26 pm
by sgratiy
@Michael
Could you clarify what you mean by "large memory bandwidth limited models"?