GPUs vs Intel Xeon Phi coprocessor

General issues of interest both for network and
individual cell parallelization.

Moderator: hines

Post Reply
atknox
Posts: 5
Joined: Wed Aug 26, 2015 2:07 pm

GPUs vs Intel Xeon Phi coprocessor

Post by atknox » Mon Apr 16, 2018 12:54 pm

I'm somewhat new to parallel computing, so please forgive anything that sounds uninformed.

I'm interested in computational models of epilepsy. I've worked with some relatively simple models (~400 single-compartment neurons) and have plans to expand to significantly larger models, probably implemented with NetPyNE. I was thinking of purchasing a workstation with a good number of cores as a local testing ground for models that I might later expand and run on a cluster. And then I began to think of adding extra computational power to the workstation, something well suited for parallelization.
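(For context on what "expanding to a cluster" involves: NEURON's ParallelContext assigns each cell a global identifier, or gid, and a common scheme is simple round-robin distribution of gids across MPI ranks via `pc.set_gid2node(gid, rank)`. The sketch below is plain Python, not NEURON code, and the function name and cell counts are just illustrative; it only shows the load-balancing idea.)

```python
# Sketch of round-robin cell distribution across MPI ranks, the scheme
# commonly used with NEURON's ParallelContext (pc.set_gid2node(gid, rank)).
# Plain Python here -- no NEURON or MPI required -- just to show the idea.

def distribute_gids(ncell, nhost):
    """Return {rank: [gids]}, assigning each gid to rank gid % nhost."""
    ranks = {rank: [] for rank in range(nhost)}
    for gid in range(ncell):
        ranks[gid % nhost].append(gid)
    return ranks

# e.g. 400 single-compartment cells spread over an 8-core workstation:
layout = distribute_gids(400, 8)
# each rank builds and integrates only its own 50 cells;
# spikes cross rank boundaries via MPI during the simulation
```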

And that's where I started running into questions. It seems that the two possible routes are either adding GPUs or an Intel Xeon Phi coprocessor, but it's a little unclear to me how well either option would work with NEURON. It looks like there's support for GPUs through CoreNEURON, but the output data that can be recorded appears to be limited, and it wasn't clear to me whether CoreNEURON and NetPyNE can be used together. Can they?

I haven't found much written about using the Intel Xeon Phi coprocessor and NEURON together; I wasn't sure whether that's because it works so well that no one needs to write about it, because it clearly doesn't work, or because no one has tried. It looks like you would probably want to build against Intel MPI, but beyond that it wasn't obvious whether you would also need CoreNEURON, or whether there is anything else you would need to do to make use of the coprocessor.

It would be great to avoid buying something that can't easily be used with NEURON/NetPyNE, so I'd appreciate whatever advice you have to offer. I'll post this on the Google NetPyNE Q&A forum as well.

Thanks,
Andrew

ted
Site Admin
Posts: 5266
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: GPUs vs Intel Xeon Phi coprocessor

Post by ted » Mon Apr 16, 2018 2:06 pm

I would defer purchasing specialized parallel hardware until if and when it becomes absolutely necessary. Your initial purchase should be the best system you can afford for interactive use: lots of RAM, an SSD, and enough disk space to save simulation results and (if necessary) visualizations. And get a GPU that is good enough for purely graphical purposes.

Why?
1. Lots of development and debugging is necessary to implement a model that is ready to be run on parallel hardware. That will consume most of your time and effort, and it is most conveniently done interactively.
2. GPU and multicore architectures are still undergoing rapid development, so the performance/price ratio continues to improve. NVIDIA's development cycle is fast (think "18 months from brand new to obsolete"), and Intel's is probably similar. Defer such purchases until the last possible moment and you'll end up with more bang for the buck.
3. Or you may decide not to buy a cow because milk is cheap. Your own academic institution may have parallel hardware that you can use at no charge, or for a small subscription fee; let someone else own the rapidly obsolescing hardware. Or get an account with the Neuroscience Gateway Portal (www.nsgportal.org), which gives away CPU time on high-performance parallel hardware where NEURON, NetPyNE, and other simulators (not to mention MATLAB, Python, and a bunch of other software) have already been optimally installed and configured.

Post Reply