GPUs vs Intel Xeon Phi coprocessor
Posted: Mon Apr 16, 2018 12:54 pm
I'm somewhat new to parallel computing, so please forgive anything that sounds uninformed.
I'm interested in computational models of epilepsy. I've worked with some relatively simple models (~400 single-compartment neurons) and plan to expand to significantly larger ones, probably implemented with NetPyNE. I was thinking of purchasing a multi-core workstation as a local testing ground for models that I might later expand and run on a cluster, and then began to consider adding extra computational hardware to the workstation that would be well suited to parallelization.
And that's where I started running into questions. The two obvious routes seem to be adding GPUs or an Intel Xeon Phi coprocessor, but it's unclear to me how well either option works with NEURON. There appears to be GPU support through CoreNEURON, but the output data that can be recorded looks limited, and I couldn't tell whether CoreNEURON and NetPyNE can be used together. Can they?
I haven't found much written about using the Intel Xeon Phi coprocessor with NEURON. I wasn't sure whether that's because it works so well that no one needs to write about it, because it clearly doesn't work, or because no one has tried. It looks like you would probably want to build against Intel MPI, but beyond that it wasn't obvious whether CoreNEURON would be required, or whether anything else would be needed to make use of the coprocessor.
It would be great to avoid buying something that can't easily be used with NEURON/NetPyNE, so I'd appreciate any advice you have to offer. I'll post this on the NetPyNE Google Q&A forum as well.
Thanks,
Andrew