CPU usage while running a simulation

udi
Posts: 13
Joined: Wed Aug 10, 2005 3:25 am
Location: Hebrew University, School of Medicine

CPU usage while running a simulation

Post by udi »

Hi,
I'm using NEURON under WinXP on a Pentium 4 computer (4 GHz CPU, 2 GB RAM). When I run a simulation (of a single neuron) without any other program running in the background, I see that the CPU usage is almost 100%. Should I increase the CPU speed in order to get 'faster' results?

Thanks in advance,
--Udi.
ted
Site Admin
Posts: 6299
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine

Re: CPU usage while running a simulation

Post by ted »

When I run a simulation (of a single neuron) without any other program running in the background, I see that the CPU usage is almost 100%.
That's what should happen.
Should I increase the CPU speed in order to get 'faster' results?
If simulation execution isn't fast enough for you, that's one way to do it. But your processor is already running at 4 GHz, so you won't be able to get much of an improvement.

Does your model use reasonable spatial discretization, i.e. the d_lambda rule? Excessively fine parcellation of space is a great way to waste CPU cycles.
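For example, once Ra and cm have been assigned, a one-liner along these lines applies the d_lambda rule to every section (lambda_f is defined in stdlib.hoc, which nrngui loads):

    // set nseg in each section per the d_lambda rule
    // (max segment length = 0.1 lambda at 100 Hz)
    forall { nseg = int((L/(0.1*lambda_f(100))+0.9)/2)*2 + 1 }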

Have you already tried adaptive integration (NEURON Main Menu / VariableStepControl) and found that insufficient?
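The same thing can be done from hoc, e.g. something like this (assuming nrngui.hoc has been loaded, so the standard run system and the default cvode object exist):

    load_file("nrngui.hoc")  // standard run system + default cvode object
    cvode.active(1)          // adaptive integration instead of fixed dt
    cvode.atol(1e-3)         // absolute error tolerance; tune to your states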

If you need to execute lots of runs, e.g. for optimization or parameter space exploration, you'll do better by using the ParallelContext class to distribute your runs over multiple CPUs (assuming you have access to multiple processors, a workstation cluster, or other parallel hardware).
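A bare-bones sketch of that bulletin-board style, in which frun() is a hypothetical stand-in for code that sets a parameter, runs the simulation, and returns a scalar result:

    objref pc
    pc = new ParallelContext()

    func frun() { local x
        x = $1        // parameter value for this run
        // ... assign x to a model parameter here ...
        run()         // requires the standard run system
        return x      // placeholder: return some measure of the run
    }

    pc.runworker()    // worker processes wait here for submitted jobs
    for i = 0, 9 pc.submit("frun", i*0.1)
    while (pc.working()) {
        print pc.retval()   // results are gathered in order of completion
    }
    pc.done()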

If your model is enormously complex, you might be able to take advantage of NEURON's ability to distribute single neuron models across multiple processors (still under development, but maybe it's at the stage where a few daring users could serve as willing guinea pigs).
udi
Posts: 13
Joined: Wed Aug 10, 2005 3:25 am
Location: Hebrew University, School of Medicine

Post by udi »

Thanks for the answer. I'll take that last suggestion into consideration ...
Kahlig

Post by Kahlig »

I would be interested in the specifics of distributing single neuron models over multiple processors (within a Linux cluster) during a single simulation run.

I also have a complex single-neuron simulation, with run times on the order of 60 minutes. However, I have access to an in-lab Linux cluster that could significantly reduce simulation time.

I have read the ParallelContext documentation and have NEURON set up properly with access to all nodes, but I cannot visualize the hoc code needed to distribute this one cellular model over multiple processors for a single run.

Thanks in advance.
ted
Site Admin
Posts: 6299
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine

Post by ted »

Kahlig wrote: I would be interested in the specifics of distributing single neuron models over multiple processors (within a Linux cluster) during a single simulation run.
For starters, see
https://www.neuron.yale.edu/phpBB2/view ... highlight=

--Ted
porio

Post by porio »

Hi,
I'm having a different problem: when running a simulation, the CPU usage (as reported by the Task Manager) never gets higher than 55-60%.
My configuration is a Pentium 4 at 3.4 GHz with 2 GB of RAM, running WinXP Professional. At home I have the same processor with 1 GB of RAM and WinXP Home, and I see just the same: CPU usage never gets higher than 55%.
Why is that?
I thought it might be caused by the part of the code that stores results in Vectors, but after commenting out those lines the behavior (I'm not sure it's really a problem, though) remained the same.

Regards.
ted
Site Admin
Posts: 6299
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine

Post by ted »

porio wrote: CPU usage (as reported by the Task Manager) never gets higher than 55-60%.
NEURON isn't the only program that is running. What else is Windows doing? If you want the CPU to spend more time on a particular program, you have to increase that program's priority. Presumably Windows allows you to specify priority.
mikey
Posts: 37
Joined: Mon Jun 19, 2006 3:06 pm

Post by mikey »

Can anyone refer me to a tutorial on how to increase the CPU priority for a given program on Windows XP? I want to put all my CPU resources into NEURON.
Raj
Posts: 220
Joined: Thu Jun 09, 2005 1:09 pm
Location: Groningen, The Netherlands

Post by Raj »

On machines with hyperthreading turned on, CPU usage doesn't go over 50%, because each hyperthread is counted as a full resource, whereas in fact hyperthreading is just a clever way of using the otherwise idle time of a single processor. On machines with multiple real processors, a program that is not written to be parallel will use at most one processor.

Under XP, CPU priority can be changed from the Task Manager by right-clicking on a process and setting its priority. So far, all programs (including NEURON) that I have inspected in the Task Manager were already bound to all processors, so via processor affinity you can only make fewer resources available to NEURON, not more. By default NEURON runs on a single processor, and you will have to learn how to run NEURON in parallel to benefit from your second processor for reducing simulation time further.
mikey
Posts: 37
Joined: Mon Jun 19, 2006 3:06 pm

Post by mikey »

Thanks so much, Raj. Yes, I have played around and cannot get NEURON any more resources with Task Manager; 50% seems to be the max. I'm desperately trying to speed up NEURON. What kind of speed increase could a reinstall of the Windows operating system confer? My last reinstall was about a year ago, but it was a headache, and that was an emergency reinstall, not an optional performance reinstall.
Raj
Posts: 220
Joined: Thu Jun 09, 2005 1:09 pm
Location: Groningen, The Netherlands
Contact:

Post by Raj »

If you have two processors you can gain speed by using parallel NEURON; you will have to search this website for tips. It is still on my to-do list, so I cannot help you further there. Reinstalling the OS will only cost you time.

Inspecting your model and adapting it sometimes helps. If, for example, you have synapses satisfying linear differential equations, there are ways to rewrite them so that all synapses in the same segment are pooled into a single point mechanism, for which the solver only needs to solve one set of differential equations. These optimizations are, however, very model specific, and there are no general rules for them.
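As a concrete illustration: Exp2Syn has linear kinetics, so any number of afferent spike streams can drive a single Exp2Syn in a segment, and the solver still integrates just one pair of state equations. A rough sketch, where precell[] and nsrc are hypothetical names for your presynaptic cells:

    objref syn, nc, nclist
    soma syn = new Exp2Syn(0.5)   // one synaptic mechanism at mid-soma
    syn.tau1 = 0.5                // ms, rise time constant
    syn.tau2 = 5                  // ms, decay time constant
    nclist = new List()
    for i = 0, nsrc-1 {
        // every afferent delivers its weighted events to the same synapse
        precell[i].soma nc = new NetCon(&v(0.5), syn, -20, 1, 0.001)
        nclist.append(nc)
    }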