length scales and SparseEfficiencyWarning

Extending NEURON to handle reaction-diffusion problems.

Moderators: hines, wwlytton, ramcdougal

bschneiders
Posts: 33
Joined: Thu Feb 02, 2017 11:30 am

length scales and SparseEfficiencyWarning

Post by bschneiders » Fri Nov 08, 2019 7:02 pm

Hi. I have two likely related questions. The first is that I sometimes get the following warning:

Code: Select all

/Applications/NEURON-7.6/nrn/lib/python/neuron/rxd/section1d.py:155: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
  g[io, io] += rate_r + rate_l
I get this warning a number of times, through line 174 of section1d.py (the first line, 155, is in the setup for the diffusion matrix), and again in species.py and rxd.py. Sometimes the run seg faults; when it does, I rerun the exact same code and it runs fine, which is why it has taken me so long to get to the bottom of this. Any idea what exactly is going on here? Can I change a tolerance somewhere? I know "rate_r" and "rate_l" are related to section.L and section.nseg, which is why I think this is related to the question below.
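For what it's worth, the warning itself is generic SciPy behavior and can be reproduced in isolation, outside NEURON entirely (a minimal sketch, not NEURON's actual matrix-assembly code):

```python
import warnings
from scipy.sparse import csr_matrix, lil_matrix, SparseEfficiencyWarning

# Writing to a structurally-zero entry of a CSR matrix forces SciPy to
# rebuild the compressed index arrays, which is what the warning is about.
g = csr_matrix((3, 3))
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    g[0, 0] += 1.0  # triggers SparseEfficiencyWarning
assert any(issubclass(w.category, SparseEfficiencyWarning) for w in caught)

# Building in LIL format and converting once at the end avoids it,
# which is essentially what the warning message suggests.
g2 = lil_matrix((3, 3))
g2[0, 0] += 1.0
g2 = g2.tocsr()
```

The warning is about performance, not correctness, so it should not by itself explain a seg fault.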

The second, related question concerns length scales (the diffusion matrix becomes an issue with large nseg/small compartments, which seems to make sense). I have been setting my segment lengths according to the d_lambda rule for most sections, but that only takes electrical properties into account. Voltage spreads much faster than calcium diffuses (I am using D_Ca = 0.5 um^2/ms), so with respect to calcium diffusion my segments should be much smaller. Is there any way to separate these length scales?
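For reference, the d_lambda rule mentioned above can be sketched in Python (the lambda_f formula and the standard 0.1 * lambda_f(100 Hz) rule follow NEURON's documentation; `diffusion_length` is my own rough heuristic for a chemical length scale, not an official rule):

```python
import math

def lambda_f(freq, diam, Ra, cm):
    # AC length constant (um) at frequency freq (Hz) for a cable with
    # diameter diam (um), axial resistivity Ra (ohm-cm), and specific
    # membrane capacitance cm (uF/cm^2) -- the quantity the d_lambda rule uses.
    return 1e5 * math.sqrt(diam / (4 * math.pi * freq * Ra * cm))

def nseg_d_lambda(L, diam, Ra=100.0, cm=1.0, freq=100.0, d_lambda=0.1):
    # Standard d_lambda rule: choose an odd nseg so that no segment is
    # longer than d_lambda * lambda_f(100 Hz).
    return int((L / (d_lambda * lambda_f(freq, diam, Ra, cm)) + 0.9) / 2) * 2 + 1

def diffusion_length(D, t):
    # Rough chemical length scale (my assumption, not an official rule):
    # rms 1-D diffusion distance sqrt(2*D*t), in um for D in um^2/ms, t in ms.
    return math.sqrt(2.0 * D * t)

print(nseg_d_lambda(L=200.0, diam=2.0))  # -> 5 (electrical rule)
print(diffusion_length(D=0.5, t=10.0))   # -> ~3.16 um over 10 ms for Ca
```

Comparing the two lengths for a given section shows how much finer a chemically motivated grid could need to be than the electrically motivated one.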

Note: as you can see, I am still using NEURON 7.6 - apologies if this was addressed in 7.7! I haven't made the switch yet.

ramcdougal
Posts: 192
Joined: Fri Nov 28, 2008 3:38 pm
Location: Yale School of Public Health

Re: length scales and SparseEfficiencyWarning

Post by ramcdougal » Tue Nov 12, 2019 11:17 am

The first issue should go away when you upgrade to 7.7.

As far as tolerance goes: if you're using variable step, you can specify an atolscale when you declare the Species... I don't know that that's related, but I'm mentioning it just in case.

At this time, we have no automated way of discovering the appropriate discretization. We have explored subsegment discretization, but that is currently not supported; for now, 1D simulations must use the same discretization for both chemical and electrical kinetics. An empirical test of the discretization is to triple nseg and see whether the results change qualitatively; if they do, you need a larger nseg.
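The "triple nseg" test above is easy to automate. A minimal sketch, where the `run_model` callable, the 5% tolerance, and the toy stand-in model are all my own assumptions (the toy is not NEURON; in practice `run_model` would set nseg, run the simulation, and return a recorded trace on a fixed time grid):

```python
import numpy as np

def nseg_converged(run_model, nseg, rtol=0.05):
    # Empirical discretization test: rerun with 3x nseg and check whether
    # the result changes by more than rtol (relative to the finer run).
    base = np.asarray(run_model(nseg))
    fine = np.asarray(run_model(3 * nseg))
    denom = max(np.max(np.abs(fine)), 1e-12)
    return bool(np.max(np.abs(base - fine)) / denom < rtol)

def toy_model(nseg):
    # Toy stand-in whose output converges as 1/nseg (hypothetical).
    t = np.linspace(0, 1, 101)
    return np.sin(2 * np.pi * t) * (1 + 1.0 / nseg)

print(nseg_converged(toy_model, 1))   # -> False: coarse grid, not converged
print(nseg_converged(toy_model, 50))  # -> True: effectively converged
```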

That's not great, I know, but for what it's worth 7.7 should simulate the reactions faster.

bschneiders
Posts: 33
Joined: Thu Feb 02, 2017 11:30 am

Re: length scales and SparseEfficiencyWarning

Post by bschneiders » Tue Nov 12, 2019 12:34 pm

That's good to know about 7.7, thanks!

As for the length scales and tolerance, I figured that was the case, but it was worth a shot. I believe I have set atolscale appropriately (the Atol Scale Tool caught some values I had missed - very handy), but I don't think that addresses the discretization issue I'm describing. I'll try the test you mention and see if that helps. Thanks!

bschneiders
Posts: 33
Joined: Thu Feb 02, 2017 11:30 am

Re: length scales and SparseEfficiencyWarning

Post by bschneiders » Tue Feb 04, 2020 2:30 pm

A quick follow-up. I did finally update to 7.7, but I am still getting frequent seg faults. It now seg faults every time I run my full code (i.e. execfile("driver.py")), but if I run every line manually, it doesn't seg fault. The only warning I get is the following, whether it seg faults or not:

Code: Select all

NEURON: syntax error
 near line 1
 ^
NEURON: syntax error
 near line 1
 {sw5=new PlotShape(0)}
        ^
NEURON: syntax error
 near line 1
 ^
NEURON: syntax error
 near line 1
 {sw6=new PlotShape(0)}
        ^
h.cvode.atol(absTolerance)
h.cvode.re_init()
Segmentation fault: 11
When I run my code manually, I still get the above warning, just no seg fault, and the shape plots still show up and look as I would expect. I narrowed it down to this bit of code by adding a "neuron_gui" toggle: when I turn the GUI off, there is no seg fault. So here is the segment of my GUI code that yields this warning, and likely the seg fault:

Code: Select all

def create_gui_windows(nablock):
    # no problems with shapeWindow, sw2, sw3, ... so I'm starting at sw4
    
    h('{objref sw4}')
    h('{sw4=new PlotShape(0)}')
    sw4 = h.sw4
    sw4.variable('dfof_max_gmax')
    # sw4.variable('dfof_gmax')
    sw4.view(703, 192, 35, 0, 250, 500, 1000, 250)
    # sw4.view(690, 190, 35, 0, 250, 500, 1000, 250)  #spineloc .3
    sw4.exec_menu('Show Diam')
    sw4.exec_menu('Shape Plot')
    sw4.scale(0, 200) #600)
    sw4.show(0)
    h.fast_flush_list.append(sw4)

    h('{objref sw5')
    h('{sw5=new PlotShape(0)}')
    # sw5 = h.sw5
    sw5 = h.sw4
    # sw5.variable('dfof_max_gmax')
    sw5.variable('dfof_max_gmax')
    sw5.view(630, 195, 150, 0, 700, 0, 700, 500)
    sw5.exec_menu('Show Diam')
    sw5.exec_menu('Shape Plot')
    sw5.scale(0, 200) #600)
    sw5.show(0)
    h.fast_flush_list.append(sw5)

    h('{objref sw6')
    h('{sw6=new PlotShape(0)}')
    # sw6 = h.sw6
    sw6 = h.sw3
    sw6.variable('cam_camax')
    sw6.view(630, 195, 150, 0, 700, 0, 700, 500)
    sw6.exec_menu('Show Diam')
    sw6.exec_menu('Shape Plot')
    sw6.scale(5e-5, .0005)
    sw6.show(0)
    h.fast_flush_list.append(sw6)

    #.....same setup for sw7, sw8, sw9

    return [shapeWindow, sw2, sw3, sw4, sw5, sw6, sw7, sw8, sw9]
I start at sw4 because I have no issues with sw1-sw3. When I run this, I get the errors shown above. However, if I run this code with "sw5 = h.sw5" and "sw6 = h.sw6" as I would like to, the function doesn't run at all:

Code: Select all

In [10]: h('{objref sw4}')
Out[10]: 1

In [11]: h('{objref sw5')
NEURON: syntax error
 near line 1
 ^
Out[11]: 0
(Note that the quantities being plotted are the same in 4 & 5 and in 3 & 6, which is why I "cheated" and paired them like this when the original, commented-out lines didn't work.) Any ideas on why I can't create sw5, etc.? Can I not have two shape windows showing the same quantity? (I have several zoomed-in versions to show the spines and the broader neuron.)

ramcdougal
Posts: 192
Joined: Fri Nov 28, 2008 3:38 pm
Location: Yale School of Public Health

Re: length scales and SparseEfficiencyWarning

Post by ramcdougal » Wed Feb 05, 2020 9:32 am

That's very weird because there's nothing wrong with

Code: Select all

    h('{sw4=new PlotShape(0)}')
by itself. There's no way there's a weird character in there, is there? (Non-breaking space, that sort of thing?) Otherwise, my guess is that something badly destabilized NEURON before you got that far. If you're willing to share a version (ideally minimal, but whatever) of the code that exhibits this problem, just email me and I'll take a look. We definitely don't want to have NEURON seg-faulting on people.

Also on the list of things that aren't wrong: there's nothing wrong with having two PlotShapes of the same variable.

I'm assuming you don't actually need to create the PlotShape as a top-level HOC variable, do you?

If not, does the problem go away if you replace:

Code: Select all

    h('{objref sw4}')
    h('{sw4=new PlotShape(0)}')
    sw4 = h.sw4
with just

Code: Select all

    sw4 = h.PlotShape(False)
etc?

bschneiders
Posts: 33
Joined: Thu Feb 02, 2017 11:30 am

Re: length scales and SparseEfficiencyWarning

Post by bschneiders » Wed Feb 05, 2020 3:29 pm

Hmm. This got weirder.

I replaced the PlotShape code with your suggestion, and now the warning goes away, but the seg fault does not. That is, when I enter the code manually, I no longer get the syntax warning and can make sw5, sw6, etc. with no problem. However, if I run the code all at once (execfile("driver.py")) it still seg faults.
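One side note on execfile: it exists only in Python 2. If the same driver script ever needs to run under Python 3 (an assumption on my part; the posts don't say which Python is in use), a minimal stand-in is:

```python
# Python 3 replacement for Python 2's execfile("driver.py") (sketch).
def execfile(path, globals_=None):
    # Compile with the real filename so tracebacks point at the script.
    with open(path) as f:
        code = compile(f.read(), path, "exec")
    exec(code, globals_ if globals_ is not None else globals())
```

This also makes it easy to run the driver into an explicit namespace dict, which can help when debugging differences between running a file all at once and pasting it line by line.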

I will try to get a minimal model going to send to you asap - thanks!
