
ode problem with long syn delays in pNEURON

Posted: Tue May 15, 2007 12:59 pm
by ubartsch
Hi,
A question about parallel Neuron:
The parallel network model I simulate doesn't seem to tolerate large synaptic delays. I used synaptic delays of up to 15 ms with set_maxstep(100).
To be precise, there seems to be a problem with a gaba mech that I use.
Below you find the error message and the respective gaba.mod.
[A test simulation running the exact same code, only with fewer cells and a shorter run time, runs on a serial machine without any problems!]

Is there a problem with the mod file?

I'm using NEURON VERSION 6.0.867 (1738) 2007-05-10 on a Beowulf cluster running SUSE Linux 10.

Many thanks for any help!

UB

The error message:

Code:

Assertion failed: file gaba.c, line 242
23 /usr/local/Cluster-Apps/nrn/nrn-6.0-867/mx-noiv-mpi-gcc/x86_64/bin/nrniv: _tsav <= t
23  in p_DA3Run.hoc near line 218
23  prun()
       ^
        23 ParallelContext[0].psolve(30000)
      23 ParallelNetManager[0].psolve(30000)
    23 prun()
[23] MPI Abort by user Aborting program !
MX:node012:send req(already completed):req status 8:Remote endpoint is closed
	type: 1 (send_small)
	state (0x10):
		dead
	mcp_handle : 59
	seg:0x8edee0,8
	dest:peer_index=5,eid=2,seqnum=25185
	slength=8,xfer_length=8
	caller: 0x518f80

This is the respective gaba.mod:

Code:

TITLE gaba synapse 

NEURON {
	POINT_PROCESS gaba
	NONSPECIFIC_CURRENT i
        RANGE g,a,b,gGABAmax,tauD,tauF,util
}

UNITS {
        (uS) = (microsiemens)
        (nA) = (nanoamp)
        (mV) = (millivolt)
}

PARAMETER {
	tcon = .5 (ms)
	tcoff = 5.0 (ms)
	egaba = -75 	(mV)
	gGABAmax = 0	(uS)
        tauD = 800         (ms)
        tauF = 800         (ms)
        util= .3
}

ASSIGNED {
	v 	(mV)
	i	(nA)
	g       (uS)
	factor
}

INITIAL { 
   a=0  
   b=0 
   factor=tcon*tcoff/(tcoff-tcon)
}

STATE {
      a
      b
}

BREAKPOINT {
	SOLVE states METHOD cnexp
        g = b-a
	i = gGABAmax*g*(v-egaba)
}

DERIVATIVE states {
	a' = -a/tcon
	b' = -b/tcoff
}

NET_RECEIVE(wgt,R,u,tlast (ms),nspike) {
        LOCAL x
        if (nspike==0) { R=1  u=util }
	else {
	     if (tauF>0) { u=util+(1-util)*u*exp(-(t-tlast)/tauF) }
	     if (tauD>0) { R=1+(R*(1-u)-1)*exp(-(t-tlast)/tauD) }
	     }
	x=wgt*factor*R*u
	state_discontinuity(a,a+x)
	state_discontinuity(b,b+x)
        tlast=t
        nspike= nspike+1
}
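For context, this is roughly how such a point process is driven in a parallel net with a long NetCon delay (a minimal hoc sketch; `pregid` and the `soma` section are hypothetical names, not taken from the actual model, and `pc` is assumed to be the existing ParallelContext):

```
// hypothetical usage sketch: attach the gaba synapse to a cell
// and connect it to a presynaptic gid with a long delay
objref syn, nc
soma syn = new gaba(0.5)      // place the point process mid-soma
syn.gGABAmax = 0.001          // (uS) example conductance
// pc is the existing ParallelContext; 15 ms delay as in the question
nc = pc.gid_connect(pregid, syn)
nc.weight = 1
nc.delay = 15                 // (ms)
```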

Posted: Thu May 17, 2007 9:44 am
by hines
The problem is not with gaba.mod but with two events that have arrived out of order. The events are likely within 1e-10 ms of each other, but that is still not supposed to happen and needs to be traced to the underlying internal error. The easiest way to proceed is for me to try to reproduce the problem, provided you are not using more CPUs than I have available (12), and it would be great if it happens within a few minutes of launch. Also, are you using only default methods? I.e., not using the variable step method, multisplit, binqueue, spike compression, multisend, or selfevents-not-on-queue?

Anyway, if you send me all the necessary hoc, ses, and mod files, the number of processors, and the nrniv args you are using, I'll see what I can do. Otherwise, the first diagnostic step is to reach into the x86_64/gaba.c file and, just before the assertion, insert a printf statement that prints the t and _tsav values. You can continue or not according to your judgment. The next diagnostic step is to verify that the error is repeatable. Then one does something to figure out what those two events are. I can of course help with this process, but what one does next generally depends on what happens.

Posted: Thu May 17, 2007 4:30 pm
by ubartsch
I will do my best to narrow down the problem.
As soon as I can reproduce the problem with a minimal version, I'll send you the files.

Cheers
Ullrich

Posted: Tue May 22, 2007 10:58 am
by ubartsch
Hi,
Sorry it took a while to get back on this.
It seems the error occurs only with use_local_dt(1).
I am running my networks without local dt now, but just one question:

Is it the case then that you are not supposed to use the local dt solver for parallel networks?

Cheers
Ulli

Posted: Tue May 22, 2007 11:06 am
by hines
The local variable time step method is supposed to work with parallel networks. It does not work with gap junctions or multisplit cells; i.e., if the only communication between cells is via discrete events, lvardt should work.
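For reference, the local variable time step method is enabled through the CVode object (a minimal hoc sketch, assuming a standard setup where integration is otherwise left at defaults):

```
// minimal sketch: turn on the local variable time step method (lvardt)
objref cvode
cvode = new CVode()
cvode.active(1)        // use variable step integration
cvode.use_local_dt(1)  // each cell integrates with its own time step
```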