swc-format-problems

Managing anatomically complex model cells with the CellBuilder. Importing morphometric data with NEURON's Import3D tool or Robert Cannon's CVAPP. Where to find detailed morphometric data.
PhilippRautenberg
Posts: 15
Joined: Wed Dec 06, 2006 10:53 am

swc-format-problems

Post by PhilippRautenberg »

Hello everybody,

have a look at the following two files:
---------swc1----------
1 1 1465.700 1934.700 97.600 0.7400 -1
2 1 1462.000 1931.400 96.700 0.8300 1
3 1 1458.300 1928.200 95.800 0.9200 2
4 1 1459.300 1935.700 97.500 0.8300 2
5 1 1454.300 1925.300 94.900 0.9800 3
6 1 1462.500 1928.700 94.700 0.8200 3
7 1 1456.700 1940.100 98.300 0.8000 4
8 1 1450.300 1922.500 94.100 0.9900 5
-----------------------
Renumber the point IDs (applying the same mapping to the parent column)
4 -> 5
5 -> 7
7 -> 6
6 -> 4
and re-sort by ID to get the following:
-----------swc2----------
1 1 1465.700 1934.700 97.600 0.7400 -1
2 1 1462.000 1931.400 96.700 0.8300 1
3 1 1458.300 1928.200 95.800 0.9200 2
4 1 1462.500 1928.700 94.700 0.8200 3
5 1 1459.300 1935.700 97.500 0.8300 2
6 1 1456.700 1940.100 98.300 0.8000 5
7 1 1454.300 1925.300 94.900 0.9800 3
8 1 1450.300 1922.500 94.100 0.9900 7
--------------------------

The Import3D tool can't display the first one correctly, but it displays the second one fine. Any idea why this is so?
So long, Philipp
ted
Site Admin
Posts: 6303
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Post by ted »

Looks to me like its algorithm was constructed under the assumption that data points are
obtained by an orderly recursive descent through the tree. That's the sequence that the
second data set follows, whereas the first data set does more of a drunken jig. You're
probably going to say that the tool should be able to follow such a dance, right?
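ted's hypothesis can be checked mechanically: in an orderly depth-first listing, every point either directly follows its parent or branches off a fork point, so a point whose parent is unbranched yet non-adjacent breaks the order. A quick Python sketch (not part of the original exchange), using just the id and parent columns of the two files above:

```python
from collections import Counter

def bad_orderings(rows):
    """Flag points that break depth-first order: a point whose parent
    is unbranched (exactly one child) yet does not immediately precede
    it in the file. rows: (id, parent_id) pairs."""
    nkids = Counter(p for _, p in rows if p >= 0)
    return [(i, p) for i, p in rows
            if p >= 0 and i != p + 1 and nkids[p] == 1]

# (id, pid) columns of the two listings above
swc1 = [(1, -1), (2, 1), (3, 2), (4, 2), (5, 3), (6, 3), (7, 4), (8, 5)]
swc2 = [(1, -1), (2, 1), (3, 2), (4, 3), (5, 2), (6, 5), (7, 3), (8, 7)]
```

swc1 is flagged (points 7 and 8 follow unbranched parents non-contiguously) while swc2 passes cleanly, which is exactly the ordering difference at issue.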
PhilippRautenberg
Posts: 15
Joined: Wed Dec 06, 2006 10:53 am

Post by PhilippRautenberg »

ted wrote:Looks to me like its algorithm was constructed under the assumption that data points are
obtained by an orderly recursive descent through the tree. That's the sequence that the
second data set follows, whereas the first data set does more of a drunken jig. You're
probably going to say that the tool should be able to follow such a dance, right?
My tracing program (Amira) produces the first type of format. But I can load it with Neuromantic by Darren Myatt and save it again; that gives me the second type. The question is whether I can skip this step, since the problem is just representational, or whether I have to use the second type of data for the whole modelling process.
hines
Site Admin
Posts: 1692
Joined: Wed May 18, 2005 3:32 pm

Post by hines »

Sorry. I made the unwarranted assumption that all the points of an unbranched neurite would be contiguous in the swc file. In the first example, point 8 is connected to point 5, which is connected to point 3, and point 5 is unbranched.

In regard to fixing the bug, there is some ambiguity with regard to whether to place 8 and 5 in the same section. Since they are not contiguous, I decided to put them in separate sections. I'd be happy to entertain arguments as to whether the alternative is superior.

Anyway, I have committed the bug fix to the subversion repository, but you can make the three-line substantive change yourself by considering the diff:

Code: Select all

[hines@localhost nrn]$ svn diff -r 1634:1635 share/lib/hoc/import3d/read_swc.hoc
Index: share/lib/hoc/import3d/read_swc.hoc
===================================================================
--- share/lib/hoc/import3d/read_swc.hoc (revision 1634)
+++ share/lib/hoc/import3d/read_swc.hoc (revision 1635)
@@ -144,6 +144,8 @@
        if (id.size < 2) { return }

        // tobj stores the number of child nodes with pid equal to i
+       // actually every non-contiguous child adds 1.01 and a contiguous
+       // child adds 1
        mark_branch(tobj)

        point2sec = new Vector(id.size)
@@ -167,6 +169,8 @@

 proc mark_branch() { local i, p
        //$o1 is used to store the number of child nodes with pid equal to i
+       // actually add a bit more than 1
+       // if noncontiguous child and 1 if contiguous child
        // this is the basic computation that defines sections, i.e.
        // contiguous 1's with perhaps a final 0 (a leaf)
        // As usual, the only ambiguity will be how to treat the soma
@@ -189,6 +193,9 @@
                p = pid.x[i]
                if (p >= 0) {
                        $o1.x[p] += 1
+                       if ( pid.x[i] != i-1) {
+                               $o1.x[p] += .01
+                       }
                        if (type.x[p] != type.x[i]) {
                                // increment enough to get past 1
                                // so force end of section but
[hines@localhost nrn]$
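The fixed logic can be paraphrased in Python. This is a simplified sketch, not the actual hoc code: it ignores read_swc.hoc's special handling of the soma and of type changes, and the function and variable names are mine:

```python
def mark_branch(ids, pids):
    """For each point, a 'branch score': +1 per child, plus an extra
    0.01 per child that does not directly follow it in the file
    (mirroring the +.01 added in the diff above)."""
    score = {i: 0.0 for i in ids}
    for i, p in zip(ids, pids):
        if p >= 0:
            score[p] += 1
            if p != i - 1:            # non-contiguous child
                score[p] += 0.01
    return score

def point2sec(ids, pids, score):
    """A point extends its parent's section only when the parent's
    score is exactly 1 (one child, contiguous); otherwise it starts
    a new section."""
    sec, nsec = {}, 0
    for i, p in zip(ids, pids):
        if p < 0 or score[p] != 1.0:
            nsec += 1
            sec[i] = nsec
        else:
            sec[i] = sec[p]
    return sec
```

With the swc1 parent column this yields seven sections (points 5 and 8 land in separate sections, as described above), while swc2 yields five.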
darrenmyatt

Post by darrenmyatt »

Hi Philipp,

I did indeed need to do quite a bit of work to get the output of Neuromantic to be successfully imported into NEURON, due to a few eccentricities of the import tool.

Firstly, the parent of any segment needs to appear earlier in the list than the child, so the first step was simply sorting the segments based on their distance (in segments) from the soma.

Unfortunately, this then runs into the second problem whereby the interlacing of different branches in the file leads to incorrect linkage by the import tool i.e. for two branches with segments ABCD and 1234 the ordering needs to be ABCD1234 (or 1234ABCD), and *not* A1B2C3D4.
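The reordering darrenmyatt describes (parents before children, branches de-interlaced rather than A1B2C3D4) is what a depth-first traversal produces. A Python sketch of that conversion, not Neuromantic's actual code and with invented names:

```python
from collections import defaultdict

def reorder_swc(rows):
    """Re-emit SWC rows in depth-first order, renumbering IDs so that
    every parent precedes its children and each unbranched run of
    points is contiguous. rows: (id, type, x, y, z, r, pid) tuples."""
    children = defaultdict(list)
    byid = {}
    roots = []
    for row in rows:
        byid[row[0]] = row
        if row[6] == -1:
            roots.append(row[0])
        else:
            children[row[6]].append(row[0])
    newid, out = {}, []
    stack = [(r, -1) for r in reversed(roots)]
    while stack:
        old, newpid = stack.pop()
        _, t, x, y, z, r, _ = byid[old]
        newid[old] = len(out) + 1
        out.append((newid[old], t, x, y, z, r, newpid))
        # push children reversed so the first child is visited next
        for c in reversed(children[old]):
            stack.append((c, newid[old]))
    return out
```

Applied to swc1 this produces a valid listing of the second style (depth-first, parents first), though not necessarily byte-identical to swc2, since sibling order at each fork is arbitrary.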
PhilippRautenberg
Posts: 15
Joined: Wed Dec 06, 2006 10:53 am

Post by PhilippRautenberg »

I changed read_swc.hoc and everything works just fine now! I am just not sure whether this now causes problems, e.g. when you put points into separate sections, is there a disadvantage (increased calculation time)? In my case I model just a single neuron and would like to model it in as much detail as possible; therefore, the more sections the better, right?
The next question that comes to mind: when I have a thick dendrite with little spines or filopodia, the model simulates their dynamics starting not at the surface but in the middle of the dendrite, right? Is this overlap negligible?
hines
Site Admin
Posts: 1692
Joined: Wed May 18, 2005 3:32 pm

Post by hines »

It is generally a bad idea to put all the points in different sections. You want a section to be a length of unbranched cable. Then you can use the d_lambda rule in the CellBuilder to divide that into segments.

But a wild card here is your mention of the dendrite surface being covered with spines. This is normally handled by increasing the effective area of the dendrite based on the spine area and spine density. It was not clear to me that your swc file contained that information. Anyway, if you wish to simulate each individual spine, it is probably more efficiently handled as a one-segment section connected to the dendrite section at the location of the spine. Then you can have a modest number of segments representing the dendrite and many, many sections connecting to the centers of those segments. After all, the electrical property you want to capture is that when no current flows through the spine synapse, the voltage at the tip of the spine is almost exactly the voltage of that region of the dendrite, but when the synapse is active there is an i*r voltage difference between the dendrite and the tip of the spine.

The bottom line is to get your basic dendritic backbone right using minimal compartmentalization; then you can have a separate spine data file that you read to create the spine sections. I.e., I'd separate the 3-D info into two parts: the dendritic backbone and the spine geometry.
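The "effective area" correction mentioned above boils down to one scale factor. A hedged sketch of that arithmetic: the formula is the standard spine-correction idea, but the function and parameter names here are illustrative, not from the thread:

```python
import math

def spine_correction_factor(diam_um, L_um, spine_density_per_um,
                            spine_area_um2):
    """Factor F by which spines effectively increase the membrane
    area of a cylindrical dendritic section. All names illustrative."""
    a_dend = math.pi * diam_um * L_um                  # lateral area of cylinder
    a_spines = spine_density_per_um * L_um * spine_area_um2
    return (a_dend + a_spines) / a_dend
```

One would then multiply cm and conductance densities (or equivalently divide Rm) by F on the spiny sections, instead of modelling each spine explicitly.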
PhilippRautenberg
Posts: 15
Joined: Wed Dec 06, 2006 10:53 am

surface with spines

Post by PhilippRautenberg »

Thanks a lot! This is what I am planning to do now:
1. files with morphological data (with spines as you suggested, in the SWC structure from Neuromantic)
2. files with parameters like Ra, Ri, ...
3. a Python script to set the values in the parameter files for the parameter search, merge them with the morphology, and execute everything with nrniv
I am wondering how to handle the large amount of data that I will get from the parameter search. Does anyone have experience with databases? That seems to me the most straightforward way not to lose the overview, e.g. a PostgreSQL DB also controlled by the Python script.
P.S. Python is also good for plotting the results quickly and in a nice way.
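Not part of the original thread, but for the record-keeping idea: a minimal sketch using Python's built-in sqlite3 module (table and column names invented for illustration); the same pattern carries over to PostgreSQL via a driver such as psycopg2.

```python
import json
import sqlite3

def open_run_db(path=":memory:"):
    """Create (if needed) a tiny table for parameter-search runs.
    Schema is invented for illustration."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS runs (
        id INTEGER PRIMARY KEY,
        params TEXT NOT NULL,   -- JSON dict of Ra, Ri, ...
        result REAL)""")
    return db

def record_run(db, params, result):
    """Store one parameter set and a scalar result of the simulation."""
    db.execute("INSERT INTO runs (params, result) VALUES (?, ?)",
               (json.dumps(params), result))
    db.commit()
```

The driving script would call record_run once per nrniv invocation, and later queries (SELECT ... ORDER BY result) keep the overview over thousands of runs.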