Dear Forum,

I'd be grateful to hear people's general thoughts on the following.

I would like to use a cable model to simulate action potentials along human CNS axons for specific long-range white matter fibre bundles. The main objective is to obtain an estimate of the expected conduction delay for a given fibre bundle given three pieces of information that can now be obtained from noninvasive imaging:

1. Axon length (from diffusion-weighted MRI tractography, not particularly new)

2. Axon diameter distribution (from a slightly newer diffusion-weighted MRI technique based on the MR signature of restricted diffusion in cylinders)

3. G-ratio (from another fairly new technique: magnetization transfer imaging with bound pool fractions).

Ultimately I would like to plug these three quantities into a series of cable models (one for each bin in the diameter distribution) to get a predicted conduction delay distribution, which would then be used as an anatomically informed prior on the conduction delay between the pair of regions connected by the fibre bundle in question in an EEG/MEG connectivity analysis.

I have found the discussions of cable-model details in this forum extremely helpful, but I am still left with the general question: is this approach (simulating human CNS conduction delays from length, diameter, and g-ratio) a feasible thing to try to do with a cable model? Are there any specific and/or principled reasons why this cannot or should not be done? If not, could anyone point me to the most appropriate published model (ideally on ModelDB, but also in the general literature) for this kind of analysis?

The main motivation for pursuing this is that it potentially provides a physiologically well-motivated link between structural connectivity information and the dynamic (functional/effective) connectivity models used to analyze EEG/MEG data. The other motivation is that I want to simulate the effects of age-related demyelination on action potential propagation, particularly with respect to conduction delays, conduction block, and spike frequency limits. I'm fully aware of the extensive literature on demyelination in MS, which focuses mostly on peripheral motor pathways. One way my application differs is that I want to simulate demyelination on specific fibre bundles in individual subjects, where I know the 3D shape, trajectory, length, and (in theory) spatial distribution of demyelination.

So my second question is a general one: would it be useful to construct realistic simulations with this rich geometry and microstructure information? Or would it be just as informative to look at the effects of different spatial distributions of demyelination on a set of generic 30-node axons, and then try to generalize the results to specific fibre bundles with appropriate scaling etc. as a second step? Relatedly, setting aside whether the more complex simulation would be useful, and assuming that computation time isn't an issue, would there be any actual numerical problems with using a large-scale model like this, or with trying to simulate delays directly rather than calculating them as length/(simulated conduction velocity)?

(Just FYI: so far I am using the Hursh/Rushton-style approximation, conduction velocity = (k/g)*d, where d is the axon diameter, g is the g-ratio, and k is a scaling constant (~5.6; cf. Caminiti et al. 2009). Conduction delay is then axon length/CV. Does anyone have any passionate objections to using this expression to estimate delays for fibre bundles in the human CNS when length, g, and d are known?)
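To be concrete about what I mean by the binned calculation, here is a minimal sketch of the per-bin delay computation using that linear approximation. All numbers (bundle length, g, bin centres, weights) are hypothetical placeholders, not measured data, and k = 5.6 is just the Caminiti-style scaling constant mentioned above:

```python
# Sketch: predicted conduction-delay distribution from length, g-ratio, and a
# binned axon-diameter distribution, using CV = (k/g) * d.
# Units: d in um, CV in m/s, length in mm, delay in ms (mm / (m/s) = ms).

K = 5.6  # assumed scaling constant, m/s per um of fibre diameter

def conduction_delay_ms(length_mm, diameter_um, g_ratio, k=K):
    """Delay (ms) for one diameter bin under the linear CV approximation."""
    cv_m_per_s = (k / g_ratio) * diameter_um
    return length_mm / cv_m_per_s

# Hypothetical example: a 120 mm bundle, g = 0.7, four diameter bins
length_mm = 120.0
g = 0.7
bins_um = [0.5, 1.0, 2.0, 4.0]   # bin centres (um)
weights = [0.4, 0.3, 0.2, 0.1]   # fraction of axons per bin

delays = [conduction_delay_ms(length_mm, d, g) for d in bins_um]
mean_delay = sum(w * t for w, t in zip(weights, delays))
print(delays)      # per-bin delays (ms): [30.0, 15.0, 7.5, 3.75]
print(mean_delay)  # weighted mean delay (ms): 18.375
```

The per-bin delays (with their weights) would then form the delay-distribution prior; the cable-model version would simply replace `conduction_delay_ms` with a simulated propagation time per bin.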

Answers to any of my questions (three, I think) and general comments would be greatly appreciated.

Many thanks,

JG