Calm down please.
1) I just want to say that there is some oversimplification in published articles. Sometimes we grow used to certain common simplifications and forget to reevaluate them in new situations.
2) If I were sure about what I am saying, I would not have decided to consult experts. I just want to organize my thinking about this topic; I do not blame anybody. I am thinking of an additive approach to modeling (as in other areas of science): start from the simplest possible form and add complexity to it, until we reach a systematic model that can adapt to future demands and can explain future findings, or that behaves as in the in vivo condition.
As an example, look at the Golding 2005 and Jarsky 2005 articles. Both were supervised by Nelson Spruston in the same lab. Golding adjusted the passive parameters of the model to match experimental results. Jarsky used the same reconstructed cell created by Golding to show how distal and proximal dendrites collaborate with each other, but decided not to use the passive parameters that Golding et al. had adjusted for that morphology. When I look at these two articles I find a big "?" in my mind. What is happening here? If they did something important, why do they not use their own findings?
I am just trying to find the correct approach to science. I am really confused; these are unanswered questions for me.
What I am trying to say is that each lab works on a topic. They start a subject, and each piece of work is a backbone for future work. Why is this not happening in modeling (at least in modeling dendrites)?
These statements may be true but they do not bear on the role of detail in models. Neither does the quote from Saksida and McClelland. By the way, quoting a pair of cognitive scientists in order to support the notion of biological details in models should produce just a slight frisson of cognitive dissonance.
This is how that quotation relates to my idea about detailed modeling: if models were the motivation for experiments, and if scientists used models as a common language among themselves, there would be a common model that evolves and becomes more detailed and complete as research progresses. That is not the situation in modeling right now.
By the way, do you include any of these complexities in your models, and if not, aren't you just being arbitrary?
--the stochastic and quantal nature of synaptic transmission
--variation of quantal size
--retrograde signaling at synapses
--glial uptake and release of transmitters
--second messengers
--electrodiffusion
--the very irregular geometry of neurons (there are no circles, spheres, cylinders, cones, or even smooth surfaces in the brain)
How can we be sure these things are unnecessary when we have not included them in a model and evaluated their role systematically? However, I agree with you that there must be a hypothesis behind every detail we include in a model.
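For instance, the first item on the list above, the stochastic and quantal nature of transmission, could be added to a model and evaluated systematically with only a few lines. A minimal sketch, with parameter values that are purely illustrative, not measured:

```python
import random

def quantal_release(n_sites=5, p_release=0.3, q_mean=1.0, q_cv=0.2, rng=None):
    """One presynaptic spike: binomial release across n_sites, with the
    size of each released quantum drawn from a normal distribution
    (mean q_mean, coefficient of variation q_cv).
    All parameter values are illustrative, not measured."""
    rng = rng or random.Random()
    total = 0.0
    for _ in range(n_sites):
        if rng.random() < p_release:                    # does this site release?
            total += rng.gauss(q_mean, q_cv * q_mean)   # variable quantal size
    return total

# The trial-to-trial mean should approach n_sites * p_release * q_mean = 1.5
rng = random.Random(1)
trials = [quantal_release(rng=rng) for _ in range(20000)]
print(round(sum(trials) / len(trials), 2))
```

Running many trials and comparing the mean and variance of the response against a deterministic synapse would be one way to quantify what the extra detail buys.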
Are you familiar with the work of Abbott, Ermentrout, Kopell, Rall, Rinzel, Rubin?
What makes you think that I am not? But I think they sometimes simplify things without experimental or computational justification. Also, in data mining it is believed that our brain can consider at most about 7 components at once; this is the idea behind modeling.
--How would you simplify resonance, the excitatory effect of GABAergic inhibition, the effect of inhibition on lowering threshold, the excitatory effect of H-current combined with M-current on PSPs, and many more things I do not remember right now?
--There are a lot of people who simplify things just because they are not familiar with the details.
--There was a period in neuroscience when we did not know much about the details; at that time, making imagined simplifications was logical, but what about now?
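Resonance, at least, does admit a principled simplification: linearizing a membrane with a slow resonant current (such as I_h) around rest yields an RC circuit in parallel with a phenomenological inductive branch, and the impedance of that circuit peaks at a nonzero frequency. A minimal sketch, with purely illustrative parameter values:

```python
import math

def impedance(f_hz, C=1e-9, R=1e8, R_L=3e7, L=1e6):
    """|Z(f)| of a parallel RC with an inductive branch (R_L in series
    with L). Linearizing a membrane around rest turns a slow resonant
    current (e.g. I_h) into exactly such an inductive branch; the
    component values here are illustrative, not fitted to any cell."""
    w = 2.0 * math.pi * f_hz
    y = 1.0 / R + 1j * w * C + 1.0 / (R_L + 1j * w * L)  # total admittance
    return abs(1.0 / y)

freqs = [0.5 * k for k in range(1, 201)]   # 0.5 .. 100 Hz
zs = [impedance(f) for f in freqs]
f_res = freqs[zs.index(max(zs))]           # frequency of the impedance peak
print(f_res)  # peaks at a few Hz with these illustrative values
```

The peak in |Z| at an interior frequency is subthreshold resonance; the whole biophysical mechanism has been reduced to four circuit parameters.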
You may think this topic is entering the realm of "apples vs. oranges", which you do not like. If you want, we can stop here; if not, we can continue, because this is a basic problem for me. But answer me when you are in a good mood. :)
I think you answered my second question. But with the first question I wanted to know whether there is a methodological problem with chemical reaction (kinetic) models in synaptic modeling. If I use them, should I justify the choice during the publication process? Is that strange?
Well, if that's essential to your hypothesis, go ahead and include detailed synaptic mechanisms. But to discover what "extra" you get from such details, you'll also have to build models that use simplified synaptic mechanisms.
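That comparison can be sketched cheaply even before committing to a full simulator. Here the same pair of synaptic events drives a simplified synapse (double-exponential conductances summing linearly) and a more detailed one (a two-state closed/open kinetic scheme, where the open fraction saturates); every parameter value is hypothetical:

```python
import math

DT, T_STOP = 0.01, 20.0   # time step and duration, ms

def double_exp(t, tau_r=0.5, tau_d=5.0):
    """Phenomenological conductance time course, peak-normalized to 1."""
    if t < 0.0:
        return 0.0
    tp = tau_r * tau_d / (tau_d - tau_r) * math.log(tau_d / tau_r)
    norm = 1.0 / (math.exp(-tp / tau_d) - math.exp(-tp / tau_r))
    return norm * (math.exp(-t / tau_d) - math.exp(-t / tau_r))

def simple_response(event_times):
    """Simplified synapse: responses to each event add linearly."""
    n = int(T_STOP / DT)
    return [sum(double_exp(i * DT - t0) for t0 in event_times) for i in range(n)]

def kinetic_response(event_times, alpha=2.0, beta=0.2, t_dur=1.0):
    """Detailed synapse: closed<->open scheme driven by 1 ms transmitter
    pulses; the open fraction cannot exceed 1, so it saturates."""
    o, out = 0.0, []
    for i in range(int(T_STOP / DT)):
        t = i * DT
        T = 1.0 if any(t0 <= t < t0 + t_dur for t0 in event_times) else 0.0
        o += DT * (alpha * T * (1.0 - o) - beta * o)   # forward Euler step
        out.append(o)
    return out

one, two = [0.0], [0.0, 2.0]
gain_simple = max(simple_response(two)) / max(simple_response(one))
gain_kinetic = max(kinetic_response(two)) / max(kinetic_response(one))
print(round(gain_simple, 2), round(gain_kinetic, 2))  # kinetic gain is smaller
```

With a single event the two time courses look much the same; the kinetic scheme only earns its extra parameters when events arrive close enough together for saturation to matter, which is exactly the kind of "extra" such a comparison would expose.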
Do you believe this should be the topic of an article before I can use it?