Changing parameters on a density mechanism at the 1 location will set them at the last node of positive area (since density mechanisms cannot exist at points). As a consequence, using for(x) instead of for(x, 0) to loop over segments to set up a conductance gradient within a section (e.g. gmax varying with distance from the soma) would be a logical error.
Yes, and the result is a computational model whose properties differ from the modeler's conceptual model. The Poirazi and Mel pyramidal cell model, so widely reused by others, makes this very mistake.
And of course specifying the value of a density mechanism's parameter at the 0 end of a section actually affects that parameter's value in the section's first segment, i.e. the segment that contains the node at 0. If you are iterating over the section from the 0 end to the 1 end, however, that erroneous assignment is made first and is then overridden when the correct value is assigned to the first segment itself. At the 1 end the order is reversed: the correct value is assigned first and is then overwritten by the assignment aimed at the node at 1, so the wrong value persists in the last segment.
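For the Python side of this, here is a minimal sketch of the same behavior. It uses NEURON's built-in hh mechanism purely as a stand-in for a density mechanism: iterating over a section directly visits only the internal segments (the analogue of for (x, 0)), allseg() also visits the zero-area nodes at 0 and 1 (the analogue of for (x)), and an assignment aimed at one of those nodes actually lands in the adjacent segment.

```python
from neuron import h

dend = h.Section(name='dend')
dend.nseg = 5
dend.insert('hh')  # built-in density mechanism, used here only as a stand-in

# Iterating over the section yields the internal segments only,
# i.e. the Python analogue of hoc's "for (x, 0)".
print([seg.x for seg in dend])           # [0.1, 0.3, 0.5, 0.7, 0.9]

# allseg() also yields the zero-area nodes at 0 and 1,
# i.e. the analogue of hoc's "for (x)".
print([seg.x for seg in dend.allseg()])  # [0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]

# Density mechanism parameters cannot exist at the zero-area nodes,
# so an assignment aimed at x == 1 lands in the last internal segment.
dend(1).gnabar_hh = 0.25
print(dend(0.9).gnabar_hh)               # 0.25 -- the 0.9 segment was overwritten
```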
Regarding ziemek's latest question:
However, Ih and Ca_LVAst are density mechanisms, not point processes, yet the model loops over segments with for(x), so the 0 and 1 ends are included.
Is this an error, and if not, how can this behavior be recreated in Python?
Yes, it is an error, and it shouldn't be recreated in Python.
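For completeness, here is roughly what the correct Python version looks like: loop over the internal segments only and compute the gradient from path distance, so nothing is ever assigned "at" the 0 or 1 nodes. The geometry, the use of the built-in hh mechanism in place of Ih / Ca_LVAst, and the gradient numbers below are placeholders for illustration, not values from the published model; h.distance(seg1, seg2) requires a reasonably recent NEURON.

```python
from neuron import h

soma = h.Section(name='soma')
dend = h.Section(name='dend')
dend.connect(soma(1))
soma.L = soma.diam = 20                  # placeholder geometry
dend.L, dend.diam, dend.nseg = 400, 2, 11

dend.insert('hh')                        # stand-in for Ih or Ca_LVAst

g_prox = 0.01                            # illustrative conductance at the proximal end (S/cm2)
slope = 1e-5                             # illustrative increase per micron of path distance

# Loop over internal segments only -- the analogue of hoc's "for (x, 0)" --
# so the zero-area nodes at 0 and 1 are never targeted.
for seg in dend:
    dist = h.distance(soma(0.5), seg)    # path distance from the middle of the soma
    seg.gnabar_hh = g_prox + slope * dist
```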
Think about it this way. The situation is directly analogous to wet lab experimentation in which there has been a lapse of proper methodology. If you picked up a wet lab experimental paper and found that the authors didn't use correct experimental methods, what should be your response? "Well, they screwed up experimentally, but we're interested in similar questions, so we're going to follow their example and screw up just like they did."
Really?
My own very strong opinion is:
1. The error should be fixed, not recreated in Python. It should be fixed in the original hoc file, and tested to verify that the fixed model produces results that are qualitatively similar to the original buggy model, so the authors' original conclusions are not invalidated.
2. Any new model development, whether in hoc or Python, should use the corrected model specification. Why? If modeling played a significant role in the original paper, it was because the model was useful for evaluating a hypothesis posed by the authors. So the hypothesis was sufficiently complex that the authors didn't rely on their unaided intuition to infer the consequences of their assumptions. That's why they resorted to computational modeling. And that's why it is so important that there be a close match between the authors' conceptual model (hypothesis) and their computational model. Without such a match, results generated with the computational model cannot be relied on as a means for evaluating the hypothesis.
Someone might say, "Well, it's a small mistake, so probably it didn't have much of an effect on simulation results."
To which the reply is: that is an interesting but unsubstantiated assertion. Prove it. Fix the bug, repeat the simulations, and show that the results produced by the corrected model are qualitatively similar to those reported in the paper.
Who should do it? Ideally, the original authors. Will they? They ought to, but they might not. Does anybody in wet lab experimental neuroscience, who made some methodological error, go back years later and repeat their experiments with proper methodology? Maybe, but (1) have you ever seen a report of such work, (2) how will they get credit for their new effort, and (3) where will they get the $$ needed to properly redo the original experiments?