GPUs sound wonderful indeed. But that bright star is still a long way off. The chief shortcomings of GPUs are:
- lack of support for double precision floating point math
- lack of open source development tools
If the excitement about the potential of GPUs becomes persistent and pervasive enough, maybe these limitations will disappear sooner rather than later.
In the meantime, there is plenty of low-hanging fruit to be plucked. Parallelization of NEURON with MPI has already been in wide use for the past couple of years, and significant new performance improvements continue to appear. For the latest, see the articles by Hines, Eichner, and Schuermann, and Hines, Markram, and Schuermann at
http://www.neuron.yale.edu/neuron/bib/nrnpubs.html. MPI parallelization is suitable for use on supercomputers, workstation clusters, and standalone PCs and Macs with multicore processors.
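To give a flavor of what this looks like in practice, here is a minimal sketch of an MPI-parallel NEURON run driven from Python. The one-compartment model is invented purely for illustration, and it assumes a NEURON build with MPI support, launched with something like mpiexec -n 4 nrniv -mpi -python run.py:

    from neuron import h

    pc = h.ParallelContext()

    # every rank builds its own copy of a trivial cell
    soma = h.Section(name="soma")
    soma.insert("hh")
    stim = h.IClamp(soma(0.5))
    stim.delay, stim.dur, stim.amp = 1, 1, 0.5

    pc.set_maxstep(10)   # longest a rank may integrate before exchanging spikes
    h.finitialize(-65)
    pc.psolve(5)         # all ranks integrate to t = 5 ms

    print("rank %d of %d finished" % (int(pc.id()), int(pc.nhost())))
    pc.barrier()
    pc.done()
    h.quit()

Every rank executes the same script; ParallelContext supplies each rank's id and the total rank count, and psolve keeps the ranks synchronized as they integrate.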
Development of multithreaded NEURON is moving along nicely. It has two chief advantages: speed improvements on multicore machines without having to revise source code, and the ability to use the GUI (which is not possible with parallel execution under MPI). It will require a substantial amount of beta testing, because it involves many changes to NEURON's internals, especially those related to adaptive integration and the event delivery system.
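As a rough sketch of how the threaded version is expected to look from the user's side, assuming thread control is exposed through ParallelContext.nthread() (treat the details as provisional while the code is in beta):

    from neuron import h
    h.load_file("stdrun.hoc")

    # a handful of independent cells, so there is work to distribute
    cells = []
    for i in range(8):
        sec = h.Section(name="soma%d" % i)
        sec.insert("hh")
        cells.append(sec)

    pc = h.ParallelContext()
    pc.nthread(4)      # partition the cells among 4 threads (assumed interface)
    h.finitialize(-65)
    h.continuerun(5)   # unchanged model code, now integrated by multiple threads

The point of the design is exactly what the paragraph above says: the model specification does not change, only the one call that asks for threads.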
I should also mention a third development area in which there has been significant progress: the use of Python as an alternative interpreter. This has the potential to benefit all NEURON users, partly by speeding up the development of new NEURON-specific tools, and partly by making the enormous existing Python libraries of scientific and mathematical software available from NEURON. That saves programmer time, which is even more valuable than computer time.
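Here is a small sketch of the kind of workflow this enables, again with an invented one-compartment model: NEURON does the simulating, and numpy does the analysis.

    import numpy as np
    from neuron import h
    h.load_file("stdrun.hoc")

    soma = h.Section(name="soma")
    soma.insert("hh")
    stim = h.IClamp(soma(0.5))
    stim.delay, stim.dur, stim.amp = 1, 10, 0.3

    # record time and membrane potential while NEURON integrates
    t, v = h.Vector(), h.Vector()
    t.record(h._ref_t)
    v.record(soma(0.5)._ref_v)

    h.finitialize(-65)
    h.continuerun(20)

    # hand the recordings straight to the scientific Python stack
    t, v = np.array(t), np.array(v)
    print("peak Vm = %.1f mV at t = %.2f ms" % (v.max(), t[v.argmax()]))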