Installing NEURON on ROCKS cluster

Posted: Tue May 17, 2016 1:10 pm
by pascal
I recently installed NEURON 7.3 on our small ROCKS cluster (running CentOS). I can run simulations just fine on the head node, but I cannot get NEURON to work on any other nodes. I get the following error message: /export/apps/neuron-73/x86_64/bin/nrniv: No such file or directory

I believe the issue is that with ROCKS, installing a program in the /export directory on the head node makes it available in a directory called /share on all other nodes. So it seems that the other nodes look for nrniv in the wrong directory. Will this problem be fixed by "cross compiling," as described in http://www.neuron.yale.edu/phpBB/viewto ... 71&start=0 and viewtopic.php?f=6&t=2781?
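As a quick sanity check (a sketch, using the path from the error message above), you can test on each node whether nrniv actually exists at the path being searched:

```shell
#!/bin/sh
# Check whether nrniv is present and executable at the path from the
# error message; run this on the head node and again on a compute node.
NRNIV=/export/apps/neuron-73/x86_64/bin/nrniv
if [ -x "$NRNIV" ]; then
    echo "nrniv found at $NRNIV"
else
    echo "nrniv not found at $NRNIV"
fi
```

If the /export-to-/share mapping is indeed the culprit, the same check with /share/apps/neuron-73/x86_64/bin/nrniv should succeed on the compute nodes.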

Re: Installing NEURON on ROCKS cluster

Posted: Wed May 18, 2016 2:32 pm
by pascal
I figured out a simple solution. I just deleted the installation from the /export/apps directory and re-installed in the /share/apps directory. Everything works fine now.

Re: Installing NEURON on ROCKS cluster

Posted: Sun May 22, 2016 12:50 pm
by pascal
It turns out I spoke too soon. I *thought* everything was working fine because my simulations seemed to start all right, but then they crashed with the error message

./special: line 13: 18039 Killed "${NRNIV}" -dll "/share_home/cgfink/Documents/hfo/base_code/mod/x86_64/.libs/libnrnmech.so" "$@"

Based on a previous post (http://www.neuron.yale.edu/phpBB/viewto ... 71&start=0), I'm guessing the issue has something to do with different configurations for the compute nodes versus the head node (since my simulations run fine on the head node, but not on the compute nodes). Right now I am trying to compile the mod files on the head node, then run the simulations on the compute nodes.

It seems the following block of installation code (from viewtopic.php?f=6&t=2781) is key, but I did not use it on installation because I did not understand it:

Code: Select all

#!/bin/sh
../nrn/configure --prefix=`pwd` --with-nmodl-only --without-x
make
make install

../nrn/configure --prefix=`pwd` '--without-nmodl' '--without-x' \
'--without-memacs' '--with-paranrn' 'CC=mpicc' 'CXX=mpicxx' \
'--disable-shared' 'CFLAGS=-g -O0' 'CXXFLAGS=-g -O0'  linux_nrnmech=no
make
make install
First of all, I'm assuming the two blocks of code apply to 1) the head node and 2) the compute nodes? But if so, I don't see where or how it is specified which block applies to which.

Second, what does the '--disable-shared' flag do? And third, what does 'linux_nrnmech=no' do? Is either of these flags relevant to the simulation error I'm getting?

Thanks in advance for the help!

Re: Installing NEURON on ROCKS cluster

Posted: Mon May 23, 2016 1:26 pm
by hines
I don't know anything about your machine, and it will take several rounds of experiments to figure things out. So let's take this to email. Send the answers to the following to michael dot hines at yale dot edu.

How do you launch an MPI program on that machine?
Are the compute nodes using the same operating system as your login node?
From the login node, can you log in to a compute node and use it just like a login node?

What happens if you type
mpiexec -n 4 echo 'hello'

Re: Installing NEURON on ROCKS cluster

Posted: Wed May 25, 2016 11:14 am
by pascal
Thanks for all the help, Michael. Just so everyone else can benefit, the solution was to issue the following installation commands in the /share/apps directory (installing in this directory makes the program available to all compute nodes in Rocks):

Code: Select all

./configure --prefix=/share/apps/neuron-73 --with-nmodl-only --without-iv 
make
make install
./configure --prefix=/share/apps/neuron-73 --with-paranrn --without-nmodl --without-x --without-iv --disable-shared linux_nrnmech=no
make
make install
(Note that using --prefix='pwd' did not work for me, so I had to use the absolute path on my computer.)
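One possible explanation for the prefix problem (my guess, not confirmed in the thread): `pwd` only expands to the current directory when it is command-substituted; quoted literally, the shell passes through the three-character string pwd. A minimal sketch:

```shell
#!/bin/sh
# Command substitution yields the absolute path of the current directory;
# a quoted literal 'pwd' would be passed to configure unexpanded.
prefix=$(pwd)
echo "$prefix"   # always an absolute path starting with /
# ./configure --prefix="$prefix" ...   (sketch; not run here)
```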

The reason for the two separate configure/make passes is that a single-pass installation could not find the MPI library. The first pass builds only the nmodl translator (which does not need MPI), and the second pass then builds the rest of NEURON with MPI support enabled via --with-paranrn, which avoids the problem.