configuring neuron on a cluster with a local install of python - and getting it to use that python with nrniv -python

countfizix
Posts: 5
Joined: Wed Feb 08, 2017 6:22 pm

configuring neuron on a cluster with a local install of python - and getting it to use that python with nrniv -python

Post by countfizix »

I have been running into an issue where nrniv -python uses a different version of Python than the one it was (allegedly) configured with.

The configure command I have been using is:

./configure --without-x --prefix=$HOME/nrnmpi3.8 --with-paranrn --with-nrnpython=$HOME/.localpython/bin/python3.8 --with-mpi=$HOME/opt/openmpi/bin/mpicc --with-iv

where Python was installed from a repository into .localpython with no additional options, along with corresponding changes to the PATH, PYTHONPATH, and PYTHONHOME variables and an alias python=$HOME/.localpython/bin/python3.8.

Launching NEURON Python scripts via python does not produce any errors. However, nrniv -python always reverts to the default Python in /usr/bin despite the configure option. That interpreter has extremely limited utility for headnode operations, which in turn means it is unable to use the associated NEURON, MPI, etc. libraries. Normally this would not be a problem, but instances of nrniv are launched programmatically via the NetPyNE batch commands in code we are trying to replicate, and I haven't found a workaround for that.
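One quick way to confirm which interpreter actually runs a given script, under either launcher, is to print it from Python itself. A minimal stdlib-only diagnostic (run it once as python check.py and once as nrniv -python check.py, then compare):

```python
# Minimal diagnostic: report which Python interpreter is executing.
import sys

print("executable:", sys.executable)
print("prefix:", sys.prefix)
print("version:", sys.version.split()[0])
```

If the two runs print different executables, nrniv is not picking up the configured Python.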

I initially went through a previous thread from 2008 with a similar problem (viewtopic.php?t=1216) and tried to adapt the changes from that thread's configure script to match my directory structure:

./configure --without-x --prefix=$HOME/nrnmpi3.8 --with-paranrn --with-nrnpython=$HOME/.localpython/bin/python3.8 --with-mpi=$HOME/opt/openmpi/bin/mpicc --with-iv 'PYLIBDIR=$HOME/.localpython/lib' 'PYINCDIR=$HOME/.localpython/include/python3.8' 'PYVER=python3.8.3' 'PYLIB=-L$HOME/.localpython/lib -lpython3.8'

However, this produces an error with the Python libraries:

checking for python3... python3
Python binary found (/mnt/beegfs/home/cknow1/.localpython/bin/python3.8)
checking nrnpython configuration... get_config_var('LIBS') '-lcrypt -lpthread -ldl -lutil -lm'
checking if python include files and libraries work... configure: error: could not run a test that used the python library.
Examine config.log to see error details. Something wrong with
PYLIB=-L$HOME/.localpython/lib -lpython3.8
or
PYLIBDIR=$HOME/.localpython/lib
or
PYLIBLINK=
or
PYINCDIR=$HOME/.localpython/include/python3.8
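One thing worth checking in the failing command above: the PYLIBDIR/PYINCDIR/PYLIB assignments are wrapped in single quotes. If those quotes were used literally in the shell, they prevent $HOME from being expanded, so configure may receive the literal string $HOME rather than the real path. A hedged sketch of the same invocation with double quotes instead (paths mirror the ones above; the PYVER value is also worth double-checking, since it is typically the major.minor version rather than the patch release):

```shell
# Same configure line, but double-quoted so $HOME expands in the shell.
# Paths match the layout used earlier in this thread; adjust as needed.
./configure --without-x --prefix=$HOME/nrnmpi3.8 --with-paranrn \
    --with-nrnpython=$HOME/.localpython/bin/python3.8 \
    --with-mpi=$HOME/opt/openmpi/bin/mpicc --with-iv \
    "PYLIBDIR=$HOME/.localpython/lib" \
    "PYINCDIR=$HOME/.localpython/include/python3.8" \
    "PYVER=python3.8" \
    "PYLIB=-L$HOME/.localpython/lib -lpython3.8"
```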
ramcdougal
Posts: 267
Joined: Fri Nov 28, 2008 3:38 pm
Location: Yale School of Public Health

Re: configuring neuron on a cluster with a local install of python - and getting it to use that python with nrniv -python

Post by ramcdougal »

NEURON hasn't supported ./configure since the 7.x series over two years ago. Unless there's a specific reason to use an old version, don't.

On a cluster, it's easiest to just do:

Code: Select all

pip3 install --user neuron
In addition to installing the Python libraries, it will add the binaries (e.g., nrniv, nrnivmodl, ...). Pay attention during the install to where it puts them, in case that directory is not already on your PATH (it will be the standard location for user-installed binaries).
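If you missed the install message, the user-install location can be recovered afterwards; a small sketch using the stdlib site module (on Linux/macOS, user-installed scripts land in <user-base>/bin):

```python
# Locate the pip --user install base, where scripts like nrniv
# and nrnivmodl are placed under <user-base>/bin on Linux/macOS.
import os
import site

user_base = site.getuserbase()
print("user base:", user_base)
print("scripts directory:", os.path.join(user_base, "bin"))
```

Add that scripts directory to PATH if the binaries are not found after installing.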

This should work with any standard MPI package. Only if the MPI environment is highly tuned to the hardware would I consider compiling from scratch. If you do need that, recent versions of NEURON use cmake instead of ./configure; see the documentation at https://nrn.readthedocs.io/en/8.2.2/cma ... tions.html

Also, as an aside: as long as the paths are properly configured, there is no advantage to running "nrniv -python" instead of "python", and in the latter case you know for certain which Python is running.
hines
Site Admin
Posts: 1698
Joined: Wed May 18, 2005 3:32 pm

Re: configuring neuron on a cluster with a local install of python - and getting it to use that python with nrniv -python

Post by hines »

Perhaps this is beside the point, since the configuration mentioned nothing about the equivalent of cmake's -DNRN_ENABLE_PYTHON_DYNAMIC=ON.
I mention it because it is usually in that context that multiple Python installations allow the wrong libpython.so to be chosen at runtime.

If runtime dynamic loading is NOT involved, then I expect a proper LD_LIBRARY_PATH environment variable will solve the problem. But first look at
the output of

Code: Select all

ldd `which nrniv`
On my linux machine, the relevant output line is

Code: Select all

libpython3.10.so.1.0 => /home/hines/.pyenv/versions/3.10.4/lib/libpython3.10.so.1.0
Does your line seem correct in your context?
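To know what that ldd line ought to point at, you can ask the interpreter itself where its shared library lives. A small sketch using only the stdlib sysconfig module (compare its output with the libpython line from ldd):

```python
# Report where this Python's libpython should be found, for comparison
# with the libpython line in the output of: ldd `which nrniv`
import os
import sysconfig

libdir = sysconfig.get_config_var("LIBDIR")      # install lib directory
ldlib = sysconfig.get_config_var("LDLIBRARY")    # e.g. libpython3.8.so
shared = sysconfig.get_config_var("Py_ENABLE_SHARED")

print("expected libpython:", os.path.join(libdir or "", ldlib or ""))
print("built with --enable-shared:", bool(shared))
```

If the reported path differs from what ldd shows, LD_LIBRARY_PATH is the first thing to fix.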

If runtime dynamic loading is involved (and I don't see how it can be with your configure line), then, if the proper Python is not first in your PATH, try

Code: Select all

nrniv -python -pyexe /path/to/python
Note that on launch, nrniv executes the nrnpyenv.sh script, which goes through some heuristics to determine NRN_PYLIB and PYTHONHOME.
In a parallel environment, it is probably best to set them in your environment explicitly.
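Setting them explicitly might look like the following sketch; the paths are illustrative only, matching the .localpython layout earlier in the thread, and should be adjusted to the actual install:

```shell
# Illustrative values only; point these at your actual Python build.
export PYTHONHOME=$HOME/.localpython
export NRN_PYLIB=$HOME/.localpython/lib/libpython3.8.so
# Make sure the runtime linker can also find the shared library:
export LD_LIBRARY_PATH=$HOME/.localpython/lib:$LD_LIBRARY_PATH
```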
countfizix
Posts: 5
Joined: Wed Feb 08, 2017 6:22 pm

Re: configuring neuron on a cluster with a local install of python - and getting it to use that python with nrniv -python

Post by countfizix »

We were able to get it working with a pip install of the newest version. It points to the correct Python and shows the expected speedups when invoked under MPI.

The culprit turned out to be that Python had not been built with the shared library (CPython's --enable-shared configure option). I don't know whether the more tedious install of the old version would have worked had that been the case from the start.
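Whether a given build includes the shared library can be checked without recompiling anything; a short stdlib-only check (the --enable-shared flag name is CPython's own configure option):

```python
# Check whether this interpreter was built with --enable-shared,
# which NEURON's ./configure path needs in order to link libpython.
import sysconfig

if sysconfig.get_config_var("Py_ENABLE_SHARED"):
    print("shared libpython available:", sysconfig.get_config_var("INSTSONAME"))
else:
    print("static-only build; rebuild Python with ./configure --enable-shared")
```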