This is a collection of frequently asked questions about octopus. For other questions, please subscribe to the octopus-users mailing list and ask there.
Could not find library...
This is probably the most common error you can get. octopus uses several different libraries, the most important of which are gsl, fftw, and blas/lapack. We assume that you have already installed these libraries but, for some reason, you were not able to compile the code. So, what went wrong?
- Did you pass the correct --with-XXXX (where XXXX is gsl, fftw or lapack in lowercase) to the configure script? If your libraries are installed in a non-standard directory (like /opt/lapack), you will have to pass the script the location of the library (in this example, you could try ./configure --with-lapack='-L/opt/lapack -llapack').
- If you are working on an alpha station, do not forget that the CXML library includes BLAS and LAPACK, so it can be used by octopus. If needed, just set the correct path with --with-lapack.
- If the configuration script cannot find FFTW, it is probable that you did not compile FFTW with the same Fortran compiler or with the same compiler options. The basic problem is that Fortran sometimes converts the function names to uppercase, at other times to lowercase, and it can add an "_" to them, or even two. Obviously all libraries and the program have to use the same convention, so the best is to compile everything with the same Fortran compiler/options. If you are a power user, you can check the convention used by your compiler using the command nm <library>.
- Unfortunately, libraries compiled with one Fortran compiler are very often not compatible with files compiled with another Fortran compiler. In order to avoid problems, please make sure that all libraries are compiled using the same Fortran compiler.
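For example, you can inspect the name-mangling convention of an already-compiled library like this (the library path here is hypothetical; use the path reported by your own installation):

```shell
# dgemm may show up as dgemm, DGEMM, dgemm_ or dgemm__ depending on
# the Fortran compiler that built the library.
nm /opt/lapack/liblapack.a | grep -i dgemm
```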
To compile the parallel version of the code, you will also need MPI (mpich or LAM work just fine).
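Putting this together, a configure line for libraries in non-standard locations might look like the following sketch. The paths are hypothetical, and the point is to use the same Fortran compiler for everything:

```shell
# Hypothetical library locations; adjust them to your system, and make
# sure FC is the same compiler that built FFTW, BLAS and LAPACK.
export FC=ifort
./configure \
  --with-fftw='-L/opt/fftw/lib -lfftw' \
  --with-lapack='-L/opt/lapack -llapack'
```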
Error while loading shared libraries
Sometimes, when you run octopus, you stumble upon the following error:
octopus: error while loading shared libraries: libXXX.so: cannot open shared object file: No such file or directory
This is a classical problem of the dynamical linker. Octopus was compiled dynamically, but the dynamical libraries (the .so files) are in a place that is not recognized by the dynamical loader. So, the solution is to tell the dynamical linker where the library is.
The first thing you should do is to find where the library is located on your system. Try running locate libXXX.so, for example. Let us imagine that the library is in the directory /opt/intel/compiler70/ia32/lib/ (this is where the ifc7 libraries are by default). Now, if you do not have root access to the machine, just type (using the bash shell)
> export LD_LIBRARY_PATH=/opt/intel/compiler70/ia32/lib/:$LD_LIBRARY_PATH
> octopus

or, equivalently, as a single command:

> LD_LIBRARY_PATH=/opt/intel/compiler70/ia32/lib/:$LD_LIBRARY_PATH octopus
If you have root control over the machine, you can use a more permanent alternative. Just add a line containing /opt/intel/compiler70/ia32/lib/ to the file /etc/ld.so.conf. This file tells the dynamic linker where to find the .so libraries. Then you have to update the cache of the linker by typing

> ldconfig
A third solution is to compile octopus statically. This is quite simple on some systems (just add -static to LDFLAGS, or something like that), but not on others (with my home ifc7 setup it's a real mess!)
What is METIS?
When running parallel in "domains", octopus divides the simulation region (the box) into separate regions (domains) and assigns each of these to a different processor. This lets you not only speed up the calculation, but also divide the memory among the different processors. The first step of this process, the splitting of the box, is in general a very complicated problem. Note that we are talking about a simulation box of an almost arbitrary shape, and of an arbitrary number of processors. Furthermore, as the communication between processors grows proportionally to the surface of the domains, one should use an algorithm that divides the box such that each domain has the same number of points, and at the same time minimizes the total area of the domains. The METIS library does just that. If you want to run octopus parallel in domains you are required to use it (or the similar Zoltan library). Currently METIS and Zoltan are included in the Octopus distribution and are compiled by default when MPI is enabled.
Sometimes I get a segmentation fault when running Octopus
The most typical cause of segmentation faults is a limited stack size. Some compilers, especially the Intel one, use the stack to create temporary arrays, and when running large calculations the default stack size might not be enough. The solution is to remove the limit on the size of the stack by running the command
ulimit -s unlimited
Segmentation faults can also be caused by other problems, like an incorrect compilation (linking with libraries compiled with a different Fortran compiler, for example) or a bug in the code.
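If you want the larger stack limit to persist across login sessions, the usual place on Linux is /etc/security/limits.conf. This is a sketch assuming your distribution uses pam_limits (most do); the exact file location can vary:

```
# /etc/security/limits.conf -- raise the stack limit for all users
*    soft    stack    unlimited
*    hard    stack    unlimited
```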
How do I run parallel in domains?
First of all, you must have a version of octopus compiled with support for the METIS library. This is a very useful library that takes care of the division of the space into domains. Then you just have to run octopus in parallel (this step depends on your actual system; you may have to use mpirun or mpiexec to accomplish it).
In some run modes (e.g., td), you can use multi-level parallelization, i.e., run in parallel in more than one way at the same time. In the td case, you can run parallel in states and in domains at the same time. In order to fine-tune this behavior, please take a look at the variables ParallelizationStrategy and ParallelizationGroupRanks. In order to check if everything is OK, take a look at the output of octopus in section "Parallelization". This is an example:
************************** Parallelization ***************************
Octopus will run in *parallel*
Info: Number of nodes in par_states group:     8 (   62)
Info: Octopus will waste at least  9.68% of computer time
**********************************************************************
In this case, octopus runs in parallel only in states, using 8 processors (for 62 states). Furthermore, some of the processors will be idle for 9.68% of the time (this is not so great, so maybe a different number of processors would be better in this case).
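As a sketch, multi-level parallelization for a td run can be requested in the input file along these lines. The exact syntax and allowed values of these variables have changed between versions, so check the variable reference for your version before copying this:

```
# Hypothetical input fragment: parallelize in states and in domains
# at the same time during a time-dependent run.
ParallelizationStrategy = par_states + par_domains
```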
How do I center the molecules?
How do I visualize 3D stuff?
Our preferred visualization tool is openDX. This is perhaps the most powerful 3D visualization tool for scientific data, and is highly versatile and sophisticated. However, this does not come for free: openDX is notoriously difficult to learn and to use. Anyway, in our opinion, its advantages clearly compensate for this problem.
The good news is that we have done most of the dirty work for you! We have developed a small dx application that takes care of most of the details. If you want to try it, start by installing openDX. This is simpler on some machines than on others. For example, on my Fedora Core 6, I simply have to type
yum install dx dx-devel dx-samples
Now generate some files for visualization. These can be either in .dx or .ncdf format. Next, copy the file [prefix]/octopus/share/util/mf.cfg to your working directory and start openDX from there.
Then in the dx menus, choose
Windows>Open Control Panel by Name>Main. You will see a dialog box with some options. Play with it!
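For reference, a minimal input fragment that asks Octopus to write the density in a format openDX can read might look like this. The variable names below are an assumption and have changed between Octopus versions, so treat this as a sketch and check the manual:

```
# Hypothetical input fragment: output the density in .dx format.
Output = density
OutputHow = dx
```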
Also see Manual:External_utilities:Visualization.
Out of memory?
Q: From time to time, I obtain the following error when I perform some huge memory-demanding calculations:
**************************** FATAL ERROR *****************************
*** Fatal Error (description follows)
*--------------------------------------------------------------------
* Failed to allocate 185115 Kb in file 'gs.F90' line 62
*--------------------------------------------------------------------
* Stack:
**********************************************************************
Could it be related to the fact that the calculation demands more memory than available in the computer?
A: Octopus doesn't allocate memory in one big piece, but allocates small chunks as needed. So when the calculation demands more memory than is available, an allocation can fail even when it requests an innocent amount of memory. So yes, if you see this error, it is likely that you are running out of memory.
If you compiled on a 32-bit machine, you will be limited to a little more than 2 GB.
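The 2 GB figure is essentially the 32-bit address-space limit: a signed 32-bit offset can address at most 2^31 bytes in a single allocation, which you can check with a quick bit of shell arithmetic:

```shell
# 2^31 bytes is the classic 2 GiB barrier for 32-bit binaries.
echo $((1 << 31))   # prints 2147483648
```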
How do the version numbers in Octopus work?
Each stable release is identified by three numbers (for example x.y.z). The first two numbers indicate a particular version of Octopus, and the third number indicates the revision. An increase in the revision number indicates that this release contains bug fixes and minor changes over the previous version, but that no new features were added (it may happen that we disable some features that are found to contain serious bugs).
Development versions (that you can get from the svn) do not have a version number, they only have a code name. For code names we use the scientific names of Octopus species. These are the code names that we have used so far (this scheme was started after the release of version 3.2):
- Octopus superciliosus (frilled pygmy octopus): current development version
- Octopus nocturnus: 4.1.x
- Octopus vulgaris (common octopus): 4.0.x
How do I cite octopus?
Octopus is a free program, so you have the right to use it, change it, distribute it, and to publish papers with it without citing anyone (as long as you follow the GPL license). However, the developers of Octopus are also scientists that need citations to bump their CVs. Therefore, we would be very happy if you could cite one or more papers concerning Octopus in your work. The main references are
- X. Andrade, J. Alberdi-Rodriguez, D. A. Strubbe, M. J. T. Oliveira, F. Nogueira, A. Castro, J. Muguerza, A. Arruabarrena, S. G. Louie, A. Aspuru-Guzik, A. Rubio, and M. A. L. Marques, Time-dependent density-functional theory in massively parallel computer architectures: the octopus project, J. Phys.: Cond. Matt. 24 233202 (2012)
- A. Castro, H. Appel, Micael Oliveira, C.A. Rozzi, X. Andrade, F. Lorenzen, M.A.L. Marques, E.K.U. Gross, and A. Rubio, octopus: a tool for the application of time-dependent density functional theory, Phys. Stat. Sol. B 243 2465-2488 (2006)
- M.A.L. Marques, Alberto Castro, George F. Bertsch, and Angel Rubio, octopus: a first-principles tool for excited electron-ion dynamics, Comput. Phys. Commun. 151 60-78 (2003)
There is also a paper describing the propagation methods used in Octopus:
- A. Castro, M.A.L. Marques, and A. Rubio, Propagators for the time-dependent Kohn-Sham equations, J. Chem. Phys. 121 3425-3433 (2004),
a paper on the linear-response implementation:
- Xavier Andrade, Silvana Botti, Miguel Marques and Angel Rubio, Time-dependent density functional theory scheme for efficient calculations of dynamic (hyper)polarizabilities, J. Chem. Phys. 126 184106 (2007)
and a paper about Libxc, the library used by octopus for the exchange-correlation functionals,
- Miguel A. L. Marques, Micael J. T. Oliveira, and Tobias Burnus, Libxc: a library of exchange and correlation functionals for density functional theory, Comput. Phys. Commun., DOI:10.1016/j.cpc.2012.05.007 (2012), arXiv:1203.1739
Finally, some general references on TDDFT, written by some of us:
- Fundamentals of time-dependent density functional theory, M.A.L. Marques, N.T. Maitra, F. Nogueira, E.K.U. Gross, and A. Rubio (Eds.), Lecture Notes in Physics, Vol. 837, Springer, Berlin, (2012), ISBN: 978-3-642-23518-4
- Time-dependent density functional theory, M.A.L. Marques, C. Ullrich, F. Nogueira, A. Rubio, K. Burke, and E.K.U. Gross (Eds.), Lecture Notes in Physics, Vol. 706, Springer, Berlin, (2006), ISBN: 978-3-540-35422-2
- Alberto Castro, M.A.L. Marques, Julio A. Alonso, and Angel Rubio, Optical properties of nanostructures from time-dependent density functional theory, J. Comp. Theoret. Nanoscience 1 231-255 (2004)
- M.A.L. Marques and E.K.U. Gross, Time-dependent density functional theory, Annu. Rev. Phys. Chem. 55 427-455 (2004)
You can find a more extensive list of publications here.