[Octopus-users] Any tips for big systems (protein)?
cheng at simulatio.jp
Tue Jul 15 03:53:53 WEST 2008
Dear Nicola and Xavier,
Thank you very much for your reply. Now the calculation can get past the
Hartree potential step. What I am very curious about is that the memory
usage seems to be very large; the following is the error output from PBS:
* O 891.972994 0.000367 4308616 | ..|projector.projector_end
* I 891.973335 1216089039.856874 4308616 |
* O 892.130911 0.157592 4338120 |
* I 892.131247 1216089040.014791 4338120 | ..|projector.projector_init
* O 892.131502 0.000251 4338120 | ..|projector.projector_init
* I 892.131730 1216089040.015268 4338120 |
* O 1011.367826 119.254781 4367624 |
* I 1011.619627 1216089159.503184 4367624 | ..|projector.projector_end
* I 1011.619725 1216089159.503260 4367624 | ..|..|submesh.submesh_end
* O 1011.619778 0.000054 4367624 | ..|..|submesh.submesh_end
* O 1011.619825 0.000181 4367624 | ..|projector.projector_end
* I 1011.619867 1216089159.503402 4367624 | ..|submesh.submesh_copy
/var/spool/pbs/mom_priv/jobs/4667.cudran.SC: line 14: 5582 Killed    octopus_mpi >gs.out
I found that the memory usage depends on the BoxShape and Radius. The
radius of 16.0 is the minimum I can use: if the radius is set smaller than
16, there are warning messages that some atoms are outside the radius.
I am testing other box shapes to see what happens.
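
For reference, this is roughly the part of the input I am changing. The
"minimum" box and the values shown are only my illustration (variable
names as in the Octopus manual; the units depend on the Units settings of
the input, so please check the documentation for your version):

  # Single sphere enclosing the whole molecule: the radius must be large
  # enough that no atom falls outside it, which drives the memory usage.
  BoxShape = sphere
  Radius = 16.0

  # Alternative worth testing: a box built as the union of spheres around
  # each atom, which usually allows a much smaller per-atom radius.
  # BoxShape = minimum
  # Radius = 4.0
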
Nicola Spallanzani wrote:
> Only now I notice that you have set a radius that is much too large.
> I suggest you find the largest volume that allows you to run the gs,
> then find the smallest volume that reaches convergence.
> I was able to run the gs with parallelization both in states and in
> domains, but to run the td only with parallelization in domains.
> Do you know why for you it is the opposite?
> Xavier Andrade wrote:
>> Actually, the input file that Cheng mentions has a lot of input
>> variables that you should not normally use, as the defaults are fine.
>> In particular, as Nicola said, do not use parallelization in states
>> (for the moment it only works for the time propagation) and leave the
>> default Poisson solver (isf). With the default parameters I have been
>> able to make test runs (with a non-converged spacing) of up to one
>> thousand atoms.
>> What could be problematic is the SCF convergence. First of all, add a
>> few extra states; if your SCF iteration is still not converging, you
>> may want to set a finite electronic temperature. [See the input
>> sketches after this quoted thread.]
>> On Mon, 14 Jul 2008, Nicola Spallanzani wrote:
>>> Hi Cheng,
>>> I was able to do a gs/td calculation of a big molecule of 207 atoms.
>>> I suggest you use the "isf" PoissonSolver and parallelize only in
>>> domains.
>>> This worked for me... I hope the same works for you!
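
Following Xavier's suggestions, the relevant input lines would look
roughly like this (the parallelization variable name is my assumption
based on the Octopus documentation of this era; check the manual for your
version):

  # Keep the default Poisson solver.
  PoissonSolver = isf
  # Parallelize over real-space domains only; for the moment state
  # parallelization only works for the time propagation.
  ParallelizationStrategy = par_domains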
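
And for the SCF convergence advice, something along these lines (the
smearing variable names differ between Octopus versions, so treat them as
an assumption and consult your version's manual; the values are
placeholders):

  # A few empty states often help the SCF cycle converge.
  ExtraStates = 10
  # If the SCF still does not converge, add a finite electronic
  # temperature by smearing the occupations.
  SmearingFunction = fermi_dirac
  Smearing = 0.01    # smearing width: a small value, in the input units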