Merge pull request #1 from lammps/master

update the forked repository
oywg11
2020-12-10 03:55:35 +02:00
committed by GitHub
11543 changed files with 1537071 additions and 881174 deletions

3
.gitattributes vendored Normal file

@@ -0,0 +1,3 @@
.gitattributes export-ignore
.gitignore export-ignore
.github export-ignore

25
.github/CODEOWNERS vendored

@@ -10,6 +10,7 @@ lib/molfile/* @akohlmey
lib/qmmm/* @akohlmey
lib/vtk/* @rbberger
lib/kim/* @ellio167
lib/mesont/* @iafoss
# whole packages
src/COMPRESS/* @akohlmey
@@ -22,14 +23,16 @@ src/SPIN/* @julient31
src/USER-CGDNA/* @ohenrich
src/USER-CGSDK/* @akohlmey
src/USER-COLVARS/* @giacomofiorin
src/USER-DPD/* @timattox
src/USER-INTEL/* @wmbrownintel
src/USER-MANIFOLD/* @Pakketeretet2
src/USER-MEAMC/* @martok
src/USER-MESONT/* @iafoss
src/USER-MOFFF/* @hheenen
src/USER-MOLFILE/* @akohlmey
src/USER-NETCDF/* @pastewka
src/USER-PLUMED/* @gtribello
src/USER-PHONON/* @lingtikong
src/USER-PTM/* @pmla
src/USER-OMP/* @akohlmey
src/USER-QMMM/* @akohlmey
src/USER-REAXC/* @hasanmetin
@@ -44,7 +47,7 @@ src/GPU/pair_vashishta_gpu.* @andeplane
src/KOKKOS/pair_vashishta_kokkos.* @andeplane
src/MANYBODY/pair_vashishta_table.* @andeplane
src/MANYBODY/pair_atm.* @sergeylishchuk
src/USER-MISC/fix_bond_react.* @jrgissing
src/USER-REACTION/fix_bond_react.* @jrgissing
src/USER-MISC/*_grem.* @dstelter92
src/USER-MISC/compute_stress_mop*.* @RomainVermorel
@@ -109,18 +112,36 @@ src/exceptions.h @rbberger
src/fix_nh.* @athomps
src/info.* @akohlmey @rbberger
src/timer.* @akohlmey
src/min* @sjplimp @stanmoore1
src/utils.* @akohlmey @rbberger
src/math_eigen_impl.h @jewettaij
# tools
tools/msi2lmp/* @akohlmey
tools/emacs/* @HaoZeke
tools/singularity/* @akohlmey @rbberger
tools/code_standard/* @rbberger
tools/valgrind/* @akohlmey
# tests
unittest/* @akohlmey @rbberger
# cmake
cmake/* @junghans @rbberger
cmake/Modules/Packages/USER-COLVARS.cmake @junghans @rbberger @giacomofiorin
cmake/Modules/Packages/KIM.cmake @junghans @rbberger @ellio167
cmake/presets/*.cmake @junghans @rbberger @akohlmey
# python
python/* @rbberger
# fortran
fortran/* @akohlmey
# docs
doc/utils/*/* @rbberger
doc/Makefile @rbberger
doc/README @rbberger
# for releases
src/version.h @sjplimp

67
.github/CODE_OF_CONDUCT.md vendored Normal file

@@ -0,0 +1,67 @@
# Code of Conduct for the LAMMPS Project on GitHub
## Our Pledge
In the interest of fostering an open and welcoming environment, we as LAMMPS
developers, contributors, and maintainers pledge to make participation in
our project a harassment-free experience for everyone.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of explicit language or imagery
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, issues, and other contributions that are not
aligned to this Code of Conduct, or to ban temporarily or permanently any
developer, maintainer, or contributor for this or other behaviors that they
deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies to all public exchanges in the LAMMPS project
on GitHub and in submitted code.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at developer@lammps.org. All
complaints will be reviewed and investigated and will result in a response
that is deemed necessary and appropriate to the circumstances. The project
team is obligated to maintain confidentiality with regard to the reporter
of an incident.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq


@@ -5,8 +5,8 @@ Thank you for considering contributing to the LAMMPS software project.
The following is a set of guidelines as well as explanations of policies and workflows for contributing to the LAMMPS molecular dynamics software project. These guidelines focus on submitting issues or pull requests on the LAMMPS GitHub project.
Thus please also have a look at:
* [The Section on submitting new features for inclusion in LAMMPS of the Manual](http://lammps.sandia.gov/doc/Section_modify.html#mod-15)
* [The LAMMPS GitHub Tutorial in the Manual](http://lammps.sandia.gov/doc/tutorial_github.html)
* [The Section on submitting new features for inclusion in LAMMPS of the Manual](https://lammps.sandia.gov/doc/Modify_contribute.html)
* [The LAMMPS GitHub Tutorial in the Manual](http://lammps.sandia.gov/doc/Howto_github.html)
## Table of Contents
@@ -26,17 +26,17 @@ __
## I don't want to read this whole thing I just have a question!
> **Note:** Please do not file an issue to ask a general question about LAMMPS, its features, how to use specific commands, or how to perform simulations or analysis in LAMMPS. Instead post your question to the ['lammps-users' mailing list](http://lammps.sandia.gov/mail.html). You do not need to be subscribed to post to the list (but a mailing list subscription avoids having your post delayed until it is approved by a mailing list moderator). Most posts to the mailing list receive a response in less than 24 hours. Before posting to the mailing list, please read the [mailing list guidelines](http://lammps.sandia.gov/guidelines.html). Following those guidelines will greatly improve your chances of a helpful response. Always mention which LAMMPS version you are using.
> **Note:** Please do not file an issue to ask a general question about LAMMPS, its features, how to use specific commands, or how to perform simulations or analysis in LAMMPS. Instead post your question to the ['lammps-users' mailing list](https://lammps.sandia.gov/mail.html). You do not need to be subscribed to post to the list (but a mailing list subscription avoids having your post delayed until it is approved by a mailing list moderator). Most posts to the mailing list receive a response in less than 24 hours. Before posting to the mailing list, please read the [mailing list guidelines](https://lammps.sandia.gov/guidelines.html). Following those guidelines will greatly improve your chances of a helpful response. Always mention which LAMMPS version you are using.
## How Can I Contribute?
There are several ways you can actively contribute to the LAMMPS project: you can discuss compiling and using LAMMPS, and solving LAMMPS related problems, with other LAMMPS users on the lammps-users mailing list; you can report bugs or suggest enhancements by creating issues on GitHub (or posting them to the lammps-users mailing list); and you can contribute by submitting pull requests on GitHub or by e-mailing your code
to one of the [LAMMPS core developers](http://lammps.sandia.gov/authors.html). As you may see from the aforementioned developer page, the LAMMPS software package includes the efforts of a very large number of contributors beyond the principal authors and maintainers.
to one of the [LAMMPS core developers](https://lammps.sandia.gov/authors.html). As you may see from the aforementioned developer page, the LAMMPS software package includes the efforts of a very large number of contributors beyond the principal authors and maintainers.
### Discussing How To Use LAMMPS
The LAMMPS mailing list is hosted at SourceForge. The mailing list began in 2005 and now includes tens of thousands of messages in thousands of threads. LAMMPS developers try to respond to posted questions in a timely manner, but there are no guarantees. Please consider that people live in different timezones and may not have time to answer e-mails outside of their work hours.
You can post to the list by sending your email to lammps-users at lists.sourceforge.net (no subscription required), but before posting, please read the [mailing list guidelines](http://lammps.sandia.gov/guidelines.html) to maximize your chances of receiving a helpful response.
You can post to the list by sending your email to lammps-users at lists.sourceforge.net (no subscription required), but before posting, please read the [mailing list guidelines](https://lammps.sandia.gov/guidelines.html) to maximize your chances of receiving a helpful response.
Anyone can browse/search previous questions/answers in the archives. You do not have to subscribe to the list to post questions, receive answers (to your questions), or browse/search the archives. You **do** need to subscribe to the list if you want emails for **all** the posts (as individual messages or in digest form), or to answer questions yourself. Feel free to sign up and help us out! Answering questions from fellow LAMMPS users is a great way to pay back the community for providing you a useful tool for free, and to pass on the advice you have received yourself to others. It improves your karma and helps you understand your own research better.
@@ -44,7 +44,7 @@ If you post a message and you are a subscriber, your message will appear immediately
### Reporting Bugs
While developers writing code for LAMMPS are careful to test their code, LAMMPS is such a large and complex software package that it is impossible to test for all combinations of features under all normal and not-so-normal circumstances. Thus bugs do happen, and if you suspect that you have encountered one, please try to document it and report it as an [Issue](https://github.com/lammps/lammps/issues) on the LAMMPS GitHub project web page. However, before reporting a bug, you need to check whether this is something that may have already been corrected. The [Latest Features and Bug Fixes in LAMMPS](http://lammps.sandia.gov/bug.html) web page lists all significant changes to LAMMPS over the years. It also tells you what the current latest development version of LAMMPS is, and you should test whether your issue still applies to that version.
While developers writing code for LAMMPS are careful to test their code, LAMMPS is such a large and complex software package that it is impossible to test for all combinations of features under all normal and not-so-normal circumstances. Thus bugs do happen, and if you suspect that you have encountered one, please try to document it and report it as an [Issue](https://github.com/lammps/lammps/issues) on the LAMMPS GitHub project web page. However, before reporting a bug, you need to check whether this is something that may have already been corrected. The [Latest Features and Bug Fixes in LAMMPS](https://lammps.sandia.gov/bug.html) web page lists all significant changes to LAMMPS over the years. It also tells you what the current latest development version of LAMMPS is, and you should test whether your issue still applies to that version.
When you click on the green "New Issue" button, you will be provided with a text field where you can enter your message. That text field will contain a template with several headlines and some descriptions. Keep the headlines that are relevant to your reported potential bug and replace the descriptions with the information they ask for.
You can also attach small text files (please add the file name extension `.txt` or it will be rejected), images, or small compressed text files (using gzip; do not use RAR, 7-ZIP, or similar tools that are uncommon outside of Windows machines). In many cases, bugs are best illustrated by providing a small input deck (do **not** attach your entire production input, but remove everything that is not required to reproduce the issue, and scale down your system size so that the resulting calculation runs fast and can be completed quickly on a small desktop).
@@ -62,19 +62,23 @@ To be able to submit an issue on GitHub, you have to register for an account (fo
We encourage users to submit new features or modifications for LAMMPS to the core developers so they can be added to the LAMMPS distribution. The preferred way to manage and coordinate this is by submitting a pull request at the LAMMPS project on GitHub. For any larger modifications or programming project, you are encouraged to contact the LAMMPS developers ahead of time in order to discuss implementation strategies and coding guidelines that will make it easier to integrate your contribution and result in less work for everybody involved. You are also encouraged to search through the list of open issues on GitHub and submit a new issue for a planned feature, so you would not duplicate the work of others (and possibly get scooped by them) or have your work duplicated by others.
How quickly your contribution will be integrated depends largely on how much effort it takes to integrate and test it, how many changes to the core code base it requires, and how much interest it holds for the larger LAMMPS community. Please see below for a checklist of typical requirements. Once you have prepared everything, see [this tutorial](http://lammps.sandia.gov/doc/tutorial_github.html)
How quickly your contribution will be integrated depends largely on how much effort it takes to integrate and test it, how many changes to the core code base it requires, and how much interest it holds for the larger LAMMPS community. Please see below for a checklist of typical requirements. Once you have prepared everything, see [this tutorial](https://lammps.sandia.gov/doc/Howto_github.html)
for instructions on how to submit your changes or new files through a GitHub pull request.
Here is a checklist of steps you need to follow to submit a single file or user package for our consideration. Following these steps will save both you and us time. See existing files in packages in the source directory for examples. If you are uncertain, please ask on the lammps-users mailing list.
* C++ source code must be compatible with the C++-11 standard. Packages may require a later standard, if justified.
* All source files you provide must compile with the most current version of LAMMPS with multiple configurations. In particular you need to test compiling LAMMPS from scratch with `-DLAMMPS_BIGBIG` set in addition to the default `-DLAMMPS_SMALLBIG` setting. Your code will need to work correctly in serial and in parallel using MPI.
* For consistency with the rest of LAMMPS and especially, if you want your contribution(s) to be added to main LAMMPS code or one of its standard packages, it needs to be written in a style compatible with other LAMMPS source files. This means: 2-character indentation per level, no tabs, no lines over 80 characters. I/O is done via the C-style stdio library, class header files should not import any system headers outside <stdio.h>, STL containers should be avoided in headers, and forward declarations used where possible or needed. All added code should be placed into the LAMMPS_NS namespace or a sub-namespace; global or static variables should be avoided, as they conflict with the modular nature of LAMMPS and the C++ class structure. Header files must not import namespaces with using. This all is so the developers can more easily understand, integrate, and maintain your contribution and reduce conflicts with other parts of LAMMPS. This basically means that the code accesses data structures, performs its operations, and is formatted similar to other LAMMPS source files, including the use of the error class for error and warning messages.
* For consistency with the rest of LAMMPS, and especially if you want your contribution(s) to be added to the main LAMMPS code or one of its standard packages, it needs to be written in a style compatible with other LAMMPS source files. This means: 2-character indentation per level, no tabs, no trailing whitespace, no lines over 80 characters. I/O is done via the C-style stdio library, style class header files should not import any system headers, STL containers should be avoided in headers, and forward declarations should be used where possible or needed. All added code should be placed into the LAMMPS_NS namespace or a sub-namespace; global or static variables should be avoided, as they conflict with the modular nature of LAMMPS and the C++ class structure. There MUST NOT be any "using namespace XXX;" statements in headers. In the implementation file (`<name>.cpp`), system includes should be placed in angle brackets (`<>`), and for C library functions the C++ style header files should be included (`<cstdio>` instead of `<stdio.h>`, or `<cstring>` instead of `<string.h>`). All of this is so the developers can more easily understand, integrate, and maintain your contribution and reduce conflicts with other parts of LAMMPS. In practice this means that the code accesses data structures, performs its operations, and is formatted similarly to other LAMMPS source files, including the use of the error class for error and warning messages.
* Source, style name, and documentation files should follow this naming convention: style names should be lowercase, with words separated by a forward slash; for a new fix style 'foo/bar', the class should be named FixFooBar, the source files should be 'fix_foo_bar.h' and 'fix_foo_bar.cpp', and the corresponding documentation should be in a file 'fix_foo_bar.rst' (a source sketch following these conventions is shown after this checklist).
* If you want your contribution to be added as a user-contributed feature, and it is a single file (actually a `<name>.cpp` and `<name>.h` file) it can be rapidly added to the USER-MISC directory. Include the one-line entry to add to the USER-MISC/README file in that directory, along with the 2 source files. You can do this multiple times if you wish to contribute several individual features.
* If you want your contribution to be added as a user-contribution and it is several related features, it is probably best to make it a user package directory with a name like USER-FOO. In addition to your new files, the directory should contain a README text file. The README should contain your name and contact information and a brief description of what your new package does. If your files depend on other LAMMPS style files also being installed (e.g. because your file is a derived class from the other LAMMPS class), then an Install.sh file is also needed to check for those dependencies. See other README and Install.sh files in other USER directories as examples. Send us a tarball of this USER-FOO directory.
* Your new source files need to have the LAMMPS copyright, GPL notice, and your name and email address at the top, like other user-contributed LAMMPS source files. They need to create a class that is inside the LAMMPS namespace. If the file is for one of the USER packages, including USER-MISC, then we are not as picky about the coding style (see above). I.e. the files do not need to be in the same stylistic format and syntax as other LAMMPS files, though that would be nice for developers as well as users who try to read your code.
* You **must** also create or extend a documentation file for each new command or style you are adding to LAMMPS. For simplicity and convenience, the documentation of groups of closely related commands or styles may be combined into a single file. This will be one file for a single-file feature. For a package, it might be several files. These are simple text files with a specific markup language, that are then auto-converted to HTML and PDF. The tools for this conversion are included in the source distribution, and the translation can be as simple as doing "make html pdf" in the doc folder. Thus the documentation source files must be in the same format and style as other `<name>.txt` files in the lammps/doc/src directory for similar commands and styles; use one or more of them as a starting point. A description of the markup can also be found in `lammps/doc/utils/txt2html/README.html` As appropriate, the text files can include links to equations (see doc/Eqs/*.tex for examples, we auto-create the associated JPG files), or figures (see doc/JPG for examples), or even additional PDF files with further details (see doc/PDF for examples). The doc page should also include literature citations as appropriate; see the bottom of doc/fix_nh.txt for examples and the earlier part of the same file for how to format the cite itself. The "Restrictions" section of the doc page should indicate that your command is only available if LAMMPS is built with the appropriate USER-MISC or USER-FOO package. See other user package doc files for examples of how to do this. The prerequisite for building the HTML format files are Python 3.x and virtualenv, the requirement for generating the PDF format manual is the htmldoc software. Please run at least "make html" and carefully inspect and proofread the resulting HTML format doc page before submitting your code.
* You **must** also create or extend a documentation file for each new command or style you are adding to LAMMPS. For simplicity and convenience, the documentation of groups of closely related commands or styles may be combined into a single file. This will be one file for a single-file feature. For a package, it might be several files. These are files in the [reStructuredText](https://docutils.sourceforge.io/rst.html) markup language that are then converted to HTML and PDF. The tools for this conversion are included in the source distribution, and the translation can be as simple as doing "make html pdf" in the doc folder. Thus the documentation source files must be in the same format and style as other `<name>.rst` files in the lammps/doc/src directory for similar commands and styles; use one or more of them as a starting point. An introduction to reStructuredText can be found at [https://docutils.sourceforge.io/docs/user/rst/quickstart.html](https://docutils.sourceforge.io/docs/user/rst/quickstart.html). The text files can include mathematical expressions and symbols in ".. math::" sections or ":math:" expressions, figures (see doc/JPG for examples), or even additional PDF files with further details (see doc/PDF for examples). The doc page should also include literature citations as appropriate; see the bottom of doc/fix_nh.rst for examples and the earlier part of the same file for how to format the citation itself. The "Restrictions" section of the doc page should indicate that your command is only available if LAMMPS is built with the appropriate USER-MISC or USER-FOO package. See other user package doc files for examples of how to do this. The prerequisites for building the HTML format files are Python 3.x and virtualenv. Please run at least `make html`, `make pdf`, and `make spelling` and carefully inspect and proofread the resulting HTML format doc pages as well as the output produced on the screen. Make sure that all spelling errors are fixed or the necessary false positives are added to the `doc/utils/sphinx-config/false_positives.txt` file. New styles usually also need to be added to the lists on the respective overview pages; this can also be checked with `make style_check`.
* For a new package (or even a single command) you should include one or more example scripts demonstrating its use. These should run in no more than a couple of minutes, even on a single processor, and should not require large data files as input. See the directories under examples/USER for examples of input scripts other users provided for their packages. These example inputs are also required for validating memory accesses and testing for memory leaks with valgrind.
* If there is a paper of yours describing your feature (either the algorithm/science behind the feature itself, or its initial usage, or its implementation in LAMMPS), you can add the citation to the *.cpp source file. See src/USER-EFF/atom_vec_electron.cpp for an example. A LaTeX citation is stored in a variable at the top of the file and a single line of code that references the variable is added to the constructor of the class. Whenever a user invokes your feature from their input script, this will cause LAMMPS to output the citation to a log.cite file and prompt the user to examine the file. Note that you should only use this for a paper you or your group authored. E.g. adding a cite in the code for a paper by Nose and Hoover if you write a fix that implements their integrator is not the intended usage. That kind of citation should just be in the doc page you provide.
* For new utility functions or classes (i.e. anything that does not depend on a LAMMPS object), new unit tests should be added to the unittest tree (see the unit test sketch below).
* When adding a new LAMMPS style, a .yaml file with a test configuration and reference data should be added for the styles where a suitable tester program already exists (e.g. pair styles, bond styles, etc.).
* If there is a paper of yours describing your feature (either the algorithm/science behind the feature itself, or its initial usage, or its implementation in LAMMPS), you can add the citation to the <name>.cpp source file. See src/USER-EFF/atom_vec_electron.cpp for an example. A LaTeX citation is stored in a variable at the top of the file and a single line of code that references the variable is added to the constructor of the class. Whenever a user invokes your feature from their input script, this will cause LAMMPS to output the citation to a log.cite file and prompt the user to examine the file. Note that you should only use this for a paper you or your group authored. E.g. adding a cite in the code for a paper by Nose and Hoover if you write a fix that implements their integrator is not the intended usage. That kind of citation should just be in the doc page you provide.
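To make the style, naming, and citation conventions above more concrete, here is a minimal, hypothetical sketch of what a contributed fix style 'foo/bar' could look like. The file names, class name, and citation text are purely illustrative and not part of LAMMPS; the `FixStyle` macro, the `LAMMPS_NS` namespace, and the `lmp->citeme->add()` call follow the patterns used by existing fixes in the src tree:

```c++
/* fix_foo_bar.h -- hypothetical example of a contributed fix style "foo/bar".
   A real file would start with the LAMMPS copyright/GPL notice and the
   author's name and e-mail address. */
#ifdef FIX_CLASS
FixStyle(foo/bar,FixFooBar)
#else

#ifndef LMP_FIX_FOO_BAR_H
#define LMP_FIX_FOO_BAR_H

#include "fix.h"    // no system headers in a style header

namespace LAMMPS_NS {

class FixFooBar : public Fix {
 public:
  FixFooBar(class LAMMPS *, int, char **);
  int setmask();
};

}    // namespace LAMMPS_NS

#endif
#endif
```

```c++
/* fix_foo_bar.cpp -- 2-character indentation, no tabs, C++-style system headers */
#include "fix_foo_bar.h"
#include "citeme.h"
#include "error.h"

using namespace LAMMPS_NS;

// optional self-citation; only cite papers authored by you or your group
static const char cite_fix_foo_bar[] =
  "fix foo/bar command:\n\n"
  "@Article{FooBar2020,\n  author = {...},\n  title = {...},\n  journal = {...}\n}\n\n";

FixFooBar::FixFooBar(LAMMPS *lmp, int narg, char **arg) : Fix(lmp, narg, arg)
{
  if (lmp->citeme) lmp->citeme->add(cite_fix_foo_bar);
  if (narg < 3) error->all(FLERR,"Illegal fix foo/bar command");
}

int FixFooBar::setmask()
{
  // a real fix would request the callbacks it needs here; this sketch requests none
  return 0;
}
```

The fix shown here intentionally does nothing; the point is only the layout, naming, namespace, include, and citation conventions, so that a real contribution drops into the src tree without further restructuring.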
Finally, as a general rule of thumb: the clearer and more self-explanatory you make your documentation and README files, and the easier you make it for people to get started (e.g. by providing example scripts), the more likely it is that users will try out your new feature.
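As a rough illustration of the unit-test item in the checklist above, the following is a minimal sketch of a test that could live in the unittest tree. It assumes the GoogleTest framework the tree is built on and uses the `utils::trim_comment()` helper from src/utils.h as an example of a LAMMPS-object-free utility; the test name, input string, and assertions are illustrative only:

```c++
// test_trim_comment.cpp -- minimal unit test sketch (GoogleTest)
#include "utils.h"

#include "gtest/gtest.h"

#include <string>

using namespace LAMMPS_NS;

TEST(Utils, trim_comment)
{
  const std::string trimmed = utils::trim_comment("pair_style lj/cut 2.5 # cutoff comment");

  // everything from the '#' character onward should be removed ...
  EXPECT_EQ(trimmed.find('#'), std::string::npos);
  // ... while the text before it is kept unchanged
  EXPECT_EQ(trimmed.rfind("pair_style lj/cut 2.5", 0), 0u);
}

int main(int argc, char **argv)
{
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```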
@@ -102,11 +106,11 @@ For bug reports, the next step is that one of the core LAMMPS developers will se
### Pull Requests
For submitting pull requests, there is a [detailed tutorial](http://lammps.sandia.gov/doc/tutorial_github.html) in the LAMMPS manual. Thus only a brief breakdown of the steps is presented here.
For submitting pull requests, there is a [detailed tutorial](https://lammps.sandia.gov/doc/Howto_github.html) in the LAMMPS manual. Thus only a brief breakdown of the steps is presented here. Please note that the LAMMPS developers are still reviewing and trying to improve the process. If you are unsure about something, do not hesitate to post a question on the lammps-users mailing list or contact one of the core LAMMPS developers.
Immediately after the submission, the LAMMPS continuous integration server at ci.lammps.org will download your submitted branch and perform a simple compilation test, i.e. it will test whether your submitted code can be compiled under various conditions. It will also check whether your included documentation translates cleanly. Whether these tests succeed or fail will be recorded. If a test fails, please inspect the corresponding output on the CI server and take the necessary steps, if needed, so that the code compiles cleanly again. The tests will be re-run each time the pull request is updated with a push to the remote branch on GitHub.
Next a LAMMPS core developer will self-assign and do an overall technical assessment of the submission. If you are not yet registered as a LAMMPS collaborator, you will receive an invitation for that.
You may also receive comments and suggestions on the overall submission or specific details. If permitted, additional changes may be pushed into your pull request branch or a pull request may be filed in your LAMMPS fork on GitHub to include those changes.
Next a LAMMPS core developer will self-assign and do an overall technical assessment of the submission. If you are not yet registered as a LAMMPS collaborator, you will receive an invitation for that. As part of the assessment, the pull request will be categorized with labels. There are two special labels: `needs_work` (indicating that work from the submitter of the pull request is needed) and `work_in_progress` (indicating that the assigned LAMMPS developer will make changes if they are not done by the contributor who made the submission).
You may also receive comments and suggestions on the overall submission or specific details, and on occasion specific requests for changes as part of the review. If permitted, additional changes may also be pushed into your pull request branch, or a pull request may be filed in your LAMMPS fork on GitHub to include those changes.
The LAMMPS developer may then decide to assign the pull request to another developer (e.g. when that developer is more knowledgeable about the submitted feature or enhancement, or has written the modified code). It may also happen that additional developers are requested to provide a review and approve the changes. For submissions that may change the general behavior of LAMMPS, or where a possibility of unwanted side effects exists, additional tests may be requested by the assigned developer.
If the assigned developer is satisfied and considers the submission ready for inclusion into LAMMPS, the pull request will be assigned to the LAMMPS lead developer, Steve Plimpton (@sjplimp), who will then have the final decision on whether the submission will be included, additional changes are required or it will be ultimately rejected. After the pull request is merged, you may delete the pull request branch in your personal LAMMPS fork.
Since the learning curve for git is quite steep for efficiently managing remote repositories, local and remote branches, pull requests and more, do not hesitate to ask questions, if you are not sure about how to do certain steps that are asked of you. Even if the changes asked of you do not make sense to you, they may be important for the LAMMPS developers. Please also note, that these all are guidelines and not set in stone.
If the assigned developer is satisfied and considers the submission ready for inclusion into LAMMPS, the pull request will receive approvals and be merged into the master branch by one of the core LAMMPS developers. After the pull request is merged, you may delete the feature branch used for the pull request in your personal LAMMPS fork.
Since the learning curve for git is quite steep for efficiently managing remote repositories, local and remote branches, pull requests, and more, do not hesitate to ask questions if you are not sure about how to do certain steps that are asked of you. Even if the changes asked of you do not make sense to you, they may be important for the LAMMPS developers. Please also note that all of these are guidelines and nothing is set in stone, so depending on the nature of the contribution, the workflow may be adjusted.


@@ -1,31 +0,0 @@
## Summary
_Please provide a brief description of the issue_
## Type of Issue
_Is this a 'Bug Report' or a 'Suggestion for an Enhancement'?_
## Detailed Description (Enhancement Suggestion)
_Explain how you would like to see LAMMPS enhanced, what feature(s) you are looking for, provide references to relevant background information, and whether you are willing to implement the enhancement yourself or would like to participate in the implementation_
## LAMMPS Version (Bug Report)
_Please specify which LAMMPS version this issue was detected with. If this is not the latest development version, please stop and test that version, too, and report it here if the bug persists_
## Expected Behavior (Bug Report)
_Describe the expected behavior. Quote from the LAMMPS manual where needed or explain why the expected behavior is meaningful, especially when it differs from the manual_
## Actual Behavior (Bug Report)
_Describe the actual behavior, how it differs from the expected behavior, and how this can be observed. Try to be specific and do **not** use vague terms like "doesn't work" or "wrong result". Do not assume that the person reading this has any experience with or knowledge of your specific research._
## Steps to Reproduce (Bug Report)
_Describe the steps required to quickly reproduce the issue. You can attach (small) files to the section below or add URLs where to download an archive with all necessary files. Please try to create inputs that are as small as possible and run as fast as possible. NOTE: the less effort and time it takes to reproduce your issue, the more likely it is that somebody will look into it._
## Further Information, Files, and Links
_Put any additional information here, attach relevant text or image files and URLs to external sites, e.g. relevant publications_

32
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,32 @@
---
name: Bug report
about: Create a bug report to help us eliminate issues and improve LAMMPS
title: "[BUG] _Replace With Suitable Title_"
labels: bug
assignees: ''
---
**Summary**
<!--Please provide a clear and concise description of what the bug is.-->
**LAMMPS Version and Platform**
<!--Please specify precisely which LAMMPS version this issue was detected with (the first line of the output) and what platform (operating system and its version, hardware) you are running on. If possible, test with the most recent LAMMPS patch version-->
**Expected Behavior**
<!--Describe the expected behavior. Quote from the LAMMPS manual where needed, or explain why the expected behavior is meaningful, especially when it differs from the manual-->
**Actual Behavior**
<!--Describe the actual behavior, how it differs from the expected behavior, and how this can be observed. Try to be specific and do **not** use vague terms like "doesn't work" or "wrong result". Do not assume that the person reading this has any experience with or knowledge of your specific area of research.-->
**Steps to Reproduce**
<!--Describe the steps required to (quickly) reproduce the issue. You can attach (small) files to the section below or add URLs where to download an archive with all necessary files. Please try to create an input set that is as minimal and small as possible and reproduces the bug as quickly as possible. **NOTE:** the less effort and time it takes to reproduce your reported bug, the more likely it becomes, that somebody will look into it and fix the problem.-->
**Further Information, Files, and Links**
<!--Put any additional information here, attach relevant text or image files and URLs to external sites, e.g. relevant publications-->


@@ -0,0 +1,20 @@
---
name: Feature request
about: Make a suggestion for a new feature or a change to LAMMPS
title: "[Feature Request] _Replace with Title_"
labels: enhancement
assignees: ''
---
**Summary**
<!--Please provide a brief and concise description of the suggested feature or change-->
**Detailed Description**
<!--Please explain how you would like to see LAMMPS enhanced, what feature(s) you are looking for, what specific problems this will solve. If possible, provide references to relevant background information like publications or web pages, and whether you are planning to implement the enhancement yourself or would like to participate in the implementation. If applicable add a reference to an existing bug report or issue that this will address.-->
**Further Information, Files, and Links**
<!--Put any additional information here, attach relevant text or image files and URLs to external sites, e.g. relevant publications-->

21
.github/ISSUE_TEMPLATE/generic.md vendored Normal file

@@ -0,0 +1,21 @@
---
name: Generic Issue
about: For issues that do not fit any of the other categories
title: "_Replace With a Descriptive Title_"
labels:
assignees: ''
---
**Summary**
<!--Please provide a clear and concise description of what this issue report is about.-->
**LAMMPS Version and Platform**
<!--Please specify precisely which LAMMPS version this issue was detected with (the first line of the output) and what platform (operating system and its version, hardware) you are running on. If possible, test with the most recent LAMMPS patch version-->
**Details**
<!--Please explain the issue in detail here-->

15
.github/ISSUE_TEMPLATE/help_request.md vendored Normal file

@@ -0,0 +1,15 @@
---
name: Request for Help
about: "Don't post help requests here, email the lammps-users mailing list"
title: ""
labels: invalid
assignees: ''
---
Please **do not** post requests for help (e.g. with installing or using LAMMPS) here.
Instead send an e-mail to the lammps-users mailing list.
This issue tracker is for tracking LAMMPS development related issues only.
Thanks for your cooperation.


@@ -1,29 +1,45 @@
## Purpose
**Summary**
_Briefly describe the new feature(s), enhancement(s), or bugfix(es) included in this pull request. If this addresses an open GitHub Issue, mention the issue number, e.g. with `fixes #221` or `closes #135`, so that issue will be automatically closed when the pull request is merged_
<!--Briefly describe the new feature(s), enhancement(s), or bugfix(es) included in this pull request.-->
## Author(s)
**Related Issue(s)**
_Please state name and affiliation of the author or authors that should be credited with the changes in this pull request_
<!--If this addresses an open GitHub issue for this project, please mention the issue number here, and describe the relation. Use the phrases `fixes #221` or `closes #135`, when you want an issue to be automatically closed when the pull request is merged-->
## Backward Compatibility
**Author(s)**
_Please state whether any changes in the pull request break backward compatibility for inputs, and - if yes - explain what has been changed and why_
<!--Please state name and affiliation of the author or authors that should be credited with the changes in this pull request. If this pull request adds new files to the distribution, please also provide a suitable "long-lived" e-mail address (ideally something that can outlive your institution's e-mail, in case you change jobs) for the *corresponding* author, i.e. the person the LAMMPS developers can contact directly with questions and requests related to maintenance and support of this contributed code.-->
## Implementation Notes
**Licensing**
_Provide any relevant details about how the changes are implemented, how correctness was verified, how other features - if any - in LAMMPS are affected_
By submitting this pull request, I agree that my contribution will be included in LAMMPS and redistributed under either the GNU General Public License version 2 (GPL v2) or the GNU Lesser General Public License version 2.1 (LGPL v2.1).
## Post Submission Checklist
**Backward Compatibility**
<!--Please state whether any changes in the pull request will break backward compatibility for inputs, and - if yes - explain what has been changed and why-->
**Implementation Notes**
<!--Provide any relevant details about how the changes are implemented, how correctness was verified, how other features - if any - in LAMMPS are affected-->
**Post Submission Checklist**
<!--Please check the fields below as they are completed **after** the pull request has been submitted. Delete lines that don't apply-->
_Please check the fields below as they are completed_
- [ ] The feature or features in this pull request is complete
- [ ] Suitable new documentation files and/or updates to the existing docs are included
- [ ] One or more example input decks are included
- [ ] Licensing information is complete
- [ ] Corresponding author information is complete
- [ ] The source code follows the LAMMPS formatting guidelines
- [ ] Suitable new documentation files and/or updates to the existing docs are included
- [ ] The added/updated documentation is integrated and tested with the documentation build system
- [ ] The feature has been verified to work with the conventional build system
- [ ] The feature has been verified to work with the CMake based build system
- [ ] Suitable tests have been added to the unittest tree.
- [ ] A package specific README file has been included or updated
- [ ] One or more example input decks are included
## Further Information, Files, and Links
**Further Information, Files, and Links**
_Put any additional information here, attach relevant text or image files, and URLs to external sites (e.g. DOIs or webpages)_
<!--Put any additional information here, attach relevant text or image files, and URLs to external sites (e.g. DOIs or webpages)-->


@@ -0,0 +1,45 @@
---
name: Bug fix
about: Submit a pull request that fixes one or more bugs
title: "[BUGFIX] _Replace With Suitable Title_"
labels: bugfix
assignees: ''
---
**Summary**
<!--Briefly describe the bug(s) that are eliminated by this pull request.-->
**Related Issue(s)**
<!--If this addresses an open GitHub issue for this project, please mention the issue number here, and describe the relation. Use the phrases `fixes #221` or `closes #135`, when you want an issue to be automatically closed when the pull request is merged-->
**Author(s)**
<!--Please state name and affiliation of the author or authors that should be credited with the changes in this pull request. If this pull request adds new files to the distribution, please also provide a suitable "long-lived" e-mail address (ideally something that can outlive your institution's e-mail, in case you change jobs) for the *corresponding* author, i.e. the person the LAMMPS developers can contact directly with questions and requests related to maintenance and support of this contributed code.-->
**Licensing**
By submitting this pull request, I agree that my contribution will be included in LAMMPS and redistributed under either the GNU General Public License version 2 (GPL v2) or the GNU Lesser General Public License version 2.1 (LGPL v2.1).
**Backward Compatibility**
<!--Please state whether any changes in the pull request will break backward compatibility for inputs, and - if yes - explain what has been changed and why-->
**Detailed Description**
<!--Provide any relevant details about how the fixed bug can be reproduced, how the changes are implemented, how correctness was verified, how other features - if any - in LAMMPS are affected-->
**Post Submission Checklist**
<!--Please check the fields below as they are completed **after** the pull request has been submitted. Delete lines that don't apply-->
- [ ] The feature or features in this pull request is complete
- [ ] Licensing information is complete
- [ ] Corresponding author information is complete
- [ ] The source code follows the LAMMPS formatting guidelines
- [ ] The feature has been verified to work with the conventional build system
- [ ] The feature has been verified to work with the CMake based build system
- [ ] Suitable tests have been added to the unittest tree.


@@ -0,0 +1,43 @@
---
name: Maintenance or Refactoring
about: Submit a pull request that does code refactoring or other maintenance changes
title: "[MAINTENANCE] _Replace With Suitable Title_"
labels: maintenance
assignees: ''
---
**Summary**
<!--Briefly describe the included changes.-->
**Related Issue(s)**
<!--If this addresses an open GitHub issue for this project, please mention the issue number here, and describe the relation. Use the phrases `fixes #221` or `closes #135`, when you want an issue to be automatically closed when the pull request is merged-->
**Author(s)**
<!--Please state name and affiliation of the author or authors that should be credited with the changes in this pull request. If this pull request adds new files to the distribution, please also provide a suitable "long-lived" e-mail address (ideally something that can outlive your institution's e-mail, in case you change jobs) for the *corresponding* author, i.e. the person the LAMMPS developers can contact directly with questions and requests related to maintenance and support of this contributed code.-->
**Licensing**
By submitting this pull request, I agree that my contribution will be included in LAMMPS and redistributed under either the GNU General Public License version 2 (GPL v2) or the GNU Lesser General Public License version 2.1 (LGPL v2.1).
**Backward Compatibility**
<!--Please state whether any changes in the pull request will break backward compatibility for inputs, and - if yes - explain what has been changed and why-->
**Detailed Description**
<!--Provide any relevant details about how the changes are implemented, how correctness was verified, how other features - if any - in LAMMPS are affected-->
**Post Submission Checklist**
<!--Please check the fields below as they are completed *after* the pull request is submitted-->
- [ ] The pull request is complete
- [ ] The source code follows the LAMMPS formatting guidelines
- [ ] The feature has been verified to work with the conventional build system
- [ ] The feature has been verified to work with the CMake based build system
- [ ] Suitable tests have been added to the unittest tree.


@@ -0,0 +1,54 @@
---
name: New Feature
about: Submit a pull request that adds new Features (complete files) to LAMMPS
title: "[New Feature] _Replace With Suitable Title_"
labels: enhancement
assignees: ''
---
**Summary**
<!--Briefly describe the new feature(s) included in this pull request.-->
**Related Issue(s)**
<!--If this addresses an open GitHub issue for this project, please mention the issue number here, and describe the relation. Use the phrases `fixes #221` or `closes #135`, when you want an issue to be automatically closed when the pull request is merged-->
**Author(s)**
<!--Please state name and affiliation of the author or authors that should be credited with the changes in this pull request. If this pull request adds new files to the distribution, please also provide a suitable "long-lived" e-mail address (ideally something that can outlive your institution's e-mail, in case you change jobs) for the *corresponding* author, i.e. the person the LAMMPS developers can contact directly with questions and requests related to maintenance and support of this contributed code.-->
**Licensing**
By submitting this pull request, I agree that my contribution will be included in LAMMPS and redistributed under either the GNU General Public License version 2 (GPL v2) or the GNU Lesser General Public License version 2.1 (LGPL v2.1).
**Backward Compatibility**
<!--Please state whether any changes in the pull request will break backward compatibility for inputs, and - if yes - explain what has been changed and why-->
**Implementation Notes**
<!--Provide any relevant details about how the new feature(s) are implemented, how correctness was verified, how other features - if any - in LAMMPS are affected-->
**Post Submission Checklist**
<!--Please check the fields below as they are completed **after** the pull request has been submitted. Delete lines that don't apply-->
- [ ] The feature or features in this pull request is complete
- [ ] Licensing information is complete
- [ ] Corresponding author information is complete
- [ ] The source code follows the LAMMPS formatting guidelines
- [ ] Suitable new documentation files and/or updates to the existing docs are included
- [ ] The added/updated documentation is integrated and tested with the documentation build system
- [ ] The feature has been verified to work with the conventional build system
- [ ] The feature has been verified to work with the CMake based build system
- [ ] Suitable tests have been added to the unittest tree.
- [ ] A package specific README file has been included or updated
- [ ] One or more example input decks are included
**Further Information, Files, and Links**
<!--Put any additional information here, attach relevant text or image files, and URLs to external sites (e.g. DOIs or webpages)-->


@@ -0,0 +1,54 @@
---
name: Update or Enhancement
about: Submit a pull request that provides updates or enhancements for a package or feature in LAMMPS
title: "[UPDATE] _Replace With Suitable Title_"
labels: enhancement
assignees: ''
---
**Summary**
<!--Briefly describe what kind of updates or enhancements for a package or feature are included. If you are not the original author of the package or feature, please mention, whether your contribution was created independently or in collaboration/cooperation with the original author.-->
**Related Issue(s)**
<!--If this addresses an open GitHub issue for this project, please mention the issue number here, and describe the relation. Use the phrases `fixes #221` or `closes #135`, when you want an issue to be automatically closed when the pull request is merged-->
**Author(s)**
<!--Please state name and affiliation of the author or authors that should be credited with the changes in this pull request-->
**Licensing**
By submitting this pull request, I agree that my contribution will be included in LAMMPS and redistributed under either the GNU General Public License version 2 (GPL v2) or the GNU Lesser General Public License version 2.1 (LGPL v2.1).
**Backward Compatibility**
<!--Please state whether any changes in the pull request will break backward compatibility for inputs, and - if yes - explain what has been changed and why-->
**Implementation Notes**
<!--Provide any relevant details about how the changes are implemented, how correctness was verified, how other features - if any - in LAMMPS are affected-->
**Post Submission Checklist**
<!--Please check the fields below as they are completed **after** the pull request has been submitted. Delete lines that don't apply-->
- [ ] The feature or features in this pull request is complete
- [ ] Licensing information is complete
- [ ] Corresponding author information is complete
- [ ] The source code follows the LAMMPS formatting guidelines
- [ ] Suitable updates to the existing docs are included
- [ ] The updated documentation is integrated and tested with the documentation build system
- [ ] The feature has been verified to work with the conventional build system
- [ ] The feature has been verified to work with the CMake based build system
- [ ] Suitable tests have been updated or added to the unittest tree.
- [ ] A package specific README file has been updated
- [ ] One or more example input decks are included
**Further Information, Files, and Links**
<!--Put any additional information here, attach relevant text or image files, and URLs to external sites (e.g. DOIs or webpages)-->

29
.github/codecov.yml vendored Normal file

@@ -0,0 +1,29 @@
comment: false
coverage:
  notify:
    slack:
      default:
        url: "secret:HWZbvgtc6OD7F3v3PfrK3/rzCJvScbh69Fi1CkLwuHK0+wIBIHVR+Q5i7q6F9Ln4OChbiRGtYAEUUsT8/jmBu4qDpIi8mx746codc0z/Z3aafLd24pBrCEPLvdCfIZxqPnw3TuUgGhwmMDZf0+thg8YNUr/MbOZ7Li2L6+ZbYuA="
        threshold: 10%
        only_pulls: false
        branches:
          - "master"
        flags:
          - "unit"
        paths:
          - "src"
  status:
    project:
      default:
        branches:
          - "master"
        paths:
          - "src"
        informational: true
    patch:
      default:
        branches:
          - "master"
        paths:
          - "src"
        informational: true

4
.gitignore vendored

@@ -9,6 +9,7 @@
*.d
*.x
*.exe
*.sif
*.dll
*.pyc
__pycache__
@@ -22,9 +23,11 @@ log.cite
.*.swp
*.orig
*.rej
vgcore.*
.vagrant
\#*#
.#*
.vscode
.DS_Store
.DS_Store?
@@ -34,6 +37,7 @@ log.cite
ehthumbs.db
Thumbs.db
.clang-format
.lammps_history
#cmake
/build*

22
README

@@ -25,25 +25,29 @@ The LAMMPS distribution includes the following files and directories:
README this file
LICENSE the GNU General Public License (GPL)
bench benchmark problems
cmake CMake build system
cmake CMake build files
doc documentation
examples simple test problems
lib libraries LAMMPS can be linked with
fortran Fortran wrapper for LAMMPS
lib additional provided or external libraries
potentials interatomic potential files
python Python wrapper on LAMMPS as a library
python Python wrappers for LAMMPS
src source files
tools pre- and post-processing tools
Point your browser at any of these files to get started:
http://lammps.sandia.gov/doc/Manual.html the LAMMPS manual
http://lammps.sandia.gov/doc/Intro.html hi-level introduction
http://lammps.sandia.gov/doc/Build.html how to build LAMMPS
http://lammps.sandia.gov/doc/Run_head.html how to run LAMMPS
http://lammps.sandia.gov/doc/Developer.pdf LAMMPS developer guide
https://lammps.sandia.gov/doc/Manual.html LAMMPS manual
https://lammps.sandia.gov/doc/Intro.html hi-level introduction
https://lammps.sandia.gov/doc/Build.html how to build LAMMPS
https://lammps.sandia.gov/doc/Run_head.html how to run LAMMPS
https://lammps.sandia.gov/doc/Commands_all.html Table of available commands
https://lammps.sandia.gov/doc/Library.html LAMMPS library interfaces
https://lammps.sandia.gov/doc/Modify.html how to modify and extend LAMMPS
https://lammps.sandia.gov/doc/Developer.html LAMMPS developer info
You can also create these doc pages locally:
% cd doc
% make html # creates HTML pages in doc/html
% make pdf # creates Manual.pdf and Developer.pdf
% make pdf # creates Manual.pdf


@@ -1,51 +0,0 @@
These are input scripts used to run versions of several of the
benchmarks in the top-level bench directory using the GPU accelerator
package. The results of running these scripts on two different machines
(a desktop with 2 Tesla GPUs and the ORNL Titan supercomputer) are shown
on the "GPU (Fermi)" section of the Benchmark page of the LAMMPS WWW
site: lammps.sandia.gov/bench.
Examples are shown below of how to run these scripts. This assumes
you have built 3 executables with the GPU package
installed, e.g.
lmp_linux_single
lmp_linux_mixed
lmp_linux_double
------------------------------------------------------------------------
To run on just CPUs (without using the GPU styles),
do something like the following:
mpirun -np 1 lmp_linux_double -v x 8 -v y 8 -v z 8 -v t 100 < in.lj
mpirun -np 12 lmp_linux_double -v x 16 -v y 16 -v z 16 -v t 100 < in.eam
The "xyz" settings determine the problem size. The "t" setting
determines the number of timesteps.
These mpirun commands run on a single node. To run on multiple
nodes, scale up the "-np" setting.
------------------------------------------------------------------------
To run with the GPU package, do something like the following:
mpirun -np 12 lmp_linux_single -sf gpu -v x 32 -v y 32 -v z 64 -v t 100 < in.lj
mpirun -np 8 lmp_linux_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.eam
The "xyz" settings determine the problem size. The "t" setting
determines the number of timesteps. The "np" setting determines how
many MPI tasks (per node) the problem will run on. The numeric
argument to the "-pk" setting is the number of GPUs (per node); 1 GPU
is the default. Note that you can use more MPI tasks than GPUs (per
node) with the GPU package.
These mpirun commands run on a single node. To run on multiple nodes,
scale up the "-np" setting, and control the number of MPI tasks per
node via a "-ppn" setting.
------------------------------------------------------------------------
If the script has "titan" in its name, it was run on the Titan
supercomputer at ORNL.


@@ -1,24 +0,0 @@
# bulk Cu lattice
units metal
atom_style atomic
lattice fcc 3.615
region box block 0 $x 0 $y 0 $z
create_box 1 box
create_atoms 1 box
pair_style eam
pair_coeff 1 1 Cu_u3.eam
velocity all create 1600.0 376847 loop geom
neighbor 1.0 bin
neigh_modify every 1 delay 5 check yes
fix 1 all nve
timestep 0.005
thermo 50
run $t


@@ -1,37 +0,0 @@
# bulk Cu lattice
newton off
package gpu force/neigh 0 0 1
processors * * * grid numa
variable x index 1
variable y index 1
variable z index 1
variable xx equal 20*$x
variable yy equal 20*$y
variable zz equal 20*$z
units metal
atom_style atomic
lattice fcc 3.615
region box block 0 ${xx} 0 ${yy} 0 ${zz}
create_box 1 box
create_atoms 1 box
pair_style eam/gpu
pair_coeff 1 1 Cu_u3.eam
velocity all create 1600.0 376847 loop geom
neighbor 1.0 bin
neigh_modify every 1 delay 5 check yes
fix 1 all nve
timestep 0.005
thermo 50
run 15
run 100


@@ -1,22 +0,0 @@
# 3d Lennard-Jones melt
units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 $x 0 $y 0 $z
create_box 1 box
create_atoms 1 box
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
run $t

View File

@ -1,35 +0,0 @@
# 3d Lennard-Jones melt
newton off
package gpu force/neigh 0 0 1
processors * * * grid numa
variable x index 1
variable y index 1
variable z index 1
variable xx equal 20*$x
variable yy equal 20*$y
variable zz equal 20*$z
units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 ${xx} 0 ${yy} 0 ${zz}
create_box 1 box
create_atoms 1 box
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut/gpu 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
run 15
run 100

View File

@ -1,30 +0,0 @@
# Rhodopsin model
units real
neigh_modify delay 5 every 1
atom_style full
atom_modify map hash
bond_style harmonic
angle_style charmm
dihedral_style charmm
improper_style harmonic
pair_style lj/charmm/coul/long 8.0 10.0
pair_modify mix arithmetic
kspace_style pppm 1e-4
read_data data.rhodo
replicate $x $y $z
fix 1 all shake 0.0001 5 0 m 1.0 a 232
fix 2 all npt temp 300.0 300.0 100.0 &
z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1
special_bonds charmm
thermo 50
thermo_style multi
timestep 2.0
run $t

View File

@ -1,39 +0,0 @@
# Rhodopsin model
newton off
package gpu force/neigh 0 0 1
processors * * * grid numa
variable x index 1
variable y index 1
variable z index 1
units real
neigh_modify delay 5 every 1
atom_style full
atom_modify map hash
bond_style harmonic
angle_style charmm
dihedral_style charmm
improper_style harmonic
pair_style lj/charmm/coul/long/gpu 8.0 ${cutoff}
pair_modify mix arithmetic
kspace_style pppm/gpu 1e-4
read_data data.rhodo
replicate $x $y $z
fix 1 all shake 0.0001 5 0 m 1.0 a 232
fix 2 all npt temp 300.0 300.0 100.0 &
z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1
special_bonds charmm
thermo 50
# thermo_style multi
timestep 2.0
run 15
run 100

View File

@ -1,42 +0,0 @@
# Rhodopsin model
newton off
package gpu force/neigh 0 0 1
partition yes 1 processors * * * grid twolevel ${grid} * * * &
part 1 2 multiple
partition yes 2 processors * * * part 1 2 multiple
variable x index 1
variable y index 1
variable z index 1
units real
neigh_modify delay 5 every 1
atom_style full
atom_modify map hash
bond_style harmonic
angle_style charmm
dihedral_style charmm
improper_style harmonic
pair_style lj/charmm/coul/long/gpu 8.0 ${cutoff}
pair_modify mix arithmetic
kspace_style pppm/gpu 1e-4
read_data data.rhodo
replicate $x $y $z
fix 1 all shake 0.0001 5 0 m 1.0 a 232
fix 2 all npt temp 300.0 300.0 100.0 &
z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1
special_bonds charmm
thermo 50
# thermo_style multi
timestep 2.0
run_style verlet/split
run 15
run 100

View File

@ -1,108 +0,0 @@
# linux = Shannon Linux box, Intel icc, OpenMPI, KISS FFTW
SHELL = /bin/sh
# ---------------------------------------------------------------------
# compiler/linker settings
# specify flags and libraries needed for your compiler
CC = icc
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = icc
LINKFLAGS = -O
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
# specify settings for LAMMPS features you will use
# if you change any -D setting, do full re-compile after "make clean"
# LAMMPS ifdef settings, OPTIONAL
# see possible settings in doc/Section_start.html#2_2 (step 4)
LMP_INC =
# MPI library, REQUIRED
# see discussion in doc/Section_start.html#2_2 (step 5)
# can point to dummy MPI library in src/STUBS as in Makefile.serial
# INC = path for mpi.h, MPI compiler settings
# PATH = path for MPI library
# LIB = name of MPI library
MPI_INC = -I/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/include/
MPI_PATH = -L/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/lib
MPI_LIB = -lmpi
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
# can be left blank to use provided KISS FFT library
# INC = -DFFT setting, e.g. -DFFT_FFTW, FFT compiler settings
# PATH = path for FFT library
# LIB = name of FFT library
FFT_INC =
FFT_PATH =
FFT_LIB =
# JPEG and/or PNG library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 7)
# only needed if -DLAMMPS_JPEG or -DLAMMPS_PNG listed with LMP_INC
# INC = path(s) for jpeglib.h and/or png.h
# PATH = path(s) for JPEG library and/or PNG library
# LIB = name(s) of JPEG library and/or PNG library
JPG_INC =
JPG_PATH =
JPG_LIB = -ljpeg
# ---------------------------------------------------------------------
# build rules and dependencies
# no need to edit this section
include Makefile.package.settings
include Makefile.package
EXTRA_INC = $(LMP_INC) $(PKG_INC) $(MPI_INC) $(FFT_INC) $(JPG_INC) $(PKG_SYSINC)
EXTRA_PATH = $(PKG_PATH) $(MPI_PATH) $(FFT_PATH) $(JPG_PATH) $(PKG_SYSPATH)
EXTRA_LIB = $(PKG_LIB) $(MPI_LIB) $(FFT_LIB) $(JPG_LIB) $(PKG_SYSLIB)
# Path to src files
vpath %.cpp ..
vpath %.h ..
# Link target
$(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
# Individual dependencies
DEPENDS = $(OBJ:.o=.d)
sinclude $(DEPENDS)

View File

@ -1,108 +0,0 @@
# linux = Shannon Linux box, Intel icc, OpenMPI, KISS FFTW
SHELL = /bin/sh
# ---------------------------------------------------------------------
# compiler/linker settings
# specify flags and libraries needed for your compiler
CC = icc
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = icc
LINKFLAGS = -O
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
# specify settings for LAMMPS features you will use
# if you change any -D setting, do full re-compile after "make clean"
# LAMMPS ifdef settings, OPTIONAL
# see possible settings in doc/Section_start.html#2_2 (step 4)
LMP_INC =
# MPI library, REQUIRED
# see discussion in doc/Section_start.html#2_2 (step 5)
# can point to dummy MPI library in src/STUBS as in Makefile.serial
# INC = path for mpi.h, MPI compiler settings
# PATH = path for MPI library
# LIB = name of MPI library
MPI_INC = -I/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/include/
MPI_PATH = -L/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/lib
MPI_LIB = -lmpi
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
# can be left blank to use provided KISS FFT library
# INC = -DFFT setting, e.g. -DFFT_FFTW, FFT compiler settings
# PATH = path for FFT library
# LIB = name of FFT library
FFT_INC =
FFT_PATH =
FFT_LIB =
# JPEG and/or PNG library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 7)
# only needed if -DLAMMPS_JPEG or -DLAMMPS_PNG listed with LMP_INC
# INC = path(s) for jpeglib.h and/or png.h
# PATH = path(s) for JPEG library and/or PNG library
# LIB = name(s) of JPEG library and/or PNG library
JPG_INC =
JPG_PATH =
JPG_LIB = -ljpeg
# ---------------------------------------------------------------------
# build rules and dependencies
# no need to edit this section
include Makefile.package.settings
include Makefile.package
EXTRA_INC = $(LMP_INC) $(PKG_INC) $(MPI_INC) $(FFT_INC) $(JPG_INC) $(PKG_SYSINC)
EXTRA_PATH = $(PKG_PATH) $(MPI_PATH) $(FFT_PATH) $(JPG_PATH) $(PKG_SYSPATH)
EXTRA_LIB = $(PKG_LIB) $(MPI_LIB) $(FFT_LIB) $(JPG_LIB) $(PKG_SYSLIB)
# Path to src files
vpath %.cpp ..
vpath %.h ..
# Link target
$(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
# Individual dependencies
DEPENDS = $(OBJ:.o=.d)
sinclude $(DEPENDS)

View File

@ -1,108 +0,0 @@
# linux = Shannon Linux box, Intel icc, OpenMPI, KISS FFTW
SHELL = /bin/sh
# ---------------------------------------------------------------------
# compiler/linker settings
# specify flags and libraries needed for your compiler
CC = icc
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = icc
LINKFLAGS = -O
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
# specify settings for LAMMPS features you will use
# if you change any -D setting, do full re-compile after "make clean"
# LAMMPS ifdef settings, OPTIONAL
# see possible settings in doc/Section_start.html#2_2 (step 4)
LMP_INC =
# MPI library, REQUIRED
# see discussion in doc/Section_start.html#2_2 (step 5)
# can point to dummy MPI library in src/STUBS as in Makefile.serial
# INC = path for mpi.h, MPI compiler settings
# PATH = path for MPI library
# LIB = name of MPI library
MPI_INC = -I/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/include/
MPI_PATH = -L/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/lib
MPI_LIB = -lmpi
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
# can be left blank to use provided KISS FFT library
# INC = -DFFT setting, e.g. -DFFT_FFTW, FFT compiler settings
# PATH = path for FFT library
# LIB = name of FFT library
FFT_INC =
FFT_PATH =
FFT_LIB =
# JPEG and/or PNG library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 7)
# only needed if -DLAMMPS_JPEG or -DLAMMPS_PNG listed with LMP_INC
# INC = path(s) for jpeglib.h and/or png.h
# PATH = path(s) for JPEG library and/or PNG library
# LIB = name(s) of JPEG library and/or PNG library
JPG_INC =
JPG_PATH =
JPG_LIB = -ljpeg
# ---------------------------------------------------------------------
# build rules and dependencies
# no need to edit this section
include Makefile.package.settings
include Makefile.package
EXTRA_INC = $(LMP_INC) $(PKG_INC) $(MPI_INC) $(FFT_INC) $(JPG_INC) $(PKG_SYSINC)
EXTRA_PATH = $(PKG_PATH) $(MPI_PATH) $(FFT_PATH) $(JPG_PATH) $(PKG_SYSPATH)
EXTRA_LIB = $(PKG_LIB) $(MPI_LIB) $(FFT_LIB) $(JPG_LIB) $(PKG_SYSLIB)
# Path to src files
vpath %.cpp ..
vpath %.h ..
# Link target
$(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
# Individual dependencies
DEPENDS = $(OBJ:.o=.d)
sinclude $(DEPENDS)

View File

@ -1,50 +0,0 @@
# /* ----------------------------------------------------------------------
# Generic Linux Makefile for CUDA
# - Change CUDA_ARCH for your GPU
# ------------------------------------------------------------------------- */
# which file will be copied to Makefile.lammps
EXTRAMAKE = Makefile.lammps.standard
CUDA_HOME = /home/projects/cuda/6.0.37
NVCC = nvcc
# Kepler CUDA
CUDA_ARCH = -arch=sm_35
# Tesla CUDA
#CUDA_ARCH = -arch=sm_21
# newer CUDA
#CUDA_ARCH = -arch=sm_13
# older CUDA
#CUDA_ARCH = -arch=sm_10 -DCUDA_PRE_THREE
# this setting should match LAMMPS Makefile
# one of LAMMPS_SMALLBIG (default), LAMMPS_BIGBIG and LAMMPS_SMALLSMALL
LMP_INC = -DLAMMPS_SMALLBIG
# precision for GPU calculations
# -D_SINGLE_SINGLE # Single precision for all calculations
# -D_DOUBLE_DOUBLE # Double precision for all calculations
# -D_SINGLE_DOUBLE # Accumulation of forces, etc. in double
CUDA_PRECISION = -D_DOUBLE_DOUBLE
CUDA_INCLUDE = -I$(CUDA_HOME)/include
CUDA_LIB = -L$(CUDA_HOME)/lib64
CUDA_OPTS = -DUNIX -O3 -Xptxas -v --use_fast_math
CUDR_CPP = mpic++ -DMPI_GERYON -DUCL_NO_EXIT -DMPICH_IGNORE_CXX_SEEK
CUDR_OPTS = -O2 # -xHost -no-prec-div -ansi-alias
BIN_DIR = ./
OBJ_DIR = ./
LIB_DIR = ./
AR = ar
BSH = /bin/sh
CUDPP_OPT = -DUSE_CUDPP -Icudpp_mini
include Nvidia.makefile

View File

@ -1,50 +0,0 @@
# /* ----------------------------------------------------------------------
# Generic Linux Makefile for CUDA
# - Change CUDA_ARCH for your GPU
# ------------------------------------------------------------------------- */
# which file will be copied to Makefile.lammps
EXTRAMAKE = Makefile.lammps.standard
CUDA_HOME = /home/projects/cuda/6.0.37
NVCC = nvcc
# Kepler CUDA
CUDA_ARCH = -arch=sm_35
# Tesla CUDA
#CUDA_ARCH = -arch=sm_21
# newer CUDA
#CUDA_ARCH = -arch=sm_13
# older CUDA
#CUDA_ARCH = -arch=sm_10 -DCUDA_PRE_THREE
# this setting should match LAMMPS Makefile
# one of LAMMPS_SMALLBIG (default), LAMMPS_BIGBIG and LAMMPS_SMALLSMALL
LMP_INC = -DLAMMPS_SMALLBIG
# precision for GPU calculations
# -D_SINGLE_SINGLE # Single precision for all calculations
# -D_DOUBLE_DOUBLE # Double precision for all calculations
# -D_SINGLE_DOUBLE # Accumulation of forces, etc. in double
CUDA_PRECISION = -D_SINGLE_DOUBLE
CUDA_INCLUDE = -I$(CUDA_HOME)/include
CUDA_LIB = -L$(CUDA_HOME)/lib64
CUDA_OPTS = -DUNIX -O3 -Xptxas -v --use_fast_math
CUDR_CPP = mpic++ -DMPI_GERYON -DUCL_NO_EXIT -DMPICH_IGNORE_CXX_SEEK
CUDR_OPTS = -O2 # -xHost -no-prec-div -ansi-alias
BIN_DIR = ./
OBJ_DIR = ./
LIB_DIR = ./
AR = ar
BSH = /bin/sh
CUDPP_OPT = -DUSE_CUDPP -Icudpp_mini
include Nvidia.makefile

View File

@ -1,50 +0,0 @@
# /* ----------------------------------------------------------------------
# Generic Linux Makefile for CUDA
# - Change CUDA_ARCH for your GPU
# ------------------------------------------------------------------------- */
# which file will be copied to Makefile.lammps
EXTRAMAKE = Makefile.lammps.standard
CUDA_HOME = /home/projects/cuda/6.0.37
NVCC = nvcc
# Kepler CUDA
CUDA_ARCH = -arch=sm_35
# Tesla CUDA
#CUDA_ARCH = -arch=sm_21
# newer CUDA
#CUDA_ARCH = -arch=sm_13
# older CUDA
#CUDA_ARCH = -arch=sm_10 -DCUDA_PRE_THREE
# this setting should match LAMMPS Makefile
# one of LAMMPS_SMALLBIG (default), LAMMPS_BIGBIG and LAMMPS_SMALLSMALL
LMP_INC = -DLAMMPS_SMALLBIG
# precision for GPU calculations
# -D_SINGLE_SINGLE # Single precision for all calculations
# -D_DOUBLE_DOUBLE # Double precision for all calculations
# -D_SINGLE_DOUBLE # Accumulation of forces, etc. in double
CUDA_PRECISION = -D_SINGLE_SINGLE
CUDA_INCLUDE = -I$(CUDA_HOME)/include
CUDA_LIB = -L$(CUDA_HOME)/lib64
CUDA_OPTS = -DUNIX -O3 -Xptxas -v --use_fast_math
CUDR_CPP = mpic++ -DMPI_GERYON -DUCL_NO_EXIT -DMPICH_IGNORE_CXX_SEEK
CUDR_OPTS = -O2 # -xHost -no-prec-div -ansi-alias
BIN_DIR = ./
OBJ_DIR = ./
LIB_DIR = ./
AR = ar
BSH = /bin/sh
CUDPP_OPT = -DUSE_CUDPP -Icudpp_mini
include Nvidia.makefile

View File

@ -1,109 +0,0 @@
# linux = Shannon Linux box, Intel icc, OpenMPI, KISS FFTW
SHELL = /bin/sh
# ---------------------------------------------------------------------
# compiler/linker settings
# specify flags and libraries needed for your compiler
CC = icc
CCFLAGS = -O3 -openmp -DLAMMPS_MEMALIGN=64 -no-offload \
-xHost -fno-alias -ansi-alias -restrict -override-limits
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = icc
LINKFLAGS = -O -openmp
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
# specify settings for LAMMPS features you will use
# if you change any -D setting, do full re-compile after "make clean"
# LAMMPS ifdef settings, OPTIONAL
# see possible settings in doc/Section_start.html#2_2 (step 4)
LMP_INC = -DLAMMPS_GZIP -DLAMMPS_JPEG
# MPI library, REQUIRED
# see discussion in doc/Section_start.html#2_2 (step 5)
# can point to dummy MPI library in src/STUBS as in Makefile.serial
# INC = path for mpi.h, MPI compiler settings
# PATH = path for MPI library
# LIB = name of MPI library
MPI_INC = -I/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/include/
MPI_PATH = -L/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/lib
MPI_LIB = -lmpi
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
# can be left blank to use provided KISS FFT library
# INC = -DFFT setting, e.g. -DFFT_FFTW, FFT compiler settings
# PATH = path for FFT library
# LIB = name of FFT library
FFT_INC =
FFT_PATH =
FFT_LIB =
# JPEG and/or PNG library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 7)
# only needed if -DLAMMPS_JPEG or -DLAMMPS_PNG listed with LMP_INC
# INC = path(s) for jpeglib.h and/or png.h
# PATH = path(s) for JPEG library and/or PNG library
# LIB = name(s) of JPEG library and/or PNG library
JPG_INC =
JPG_PATH =
JPG_LIB = -ljpeg
# ---------------------------------------------------------------------
# build rules and dependencies
# no need to edit this section
include Makefile.package.settings
include Makefile.package
EXTRA_INC = $(LMP_INC) $(PKG_INC) $(MPI_INC) $(FFT_INC) $(JPG_INC) $(PKG_SYSINC)
EXTRA_PATH = $(PKG_PATH) $(MPI_PATH) $(FFT_PATH) $(JPG_PATH) $(PKG_SYSPATH)
EXTRA_LIB = $(PKG_LIB) $(MPI_LIB) $(FFT_LIB) $(JPG_LIB) $(PKG_SYSLIB)
# Path to src files
vpath %.cpp ..
vpath %.h ..
# Link target
$(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
# Individual dependencies
DEPENDS = $(OBJ:.o=.d)
sinclude $(DEPENDS)

View File

@ -1,113 +0,0 @@
# linux = Shannon Linux box, Intel icc, OpenMPI, KISS FFTW
SHELL = /bin/sh
# ---------------------------------------------------------------------
# compiler/linker settings
# specify flags and libraries needed for your compiler
CC = nvcc
CCFLAGS = -O3 -arch=sm_35
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpicxx
LINKFLAGS = -O
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SHLIBFLAGS = -shared
OMP = yes
CUDA = yes
# ---------------------------------------------------------------------
# LAMMPS-specific settings
# specify settings for LAMMPS features you will use
# if you change any -D setting, do full re-compile after "make clean"
# LAMMPS ifdef settings, OPTIONAL
# see possible settings in doc/Section_start.html#2_2 (step 4)
LMP_INC =
# MPI library, REQUIRED
# see discussion in doc/Section_start.html#2_2 (step 5)
# can point to dummy MPI library in src/STUBS as in Makefile.serial
# INC = path for mpi.h, MPI compiler settings
# PATH = path for MPI library
# LIB = name of MPI library
MPI_INC = -I/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/include/
MPI_PATH = -L/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/lib
MPI_LIB = -lmpi
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
# can be left blank to use provided KISS FFT library
# INC = -DFFT setting, e.g. -DFFT_FFTW, FFT compiler settings
# PATH = path for FFT library
# LIB = name of FFT library
FFT_INC =
FFT_PATH =
FFT_LIB =
# JPEG and/or PNG library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 7)
# only needed if -DLAMMPS_JPEG or -DLAMMPS_PNG listed with LMP_INC
# INC = path(s) for jpeglib.h and/or png.h
# PATH = path(s) for JPEG library and/or PNG library
# LIB = name(s) of JPEG library and/or PNG library
JPG_INC =
JPG_PATH =
JPG_LIB = -ljpeg
# ---------------------------------------------------------------------
# build rules and dependencies
# no need to edit this section
include Makefile.package.settings
include Makefile.package
EXTRA_INC = $(LMP_INC) $(PKG_INC) $(MPI_INC) $(FFT_INC) $(JPG_INC) $(PKG_SYSINC)
EXTRA_PATH = $(PKG_PATH) $(MPI_PATH) $(FFT_PATH) $(JPG_PATH) $(PKG_SYSPATH)
EXTRA_LIB = $(PKG_LIB) $(MPI_LIB) $(FFT_LIB) $(JPG_LIB) $(PKG_SYSLIB)
# Path to src files
vpath %.cpp ..
vpath %.h ..
# Link target
$(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cu
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.o:%.cpp
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
# Individual dependencies
DEPENDS = $(OBJ:.o=.d)
sinclude $(DEPENDS)

View File

@ -1,110 +0,0 @@
# linux = Shannon Linux box, Intel icc, OpenMPI, KISS FFTW
SHELL = /bin/sh
# ---------------------------------------------------------------------
# compiler/linker settings
# specify flags and libraries needed for your compiler
CC = icc
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = icc
LINKFLAGS = -O
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SHLIBFLAGS = -shared
OMP = yes
# ---------------------------------------------------------------------
# LAMMPS-specific settings
# specify settings for LAMMPS features you will use
# if you change any -D setting, do full re-compile after "make clean"
# LAMMPS ifdef settings, OPTIONAL
# see possible settings in doc/Section_start.html#2_2 (step 4)
LMP_INC =
# MPI library, REQUIRED
# see discussion in doc/Section_start.html#2_2 (step 5)
# can point to dummy MPI library in src/STUBS as in Makefile.serial
# INC = path for mpi.h, MPI compiler settings
# PATH = path for MPI library
# LIB = name of MPI library
MPI_INC = -I/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/include/
MPI_PATH = -L/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/lib
MPI_LIB = -lmpi
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
# can be left blank to use provided KISS FFT library
# INC = -DFFT setting, e.g. -DFFT_FFTW, FFT compiler settings
# PATH = path for FFT library
# LIB = name of FFT library
FFT_INC =
FFT_PATH =
FFT_LIB =
# JPEG and/or PNG library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 7)
# only needed if -DLAMMPS_JPEG or -DLAMMPS_PNG listed with LMP_INC
# INC = path(s) for jpeglib.h and/or png.h
# PATH = path(s) for JPEG library and/or PNG library
# LIB = name(s) of JPEG library and/or PNG library
JPG_INC =
JPG_PATH =
JPG_LIB = -ljpeg
# ---------------------------------------------------------------------
# build rules and dependencies
# no need to edit this section
include Makefile.package.settings
include Makefile.package
EXTRA_INC = $(LMP_INC) $(PKG_INC) $(MPI_INC) $(FFT_INC) $(JPG_INC) $(PKG_SYSINC)
EXTRA_PATH = $(PKG_PATH) $(MPI_PATH) $(FFT_PATH) $(JPG_PATH) $(PKG_SYSPATH)
EXTRA_LIB = $(PKG_LIB) $(MPI_LIB) $(FFT_LIB) $(JPG_LIB) $(PKG_SYSLIB)
# Path to src files
vpath %.cpp ..
vpath %.h ..
# Link target
$(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
# Individual dependencies
DEPENDS = $(OBJ:.o=.d)
sinclude $(DEPENDS)

View File

@ -1,108 +0,0 @@
# linux = Shannon Linux box, Intel icc, OpenMPI, KISS FFTW
SHELL = /bin/sh
# ---------------------------------------------------------------------
# compiler/linker settings
# specify flags and libraries needed for your compiler
CC = icc
CCFLAGS = -O3 -openmp -restrict -ansi-alias
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = icc
LINKFLAGS = -O -openmp
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
# specify settings for LAMMPS features you will use
# if you change any -D setting, do full re-compile after "make clean"
# LAMMPS ifdef settings, OPTIONAL
# see possible settings in doc/Section_start.html#2_2 (step 4)
LMP_INC =
# MPI library, REQUIRED
# see discussion in doc/Section_start.html#2_2 (step 5)
# can point to dummy MPI library in src/STUBS as in Makefile.serial
# INC = path for mpi.h, MPI compiler settings
# PATH = path for MPI library
# LIB = name of MPI library
MPI_INC = -I/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/include/
MPI_PATH = -L/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/lib
MPI_LIB = -lmpi
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
# can be left blank to use provided KISS FFT library
# INC = -DFFT setting, e.g. -DFFT_FFTW, FFT compiler settings
# PATH = path for FFT library
# LIB = name of FFT library
FFT_INC =
FFT_PATH =
FFT_LIB =
# JPEG and/or PNG library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 7)
# only needed if -DLAMMPS_JPEG or -DLAMMPS_PNG listed with LMP_INC
# INC = path(s) for jpeglib.h and/or png.h
# PATH = path(s) for JPEG library and/or PNG library
# LIB = name(s) of JPEG library and/or PNG library
JPG_INC =
JPG_PATH =
JPG_LIB = -ljpeg
# ---------------------------------------------------------------------
# build rules and dependencies
# no need to edit this section
include Makefile.package.settings
include Makefile.package
EXTRA_INC = $(LMP_INC) $(PKG_INC) $(MPI_INC) $(FFT_INC) $(JPG_INC) $(PKG_SYSINC)
EXTRA_PATH = $(PKG_PATH) $(MPI_PATH) $(FFT_PATH) $(JPG_PATH) $(PKG_SYSPATH)
EXTRA_LIB = $(PKG_LIB) $(MPI_LIB) $(FFT_LIB) $(JPG_LIB) $(PKG_SYSLIB)
# Path to src files
vpath %.cpp ..
vpath %.h ..
# Link target
$(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
# Individual dependencies
DEPENDS = $(OBJ:.o=.d)
sinclude $(DEPENDS)

View File

@ -1,108 +0,0 @@
# linux = Shannon Linux box, Intel icc, OpenMPI, KISS FFTW
SHELL = /bin/sh
# ---------------------------------------------------------------------
# compiler/linker settings
# specify flags and libraries needed for your compiler
CC = icc
CCFLAGS = -O -restrict
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = icc
LINKFLAGS = -O
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
# specify settings for LAMMPS features you will use
# if you change any -D setting, do full re-compile after "make clean"
# LAMMPS ifdef settings, OPTIONAL
# see possible settings in doc/Section_start.html#2_2 (step 4)
LMP_INC =
# MPI library, REQUIRED
# see discussion in doc/Section_start.html#2_2 (step 5)
# can point to dummy MPI library in src/STUBS as in Makefile.serial
# INC = path for mpi.h, MPI compiler settings
# PATH = path for MPI library
# LIB = name of MPI library
MPI_INC = -I/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/include/
MPI_PATH = -L/home/projects/openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37/lib
MPI_LIB = -lmpi
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
# can be left blank to use provided KISS FFT library
# INC = -DFFT setting, e.g. -DFFT_FFTW, FFT compiler settings
# PATH = path for FFT library
# LIB = name of FFT library
FFT_INC =
FFT_PATH =
FFT_LIB =
# JPEG and/or PNG library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 7)
# only needed if -DLAMMPS_JPEG or -DLAMMPS_PNG listed with LMP_INC
# INC = path(s) for jpeglib.h and/or png.h
# PATH = path(s) for JPEG library and/or PNG library
# LIB = name(s) of JPEG library and/or PNG library
JPG_INC =
JPG_PATH =
JPG_LIB = -ljpeg
# ---------------------------------------------------------------------
# build rules and dependencies
# no need to edit this section
include Makefile.package.settings
include Makefile.package
EXTRA_INC = $(LMP_INC) $(PKG_INC) $(MPI_INC) $(FFT_INC) $(JPG_INC) $(PKG_SYSINC)
EXTRA_PATH = $(PKG_PATH) $(MPI_PATH) $(FFT_PATH) $(JPG_PATH) $(PKG_SYSPATH)
EXTRA_LIB = $(PKG_LIB) $(MPI_LIB) $(FFT_LIB) $(JPG_LIB) $(PKG_SYSLIB)
# Path to src files
vpath %.cpp ..
vpath %.h ..
# Link target
$(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
# Individual dependencies
DEPENDS = $(OBJ:.o=.d)
sinclude $(DEPENDS)

View File

@ -1,68 +0,0 @@
These are build, input, and run scripts used to run the LJ benchmark
in the top-level bench directory using all the various accelerator
packages currently available in LAMMPS. The results of running these
benchmarks on a GPU cluster with Kepler GPUs are shown on the "GPU
(Kepler)" section of the Benchmark page of the LAMMPS WWW site:
lammps.sandia.gov/bench.
The specifics of the benchmark machine are as follows:
It is a small GPU cluster at Sandia National Labs called "shannon". It
has 32 nodes, each with two 8-core Sandy Bridge Xeon CPUs (E5-2670,
2.6GHz, HT deactivated), for a total of 512 cores. Twenty-four of the
nodes have two NVIDIA Kepler GPUs (K20x, 2688 732 MHz cores). LAMMPS
was compiled with the Intel icc compiler, using module
openmpi/1.8.1/intel/13.1.SP1.106/cuda/6.0.37.
------------------------------------------------------------------------
You can, of course, build LAMMPS yourself with any of the accelerator
packages installed for your platform.
The build.py script will build LAMMPS for the various accelerator
packages using the Makefile.* files in this dir, which you can edit if
necessary for your platform. You must set the "lmpdir" variable at
the top of build.py to the home directory of LAMMPS as installed on
your system. Note that the build.py script hardcodes the arch setting
for the USER-CUDA package, which should be matched to the GPUs on your
system, e.g. sm_35 for Kepler GPUs. For the GPU package, this setting
is in the Makefile.gpu.* files, as is the CUDA_HOME variable which
should point to where NVIDIA Cuda software is installed on your
system.
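For illustration, the relevant lines in the Makefile.gpu.* files in this dir are
CUDA_HOME = /home/projects/cuda/6.0.37
CUDA_ARCH = -arch=sm_35
and should be edited to point at your CUDA installation and to match your GPU architecture.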
Once the Makefiles are in place, then typing, for example,
python build.py cpu gpu
will build executables for the CPU (no accelerators), and 3 variants
(double, mixed, single precision) of the GPU package. See the list of
possible targets at the top of the build.py script.
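For example, to build every variant in one pass (per the target list at the top of build.py):
python build.py all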
Note that the build.py script will un-install all packages in your
LAMMPS directory, then only install the ones needed for the benchmark.
The Makefile.* files in this dir are copied into lammps/src/MAKE, as a
dummy Makefile.foo, so they will not conflict with makefiles that may
already be there. The build.py script also builds the auxiliary GPU
and USER-CUDA library as needed.
LAMMPS executables that are generated by build.py are copied into this
directory when the script finishes each build.
------------------------------------------------------------------------
The in.* files can be run with any of the accelerator packages,
if you specify the appropriate command-line switches. These
include switches to set the problem size and number of timesteps
to run.
The run*.sh scripts have sample mpirun commands for running the input
scripts on a single node or on multiple nodes for the strong and weak
scaling results shown on the benchmark web page. These scripts are
provided for illustration purposes, to show what command-line
arguments are used with each accelerator package.
Note that we generate these run scripts, either for interactive or
batch submission, via Python scripts, which often produce a long list
of runs to exercise a combination of options. To perform a quick
benchmark calculation on your platform, you will typically only want
to run a few commands out of any of the run*.sh scripts.
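As a concrete illustration, a single quick one-node test lifted from the OpenMP run script in this dir would be:
mpirun -np 16 lmp_omp -sf omp -pk omp 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj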

View File

@ -1,187 +0,0 @@
#!/usr/local/bin/python
# Syntax: build.py target1 target2 ...
# targets:
# cpu, opt, omp,
# gpu/double, gpu/mixed, gpu/single,
# cuda/double, cuda/mixed, cuda/single,
# intel/cpu, intel/phi,
# kokkos/omp, kokkos/phi, kokkos/cuda
# gpu = gpu/double + gpu/mixed + gpu/single
# cuda = cuda/double + cuda/mixed + cuda/single
# intel = intel/cpu + intel/phi
# kokkos = kokkos/omp + kokkos/phi + kokkos/cuda
# all = cpu + opt + omp + gpu + cuda + intel + kokkos
# create executables for different packages
# MUST set lmpdir to path of LAMMPS home directory
import sys,commands,os
lmpdir = "~/lammps"
# build LAMMPS
# copy makefile into src/MAKE as Makefile.foo, then remove it
def build_lammps(makefile,pkg):
print "Building LAMMPS with %s and %s packages ..." % (makefile,pkg)
commands.getoutput("cp %s %s/src/MAKE/Makefile.foo" % (makefile,lmpdir))
cwd = os.getcwd()
os.chdir(os.path.expanduser(lmpdir + "/src"))
str = "make clean-foo"
txt = commands.getoutput(str)
str = "make no-all"
txt = commands.getoutput(str)
for package in pkg:
str = "make yes-%s" % package
txt = commands.getoutput(str)
print txt
str = "make -j 16 foo"
txt = commands.getoutput(str)
os.remove("MAKE/Makefile.foo")
os.chdir(cwd)
# build GPU library in LAMMPS
# copy makefile into lib/gpu as Makefile.foo, then remove it
def build_gpu(makefile):
print "Building GPU lib with %s ..." % makefile
commands.getoutput("cp %s %s/lib/gpu/Makefile.foo" % (makefile,lmpdir))
cwd = os.getcwd()
os.chdir(os.path.expanduser(lmpdir + "/lib/gpu"))
str = "make -f Makefile.foo clean"
txt = commands.getoutput(str)
str = "make -j 16 -f Makefile.foo"
txt = commands.getoutput(str)
os.remove("Makefile.foo")
os.chdir(cwd)
# build CUDA library in LAMMPS
# set precision and arch explicitly as options to make in lib/cuda
def build_cuda(precision,arch):
print "Building USER-CUDA lib with %s and arch sm_%d ..." % (precision,arch)
cwd = os.getcwd()
os.chdir(os.path.expanduser(lmpdir + "/lib/cuda"))
str = "make clean"
txt = commands.getoutput(str)
if precision == "double": pflag = 2
elif precision == "mixed": pflag = 4
elif precision == "single": pflag = 1
str = "make -j 16 precision=%d arch=%s" % (pflag,arch)
txt = commands.getoutput(str)
os.chdir(cwd)
# main program
# convert target keywords into target flags
cpu = opt = omp = 0
gpu = gpu_double = gpu_mixed = gpu_single = 0
cuda = cuda_double = cuda_mixed = cuda_single = 0
intel = intel_cpu = intel_phi = 0
kokkos = kokkos_omp = kokkos_phi = kokkos_cuda = 0
targets = sys.argv[1:]
for target in targets:
if target == "cpu": cpu = 1
elif target == "opt": opt = 1
elif target == "omp": omp = 1
elif target == "gpu/double": gpu_double = 1
elif target == "gpu/mixed": gpu_mixed = 1
elif target == "gpu/single": gpu_single = 1
elif target == "gpu": gpu = 1
elif target == "cuda/double": cuda_double = 1
elif target == "cuda/mixed": cuda_mixed = 1
elif target == "cuda/single": cuda_single = 1
elif target == "cuda": cuda = 1
elif target == "intel/cpu": intel_cpu = 1
elif target == "intel/phi": intel_phi = 1
elif target == "intel": intel = 1
elif target == "kokkos/omp": kokkos_omp = 1
elif target == "kokkos/phi": kokkos_phi = 1
elif target == "kokkos/cuda": kokkos_cuda = 1
elif target == "kokkos": kokkos = 1
elif target == "all": cpu = omp = gpu = cuda = intel = kokkos = 1
else: print "Target",target,"is unknown"
if gpu: gpu_double = gpu_mixed = gpu_single = 1
if cuda: cuda_double = cuda_mixed = cuda_single = 1
if intel: intel_cpu = intel_phi = 1
if kokkos: kokkos_omp = kokkos_phi = kokkos_cuda = 1
# CPU
if cpu:
build_lammps(makefile = "Makefile.cpu", pkg = [])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_cpu" % lmpdir)
# OPT
if opt:
build_lammps(makefile = "Makefile.opt", pkg = ["opt"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_opt" % lmpdir)
# OMP
if omp:
build_lammps(makefile = "Makefile.omp", pkg = ["user-omp"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_omp" % lmpdir)
# GPU, 3 precisions
if gpu_double:
build_gpu(makefile = "Makefile.gpu.double")
build_lammps(makefile = "Makefile.gpu", pkg = ["gpu"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_gpu_double" % lmpdir)
if gpu_mixed:
build_gpu(makefile = "Makefile.gpu.mixed")
build_lammps(makefile = "Makefile.gpu", pkg = ["gpu"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_gpu_mixed" % lmpdir)
if gpu_single:
build_gpu(makefile = "Makefile.gpu.single")
build_lammps(makefile = "Makefile.gpu", pkg = ["gpu"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_gpu_single" % lmpdir)
# CUDA, 3 precisions
if cuda_double:
build_cuda(precision = "double", arch = 35)
build_lammps(makefile = "Makefile.cuda", pkg = ["kspace","user-cuda"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_cuda_double" % lmpdir)
if cuda_mixed:
build_cuda(precision = "mixed", arch = 35)
build_lammps(makefile = "Makefile.cuda", pkg = ["kspace","user-cuda"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_cuda_mixed" % lmpdir)
if cuda_single:
build_cuda(precision = "single", arch = 35)
build_lammps(makefile = "Makefile.cuda", pkg = ["kspace","user-cuda"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_cuda_single" % lmpdir)
# INTEL, CPU and Phi
if intel_cpu:
build_lammps(makefile = "Makefile.intel.cpu", pkg = ["user-intel"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_intel_cpu" % lmpdir)
if intel_phi:
build_lammps(makefile = "Makefile.intel.phi", pkg = ["user-intel","user-omp"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_intel_phi" % lmpdir)
# KOKKOS, all variants
if kokkos_omp:
build_lammps(makefile = "Makefile.kokkos.omp", pkg = ["kokkos"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_kokkos_omp" % lmpdir)
if kokkos_phi:
build_lammps(makefile = "Makefile.kokkos.phi", pkg = ["kokkos"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_kokkos_phi" % lmpdir)
if kokkos_cuda:
build_lammps(makefile = "Makefile.kokkos.cuda", pkg = ["kokkos"])
print commands.getoutput("mv %s/src/lmp_foo ./lmp_kokkos_cuda" % lmpdir)

View File

@ -1,22 +0,0 @@
# 3d Lennard-Jones melt
units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 $x 0 $y 0 $z
create_box 1 box
create_atoms 1 box
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
run $t

View File

@ -1,29 +0,0 @@
#!/bin/bash
#SBATCH -N 1 --time=12:00:00
mpirun -np 1 lmp_cpu -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cpu.128K.1
mpirun -np 2 lmp_cpu -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cpu.128K.2
mpirun -np 4 lmp_cpu -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cpu.128K.4
mpirun -np 6 lmp_cpu -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cpu.128K.6
mpirun -np 8 lmp_cpu -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cpu.128K.8
mpirun -np 10 lmp_cpu -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cpu.128K.10
mpirun -np 12 lmp_cpu -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cpu.128K.12
mpirun -np 14 lmp_cpu -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cpu.128K.14
mpirun -np 16 lmp_cpu -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cpu.128K.16

View File

@ -1,20 +0,0 @@
#!/bin/bash
#SBATCH -N 1 --time=12:00:00
mpirun -N 1 lmp_cuda_double -c on -sf cuda -pk cuda 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cuda.double.128K.1
mpirun -N 2 lmp_cuda_double -c on -sf cuda -pk cuda 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cuda.double.128K.2
mpirun -N 1 lmp_cuda_mixed -c on -sf cuda -pk cuda 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cuda.mixed.128K.1
mpirun -N 2 lmp_cuda_mixed -c on -sf cuda -pk cuda 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cuda.mixed.128K.2
mpirun -N 1 lmp_cuda_single -c on -sf cuda -pk cuda 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cuda.single.128K.1
mpirun -N 2 lmp_cuda_single -c on -sf cuda -pk cuda 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.cuda.single.128K.2

View File

@ -1,155 +0,0 @@
#!/bin/bash
#SBATCH -N 1 --time=12:00:00
mpirun -np 1 lmp_gpu_single -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.1.1
mpirun -np 2 lmp_gpu_single -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.2.1
mpirun -np 2 lmp_gpu_single -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.2.2
mpirun -np 4 lmp_gpu_single -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.4.1
mpirun -np 4 lmp_gpu_single -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.4.2
mpirun -np 6 lmp_gpu_single -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.6.1
mpirun -np 6 lmp_gpu_single -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.6.2
mpirun -np 8 lmp_gpu_single -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.8.1
mpirun -np 8 lmp_gpu_single -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.8.2
mpirun -np 10 lmp_gpu_single -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.10.1
mpirun -np 10 lmp_gpu_single -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.10.2
mpirun -np 12 lmp_gpu_single -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.12.1
mpirun -np 12 lmp_gpu_single -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.12.2
mpirun -np 14 lmp_gpu_single -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.14.1
mpirun -np 14 lmp_gpu_single -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.14.2
mpirun -np 16 lmp_gpu_single -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.16.1
mpirun -np 16 lmp_gpu_single -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.single.128K.16.2
mpirun -np 1 lmp_gpu_mixed -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.1.1
mpirun -np 2 lmp_gpu_mixed -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.2.1
mpirun -np 2 lmp_gpu_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.2.2
mpirun -np 4 lmp_gpu_mixed -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.4.1
mpirun -np 4 lmp_gpu_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.4.2
mpirun -np 6 lmp_gpu_mixed -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.6.1
mpirun -np 6 lmp_gpu_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.6.2
mpirun -np 8 lmp_gpu_mixed -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.8.1
mpirun -np 8 lmp_gpu_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.8.2
mpirun -np 10 lmp_gpu_mixed -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.10.1
mpirun -np 10 lmp_gpu_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.10.2
mpirun -np 12 lmp_gpu_mixed -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.12.1
mpirun -np 12 lmp_gpu_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.12.2
mpirun -np 14 lmp_gpu_mixed -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.14.1
mpirun -np 14 lmp_gpu_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.14.2
mpirun -np 16 lmp_gpu_mixed -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.16.1
mpirun -np 16 lmp_gpu_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.mixed.128K.16.2
mpirun -np 1 lmp_gpu_double -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.1.1
mpirun -np 2 lmp_gpu_double -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.2.1
mpirun -np 2 lmp_gpu_double -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.2.2
mpirun -np 4 lmp_gpu_double -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.4.1
mpirun -np 4 lmp_gpu_double -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.4.2
mpirun -np 6 lmp_gpu_double -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.6.1
mpirun -np 6 lmp_gpu_double -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.6.2
mpirun -np 8 lmp_gpu_double -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.8.1
mpirun -np 8 lmp_gpu_double -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.8.2
mpirun -np 10 lmp_gpu_double -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.10.1
mpirun -np 10 lmp_gpu_double -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.10.2
mpirun -np 12 lmp_gpu_double -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.12.1
mpirun -np 12 lmp_gpu_double -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.12.2
mpirun -np 14 lmp_gpu_double -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.14.1
mpirun -np 14 lmp_gpu_double -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.14.2
mpirun -np 16 lmp_gpu_double -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.16.1
mpirun -np 16 lmp_gpu_double -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.gpu.double.128K.16.2

View File

@ -1,83 +0,0 @@
#!/bin/bash
#SBATCH -N 1 --time=12:00:00
mpirun -np 1 lmp_intel_cpu -sf intel -pk intel 1 prec single -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.single.128K.1
mpirun -np 2 lmp_intel_cpu -sf intel -pk intel 1 prec single -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.single.128K.2
mpirun -np 4 lmp_intel_cpu -sf intel -pk intel 1 prec single -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.single.128K.4
mpirun -np 6 lmp_intel_cpu -sf intel -pk intel 1 prec single -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.single.128K.6
mpirun -np 8 lmp_intel_cpu -sf intel -pk intel 1 prec single -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.single.128K.8
mpirun -np 10 lmp_intel_cpu -sf intel -pk intel 1 prec single -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.single.128K.10
mpirun -np 12 lmp_intel_cpu -sf intel -pk intel 1 prec single -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.single.128K.12
mpirun -np 14 lmp_intel_cpu -sf intel -pk intel 1 prec single -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.single.128K.14
mpirun -np 16 lmp_intel_cpu -sf intel -pk intel 1 prec single -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.single.128K.16
mpirun -np 1 lmp_intel_cpu -sf intel -pk intel 1 prec mixed -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.mixed.128K.1
mpirun -np 2 lmp_intel_cpu -sf intel -pk intel 1 prec mixed -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.mixed.128K.2
mpirun -np 4 lmp_intel_cpu -sf intel -pk intel 1 prec mixed -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.mixed.128K.4
mpirun -np 6 lmp_intel_cpu -sf intel -pk intel 1 prec mixed -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.mixed.128K.6
mpirun -np 8 lmp_intel_cpu -sf intel -pk intel 1 prec mixed -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.mixed.128K.8
mpirun -np 10 lmp_intel_cpu -sf intel -pk intel 1 prec mixed -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.mixed.128K.10
mpirun -np 12 lmp_intel_cpu -sf intel -pk intel 1 prec mixed -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.mixed.128K.12
mpirun -np 14 lmp_intel_cpu -sf intel -pk intel 1 prec mixed -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.mixed.128K.14
mpirun -np 16 lmp_intel_cpu -sf intel -pk intel 1 prec mixed -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.mixed.128K.16
mpirun -np 1 lmp_intel_cpu -sf intel -pk intel 1 prec double -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.double.128K.1
mpirun -np 2 lmp_intel_cpu -sf intel -pk intel 1 prec double -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.double.128K.2
mpirun -np 4 lmp_intel_cpu -sf intel -pk intel 1 prec double -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.double.128K.4
mpirun -np 6 lmp_intel_cpu -sf intel -pk intel 1 prec double -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.double.128K.6
mpirun -np 8 lmp_intel_cpu -sf intel -pk intel 1 prec double -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.double.128K.8
mpirun -np 10 lmp_intel_cpu -sf intel -pk intel 1 prec double -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.double.128K.10
mpirun -np 12 lmp_intel_cpu -sf intel -pk intel 1 prec double -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.double.128K.12
mpirun -np 14 lmp_intel_cpu -sf intel -pk intel 1 prec double -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.double.128K.14
mpirun -np 16 lmp_intel_cpu -sf intel -pk intel 1 prec double -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.intel.cpu.double.128K.16

View File

@ -1,74 +0,0 @@
#!/bin/bash
#SBATCH -N 1 --time=12:00:00
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 1 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.1
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 2 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.2
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 3 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.3
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 4 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.4
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 5 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.5
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 6 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.6
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 7 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.7
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 8 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.8
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 9 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.9
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 10 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.10
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 11 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.11
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 12 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.12
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 13 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.13
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 14 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.14
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 15 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.15
mpirun -np 1 lmp_kokkos_cuda -k on g 1 t 16 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.1.16
mpirun -np 2 lmp_kokkos_cuda -k on g 2 t 1 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.2.1
mpirun -np 2 lmp_kokkos_cuda -k on g 2 t 2 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.2.2
mpirun -np 2 lmp_kokkos_cuda -k on g 2 t 3 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.2.3
mpirun -np 2 lmp_kokkos_cuda -k on g 2 t 4 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.2.4
mpirun -np 2 lmp_kokkos_cuda -k on g 2 t 5 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.2.5
mpirun -np 2 lmp_kokkos_cuda -k on g 2 t 6 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.2.6
mpirun -np 2 lmp_kokkos_cuda -k on g 2 t 7 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.2.7
mpirun -np 2 lmp_kokkos_cuda -k on g 2 t 8 -sf kk -pk kokkos binsize 2.8 comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.cuda.128K.2.8

View File

@ -1,17 +0,0 @@
#!/bin/bash
#SBATCH -N 1 --time=12:00:00
mpirun -np 1 -bind-to socket -map-by socket -x KMP_AFFINITY=scatter lmp_kokkos_omp -k on t 16 -sf kk -pk kokkos neigh full newton off comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.omp.128K.1.16
mpirun -np 2 -bind-to socket -map-by socket -x KMP_AFFINITY=scatter lmp_kokkos_omp -k on t 8 -sf kk -pk kokkos neigh full newton off comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.omp.128K.2.8
mpirun -np 4 -bind-to socket -map-by socket -x KMP_AFFINITY=scatter lmp_kokkos_omp -k on t 4 -sf kk -pk kokkos neigh full newton off comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.omp.128K.4.4
mpirun -np 8 -bind-to socket -map-by socket -x KMP_AFFINITY=scatter lmp_kokkos_omp -k on t 2 -sf kk -pk kokkos neigh full newton off comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.omp.128K.8.2
mpirun -np 16 -bind-to socket -map-by socket -x KMP_AFFINITY=scatter lmp_kokkos_omp -k on t 1 -sf kk -pk kokkos neigh half newton on comm device -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.kokkos.omp.128K.16.1


@ -1,17 +0,0 @@
#!/bin/bash
#SBATCH -N 1 --time=12:00:00
mpirun -np 1 lmp_omp -sf omp -pk omp 16 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.omp.128K.1.16
mpirun -np 2 lmp_omp -sf omp -pk omp 8 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.omp.128K.2.8
mpirun -np 4 lmp_omp -sf omp -pk omp 4 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.omp.128K.4.4
mpirun -np 8 lmp_omp -sf omp -pk omp 2 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.omp.128K.8.2
mpirun -np 16 lmp_omp -sf omp -pk omp 1 -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.omp.128K.16.1


@ -1,29 +0,0 @@
#!/bin/bash
#SBATCH -N 1 --time=12:00:00
mpirun -np 1 lmp_opt -sf opt -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.opt.128K.1
mpirun -np 2 lmp_opt -sf opt -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.opt.128K.2
mpirun -np 4 lmp_opt -sf opt -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.opt.128K.4
mpirun -np 6 lmp_opt -sf opt -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.opt.128K.6
mpirun -np 8 lmp_opt -sf opt -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.opt.128K.8
mpirun -np 10 lmp_opt -sf opt -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.opt.128K.10
mpirun -np 12 lmp_opt -sf opt -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.opt.128K.12
mpirun -np 14 lmp_opt -sf opt -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.opt.128K.14
mpirun -np 16 lmp_opt -sf opt -v x 32 -v y 32 -v z 32 -v t 100 < in.lj
mv log.lammps log.10Sep14.lj.opt.128K.16


@ -1,20 +0,0 @@
#!/bin/bash
#SBATCH -N 16 --time=12:00:00
mpirun -npernode 16 lmp_cpu -v x 64 -v y 64 -v z 128 -v t 100 < in.lj
mv log.lammps log.28Jun14.lj.cpu.2048K.16.16
mpirun -npernode 16 lmp_omp -sf omp -pk omp 1 -v x 64 -v y 64 -v z 128 -v t 100 < in.lj
mv log.lammps log.28Jun14.lj.omp.2048K.16.1.16
mpirun -npernode 2 lmp_cuda -c on -sf cuda -pk cuda 2 -v x 64 -v y 64 -v z 128 -v t 100 < in.lj
mv log.lammps log.28Jun14.lj.cuda.2048K.2.16
mpirun -npernode 14 lmp_gpu -sf gpu -pk gpu 2 -v x 64 -v y 64 -v z 128 -v t 100 < in.lj
mv log.lammps log.28Jun14.lj.gpu.2048K.2.14.16
mpirun -npernode 2 lmp_kokkos_cuda -k on g 2 t 1 -sf kk -pk kokkos comm device -v x 64 -v y 64 -v z 128 -v t 100 < in.lj
mv log.lammps log.28Jun14.lj.kokkos.cuda.2048K.2.1.16
mpirun -np 256 -bind-to core -map-by core -x KMP_AFFINITY=scatter lmp_kokkos_omp -k on t 1 -sf kk -pk kokkos comm device -v x 64 -v y 64 -v z 128 -v t 100 < in.lj
mv log.lammps log.28Jun14.lj.kokkos.omp.2048K.16.1.16


@ -1,20 +0,0 @@
#!/bin/bash
#SBATCH -N 16 --time=12:00:00
mpirun -npernode 16 lmp_cpu -v x 128 -v y 128 -v z 128 -v t 100 < in.lj
mv log.lammps log.28Jun14.lj.cpu.512K.16.16
mpirun -npernode 16 lmp_omp -sf omp -pk omp 1 -v x 128 -v y 128 -v z 128 -v t 100 < in.lj
mv log.lammps log.28Jun14.lj.omp.512K.16.1.16
mpirun -npernode 2 lmp_cuda -c on -sf cuda -pk cuda 2 -v x 128 -v y 128 -v z 128 -v t 100 < in.lj
mv log.lammps log.28Jun14.lj.cuda.512K.2.16
mpirun -npernode 14 lmp_gpu -sf gpu -pk gpu 2 -v x 128 -v y 128 -v z 128 -v t 100 < in.lj
mv log.lammps log.28Jun14.lj.gpu.512K.2.14.16
mpirun -npernode 2 lmp_kokkos_cuda -k on g 2 t 1 -sf kk -pk kokkos comm device -v x 128 -v y 128 -v z 128 -v t 100 < in.lj
mv log.lammps log.28Jun14.lj.kokkos.cuda.512K.2.1.16
mpirun -np 256 -bind-to core -map-by core -x KMP_AFFINITY=scatter lmp_kokkos_omp -k on t 1 -sf kk -pk kokkos comm device -v x 128 -v y 128 -v z 128 -v t 100 < in.lj
mv log.lammps log.28Jun14.lj.kokkos.omp.512K.16.1.16

File diff suppressed because it is too large

bench/POTENTIALS/CH.airebo Symbolic link

@ -0,0 +1 @@
../../potentials/CH.airebo

bench/POTENTIALS/CH.rebo Symbolic link

@ -0,0 +1 @@
../../potentials/CH.rebo

File diff suppressed because it is too large

@ -0,0 +1 @@
../../potentials/CdTe.bop.table


@ -1,305 +0,0 @@
Cu functions (universal 3), SM Foiles et al, PRB, 33, 7983 (1986)
29 63.550 3.6150 FCC
500 5.0100200400801306e-04 500 1.0000000000000009e-02 4.9499999999999886e+00
0. -3.1561636903424350e-01 -5.2324876182494506e-01 -6.9740831416804383e-01 -8.5202525457518519e-01
-9.9329216586042435e-01 -1.1246331970890324e+00 -1.2481882647347859e+00 -1.3654054700363645e+00 -1.4773214276236644e+00
-1.5847099936904741e+00 -1.6865851873526410e+00 -1.7843534091637920e+00 -1.8790616476576076e+00 -1.9710188604521761e+00
-2.0604838665854572e+00 -2.1476762477372944e+00 -2.2327843595560068e+00 -2.3159713409697673e+00 -2.3973797031286352e+00
-2.4771348895887826e+00 -2.5553480773272810e+00 -2.6321184083774227e+00 -2.7075347880408458e+00 -2.7816773487592030e+00
-2.8546186529652005e+00 -2.9264246898861899e+00 -2.9971557080624507e+00 -3.0668669157065978e+00 -3.1356090736776849e+00
-3.2034290008357829e+00 -3.2703700069757247e+00 -3.3364722658277230e+00 -3.4017731379735778e+00 -3.4663074517059016e+00
-3.5301077484029122e+00 -3.5932044977085980e+00 -3.6556262870729199e+00 -3.7173999892229403e+00 -3.7785509106421671e+00
-3.8391029237823773e+00 -3.8990785849196925e+00 -3.9584992397079333e+00 -4.0173851179270912e+00 -4.0744518500210916e+00
-4.1306733564032641e+00 -4.1864034067843932e+00 -4.2416582335814326e+00 -4.2964533268445280e+00 -4.3508034838872618e+00
-4.4047228547107977e+00 -4.4582249835318351e+00 -4.5113228468570128e+00 -4.5640288884490872e+00 -4.6163550514904443e+00
-4.6683128082199232e+00 -4.7199131872767452e+00 -4.7711667990036801e+00 -4.8220838587683374e+00 -4.8726742087289665e+00
-4.9229473379113813e+00 -4.9729124009208192e+00 -5.0225782353423369e+00 -5.0719533779533492e+00 -5.1210460798461668e+00
-5.1698643205481289e+00 -5.2184158212228908e+00 -5.2667080570261362e+00 -5.3147482686812282e+00 -5.3625434733324937e+00
-5.4101004747367369e+00 -5.4574258728391953e+00 -5.5045260727784751e+00 -5.5514072933650311e+00 -5.5980755750691458e+00
-5.6445367875538750e+00 -5.6907966367860183e+00 -5.7368606717507191e+00 -5.7827342908000219e+00 -5.8284227476608805e+00
-5.8739311571204382e+00 -5.9192645004390272e+00 -5.9644276303605182e+00 -6.0094252761103064e+00 -6.0542620478988169e+00
-6.0989424413057520e+00 -6.1434708414539330e+00 -6.1878515269578429e+00 -6.2320886736884802e+00 -6.2761863583589275e+00
-6.3201485619430571e+00 -6.3639791729330000e+00 -6.4076819904493902e+00 -6.4512607272098990e+00 -6.4947190123648113e+00
-6.5380603942065250e+00 -6.5812883427622069e+00 -6.6243939095620874e+00 -6.6670830925929181e+00 -6.7096660473058591e+00
-6.7521459135001862e+00 -6.7945257643836499e+00 -6.8368086085521611e+00 -6.8789973918942735e+00 -6.9210949994162263e+00
-6.9631042569970703e+00 -7.0050279330721992e+00 -7.0468687402560874e+00 -7.0886293368973554e+00 -7.1303123285804020e+00
-7.1719202695651916e+00 -7.2134556641788095e+00 -7.2549209681507421e+00 -7.2963185899023415e+00 -7.3376508917899628e+00
-7.3789201913012903e+00 -7.4201287622117036e+00 -7.4612788356982946e+00 -7.5023726014152032e+00 -7.5434122085331978e+00
-7.5843997667427345e+00 -7.6253373472216595e+00 -7.6662269835740062e+00 -7.7070706727342895e+00 -7.7478703758424388e+00
-7.7886280190928119e+00 -7.8293454945503811e+00 -7.8700246609474789e+00 -7.9106673444489104e+00 -7.9512753393968865e+00
-7.9918504090315139e+00 -8.0323942861870705e+00 -8.0729086739704030e+00 -8.1133952464140293e+00 -8.1538556491162808e+00
-8.1942914998523975e+00 -8.2347043891773524e+00 -8.2750958810033808e+00 -8.3154675131659701e+00 -8.3558207979692725e+00
-8.3961572227176475e+00 -8.4364782502312892e+00 -8.4767853193496308e+00 -8.5170798454139458e+00 -8.5573632207473906e+00
-8.5976368151087286e+00 -8.6379019761436666e+00 -8.6781600298199919e+00 -8.7184122808490656e+00 -8.7586600130993020e+00
-8.7989044899963460e+00 -8.8391469549140993e+00 -8.8793886315543773e+00 -8.9196307243150841e+00 -8.9598744186541239e+00
-9.0001208814363167e+00 -9.0403712612778122e+00 -9.0806266888772029e+00 -9.1208882773446476e+00 -9.1611571225108719e+00
-9.2014343032440138e+00 -9.2417208817437881e+00 -9.2820179038447463e+00 -9.3223263992829857e+00 -9.3626473819958278e+00
-9.4029818503831279e+00 -9.4433307875392529e+00 -9.4836951616705960e+00 -9.5237840547885071e+00 -9.5637918926951784e+00
-9.6038142178817338e+00 -9.6438519061474608e+00 -9.6839058194810832e+00 -9.7239768064614509e+00 -9.7640657024289226e+00
-9.8041733297054634e+00 -9.8443004978059889e+00 -9.8844480036373170e+00 -9.9246166317080906e+00 -9.9648071543198853e+00
-1.0005020331762637e+01 -1.0045256912501884e+01 -1.0085517633366123e+01 -1.0125803219723423e+01 -1.0166114385662183e+01
-1.0206451834160134e+01 -1.0246816257258331e+01 -1.0287208336224353e+01 -1.0327628741713852e+01 -1.0368078133934148e+01
-1.0408557162795717e+01 -1.0449066468066974e+01 -1.0489606679525650e+01 -1.0530178417100558e+01 -1.0570782291022510e+01
-1.0611418901960292e+01 -1.0652088841158786e+01 -1.0692792690577562e+01 -1.0733531023022920e+01 -1.0774304402276016e+01
-1.0815113383222808e+01 -1.0855958511980305e+01 -1.0896840326017184e+01 -1.0937759354276295e+01 -1.0978716117290730e+01
-1.1019711127305925e+01 -1.1060744888386239e+01 -1.1101817896531486e+01 -1.1142930639787664e+01 -1.1184083598352004e+01
-1.1225277244679319e+01 -1.1266512043589387e+01 -1.1307788452364719e+01 -1.1349106920870327e+01 -1.1390467891550486e+01
-1.1431871799781504e+01 -1.1473319073642074e+01 -1.1514810134213008e+01 -1.1556345395619132e+01 -1.1597925265115521e+01
-1.1639550143177303e+01 -1.1681220423591583e+01 -1.1722936493536452e+01 -1.1764698733669888e+01 -1.1806507518187232e+01
-1.1848363215029394e+01 -1.1890266185706139e+01 -1.1932216785634637e+01 -1.1974215364086319e+01 -1.2016262264291129e+01
-1.2058357823507606e+01 -1.2100502373105996e+01 -1.2142696238631970e+01 -1.2184939739884385e+01 -1.2227233190982815e+01
-1.2269576900438324e+01 -1.2311971171220080e+01 -1.2354416300827552e+01 -1.2396912581348374e+01 -1.2439460299532641e+01
-1.2482059736851909e+01 -1.2524711169562636e+01 -1.2567414868772744e+01 -1.2610171100495961e+01 -1.2652980125719694e+01
-1.2695842200459083e+01 -1.2738757575819193e+01 -1.2781726498053729e+01 -1.2824749208615117e+01 -1.2867825944219817e+01
-1.2910956936899197e+01 -1.2954142414054047e+01 -1.2997382598508125e+01 -1.3040677708563408e+01 -1.3084027958052218e+01
-1.3127433556386677e+01 -1.3170894708610035e+01 -1.3214411615448739e+01 -1.3257984473359954e+01 -1.3301613474583519e+01
-1.3345298807190659e+01 -1.3389040655121903e+01 -1.3432839198243016e+01 -1.3476694612386723e+01 -1.3520607069407617e+01
-1.3564576737214225e+01 -1.3608603779754390e+01 -1.3652688357330362e+01 -1.3696830626228689e+01 -1.3741030739041094e+01
-1.3785288844633044e+01 -1.3829605088192579e+01 -1.3873979611263849e+01 -1.3918412551792358e+01 -1.3962904044165157e+01
-1.4007454219246995e+01 -1.4052063204422609e+01 -1.4096731123636516e+01 -1.4141458097424390e+01 -1.4186244242962175e+01
-1.4231089674089560e+01 -1.4275994501358696e+01 -1.4320958832063411e+01 -1.4365982770278379e+01 -1.4411066416893846e+01
-1.4456209869649911e+01 -1.4501413223171539e+01 -1.4546676569005058e+01 -1.4591999995647598e+01 -1.4637383588581656e+01
-1.4682827430315228e+01 -1.4728331600403862e+01 -1.4773896175488971e+01 -1.4819521229330235e+01 -1.4865206832833337e+01
-1.4910953054084985e+01 -1.4956759958383259e+01 -1.5002627608264334e+01 -1.5048556063539081e+01 -1.5094545381317744e+01
-1.5140595616041765e+01 -1.5186706819511983e+01 -1.5232879040916600e+01 -1.5279112326867676e+01 -1.5325406721414765e+01
-1.5371762266086876e+01 -1.5418178999911675e+01 -1.5464656959446415e+01 -1.5511196178805903e+01 -1.5557796689685119e+01
-1.5604458521389688e+01 -1.5651181700861002e+01 -1.5697966252703509e+01 -1.5744812199205967e+01 -1.5791719560374304e+01
-1.5838688353945599e+01 -1.5885718595428898e+01 -1.5932810298111235e+01 -1.5979963473102316e+01 -1.6027178129340314e+01
-1.6074454273625634e+01 -1.6121791910645470e+01 -1.6169191042992907e+01 -1.6216651671189425e+01 -1.6264173793714576e+01
-1.6311757407021901e+01 -1.6359402505566209e+01 -1.6407109081822910e+01 -1.6454877126310635e+01 -1.6502706627614998e+01
-1.6550597572407241e+01 -1.6598549945469813e+01 -1.6646563729715353e+01 -1.6694638906205682e+01 -1.6742775454176012e+01
-1.6790973351056778e+01 -1.6839232572488413e+01 -1.6887553092348412e+01 -1.6935934882766333e+01 -1.6984377914146876e+01
-1.7032882155186826e+01 -1.7081447572897673e+01 -1.7130074132623690e+01 -1.7178761798061373e+01 -1.7227510531275698e+01
-1.7276320292724563e+01 -1.7325191041271864e+01 -1.7374122734215121e+01 -1.7423115327299456e+01 -1.7472168774711918e+01
-1.7521283029136725e+01 -1.7570458041655343e+01 -1.7619693762170868e+01 -1.7668990138814479e+01 -1.7718347118374936e+01
-1.7767764646209685e+01 -1.7817242666259403e+01 -1.7866781121071881e+01 -1.7916379951810882e+01 -1.7966039098283659e+01
-1.8015758498943796e+01 -1.8065538090918608e+01 -1.8115377810021755e+01 -1.8165277590764617e+01 -1.8215237366381530e+01
-1.8265257068836149e+01 -1.8315336628844307e+01 -1.8365475975885602e+01 -1.8415675038220570e+01 -1.8465933742903644e+01
-1.8516252015799409e+01 -1.8566629781600568e+01 -1.8617066963838965e+01 -1.8667563484898778e+01 -1.8718119266039025e+01
-1.8768734227397317e+01 -1.8819408288014415e+01 -1.8870141365839345e+01 -1.8920933377750998e+01 -1.8971784239569388e+01
-1.9022693866067016e+01 -1.9073662170983084e+01 -1.9124689067045438e+01 -1.9175774465969539e+01 -1.9226918278483254e+01
-1.9278120414338218e+01 -1.9329380782317116e+01 -1.9380699290257098e+01 -1.9432075845048644e+01 -1.9483510352663075e+01
-1.9535002718153464e+01 -1.9586552845676124e+01 -1.9638160638497766e+01 -1.9689825999008235e+01 -1.9741548828738019e+01
-1.9793329028359494e+01 -1.9845166497711489e+01 -1.9897061135804051e+01 -1.9949012840833348e+01 -2.0001021510188707e+01
-2.0053087040468540e+01 -2.0105209327494322e+01 -2.0157388266314911e+01 -2.0209623751249865e+01 -2.0261915675825890e+01
-2.0314263932714312e+01 -2.0366668414255741e+01 -2.0419129011700647e+01 -2.0471645615726288e+01 -2.0524218116314501e+01
-2.0576846402769888e+01 -2.0629530363722893e+01 -2.0682269887147754e+01 -2.0735064860369221e+01 -2.0787915170073120e+01
-2.0840820702317274e+01 -2.0893781342541502e+01 -2.0946796975575580e+01 -2.0999867485656864e+01 -2.1052992756428125e+01
-2.1106172670961428e+01 -2.1159407111702421e+01 -2.1212695960751944e+01 -2.1266039099329419e+01 -2.1319436408360275e+01
-2.1372887768154328e+01 -2.1426393058473991e+01 -2.1479952158748461e+01 -2.1533564947619766e+01 -2.1587231303431395e+01
-2.1640951103995235e+01 -2.1694724226644553e+01 -2.1748550548245930e+01 -2.1802429945213817e+01 -2.1856362293508028e+01
-2.1910347468648524e+01 -2.1964385345728829e+01 -2.2018475799410339e+01 -2.2072618703948137e+01 -2.2126813933181779e+01
-2.2181061360561898e+01 -2.2235360859143157e+01 -2.2289712301596296e+01 -2.2344115560361388e+01 -2.2398570507087584e+01
-2.2453077013515781e+01 -2.2507634950890292e+01 -2.2562244190064348e+01 -2.2616904601590250e+01 -2.2671616055687764e+01
-2.2726378422261405e+01 -2.2781191570901910e+01 -2.2836055370890790e+01 -2.2890969691219198e+01 -2.2945934400583837e+01
-2.3000949367399926e+01 -2.3056014459808921e+01 -2.3111129545678523e+01 -2.3166294492618363e+01 -2.3221509167983868e+01
-2.3276773438880355e+01 -2.3332087172173260e+01 -2.3387450234495873e+01 -2.3442862492249787e+01 -2.3498323811618320e+01
-2.3553834058571510e+01 -2.3609393098863848e+01 -2.3665000798062465e+01 -2.3720657021526677e+01 -2.3776361634436626e+01
-2.3832114501780552e+01 -2.3887915488378439e+01 -2.3943764458878377e+01 -2.3999661277761106e+01 -2.4055605809352301e+01
-2.4111597917826657e+01 -2.4167637467209488e+01 -2.4223724321393092e+01 -2.4279858344124932e+01 -2.4336039399030597e+01
-2.4392267349614485e+01 -2.4448542059257761e+01 -2.4504863391234494e+01 -2.4561231208711206e+01 -2.4617645374753693e+01
-2.4674105752332935e+01 -2.4730612204329191e+01 -2.4787164593538137e+01 -2.4843762782677913e+01 -2.4900406634392539e+01
-2.4957096011252133e+01 -2.5013830775771112e+01 -2.5070610790396586e+01 -2.5127435917366029e+01 -2.5184306019355063e+01
-2.5241220958503845e+01 -2.5298180597080318e+01 -2.5355184797285347e+01 -2.5412233421340488e+01 -2.5469326331427965e+01
1.0000000000000000e+01 1.0801534951171448e+01 1.0617375158244670e+01 1.0436688151228793e+01 1.0259403283230313e+01
1.0085451405601304e+01 9.9147648356938589e+00 9.7472773253084029e+00 9.5829240298195373e+00 9.4216414779654656e+00
9.2633675422888473e+00 9.1080414102110012e+00 8.9556035557302494e+00 8.8059957117284853e+00 8.6591608428743143e+00
8.5150431191084976e+00 8.3735878897014118e+00 8.2347416578681987e+00 8.0984520559319435e+00 7.9646678210201571e+00
7.8333387712866624e+00 7.7044157826449009e+00 7.5778507660022569e+00 7.4535966449878401e+00 7.3316073341564731e+00
7.2118377176659578e+00 7.0942436284134374e+00 6.9787818276207929e+00 6.8654099848621115e+00 6.7540866585212882e+00
6.6447712766712357e+00 6.5374241183666584e+00 6.4320062953403578e+00 6.3284797340946000e+00 6.2268071583795574e+00
6.1269520720505000e+00 6.0288787422946655e+00 5.9325521832211621e+00 5.8379381398054591e+00 5.7450030721804524e+00
5.6537141402680220e+00 5.5640391887418730e+00 5.4759467323160322e+00 5.3894059413519244e+00 5.3043866277758980e+00
5.2208592313018016e+00 5.1387948059520454e+00 5.0581650068698707e+00 4.9789420774166615e+00 4.9010988365496075e+00
4.8246086664712777e+00 4.7494455005478358e+00 4.6755838114879396e+00 4.6029985997776066e+00 4.5316653823665547e+00
4.4615601815980312e+00 4.3926595143797726e+00 4.3249403815888456e+00 4.2583802577058805e+00 4.1929570806747449e+00
4.1286492419807814e+00 4.0654355769448500e+00 4.0032953552278059e+00 3.9422082715398403e+00 3.8821544365521561e+00
3.8231143680053350e+00 3.7650689820101348e+00 3.7079995845373759e+00 3.6518878630917868e+00 3.5967158785670392e+00
3.5424660572764992e+00 3.4891211831576925e+00 3.4366643901451397e+00 3.3850791547089756e+00 3.3343492885547761e+00
3.2844589314827459e+00 3.2353925444006251e+00 3.1871349024889781e+00 3.1396710885139782e+00 3.0929864862859660e+00
3.0470667742591075e+00 3.0018979192706325e+00 2.9574661704151453e+00 2.9137580530522627e+00 2.8707603629438552e+00
2.8284601605189152e+00 2.7868447652620318e+00 2.7459017502243626e+00 2.7056189366531243e+00 2.6659843887374848e+00
2.6269864084689516e+00 2.5886135306124487e+00 2.5508545177868598e+00 2.5136983556521244e+00 2.4771342482006986e+00
2.4411516131510069e+00 2.4057400774406830e+00 2.3708894728175807e+00 2.3365898315265383e+00 2.3028313820887689e+00
2.2696045451740474e+00 2.2368999295609058e+00 2.2047083281853901e+00 2.1730207142748128e+00 2.1418282375653348e+00
2.1111222206016862e+00 2.0808941551166384e+00 2.0511356984892615e+00 2.0218386702793651e+00 1.9929950488372441e+00
1.9645969679867363e+00 1.9366367137799969e+00 1.9091067213223525e+00 1.8819995716660998e+00 1.8553079887710169e+00
1.8290248365311754e+00 1.8031431158652609e+00 1.7776559618705363e+00 1.7525566410377422e+00 1.7278385485262007e+00
1.7034952054980579e+00 1.6795202565098251e+00 1.6559074669601728e+00 1.6326507205929630e+00 1.6097440170540054e+00
1.5871814695006066e+00 1.5649573022624637e+00 1.5430658485530984e+00 1.5215015482308161e+00 1.5002589456071576e+00
1.4793326873036463e+00 1.4587175201534635e+00 1.4384082891492156e+00 1.4183999354343300e+00 1.3986874943378140e+00
1.3792660934511431e+00 1.3601309507466510e+00 1.3412773727360872e+00 1.3227007526689576e+00 1.3043965687692420e+00
1.2863603825102174e+00 1.2685878369261090e+00 1.2510746549598935e+00 1.2338166378466084e+00 1.2168096635312082e+00
1.2000496851203266e+00 1.1835327293670588e+00 1.1672548951882362e+00 1.1512123522134416e+00 1.1354013393647548e+00
1.1198181634671940e+00 1.1044591978884952e+00 1.0893208812080033e+00 1.0743997159140335e+00 1.0596922671287743e+00
1.0451951613605601e+00 1.0309050852825337e+00 1.0168187845373140e+00 1.0029330625671378e+00 9.8924477946872713e-01
9.7575085087259694e-01 9.6244824684604424e-01 9.4933399081931213e-01 9.3640515853477169e-01 9.2365887701803118e-01
9.1109232357100112e-01 8.9870272478628266e-01 8.8648735558209424e-01 8.7444353825798160e-01 8.6256864157006774e-01
8.5086007982605949e-01 8.3931531199913678e-01 8.2793184086057892e-01 8.1670721213066955e-01 8.0563901364725510e-01
7.9472487455206675e-01 7.8396246449372953e-01 7.7334949284779597e-01 7.6288370795296245e-01 7.5256289636327622e-01
7.4238488211596021e-01 7.3234752601463171e-01 7.2244872492728618e-01 7.1268641109915265e-01 7.0305855147956464e-01
6.9356314706317335e-01 6.8419823224459719e-01 6.7496187418651843e-01 6.6585217220099224e-01 6.5686725714346750e-01
6.4800529081937697e-01 6.3926446540306614e-01 6.3064300286859520e-01 6.2213915443241774e-01 6.1375120000748140e-01
6.0547744766850542e-01 5.9731623312840654e-01 5.8926591922531912e-01 5.8132489542033028e-01 5.7349157730523359e-01
5.6576440612064971e-01 5.5814184828379609e-01 5.5062239492602316e-01 5.4320456143964790e-01 5.3588688703414888e-01
5.2866793430138515e-01 5.2154628878946241e-01 5.1452055858552015e-01 5.0758937390678227e-01 5.0075138669987496e-01
4.9400527024841523e-01 4.8734971878830358e-01 4.8078344713093557e-01 4.7430519029390972e-01 4.6791370313911962e-01
4.6160776001828552e-01 4.5538615442535857e-01 4.4924769865602876e-01 4.4319122347399365e-01 4.3721557778390086e-01
4.3131962831075654e-01 4.2550225928575891e-01 4.1976237213834899e-01 4.1409888519439697e-01 4.0851073338028954e-01
4.0299686793291478e-01 3.9755625611540779e-01 3.9218788093843493e-01 3.8689074088692443e-01 3.8166384965228239e-01
3.7650623586976018e-01 3.7141694286095728e-01 3.6639502838144544e-01 3.6143956437320846e-01 3.5654963672189943e-01
3.5172434501901328e-01 3.4696280232829579e-01 3.4226413495707497e-01 3.3762748223177219e-01 3.3305199627774762e-01
3.2853684180349596e-01 3.2408119588894380e-01 3.1968424777773841e-01 3.1534519867361155e-01 3.1106326154055530e-01
3.0683766090688813e-01 3.0266763267296426e-01 2.9855242392259740e-01 2.9449129273803010e-01 2.9048350801842027e-01
2.8652834930171167e-01 2.8262510658997009e-01 2.7877308017785829e-01 2.7497158048439907e-01 2.7121992788793392e-01
2.6751745256412462e-01 2.6386349432690004e-01 2.6025740247248841e-01 2.5669853562631850e-01 2.5318626159266877e-01
2.4971995720718354e-01 2.4629900819206618e-01 2.4292280901402563e-01 2.3959076274464408e-01 2.3630228092351846e-01
2.3305678342376535e-01 2.2985369832002167e-01 2.2669246175884616e-01 2.2357251783148069e-01 2.2049331844890929e-01
2.1745432321916880e-01 2.1445499932688783e-01 2.1149482141498144e-01 2.0857327146848004e-01 2.0568983870040114e-01
2.0284401943976604e-01 2.0003531702142130e-01 1.9726324167804599e-01 1.9452731043391402e-01 1.9182704700056608e-01
1.8916198167437770e-01 1.8653165123588344e-01 1.8393559885088084e-01 1.8137337397327791e-01 1.7884453224959973e-01
1.7634863542523593e-01 1.7388525125224241e-01 1.7145395339876757e-01 1.6905432136008169e-01 1.6668594037109052e-01
1.6434840132036665e-01 1.6204130066570688e-01 1.5976424035106618e-01 1.5751682772493769e-01 1.5529867546015819e-01
1.5310940147503249e-01 1.5094862885580707e-01 1.4881598578045718e-01 1.4671110544379484e-01 1.4463362598375351e-01
1.4258319040899092e-01 1.4055944652768915e-01 1.3856204687748974e-01 1.3659064865666881e-01 1.3464491365640630e-01
1.3272450819420012e-01 1.3082910304837103e-01 1.2895837339364213e-01 1.2711199873781265e-01 1.2528966285941134e-01
1.2349105374641756e-01 1.2171586353596986e-01 1.1996378845505173e-01 1.1823452876211782e-01 1.1652778868972380e-01
1.1484327638801961e-01 1.1318070386919254e-01 1.1153978695277944e-01 1.0992024521187505e-01 1.0832180192018548e-01
1.0674418399992769e-01 1.0518712197055757e-01 1.0365034989832456e-01 1.0213360534659532e-01 1.0063662932698936e-01
9.9159166251264974e-02 9.7700963883974534e-02 9.6261773295835962e-02 9.4841348817873428e-02 9.3439447996227276e-02
9.2055831547688260e-02 9.0690263315935660e-02 8.9342510228411331e-02 8.8012342253891429e-02 8.6699532360706044e-02
8.5403856475584128e-02 8.4125093443141896e-02 8.2863024985984080e-02 8.1617435665412685e-02 8.0388112842733062e-02
7.9174846641143493e-02 7.7977429908209661e-02 7.6795658178889781e-02 7.5629329639115728e-02 7.4478245089953710e-02
7.3342207912248103e-02 7.2221024031827064e-02 7.1114501885225945e-02 7.0022452385910761e-02 6.8944688890991479e-02
6.7881027168450458e-02 6.6831285364849169e-02 6.5795283973477225e-02 6.4772845803028556e-02 6.3763795946680801e-02
6.2767961751651669e-02 6.1785172789201148e-02 6.0815260825057393e-02 5.9858059790287577e-02 5.8913405752569759e-02
5.7981136887894191e-02 5.7061093452682510e-02 5.6153117756271964e-02 5.5257054133826422e-02 5.4372748919636837e-02
5.3500050420772105e-02 5.2638808891131372e-02 5.1788876505864945e-02 5.0950107336147354e-02 5.0122357324306366e-02
4.9305484259319243e-02 4.8499347752635869e-02 4.7703809214351578e-02 4.6918731829721727e-02 4.6143980535982010e-02
4.5379421999521163e-02 4.4624924593352100e-02 4.3880358374905226e-02 4.3145595064128850e-02 4.2420508021892900e-02
4.1704972228691739e-02 4.0998864263647405e-02 4.0302062283785300e-02 3.9614446003616965e-02 3.8935896674993531e-02
3.8266297067221844e-02 3.7605531447481688e-02 3.6953485561492139e-02 3.6310046614435487e-02 3.5675103252157392e-02
3.5048545542616605e-02 3.4430264957581835e-02 3.3820154354582632e-02 3.3218107959093635e-02 3.2624021346983278e-02
3.2037791427166340e-02 3.1459316424514716e-02 3.0888495862994469e-02 3.0325230549015147e-02 2.9769422555015357e-02
2.9220975203265720e-02 2.8679793049885216e-02 2.8145781869070463e-02 2.7618848637539717e-02 2.7098901519172047e-02
2.6585849849867671e-02 2.6079604122596356e-02 2.5580075972643668e-02 2.5087178163056167e-02 2.4600824570288671e-02
2.4120930170012267e-02 2.3647411023137499e-02 2.3180184262011627e-02 2.2719168076792418e-02 2.2264281702001121e-02
2.1815445403263078e-02 2.1372580464206647e-02 2.0935609173537761e-02 2.0504454812290795e-02 2.0079041641240414e-02
1.9659294888467183e-02 1.9245140737102040e-02 1.8836506313223755e-02 1.8433319673904158e-02 1.8035509795416238e-02
1.7643006561603891e-02 1.7255740752380899e-02 1.6873644032391555e-02 1.6496648939823388e-02 1.6124688875347792e-02
1.5757698091213634e-02 1.5395611680482646e-02 1.5038365566394485e-02 1.4685896491875350e-02 1.4338142009180710e-02
1.3995040469664266e-02 1.3656531013687800e-02 1.3322553560652262e-02 1.2993048799157525e-02 1.2667958177290606e-02
1.2347223893038994e-02 1.2030788884814458e-02 1.1718596822117511e-02 1.1410592096299910e-02 1.1106719811460941e-02
1.0806925775450060e-02 1.0511156490982998e-02 1.0219359146882878e-02 9.9314816094114855e-03 9.6474724137328716e-03
9.3672807554677773e-03 9.0908564823645177e-03 8.8181500860711193e-03 8.5491126940134832e-03 8.2836960613733579e-03
8.0218525631707838e-03 7.7635351864465685e-03 7.5086975225370223e-03 7.2572937594544973e-03 7.0092786743605195e-03
6.7646076261301813e-03 6.5232365480138998e-03 6.2851219403949887e-03 6.0502208636273869e-03 5.8184909309735300e-03
5.5898903016277091e-03 5.3643776738254711e-03 5.1419122780385074e-03 4.9224538702609122e-03 4.7059627253757674e-03
4.4923996305976099e-03 4.2817258790122659e-03 4.0739032631877392e-03 3.8688940688609841e-03 3.6666610687164924e-03
3.4671675162341598e-03 3.2703771396105918e-03 3.0762541357672313e-03 2.8847631644254856e-03 2.6958693422570179e-03
2.5095382371091990e-03 2.3257358623008373e-03 2.1444286709895732e-03 1.9655835506104946e-03 1.7891678173820869e-03
1.6151492108847365e-03 1.4434958887007410e-03 1.2741764211267048e-03 1.1071597859496629e-03 9.4241536328815156e-04
7.7991293049733956e-04 6.1962265713921827e-04 4.6151510001329887e-04 3.0556119825198014e-04 1.5173226847375876e-04
0. 0. 0. 0. 0.
0. 5.4383329664155645e-05 9.3944898415945083e-04 4.3251847212615047e-03 1.2334244035325348e-02
2.7137722173468548e-02 5.0697119791449641e-02 8.4607638668976470e-02 1.3001641279549414e-01 1.8759487452762702e-01
2.5754900895683441e-01 3.3965493779430744e-01 4.3331024634064264e-01 5.3759384878832961e-01 6.5132908316254046e-01
7.7314622535699939e-01 9.0154178511424377e-01 1.0349328562818201e+00 1.1717054897399350e+00 1.3102565818166738e+00
1.4490291582473986e+00 1.5865412121263560e+00 1.7214084470448441e+00 1.8523614026473965e+00 1.9782575145276269e+00
2.0980886961566938e+00 2.2109850373516764e+00 2.3162151996095730e+00 2.4131840597491703e+00 2.5014281146549706e+00
2.5806091153285706e+00 2.6505063508648590e+00 2.7110079545661563e+00 2.7621015568249447e+00 2.8038645637913220e+00
2.8364542979766156e+00 2.8600981973448825e+00 2.8750842333755031e+00 2.8817516761559574e+00 2.8804823057701157e+00
2.8716921439699092e+00 2.8558237581894161e+00 2.8333391711552594e+00 2.8047133934346959e+00 2.7704285829676252e+00
2.7309688247181469e+00 2.6868155147671331e+00 2.6384433262347358e+00 2.5863167291097398e+00 2.5308870321738226e+00
2.4725899125317596e+00 2.4118433966060167e+00 2.3490462556752334e+00 2.2845767789603002e+00 2.2187918877813502e+00
2.1520265552815943e+00 2.0845934975626363e+00 2.0167831036919637e+00 1.9488635738636404e+00 1.8810812369508270e+00
1.8136610207193371e+00 1.7468070500507196e+00 1.6807033505858371e+00 1.6155146372447149e+00 1.5513871690559142e+00
1.4884496536383409e+00 1.4268141864958608e+00 1.3665772120042590e+00 1.3078204945836447e+00 1.2506120900523854e+00
1.1950073085502879e+00 1.1410496616995687e+00 1.0887717878420631e+00 1.0381963502565981e+00 9.8933690422003551e-01
9.4219872964247031e-01 8.9677962677415124e-01 8.5307067316958651e-01 8.1105694069385592e-01 7.7071817188505065e-01
7.3202941544290212e-01 6.9496162100761794e-01 6.5948219372701189e-01 6.2555550939233484e-01 5.9314339115629977e-01
5.6220554903693554e-01 5.3269998356387660e-01 5.0458335504023211e-01 4.7781131998032222e-01 4.5233883634534777e-01
4.2812043923464138e-01 4.0511048870905242e-01 3.8326339142174781e-01 3.6253379771729577e-01 3.4287677583286325e-01
3.2424796479760154e-01 3.0660370758054967e-01 2.8990116598452254e-01 2.7409841872609064e-01 2.5915454407883409e-01
2.4502968839369110e-01 2.3168512174254197e-01 2.1908328186436687e-01 2.0718780752542632e-01 1.9596356233750800e-01
1.8537665001230508e-01 1.7539442196444632e-01 1.6598547811304609e-01 1.5711966166996927e-01 1.4876804864444715e-01
1.4090293273673637e-01 1.3349780623990259e-01 1.2652733751724909e-01 1.1996734557434463e-01 1.1379477219856060e-01
1.0798765209582406e-01 1.0252508141368288e-01 9.7387185001678311e-02 9.2555082724584015e-02 8.8010855111109620e-02
8.3737508589961873e-02 7.9718940536826377e-02 7.5939904329596963e-02 7.2385974585237101e-02 6.9043512729294765e-02
6.5899633029043336e-02 6.2942169202580001e-02 6.0159641699440547e-02 5.7541225732930634e-02 5.5076720130546430e-02
5.2756517056398833e-02 5.0571572648238083e-02 4.8513378601664936e-02 4.6573934725081756e-02 4.4745722480991068e-02
4.3021679522073253e-02 4.1395175224364866e-02 3.9859987214311721e-02 3.8410278881708670e-02 3.7040577866510604e-02
3.5745755503880039e-02 3.4521007208912380e-02 3.3361833779917971e-02 3.2264023597108116e-02 3.1223635691821294e-02
3.0236983660070216e-02 2.9300620393215571e-02 2.8411323597772320e-02 2.7566082075896281e-02 2.6762082737777249e-02
2.5996698317105604e-02 2.5267475760840985e-02 2.4572125264713973e-02 2.3908509926274246e-02 2.3274635987705516e-02
2.2668643641204911e-02 2.2088798370316409e-02 2.1533482801290083e-02 2.1001189039288493e-02 2.0490511464994254e-02
2.0000139967999431e-02 1.9528853594166895e-02 1.9075514584991349e-02 1.8639062787818239e-02 1.8218510416650235e-02
1.7812937144080498e-02 1.7421485505751177e-02 1.7043356599549031e-02 1.6677806062561751e-02 1.6324140309613155e-02
1.5981713017976018e-02 1.5649921843605585e-02 1.5328205354974755e-02 1.5016040171312250e-02 1.4712938292708366e-02
1.4418444610242331e-02 1.4132134584901757e-02 1.3853612084676337e-02 1.3582507369821917e-02 1.3318475216818060e-02
1.3061193172097418e-02 1.2810359927147186e-02 1.2565693807050415e-02 1.2326931365025051e-02 1.2093826075940506e-02
1.1866147122233661e-02 1.1643678266026136e-02 1.1426216801644407e-02 1.1213572583084475e-02 1.1005567121320226e-02
1.0802032746662471e-02 1.0602811831688208e-02 1.0407756070544782e-02 1.0216725810699157e-02 1.0029589433467268e-02
9.8462227798860602e-03 9.6665086187306404e-03 9.4903361536790021e-03 9.3176005668363371e-03 9.1482025960089031e-03
8.9820481433065535e-03 8.8190479128032462e-03 8.6591170751522117e-03 8.5021749571883021e-03 8.3481447546937537e-03
8.1969532666261724e-03 8.0485306492223962e-03 7.9028101885199598e-03 7.7597280899136256e-03 7.6192232834934315e-03
7.4812372439735375e-03 7.3457138241272979e-03 7.2125991007052359e-03 7.0818412319012813e-03 6.9533903254870300e-03
6.8271983168139705e-03 6.7032188559211503e-03 6.5814072030662141e-03 6.4617201320263939e-03 6.3441158405819764e-03
6.2285538676237207e-03 6.1149950163802147e-03 6.0034012832899109e-03 5.8937357920846312e-03 5.7859627326801166e-03
5.6800473044990030e-03 5.5759556638887986e-03 5.4736548753111791e-03 5.3731128660109428e-03 5.2742983838981461e-03
5.1771809583849582e-03 5.0817308639591330e-03 4.9879190862693046e-03 4.8957172905357560e-03 4.8050977921015592e-03
4.7160335289582467e-03 4.6284980360953021e-03 4.5424654215287241e-03 4.4579103438822931e-03 4.3748079913988880e-03
4.2931340622749670e-03 4.2128647462132407e-03 4.1339767071033873e-03 4.0564470667446839e-03 3.9802533895282599e-03
3.9053736680121076e-03 3.8317863093158128e-03 3.7594701222811860e-03 3.6884043053326127e-03 3.6185684349951674e-03
3.5499424550168301e-03 3.4825066660512660e-03 3.4162417158645347e-03 3.3511285900229004e-03 3.2871486030347646e-03
3.2242833899080170e-03 3.1625148980992668e-03 3.1018253798278661e-03 3.0421973847258310e-03 2.9836137528083811e-03
2.9260576077371064e-03 2.8695123503632708e-03 2.8139616525287708e-03 2.7593894511106498e-03 2.7057799422959966e-03
2.6531175760685227e-03 2.6013870509009052e-03 2.5505733086344240e-03 2.5006615295404683e-03 2.4516371275501436e-03
2.4034857456453340e-03 2.3561932514012535e-03 2.3097457326723414e-03 2.2641294934160616e-03 2.2193310496436136e-03
2.1753371254977782e-03 2.1321346494441173e-03 2.0897107505768314e-03 2.0480527550303662e-03 2.0071481824917164e-03
1.9669847428123305e-03 1.9275503327108034e-03 1.8888330325659355e-03 1.8508211032951805e-03 1.8135029833145980e-03
1.7768672855772646e-03 1.7409027946878666e-03 1.7055984640891586e-03 1.6709434133182904e-03 1.6369269253308227e-03
1.6035384438881917e-03 1.5707675710093030e-03 1.5386040644797400e-03 1.5070378354209296e-03 1.4760589459142243e-03
1.4456576066784674e-03 1.4158241748004133e-03 1.3865491515145517e-03 1.3578231800324136e-03 1.3296370434173130e-03
1.3019816625059188e-03 1.2748480938728074e-03 1.2482275278369870e-03 1.2221112865106742e-03 1.1964908218862064e-03
1.1713577139624703e-03 1.1467036689077198e-03 1.1225205172586891e-03 1.0988002121543120e-03 1.0755348276031765e-03
1.0527165567835728e-03 1.0303377103750150e-03 1.0083907149206553e-03 9.8686811121878604e-04 9.6576255274356815e-04
9.4506680409354657e-04 9.2477373946662708e-04 9.0487634116191706e-04 8.8536769810608137e-04 8.6624100440530968e-04
8.4748955791986991e-04 8.2910675886310736e-04 8.1108610842155551e-04 7.9342120739794852e-04 7.7610575487466887e-04
7.5913354689786591e-04 7.4249847518158968e-04 7.2619452583109687e-04 7.1021577808524222e-04 6.9455640307671332e-04
6.7921066261025093e-04 6.6417290795844214e-04 6.4943757867335500e-04 6.3499920141575628e-04 6.2085238879914031e-04
6.0699183824991856e-04 5.9341233088238896e-04 5.8010873038847818e-04 5.6707598194186137e-04 5.5430911111587280e-04
5.4180322281523891e-04 5.2955350022104025e-04 5.1755520374872563e-04 5.0580367001857793e-04 4.9429431083891986e-04
4.8302261220136561e-04 4.7198413328763435e-04 4.6117450548847222e-04 4.5058943143359842e-04 4.4022468403297037e-04
4.3007610552883886e-04 4.2013960655883260e-04 4.1041116522908330e-04 4.0088682619821882e-04 3.9156269977118005e-04
3.8243496100300207e-04 3.7349984881274514e-04 3.6475366510662147e-04 3.5619277391102898e-04 3.4781360051482253e-04
3.3961263062063513e-04 3.3158640950565685e-04 3.2373154119109092e-04 3.1604468762060252e-04 3.0852256784754707e-04
3.0116195723081836e-04 2.9395968663908575e-04 2.8691264166377101e-04 2.8001776184017647e-04 2.7327203987681688e-04
2.6667252089326854e-04 2.6021630166557681e-04 2.5390052988028163e-04 2.4772240339593181e-04 2.4167916951265550e-04
2.3576812424967210e-04 2.2998661163024531e-04 2.2433202297460642e-04 2.1880179620031078e-04 2.1339341513026532e-04
2.0810440880823181e-04 2.0293235082175821e-04 1.9787485863260665e-04 1.9292959291436311e-04 1.8809425689761319e-04
1.8336659572205580e-04 1.7874439579616125e-04 1.7422548416372047e-04 1.6980772787763936e-04 1.6548903338088530e-04
1.6126734589430591e-04 1.5714064881157744e-04 1.5310696310104604e-04 1.4916434671449329e-04 1.4531089400280153e-04
1.4154473513841234e-04 1.3786403554466153e-04 1.3426699533172857e-04 1.3075184873951283e-04 1.2731686358694039e-04
1.2396034072819674e-04 1.2068061351527565e-04 1.1747604726729168e-04 1.1434503874632306e-04 1.1128601563955686e-04
1.0829743604811193e-04 1.0537778798212988e-04 1.0252558886227753e-04 9.9739385027582898e-05 9.7017751249615057e-05
9.4359290252773662e-05 9.1762632240957511e-05 8.9226434430383569e-05 8.6749380588361721e-05 8.4330180578390864e-05
8.1967569911181246e-05 7.9660309301724484e-05 7.7407184232279429e-05 7.5207004521348451e-05 7.3058603898526649e-05
7.0960839585107720e-05 6.8912591880629977e-05 6.6912763755002085e-05 6.4960280446513426e-05 6.3054089065330086e-05
6.1193158202771814e-05 5.9376477546041213e-05 5.7603057498502742e-05 5.5871928805544500e-05 5.4182142185708361e-05
5.2532767967318744e-05 5.0922895730446966e-05 4.9351633954125953e-05 4.7818109668823321e-05 4.6321468114150300e-05
4.4860872401664663e-05 4.3435503182825573e-05 4.2044558321957873e-05 4.0687252574273750e-05 3.9362817268785450e-05
3.8070499996214428e-05 3.6809564301621984e-05 3.5579289382025496e-05 3.4378969788611451e-05 3.3207915133769052e-05
3.2065449802711312e-05 3.0950912669766876e-05 2.9863656819185611e-05 2.8803049270468119e-05 2.7768470708167169e-05
2.6759315216115260e-05 2.5774990015931323e-05 2.4814915209964844e-05 2.3878523528387922e-05 2.2965260080560611e-05
2.2074582110528148e-05 2.1205958756658535e-05 2.0358870815317476e-05 1.9532810508535560e-05 1.8727281255713447e-05
1.7941797449145505e-05 1.7175884233475961e-05 1.6429077288930018e-05 1.5700922618341645e-05 1.4990976337865471e-05
1.4298804471386687e-05 1.3623982748522034e-05 1.2966096406226424e-05 1.2324739993882115e-05 1.1699517181902770e-05
1.1090040573734860e-05 1.0495931521266495e-05 9.9168199435395021e-06 9.3523441487842465e-06 8.8021506596591475e-06
8.2658940417265321e-06 7.7432367350197678e-06 7.2338488887770244e-06 6.7374081991923703e-06 6.2535997501888662e-06
5.7821158571569505e-06 5.3226559136389283e-06 4.8749262408651290e-06 4.4386399401326240e-06 4.0135167480073166e-06
3.5992828942305738e-06 3.1956709623667747e-06 2.8024197531120341e-06 2.4192741502208947e-06 2.0459849890155880e-06
1.6823089274468580e-06 1.3280083196495871e-06 9.8285109196557868e-07 6.4661062138351467e-07 3.1906561636122974e-07
0. 0. 0. 0. 0.

bench/POTENTIALS/Cu_u3.eam Symbolic link

@ -0,0 +1 @@
../../potentials/Cu_u3.eam

File diff suppressed because it is too large

bench/POTENTIALS/Ni.adp Symbolic link

@ -0,0 +1 @@
../../potentials/Ni.adp


@ -1,2 +0,0 @@
rc = 4.0
delr = 0.1

bench/POTENTIALS/Ni.meam Symbolic link

@ -0,0 +1 @@
../../potentials/Ni.meam


@ -1,17 +0,0 @@
# Stillinger-Weber parameters for various elements and mixtures
# multiple entries can be added to this file, LAMMPS reads the ones it needs
# these entries are in LAMMPS "metal" units:
# epsilon = eV; sigma = Angstroms
# other quantities are unitless
# format of a single entry (one or more lines):
# element 1, element 2, element 3,
# epsilon, sigma, a, lambda, gamma, costheta0, A, B, p, q, tol
# Here are the original parameters in metal units, for Silicon from:
#
# Stillinger and Weber, Phys. Rev. B, v. 31, p. 5262, (1985)
#
Si Si Si 2.1683 2.0951 1.80 21.0 1.20 -0.333333333333
7.049556277 0.6022245584 4.0 0.0 0.0
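# For reference, a minimal usage sketch showing how an entry such as the Si one
# above is selected from a Stillinger-Weber file. This is an editor-added
# illustration, not part of the original benchmark; it assumes the file is saved
# as Si.sw, metal units, and a small diamond-lattice silicon box.
units           metal
atom_style      atomic
lattice         diamond 5.431
region          box block 0 4 0 4 0 4
create_box      1 box
create_atoms    1 box
mass            1 28.086
pair_style      sw
pair_coeff      * * Si.sw Si        # last argument maps atom type 1 to the 'Si Si Si' entry in the file
velocity        all create 1000.0 12345
fix             1 all nve
timestep        0.001
run             100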

bench/POTENTIALS/Si.sw Symbolic link

@ -0,0 +1 @@
../../potentials/Si.sw


@ -1,16 +0,0 @@
# Tersoff parameters for various elements and mixtures
# multiple entries can be added to this file, LAMMPS reads the ones it needs
# these entries are in LAMMPS "metal" units:
# A,B = eV; lambda1,lambda2,lambda3 = 1/Angstroms; R,D = Angstroms
# other quantities are unitless
# This is the Si parameterization from a particular Tersoff paper:
# J. Tersoff, PRB, 37, 6991 (1988)
# See the SiCGe.tersoff file for different Si variants.
# format of a single entry (one or more lines):
# element 1, element 2, element 3,
# m, gamma, lambda3, c, d, costheta0, n, beta, lambda2, B, R, D, lambda1, A
Si Si Si 3.0 1.0 1.3258 4.8381 2.0417 0.0000 22.956
0.33675 1.3258 95.373 3.0 0.2 3.2394 3264.7
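# The Tersoff entry is selected the same way, only the pair settings change.
# Editor-added sketch under the same assumptions as the Si.sw example above
# (file saved as Si.tersoff, diamond-lattice silicon).
units           metal
atom_style      atomic
lattice         diamond 5.431
region          box block 0 4 0 4 0 4
create_box      1 box
create_atoms    1 box
mass            1 28.086
pair_style      tersoff
pair_coeff      * * Si.tersoff Si   # element name picks the 'Si Si Si' entry above
run             0                   # a single energy/force evaluation is enough to validate the entry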

bench/POTENTIALS/Si.tersoff Symbolic link

@ -0,0 +1 @@
../../potentials/Si.tersoff

File diff suppressed because it is too large


@ -1,24 +0,0 @@
# bulk Ni in MEAM
units metal
atom_style atomic
lattice fcc 3.52
region box block 0 20 0 20 0 20
create_box 1 box
create_atoms 1 box
pair_style meam
pair_coeff * * library.meam Ni4 Ni.meam Ni4
velocity all create 1600.0 376847 loop geom
neighbor 1.0 bin
neigh_modify delay 5 every 1
fix 1 all nve
timestep 0.005
thermo 50
run 100


@ -1,22 +0,0 @@
# ReaxFF benchmark: simulation of PETN crystal, replicated unit cell
units real
atom_style charge
read_data data.reax
#replicate 7 8 10
replicate 7 8 5
velocity all create 300.0 9999
pair_style reax
pair_coeff * * ffield.reax 1 2 3 4
timestep 0.1
fix 2 all nve
thermo 10
thermo_style custom step temp ke pe pxx pyy pzz etotal
run 100


@ -11,7 +11,7 @@ neighbor 0.5 bin
neigh_modify delay 5 every 1
pair_style rebo
pair_coeff * * CH.airebo C H
pair_coeff * * CH.rebo C H
velocity all create 300.0 761341


@ -1,162 +0,0 @@
# meam data from vax files fcc,bcc,dia 11/4/92
# elt lat z ielement atwt
# alpha b0 b1 b2 b3 alat esub asub
# t0 t1 t2 t3 rozero ibar
'Sn5' 'dia' 4. 50 118.
5.09 5.00 16.0 04.0 5.0 6.483 3.14 1.00
1.0 2.00 5.756 -0.30 1. 0
'Sn' 'dia' 4. 50 118.
5.09 5.42 8.0 5.0 6.0 6.483 3.14 1.12
1.0 3.0 5.707 +0.30 1. 0
'Cu' 'fcc' 12. 29 63.54
5.10570729 3.634 2.20 6 2.20 3.62 3.54 1.07
1.0 3.13803254 2.49438711 2.95269237 1. 0
'Ag' 'fcc' 12. 47 107.870
5.89222008 4.456 2.20 6 2.20 4.08 2.85 1.06
1.0 5.54097609 2.45015783 1.28843988 1. 0
'Au' 'fcc' 12. 79 196.967
6.34090112 5.449 2.20 6 2.20 4.07 3.93 1.04
1.0 1.58956328 1.50776392 2.60609758 1. 0
'Ni1' 'fcc' 12. 28 58.71
4.99 2.45 2.20 6 2.20 3.52 4.45 1.10
1.0 3.57 1.60 3.70 1.0 0
'Ni2' 'fcc' 12. 28 58.71
4.99 2.45 2.20 6 2.20 3.52 4.45 1.10
1.0 3.57 1.60 3.70 1.0 3
'Ni3' 'fcc' 12. 28 58.71
4.99 2.45 1.50 6 1.50 3.52 4.45 1.10
1.0 3.57 1.60 3.70 1.0 3
'Ni4' 'fcc' 12. 28 58.71
4.99 2.45 1.50 6 1.50 3.52 4.45 1.10
1.0 3.57 1.60 3.70 1.0 0
'Ni' 'fcc' 12. 28 58.71
4.99 2.64 1.50 4.50 1.50 3.52 4.45 1.10
1.0 1.692 4.987 3.683 1.0 1
'Nix' 'fcc' 12. 28 58.71
4.99 2.64 1.50 4.50 1.50 3.52 4.45 1.10
1.0 0.00 0.000 3.683 1.0 1
'Ni' 'fcc' 12. 28 58.71
4.99 3.25 0.80 4 1.50 3.52 4.45 1.07
1.0 -4.052 13.14 3.786 1.0 1
'Pd' 'fcc' 12. 46 106.4
6.43230473 4.975 2.20 6 2.20 3.89 3.91 1.01
1.0 2.33573516 1.38343023 4.47989049 1. 0
'Pt' 'fcc' 12. 78 195.09
6.44221724 4.673 2.20 6 2.20 3.92 5.77 1.04
1.0 2.73335406 -1.3759593 3.29322278 1. 0
'Al' 'fcc' 12. 13 26.9815
4.61 2.21 2.20 6.0 2.20 4.05 3.58 1.07
1.0 -1.78 -2.21 8.01 0.6 0
'Al' 'fcc' 12. 13 26.9815
4.69 1.56 4.00 5.5 0.60 4.05 3.36 1.09
1.0 -0.251 -3.450 8.298 0.6 1
'Al' 'fcc' 12. 13 26.9815
4.69 1.58 1.00 6.0 0.60 4.05 3.36 1.09
1.0 -0.808 -2.614 8.298 0.6 1
'Pb' 'fcc' 12. 82 207.19
6.0564428 5.306 2.20 6 2.20 4.95 2.04 1.01
1.0 2.74022352 3.06323991 1.2 1. 0
'Rh' 'fcc' 12. 45 102.905
6.0045385 1.131 1.00 2 1.00 3.8 5.75 1.05
1.0 2.9900 4.60231784 4.8 1. 0
'Ir' 'fcc' 12. 77 192.2
6.52315787 1.13 1.00 2 1.00 3.84 6.93 1.05
1.0 1.50000 8.09942666 4.8 1. 0
'Li' 'bcc' 8. 3 6.939
2.97244804 1.425 1.00 1.00169907 1.00 3.509 1.65 0.87
1.0 0.26395017 0.44431129 -0.2 1. 0
'Na' 'bcc' 8. 11 22.9898
3.64280541 2.313 1.00 1.00173951 1.00 4.291 1.13 0.9
1.0 3.55398839 0.68807569 -0.2 1. 0
'K' 'bcc' 8. 19 39.102
3.90128376 2.687 1.00 1.00186667 1.00 5.344 0.94 0.92
1.0 5.09756981 0.69413264 -0.2 1. 0
'V' 'bcc' 8. 23 50.942
4.83265262 4.113 1.00 1.00095022 1.00 3.04 5.3 1
1.0 4.20161301 4.09946561 -1 1. 0
'Nb' 'bcc' 8. 41 92.906
4.79306197 4.374 1.00 1.00101441 1.00 3.301 7.47 1
1.0 3.75762849 3.82514598 -1 1. 0
'Ta' 'bcc' 8. 73 180.948
4.89528669 3.709 1.00 1.00099783 1.00 3.303 8.09 0.99
1.0 6.08617812 3.35255804 -2.9 1. 0
'Cr' 'bcc' 8. 24 51.996
5.12169218 3.224 1.00 1.00048646 1.00 2.885 4.1 0.94
1.0 -0.207535 12.2600006 -1.9 1. 0
'Mo' 'bcc' 8. 42 95.94
5.84872871 4.481 1.00 1.00065204 1.00 3.15 6.81 0.99
1.0 3.47727181 9.48582009 -2.9 1. 0
'W' 'bcc' 8. 74 183.85
5.62777409 3.978 1.00 1.00065894 1.00 3.165 8.66 0.98
1.0 3.16353338 8.24586928 -2.7 1. 0
'WL' 'bcc' 8 74 183.85
5.6831 6.54 1 1 1 3.1639 8.66 0.4
1 -0.6 0.3 -8.7 1 3
'Fe' 'bcc' 8. 26 55.847
5.07292627 2.935 1.00 1.00080073 1.00 2.866 4.29 0.89
1.0 5.13579244 4.12042448 -2.7 1. 0
'Si' 'dia' 4. 14 28.086
4.87 4.8 4.8 4.8 4.8 5.431 4.63 1.
1.0 3.30 5.105 -0.80 1. 1
'Si97' 'dia' 4. 14 28.086
4.87 4.4 5.5 5.5 5.5 5.431 4.63 1.
1.0 3.13 4.47 -1.80 2.05 0
'Si92' 'dia' 4. 14 28.086
4.87 4.4 5.5 5.5 5.5 5.431 4.63 1.
1.0 3.13 4.47 -1.80 2.35 0
'Six' 'dia' 4 14 28.086
4.87 4.4 5.5 5.5 5.5 5.431 4.63 1.0
1.0 2.05 4.47 -1.8 2.05 0
'Sixb' 'dia' 4 14 28.086
4.87 4.4 5.5 5.5 5.5 5.431 4.63 1.0
1.0 2.05 4.47 -1.8 2.5 0
'Mg' 'hcp' 12. 12 24.305
5.45 2.70 0.0 0.35 3.0 3.20 1.55 1.11
1.0 8.00 04.1 -02.0 1.0 0
'C' 'dia' 4. 6 12.0111
4.38 4.10 4.200 5.00 3.00 3.567 7.37 1.000
1.0 5.0 9.34 -1.00 2.25 1
'C' 'dia' 4. 6 12.0111
4.38 5.20 3.87 4.00 4.50 3.567 7.37 1.278
1.0 15. 2.09 -6.00 2.5 1
'C' 'dia' 4. 6 12.0111
4.38 4.50 4.00 3.50 4.80 3.567 7.37 1.00
1.0 10.5 1.54 -8.75 3.2 1
'C' 'dia' 4. 6 12.0111
4.38 3.30 2.80 1.50 3.20 3.567 7.37 1.00
1.0 10.3 1.54 -8.80 2.5 1
'C' 'dia' 4. 6 12.0111
4.38 4.60 3.45 4.00 4.20 3.567 7.37 1.061
1.0 15.0 1.74 -8.00 2.5 1
'C' 'dia' 4. 6 12.0111
4.38 4.50 4.00 3.50 4.80 3.567 7.37 1.00
1.0 10.5 1.54 -8.75 3.2 1
'h' 'dim' 1. 1 1.0079
2.96 2.70 3.5 3.4 3.4 0.74 2.235 2.27
1.0 0.19 0.00 0.00 20.00 0
'h' 'dim' 1. 1 1.0079
2.96 2.00 4.0 4.0 0.0 0.74 2.235 1.00
1.0 -0.60 -0.80 -0.0 01.0 1
'H' 'dim' 1. 1 1.0079
2.96 2.96 3.0 3.0 3.0 0.74 2.235 2.50
1.0 0.20 -0.10 0.0 0.5 0
'H' 'dim' 1. 1 1.0079
2.96 2.0 3.0 4.0 0.0 0.74 2.225 1.00
1.0 -0.5 -1.00 0.0 0.15 1
'H' 'dim' 1. 1 1.0079
2.96 2.00 2.0 2.0 2.0 0.74 2.235 1.00
1.0 -0.60 -0.80 -0.0 01.0 2
'Hni' 'dim' 1. 1 1.0079
2.96 2.96 3.0 3.0 3.0 0.74 2.235 2.50
1.0 0.2 -0.1 0.0 0.5 0
'Hni' 'dim' 1. 1 1.0079
2.96 2.96 3.0 2.0 3.0 0.74 2.235 36.4
1.0 0.2 6.0 0.0 22.8 0
'Vac' 'fcc' 12. 1 1.
0 0 0.0 0 0.0 1E+08 0 1
0 0 0 0 1. 0
'zz' 'zzz' 99. 1 1.
0 0 0.0 0 0.0 0. 0. 0.
0 0 0 0 1. 0
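# For reference, the quoted name in the first column of each block above is what
# a LAMMPS input selects; the in.meam benchmark earlier in this diff does exactly
# this for the 'Ni4' entry. Editor-added sketch, assuming the library shown here
# is saved as library.meam next to the Ni.meam parameter file.
pair_style      meam
pair_coeff      * * library.meam Ni4 Ni.meam Ni4   # library entry 'Ni4' plus element parameters from Ni.meam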


@ -0,0 +1 @@
../../potentials/library.meam


@ -1,75 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# bulk Ni in ADP
units metal
atom_style atomic
lattice fcc 3.52
Lattice spacing in x,y,z = 3.52 3.52 3.52
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (70.4 70.4 70.4)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 32000 atoms
Time spent = 0.00184107 secs
pair_style adp
pair_coeff * * Ni.adp Ni
velocity all create 1600.0 376847 loop geom
neighbor 1.0 bin
neigh_modify delay 5 every 1
fix 1 all nve
timestep 0.005
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.168
ghost atom cutoff = 6.168
binsize = 3.084, bins = 23 23 23
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair adp, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 27.56 | 27.56 | 27.56 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1600 -142400 0 -135782.09 20259.105
100 793.05485 -139023.13 0 -135742.9 32175.694
Loop time of 11.9854 on 1 procs for 100 steps with 32000 atoms
Performance: 3.604 ns/day, 6.659 hours/ns, 8.344 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 11.447 | 11.447 | 11.447 | 0.0 | 95.51
Neigh | 0.48465 | 0.48465 | 0.48465 | 0.0 | 4.04
Comm | 0.019317 | 0.019317 | 0.019317 | 0.0 | 0.16
Output | 0.00011063 | 0.00011063 | 0.00011063 | 0.0 | 0.00
Modify | 0.025319 | 0.025319 | 0.025319 | 0.0 | 0.21
Other | | 0.009125 | | | 0.08
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 19911 ave 19911 max 19911 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 1.33704e+06 ave 1.33704e+06 max 1.33704e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 1337035
Ave neighs/atom = 41.7823
Neighbor list builds = 13
Dangerous builds = 0
Total wall time: 0:00:12


@ -1,75 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# bulk Ni in ADP
units metal
atom_style atomic
lattice fcc 3.52
Lattice spacing in x,y,z = 3.52 3.52 3.52
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (70.4 70.4 70.4)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 32000 atoms
Time spent = 0.000586033 secs
pair_style adp
pair_coeff * * Ni.adp Ni
velocity all create 1600.0 376847 loop geom
neighbor 1.0 bin
neigh_modify delay 5 every 1
fix 1 all nve
timestep 0.005
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.168
ghost atom cutoff = 6.168
binsize = 3.084, bins = 23 23 23
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair adp, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 12.45 | 12.45 | 12.45 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1600 -142400 0 -135782.09 20259.105
100 793.05485 -139023.13 0 -135742.9 32175.694
Loop time of 3.49752 on 4 procs for 100 steps with 32000 atoms
Performance: 12.352 ns/day, 1.943 hours/ns, 28.592 timesteps/s
99.1% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 3.3203 | 3.3261 | 3.3317 | 0.3 | 95.10
Neigh | 0.12544 | 0.12594 | 0.12634 | 0.1 | 3.60
Comm | 0.024059 | 0.03001 | 0.035574 | 2.9 | 0.86
Output | 4.8161e-05 | 6.8128e-05 | 0.00011802 | 0.0 | 0.00
Modify | 0.010666 | 0.010841 | 0.011109 | 0.2 | 0.31
Other | | 0.00457 | | | 0.13
Nlocal: 8000 ave 8044 max 7960 min
Histogram: 1 0 0 1 0 1 0 0 0 1
Nghost: 9131 ave 9171 max 9087 min
Histogram: 1 0 0 0 1 0 1 0 0 1
Neighs: 334259 ave 336108 max 332347 min
Histogram: 1 0 0 1 0 0 1 0 0 1
Total # of neighbors = 1337035
Ave neighs/atom = 41.7823
Neighbor list builds = 13
Dangerous builds = 0
Total wall time: 0:00:03


@ -1,87 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# AIREBO polyethylene benchmark
units metal
atom_style atomic
read_data data.airebo
orthogonal box = (-2.1 -2.1 0) to (2.1 2.1 25.579)
1 by 1 by 1 MPI processor grid
reading atoms ...
60 atoms
replicate 17 16 2
orthogonal box = (-2.1 -2.1 0) to (69.3 65.1 51.158)
1 by 1 by 1 MPI processor grid
32640 atoms
Time spent = 0.00154901 secs
neighbor 0.5 bin
neigh_modify delay 5 every 1
pair_style airebo 3.0 1 1
pair_coeff * * CH.airebo C H
velocity all create 300.0 761341
fix 1 all nve
timestep 0.0005
thermo 10
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 10.7
ghost atom cutoff = 10.7
binsize = 5.35, bins = 14 13 10
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair airebo, perpetual
attributes: full, newton on, ghost
pair build: full/bin/ghost
stencil: full/ghost/bin/3d
bin: standard
Per MPI rank memory allocation (min/avg/max) = 106.4 | 106.4 | 106.4 Mbytes
Step Temp E_pair E_mol TotEng Press
0 300 -139299.7 0 -138034.03 7998.7287
10 161.33916 -138711.85 0 -138031.17 33242.273
20 208.59505 -138911.77 0 -138031.73 -3199.2371
30 139.73485 -138617.76 0 -138028.23 10890.529
40 142.15332 -138628.03 0 -138028.3 14614.022
50 114.21945 -138509.87 0 -138027.98 24700.885
60 164.9432 -138725.08 0 -138029.19 35135.722
70 162.14928 -138714.86 0 -138030.77 5666.4609
80 157.17575 -138694.81 0 -138031.7 19838.161
90 196.16354 -138859.65 0 -138032.05 -7942.9718
100 178.30378 -138783.8 0 -138031.55 31012.15
Loop time of 60.9424 on 1 procs for 100 steps with 32640 atoms
Performance: 0.071 ns/day, 338.569 hours/ns, 1.641 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 55.299 | 55.299 | 55.299 | 0.0 | 90.74
Neigh | 5.5777 | 5.5777 | 5.5777 | 0.0 | 9.15
Comm | 0.027658 | 0.027658 | 0.027658 | 0.0 | 0.05
Output | 0.0011463 | 0.0011463 | 0.0011463 | 0.0 | 0.00
Modify | 0.024684 | 0.024684 | 0.024684 | 0.0 | 0.04
Other | | 0.012 | | | 0.02
Nlocal: 32640 ave 32640 max 32640 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 48190 ave 48190 max 48190 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 2.22179e+07 ave 2.22179e+07 max 2.22179e+07 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 22217870
Ave neighs/atom = 680.695
Neighbor list builds = 8
Dangerous builds = 0
Total wall time: 0:01:02


@ -1,87 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# AIREBO polyethylene benchmark
units metal
atom_style atomic
read_data data.airebo
orthogonal box = (-2.1 -2.1 0) to (2.1 2.1 25.579)
1 by 1 by 4 MPI processor grid
reading atoms ...
60 atoms
replicate 17 16 2
orthogonal box = (-2.1 -2.1 0) to (69.3 65.1 51.158)
2 by 2 by 1 MPI processor grid
32640 atoms
Time spent = 0.00070262 secs
neighbor 0.5 bin
neigh_modify delay 5 every 1
pair_style airebo 3.0 1 1
pair_coeff * * CH.airebo C H
velocity all create 300.0 761341
fix 1 all nve
timestep 0.0005
thermo 10
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 10.7
ghost atom cutoff = 10.7
binsize = 5.35, bins = 14 13 10
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair airebo, perpetual
attributes: full, newton on, ghost
pair build: full/bin/ghost
stencil: full/ghost/bin/3d
bin: standard
Per MPI rank memory allocation (min/avg/max) = 29.37 | 29.75 | 30.13 Mbytes
Step Temp E_pair E_mol TotEng Press
0 300 -139299.7 0 -138034.03 7998.7287
10 161.33916 -138711.85 0 -138031.17 33242.273
20 208.59505 -138911.77 0 -138031.73 -3199.2371
30 139.73485 -138617.76 0 -138028.23 10890.529
40 142.15332 -138628.03 0 -138028.3 14614.022
50 114.21945 -138509.87 0 -138027.98 24700.885
60 164.9432 -138725.08 0 -138029.19 35135.722
70 162.14928 -138714.86 0 -138030.77 5666.4609
80 157.17575 -138694.81 0 -138031.7 19838.161
90 196.16354 -138859.65 0 -138032.05 -7942.9718
100 178.30378 -138783.8 0 -138031.55 31012.15
Loop time of 16.768 on 4 procs for 100 steps with 32640 atoms
Performance: 0.258 ns/day, 93.156 hours/ns, 5.964 timesteps/s
99.2% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 14.538 | 14.589 | 14.654 | 1.3 | 87.00
Neigh | 1.8853 | 1.8992 | 1.9159 | 0.8 | 11.33
Comm | 0.18073 | 0.25896 | 0.31361 | 10.6 | 1.54
Output | 0.00050807 | 0.0040419 | 0.0077746 | 5.6 | 0.02
Modify | 0.0094635 | 0.0096973 | 0.0099616 | 0.2 | 0.06
Other | | 0.007481 | | | 0.04
Nlocal: 8160 ave 8174 max 8146 min
Histogram: 1 0 1 0 0 0 0 1 0 1
Nghost: 22614.5 ave 22629 max 22601 min
Histogram: 1 1 0 0 0 0 0 1 0 1
Neighs: 0 ave 0 max 0 min
Histogram: 4 0 0 0 0 0 0 0 0 0
FullNghs: 5.55447e+06 ave 5.56557e+06 max 5.54193e+06 min
Histogram: 1 0 0 1 0 0 0 1 0 1
Total # of neighbors = 22217870
Ave neighs/atom = 680.695
Neighbor list builds = 8
Dangerous builds = 0
Total wall time: 0:00:17
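
The two AIREBO logs report loop times of 60.9424 s on 1 MPI task and 16.768 s on 4 MPI tasks for the same 100-step run. A minimal Python sketch (variable names are illustrative, not part of any LAMMPS tooling) for turning such a pair of loop times into speedup and parallel efficiency:

# speedup/efficiency from the loop times quoted in the two AIREBO logs above
t1 = 60.9424      # seconds, 1 MPI task
t4 = 16.768       # seconds, 4 MPI tasks
nprocs = 4

speedup = t1 / t4               # ~3.63x
efficiency = speedup / nprocs   # ~0.91, i.e. ~91% parallel efficiency
print(f"speedup = {speedup:.2f}x, efficiency = {efficiency:.1%}")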


@ -1,82 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# bulk CdTe via BOP
units metal
atom_style atomic
lattice custom 6.82884 basis 0.0 0.0 0.0 basis 0.25 0.25 0.25 basis 0.0 0.5 0.5 basis 0.25 0.75 0.75 basis 0.5 0.0 0.5 basis 0.75 0.25 0.75 basis 0.5 0.5 0.0 basis 0.75 0.75 0.25
Lattice spacing in x,y,z = 6.82884 6.82884 6.82884
region box block 0 20 0 20 0 10
create_box 2 box
Created orthogonal box = (0 0 0) to (136.577 136.577 68.2884)
1 by 1 by 1 MPI processor grid
create_atoms 1 box basis 2 2 basis 4 2 basis 6 2 basis 8 2
Created 32000 atoms
Time spent = 0.00191426 secs
pair_style bop
pair_coeff * * CdTe.bop.table Cd Te
Reading potential file CdTe.bop.table with DATE: 2012-06-25
Reading potential file CdTe.bop.table with DATE: 2012-06-25
mass 1 112.4
mass 2 127.6
comm_modify cutoff 14.7
velocity all create 1000.0 376847 loop geom
neighbor 0.1 bin
neigh_modify delay 5 every 1
fix 1 all nve
timestep 0.001
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 5
ghost atom cutoff = 14.7
binsize = 2.5, bins = 55 55 28
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair bop, perpetual
attributes: full, newton on, ghost
pair build: full/bin/ghost
stencil: full/ghost/bin/3d
bin: standard
Per MPI rank memory allocation (min/avg/max) = 19.39 | 19.39 | 19.39 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1000 -69539.487 0 -65403.292 3473.2595
100 572.16481 -67769.936 0 -65403.35 1838.6993
Loop time of 24.1696 on 1 procs for 100 steps with 32000 atoms
Performance: 0.357 ns/day, 67.138 hours/ns, 4.137 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 23.355 | 23.355 | 23.355 | 0.0 | 96.63
Neigh | 0.7545 | 0.7545 | 0.7545 | 0.0 | 3.12
Comm | 0.026978 | 0.026978 | 0.026978 | 0.0 | 0.11
Output | 0.0001111 | 0.0001111 | 0.0001111 | 0.0 | 0.00
Modify | 0.024145 | 0.024145 | 0.024145 | 0.0 | 0.10
Other | | 0.009326 | | | 0.04
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 35071 ave 35071 max 35071 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 141288 ave 141288 max 141288 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 141288
Ave neighs/atom = 4.41525
Neighbor list builds = 14
Dangerous builds = 0
Total wall time: 0:00:24


@ -1,82 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# bulk CdTe via BOP
units metal
atom_style atomic
lattice custom 6.82884 basis 0.0 0.0 0.0 basis 0.25 0.25 0.25 basis 0.0 0.5 0.5 basis 0.25 0.75 0.75 basis 0.5 0.0 0.5 basis 0.75 0.25 0.75 basis 0.5 0.5 0.0 basis 0.75 0.75 0.25
Lattice spacing in x,y,z = 6.82884 6.82884 6.82884
region box block 0 20 0 20 0 10
create_box 2 box
Created orthogonal box = (0 0 0) to (136.577 136.577 68.2884)
2 by 2 by 1 MPI processor grid
create_atoms 1 box basis 2 2 basis 4 2 basis 6 2 basis 8 2
Created 32000 atoms
Time spent = 0.000597477 secs
pair_style bop
pair_coeff * * CdTe.bop.table Cd Te
Reading potential file CdTe.bop.table with DATE: 2012-06-25
Reading potential file CdTe.bop.table with DATE: 2012-06-25
mass 1 112.4
mass 2 127.6
comm_modify cutoff 14.7
velocity all create 1000.0 376847 loop geom
neighbor 0.1 bin
neigh_modify delay 5 every 1
fix 1 all nve
timestep 0.001
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 5
ghost atom cutoff = 14.7
binsize = 2.5, bins = 55 55 28
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair bop, perpetual
attributes: full, newton on, ghost
pair build: full/bin/ghost
stencil: full/ghost/bin/3d
bin: standard
Per MPI rank memory allocation (min/avg/max) = 8.497 | 8.497 | 8.497 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1000 -69539.487 0 -65403.292 3473.2595
100 572.16481 -67769.936 0 -65403.35 1838.6993
Loop time of 6.50033 on 4 procs for 100 steps with 32000 atoms
Performance: 1.329 ns/day, 18.056 hours/ns, 15.384 timesteps/s
99.2% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 5.7879 | 5.975 | 6.1607 | 5.4 | 91.92
Neigh | 0.27603 | 0.27621 | 0.27647 | 0.0 | 4.25
Comm | 0.049869 | 0.23531 | 0.42241 | 27.2 | 3.62
Output | 4.9829e-05 | 5.9724e-05 | 8.5592e-05 | 0.0 | 0.00
Modify | 0.0089927 | 0.0090921 | 0.0092406 | 0.1 | 0.14
Other | | 0.004665 | | | 0.07
Nlocal: 8000 ave 8006 max 7994 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Nghost: 15171 ave 15177 max 15165 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Neighs: 0 ave 0 max 0 min
Histogram: 4 0 0 0 0 0 0 0 0 0
FullNghs: 35322 ave 35412 max 35267 min
Histogram: 1 0 1 1 0 0 0 0 0 1
Total # of neighbors = 141288
Ave neighs/atom = 4.41525
Neighbor list builds = 14
Dangerous builds = 0
Total wall time: 0:00:06
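
Every log in this set ends with a "Loop time of ... on ... procs for ... steps with ... atoms" line, which is the figure compared between the serial and 4-task runs. A rough Python sketch for pulling that field out of such logs; the file name log.lammps is only an assumption, use whatever paths the logs were saved under:

import re

# extract the Loop time summary from a LAMMPS log (file name is an assumption)
pattern = re.compile(r"Loop time of (?P<time>\S+) on (?P<procs>\d+) procs "
                     r"for (?P<steps>\d+) steps with (?P<atoms>\d+) atoms")

with open("log.lammps") as f:
    for line in f:
        m = pattern.search(line)
        if m:
            print(f"{m['procs']} procs: {float(m['time']):.3f} s "
                  f"({m['steps']} steps, {m['atoms']} atoms)")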


@ -1,94 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# SiO2 for COMB potential
units metal
atom_style charge
read_data data.comb
triclinic box = (0 0 0) to (74.58 74.58 83.064) with tilt (0 0 0)
1 by 1 by 1 MPI processor grid
reading atoms ...
32400 atoms
mass 1 28.0855
group type1 type 1
10800 atoms in group type1
compute charge1 type1 property/atom q
compute q1 type1 reduce ave c_charge1
mass 2 16.00
group type2 type 2
21600 atoms in group type2
compute charge2 type2 property/atom q
compute q2 type2 reduce ave c_charge2
pair_style comb
pair_coeff * * ffield.comb Si O
neighbor 0.5 bin
neigh_modify every 10 delay 0 check yes
timestep 0.0002
thermo_style custom step temp etotal pe evdwl ecoul c_q1 c_q2 press vol
thermo_modify norm yes
velocity all create 300.0 3482028
fix 1 all nvt temp 300.0 300.0 0.1
fix 2 all qeq/comb 10 0.001 file fq.out
thermo 10
run 100
Neighbor list info ...
update every 10 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 12.5
ghost atom cutoff = 12.5
binsize = 6.25, bins = 12 12 14
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair comb, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
Per MPI rank memory allocation (min/avg/max) = 106.2 | 106.2 | 106.2 Mbytes
Step Temp TotEng PotEng E_vdwl E_coul c_q1 c_q2 Press Volume
0 300 -6.8032038 -6.8419806 4.6274455 -11.469426 2.8875895 -1.4437947 13386.415 462016.62
10 273.21913 -6.8032489 -6.8385642 4.6221303 -11.460695 2.8872353 -1.4436176 13076.442 462016.62
20 242.0051 -6.803367 -6.8346477 4.6208311 -11.455479 2.8870176 -1.4435087 12080.423 462016.62
30 214.5618 -6.8034588 -6.8311922 4.620067 -11.451259 2.8870575 -1.4435287 10307.876 462016.62
40 198.14521 -6.8035174 -6.8291289 4.6202931 -11.449422 2.8874526 -1.4437263 7765.732 462016.62
50 197.15561 -6.8035468 -6.8290303 4.6219602 -11.450991 2.8883366 -1.4441683 4432.7134 462016.62
60 212.04532 -6.8035584 -6.8309666 4.6260476 -11.457014 2.8896425 -1.4448212 324.71226 462016.62
70 239.37999 -6.8035665 -6.8345078 4.6322984 -11.466806 2.8912723 -1.4456361 -4497.0492 462016.62
80 272.98301 -6.803583 -6.8388677 4.6404093 -11.479277 2.8932784 -1.4466392 -9896.1704 462016.62
90 305.77651 -6.8036184 -6.8431419 4.6512736 -11.494415 2.8953109 -1.4476554 -15675.983 462016.62
100 331.58255 -6.8036753 -6.8465344 4.662727 -11.509261 2.897273 -1.4486365 -21675.515 462016.62
Loop time of 517.206 on 1 procs for 100 steps with 32400 atoms
Performance: 0.003 ns/day, 7183.417 hours/ns, 0.193 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 114.18 | 114.18 | 114.18 | 0.0 | 22.08
Neigh | 0.47558 | 0.47558 | 0.47558 | 0.0 | 0.09
Comm | 0.030611 | 0.030611 | 0.030611 | 0.0 | 0.01
Output | 0.0024922 | 0.0024922 | 0.0024922 | 0.0 | 0.00
Modify | 402.51 | 402.51 | 402.51 | 0.0 | 77.82
Other | | 0.006137 | | | 0.00
Nlocal: 32400 ave 32400 max 32400 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 42518 ave 42518 max 42518 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 1.85317e+07 ave 1.85317e+07 max 1.85317e+07 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 18531740
Ave neighs/atom = 571.967
Neighbor list builds = 1
Dangerous builds = 0
Total wall time: 0:09:18


@ -1,94 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# SiO2 for COMB potential
units metal
atom_style charge
read_data data.comb
triclinic box = (0 0 0) to (74.58 74.58 83.064) with tilt (0 0 0)
1 by 2 by 2 MPI processor grid
reading atoms ...
32400 atoms
mass 1 28.0855
group type1 type 1
10800 atoms in group type1
compute charge1 type1 property/atom q
compute q1 type1 reduce ave c_charge1
mass 2 16.00
group type2 type 2
21600 atoms in group type2
compute charge2 type2 property/atom q
compute q2 type2 reduce ave c_charge2
pair_style comb
pair_coeff * * ffield.comb Si O
neighbor 0.5 bin
neigh_modify every 10 delay 0 check yes
timestep 0.0002
thermo_style custom step temp etotal pe evdwl ecoul c_q1 c_q2 press vol
thermo_modify norm yes
velocity all create 300.0 3482028
fix 1 all nvt temp 300.0 300.0 0.1
fix 2 all qeq/comb 10 0.001 file fq.out
thermo 10
run 100
Neighbor list info ...
update every 10 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 12.5
ghost atom cutoff = 12.5
binsize = 6.25, bins = 12 12 14
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair comb, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
Per MPI rank memory allocation (min/avg/max) = 32.11 | 32.11 | 32.11 Mbytes
Step Temp TotEng PotEng E_vdwl E_coul c_q1 c_q2 Press Volume
0 300 -6.8032038 -6.8419806 4.6274455 -11.469426 2.8875895 -1.4437947 13386.415 462016.62
10 273.21913 -6.8032489 -6.8385642 4.6221303 -11.460695 2.8872353 -1.4436176 13076.442 462016.62
20 242.0051 -6.803367 -6.8346477 4.6208311 -11.455479 2.8870176 -1.4435087 12080.423 462016.62
30 214.5618 -6.8034588 -6.8311922 4.620067 -11.451259 2.8870575 -1.4435287 10307.876 462016.62
40 198.14521 -6.8035174 -6.8291289 4.6202931 -11.449422 2.8874526 -1.4437263 7765.732 462016.62
50 197.15561 -6.8035468 -6.8290303 4.6219602 -11.450991 2.8883366 -1.4441683 4432.7134 462016.62
60 212.04532 -6.8035584 -6.8309666 4.6260476 -11.457014 2.8896425 -1.4448212 324.71226 462016.62
70 239.37999 -6.8035665 -6.8345078 4.6322984 -11.466806 2.8912723 -1.4456361 -4497.0492 462016.62
80 272.98301 -6.803583 -6.8388677 4.6404093 -11.479277 2.8932784 -1.4466392 -9896.1704 462016.62
90 305.77651 -6.8036184 -6.8431419 4.6512736 -11.494415 2.8953109 -1.4476554 -15675.983 462016.62
100 331.58255 -6.8036753 -6.8465344 4.662727 -11.509261 2.897273 -1.4486365 -21675.515 462016.62
Loop time of 131.437 on 4 procs for 100 steps with 32400 atoms
Performance: 0.013 ns/day, 1825.518 hours/ns, 0.761 timesteps/s
99.2% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 28.847 | 28.874 | 28.913 | 0.5 | 21.97
Neigh | 0.10981 | 0.11084 | 0.11145 | 0.2 | 0.08
Comm | 0.28924 | 0.32866 | 0.3556 | 4.5 | 0.25
Output | 0.0010426 | 0.0011656 | 0.0015302 | 0.6 | 0.00
Modify | 102.12 | 102.12 | 102.12 | 0.0 | 77.69
Other | | 0.003455 | | | 0.00
Nlocal: 8100 ave 8110 max 8090 min
Histogram: 1 0 0 0 1 1 0 0 0 1
Nghost: 20725.2 ave 20772 max 20694 min
Histogram: 1 1 0 0 1 0 0 0 0 1
Neighs: 0 ave 0 max 0 min
Histogram: 4 0 0 0 0 0 0 0 0 0
FullNghs: 4.63294e+06 ave 4.63866e+06 max 4.62736e+06 min
Histogram: 1 0 0 0 1 1 0 0 0 1
Total # of neighbors = 18531740
Ave neighs/atom = 571.967
Neighbor list builds = 1
Dangerous builds = 0
Total wall time: 0:02:21
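
The Performance line can be reproduced from the timestep echoed in the input and the reported timesteps/s. A quick arithmetic check (a sketch, not LAMMPS output) against the 4-task COMB log above, where metal units put the 0.0002 timestep in ps:

# cross-check the COMB Performance line: 0.0002 ps/step at 0.761 steps/s
dt_ps = 0.0002
steps_per_s = 0.761

ns_per_day   = steps_per_s * dt_ps * 86400 / 1000.0       # ~0.013 ns/day
hours_per_ns = (1000.0 / dt_ps) / steps_per_s / 3600.0     # ~1825 hours/ns
print(f"{ns_per_day:.3f} ns/day, {hours_per_ns:.1f} hours/ns")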


@ -1,75 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# DPD benchmark
units lj
atom_style atomic
comm_modify mode single vel yes
lattice fcc 3.0
Lattice spacing in x,y,z = 1.10064 1.10064 1.10064
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (22.0128 22.0128 22.0128)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 32000 atoms
Time spent = 0.0018332 secs
mass 1 1.0
velocity all create 1.0 87287 loop geom
pair_style dpd 1.0 1.0 928948
pair_coeff 1 1 25.0 4.5
neighbor 0.5 bin
neigh_modify delay 0 every 1
fix 1 all nve
timestep 0.04
run 100
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 1.5
ghost atom cutoff = 1.5
binsize = 0.75, bins = 30 30 30
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair dpd, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 11.32 | 11.32 | 11.32 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1 3.6872574 0 5.1872105 28.880274
100 1.0246036 4.5727353 0 6.1095927 23.859969
Loop time of 3.09286 on 1 procs for 100 steps with 32000 atoms
Performance: 111741.340 tau/day, 32.333 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 1.5326 | 1.5326 | 1.5326 | 0.0 | 49.55
Neigh | 1.4771 | 1.4771 | 1.4771 | 0.0 | 47.76
Comm | 0.044292 | 0.044292 | 0.044292 | 0.0 | 1.43
Output | 0.00011039 | 0.00011039 | 0.00011039 | 0.0 | 0.00
Modify | 0.022322 | 0.022322 | 0.022322 | 0.0 | 0.72
Other | | 0.01648 | | | 0.53
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 14981 ave 14981 max 14981 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 660587 ave 660587 max 660587 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 660587
Ave neighs/atom = 20.6433
Neighbor list builds = 50
Dangerous builds = 0
Total wall time: 0:00:03


@ -1,75 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# DPD benchmark
units lj
atom_style atomic
comm_modify mode single vel yes
lattice fcc 3.0
Lattice spacing in x,y,z = 1.10064 1.10064 1.10064
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (22.0128 22.0128 22.0128)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 32000 atoms
Time spent = 0.000589132 secs
mass 1 1.0
velocity all create 1.0 87287 loop geom
pair_style dpd 1.0 1.0 928948
pair_coeff 1 1 25.0 4.5
neighbor 0.5 bin
neigh_modify delay 0 every 1
fix 1 all nve
timestep 0.04
run 100
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 1.5
ghost atom cutoff = 1.5
binsize = 0.75, bins = 30 30 30
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair dpd, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 3.874 | 3.874 | 3.874 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1 3.6872574 0 5.1872105 28.911346
100 1.0219182 4.5817845 0 6.1146139 23.803115
Loop time of 0.83904 on 4 procs for 100 steps with 32000 atoms
Performance: 411899.440 tau/day, 119.184 timesteps/s
99.3% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.39605 | 0.40101 | 0.40702 | 0.6 | 47.79
Neigh | 0.38186 | 0.38494 | 0.38738 | 0.4 | 45.88
Comm | 0.032073 | 0.039688 | 0.045953 | 2.9 | 4.73
Output | 4.4823e-05 | 5.4002e-05 | 7.844e-05 | 0.0 | 0.01
Modify | 0.0056572 | 0.0056887 | 0.0057547 | 0.1 | 0.68
Other | | 0.007655 | | | 0.91
Nlocal: 8000 ave 8014 max 7986 min
Histogram: 1 1 0 0 0 0 0 0 1 1
Nghost: 6744 ave 6764 max 6726 min
Histogram: 1 0 0 1 0 1 0 0 0 1
Neighs: 165107 ave 166433 max 163419 min
Histogram: 1 0 1 0 0 0 0 0 0 2
Total # of neighbors = 660428
Ave neighs/atom = 20.6384
Neighbor list builds = 50
Dangerous builds = 0
Total wall time: 0:00:00
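
The %total column of the MPI task timing breakdown is each section's average time divided by the loop time. A small sketch recomputing it from the single-task DPD log above:

# recompute %total for the 1-task DPD breakdown (times in seconds)
loop_time = 3.09286
sections = {"Pair": 1.5326, "Neigh": 1.4771, "Comm": 0.044292,
            "Output": 0.00011039, "Modify": 0.022322, "Other": 0.01648}

for name, t in sections.items():
    print(f"{name:<7s}{100.0 * t / loop_time:6.2f} %")   # Pair ~49.55, Neigh ~47.76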


@ -1,74 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# bulk Cu in EAM
units metal
atom_style atomic
lattice fcc 3.615
Lattice spacing in x,y,z = 3.615 3.615 3.615
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (72.3 72.3 72.3)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 32000 atoms
Time spent = 0.00185037 secs
pair_style eam
pair_coeff 1 1 Cu_u3.eam
velocity all create 1600.0 376847 loop geom
neighbor 1.0 bin
neigh_modify delay 5 every 1
fix 1 all nve
timestep 0.005
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 5.95
ghost atom cutoff = 5.95
binsize = 2.975, bins = 25 25 25
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair eam, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 16.83 | 16.83 | 16.83 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1600 -113280 0 -106662.09 18703.573
100 801.832 -109957.3 0 -106640.77 51322.821
Loop time of 3.92295 on 1 procs for 100 steps with 32000 atoms
Performance: 11.012 ns/day, 2.179 hours/ns, 25.491 timesteps/s
99.6% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 3.3913 | 3.3913 | 3.3913 | 0.0 | 86.45
Neigh | 0.48107 | 0.48107 | 0.48107 | 0.0 | 12.26
Comm | 0.01729 | 0.01729 | 0.01729 | 0.0 | 0.44
Output | 0.00011253 | 0.00011253 | 0.00011253 | 0.0 | 0.00
Modify | 0.024349 | 0.024349 | 0.024349 | 0.0 | 0.62
Other | | 0.008847 | | | 0.23
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 19909 ave 19909 max 19909 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 1.20778e+06 ave 1.20778e+06 max 1.20778e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 1207784
Ave neighs/atom = 37.7433
Neighbor list builds = 13
Dangerous builds = 0
Total wall time: 0:00:03


@ -1,74 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# bulk Cu in EAM
units metal
atom_style atomic
lattice fcc 3.615
Lattice spacing in x,y,z = 3.615 3.615 3.615
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (72.3 72.3 72.3)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 32000 atoms
Time spent = 0.000595331 secs
pair_style eam
pair_coeff 1 1 Cu_u3.eam
velocity all create 1600.0 376847 loop geom
neighbor 1.0 bin
neigh_modify delay 5 every 1
fix 1 all nve
timestep 0.005
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 5.95
ghost atom cutoff = 5.95
binsize = 2.975, bins = 25 25 25
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair eam, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 7.381 | 7.381 | 7.381 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1600 -113280 0 -106662.09 18703.573
100 801.832 -109957.3 0 -106640.77 51322.821
Loop time of 1.04497 on 4 procs for 100 steps with 32000 atoms
Performance: 41.341 ns/day, 0.581 hours/ns, 95.697 timesteps/s
99.4% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.88513 | 0.88724 | 0.89191 | 0.3 | 84.91
Neigh | 0.12418 | 0.12458 | 0.12511 | 0.1 | 11.92
Comm | 0.015654 | 0.020543 | 0.022984 | 2.0 | 1.97
Output | 4.8637e-05 | 5.8711e-05 | 8.6546e-05 | 0.0 | 0.01
Modify | 0.0085199 | 0.0085896 | 0.0086446 | 0.1 | 0.82
Other | | 0.003959 | | | 0.38
Nlocal: 8000 ave 8008 max 7993 min
Histogram: 2 0 0 0 0 0 0 0 1 1
Nghost: 9130.25 ave 9138 max 9122 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Neighs: 301946 ave 302392 max 301360 min
Histogram: 1 0 0 0 1 0 0 0 1 1
Total # of neighbors = 1207784
Ave neighs/atom = 37.7433
Neighbor list builds = 13
Dangerous builds = 0
Total wall time: 0:00:01
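
Both EAM runs integrate with fix nve, so the TotEng column doubles as a quick energy-conservation check. A sketch using the step-0 and step-100 values from the serial EAM log above:

# relative total-energy drift over the 100-step NVE run (metal units, eV)
e0, e100 = -106662.09, -106640.77

drift = abs(e100 - e0) / abs(e0)   # ~2.0e-4 over 100 steps at dt = 0.005 ps
print(f"relative TotEng drift: {drift:.2e}")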


@ -1,97 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# eFF benchmark of H plasma
units electron
atom_style electron
read_data data.eff
orthogonal box = (0 0 0) to (41.9118 41.9118 41.9118)
1 by 1 by 1 MPI processor grid
reading atoms ...
32000 atoms
reading velocities ...
32000 velocities
pair_style eff/cut 12
pair_coeff * *
neigh_modify one 6000 page 60000
comm_modify vel yes
compute effTemp all temp/eff
thermo 5
thermo_style custom step etotal pe ke temp press
thermo_modify temp effTemp
fix 1 all nve/eff
run 100
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 6000, page size: 60000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair eff/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 370.9 | 370.9 | 370.9 Mbytes
Step TotEng PotEng KinEng Temp Press
0 4046.5854 796.63785 3249.9475 42763.133 4.4764483e+12
5 4046.5854 796.95799 3249.6274 42758.92 4.4728546e+12
10 4046.5854 797.71165 3248.8737 42749.004 4.4690821e+12
15 4046.5854 798.8949 3247.6905 42733.435 4.4651331e+12
20 4046.5854 800.50332 3246.0821 42712.271 4.4610102e+12
25 4046.5854 802.53206 3244.0534 42685.577 4.456716e+12
30 4046.5855 804.97579 3241.6097 42653.422 4.4522535e+12
35 4046.5855 807.82873 3238.7567 42615.883 4.4476257e+12
40 4046.5855 811.08467 3235.5008 42573.041 4.4428357e+12
45 4046.5855 814.73696 3231.8485 42524.984 4.437887e+12
50 4046.5855 818.77851 3227.807 42471.806 4.432783e+12
55 4046.5855 823.20183 3223.3837 42413.603 4.4275273e+12
60 4046.5856 827.99901 3218.5866 42350.482 4.4221238e+12
65 4046.5856 833.16176 3213.4238 42282.55 4.4165764e+12
70 4046.5856 838.68137 3207.9042 42209.923 4.4108891e+12
75 4046.5856 844.54877 3202.0369 42132.719 4.4050662e+12
80 4046.5857 850.75454 3195.8311 42051.064 4.399112e+12
85 4046.5857 857.28886 3189.2968 41965.085 4.393031e+12
90 4046.5857 864.14162 3182.4441 41874.916 4.3868277e+12
95 4046.5857 871.30234 3175.2834 41780.695 4.3805068e+12
100 4046.5858 878.76023 3167.8255 41682.563 4.3740731e+12
Loop time of 323.031 on 1 procs for 100 steps with 32000 atoms
Performance: 26.747 fs/day, 0.897 hours/fs, 0.310 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 322.78 | 322.78 | 322.78 | 0.0 | 99.92
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0.1876 | 0.1876 | 0.1876 | 0.0 | 0.06
Output | 0.0027025 | 0.0027025 | 0.0027025 | 0.0 | 0.00
Modify | 0.032475 | 0.032475 | 0.032475 | 0.0 | 0.01
Other | | 0.02538 | | | 0.01
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 114349 ave 114349 max 114349 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 8.10572e+07 ave 8.10572e+07 max 8.10572e+07 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 81057159
Ave neighs/atom = 2533.04
Neighbor list builds = 0
Dangerous builds = 0
Please see the log.cite file for references relevant to this simulation
Total wall time: 0:05:27


@ -1,97 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# eFF benchmark of H plasma
units electron
atom_style electron
read_data data.eff
orthogonal box = (0 0 0) to (41.9118 41.9118 41.9118)
1 by 2 by 2 MPI processor grid
reading atoms ...
32000 atoms
reading velocities ...
32000 velocities
pair_style eff/cut 12
pair_coeff * *
neigh_modify one 6000 page 60000
comm_modify vel yes
compute effTemp all temp/eff
thermo 5
thermo_style custom step etotal pe ke temp press
thermo_modify temp effTemp
fix 1 all nve/eff
run 100
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 6000, page size: 60000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair eff/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 101.4 | 104.8 | 108.3 Mbytes
Step TotEng PotEng KinEng Temp Press
0 4046.5854 796.63785 3249.9475 42763.133 4.4764483e+12
5 4046.5854 796.95799 3249.6274 42758.92 4.4728546e+12
10 4046.5854 797.71165 3248.8737 42749.004 4.4690821e+12
15 4046.5854 798.8949 3247.6905 42733.435 4.4651331e+12
20 4046.5854 800.50332 3246.0821 42712.271 4.4610102e+12
25 4046.5854 802.53206 3244.0534 42685.577 4.456716e+12
30 4046.5855 804.97579 3241.6097 42653.422 4.4522535e+12
35 4046.5855 807.82873 3238.7567 42615.883 4.4476257e+12
40 4046.5855 811.08467 3235.5008 42573.041 4.4428357e+12
45 4046.5855 814.73696 3231.8485 42524.984 4.437887e+12
50 4046.5855 818.77851 3227.807 42471.806 4.432783e+12
55 4046.5855 823.20183 3223.3837 42413.603 4.4275273e+12
60 4046.5856 827.99901 3218.5866 42350.482 4.4221238e+12
65 4046.5856 833.16176 3213.4238 42282.55 4.4165764e+12
70 4046.5856 838.68137 3207.9042 42209.923 4.4108891e+12
75 4046.5856 844.54877 3202.0369 42132.719 4.4050662e+12
80 4046.5857 850.75454 3195.8311 42051.064 4.399112e+12
85 4046.5857 857.28886 3189.2968 41965.085 4.393031e+12
90 4046.5857 864.14162 3182.4441 41874.916 4.3868277e+12
95 4046.5857 871.30234 3175.2834 41780.695 4.3805068e+12
100 4046.5858 878.76023 3167.8255 41682.563 4.3740731e+12
Loop time of 90.1636 on 4 procs for 100 steps with 32000 atoms
Performance: 95.826 fs/day, 0.250 hours/fs, 1.109 timesteps/s
99.1% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 83.772 | 86.516 | 89.593 | 29.5 | 95.95
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0.51677 | 3.5934 | 6.3368 | 144.6 | 3.99
Output | 0.0012872 | 0.0018208 | 0.0024981 | 1.0 | 0.00
Modify | 0.017231 | 0.018405 | 0.01983 | 0.8 | 0.02
Other | | 0.03431 | | | 0.04
Nlocal: 8000 ave 8112 max 7875 min
Histogram: 1 1 0 0 0 0 0 0 0 2
Nghost: 65589 ave 66004 max 65177 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Neighs: 2.02643e+07 ave 2.11126e+07 max 1.94058e+07 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Total # of neighbors = 81057159
Ave neighs/atom = 2533.04
Neighbor list builds = 0
Dangerous builds = 0
Please see the log.cite file for references relevant to this simulation
Total wall time: 0:01:31
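
The 4-task eFF breakdown shows a noticeably uneven Pair section (min 83.772 s, max 89.593 s). One common, LAMMPS-independent way to quantify this is the max/avg ratio; a sketch:

# load-imbalance factor for the Pair section of the 4-task eFF run
t_min, t_avg, t_max = 83.772, 86.516, 89.593

imbalance = t_max / t_avg   # ~1.036: the slowest rank is ~3.6% above average
print(f"pair-time imbalance factor: {imbalance:.3f}")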


@ -1,77 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# EIM benchmark
# if run long enough (e.g. 1M steps), the unstable CsCl form of a NaCl single
# crystal can be annealed to the correct NaCl type of NaCl polycrystals
units metal
atom_style atomic
read_data data.eim
orthogonal box = (-0.5 -0.5 -0.5) to (71.58 143.66 71.58)
1 by 1 by 1 MPI processor grid
reading atoms ...
32000 atoms
reading velocities ...
32000 velocities
pair_style eim
pair_coeff * * Na Cl ffield.eim Na Cl
neighbor 0.3 bin
neigh_modify delay 0 every 1
timestep 0.0005
thermo_style custom step pe pxx pyy pzz temp
velocity all create 1400.0 43454 dist gaussian mom yes
fix int all npt temp 1400.0 1400.0 0.1 aniso 0.0 0.0 0.1
# anneal in much longer run
#fix int all npt temp 1400.0 300.0 0.1 aniso 0.0 0.0 0.1
run 100
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 7.906
ghost atom cutoff = 7.906
binsize = 3.953, bins = 19 37 19
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair eim, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 17.72 | 17.72 | 17.72 Mbytes
Step PotEng Pxx Pyy Pzz Temp
0 -90567.58 -117883.6 -118039.81 -117894.07 1400
100 -91997.012 -4104.7052 -4138.276 -4145.8936 944.10136
Loop time of 11.4536 on 1 procs for 100 steps with 32000 atoms
Performance: 0.377 ns/day, 63.631 hours/ns, 8.731 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 9.8277 | 9.8277 | 9.8277 | 0.0 | 85.80
Neigh | 1.484 | 1.484 | 1.484 | 0.0 | 12.96
Comm | 0.028584 | 0.028584 | 0.028584 | 0.0 | 0.25
Output | 0.00023127 | 0.00023127 | 0.00023127 | 0.0 | 0.00
Modify | 0.09791 | 0.09791 | 0.09791 | 0.0 | 0.85
Other | | 0.0152 | | | 0.13
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 21505 ave 21505 max 21505 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 1.5839e+06 ave 1.5839e+06 max 1.5839e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 1583901
Ave neighs/atom = 49.4969
Neighbor list builds = 37
Dangerous builds = 12
Total wall time: 0:00:11


@ -1,77 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# EIM benchmark
# if run long enough (e.g. 1M steps), the unstable CsCl form of a NaCl single
# crystal can be annealed to the correct NaCl type of NaCl polycrystals
units metal
atom_style atomic
read_data data.eim
orthogonal box = (-0.5 -0.5 -0.5) to (71.58 143.66 71.58)
1 by 4 by 1 MPI processor grid
reading atoms ...
32000 atoms
reading velocities ...
32000 velocities
pair_style eim
pair_coeff * * Na Cl ffield.eim Na Cl
neighbor 0.3 bin
neigh_modify delay 0 every 1
timestep 0.0005
thermo_style custom step pe pxx pyy pzz temp
velocity all create 1400.0 43454 dist gaussian mom yes
fix int all npt temp 1400.0 1400.0 0.1 aniso 0.0 0.0 0.1
# anneal in much longer run
#fix int all npt temp 1400.0 300.0 0.1 aniso 0.0 0.0 0.1
run 100
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 7.906
ghost atom cutoff = 7.906
binsize = 3.953, bins = 19 37 19
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair eim, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 7.285 | 7.285 | 7.285 Mbytes
Step PotEng Pxx Pyy Pzz Temp
0 -90567.58 -117883.6 -118039.81 -117894.07 1400
100 -91997.012 -4104.7052 -4138.276 -4145.8936 944.10136
Loop time of 3.12061 on 4 procs for 100 steps with 32000 atoms
Performance: 1.384 ns/day, 17.337 hours/ns, 32.045 timesteps/s
98.8% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 2.6504 | 2.6583 | 2.6685 | 0.5 | 85.18
Neigh | 0.36996 | 0.37847 | 0.39396 | 1.5 | 12.13
Comm | 0.037041 | 0.040586 | 0.04504 | 1.4 | 1.30
Output | 7.081e-05 | 8.75e-05 | 0.00012994 | 0.0 | 0.00
Modify | 0.029286 | 0.035978 | 0.047942 | 3.9 | 1.15
Other | | 0.007206 | | | 0.23
Nlocal: 8000 ave 8000 max 8000 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Nghost: 9460.25 ave 9469 max 9449 min
Histogram: 1 0 0 0 0 1 0 1 0 1
Neighs: 395975 ave 397239 max 394616 min
Histogram: 1 0 0 1 0 0 0 1 0 1
Total # of neighbors = 1583901
Ave neighs/atom = 49.4969
Neighbor list builds = 37
Dangerous builds = 12
Total wall time: 0:00:03


@ -1,84 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# FENE beadspring benchmark
units lj
atom_style bond
special_bonds fene
read_data data.fene
orthogonal box = (-16.796 -16.796 -16.796) to (16.796 16.796 16.796)
1 by 1 by 1 MPI processor grid
reading atoms ...
32000 atoms
reading velocities ...
32000 velocities
scanning bonds ...
1 = max bonds/atom
reading bonds ...
31680 bonds
2 = max # of 1-2 neighbors
2 = max # of special neighbors
neighbor 0.4 bin
neigh_modify delay 5 every 1
bond_style fene
bond_coeff 1 30.0 1.5 1.0 1.0
pair_style lj/cut 1.12
pair_modify shift yes
pair_coeff 1 1 1.0 1.0 1.12
fix 1 all nve
fix 2 all langevin 1.0 1.0 10.0 904297
timestep 0.012
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 1.52
ghost atom cutoff = 1.52
binsize = 0.76, bins = 45 45 45
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 13.18 | 13.18 | 13.18 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0.97029772 0.44484087 20.494523 22.394765 4.6721833
100 0.9729966 0.4361122 20.507698 22.40326 4.6548819
Loop time of 0.66285 on 1 procs for 100 steps with 32000 atoms
Performance: 156415.445 tau/day, 150.864 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.13075 | 0.13075 | 0.13075 | 0.0 | 19.73
Bond | 0.046363 | 0.046363 | 0.046363 | 0.0 | 6.99
Neigh | 0.3172 | 0.3172 | 0.3172 | 0.0 | 47.85
Comm | 0.016553 | 0.016553 | 0.016553 | 0.0 | 2.50
Output | 0.00010395 | 0.00010395 | 0.00010395 | 0.0 | 0.02
Modify | 0.14515 | 0.14515 | 0.14515 | 0.0 | 21.90
Other | | 0.006728 | | | 1.02
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 9493 ave 9493 max 9493 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 155873 ave 155873 max 155873 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 155873
Ave neighs/atom = 4.87103
Ave special neighs/atom = 1.98
Neighbor list builds = 20
Dangerous builds = 20
Total wall time: 0:00:00


@ -1,84 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# FENE beadspring benchmark
units lj
atom_style bond
special_bonds fene
read_data data.fene
orthogonal box = (-16.796 -16.796 -16.796) to (16.796 16.796 16.796)
1 by 2 by 2 MPI processor grid
reading atoms ...
32000 atoms
reading velocities ...
32000 velocities
scanning bonds ...
1 = max bonds/atom
reading bonds ...
31680 bonds
2 = max # of 1-2 neighbors
2 = max # of special neighbors
neighbor 0.4 bin
neigh_modify delay 5 every 1
bond_style fene
bond_coeff 1 30.0 1.5 1.0 1.0
pair_style lj/cut 1.12
pair_modify shift yes
pair_coeff 1 1 1.0 1.0 1.12
fix 1 all nve
fix 2 all langevin 1.0 1.0 10.0 904297
timestep 0.012
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 1.52
ghost atom cutoff = 1.52
binsize = 0.76, bins = 45 45 45
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 4.605 | 4.605 | 4.606 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0.97029772 0.44484087 20.494523 22.394765 4.6721833
100 0.9736748 0.44378481 20.502389 22.40664 4.7809557
Loop time of 0.184782 on 4 procs for 100 steps with 32000 atoms
Performance: 561093.346 tau/day, 541.178 timesteps/s
98.4% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.033747 | 0.034391 | 0.035036 | 0.3 | 18.61
Bond | 0.012475 | 0.012579 | 0.012812 | 0.1 | 6.81
Neigh | 0.083916 | 0.083953 | 0.084022 | 0.0 | 45.43
Comm | 0.012409 | 0.01363 | 0.014534 | 0.7 | 7.38
Output | 4.1246e-05 | 5.9545e-05 | 0.00010443 | 0.0 | 0.03
Modify | 0.036675 | 0.037876 | 0.038357 | 0.4 | 20.50
Other | | 0.002294 | | | 1.24
Nlocal: 8000 ave 8023 max 7978 min
Histogram: 1 0 0 0 1 1 0 0 0 1
Nghost: 4158.75 ave 4175 max 4145 min
Histogram: 1 0 1 0 0 0 1 0 0 1
Neighs: 38940 ave 39184 max 38640 min
Histogram: 1 0 0 0 0 1 1 0 0 1
Total # of neighbors = 155760
Ave neighs/atom = 4.8675
Ave special neighs/atom = 1.98
Neighbor list builds = 20
Dangerous builds = 20
Total wall time: 0:00:00


@ -1,103 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# Gay-Berne benchmark
# biaxial ellipsoid mesogens in isotropic phase
# shape: 2 1.5 1
# cutoff 4.0 with skin 0.8
# NPT, T=2.4, P=8.0
units lj
atom_style ellipsoid
# creation
#lattice sc 0.22
#region box block 0 32 0 32 0 32
#create_box 1 box
#create_atoms 1 box
#set group all quat/random 982381
read_data data.gb
orthogonal box = (2.19575 2.19575 2.19575) to (50.8124 50.8124 50.8124)
1 by 1 by 1 MPI processor grid
reading atoms ...
32768 atoms
reading velocities ...
32768 velocities
32768 ellipsoids
compute rot all temp/asphere
group spheroid type 1
32768 atoms in group spheroid
variable dof equal count(spheroid)+3
compute_modify rot extra ${dof}
compute_modify rot extra 32771
velocity all create 2.4 41787 loop geom
pair_style gayberne 1.0 3.0 1.0 4.0
pair_coeff 1 1 1.0 1.0 1.0 0.5 0.2 1.0 0.5 0.2
neighbor 0.8 bin
timestep 0.002
thermo 20
# equilibration
#fix 1 all npt/asphere temp 2.4 2.4 0.1 iso 5.0 8.0 0.1
#compute_modify 1_temp extra ${dof}
#run 100
#write_restart tmp.restart
fix 1 all npt/asphere temp 2.4 2.4 0.2 iso 8.0 8.0 0.2
run 100
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 4.8
ghost atom cutoff = 4.8
binsize = 2.4, bins = 21 21 21
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair gayberne, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 28.91 | 28.91 | 28.91 Mbytes
Step Temp E_pair E_mol TotEng Press Volume
0 2.4 0.50438568 0 4.1042758 6.7818168 114909.09
20 2.7357818 0.26045557 0 4.364003 6.8299368 111715.16
40 2.9201296 0.22570735 0 4.605768 7.0767907 109473.23
60 2.9820039 0.19733812 0 4.6702075 7.1507065 108393.77
80 3.0148529 0.15114819 0 4.6732895 7.1699502 107672.24
100 3.0206703 0.10567623 0 4.6365433 7.154345 107184.83
Loop time of 43.7894 on 1 procs for 100 steps with 32768 atoms
Performance: 394.616 tau/day, 2.284 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 42.881 | 42.881 | 42.881 | 0.0 | 97.93
Neigh | 0.35071 | 0.35071 | 0.35071 | 0.0 | 0.80
Comm | 0.065153 | 0.065153 | 0.065153 | 0.0 | 0.15
Output | 0.00054383 | 0.00054383 | 0.00054383 | 0.0 | 0.00
Modify | 0.47852 | 0.47852 | 0.47852 | 0.0 | 1.09
Other | | 0.01337 | | | 0.03
Nlocal: 32768 ave 32768 max 32768 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 25669 ave 25669 max 25669 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 2.30433e+06 ave 2.30433e+06 max 2.30433e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 2304332
Ave neighs/atom = 70.3226
Neighbor list builds = 6
Dangerous builds = 3
Please see the log.cite file for references relevant to this simulation
Total wall time: 0:00:44


@ -1,103 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# Gay-Berne benchmark
# biaxial ellipsoid mesogens in isotropic phase
# shape: 2 1.5 1
# cutoff 4.0 with skin 0.8
# NPT, T=2.4, P=8.0
units lj
atom_style ellipsoid
# creation
#lattice sc 0.22
#region box block 0 32 0 32 0 32
#create_box 1 box
#create_atoms 1 box
#set group all quat/random 982381
read_data data.gb
orthogonal box = (2.19575 2.19575 2.19575) to (50.8124 50.8124 50.8124)
1 by 2 by 2 MPI processor grid
reading atoms ...
32768 atoms
reading velocities ...
32768 velocities
32768 ellipsoids
compute rot all temp/asphere
group spheroid type 1
32768 atoms in group spheroid
variable dof equal count(spheroid)+3
compute_modify rot extra ${dof}
compute_modify rot extra 32771
velocity all create 2.4 41787 loop geom
pair_style gayberne 1.0 3.0 1.0 4.0
pair_coeff 1 1 1.0 1.0 1.0 0.5 0.2 1.0 0.5 0.2
neighbor 0.8 bin
timestep 0.002
thermo 20
# equilibration
#fix 1 all npt/asphere temp 2.4 2.4 0.1 iso 5.0 8.0 0.1
#compute_modify 1_temp extra ${dof}
#run 100
#write_restart tmp.restart
fix 1 all npt/asphere temp 2.4 2.4 0.2 iso 8.0 8.0 0.2
run 100
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 4.8
ghost atom cutoff = 4.8
binsize = 2.4, bins = 21 21 21
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair gayberne, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 11.78 | 11.78 | 11.78 Mbytes
Step Temp E_pair E_mol TotEng Press Volume
0 2.4 0.50438568 0 4.1042758 6.7818168 114909.09
20 2.7357818 0.26045557 0 4.364003 6.8299368 111715.16
40 2.9201296 0.22570735 0 4.605768 7.0767907 109473.23
60 2.9820039 0.19733812 0 4.6702075 7.1507065 108393.77
80 3.0148529 0.15114819 0 4.6732895 7.1699502 107672.24
100 3.0206703 0.10567623 0 4.6365433 7.154345 107184.83
Loop time of 11.3124 on 4 procs for 100 steps with 32768 atoms
Performance: 1527.522 tau/day, 8.840 timesteps/s
99.2% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 10.778 | 10.849 | 10.934 | 2.0 | 95.90
Neigh | 0.088265 | 0.08871 | 0.089238 | 0.1 | 0.78
Comm | 0.1384 | 0.22518 | 0.29662 | 14.1 | 1.99
Output | 0.00020599 | 0.00024837 | 0.00036836 | 0.0 | 0.00
Modify | 0.13828 | 0.13899 | 0.13984 | 0.2 | 1.23
Other | | 0.01053 | | | 0.09
Nlocal: 8192 ave 8215 max 8166 min
Histogram: 1 1 0 0 0 0 0 0 0 2
Nghost: 11972.5 ave 11984 max 11959 min
Histogram: 1 0 0 0 1 0 1 0 0 1
Neighs: 576083 ave 579616 max 572161 min
Histogram: 1 1 0 0 0 0 0 0 0 2
Total # of neighbors = 2304332
Ave neighs/atom = 70.3226
Neighbor list builds = 6
Dangerous builds = 3
Please see the log.cite file for references relevant to this simulation
Total wall time: 0:00:11
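
Since the Gay-Berne runs use fix npt/asphere, the Volume column shrinks as the box compresses toward P = 8.0. A sketch converting the first and last Volume entries above into number densities (lj units):

# number density at step 0 and step 100 from the Gay-Berne thermo output
natoms = 32768
v0, v100 = 114909.09, 107184.83

print(f"rho(0)   = {natoms / v0:.4f}")     # ~0.2852
print(f"rho(100) = {natoms / v100:.4f}")   # ~0.3057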


@ -1,85 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# granular chute flow
units lj
atom_style sphere
boundary p p fs
newton off
comm_modify vel yes
read_data data.granular
orthogonal box = (0 0 0) to (40 20 37.2886)
1 by 1 by 1 MPI processor grid
reading atoms ...
32000 atoms
reading velocities ...
32000 velocities
pair_style gran/hooke/history 200000.0 NULL 50.0 NULL 0.5 0
pair_coeff * *
neighbor 0.1 bin
neigh_modify delay 5 every 1
timestep 0.0001
group bottom type 2
912 atoms in group bottom
group active subtract all bottom
31088 atoms in group active
neigh_modify exclude group bottom bottom
fix 1 all gravity 1.0 chute 26.0
fix 2 bottom freeze
fix 3 active nve/sphere
compute 1 all erotate/sphere
thermo_style custom step atoms ke c_1 vol
thermo_modify norm no
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 1.1
ghost atom cutoff = 1.1
binsize = 0.55, bins = 73 37 68
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair gran/hooke/history, perpetual
attributes: half, newton off, size, history
pair build: half/size/bin/newtoff
stencil: half/bin/3d/newtoff
bin: standard
Per MPI rank memory allocation (min/avg/max) = 23.36 | 23.36 | 23.36 Mbytes
Step Atoms KinEng c_1 Volume
0 32000 784139.13 1601.1263 29833.783
100 32000 784292.08 1571.0968 29834.707
Loop time of 0.292816 on 1 procs for 100 steps with 32000 atoms
Performance: 2950.657 tau/day, 341.511 timesteps/s
99.3% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.17449 | 0.17449 | 0.17449 | 0.0 | 59.59
Neigh | 0.031927 | 0.031927 | 0.031927 | 0.0 | 10.90
Comm | 0.010195 | 0.010195 | 0.010195 | 0.0 | 3.48
Output | 0.00019121 | 0.00019121 | 0.00019121 | 0.0 | 0.07
Modify | 0.064463 | 0.064463 | 0.064463 | 0.0 | 22.01
Other | | 0.01155 | | | 3.94
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 5463 ave 5463 max 5463 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 115133 ave 115133 max 115133 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 115133
Ave neighs/atom = 3.59791
Neighbor list builds = 2
Dangerous builds = 0
Total wall time: 0:00:00


@ -1,85 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# granular chute flow
units lj
atom_style sphere
boundary p p fs
newton off
comm_modify vel yes
read_data data.granular
orthogonal box = (0 0 0) to (40 20 37.2886)
2 by 1 by 2 MPI processor grid
reading atoms ...
32000 atoms
reading velocities ...
32000 velocities
pair_style gran/hooke/history 200000.0 NULL 50.0 NULL 0.5 0
pair_coeff * *
neighbor 0.1 bin
neigh_modify delay 5 every 1
timestep 0.0001
group bottom type 2
912 atoms in group bottom
group active subtract all bottom
31088 atoms in group active
neigh_modify exclude group bottom bottom
fix 1 all gravity 1.0 chute 26.0
fix 2 bottom freeze
fix 3 active nve/sphere
compute 1 all erotate/sphere
thermo_style custom step atoms ke c_1 vol
thermo_modify norm no
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 1.1
ghost atom cutoff = 1.1
binsize = 0.55, bins = 73 37 68
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair gran/hooke/history, perpetual
attributes: half, newton off, size, history
pair build: half/size/bin/newtoff
stencil: half/bin/3d/newtoff
bin: standard
Per MPI rank memory allocation (min/avg/max) = 10.41 | 10.42 | 10.42 Mbytes
Step Atoms KinEng c_1 Volume
0 32000 784139.13 1601.1263 29833.783
100 32000 784292.08 1571.0968 29834.707
Loop time of 0.0903978 on 4 procs for 100 steps with 32000 atoms
Performance: 9557.751 tau/day, 1106.221 timesteps/s
98.3% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.046331 | 0.049088 | 0.052195 | 1.2 | 54.30
Neigh | 0.0090401 | 0.0091327 | 0.0091863 | 0.1 | 10.10
Comm | 0.0073855 | 0.0080023 | 0.0086699 | 0.6 | 8.85
Output | 7.1049e-05 | 0.00010067 | 0.00012088 | 0.0 | 0.11
Modify | 0.017226 | 0.017449 | 0.01803 | 0.3 | 19.30
Other | | 0.006625 | | | 7.33
Nlocal: 8000 ave 8008 max 7992 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Nghost: 2439 ave 2450 max 2428 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Neighs: 29500.5 ave 30488 max 28513 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Total # of neighbors = 118002
Ave neighs/atom = 3.68756
Neighbor list builds = 2
Dangerous builds = 0
Total wall time: 0:00:00


@ -1,73 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# 3d Lennard-Jones melt
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (33.5919 33.5919 33.5919)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 32000 atoms
Time spent = 0.00183916 secs
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 5 every 1
fix 1 all nve
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 24 24 24
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 15.82 | 15.82 | 15.82 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6134356 -5.0197073
100 0.75745998 -5.7584998 0 -4.6223453 0.20729996
Loop time of 1.721 on 1 procs for 100 steps with 32000 atoms
Performance: 25101.720 tau/day, 58.106 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 1.2551 | 1.2551 | 1.2551 | 0.0 | 72.93
Neigh | 0.41825 | 0.41825 | 0.41825 | 0.0 | 24.30
Comm | 0.015347 | 0.015347 | 0.015347 | 0.0 | 0.89
Output | 0.00010729 | 0.00010729 | 0.00010729 | 0.0 | 0.01
Modify | 0.023436 | 0.023436 | 0.023436 | 0.0 | 1.36
Other | | 0.008766 | | | 0.51
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 19669 ave 19669 max 19669 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 1.20318e+06 ave 1.20318e+06 max 1.20318e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 1203176
Ave neighs/atom = 37.5992
Neighbor list builds = 11
Dangerous builds = 0
Total wall time: 0:00:01


@ -1,73 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# 3d Lennard-Jones melt
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (33.5919 33.5919 33.5919)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 32000 atoms
Time spent = 0.000587225 secs
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 5 every 1
fix 1 all nve
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 24 24 24
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 6.88 | 6.88 | 6.88 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6134356 -5.0197073
100 0.75745998 -5.7584998 0 -4.6223453 0.20729996
Loop time of 0.469936 on 4 procs for 100 steps with 32000 atoms
Performance: 91927.316 tau/day, 212.795 timesteps/s
99.1% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.32713 | 0.32917 | 0.33317 | 0.4 | 70.05
Neigh | 0.10836 | 0.10931 | 0.11007 | 0.2 | 23.26
Comm | 0.015526 | 0.020355 | 0.022399 | 2.0 | 4.33
Output | 4.2439e-05 | 5.8353e-05 | 0.00010061 | 0.0 | 0.01
Modify | 0.0071156 | 0.0072448 | 0.007309 | 0.1 | 1.54
Other | | 0.003793 | | | 0.81
Nlocal: 8000 ave 8041 max 7958 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Nghost: 9011 ave 9065 max 8961 min
Histogram: 1 1 0 0 0 0 0 1 0 1
Neighs: 300794 ave 304843 max 297317 min
Histogram: 1 0 0 1 1 0 0 0 0 1
Total # of neighbors = 1203176
Ave neighs/atom = 37.5992
Neighbor list builds = 11
Dangerous builds = 0
Total wall time: 0:00:00


@ -1,84 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# bulk Ni in MEAM
units metal
atom_style atomic
lattice fcc 3.52
Lattice spacing in x,y,z = 3.52 3.52 3.52
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (70.4 70.4 70.4)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 32000 atoms
Time spent = 0.00186539 secs
pair_style meam
WARNING: The pair_style meam command is unsupported. Please use pair_style meam/c instead (../pair_meam.cpp:51)
pair_coeff * * library.meam Ni4 Ni.meam Ni4
velocity all create 1600.0 376847 loop geom
neighbor 1.0 bin
neigh_modify delay 5 every 1
fix 1 all nve
timestep 0.005
thermo 50
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 5
ghost atom cutoff = 5
binsize = 2.5, bins = 29 29 29
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 55.91 | 55.91 | 55.91 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1600 -142400 0 -135782.09 20259.18
50 885.10702 -139411.51 0 -135750.54 32425.433
100 895.5097 -139454.3 0 -135750.3 31804.187
Loop time of 30.6278 on 1 procs for 100 steps with 32000 atoms
Performance: 1.410 ns/day, 17.015 hours/ns, 3.265 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 30.088 | 30.088 | 30.088 | 0.0 | 98.24
Neigh | 0.48914 | 0.48914 | 0.48914 | 0.0 | 1.60
Comm | 0.015916 | 0.015916 | 0.015916 | 0.0 | 0.05
Output | 0.00022554 | 0.00022554 | 0.00022554 | 0.0 | 0.00
Modify | 0.025481 | 0.025481 | 0.025481 | 0.0 | 0.08
Other | | 0.009055 | | | 0.03
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 13576 ave 13576 max 13576 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 780360 ave 780360 max 780360 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 1.56072e+06 ave 1.56072e+06 max 1.56072e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 1560720
Ave neighs/atom = 48.7725
Neighbor list builds = 8
Dangerous builds = 0
Total wall time: 0:00:30


@ -1,84 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# bulk Ni in MEAM
units metal
atom_style atomic
lattice fcc 3.52
Lattice spacing in x,y,z = 3.52 3.52 3.52
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (70.4 70.4 70.4)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 32000 atoms
Time spent = 0.000587463 secs
pair_style meam
WARNING: The pair_style meam command is unsupported. Please use pair_style meam/c instead (../pair_meam.cpp:51)
pair_coeff * * library.meam Ni4 Ni.meam Ni4
velocity all create 1600.0 376847 loop geom
neighbor 1.0 bin
neigh_modify delay 5 every 1
fix 1 all nve
timestep 0.005
thermo 50
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 5
ghost atom cutoff = 5
binsize = 2.5, bins = 29 29 29
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 17.41 | 17.41 | 17.41 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1600 -142400 0 -135782.09 20259.18
50 885.10702 -139411.51 0 -135750.54 32425.433
100 895.5097 -139454.3 0 -135750.3 31804.187
Loop time of 8.21941 on 4 procs for 100 steps with 32000 atoms
Performance: 5.256 ns/day, 4.566 hours/ns, 12.166 timesteps/s
99.2% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 8.0277 | 8.0384 | 8.0504 | 0.3 | 97.80
Neigh | 0.12555 | 0.12645 | 0.12713 | 0.2 | 1.54
Comm | 0.024279 | 0.036776 | 0.048389 | 4.5 | 0.45
Output | 9.4414e-05 | 0.00011903 | 0.00018597 | 0.0 | 0.00
Modify | 0.01252 | 0.012608 | 0.012795 | 0.1 | 0.15
Other | | 0.005028 | | | 0.06
Nlocal: 8000 ave 8045 max 7947 min
Histogram: 1 0 0 1 0 0 0 1 0 1
Nghost: 6066.75 ave 6120 max 6021 min
Histogram: 1 0 1 0 0 0 1 0 0 1
Neighs: 195090 ave 196403 max 193697 min
Histogram: 1 0 0 1 0 0 0 1 0 1
FullNghs: 390180 ave 392616 max 387490 min
Histogram: 1 0 0 1 0 0 0 1 0 1
Total # of neighbors = 1560720
Ave neighs/atom = 48.7725
Neighbor list builds = 8
Dangerous builds = 0
Total wall time: 0:00:08

View File

@ -1,83 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# bulk Ni in MEAM
units metal
atom_style atomic
lattice fcc 3.52
Lattice spacing in x,y,z = 3.52 3.52 3.52
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (70.4 70.4 70.4)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 32000 atoms
Time spent = 0.00184226 secs
pair_style meam/c
pair_coeff * * library.meam Ni4 Ni.meam Ni4
velocity all create 1600.0 376847 loop geom
neighbor 1.0 bin
neigh_modify delay 5 every 1
fix 1 all nve
timestep 0.005
thermo 50
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 5
ghost atom cutoff = 5
binsize = 2.5, bins = 29 29 29
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/c, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/c, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 55.91 | 55.91 | 55.91 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1600 -142400 0 -135782.09 20259.18
50 885.10702 -139411.51 0 -135750.54 32425.431
100 895.50973 -139454.3 0 -135750.3 31804.185
Loop time of 22.9343 on 1 procs for 100 steps with 32000 atoms
Performance: 1.884 ns/day, 12.741 hours/ns, 4.360 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 22.397 | 22.397 | 22.397 | 0.0 | 97.66
Neigh | 0.48781 | 0.48781 | 0.48781 | 0.0 | 2.13
Comm | 0.013967 | 0.013967 | 0.013967 | 0.0 | 0.06
Output | 0.00022793 | 0.00022793 | 0.00022793 | 0.0 | 0.00
Modify | 0.025412 | 0.025412 | 0.025412 | 0.0 | 0.11
Other | | 0.009448 | | | 0.04
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 13576 ave 13576 max 13576 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 780360 ave 780360 max 780360 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 1.56072e+06 ave 1.56072e+06 max 1.56072e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 1560720
Ave neighs/atom = 48.7725
Neighbor list builds = 8
Dangerous builds = 0
Total wall time: 0:00:23

View File

@ -1,83 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# bulk Ni in MEAM
units metal
atom_style atomic
lattice fcc 3.52
Lattice spacing in x,y,z = 3.52 3.52 3.52
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (70.4 70.4 70.4)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 32000 atoms
Time spent = 0.00058651 secs
pair_style meam/c
pair_coeff * * library.meam Ni4 Ni.meam Ni4
velocity all create 1600.0 376847 loop geom
neighbor 1.0 bin
neigh_modify delay 5 every 1
fix 1 all nve
timestep 0.005
thermo 50
run 100
Neighbor list info ...
update every 1 steps, delay 5 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 5
ghost atom cutoff = 5
binsize = 2.5, bins = 29 29 29
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/c, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/c, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 17.41 | 17.41 | 17.41 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1600 -142400 0 -135782.09 20259.18
50 885.10702 -139411.51 0 -135750.54 32425.431
100 895.50973 -139454.3 0 -135750.3 31804.185
Loop time of 6.45947 on 4 procs for 100 steps with 32000 atoms
Performance: 6.688 ns/day, 3.589 hours/ns, 15.481 timesteps/s
98.0% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 6.22 | 6.2385 | 6.265 | 0.7 | 96.58
Neigh | 0.12657 | 0.12691 | 0.12721 | 0.1 | 1.96
Comm | 0.052339 | 0.07915 | 0.097897 | 5.9 | 1.23
Output | 9.7752e-05 | 0.0001151 | 0.00016594 | 0.0 | 0.00
Modify | 0.010194 | 0.010291 | 0.010442 | 0.1 | 0.16
Other | | 0.004529 | | | 0.07
Nlocal: 8000 ave 8045 max 7947 min
Histogram: 1 0 0 1 0 0 0 1 0 1
Nghost: 6066.75 ave 6120 max 6021 min
Histogram: 1 0 1 0 0 0 1 0 0 1
Neighs: 195090 ave 196403 max 193697 min
Histogram: 1 0 0 1 0 0 0 1 0 1
FullNghs: 390180 ave 392616 max 387490 min
Histogram: 1 0 0 1 0 0 0 1 0 1
Total # of neighbors = 1560720
Ave neighs/atom = 48.7725
Neighbor list builds = 8
Dangerous builds = 0
Total wall time: 0:00:06
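
The four MEAM logs above run the same 32,000-atom bulk Ni benchmark with the unsupported pair_style meam (per the warning it prints) and with its meam/c replacement, each on 1 and on 4 MPI ranks. As an aside that is not part of the original diff, the "Loop time" values they report can be turned into speedup numbers; this minimal Python sketch only reuses the timings printed above:

    # Not part of the original logs: recompute the speedups implied by the
    # "Loop time" values reported in the four MEAM benchmark logs above.
    loop_times = {
        ("meam",   1): 30.6278,   # pair_style meam,   1 MPI rank
        ("meam",   4): 8.21941,   # pair_style meam,   4 MPI ranks
        ("meam/c", 1): 22.9343,   # pair_style meam/c, 1 MPI rank
        ("meam/c", 4): 6.45947,   # pair_style meam/c, 4 MPI ranks
    }

    for style in ("meam", "meam/c"):
        speedup = loop_times[(style, 1)] / loop_times[(style, 4)]
        print(f"{style:7s}: 4-rank speedup {speedup:.2f}x, efficiency {speedup / 4.0:.0%}")

    ratio = loop_times[("meam", 1)] / loop_times[("meam/c", 1)]
    print(f"meam/c is {ratio:.2f}x faster than meam on a single rank")

On these numbers, both styles reach roughly 3.5-3.7x speedup on 4 ranks, and meam/c is about 1.3x faster than the legacy implementation in serial.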

View File

@ -1,217 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# Crack growth in notched 3D Peridynamic block
# Mesh spacing
variable h equal 5.00e-4
# Peridynamic horizon
variable delta equal 3.0*${h}
variable delta equal 3.0*0.0005
# Height of plate (meters)
variable height equal 31.5*${h}
variable height equal 31.5*0.0005
# Width of plate (meters)
variable width equal 39.5*${h}
variable width equal 39.5*0.0005
# Thickness of plate (meters)
variable depth equal 24.5*${h}
variable depth equal 24.5*0.0005
# Height of notch
variable crackheight equal 10*${h}
variable crackheight equal 10*0.0005
# Density of plate
variable mydensity equal 2440.0
# Elastic modulus of material
variable myE equal 72.0e9
# Strain energy release rate at branching
variable myG equal 135.0
# constant, but define it as a variable here
variable pi equal 3.14159265358979323846
units si
boundary s s s
atom_style peri
atom_modify map array
variable myskin equal 2.0*${h}
variable myskin equal 2.0*0.0005
neighbor ${myskin} bin
neighbor 0.001 bin
lattice sc $h
lattice sc 0.0005
Lattice spacing in x,y,z = 0.0005 0.0005 0.0005
variable myxmin equal 0.0
variable myxmax equal ${width}
variable myxmax equal 0.01975
variable myymin equal 0.0
variable myymax equal ${height}
variable myymax equal 0.01575
variable myzmin equal 0.0
variable myzmax equal ${depth}
variable myzmax equal 0.01225
region plate block ${myxmin} ${myxmax} ${myymin} ${myymax} ${myzmin} ${myzmax} units box
region plate block 0 ${myxmax} ${myymin} ${myymax} ${myzmin} ${myzmax} units box
region plate block 0 0.01975 ${myymin} ${myymax} ${myzmin} ${myzmax} units box
region plate block 0 0.01975 0 ${myymax} ${myzmin} ${myzmax} units box
region plate block 0 0.01975 0 0.01575 ${myzmin} ${myzmax} units box
region plate block 0 0.01975 0 0.01575 0 ${myzmax} units box
region plate block 0 0.01975 0 0.01575 0 0.01225 units box
create_box 3 plate
Created orthogonal box = (0 0 0) to (0.01975 0.01575 0.01225)
1 by 1 by 1 MPI processor grid
create_atoms 1 region plate
Created 32000 atoms
Time spent = 0.00362897 secs
pair_style peri/pmb
variable myk equal (2.0/3.0)*${myE}
variable myk equal (2.0/3.0)*72000000000
variable myc equal ((18.0*${myk})/(${pi}*(${delta}^4)))
variable myc equal ((18.0*48000000000)/(${pi}*(${delta}^4)))
variable myc equal ((18.0*48000000000)/(3.14159265358979*(${delta}^4)))
variable myc equal ((18.0*48000000000)/(3.14159265358979*(0.0015^4)))
variable mydelta equal (${delta}+(${delta}/100.0))
variable mydelta equal (0.0015+(${delta}/100.0))
variable mydelta equal (0.0015+(0.0015/100.0))
variable mys0 equal sqrt((5.0*${myG})/(9.0*${myk}*${delta}))
variable mys0 equal sqrt((5.0*135)/(9.0*${myk}*${delta}))
variable mys0 equal sqrt((5.0*135)/(9.0*48000000000*${delta}))
variable mys0 equal sqrt((5.0*135)/(9.0*48000000000*0.0015))
variable tmpvar1 equal ${myymax}-${crackheight}
variable tmpvar1 equal 0.01575-${crackheight}
variable tmpvar1 equal 0.01575-0.005
variable tmpvar2 equal 0.5*${width}
variable tmpvar2 equal 0.5*0.01975
region topleft block 0.0 ${tmpvar2} ${tmpvar1} ${myymax} ${myzmin} ${myzmax} units box
region topleft block 0.0 0.009875 ${tmpvar1} ${myymax} ${myzmin} ${myzmax} units box
region topleft block 0.0 0.009875 0.01075 ${myymax} ${myzmin} ${myzmax} units box
region topleft block 0.0 0.009875 0.01075 0.01575 ${myzmin} ${myzmax} units box
region topleft block 0.0 0.009875 0.01075 0.01575 0 ${myzmax} units box
region topleft block 0.0 0.009875 0.01075 0.01575 0 0.01225 units box
region topright block ${tmpvar2} ${myxmax} ${tmpvar1} ${myymax} ${myzmin} ${myzmax} units box
region topright block 0.009875 ${myxmax} ${tmpvar1} ${myymax} ${myzmin} ${myzmax} units box
region topright block 0.009875 0.01975 ${tmpvar1} ${myymax} ${myzmin} ${myzmax} units box
region topright block 0.009875 0.01975 0.01075 ${myymax} ${myzmin} ${myzmax} units box
region topright block 0.009875 0.01975 0.01075 0.01575 ${myzmin} ${myzmax} units box
region topright block 0.009875 0.01975 0.01075 0.01575 0 ${myzmax} units box
region topright block 0.009875 0.01975 0.01075 0.01575 0 0.01225 units box
set region topleft type 2
5000 settings made for type
set region topright type 3
5000 settings made for type
pair_coeff 1 1 ${myc} ${mydelta} ${mys0} 0.0
pair_coeff 1 1 5.43248872420337e+22 ${mydelta} ${mys0} 0.0
pair_coeff 1 1 5.43248872420337e+22 0.001515 ${mys0} 0.0
pair_coeff 1 1 5.43248872420337e+22 0.001515 0.00102062072615966 0.0
pair_coeff 2 2 ${myc} ${mydelta} ${mys0} 0.0
pair_coeff 2 2 5.43248872420337e+22 ${mydelta} ${mys0} 0.0
pair_coeff 2 2 5.43248872420337e+22 0.001515 ${mys0} 0.0
pair_coeff 2 2 5.43248872420337e+22 0.001515 0.00102062072615966 0.0
pair_coeff 3 3 ${myc} ${mydelta} ${mys0} 0.0
pair_coeff 3 3 5.43248872420337e+22 ${mydelta} ${mys0} 0.0
pair_coeff 3 3 5.43248872420337e+22 0.001515 ${mys0} 0.0
pair_coeff 3 3 5.43248872420337e+22 0.001515 0.00102062072615966 0.0
pair_coeff 2 3 ${myc} 0.0 ${mys0} 0.0
pair_coeff 2 3 5.43248872420337e+22 0.0 ${mys0} 0.0
pair_coeff 2 3 5.43248872420337e+22 0.0 0.00102062072615966 0.0
pair_coeff 1 2 ${myc} ${mydelta} ${mys0} 0.0
pair_coeff 1 2 5.43248872420337e+22 ${mydelta} ${mys0} 0.0
pair_coeff 1 2 5.43248872420337e+22 0.001515 ${mys0} 0.0
pair_coeff 1 2 5.43248872420337e+22 0.001515 0.00102062072615966 0.0
pair_coeff 1 3 ${myc} ${mydelta} ${mys0} 0.0
pair_coeff 1 3 5.43248872420337e+22 ${mydelta} ${mys0} 0.0
pair_coeff 1 3 5.43248872420337e+22 0.001515 ${mys0} 0.0
pair_coeff 1 3 5.43248872420337e+22 0.001515 0.00102062072615966 0.0
set group all density ${mydensity}
set group all density 2440
32000 settings made for density
variable myvolume equal ($h)^3
variable myvolume equal (0.0005)^3
set group all volume ${myvolume}
set group all volume 1.25e-10
32000 settings made for volume
velocity all set 0.0 0.0 0.0 sum no units box
fix F1 all nve
compute C1 all damage/atom
velocity all ramp vx -10.0 10.0 x ${myxmin} ${myxmax} units box
velocity all ramp vx -10.0 10.0 x 0 ${myxmax} units box
velocity all ramp vx -10.0 10.0 x 0 0.01975 units box
variable mystep equal 0.8*sqrt((2.0*${mydensity})/(512*(${myc}/$h)*${myvolume}))
variable mystep equal 0.8*sqrt((2.0*2440)/(512*(${myc}/$h)*${myvolume}))
variable mystep equal 0.8*sqrt((2.0*2440)/(512*(5.43248872420337e+22/$h)*${myvolume}))
variable mystep equal 0.8*sqrt((2.0*2440)/(512*(5.43248872420337e+22/0.0005)*${myvolume}))
variable mystep equal 0.8*sqrt((2.0*2440)/(512*(5.43248872420337e+22/0.0005)*1.25e-10))
timestep ${mystep}
timestep 2.11931492396226e-08
thermo 20
run 100
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 0.002515
ghost atom cutoff = 0.002515
binsize = 0.0012575, bins = 16 13 10
2 neighbor lists, perpetual/occasional/extra = 1 1 0
(1) pair peri/pmb, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
(2) fix PERI_NEIGH, occasional
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
Peridynamic bonds:
total # of bonds = 3457032
bonds/atom = 108.032
Per MPI rank memory allocation (min/avg/max) = 133.6 | 133.6 | 133.6 Mbytes
Step Temp E_pair E_mol TotEng Press Volume
0 2.0134233e+27 0 0 1.3342785e+09 2.4509971e+14 3.6292128e-06
20 1.7695805e+27 1.6163291e+08 0 1.3343188e+09 2.1541601e+14 3.6292128e-06
40 1.3041477e+27 4.6848143e+08 0 1.332729e+09 1.5875756e+14 3.6292128e-06
60 9.8975313e+26 5.7284448e+08 0 1.2287455e+09 1.2048543e+14 3.6292128e-06
80 9.3888573e+26 4.0928092e+08 0 1.0314725e+09 1.1429321e+14 3.6292128e-06
100 8.3930314e+26 3.8522361e+08 0 9.4142265e+08 1.0217075e+14 3.6292128e-06
Loop time of 11.0398 on 1 procs for 100 steps with 32000 atoms
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 11.005 | 11.005 | 11.005 | 0.0 | 99.68
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 6.6042e-05 | 6.6042e-05 | 6.6042e-05 | 0.0 | 0.00
Output | 0.00057292 | 0.00057292 | 0.00057292 | 0.0 | 0.01
Modify | 0.0256 | 0.0256 | 0.0256 | 0.0 | 0.23
Other | | 0.008592 | | | 0.08
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 6.74442e+06 ave 6.74442e+06 max 6.74442e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 1.34888e+07 ave 1.34888e+07 max 1.34888e+07 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 13488836
Ave neighs/atom = 421.526
Neighbor list builds = 0
Dangerous builds = 0
Please see the log.cite file for references relevant to this simulation
Total wall time: 0:00:11
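
The input echoed in this log derives the peridynamic PMB parameters from the constants defined at the top of the script (h = 5.0e-4 m, delta = 3h, E = 72 GPa, G = 135, rho = 2440 kg/m^3, SI units throughout). Purely as a cross-check, and not part of the original log, the substitutions LAMMPS prints above correspond to the relations below, whose values agree with the expanded pair_coeff and timestep lines:

    % Worked check (LaTeX) of the constants expanded in the log above.
    \begin{align*}
      k        &= \tfrac{2}{3}\,E = 4.8\times10^{10}\ \mathrm{Pa}\\
      c        &= \frac{18\,k}{\pi\,\delta^{4}} \approx 5.4325\times10^{22}\\
      s_0      &= \sqrt{\frac{5\,G}{9\,k\,\delta}} \approx 1.0206\times10^{-3}\\
      V        &= h^{3} = 1.25\times10^{-10}\ \mathrm{m^{3}}\\
      \Delta t &= 0.8\,\sqrt{\frac{2\,\rho}{512\,(c/h)\,V}} \approx 2.1193\times10^{-8}\ \mathrm{s}
    \end{align*}

Note that the 2-3 cross term is given a horizon of 0.0, so no bonds form between the topleft and topright regions above the notch height; this is what appears to seed the pre-crack in the plate.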

View File

@ -1,217 +0,0 @@
LAMMPS (16 Mar 2018)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (../comm.cpp:90)
using 1 OpenMP thread(s) per MPI task
# Crack growth in notched 3D Peridynamic block
# Mesh spacing
variable h equal 5.00e-4
# Peridynamic horizon
variable delta equal 3.0*${h}
variable delta equal 3.0*0.0005
# Height of plate (meters)
variable height equal 31.5*${h}
variable height equal 31.5*0.0005
# Width of plate (meters)
variable width equal 39.5*${h}
variable width equal 39.5*0.0005
# Thickness of plate (meters)
variable depth equal 24.5*${h}
variable depth equal 24.5*0.0005
# Height of notch
variable crackheight equal 10*${h}
variable crackheight equal 10*0.0005
# Density of plate
variable mydensity equal 2440.0
# Elastic modulus of material
variable myE equal 72.0e9
# Strain energy release rate at branching
variable myG equal 135.0
# constant, but define it as a variable here
variable pi equal 3.14159265358979323846
units si
boundary s s s
atom_style peri
atom_modify map array
variable myskin equal 2.0*${h}
variable myskin equal 2.0*0.0005
neighbor ${myskin} bin
neighbor 0.001 bin
lattice sc $h
lattice sc 0.0005
Lattice spacing in x,y,z = 0.0005 0.0005 0.0005
variable myxmin equal 0.0
variable myxmax equal ${width}
variable myxmax equal 0.01975
variable myymin equal 0.0
variable myymax equal ${height}
variable myymax equal 0.01575
variable myzmin equal 0.0
variable myzmax equal ${depth}
variable myzmax equal 0.01225
region plate block ${myxmin} ${myxmax} ${myymin} ${myymax} ${myzmin} ${myzmax} units box
region plate block 0 ${myxmax} ${myymin} ${myymax} ${myzmin} ${myzmax} units box
region plate block 0 0.01975 ${myymin} ${myymax} ${myzmin} ${myzmax} units box
region plate block 0 0.01975 0 ${myymax} ${myzmin} ${myzmax} units box
region plate block 0 0.01975 0 0.01575 ${myzmin} ${myzmax} units box
region plate block 0 0.01975 0 0.01575 0 ${myzmax} units box
region plate block 0 0.01975 0 0.01575 0 0.01225 units box
create_box 3 plate
Created orthogonal box = (0 0 0) to (0.01975 0.01575 0.01225)
2 by 2 by 1 MPI processor grid
create_atoms 1 region plate
Created 32000 atoms
Time spent = 0.0011344 secs
pair_style peri/pmb
variable myk equal (2.0/3.0)*${myE}
variable myk equal (2.0/3.0)*72000000000
variable myc equal ((18.0*${myk})/(${pi}*(${delta}^4)))
variable myc equal ((18.0*48000000000)/(${pi}*(${delta}^4)))
variable myc equal ((18.0*48000000000)/(3.14159265358979*(${delta}^4)))
variable myc equal ((18.0*48000000000)/(3.14159265358979*(0.0015^4)))
variable mydelta equal (${delta}+(${delta}/100.0))
variable mydelta equal (0.0015+(${delta}/100.0))
variable mydelta equal (0.0015+(0.0015/100.0))
variable mys0 equal sqrt((5.0*${myG})/(9.0*${myk}*${delta}))
variable mys0 equal sqrt((5.0*135)/(9.0*${myk}*${delta}))
variable mys0 equal sqrt((5.0*135)/(9.0*48000000000*${delta}))
variable mys0 equal sqrt((5.0*135)/(9.0*48000000000*0.0015))
variable tmpvar1 equal ${myymax}-${crackheight}
variable tmpvar1 equal 0.01575-${crackheight}
variable tmpvar1 equal 0.01575-0.005
variable tmpvar2 equal 0.5*${width}
variable tmpvar2 equal 0.5*0.01975
region topleft block 0.0 ${tmpvar2} ${tmpvar1} ${myymax} ${myzmin} ${myzmax} units box
region topleft block 0.0 0.009875 ${tmpvar1} ${myymax} ${myzmin} ${myzmax} units box
region topleft block 0.0 0.009875 0.01075 ${myymax} ${myzmin} ${myzmax} units box
region topleft block 0.0 0.009875 0.01075 0.01575 ${myzmin} ${myzmax} units box
region topleft block 0.0 0.009875 0.01075 0.01575 0 ${myzmax} units box
region topleft block 0.0 0.009875 0.01075 0.01575 0 0.01225 units box
region topright block ${tmpvar2} ${myxmax} ${tmpvar1} ${myymax} ${myzmin} ${myzmax} units box
region topright block 0.009875 ${myxmax} ${tmpvar1} ${myymax} ${myzmin} ${myzmax} units box
region topright block 0.009875 0.01975 ${tmpvar1} ${myymax} ${myzmin} ${myzmax} units box
region topright block 0.009875 0.01975 0.01075 ${myymax} ${myzmin} ${myzmax} units box
region topright block 0.009875 0.01975 0.01075 0.01575 ${myzmin} ${myzmax} units box
region topright block 0.009875 0.01975 0.01075 0.01575 0 ${myzmax} units box
region topright block 0.009875 0.01975 0.01075 0.01575 0 0.01225 units box
set region topleft type 2
5000 settings made for type
set region topright type 3
5000 settings made for type
pair_coeff 1 1 ${myc} ${mydelta} ${mys0} 0.0
pair_coeff 1 1 5.43248872420337e+22 ${mydelta} ${mys0} 0.0
pair_coeff 1 1 5.43248872420337e+22 0.001515 ${mys0} 0.0
pair_coeff 1 1 5.43248872420337e+22 0.001515 0.00102062072615966 0.0
pair_coeff 2 2 ${myc} ${mydelta} ${mys0} 0.0
pair_coeff 2 2 5.43248872420337e+22 ${mydelta} ${mys0} 0.0
pair_coeff 2 2 5.43248872420337e+22 0.001515 ${mys0} 0.0
pair_coeff 2 2 5.43248872420337e+22 0.001515 0.00102062072615966 0.0
pair_coeff 3 3 ${myc} ${mydelta} ${mys0} 0.0
pair_coeff 3 3 5.43248872420337e+22 ${mydelta} ${mys0} 0.0
pair_coeff 3 3 5.43248872420337e+22 0.001515 ${mys0} 0.0
pair_coeff 3 3 5.43248872420337e+22 0.001515 0.00102062072615966 0.0
pair_coeff 2 3 ${myc} 0.0 ${mys0} 0.0
pair_coeff 2 3 5.43248872420337e+22 0.0 ${mys0} 0.0
pair_coeff 2 3 5.43248872420337e+22 0.0 0.00102062072615966 0.0
pair_coeff 1 2 ${myc} ${mydelta} ${mys0} 0.0
pair_coeff 1 2 5.43248872420337e+22 ${mydelta} ${mys0} 0.0
pair_coeff 1 2 5.43248872420337e+22 0.001515 ${mys0} 0.0
pair_coeff 1 2 5.43248872420337e+22 0.001515 0.00102062072615966 0.0
pair_coeff 1 3 ${myc} ${mydelta} ${mys0} 0.0
pair_coeff 1 3 5.43248872420337e+22 ${mydelta} ${mys0} 0.0
pair_coeff 1 3 5.43248872420337e+22 0.001515 ${mys0} 0.0
pair_coeff 1 3 5.43248872420337e+22 0.001515 0.00102062072615966 0.0
set group all density ${mydensity}
set group all density 2440
32000 settings made for density
variable myvolume equal ($h)^3
variable myvolume equal (0.0005)^3
set group all volume ${myvolume}
set group all volume 1.25e-10
32000 settings made for volume
velocity all set 0.0 0.0 0.0 sum no units box
fix F1 all nve
compute C1 all damage/atom
velocity all ramp vx -10.0 10.0 x ${myxmin} ${myxmax} units box
velocity all ramp vx -10.0 10.0 x 0 ${myxmax} units box
velocity all ramp vx -10.0 10.0 x 0 0.01975 units box
variable mystep equal 0.8*sqrt((2.0*${mydensity})/(512*(${myc}/$h)*${myvolume}))
variable mystep equal 0.8*sqrt((2.0*2440)/(512*(${myc}/$h)*${myvolume}))
variable mystep equal 0.8*sqrt((2.0*2440)/(512*(5.43248872420337e+22/$h)*${myvolume}))
variable mystep equal 0.8*sqrt((2.0*2440)/(512*(5.43248872420337e+22/0.0005)*${myvolume}))
variable mystep equal 0.8*sqrt((2.0*2440)/(512*(5.43248872420337e+22/0.0005)*1.25e-10))
timestep ${mystep}
timestep 2.11931492396226e-08
thermo 20
run 100
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 0.002515
ghost atom cutoff = 0.002515
binsize = 0.0012575, bins = 16 13 10
2 neighbor lists, perpetual/occasional/extra = 1 1 0
(1) pair peri/pmb, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
(2) fix PERI_NEIGH, occasional
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
Peridynamic bonds:
total # of bonds = 3457032
bonds/atom = 108.032
Per MPI rank memory allocation (min/avg/max) = 47.63 | 48.11 | 48.78 Mbytes
Step Temp E_pair E_mol TotEng Press Volume
0 2.0134233e+27 0 0 1.3342785e+09 2.4509971e+14 3.6292128e-06
20 1.7695805e+27 1.6163291e+08 0 1.3343188e+09 2.1541601e+14 3.6292128e-06
40 1.3041477e+27 4.6848143e+08 0 1.332729e+09 1.5875756e+14 3.6292128e-06
60 9.8975313e+26 5.7284448e+08 0 1.2287455e+09 1.2048543e+14 3.6292128e-06
80 9.3888573e+26 4.0928092e+08 0 1.0314725e+09 1.1429321e+14 3.6292128e-06
100 8.3930314e+26 3.8522361e+08 0 9.4142265e+08 1.0217075e+14 3.6292128e-06
Loop time of 2.8928 on 4 procs for 100 steps with 32000 atoms
99.0% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 2.7472 | 2.7951 | 2.8585 | 2.9 | 96.62
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0.019592 | 0.083156 | 0.13278 | 17.0 | 2.87
Output | 0.00022125 | 0.00034326 | 0.00058961 | 0.0 | 0.01
Modify | 0.0083542 | 0.0089623 | 0.0095983 | 0.5 | 0.31
Other | | 0.005276 | | | 0.18
Nlocal: 8000 ave 8000 max 8000 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Nghost: 5125 ave 5125 max 5125 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Neighs: 1.6861e+06 ave 1.77502e+06 max 1.60625e+06 min
Histogram: 2 0 0 0 0 0 0 0 1 1
FullNghs: 3.37221e+06 ave 3.41832e+06 max 3.3261e+06 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Total # of neighbors = 13488836
Ave neighs/atom = 421.526
Neighbor list builds = 0
Dangerous builds = 0
Please see the log.cite file for references relevant to this simulation
Total wall time: 0:00:03

Some files were not shown because too many files have changed in this diff.