diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index 60fe82d86c..62e7186360 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -108,7 +108,7 @@ For bug reports, the next step is that one of the core LAMMPS developers will se
 For submitting pull requests, there is a [detailed tutorial](https://lammps.sandia.gov/doc/Howto_github.html) in the LAMMPS manual. Thus only a brief breakdown of the steps is presented here. Please note that the LAMMPS developers are still reviewing and trying to improve the process. If you are unsure about something, do not hesitate to post a question on the lammps-users mailing list or contact one of the core LAMMPS developers.
 
 Immediately after the submission, the LAMMPS continuous integration server at ci.lammps.org will download your submitted branch and perform a simple compilation test, i.e. it will test whether your submitted code can be compiled under various conditions. It will also check whether your included documentation translates cleanly. Whether these tests succeed or fail will be recorded. If a test fails, please inspect the corresponding output on the CI server and take the necessary steps, if needed, so that the code can compile cleanly again. The tests will be re-run each time the pull request is updated with a push to the remote branch on GitHub.
 
-Next a LAMMPS core developer will self-assign and do an overall technical assessment of the submission. If you are not yet registered as a LAMMPS collaborator, you will receive an invitation for that. As part of the assesment, the pull request will be categorized with labels. There are two special labels: `needs_work` (indicates that work from the submitter of the pull request is needed) and `work_in_progress` (indicates, that the assigned LAMMPS developer will make changes, if not done by the contributor who made the submit).
+Next a LAMMPS core developer will self-assign and do an overall technical assessment of the submission. If you are not yet registered as a LAMMPS collaborator, you will receive an invitation for that. As part of the assessment, the pull request will be categorized with labels. There are two special labels: `needs_work` (indicates that work from the submitter of the pull request is needed) and `work_in_progress` (indicates that the assigned LAMMPS developer will make changes, if not done by the contributor who made the submission).
 
 You may also receive comments and suggestions on the overall submission or specific details, and on occasion specific requests for changes as part of the review. If permitted, additional changes may also be pushed into your pull request branch, or a pull request may be filed in your LAMMPS fork on GitHub to include those changes. The LAMMPS developer may then decide to assign the pull request to another developer (e.g. when that developer is more knowledgeable about the submitted feature or enhancement, or has written the modified code). It may also happen that additional developers are requested to provide a review and approve the changes. For submissions that may change the general behavior of LAMMPS, or where a possibility of unwanted side effects exists, additional tests may be requested by the assigned developer. If the assigned developer is satisfied and considers the submission ready for inclusion into LAMMPS, the pull request will receive approvals and be merged into the master branch by one of the core LAMMPS developers. After the pull request is merged, you may delete the feature branch used for the pull request in your personal LAMMPS fork.
diff --git a/doc/github-development-workflow.md b/doc/github-development-workflow.md
index a7d41dd32a..c34a67dfcf 100644
--- a/doc/github-development-workflow.md
+++ b/doc/github-development-workflow.md
@@ -95,7 +95,7 @@
 on the pull request discussion page on GitHub, so that other developers can
 later review the entire discussion after the fact and understand the rationale
 behind choices made.
 Exceptions to this policy are technical discussions that are centered on
 tools or policies themselves
-(git, github, c++) rather than on the content of the pull request.
+(git, GitHub, c++) rather than on the content of the pull request.
 
 ### Checklist for Pull Requests
diff --git a/doc/src/Howto_github.rst b/doc/src/Howto_github.rst
index 63cb8945e8..311d716f18 100644
--- a/doc/src/Howto_github.rst
+++ b/doc/src/Howto_github.rst
@@ -72,7 +72,7 @@ explained in more detail here: `feature branch workflow
                         .%%d.lammpstrj. \
                         Can be in compressed (.gz or .bz2) format. \
                         This is a required argument")
-    parser.add_argument("-logfn", "--logfn", default = "log.lammps",
-                        help = "LAMMPS log file that contains swap history \
+    parser.add_argument("-logfn", "--logfn", default="log.lammps",
+                        help="LAMMPS log file that contains swap history \
                         of temperatures among replicas. \
                         Default = 'lammps.log'")
-    parser.add_argument("-tfn", "--tempfn", default = "temps.txt",
-                        help = "ascii file (readable by numpy.loadtxt) with \
+    parser.add_argument("-tfn", "--tempfn", default="temps.txt",
+                        help="ascii file (readable by numpy.loadtxt) with \
                         the temperatures used in the REMD simulation.")
-    parser.add_argument("-ns", "--nswap", type = int,
-                        help = "Swap frequency used in LAMMPS temper command")
+    parser.add_argument("-ns", "--nswap", type=int,
+                        help="Swap frequency used in LAMMPS temper command")
-    parser.add_argument("-nw", "--nwrite", type = int, default = 1,
-                        help = "Trajectory writing frequency used \
+    parser.add_argument("-nw", "--nwrite", type=int, default=1,
+                        help="Trajectory writing frequency used \
                         in LAMMPS dump command")
-    parser.add_argument("-np", "--nprod", type = int, default = 0,
-                        help = "Number of timesteps to save in the reordered\
+    parser.add_argument("-np", "--nprod", type=int, default=0,
+                        help="Number of timesteps to save in the reordered\
                         trajectories.\
                         This should be in units of the LAMMPS timestep")
-    parser.add_argument("-logw", "--logw", action = 'store_true',
-                        help = "Supplying this flag \
+    parser.add_argument("-logw", "--logw", action='store_true',
+                        help="Supplying this flag \
                         calculates *canonical* (NVT ensemble) log weights")
     parser.add_argument("-e", "--enefn",
-                        help = "File that has n_replica x n_frames array\
+                        help="File that has n_replica x n_frames array\
                         of total potential energies")
     parser.add_argument("-kB", "--boltzmann_const",
-                        type = float, default = 0.001987,
-                        help = "Boltzmann constant in appropriate units. \
+                        type=float, default=0.001987,
+                        help="Boltzmann constant in appropriate units. \
                         Default is kcal/mol")
-    parser.add_argument("-ot", "--out_temps", nargs = '+', type = np.float64,
-                        help = "Reorder trajectories at these temperatures.\n \
+    parser.add_argument("-ot", "--out_temps", nargs='+', type=np.float64,
+                        help="Reorder trajectories at these temperatures.\n \
                         Default is all temperatures used in the simulation")
-    parser.add_argument("-od", "--outdir", default = ".",
-                        help = "All output will be saved to this directory")
+    parser.add_argument("-od", "--outdir", default=".",
+                        help="All output will be saved to this directory")
 
     # parse inputs
     args = parser.parse_args()
@@ -438,14 +449,16 @@ if __name__ == "__main__":
     nprod = args.nprod
 
     enefn = args.enefn
-    if not enefn is None: enefn = os.path.abspath(enefn)
+    if not enefn is None:
+        enefn = os.path.abspath(enefn)
 
     get_logw = args.logw
     kB = args.boltzmann_const
     out_temps = args.out_temps
 
     outdir = os.path.abspath(args.outdir)
     if not os.path.isdir(outdir):
-        if me == ROOT: os.mkdir(outdir)
+        if me == ROOT:
+            os.mkdir(outdir)
 
     # check that all input files are present (only on the ROOT proc)
     if me == ROOT:
@@ -465,7 +478,8 @@ if __name__ == "__main__":
     for i in range(ntemps):
         this_intrajfn = intrajfns[i]
         x = this_intrajfn + ".gz"
-        if os.path.isfile(this_intrajfn): continue
+        if os.path.isfile(this_intrajfn):
+            continue
         elif os.path.isfile(this_intrajfn + ".gz"):
             intrajfns[i] = this_intrajfn + ".gz"
         elif os.path.isfile(this_intrajfn + ".bz2"):
@@ -476,42 +490,41 @@ if __name__ == "__main__":
     # set output filenames
     outprefix = os.path.join(outdir, traj_prefix.split('/')[-1])
-    outtrajfns = ["%s.%3.2f.lammpstrj.gz" % \
-                  (outprefix, _get_nearest_temp(temps, t)) \
+    outtrajfns = ["%s.%3.2f.lammpstrj.gz" %
+                  (outprefix, _get_nearest_temp(temps, t))
                   for t in out_temps]
-    byteindfns = [os.path.join(outdir, ".byteind_%d.gz" % k) \
+    byteindfns = [os.path.join(outdir, ".byteind_%d.gz" % k)
                   for k in range(ntemps)]
 
     frametuplefn = outprefix + '.frametuple.pickle'
     if get_logw:
         logwfn = outprefix + ".logw.pickle"
 
-
     # get a list of all frames at a particular temp visited by each replica
     # this is fast so run only on ROOT proc.
     master_frametuple_dict = {}
     if me == ROOT:
-        master_frametuple_dict = get_replica_frames(logfn = logfn,
-                                                    temps = temps,
-                                                    nswap = nswap,
-                                                    writefreq = writefreq)
+        master_frametuple_dict = get_replica_frames(logfn=logfn,
+                                                    temps=temps,
+                                                    nswap=nswap,
+                                                    writefreq=writefreq)
         # save to a pickle from the ROOT proc
         with open(frametuplefn, 'wb') as of:
             pickle.dump(master_frametuple_dict, of)
 
     # broadcast to all procs
-    master_frametuple_dict = comm.bcast(master_frametuple_dict, root = ROOT)
+    master_frametuple_dict = comm.bcast(master_frametuple_dict, root=ROOT)
 
     # define a chunk of replicas to process on each proc
     CHUNKSIZE_1 = int(ntemps/nproc)
     if me < nproc - 1:
-        my_rep_inds = range( (me*CHUNKSIZE_1), (me+1)*CHUNKSIZE_1 )
+        my_rep_inds = range((me*CHUNKSIZE_1), (me+1)*CHUNKSIZE_1)
     else:
-        my_rep_inds = range( (me*CHUNKSIZE_1), ntemps )
+        my_rep_inds = range((me*CHUNKSIZE_1), ntemps)
 
     # get byte indices from replica (un-ordered) trajs. in parallel
-    get_byte_index(rep_inds = my_rep_inds,
-                   byteindfns = byteindfns,
-                   intrajfns = intrajfns)
+    get_byte_index(rep_inds=my_rep_inds,
+                   byteindfns=byteindfns,
+                   intrajfns=intrajfns)
 
     # block until all procs have finished
     comm.barrier()
@@ -520,7 +533,7 @@ if __name__ == "__main__":
     infobjs = [readwrite(i, "rb") for i in intrajfns]
 
     # open all byteindex files
-    byte_inds = dict( (i, np.loadtxt(fn)) for i, fn in enumerate(byteindfns) )
+    byte_inds = dict((i, np.loadtxt(fn)) for i, fn in enumerate(byteindfns))
 
     # define a chunk of output trajs. to process for each proc.
     # # of reordered trajs. to write may be less than the total # of replicas
@@ -536,38 +549,38 @@ if __name__ == "__main__":
     else:
         nproc_active = nproc
     if me < nproc_active-1:
-        my_temp_inds = range( (me*CHUNKSIZE_2), (me+1)*CHUNKSIZE_1 )
+        my_temp_inds = range((me*CHUNKSIZE_2), (me+1)*CHUNKSIZE_2)
     else:
-        my_temp_inds = range( (me*CHUNKSIZE_2), n_out_temps)
+        my_temp_inds = range((me*CHUNKSIZE_2), n_out_temps)
 
     # retire the excess procs
     # don't forget to close any open file objects
     if me >= nproc_active:
-        for fobj in infobjs: fobj.close()
+        for fobj in infobjs:
+            fobj.close()
         exit()
 
     # write reordered trajectories to disk from active procs in parallel
-    write_reordered_traj(temp_inds = my_temp_inds,
-                         byte_inds = byte_inds,
-                         outtemps = out_temps, temps = temps,
-                         frametuple_dict = master_frametuple_dict,
-                         nprod = nprod, writefreq = writefreq,
-                         outtrajfns = outtrajfns,
-                         infobjs = infobjs)
+    write_reordered_traj(temp_inds=my_temp_inds,
+                         byte_inds=byte_inds,
+                         outtemps=out_temps, temps=temps,
+                         frametuple_dict=master_frametuple_dict,
+                         nprod=nprod, writefreq=writefreq,
+                         outtrajfns=outtrajfns,
+                         infobjs=infobjs)
 
     # calculate canonical log-weights if requested
     # usually this is very fast so retire all but the ROOT proc
-    if not get_logw: exit()
-    if not me == ROOT: exit()
-
-    logw = get_canonical_logw(enefn = enefn, temps = temps,
-                              frametuple_dict = master_frametuple_dict,
-                              nprod = nprod, writefreq = writefreq,
-                              kB = kB)
+    if not get_logw:
+        exit()
+    if not me == ROOT:
+        exit()
+    logw = get_canonical_logw(enefn=enefn, temps=temps,
+                              frametuple_dict=master_frametuple_dict,
+                              nprod=nprod, writefreq=writefreq,
+                              kB=kB)
 
     # save the logweights to a pickle
     with open(logwfn, 'wb') as of:
         pickle.dump(logw, of)
-
-
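The Python script in the diff above partitions work across MPI ranks twice with the same pattern: once over replicas (`CHUNKSIZE_1`, `my_rep_inds`) and once over output temperatures (`CHUNKSIZE_2`, `my_temp_inds`). The sketch below factors that pattern into a standalone helper to show why each range must use a single chunk size for both its start and stop; the function name `chunk_indices` is illustrative and not part of the script.

```python
def chunk_indices(me, nproc, n):
    """Indices assigned to rank `me` of `nproc` ranks over `n` work items.

    Each rank takes a contiguous block of floor(n/nproc) items; the last
    rank also absorbs the remainder, mirroring the my_rep_inds /
    my_temp_inds logic in the diff.
    """
    chunksize = n // nproc
    if me < nproc - 1:
        return range(me * chunksize, (me + 1) * chunksize)
    # last rank picks up whatever is left over
    return range(me * chunksize, n)
```

Using the same chunk size at both ends of the range is what makes the blocks tile `0..n-1` exactly once with no gaps or overlaps; mixing two different chunk sizes in one `range` (as in the `(me*CHUNKSIZE_2)` / `(me+1)*CHUNKSIZE_1` combination) can skip or duplicate indices whenever the two sizes differ.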