This page gives a short guide to what the aiida-spirit plugin does and a short example of how to use it.
Use the following commands to install the plugin:

```shell
git clone https://github.com/JuDFTteam/aiida-spirit .
cd aiida-spirit
pip install -e .                       # also installs aiida, if missing (but not postgres)
#pip install -e .[pre-commit,testing]  # install extras for more features
verdi quicksetup                       # better to set up a new profile
verdi calculation plugins              # should now show your calculation plugins
```
Use `verdi code setup` with the `spirit` input plugin to set up an AiiDA code for aiida-spirit.
A quick demo of how to submit a calculation:

```shell
verdi daemon start         # make sure the daemon is running
cd examples
verdi run example_LLG.py   # submit test calculation
verdi calculation list -a  # check status of calculation
```
If you have already set up your own aiida_spirit code using `verdi code setup`, you may want to try the following command:

```shell
spirit-submit  # uses aiida_spirit.cli
```
Run a Spirit calculation from user-defined inputs. The calculation accepts the following inputs:
- code, Code, required – The Code to use for this job.
- defects, ArrayData, optional – Use a node that specifies the defect information for all spins in the spirit supercell. This ArrayData object should define the defects in a defects array (columns: i, da, db, dc, itype, where itype<0 means vacancy). The atom type information can be given with an atom_type array in the same ArrayData, which has the columns (iatom, atom_type, mu_s, concentration). See https://spirit-docs.readthedocs.io/en/latest/core/docs/Input.html for more information on defects in spirit.
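The two arrays held by a defects node could look like this (a minimal sketch with plain numpy arrays; all values are illustrative, not taken from a real calculation):

```python
import numpy as np

# `defects` array, columns: i, da, db, dc, itype (itype < 0 means vacancy)
defects = np.array([
    [0, 0, 0, 0, -1],  # vacancy on site 0 of unit cell (0, 0, 0)
    [0, 1, 0, 0,  2],  # defect of atom type 2 on site 0 of the neighboring cell
])

# `atom_type` array, columns: iatom, atom_type, mu_s, concentration
atom_type = np.array([
    [0, 2, 2.2, 0.5],  # type 2 on site 0: moment 2.2 mu_B, 50% concentration
])
```

In an actual input these arrays would be attached to an `orm.ArrayData` node via `set_array('defects', defects)` and `set_array('atom_type', atom_type)` before being passed to the calculation.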
- jij_data, ArrayData, required – Use a node that specifies the full list of pairwise interactions
- metadata, Namespace
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in kilobytes) to request from the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock time (in seconds) to request from the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params extra_params … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
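As an illustration, a typical resources dictionary for AiiDA's built-in schedulers (e.g. SLURM or PBS) could look like the following; the exact keys depend on the scheduler plugin in use:

```python
# Example `options.resources` dictionary for a scheduler that accepts
# num_machines / num_mpiprocs_per_machine (values are illustrative).
resources = {
    'num_machines': 1,              # number of compute nodes
    'num_mpiprocs_per_machine': 4,  # MPI ranks per node
}
```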
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of `aiida.common.datastructures.StashMode`.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use MPI
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parameters, Dict, optional – Dict node that controls the input parameters for spirit
- pinning, ArrayData, optional – Use a node that specifies the full pinning information for all spins in the spirit supercell that should be pinned (i.e. taking into account the n_basis_cells input from the parameters input node). This ArrayData object should have an array called pinning with the columns (i, da, db, dc, Sx, Sy, Sz). See https://spirit-docs.readthedocs.io/en/latest/core/docs/Input.html#pinning-a-name-pinning-a for more information on pinning in spirit.
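The pinning array could be built like this (a minimal sketch with plain numpy; the values are illustrative):

```python
import numpy as np

# `pinning` array, columns: i, da, db, dc, Sx, Sy, Sz
# Pins the spin on site 0 of unit cell (0, 0, 0) along +z.
pinning = np.array([
    [0, 0, 0, 0, 0.0, 0.0, 1.0],
])
```

As with the defects input, this array would be stored in an `orm.ArrayData` node via `set_array('pinning', pinning)`.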
- run_options, Dict, optional – Dict node that controls the spirit run (e.g. simulation_method=LLG, solver=Depondt). The configuration input specifies the initial spin configuration (the default is to start from a random configuration; plus_z is also possible, starting from all spins pointing in +z).
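A hypothetical run_options dictionary, using only the option names mentioned above (check the exact keys and allowed values against the plugin source):

```python
# Sketch of a run_options dictionary (illustrative values).
run_options = {
    'simulation_method': 'LLG',  # Landau-Lifshitz-Gilbert dynamics
    'solver': 'Depondt',         # time-integration solver
    'configuration': 'plus_z',   # start from all spins along +z (default: random)
}
```

When passing this to the calculation, it would be wrapped in an AiiDA `orm.Dict` node, e.g. `orm.Dict(dict=run_options)`.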
- structure, StructureData, required – Use a node that specifies the input crystal structure
The calculation produces the following outputs:

- energies, ArrayData, required – energy convergence data
- magnetization, ArrayData, required – initial and final magnetization
- output_parameters, Dict, required – Parsed values from the spirit stdout, stored as Dict for quick access.
- remote_folder, RemoteData, required – Input files necessary to run the process will be stored in this folder node.
- remote_stash, RemoteStashData, optional – Contents of the stash.source_list option are stored in this remote folder after job completion.
- retrieved, FolderData, required – Files that are retrieved by the daemon will be stored in this node. By default the stdout and stderr of the scheduler will be added, but one can add more by specifying them in CalcInfo.retrieve_list.