Getting started

This page gives a short overview of what the plugin does and a brief example of how to use it.

Installation

Use the following commands to install the plugin:

git clone https://github.com/JuDFTteam/aiida-spirit
cd aiida-spirit
pip install -e .  # also installs aiida, if missing (but not postgres)
#pip install -e ".[pre-commit,testing]" # install extras for more features
verdi quicksetup  # better to set up a new profile
verdi plugin list aiida.calculations  # should now show your calculation plugins

Then use verdi code setup with the spirit input plugin to set up an AiiDA code for aiida-spirit.
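
Once the code is configured it can be loaded from Python. A minimal sketch, assuming the hypothetical code label spirit@localhost chosen during verdi code setup:

from aiida import load_profile
from aiida.orm import load_code

load_profile()                         # load the default AiiDA profile
code = load_code('spirit@localhost')   # label@computer as chosen during verdi code setup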

Usage

A quick demo of how to submit a calculation:

verdi daemon start         # make sure the daemon is running
cd examples
verdi run example_LLG.py   # submit test calculation
verdi process list -a      # check status of calculation

If you have already set up your own aiida_spirit code using verdi code setup, you may want to try the following command:

spirit-submit  # uses aiida_spirit.cli
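
Calculations can also be built and submitted directly through the AiiDA Python API. The following is a minimal sketch, not a complete working script: the entry point name spirit, the code label spirit@localhost, the jij_data array name and column layout, and all numerical values are assumptions or placeholders (the inputs are documented under Available calculations below):

import numpy as np
from aiida import load_profile
from aiida.engine import submit
from aiida.orm import ArrayData, Dict, StructureData, load_code
from aiida.plugins import CalculationFactory

load_profile()

SpiritCalculation = CalculationFactory('spirit')  # entry point name is an assumption

# placeholder structure: a single cubic Fe-like cell
structure = StructureData(cell=[[2.87, 0.0, 0.0], [0.0, 2.87, 0.0], [0.0, 0.0, 2.87]])
structure.append_atom(position=(0.0, 0.0, 0.0), symbols='Fe')

# pairwise exchange interactions; the array name and column layout are assumptions,
# check the examples shipped with the plugin for the exact format
jij_data = ArrayData()
jij_data.set_array('Jij_expanded', np.array([[0, 0, 1, 0, 0, 10.0]]))

builder = SpiritCalculation.get_builder()
builder.code = load_code('spirit@localhost')   # hypothetical code label
builder.structure = structure
builder.jij_data = jij_data
builder.parameters = Dict(dict={'llg_n_iterations': 20000})  # spirit input keys, see the spirit docs
builder.run_options = Dict(dict={'simulation_method': 'LLG', 'solver': 'Depondt'})
builder.metadata.options.resources = {'num_machines': 1}
builder.metadata.options.max_wallclock_seconds = 3600

node = submit(builder)
print(f'Submitted SpiritCalculation<{node.pk}>')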

Available calculations

aiida_spirit.calculations.SpiritCalculation (CalcJob)

Run a Spirit calculation from user-defined inputs.

Inputs:

  • add_to_retrieved, List, optional – List of strings specifying additional files that should be retrieved.
  • code, Code, optional – The Code to use for this job. This input is required, unless the remote_folder input is specified, which means an existing job is being imported and no code will actually be run.
  • defects, ArrayData, optional – Use a node that specifies the defects information for all spins in the spirit supercell. This is an ArrayData object that should define the defects in the ‘defects’ array (columns should be i, da, db, dc, itype, where itype<0 means vacancy). The atom type information can be given with the atom_type array in the defects ArrayData, which has the columns (iatom, atom_type, mu_s, concentration). See https://spirit-docs.readthedocs.io/en/latest/core/docs/Input.html for more information on defects in spirit. A construction sketch follows this list.

  • initial_state, ArrayData, optional – Use a node that specifies the initial directions of all spins in the spirit supercell. This is an ArrayData object that should define the ‘initial_state’ array (columns should be x, y, z). This overwrites the configuration input! See the sketch after this list.

  • jij_data, ArrayData, required – Use a node that specifies the full list of pairwise interactions
  • metadata, Namespace
    Namespace Ports
    • call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
    • computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
    • description, str, optional, non_db – Description to set on the process node.
    • dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
    • label, str, optional, non_db – Label to set on the process node.
    • options, Namespace
      Namespace Ports
      • account, str, optional, non_db – Set the account to use for the queue on the remote computer
      • additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
      • append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
      • custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
      • environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
      • environment_variables_double_quotes, bool, optional, non_db – If set to True, use double quotes instead of single quotes to escape the environment variables specified in environment_variables.
      • import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
      • input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
      • max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
      • max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
      • mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
      • output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
      • parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
      • prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
      • priority, str, optional, non_db – Set the priority of the job to be queued
      • qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
      • queue_name, str, optional, non_db – Set the name of the queue on the remote computer
      • rerunnable, bool, optional, non_db – Determines if the calculation can be requeued / rerun.
      • resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
      • scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
      • scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
      • stash, Namespace – Optional directives to stash files after the calculation job has completed.
        Namespace Ports
        • source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
        • stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
        • target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
      • submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
      • withmpi, bool, optional, non_db – Set the calculation to use mpi
    • store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
  • parameters, Dict, optional – Dict node used to control the input parameters for spirit (see https://spirit-docs.readthedocs.io/en/latest/core/docs/Input.html).

  • pinning, ArrayData, optional – Use a node that specifies the full pinning information for all spins in the spirit supercell that should be pinned (i.e. taking into account the n_basis_cells input from the parameters input node). This is an ArrayData object that should contain an array called ‘pinning’ with the columns (i, da, db, dc, Sx, Sy, Sz). See https://spirit-docs.readthedocs.io/en/latest/core/docs/Input.html#pinning-a-name-pinning-a for more information on pinning in spirit. See the sketch after this list.

  • remote_folder, RemoteData, optional – Remote directory containing the results of an already completed calculation job without AiiDA. The inputs should be passed to the CalcJob as normal but instead of launching the actual job, the engine will recreate the input files and then proceed straight to the retrieve step where the files of this RemoteData will be retrieved as if it had been actually launched through AiiDA. If a parser is defined in the inputs, the results are parsed and attached as output nodes as usual.
  • run_options, Dict, optional – Dict node used to control the spirit run (e.g. simulation_method=LLG, solver=Depondt). The configuration input specifies the initial configuration (the default is to start from a random configuration; plus_z is also possible, to start from all spins pointing in +z). The post_processing string is added to the run script and allows adding e.g. quantities.get_topological_charge(p_state) for the calculation of the topological charge of a 2D system.

  • structure, StructureData, required – Use a node that specifies the input crystal structure
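
The optional initial_state, pinning and defects inputs above are plain ArrayData nodes. The following is a minimal construction sketch, assuming the array names and column layouts quoted in the descriptions above; the numerical values are placeholders only:

import numpy as np
from aiida.orm import ArrayData

# initial spin directions: one (x, y, z) row per spin in the supercell
initial_state = ArrayData()
initial_state.set_array('initial_state', np.array([[0.0, 0.0, 1.0],
                                                   [0.0, 0.0, -1.0]]))

# pinned spins: columns (i, da, db, dc, Sx, Sy, Sz)
pinning = ArrayData()
pinning.set_array('pinning', np.array([[0, 0, 0, 0, 0.0, 0.0, 1.0]]))

# defects: columns (i, da, db, dc, itype); itype<0 marks a vacancy
defects = ArrayData()
defects.set_array('defects', np.array([[0, 1, 0, 0, -1]]))
# optional atom type information: columns (iatom, atom_type, mu_s, concentration)
defects.set_array('atom_type', np.array([[0, 0, 2.2, 1.0]]))

These nodes are then passed as the corresponding inputs of the SpiritCalculation, e.g. builder.initial_state = initial_state.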

Outputs:

  • atom_types, ArrayData, optional – list of atom types used in the simulation (-1 indicates vacancies).
  • energies, ArrayData, optional – energy convergence
  • magnetization, ArrayData, optional – initial and final magnetization
  • monte_carlo, ArrayData, optional – sampled quantities from a monte carlo run
  • output_parameters, Dict, required – Parsed values from the spirit stdout, stored as Dict for quick access.
  • remote_folder, RemoteData, required – Input files necessary to run the process will be stored in this folder node.
  • remote_stash, RemoteStashData, optional – Contents of the stash.source_list option are stored in this remote folder after job completion.
  • retrieved, FolderData, required – Files that are retrieved by the daemon will be stored in this node. By default the stdout and stderr of the scheduler will be added, but one can add more by specifying them in CalcInfo.retrieve_list.
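
Once the job has finished, the parsed results can be accessed from the calculation node. A short sketch, assuming a finished SpiritCalculation with PK 1234 (placeholder); the dictionary keys and array names depend on the type of run:

from aiida import load_profile
from aiida.orm import load_node

load_profile()
calc = load_node(1234)  # PK of a finished SpiritCalculation (placeholder)

# parsed values from the spirit stdout
print(calc.outputs.output_parameters.get_dict())

# arrays stored in the magnetization and energies outputs
print(calc.outputs.magnetization.get_arraynames())
print(calc.outputs.energies.get_arraynames())

Note that magnetization, energies, atom_types and monte_carlo are optional outputs and are only present when the corresponding data was produced by the run.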