aiida.orm.implementation.general.calculation.job package

class aiida.orm.implementation.general.calculation.job.AbstractJobCalculation[source]

Bases: aiida.orm.implementation.general.calculation.AbstractCalculation

This class provides the definition of an AiiDA calculation that is run remotely on a job scheduler.

__module__ = 'aiida.orm.implementation.general.calculation.job'
_add_outputs_from_cache(cache_node)[source]
_cacheable = True
classmethod _get_all_with_state(state, computer=None, user=None, only_computer_user_pairs=False, only_enabled=True, limit=None)[source]

Filter all calculations with a given state.

Issue a warning if the state is not in the list of valid states.

Parameters:
  • state (str) – The state to be used to filter (should be a string among those defined in aiida.common.datastructures.calc_states)
  • computer – a Django DbComputer entry, or a Computer object, of a computer in the DbComputer table. A string for the hostname is also valid.
  • user – a Django entry (or its pk) of a user in the DbUser table; if present, the results are restricted to calculations of that specific user
  • only_computer_user_pairs (bool) – if False (default) return a queryset where each element is a suitable instance of Node (it should be an instance of Calculation, if everything goes right!) If True, return only a list of tuples, where each tuple is in the format (‘dbcomputer__id’, ‘user__id’) [where the IDs are the IDs of the respective tables]
  • limit (int) – Limit the number of rows returned
Returns:

a list of calculation objects matching the filters.
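
Illustrative usage sketch of this internal helper (not part of the reference itself), assuming a configured AiiDA profile; the hostname string is hypothetical:

    from aiida.orm import JobCalculation
    from aiida.common.datastructures import calc_states

    # all calculations currently with the scheduler on a given computer
    running = JobCalculation._get_all_with_state(
        state=calc_states.WITHSCHEDULER,
        computer='cluster.example.com',  # hostname string; a Computer object also works
    )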

_get_authinfo()[source]
classmethod _get_calculation_info_row(res, projections, times_since=None)[source]

Get a row of information about a calculation.

Parameters:
  • res – Results from the calculations query.
  • times_since (datetime.datetime) – Times are relative to this timepoint; if None, absolute times will be used.
  • projections (list) – The projections used in the calculation query
Returns:

A list of strings with information about the calculation.

_get_last_jobinfo()[source]

Get the last information retrieved from the scheduler about the status of the job.

Returns:a JobInfo object (that closely resembles a dictionary) or None.
_get_linkname_retrieved()[source]

Get the linkname of the retrieved data folder object.

Returns:a string
_get_remote_workdir()[source]

Get the path to the remote (on cluster) scratch folder of the calculation.

Returns:a string with the remote path
_get_retrieve_list()[source]

Get the list of files/directories to be retrieved from the cluster. Their paths are relative to the remote working directory path.

Returns:a list of strings for file/directory names
_get_retrieve_singlefile_list()[source]

Get the list of files to be retrieved from the cluster and stored as SinglefileData nodes (or subclasses of it). Their paths are relative to the remote working directory path.

Returns:a list of lists of strings for 1) link names, 2) the Singlefile subclass name, 3) file names
_get_retrieve_temporary_list()[source]

Get the list of files/directories to be retrieved from the cluster and kept only temporarily during parsing. Their paths are relative to the remote working directory path.

Returns:a list of strings for file/directory names
_get_scheduler_lastchecktime()[source]

Return the time of the last update of the scheduler state by the daemon, or None if it was never set.

Returns:a datetime object.
_get_state_string()[source]

Return a string that is correct also when the state is imported (in this case, the string will be in the format IMPORTED/ORIGSTATE, where ORIGSTATE is the original state from the node attributes).

_get_transport()[source]

Return the transport for this calculation.

_hash_ignored_attributes

A class that, when used as a decorator, works as if the two decorators @property and @classmethod were applied together (i.e., the object works as a property, both for the class and for any of its instances, and is called with the class cls rather than with the instance as its first argument).

_init_internal_params()[source]

Define here internal parameters that should be defined right after the __init__. This function is actually called by the __init__.

Note:if you inherit this function, ALWAYS remember to call super()._init_internal_params() as the first thing in your inherited function.
_is_new()[source]

Get whether the calculation is in the NEW status.

Returns:a boolean
_is_running()[source]

Get whether the calculation is in a running state, i.e. one of TOSUBMIT, SUBMITTING, WITHSCHEDULER, COMPUTED, RETRIEVING or PARSING.

Returns:a boolean
_linking_as_output(dest, link_type)[source]

An output of a JobCalculation can only be set when the calculation is in the SUBMITTING or RETRIEVING or PARSING state. (during SUBMITTING, the execmanager adds a link to the remote folder; all other links are added while in the retrieving phase).

Note:Further checks, such as that the output data type is ‘Data’, are done in the super() class.
Parameters:dest – a Data object instance of the database
Raise:ValueError if a link from self to dest is not allowed.
classmethod _list_calculations(states=None, past_days=None, groups=None, all_users=False, pks=(), relative_ctime=True, with_scheduler_state=False, order_by=None, limit=None, filters=None, projections=('pk', 'state', 'ctime', 'sched', 'computer', 'type'), raw=False)[source]

Print a description of the AiiDA calculations.

Parameters:
  • states – a list of strings with states. If set, print only the calculations in those states; otherwise show all. Default = None.
  • past_days – If specified, show only calculations that were created in the given number of past days.
  • groups – If specified, show only calculations belonging to these groups
  • pks – if specified, must be a list of integers, and only calculations within that list are shown. Otherwise, all calculations are shown. If specified, sets state to None and ignores the value of the past_days option.
  • relative_ctime – if True, print the creation time relative to now (e.g. 2 days ago). Default = True
  • filters – a dictionary of filters to be passed to the QueryBuilder query
  • all_users – if True, list calculations belonging to all users. Default = False
  • raw – Only print the query result, without any headers, footers or other additional information
Returns:

a string with description of calculations.
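
Illustrative sketch (not part of the reference itself) of calling this internal classmethod directly, roughly what verdi calculation list builds on; whether the result is returned or printed may differ between AiiDA versions:

    from aiida.orm import JobCalculation

    # calculations of the current user created in the last day, with relative times
    report = JobCalculation._list_calculations(past_days=1, relative_ctime=True)
    # per the Returns field above, a string describing the matching calculations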

_prepare_for_submission(tempfolder, inputdict)[source]

This is the routine to be called when you want to create the input files and related stuff with a plugin.

Parameters:
  • tempfolder – an aiida.common.folders.Folder subclass where the plugin should put all its files.
  • inputdict – a dictionary where each key is an input link name and each value an AiiDA node, as it would be returned by the self.get_inputs_dict() method (with the Code!). The advantage of having this explicitly passed is that it allows one to choose externally which nodes to use, and whether to also use unstored nodes, e.g. in a test_submit phase.

TODO: document what it has to return (probably a CalcInfo object) and what the behavior on the tempfolder is.
_presubmit(folder, use_unstored_links=False)[source]

Prepares the calculation folder with all inputs, ready to be copied to the cluster.

Parameters:
  • folder – a SandboxFolder, empty in input, that will be filled with calculation input files and the scheduling script.
  • use_unstored_links – if set to True, the presubmit will try to prepare the calculation using also unstored nodes linked to the Calculation only in the cache.
Return calcinfo:
 

the CalcInfo object containing the information needed by the daemon to handle operations.

_raw_input_folder

Get the input folder object.

Returns:the input folder object.
Raise:NotExistent: if the raw folder hasn’t been created yet

_remove_link_from(label)[source]

Remove a link. Only possible if the calculation is in state NEW.

Parameters:label (str) – Name of the link to remove.

_replace_link_from(src, label, link_type=LinkType.UNSPECIFIED)[source]

Replace a link. Add the additional constraint that this is only possible if the calculation is in state NEW.

Parameters:
  • src – a node of the database. It cannot be a Calculation object.
  • label (str) – Name of the link.
_set_defaults

Return the default parameters to set. It is done as a property so that it can read the default parameters defined in _init_internal_params.

Note:It is a property because in this way, e.g. the parser_name is taken from the actual subclass of calculation, and not from the parent Calculation class
_set_job_id(job_id)[source]

The job id is always set as a string.

_set_last_jobinfo(last_jobinfo)[source]
_set_linkname_retrieved(linkname)[source]

Set the linkname of the retrieved data folder object.

Parameters:linkname – a string.
_set_remote_workdir(remote_workdir)[source]
_set_retrieve_list(retrieve_list)[source]
_set_retrieve_singlefile_list(retrieve_singlefile_list)[source]

Set the list of information for the retrieval of singlefiles

_set_retrieve_temporary_list(retrieve_temporary_list)[source]

Set the list of paths that are to be retrieved for parsing and deleted as soon as the parsing has been completed.

_set_scheduler_state(state)[source]
_set_state(state)[source]

Set the state of the calculation.

Set it in the DbCalcState table to also have the uniqueness check. Moreover (except for the IMPORTED state) it is also stored in the 'state' attribute, which is useful for knowing the state after importing, and for faster querying.

Todo

Add further checks to enforce that the states are set in order?

Parameters:state – a string with the state. This must be a valid string, from aiida.common.datastructures.calc_states.
Raise:ModificationNotAllowed if the given state was already set.
_store_raw_input_folder(folder_path)[source]

Copy the content of the folder internally, in a subfolder called ‘raw_input’

Parameters:folder_path – the path to the folder from which the content should be taken
_updatable_attributes = ('sealed', 'paused', 'checkpoints', 'exception', 'exit_message', 'exit_status', '_process_label', 'process_state', 'process_status', 'job_id', 'scheduler_state', 'scheduler_lastchecktime', 'last_jobinfo', 'remote_workdir', 'retrieve_list', 'retrieve_temporary_list', 'retrieve_singlefile_list', 'state')
_validate()[source]

Verify if all the input nodes are present and valid.

Raise:ValidationError: if invalid parameters are found.

add_link_from(src, label=None, link_type=LinkType.INPUT)[source]

Add a link with a code as destination. Add the additional constraint that this is only possible if the calculation is in state NEW.

You can use the parameters of the base Node class, in particular the label parameter to label the link.

Parameters:
  • src – a node of the database. It cannot be a Calculation object.
  • label (str) – Name of the link. Default=None
  • link_type – The type of link, must be one of the enum values from LinkType
compound_projection_map = {'job_state': ('calculation', ('state', 'attributes.scheduler_state')), 'state': ('calculation', ('attributes.process_state', 'attributes.exit_status'))}
exit_status_enum[source]

alias of JobCalculationExitStatus

exit_status_label

Return the label belonging to the exit status of the Calculation

Returns:the exit status label
failed

Returns whether the Calculation has failed, which means that it terminated nominally but it had a non-zero exit status

Returns:True if the calculation has failed, False otherwise
Return type:bool
finished_ok

Returns whether the Calculation has finished successfully, which means that it terminated nominally and had a zero exit status indicating a successful execution

Returns:True if the calculation has finished successfully, False otherwise
Return type:bool
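
Illustrative sketch of using these two properties on a terminated calculation; the pk is hypothetical:

    from aiida.orm import load_node

    calc = load_node(1234)  # hypothetical pk of a terminated JobCalculation
    if calc.finished_ok:
        print('finished successfully with exit status 0')
    elif calc.failed:
        print('terminated nominally but failed:', calc.exit_status_label)
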
get_account()[source]

Get the account on the cluster.

Returns:a string or None.
get_append_text()[source]

Get the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution.

classmethod get_builder()[source]

Return a JobProcessBuilder instance, tailored for this calculation class

This builder is a mapping of the inputs of the JobCalculation class, and supports tab-completion, automatic validation when setting values, as well as automated docstrings for each input

Returns:JobProcessBuilder instance
get_builder_restart()[source]

Return a JobProcessBuilder instance, tailored for this calculation instance

This builder is a mapping of the inputs of the JobCalculation class, and supports tab-completion, automatic validation when setting values, as well as automated docstrings for each input.

The fields of the builder will be pre-populated with all the inputs recorded for this instance, as well as all the options that were explicitly set for this calculation instance.

This builder can then be launched directly to effectively run a duplicate calculation. More usefully, after changing one or more inputs, it can be used to launch a similar calculation that takes this already completed calculation as its starting point.

Returns:JobProcessBuilder instance
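
A hedged restart sketch, assuming a finished calculation with the hypothetical pk 1234; the builder fields mirror the inputs and options documented on this page and can be inspected interactively before relaunching with the launch utilities of this AiiDA series:

    from aiida.orm import load_node

    completed = load_node(1234)               # hypothetical pk of a finished JobCalculation
    builder = completed.get_builder_restart()

    # tab-completion and automated docstrings make the recorded inputs and
    # options discoverable; printing gives an overview before adjusting them
    print(builder)
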
get_custom_scheduler_commands()[source]

Return a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. See also the documentation of the corresponding set_ method.

Returns:the custom scheduler command, or an empty string if no custom command was defined.
get_desc()[source]

Returns a string with information retrieved from a JobCalculation node’s properties.

get_environment_variables()[source]

Return a dictionary of the environment variables that are set for this calculation.

Return an empty dictionary if no special environment variables have to be set for this calculation.

get_hash(ignore_errors=True, ignored_folder_content=('raw_input', ), **kwargs)[source]
get_import_sys_environment()[source]

Check whether the system environment is loaded in the submission script.

Returns:a boolean. If True, the system environment will be loaded.
get_job_id()[source]

Get the scheduler job id of the calculation.

Returns:a string
get_max_memory_kb()[source]

Get the memory (in kilobytes) requested from the scheduler.

Returns:an integer
get_max_wallclock_seconds()[source]

Get the max wallclock time in seconds requested from the scheduler.

Returns:an integer
Return type:int
get_mpirun_extra_params()[source]

Return a list of strings, that are the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x

Return an empty list if no parameters have been defined.

get_option(name, only_actually_set=False)[source]

Return the value of an option that was set for this JobCalculation

Parameters:
  • name – the option name
  • only_actually_set – when False, return the default value even if the option has not been explicitly set
Returns:

the option value or None

Raises:

ValueError for unknown option

get_options(only_actually_set=False)[source]

Return the dictionary of options set for this JobCalculation

Parameters:only_actually_set – when False, return the default value even if the option has not been explicitly set
Returns:dictionary of the options and their values
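
Illustrative sketch of the two retrieval modes, assuming calc is an instance of a JobCalculation subclass; withmpi has a default of True in the options table below:

    # the default is returned when the option was never explicitly set
    calc.get_option('withmpi')                           # -> True (the default)
    calc.get_option('withmpi', only_actually_set=True)   # -> None

    calc.set_option('withmpi', False)
    calc.get_options(only_actually_set=True)             # -> includes 'withmpi': False
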
get_parser_name()[source]

Return a string locating the module that contains the output parser of this calculation, that will be searched in the ‘aiida/parsers/plugins’ directory. None if no parser is needed/set.

Returns:a string.
get_parserclass()[source]

Return the output parser object for this calculation, or None if no parser is set.

Returns:a Parser class.
Raise:MissingPluginError from ParserFactory if no plugin is found.
get_prepend_text()[source]

Get the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution.

get_priority()[source]

Get the priority, if set, of the job on the cluster.

Returns:a string or None
get_qos()[source]

Get the quality of service on the cluster.

Returns:a string or None.
get_queue_name()[source]

Get the name of the queue on the cluster.

Returns:a string or None.
get_resources(full=False)[source]

Returns the dictionary of the job resources set.

Parameters:full – if True, also add the default values, e.g. default_mpiprocs_per_machine
Returns:a dictionary
get_retrieved_node()[source]

Return the retrieved data folder, if present.

Returns:the retrieved data folder object, or None if no such output node is found.
Raises:MultipleObjectsError – if more than one output node is found.
get_scheduler_error()[source]

Return the error output of the scheduler (a string) if the calculation has finished, an output node is present, and the output of the scheduler was retrieved.

Return None otherwise.

get_scheduler_output()[source]

Return the standard output of the scheduler (a string) if the calculation has finished, an output node is present, and the output of the scheduler was retrieved.

Return None otherwise.

get_scheduler_state()[source]

Return the status of the calculation according to the cluster scheduler.

Returns:a string.
get_state(from_attribute=False)[source]

Get the state of the calculation.

Note

the ‘most recent’ state is obtained using the logic in the aiida.common.datastructures.sort_states function.

Todo

Understand if the state returned when no state entry is found in the DB is the best choice.

Parameters:from_attribute – if set to True, read it from the attributes (the attribute is also set with set_state, unless the state is set to IMPORTED; in this way we can also see the state before storing).
Returns:a string. If from_attribute is True and no attribute is found, return None. If from_attribute is False and no entry is found in the DB, also return None.
get_withmpi()[source]

Get whether the job is set with mpi execution.

Returns:a boolean. Default=True.
kill()[source]

Kill a calculation on the cluster.

Can only be called if the calculation is in status WITHSCHEDULER.

The command tries to run the kill command as provided by the scheduler, and raises an exception if something goes wrong. No changes of calculation status are done (they will be done later by the calculation manager).

options = {
    'account': {'attribute_key': 'account', 'required': False, 'non_db': True, 'help': 'Set the account to use in for the queue on the remote computer', 'valid_type': (<type 'basestring'>,)},
    'append_text': {'attribute_key': 'append_text', 'help': 'Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution', 'default': '', 'required': False, 'valid_type': <type 'basestring'>, 'non_db': True},
    'computer': {'attribute_key': None, 'required': False, 'non_db': True, 'help': 'Set the computer to be used by the calculation', 'valid_type': <class 'aiida.orm.computers.Computer'>},
    'custom_scheduler_commands': {'attribute_key': 'custom_scheduler_commands', 'help': 'Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the `prepend_text` is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command', 'default': '', 'required': False, 'valid_type': (<type 'basestring'>,), 'non_db': True},
    'environment_variables': {'attribute_key': 'custom_environment_variables', 'help': 'Set a dictionary of custom environment variables for this calculation', 'default': {}, 'required': False, 'valid_type': <type 'dict'>, 'non_db': True},
    'import_sys_environment': {'attribute_key': 'import_sys_environment', 'help': 'If set to true, the submission script will load the system environment variables', 'default': True, 'required': False, 'valid_type': <type 'bool'>, 'non_db': True},
    'max_memory_kb': {'attribute_key': 'max_memory_kb', 'required': False, 'non_db': True, 'help': 'Set the maximum memory (in KiloBytes) to be asked to the scheduler', 'valid_type': <type 'int'>},
    'max_wallclock_seconds': {'attribute_key': 'max_wallclock_seconds', 'required': False, 'non_db': True, 'help': 'Set the wallclock in seconds asked to the scheduler', 'valid_type': <type 'int'>},
    'mpirun_extra_params': {'attribute_key': 'mpirun_extra_params', 'help': 'Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] ... exec.x', 'default': [], 'required': False, 'valid_type': (<type 'list'>, <type 'tuple'>), 'non_db': True},
    'parser_name': {'attribute_key': 'parser', 'required': False, 'non_db': True, 'help': 'Set a string for the output parser. Can be None if no output plugin is available or needed', 'valid_type': <type 'basestring'>},
    'prepend_text': {'attribute_key': 'prepend_text', 'help': 'Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution', 'default': '', 'required': False, 'valid_type': <type 'basestring'>, 'non_db': True},
    'priority': {'attribute_key': 'priority', 'required': False, 'non_db': True, 'help': 'Set the priority of the job to be queued', 'valid_type': <type 'basestring'>},
    'qos': {'attribute_key': 'qos', 'required': False, 'non_db': True, 'help': 'Set the quality of service to use in for the queue on the remote computer', 'valid_type': (<type 'basestring'>,)},
    'queue_name': {'attribute_key': 'queue_name', 'required': False, 'non_db': True, 'help': 'Set the name of the queue on the remote computer', 'valid_type': (<type 'basestring'>,)},
    'resources': {'default': {}, 'attribute_key': 'jobresource_params', 'help': 'Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.', 'valid_type': <type 'dict'>},
    'withmpi': {'attribute_key': 'withmpi', 'help': 'Set the calculation to use mpi', 'default': True, 'required': False, 'valid_type': <type 'bool'>, 'non_db': True}}
classmethod process()[source]

Return the JobProcess class constructed based on this JobCalculation class

Returns:JobProcess class
projection_map = {'calculation_state': ('calculation', 'state'), 'computer': ('computer', 'name'), 'ctime': ('calculation', 'ctime'), 'description': ('calculation', 'description'), 'exit_status': ('calculation', 'attributes.exit_status'), 'label': ('calculation', 'label'), 'mtime': ('calculation', 'mtime'), 'pk': ('calculation', 'id'), 'process_state': ('calculation', 'attributes.process_state'), 'scheduler_state': ('calculation', 'attributes.scheduler_state'), 'sealed': ('calculation', 'attributes.sealed'), 'type': ('calculation', 'type'), 'user': ('user', 'email'), 'uuid': ('calculation', 'uuid')}
res

To be used to get direct access to the parsed parameters.

Returns:an instance of the CalculationResultManager.
Note:a practical example on how it is meant to be used: let’s say that there is a key ‘energy’ in the dictionary of the parsed results which contains a list of floats. The command calc.res.energy will return such a list.
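
The example from the note above, written out as an illustrative sketch; the pk and the 'energy' key are hypothetical:

    from aiida.orm import load_node

    calc = load_node(1234)        # hypothetical pk of a parsed calculation
    print(dir(calc.res))          # list the available parsed keys (see __dir__ below)
    print(calc.res.energy)        # attribute-style access to the parsed list of floats
    print(calc.res['energy'])     # equivalent dictionary-style access (see __getitem__ below)
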
set_account(val)[source]

Set the account on the remote computer.

Parameters:val (str) – the account name
set_append_text(val)[source]

Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution.

Parameters:val – a (possibly multiline) string
set_custom_scheduler_commands(val)[source]

Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler.

The difference of this method with respect to the set_prepend_text is the position in the scheduler submission file where such text is inserted: with this method, the string is inserted before any non-scheduler command.

set_environment_variables(env_vars_dict)[source]

Set a dictionary of custom environment variables for this calculation.

Both keys and values must be strings.

In the remote-computer submission script, it’s going to export variables as export 'keys'='values'
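
For example, assuming calc is an unstored instance of a JobCalculation subclass (the variable names and values are hypothetical):

    calc.set_environment_variables({'OMP_NUM_THREADS': '4', 'MY_TOOL_HOME': '/opt/mytool'})
    # the submission script will then contain, one line per variable:
    #   export 'OMP_NUM_THREADS'='4'
    #   export 'MY_TOOL_HOME'='/opt/mytool'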

set_import_sys_environment(val)[source]

If set to true, the submission script will load the system environment variables.

Parameters:val (bool) – load the environment if True
set_max_memory_kb(val)[source]

Set the maximum memory (in kilobytes) to request from the scheduler.

Parameters:val – an integer. Default=None
set_max_wallclock_seconds(val)[source]

Set the wallclock time in seconds to request from the scheduler.

Parameters:val – An integer. Default=None
set_mpirun_extra_params(extra_params)[source]

Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x

Parameters:extra_params – must be a list of strings, one for each extra parameter
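
For example (the flags are hypothetical and depend on the MPI implementation in use):

    calc.set_mpirun_extra_params(['--map-by', 'socket'])
    # schematically, the resulting command line becomes:
    #   mpirun -np 8 --map-by socket ... exec.x
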
set_option(name, value)[source]

Set an option to the given value

Parameters:
  • name – the option name
  • value – the value to set
Raises:

ValueError for unknown option

Raises:

TypeError for values with invalid type

set_options(options)[source]

Set the options for this JobCalculation

Parameters:options – dictionary of option and their values to set
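
Illustrative sketch setting several of the options listed in the options table above, assuming calc is an unstored instance of a JobCalculation subclass (the queue name and resource keys are hypothetical and scheduler-dependent):

    calc.set_options({
        'resources': {'num_machines': 1},   # scheduler-plugin dependent, see set_resources
        'max_wallclock_seconds': 3600,
        'withmpi': True,
        'queue_name': 'debug',              # hypothetical queue name
    })
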
set_parser_name(parser)[source]

Set a string for the output parser. Can be None if no output plugin is available or needed.

Parameters:parser – a string identifying the module of the parser. Such module must be located within the folder ‘aiida/parsers/plugins’
set_prepend_text(val)[source]

Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution.

See also set_custom_scheduler_commands

Parameters:val – a (possibly multiline) string
set_priority(val)[source]

Set the priority of the job to be queued.

Parameters:val – the value of the priority, as accepted by the cluster scheduler.
set_qos(val)[source]

Set the quality of service on the remote computer.

Parameters:val (str) – the quality of service
set_queue_name(val)[source]

Set the name of the queue on the remote computer.

Parameters:val (str) – the queue name
set_resources(resources_dict)[source]

Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus, … This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler. (scheduler type can be found with calc.get_computer().get_scheduler_type() )
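
For example, the node-based keys below are commonly understood by scheduler plugins such as SLURM or PBS-like ones; this is an illustrative assumption, so check the documentation of your scheduler plugin for the valid keys:

    calc.set_resources({'num_machines': 2, 'num_mpiprocs_per_machine': 16})
    calc.get_resources(full=True)   # also reports defaults such as default_mpiprocs_per_machine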

set_withmpi(val)[source]

Set the calculation to use mpi.

Parameters:val – A boolean. Default=True
store(*args, **kwargs)[source]

Override the store() method to store also the calculation in the NEW state as soon as this is stored for the first time.

submit()[source]

Creates a ContinueJobCalculation and submits it with the current calculation node as the database record. This ensures that, even when this legacy entry point is called, the calculation is taken through the Process layer. Preferably this legacy method should not be used at all; rather, a JobProcess should be used.

submit_test(folder=None, subfolder_name=None)[source]

Run a test submission by creating the files that would be generated for the real calculation in a local folder, without actually storing the calculation nor the input nodes. This functionality therefore also does not require any of the input nodes to be stored yet.

Parameters:
  • folder – a Folder object, within which to create the calculation files. By default a folder will be created in the current working directory
  • subfolder_name – the name of the subfolder to use within the directory of the folder object. By default a unique string will be generated based on the current datetime with the format yymmdd- followed by an auto-incrementing index
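
A dry-run sketch, assuming calc is a fully configured (possibly unstored) JobCalculation; the exact name of the folder created in the current working directory may vary:

    calc.submit_test()
    # the input files and the scheduling script are written to a local
    # subfolder (named yymmdd-<index> by default) instead of being submitted
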
class aiida.orm.implementation.general.calculation.job.CalculationResultManager(calc)[source]

Bases: object

An object used internally to interface the calculation object with the Parser and consequentially with the ParameterData object result. It shouldn’t be used explicitly by a user.

__dict__ = dict_proxy({'__module__': 'aiida.orm.implementation.general.calculation.job', '__getitem__': <function __getitem__>, '__getattr__': <function __getattr__>, '__iter__': <function __iter__>, '__dir__': <function __dir__>, '__dict__': <attribute '__dict__' of 'CalculationResultManager' objects>, '_get_dict': <function _get_dict>, '__weakref__': <attribute '__weakref__' of 'CalculationResultManager' objects>, '__doc__': "\n An object used internally to interface the calculation object with the Parser\n and consequentially with the ParameterData object result.\n It shouldn't be used explicitly by a user.\n ", '__init__': <function __init__>})
__dir__()[source]

Allow listing all valid attributes

__getattr__(name)[source]

Interface to access the parser results as attributes.

Parameters:name – name of the attribute to query in the parser results.
__getitem__(name)[source]

Interface to access the parser results as a dictionary.

Parameters:name – name of the key to query in the parser results.
__init__(calc)[source]
Parameters:calc – the calculation object.
__iter__()[source]
__module__ = 'aiida.orm.implementation.general.calculation.job'
__weakref__

list of weak references to the object (if defined)

_get_dict()[source]

Return a dictionary of all results

class aiida.orm.implementation.general.calculation.job.JobCalculationExitStatus[source]

Bases: enum.Enum

This enumeration maps specific calculation states to an integer. This integer can then be used to set the exit status of a JobCalculation node. The values defined here map directly onto the failed calculation states, but the idea is that subclasses of AbstractJobCalculation can extend this enum with additional error codes

FAILED = 400
FINISHED = 0
PARSINGFAILED = 300
RETRIEVALFAILED = 200
SUBMISSIONFAILED = 100
__module__ = 'aiida.orm.implementation.general.calculation.job'
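
Since Python enums with members cannot be subclassed, a calculation plugin that wants additional error codes typically re-declares the base values in its own enum and points its exit_status_enum alias at that enum; a hypothetical sketch:

    from enum import Enum

    class MyPluginExitStatus(Enum):
        # base codes, re-declared from JobCalculationExitStatus
        FINISHED = 0
        SUBMISSIONFAILED = 100
        RETRIEVALFAILED = 200
        PARSINGFAILED = 300
        FAILED = 400
        # hypothetical plugin-specific error code
        CONVERGENCE_NOT_REACHED = 410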