aiida.work package

exception aiida.work.PastException[source]

Bases: aiida.common.exceptions.AiidaException

Raised when an attempt is made to continue a Process that has already excepted.

__module__ = 'aiida.work.exceptions'
class aiida.work.ExitCode(status, message)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__module__ = 'aiida.work.exit_code'
static __new__(_cls, status=0, message=None)

Create new instance of ExitCode(status, message)

__repr__()

Return a nicely formatted representation string

__slots__ = ()
_asdict()

Return a new OrderedDict which maps field names to their values

_fields = ('status', 'message')
classmethod _make(iterable, new=<built-in method __new__ of type object>, len=<built-in function len>)

Make a new ExitCode object from a sequence or iterable

_replace(**kwds)

Return a new ExitCode object replacing specified fields with new values

message

Alias for field number 1

status

Alias for field number 0
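
Since ExitCode is documented as a plain namedtuple with fields ('status', 'message') and defaults status=0 and message=None, its behaviour can be sketched with the standard library alone (the class below is a stand-in for illustration, not an aiida import):

```python
from collections import namedtuple

# Stand-in mirroring the documented ExitCode: a namedtuple with fields
# ('status', 'message') and defaults status=0, message=None.
class ExitCode(namedtuple('ExitCode', ['status', 'message'])):
    __slots__ = ()

    def __new__(cls, status=0, message=None):
        return super().__new__(cls, status, message)

success = ExitCode()                       # ExitCode(status=0, message=None)
failure = ExitCode(418, 'invalid input')
# _replace returns a new tuple with the given fields swapped out
relabelled = failure._replace(message='bad input')
```

Because it is a tuple subclass, instances compare equal to plain tuples and unpack like them, which is why `status` and `message` are documented as aliases for field numbers 0 and 1.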

class aiida.work.ExitCodesNamespace[source]

Bases: aiida.common.extendeddicts.AttributeDict

A namespace of ExitCode tuples that can be accessed through getattr as well as getitem. Additionally, the collection can be called with an identifier that either references the integer status of the ExitCode to be retrieved, or its key in the collection

__call__(identifier)[source]

Return a specific exit code identified by either its exit status or label

Parameters:identifier – the identifier of the exit code. If the type is integer, it will be interpreted as the exit code status, otherwise it will be interpreted as the exit code label
Returns:an ExitCode named tuple
Raises:ValueError – if no exit code with the given label is defined for this process
__module__ = 'aiida.work.exit_code'
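
The documented lookup behaviour (attribute access plus call-by-status-or-label) can be illustrated with a small stdlib sketch; this is an illustration of the contract above, not the aiida implementation:

```python
from collections import namedtuple

ExitCode = namedtuple('ExitCode', ['status', 'message'])

class ExitCodesNamespace(dict):
    """Dict with attribute access whose __call__ resolves an exit code
    either by its integer status or by its label (the dict key)."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __call__(self, identifier):
        if isinstance(identifier, int):
            # Integer identifier: interpret it as the exit code status
            for code in self.values():
                if code.status == identifier:
                    return code
            raise ValueError('no exit code with status {}'.format(identifier))
        try:
            # Otherwise: interpret it as the exit code label
            return self[identifier]
        except KeyError:
            raise ValueError('no exit code with label {}'.format(identifier))

codes = ExitCodesNamespace(ERROR_INVALID_INPUT=ExitCode(410, 'invalid input'))
```

With this in place `codes.ERROR_INVALID_INPUT`, `codes('ERROR_INVALID_INPUT')` and `codes(410)` all resolve to the same ExitCode tuple, matching the getattr/getitem/call behaviour described above.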
class aiida.work.Process(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

Bases: plumpy.processes.Process

This class represents an AiiDA process which can be executed and will have full provenance saved in the database.

SINGLE_RETURN_LINKNAME = 'result'
class SaveKeys[source]

Bases: enum.Enum

Keys used to identify things in the saved instance state bundle.

CALC_ID = 'calc_id'
__module__ = 'aiida.work.processes'
__abstractmethods__ = frozenset([])
__init__(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

The signature of the constructor should not be changed by subclassing processes.

Parameters:
  • inputs (dict) – A dictionary of the process inputs
  • pid – The process ID, can be manually set, if not a unique pid will be chosen
  • logger (logging.Logger) – An optional logger for the process to use
  • loop – The event loop
  • communicator (plumpy.Communicator) – The (optional) communicator
__metaclass__

alias of abc.ABCMeta

__module__ = 'aiida.work.processes'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_add_description_and_label()[source]
_auto_persist = set(['_paused_flag', '_enable_persistence', '_pid', '_future', '_parent_pid', '_CREATION_TIME'])
_calc_class

alias of aiida.orm.implementation.general.calculation.work.WorkCalculation

_create_and_setup_db_record()[source]
_flat_inputs()[source]

Return a flattened version of the parsed inputs dictionary. The eventual keys will be a concatenation of the nested keys

Returns:flat dictionary of parsed inputs
_flatten_inputs(port, port_value, parent_name='', separator='_')[source]

Function that will recursively flatten the inputs dictionary, omitting inputs for ports that are marked as non-storable in the database

Parameters:
  • port – port against which to map the port value, can be InputPort or PortNamespace
  • port_value – value for the current port, can be a Mapping
  • parent_name – the parent key with which to prefix the keys
  • separator – character to use for the concatenation of keys
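
The key-concatenation rule described above can be sketched as follows; port-based filtering of non-storable inputs is omitted, and this is a standalone illustration rather than the actual method:

```python
def flatten_inputs(port_value, parent_name='', separator='_'):
    """Recursively flatten a nested mapping, joining nested keys with the
    separator, e.g. {'a': {'b': 1}} -> [('a_b', 1)]."""
    items = []
    for name, value in port_value.items():
        key = parent_name + separator + name if parent_name else name
        if isinstance(value, dict):
            # Recurse into the namespace, carrying the concatenated prefix
            items.extend(flatten_inputs(value, parent_name=key, separator=separator))
        else:
            items.append((key, value))
    return items

flat = dict(flatten_inputs({'code': 'pw', 'parameters': {'kpoints': [2, 2, 2]}}))
```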
static _get_namespace_list(namespace=None, agglomerate=True)[source]
_setup_db_record()[source]
_spec_type

alias of aiida.work.process_spec.ProcessSpec

calc
decode_input_args(encoded)[source]

Decode saved input arguments as they came from the saved instance state Bundle

Parameters:encoded
Returns:The decoded input args
classmethod define(spec)[source]
encode_input_args(inputs)[source]

Encode input arguments such that they may be saved in a Bundle

Parameters:inputs – A mapping of the inputs as passed to the process
Returns:The encoded inputs
exposed_inputs(process_class, namespace=None, agglomerate=True)[source]

Gather a dictionary of the inputs that were exposed for a given Process class under an optional namespace.

Parameters:
  • process_class – Process class whose inputs to try and retrieve
  • namespace (str) – PortNamespace in which to look for the inputs
  • agglomerate (bool) – If set to true, all parent namespaces of the given namespace will also be searched for inputs. Inputs in lower-lying namespaces take precedence.
exposed_outputs(process_instance, process_class, namespace=None, agglomerate=True)[source]

Gather the outputs which were exposed from the process_class and emitted by the specific process_instance in a dictionary.

Parameters:
  • namespace (str) – Namespace in which to search for exposed outputs.
  • agglomerate (bool) – If set to true, all parent namespaces of the given namespace will also be searched for outputs. Outputs in lower-lying namespaces take precedence.
classmethod get_builder()[source]
classmethod get_or_create_db_record()[source]

Create a database calculation node that represents what happened in this process.

Returns:A calculation

get_parent_calc()[source]
get_provenance_inputs_iterator()[source]
has_finished()[source]

Has the process finished, i.e. completed running normally without abort or exception?

Returns:True if finished, False otherwise
Return type:bool
init()[source]
kill(msg=None)[source]

Kill the process and all the children calculations it called

load_instance_state(saved_state, load_context)[source]
on_create()[source]
on_entered(from_state)[source]
on_entering(state)[source]
on_except(exc_info)[source]

Log the exception by calling the report method with a formatted stack trace from the exception info object, and store the exception string as a node attribute

Parameters:exc_info – the sys.exc_info() object
on_finish(result, successful)[source]

Set the finish status on the Calculation node

on_output_emitting(output_port, value)[source]

The process has emitted a value on the given output port.

Parameters:
  • output_port – The output port name the value was emitted on
  • value – The value emitted
on_terminated()[source]

Called when a Process enters a terminal state.

out(output_port, value=None)[source]

Record an output value for a specific output port. If the output port matches an explicitly defined Port, it will be validated against it. If not, it will be validated against the PortNamespace, which means it will be checked whether the namespace is dynamic and whether the type of the value is valid

Parameters:
  • output_port – the name of the output port, can be namespaced
  • value – the value for the output port
Raises:

TypeError – if the output value fails validation against the port

out_many(out_dict)[source]

Add all values given in out_dict to the outputs. The keys of the dictionary will be used as output names.
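
By its description, out_many is a thin loop over out; the equivalent behaviour can be sketched like this (the stub class below is hypothetical and exists only to make the loop concrete):

```python
class StubProcess:
    """Minimal stand-in with an out() method that records outputs."""

    def __init__(self):
        self.outputs = {}

    def out(self, output_port, value=None):
        self.outputs[output_port] = value

    def out_many(self, out_dict):
        # The dictionary keys become the output port names
        for port, value in out_dict.items():
            self.out(port, value)

proc = StubProcess()
proc.out_many({'energy': -13.6, 'volume': 42.0})
```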

report(msg, *args, **kwargs)[source]

Log a message to the logger, which should get saved to the database through the attached DbLogHandler. The class name and function name of the caller are prepended to the given message
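
The caller-prefixing idea can be sketched with the standard inspect and logging modules; the exact prefix format below is illustrative, not AiiDA's (which also prepends the class name), and the helper is a standalone function rather than the method:

```python
import inspect
import logging

def report(logger, msg, *args):
    """Prefix the calling function's name to msg before logging it
    (format is illustrative; the real method also adds the class name)."""
    caller = inspect.stack()[1].function
    full = '[{}] {}'.format(caller, msg)
    logger.info(full, *args)
    return full

def my_step():
    logger = logging.getLogger('demo')
    return report(logger, 'total energy converged')

line = my_step()
```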

run_process(process, *args, **inputs)[source]
runner
save_instance_state(out_state, save_context)[source]

Ask the process to save its current instance state.

Parameters:
  • out_state (plumpy.Bundle) – A bundle to save the state to
  • save_context – The save context
submit(process, *args, **kwargs)[source]
update_node_state(state)[source]
update_outputs()[source]
class aiida.work.ProcessState[source]

Bases: enum.Enum

The possible states that a Process can be in.

CREATED = 'created'
EXCEPTED = 'excepted'
FINISHED = 'finished'
KILLED = 'killed'
RUNNING = 'running'
WAITING = 'waiting'
__module__ = 'plumpy.process_states'
class aiida.work.FunctionProcess(*args, **kwargs)[source]

Bases: aiida.work.processes.Process

__abstractmethods__ = frozenset([])
__init__(*args, **kwargs)[source]

The signature of the constructor should not be changed by subclassing processes.

Parameters:
  • inputs (dict) – A dictionary of the process inputs
  • pid – The process ID, can be manually set, if not a unique pid will be chosen
  • logger (logging.Logger) – An optional logger for the process to use
  • loop – The event loop
  • communicator (plumpy.Communicator) – The (optional) communicator
__module__ = 'aiida.work.processes'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_calc_node_class

alias of aiida.orm.implementation.general.calculation.function.FunctionCalculation

static _func(*args, **kwargs)[source]

This is used internally to store the actual function that is being wrapped and will be replaced by the build method.

_func_args = None
_run()[source]
_setup_db_record()[source]
classmethod args_to_dict(*args)[source]

Create an input dictionary (i.e. label: value) from supplied args.

Parameters:args – The values to use
Returns:A label: value dictionary
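
Conceptually this zips positional values with the wrapped function's argument names; a possible stdlib sketch (not the aiida implementation, which uses the stored function's signature as a classmethod):

```python
import inspect

def args_to_dict(func, *args):
    """Map positional values onto the parameter names of func."""
    names = list(inspect.signature(func).parameters)
    return dict(zip(names, args))

def add(a, b):
    return a + b

inputs = args_to_dict(add, 4, 5)
```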
static build(func, calc_node_class=None)[source]

Build a Process from the given function. All function arguments will be assigned as process inputs. If keyword arguments are specified then these will also become inputs.

Parameters:
  • func – The function to build a process from
  • calc_node_class (aiida.orm.calculation.Calculation) – Provide a custom calculation class to be used, has to be constructable with no arguments
Returns:

A Process class that represents the function

Return type:

FunctionProcess

classmethod create_inputs(*args, **kwargs)[source]
execute()[source]

Execute the process. This call will return when the process terminates or is paused.

Returns:None if not terminated, otherwise self.outputs
classmethod get_or_create_db_record()[source]

Create a database calculation node that represents what happened in this process.

Returns:A calculation

class aiida.work.Runner(rmq_config=None, poll_interval=0.0, loop=None, rmq_submit=False, enable_persistence=True, persister=None)[source]

Bases: object

Class that can launch processes by running in the current interpreter or by submitting them to the daemon.

__enter__()[source]
__exit__(exc_type, exc_val, exc_tb)[source]
__init__(rmq_config=None, poll_interval=0.0, loop=None, rmq_submit=False, enable_persistence=True, persister=None)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.work.runners'
__weakref__

list of weak references to the object (if defined)

_closed = False
_communicator = None
_create_child_runner()[source]
_persister = None
_poll_calculation(calc_node, callback)[source]
_poll_legacy_wf(workflow, callback)[source]
_rmq_connector = None
_run(process, *args, **inputs)[source]

Run the process with the supplied inputs in this runner, blocking until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

tuple of the outputs of the process and the calculation node

_setup_rmq(url, prefix=None, task_prefetch_count=None, testing_mode=False)[source]

Set up the RabbitMQ connection by creating a connector, communicator and control panel

Parameters:
  • url – the url to use for the connector
  • prefix – the rmq prefix to use for the communicator
  • task_prefetch_count – the maximum number of tasks the communicator may retrieve at a given time
  • testing_mode – whether to create a communicator in testing mode
call_on_calculation_finish(pk, callback)[source]

Register a callback to be called when the calculation with the given pk terminates

Parameters:
  • pk – the pk of the calculation
  • callback – the function to be called upon calculation termination
call_on_legacy_workflow_finish(pk, callback)[source]

Register a callback to be called when the workflow with the given pk terminates

Parameters:
  • pk – the pk of the workflow
  • callback – the function to be called upon workflow termination
child_runner(**kwds)[source]

Context manager that will yield a runner that is a child of this runner

Returns:a Runner instance that inherits the attributes of this runner
close()[source]

Close the runner by stopping the loop and disconnecting the RmqConnector if it has one.

communicator
get_calculation_future(pk)[source]

Get a future for an orm Calculation. The future will have the calculation node as the result when finished.

Returns:A future representing the completion of the calculation node
instantiate_process(process, *args, **inputs)[source]

Return an instance of the process with the given runner and inputs. The function can deal with various types of the process:

  • Process instance: will simply return the instance
  • JobCalculation class: will construct the JobProcess and instantiate it
  • ProcessBuilder instance: will instantiate the Process from the class and inputs defined within it
  • Process class: will instantiate with the specified inputs

If anything else is passed, a ValueError will be raised

Parameters:
  • self – instance of a Runner
  • process – Process instance or class, JobCalculation class or ProcessBuilder instance
  • inputs – the inputs for the process to be instantiated with
is_closed()[source]
loop
persister
rmq
run(process, *args, **inputs)[source]

Run the process with the supplied inputs in this runner, blocking until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

the outputs of the process

run_get_node(process, *args, **inputs)[source]

Run the process with the supplied inputs in this runner, blocking until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

tuple of the outputs of the process and the calculation node

run_get_pid(process, *args, **inputs)[source]

Run the process with the supplied inputs in this runner, blocking until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

tuple of the outputs of the process and process pid

run_until_complete(future)[source]

Run the loop until the future has finished and return the result.

start()[source]

Start the internal event loop.

stop()[source]

Stop the internal event loop.

submit(process, *args, **inputs)[source]

Submit the process with the supplied inputs to this runner, immediately returning control to the interpreter. The return value will be the calculation node of the submitted process

Parameters:
  • process – the process class to submit
  • inputs – the inputs to be passed to the process
Returns:

the calculation node of the process

transport
class aiida.work.DaemonRunner(*args, **kwargs)[source]

Bases: aiida.work.runners.Runner

A subclass of Runner suited for a daemon runner

__init__(*args, **kwargs)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.work.runners'
_setup_rmq(url, prefix=None, task_prefetch_count=None, testing_mode=False)[source]

Set up the RabbitMQ connection by creating a connector, communicator and control panel

Parameters:
  • url – the url to use for the connector
  • prefix – the rmq prefix to use for the communicator
  • task_prefetch_count – the maximum number of tasks the communicator may retrieve at a given time
  • testing_mode – whether to create a communicator in testing mode
aiida.work.new_runner(**kwargs)[source]

Create a default runner optionally passing keyword arguments

Parameters:kwargs – arguments to be passed to Runner constructor
Returns:a new runner instance
aiida.work.set_runner(runner)[source]

Set the global runner instance

Parameters:runner – the runner instance to set as the global runner
aiida.work.get_runner()[source]

Get the global runner instance

Returns:the global runner
class aiida.work.WorkChain(inputs=None, logger=None, runner=None, enable_persistence=True)[source]

Bases: aiida.work.processes.Process

A WorkChain, the base class for AiiDA workflows.

_CONTEXT = 'CONTEXT'
_STEPPER_STATE = 'stepper_state'
__abstractmethods__ = frozenset([])
__init__(inputs=None, logger=None, runner=None, enable_persistence=True)[source]

The signature of the constructor should not be changed by subclassing processes.

Parameters:
  • inputs (dict) – A dictionary of the process inputs
  • pid – The process ID, can be manually set, if not a unique pid will be chosen
  • logger (logging.Logger) – An optional logger for the process to use
  • loop – The event loop
  • communicator (plumpy.Communicator) – The (optional) communicator
__module__ = 'aiida.work.workchain'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_auto_persist = set(['_parent_pid', '_paused_flag', '_enable_persistence', '_pid', '_future', '_CREATION_TIME', '_awaitables'])
_do_step()[source]

Execute the next step in the outline and return the result.

If the stepper returns a non-finished status and the return value is of type ToContext, the contents of the ToContext container will be turned into awaitables if necessary. If any awaitables were created, the process will enter the Wait state, otherwise it will go to Continue. When the stepper returns that it is done, the stepper result will be converted to None and returned, unless it is an integer or an instance of ExitCode.

_run()[source]
_spec_type

alias of _WorkChainSpec

action_awaitables()[source]

Handle the awaitables that are currently registered with the workchain

Depending on the class type of the awaitable’s target a different callback function will be bound with the awaitable and the runner will be asked to call it when the target is completed

ctx
classmethod define(spec)[source]
exit_codes = {}
insert_awaitable(awaitable)[source]

Insert an awaitable that will cause the workchain to wait until it has finished before continuing to the next step.

Parameters:awaitable (aiida.work.awaitable.Awaitable) – The thing to await
load_instance_state(saved_state, load_context)[source]
on_calculation_finished(awaitable, pk)[source]

Callback function called by the runner when the calculation instance identified by pk is completed. The awaitable will be effectuated on the context of the workchain and removed from the internal list. If all awaitables have been dealt with, the workchain process is resumed

Parameters:
  • awaitable – an Awaitable instance
  • pk – the pk of the awaitable’s target
on_legacy_workflow_finished(awaitable, pk)[source]

Callback function called by the runner when the legacy workflow instance identified by pk is completed. The awaitable will be effectuated on the context of the workchain and removed from the internal list. If all awaitables have been dealt with, the workchain process is resumed

Parameters:
  • awaitable – an Awaitable instance
  • pk – the pk of the awaitable’s target
on_run()[source]
on_wait(awaitables)[source]
remove_awaitable(awaitable)[source]

Remove an awaitable.

Precondition: must be an awaitable that was previously inserted

Parameters:awaitable – The awaitable to remove
save_instance_state(out_state, save_context)[source]

Ask the process to save its current instance state.

Parameters:
  • out_state (plumpy.Bundle) – A bundle to save the state to
  • save_context – The save context
to_context(**kwargs)[source]

This is a convenience method that provides syntactic sugar for a user to add multiple intersteps that will assign a certain value to the corresponding key in the context of the workchain

aiida.work.assign_(target)[source]

Convenience function that will construct an Awaitable for a given class instance with the context action set to ASSIGN. When the awaitable target is completed it will be assigned to the context for a key that is to be defined later

Parameters:target – an instance of Calculation, Workflow or Awaitable
Returns:the awaitable
Return type:Awaitable
aiida.work.append_(target)[source]

Convenience function that will construct an Awaitable for a given class instance with the context action set to APPEND. When the awaitable target is completed it will be appended to a list in the context for a key that is to be defined later

Parameters:target – an instance of Calculation, Workflow or Awaitable
Returns:the awaitable
Return type:Awaitable
aiida.work.if_(condition)[source]

A conditional that can be used in a workchain outline.

Use as:

if_(cls.conditional)(
  cls.step1,
  cls.step2
)

Each step can, of course, also be any valid workchain step, e.g. another conditional.

Parameters:condition – The workchain method that will return True or False
aiida.work.while_(condition)[source]

A while loop that can be used in a workchain outline.

Use as:

while_(cls.conditional)(
  cls.step1,
  cls.step2
)

Each step can, of course, also be any valid workchain step, e.g. another conditional.

Parameters:condition – The workchain method that will return True or False
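
Stripped of the spec machinery, if_ and while_ are higher-order functions: called with a condition they return a callable that bundles the condition with the steps. The control-flow idea can be sketched outside AiiDA as a toy interpreter (this is not the plumpy stepper, just the pattern):

```python
def if_(condition):
    def bind(*steps):
        def run(obj):
            # Execute the steps once if the condition holds
            if condition(obj):
                for step in steps:
                    step(obj)
        return run
    return bind

def while_(condition):
    def bind(*steps):
        def run(obj):
            # Repeat the steps as long as the condition holds
            while condition(obj):
                for step in steps:
                    step(obj)
        return run
    return bind

class Counter:
    def __init__(self):
        self.n = 0
    def not_done(self):
        return self.n < 3
    def increment(self):
        self.n += 1

outline = while_(Counter.not_done)(Counter.increment)
counter = Counter()
outline(counter)                 # loops until not_done() is False
guard = if_(Counter.not_done)(Counter.increment)
guard(counter)                   # condition now False, so a no-op
```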
aiida.work.ToContext

alias of __builtin__.dict

class aiida.work._WorkChainSpec[source]

Bases: aiida.work.process_spec.ProcessSpec, plumpy.workchains.WorkChainSpec

__module__ = 'aiida.work.workchain'
aiida.work.run(process, *args, **inputs)

Run the process with the supplied inputs in a local runner, blocking until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

the outputs of the process

aiida.work.run_get_pid(process, *args, **inputs)[source]

Run the process with the supplied inputs in a local runner, blocking until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

tuple of the outputs of the process and process pid

aiida.work.run_get_node(process, *args, **inputs)[source]

Run the process with the supplied inputs in a local runner, blocking until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

tuple of the outputs of the process and the calculation node

aiida.work.submit(process, **inputs)[source]

Submit the process with the supplied inputs to the daemon runner, immediately returning control to the interpreter. The return value will be the calculation node of the submitted process

Parameters:
  • process – the process class to submit
  • inputs – the inputs to be passed to the process
Returns:

the calculation node of the process

aiida.work.workfunction(func, calc_node_class=None)[source]

A decorator to turn a standard python function into a workfunction. Example usage:

>>> from aiida.orm.data.int import Int
>>>
>>> # Define the workfunction
>>> @workfunction
... def sum(a, b):
...     return a + b
...
>>> # Run it with some input
>>> r = sum(Int(4), Int(5))
>>> print(r)
9
>>> r.get_inputs_dict()
{u'result': <FunctionCalculation: uuid: ce0c63b3-1c84-4bb8-ba64-7b70a36adf34 (pk: 3567)>}
>>> r.get_inputs_dict()['result'].get_inputs()
[4, 5]
class aiida.work.JobProcess(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

Bases: aiida.work.processes.Process

CALC_NODE_LABEL = 'calc_node'
OPTIONS_INPUT_LABEL = 'options'
TRANSPORT_OPERATION = 'TRANSPORT_OPERATION'
__abstractmethods__ = frozenset([])
__module__ = 'aiida.work.job_processes'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_calc_class = None
_setup_db_record()[source]

Link up all the retrospective provenance for this JobCalculation

classmethod build(calc_class)[source]
classmethod get_builder()[source]
get_or_create_db_record()[source]

Create a database calculation node that represents what happened in this process.

Returns:A calculation

classmethod get_state_classes()[source]
on_excepted()[source]

The Process excepted so we set the calculation and scheduler state.

on_killed()[source]

The Process was killed so we set the calculation and scheduler state.

retrieved(retrieved_temporary_folder=None)[source]

Parse a retrieved job calculation. This is called once the calculation has finished and its data has been retrieved.

run()[source]

Run the calculation: we put it in the TOSUBMIT state and then wait for it to be copied over, submitted, retrieved, etc.

update_outputs()[source]
aiida.work.new_control_panel()[source]

Create a new control panel based on the current profile configuration

Returns:A new control panel instance
Return type:aiida.work.rmq.ProcessControlPanel
aiida.work.new_blocking_control_panel()[source]

Create a new blocking control panel based on the current profile configuration

Returns:A new control panel instance
Return type:aiida.work.rmq.BlockingProcessControlPanel
class aiida.work.BlockingProcessControlPanel(prefix, testing_mode=False)[source]

Bases: aiida.work.rmq.ProcessControlPanel

A blocking adapter for the ProcessControlPanel.

__enter__()[source]
__exit__(exc_type, exc_val, exc_tb)[source]
__init__(prefix, testing_mode=False)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.work.rmq'
close()[source]
execute_action(action)[source]
execute_process_start(process_class, init_args=None, init_kwargs=None)[source]
exception aiida.work.RemoteException[source]

Bases: exceptions.Exception

An exception occurred at the remote end of the call

__module__ = 'kiwipy.communications'
__weakref__

list of weak references to the object (if defined)

exception aiida.work.DeliveryFailed[source]

Bases: exceptions.Exception

Failed to deliver a message

__module__ = 'kiwipy.communications'
__weakref__

list of weak references to the object (if defined)

class aiida.work.ProcessLauncher(loop=None, persister=None, load_context=None, loader=None)[source]

Bases: plumpy.process_comms.ProcessLauncher

A sub class of plumpy.ProcessLauncher to launch a Process

It overrides the _continue method to make sure the node corresponding to the task can be loaded and that, if it is already marked as terminated, it is not continued; instead the future is reconstructed and returned

__module__ = 'aiida.work.rmq'
_continue(task)[source]

Continue the task

Note that the task may already have been completed, as indicated by the corresponding node, in which case it is not continued, but the corresponding future is reconstructed and returned. This scenario may occur when the Process was already completed by another worker that, however, failed to send the acknowledgment.

Parameters:task – the task to continue
Raises:plumpy.TaskRejected – if the node corresponding to the task cannot be loaded
class aiida.work.ProcessControlPanel(prefix, rmq_connector, testing_mode=False)[source]

Bases: object

RMQ control panel for launching, controlling and getting status of Processes over the RMQ protocol.

__enter__()[source]
__exit__(exc_type, exc_val, exc_tb)[source]
__init__(prefix, rmq_connector, testing_mode=False)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.work.rmq'
__weakref__

list of weak references to the object (if defined)

close()[source]
communicator
connect()[source]
continue_process(pid)[source]
execute_action(action)[source]
execute_process(process_class, init_args=None, init_kwargs=None)[source]
kill_process(pid, msg=None)[source]
launch_process(process_class, init_args=None, init_kwargs=None)[source]
pause_process(pid)[source]
play_process(pid)[source]
request_status(pid)[source]
class aiida.work.Future[source]

Bases: tornado.concurrent.Future

__module__ = 'kiwipy.futures'
_cancelled = False
cancel()[source]

Cancel the operation, if possible.

Tornado Futures do not support cancellation, so this method always returns False.

cancelled()[source]

Returns True if the operation has been cancelled.

Tornado Futures do not support cancellation, so this method always returns False.

result()[source]

If the operation succeeded, return its result. If it failed, re-raise its exception.

This method takes a timeout argument for compatibility with concurrent.futures.Future but it is an error to call it before the Future is done, so the timeout is never used.

set_result(result)[source]

Sets the result of a Future.

It is undefined to call any of the set methods more than once on the same object.

class aiida.work.CalculationFuture(pk, loop=None, poll_interval=None, communicator=None)[source]

Bases: kiwipy.futures.Future

A future that waits for a calculation to complete, using both polling and, if possible, listening for broadcast events

__init__(pk, loop=None, poll_interval=None, communicator=None)[source]

Get a future for a calculation node being finished. If a None poll_interval is supplied, polling will not be used. If a communicator is supplied, it will be used to listen for broadcast messages.

Parameters:
  • pk – The calculation pk
  • loop – An event loop
  • poll_interval – The polling interval. Can be None, in which case there is no polling.
  • communicator – A communicator. Can be None, in which case no broadcast messages are listened for.
__module__ = 'aiida.work.futures'
_filtered = None
_poll_calculation(**kwargs)[source]

Poll whether the calculation node has reached a terminal state.

cleanup()[source]

Clean up the future by removing broadcast subscribers from the communicator if it still exists.

class aiida.work.ObjectLoader[source]

Bases: plumpy.loaders.DefaultObjectLoader

The AiiDA specific object loader.

__abstractmethods__ = frozenset([])
__module__ = 'aiida.work.persistence'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
static is_wrapped_job_calculation(name)[source]
load_object(identifier)[source]

Given an identifier load an object.

Parameters:identifier – The identifier
Returns:The loaded object
Raises:ValueError if the object cannot be loaded
aiida.work.get_object_loader()[source]

Get the global AiiDA object loader

Returns:The global object loader
Return type:plumpy.ObjectLoader

Submodules

Enums and function for the awaitables of Processes.

class aiida.work.awaitable.Awaitable(**kwargs)[source]

Bases: plumpy.utils.AttributesDict

An attribute dictionary that represents an action that a Process may be waiting on to finish

__module__ = 'aiida.work.awaitable'
class aiida.work.awaitable.AwaitableTarget[source]

Bases: enum.Enum

Enum that describes the class of the target of a given awaitable

CALCULATION = 'calculation'
WORKFLOW = 'workflow'
__module__ = 'aiida.work.awaitable'
class aiida.work.awaitable.AwaitableAction[source]

Bases: enum.Enum

Enum that describes the action to be taken for a given awaitable

APPEND = 'append'
ASSIGN = 'assign'
__module__ = 'aiida.work.awaitable'
aiida.work.awaitable.construct_awaitable(target)[source]

Construct an instance of the Awaitable class that will contain the information related to the action to be taken with respect to the context once the awaitable object is completed.

The awaitable is a simple dictionary with the following keys

  • pk: the pk of the node that is being waited on
  • action: the context action to be performed upon completion
  • outputs: a boolean that toggles whether the node itself

Currently the only awaitable classes are Calculation and Workflow. The only awaitable actions are the Assign and Append operators.

Convenience functions to add awaitables to the Context of a WorkChain.

aiida.work.context.ToContext

alias of __builtin__.dict

aiida.work.context.assign_(target)[source]

Convenience function that will construct an Awaitable for a given class instance with the context action set to ASSIGN. When the awaitable target is completed it will be assigned to the context for a key that is to be defined later

Parameters:target – an instance of Calculation, Workflow or Awaitable
Returns:the awaitable
Return type:Awaitable
aiida.work.context.append_(target)[source]

Convenience function that will construct an Awaitable for a given class instance with the context action set to APPEND. When the awaitable target is completed it will be appended to a list in the context for a key that is to be defined later

Parameters:target – an instance of Calculation, Workflow or Awaitable
Returns:the awaitable
Return type:Awaitable
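The assign_/append_ semantics described above can be sketched in plain Python. This is a minimal standalone reimplementation for illustration, not the actual AiiDA code: the names Awaitable, AwaitableAction, assign_ and append_ mirror the documented interface, but the real functions accept Calculation and Workflow nodes rather than bare pks.

```python
from enum import Enum


class AwaitableAction(Enum):
    ASSIGN = 'assign'
    APPEND = 'append'


class Awaitable(dict):
    """Attribute dictionary describing what to do once the awaited node finishes."""
    __getattr__ = dict.__getitem__


def assign_(pk):
    # ASSIGN: on completion the result is assigned to a single context key
    return Awaitable(pk=pk, action=AwaitableAction.ASSIGN)


def append_(pk):
    # APPEND: on completion the result is appended to a list under the context key
    return Awaitable(pk=pk, action=AwaitableAction.APPEND)
```

The action enum is the only difference between the two constructors; the WorkChain engine inspects it when resolving the awaitable into the context.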

Exceptions that can be thrown by parts of the workflow engine.

exception aiida.work.exceptions.PastException[source]

Bases: aiida.common.exceptions.AiidaException

Raised when an attempt is made to continue a Process that has already excepted before

__module__ = 'aiida.work.exceptions'

A namedtuple and namespace for ExitCodes that can be used to exit from Processes.

class aiida.work.exit_code.ExitCode(status, message)

Bases: tuple

__dict__ = dict_proxy({'status': <property object>, '__module__': 'aiida.work.exit_code', '__getstate__': <function __getstate__>, '__new__': <staticmethod object>, '_make': <classmethod object>, '_fields': ('status', 'message'), '_replace': <function _replace>, '__slots__': (), '_asdict': <function _asdict>, '__repr__': <function __repr__>, '__dict__': <property object>, 'message': <property object>, '__getnewargs__': <function __getnewargs__>, '__doc__': 'ExitCode(status, message)'})
__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__module__ = 'aiida.work.exit_code'
static __new__(_cls, status=0, message=None)

Create new instance of ExitCode(status, message)

__repr__()

Return a nicely formatted representation string

__slots__ = ()
_asdict()

Return a new OrderedDict which maps field names to their values

_fields = ('status', 'message')
classmethod _make(iterable, new=<built-in method __new__ of type object at 0x906d60>, len=<built-in function len>)

Make a new ExitCode object from a sequence or iterable

_replace(**kwds)

Return a new ExitCode object replacing specified fields with new values

message

Alias for field number 1

status

Alias for field number 0

class aiida.work.exit_code.ExitCodesNamespace[source]

Bases: aiida.common.extendeddicts.AttributeDict

A namespace of ExitCode tuples that can be accessed through getattr as well as getitem. Additionally, the collection can be called with an identifier, which can reference either the integer status of the ExitCode to be retrieved or its key in the collection.

__call__(identifier)[source]

Return a specific exit code identified by either its exit status or label

Parameters:identifier – the identifier of the exit code. If the type is integer, it will be interpreted as the exit code status, otherwise it will be interpreted as the exit code label
Returns:an ExitCode named tuple
Raises:ValueError – if no exit code with the given label is defined for this process
__module__ = 'aiida.work.exit_code'
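The lookup contract of ExitCodesNamespace.__call__ can be sketched with stdlib pieces. This is a self-contained reimplementation of the documented interface for illustration, not the AiiDA class itself:

```python
from collections import namedtuple

ExitCode = namedtuple('ExitCode', ['status', 'message'])


class ExitCodesNamespace(dict):
    """Exit codes accessible via getattr, getitem, or by calling with an identifier."""

    __getattr__ = dict.__getitem__

    def __call__(self, identifier):
        # An integer identifier is interpreted as the exit status,
        # anything else as the label (the key in the collection).
        if isinstance(identifier, int):
            for exit_code in self.values():
                if exit_code.status == identifier:
                    return exit_code
            raise ValueError('no exit code with status {}'.format(identifier))
        try:
            return self[identifier]
        except KeyError:
            raise ValueError('no exit code with label {}'.format(identifier))


codes = ExitCodesNamespace(ERROR_FAILED=ExitCode(status=100, message='failed'))
assert codes('ERROR_FAILED') == codes(100) == codes.ERROR_FAILED
```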

Futures that can poll or receive broadcasted messages while waiting for a task to be completed.

class aiida.work.futures.Future[source]

Bases: tornado.concurrent.Future

__module__ = 'kiwipy.futures'
_cancelled = False
cancel()[source]

Cancel the operation, if possible.

Tornado Futures do not support cancellation, so this method always returns False.

cancelled()[source]

Returns True if the operation has been cancelled.

Tornado Futures do not support cancellation, so this method always returns False.

result()[source]

If the operation succeeded, return its result. If it failed, re-raise its exception.

This method takes a timeout argument for compatibility with concurrent.futures.Future but it is an error to call it before the Future is done, so the timeout is never used.

set_result(result)[source]

Sets the result of a Future.

It is undefined to call any of the set methods more than once on the same object.

class aiida.work.futures.CalculationFuture(pk, loop=None, poll_interval=None, communicator=None)[source]

Bases: kiwipy.futures.Future

A future that waits for a calculation to complete, using both polling and, if possible, listening for broadcast events

__init__(pk, loop=None, poll_interval=None, communicator=None)[source]

Get a future for a calculation node being finished. If a None poll_interval is supplied, polling will not be used. If a communicator is supplied, it will be used to listen for broadcast messages.

Parameters:
  • pk – The calculation pk
  • loop – An event loop
  • poll_interval – The polling interval. Can be None, in which case there is no polling.
  • communicator – A communicator. Can be None, in which case no broadcast messages are listened for.
__module__ = 'aiida.work.futures'
_filtered = None
_poll_calculation(**kwargs)[source]

Poll whether the calculation node has reached a terminal state.

cleanup()[source]

Clean up the future by removing broadcast subscribers from the communicator if it still exists.
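The polling half of CalculationFuture can be sketched synchronously. This is an illustrative stand-in: the real class runs asynchronously on a tornado event loop and additionally listens for broadcast messages, and the `node` object here is a hypothetical stub, not an AiiDA node.

```python
import time
from concurrent.futures import Future


def poll_until_terminal(node, poll_interval=0.01, timeout=1.0):
    """Resolve a future with the node's pk once the node reaches a terminal state."""
    future = Future()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if node.is_terminated:          # terminal state reached
            future.set_result(node.pk)  # resolve the future
            return future
        time.sleep(poll_interval)       # wait one polling interval and re-check
    raise TimeoutError('node did not reach a terminal state in time')
```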

class aiida.work.job_processes.JobProcess(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

Bases: aiida.work.processes.Process

CALC_NODE_LABEL = 'calc_node'
OPTIONS_INPUT_LABEL = 'options'
TRANSPORT_OPERATION = 'TRANSPORT_OPERATION'
__abstractmethods__ = frozenset([])
__module__ = 'aiida.work.job_processes'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_calc_class = None
_setup_db_record()[source]

Link up all the retrospective provenance for this JobCalculation

classmethod build(calc_class)[source]
classmethod get_builder()[source]
get_or_create_db_record()[source]

Create a database calculation node that represents what happened in this process.

Returns:A calculation

classmethod get_state_classes()[source]
on_excepted()[source]

The Process excepted so we set the calculation and scheduler state.

on_killed()[source]

The Process was killed so we set the calculation and scheduler state.

retrieved(retrieved_temporary_folder=None)[source]

Parse a retrieved job calculation. This is called once it’s finished waiting for the calculation to be finished and the data has been retrieved.

run()[source]

Run the calculation: put it in the TOSUBMIT state and then wait for it to be copied over, submitted, retrieved, etc.

update_outputs()[source]

Top level functions that can be used to launch a Process.

aiida.work.launch.run(process, *args, **inputs)[source]

Run the process with the supplied inputs in a local runner that will block until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

the outputs of the process

aiida.work.launch.run_get_pid(process, *args, **inputs)[source]

Run the process with the supplied inputs in a local runner that will block until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

tuple of the outputs of the process and process pid

aiida.work.launch.run_get_node(process, *args, **inputs)[source]

Run the process with the supplied inputs in a local runner that will block until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

tuple of the outputs of the process and the calculation node

aiida.work.launch.submit(process, **inputs)[source]

Submit the process with the supplied inputs to the daemon runner immediately returning control to the interpreter. The return value will be the calculation node of the submitted process

Parameters:
  • process – the process class to submit
  • inputs – the inputs to be passed to the process
Returns:

the calculation node of the process
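The contracts of the blocking launchers above can be sketched as follows. This is a standalone illustration of the documented return values only; the real functions drive AiiDA Process instances on an event loop, and `EchoProcess` is a hypothetical stand-in class, not part of AiiDA.

```python
def run(process_class, **inputs):
    # Blocks until the process is completed; returns its outputs.
    process = process_class(inputs)
    process.execute()
    return process.outputs


def run_get_pid(process_class, **inputs):
    # Same as run(), but also returns the process pid.
    process = process_class(inputs)
    process.execute()
    return process.outputs, process.pid


class EchoProcess:
    """Hypothetical process that simply echoes its inputs as outputs."""

    _next_pid = 0

    def __init__(self, inputs):
        self.inputs = inputs
        self.outputs = None
        EchoProcess._next_pid += 1
        self.pid = EchoProcess._next_pid

    def execute(self):
        self.outputs = dict(self.inputs)
```

submit() differs in that it hands the process to the daemon and returns the calculation node immediately, without blocking.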

Definition of AiiDA’s process persister and the necessary object loaders.

class aiida.work.persistence.ObjectLoader[source]

Bases: plumpy.loaders.DefaultObjectLoader

The AiiDA specific object loader.

__abstractmethods__ = frozenset([])
__module__ = 'aiida.work.persistence'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
static is_wrapped_job_calculation(name)[source]
load_object(identifier)[source]

Given an identifier load an object.

Parameters:identifier – The identifier
Returns:The loaded object
Raises:ValueError if the object cannot be loaded
aiida.work.persistence.get_object_loader()[source]

Get the global AiiDA object loader

Returns:The global object loader
Return type:plumpy.ObjectLoader

AiiDA specific implementation of plumpy Ports and PortNamespaces for the ProcessSpec.

class aiida.work.ports.InputPort(*args, **kwargs)[source]

Bases: aiida.work.ports.WithSerialize, aiida.work.ports.WithNonDb, plumpy.ports.InputPort

Subclass of plumpy.InputPort that mixes in the WithSerialize and WithNonDb mixins to support automatic serialization of values to database-storable types, as well as input types that are not database storable.

__abstractmethods__ = frozenset([])
__module__ = 'aiida.work.ports'
__slotnames__ = []
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
get_description()[source]

Return a description of the InputPort, which will be a dictionary of its attributes

Returns:a dictionary of the stringified InputPort attributes
class aiida.work.ports.PortNamespace(name=None, help=None, required=True, validator=None, valid_type=None, default=(), dynamic=False)[source]

Bases: plumpy.ports.PortNamespace

Subclass of plumpy.PortNamespace that implements the serialize method to support automatic recursive serialization of a given mapping onto the ports of the PortNamespace.

__abstractmethods__ = frozenset([])
__module__ = 'aiida.work.ports'
__slotnames__ = []
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 117
_abc_registry = <_weakrefset.WeakSet object>
serialize(mapping)[source]

Serialize the given mapping onto this PortNamespace. It will recursively call this function on any nested PortNamespace or the serialize function on any Ports.

Parameters:mapping – a mapping of values to be serialized
Returns:the serialized mapping
class aiida.work.ports.WithNonDb(*args, **kwargs)[source]

Bases: object

A mixin that adds support to a port to flag that it should not be stored in the database, using the non_db=True flag.

The mixins have to go before the main port class in the superclass order to make sure the mixin has the chance to strip out the non_db keyword.

__dict__ = dict_proxy({'__module__': 'aiida.work.ports', 'non_db': <property object>, '__dict__': <attribute '__dict__' of 'WithNonDb' objects>, '__weakref__': <attribute '__weakref__' of 'WithNonDb' objects>, '__doc__': '\n A mixin that adds support to a port to flag a that should not be stored\n in the database using the non_db=True flag.\n\n The mixins have to go before the main port class in the superclass order\n to make sure the mixin has the chance to strip out the non_db keyword.\n ', '__init__': <function __init__>})
__init__(*args, **kwargs)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.work.ports'
__weakref__

list of weak references to the object (if defined)

non_db
class aiida.work.ports.WithSerialize(*args, **kwargs)[source]

Bases: object

A mixin that adds support for a serialization function which is automatically applied on inputs that are not AiiDA data types.

__dict__ = dict_proxy({'__module__': 'aiida.work.ports', 'serialize': <function serialize>, '__dict__': <attribute '__dict__' of 'WithSerialize' objects>, '__weakref__': <attribute '__weakref__' of 'WithSerialize' objects>, '__doc__': '\n A mixin that adds support for a serialization function which is automatically applied on inputs\n that are not AiiDA data types.\n ', '__init__': <function __init__>})
__init__(*args, **kwargs)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.work.ports'
__weakref__

list of weak references to the object (if defined)

serialize(value)[source]

Serialize the given value if it is not already a Data type and a serializer function is defined

Parameters:value – the value to be serialized
Returns:a serialized version of the value or the unchanged value
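The conditional serialization described for WithSerialize can be sketched standalone. `Data` here is a hypothetical stand-in for AiiDA data node types, and `SerializablePort` is a simplified illustration, not the actual mixin:

```python
class Data:
    """Stand-in for an AiiDA data node wrapping a plain value."""

    def __init__(self, value):
        self.value = value


class SerializablePort:
    def __init__(self, serializer=None):
        self._serializer = serializer

    def serialize(self, value):
        # Values that are already Data, or ports without a serializer,
        # are returned unchanged; everything else is passed through
        # the serializer function.
        if self._serializer is None or isinstance(value, Data):
            return value
        return self._serializer(value)
```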

Convenience classes to help building the input dictionaries for Processes.

class aiida.work.process_builder.ProcessBuilder(process_class)[source]

Bases: aiida.work.process_builder.ProcessBuilderNamespace

A process builder that helps create a new calculation

__abstractmethods__ = frozenset([])
__init__(process_class)[source]

Dynamically construct the get and set properties for the ports of the given port namespace

For each port in the given port namespace a get and set property will be constructed dynamically and added to the ProcessBuilderNamespace. The docstring for these properties will be defined by calling str() on the Port, which should return the description of the Port.

Parameters:port_namespace – the inputs PortNamespace for which to construct the builder
__module__ = 'aiida.work.process_builder'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 117
_abc_registry = <_weakrefset.WeakSet object>
process_class
class aiida.work.process_builder.JobProcessBuilder(process_class)[source]

Bases: aiida.work.process_builder.ProcessBuilder

A process builder specific to JobCalculation classes, that also provides the submit_test functionality

__abstractmethods__ = frozenset([])
__dir__()[source]
__module__ = 'aiida.work.process_builder'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 117
_abc_registry = <_weakrefset.WeakSet object>
submit_test(folder=None, subfolder_name=None)[source]

Run a test submission by creating the files that would be generated for the real calculation in a local folder, without actually storing the calculation or the input nodes. This functionality therefore also does not require any of the input nodes to be stored yet.

Parameters:
  • folder – a Folder object, within which to create the calculation files. By default a folder will be created in the current working directory
  • subfolder_name – the name of the subfolder to use within the directory of the folder object. By default a unique string will be generated based on the current datetime with the format yymmdd- followed by an auto incrementing index
class aiida.work.process_builder.ProcessBuilderNamespace(port_namespace)[source]

Bases: _abcoll.Mapping

Input namespace for the ProcessBuilder. Dynamically generates the getters and setters for the input ports of a given PortNamespace

__abstractmethods__ = frozenset([])
__dir__()[source]
__getitem__(item)[source]
__init__(port_namespace)[source]

Dynamically construct the get and set properties for the ports of the given port namespace

For each port in the given port namespace a get and set property will be constructed dynamically and added to the ProcessBuilderNamespace. The docstring for these properties will be defined by calling str() on the Port, which should return the description of the Port.

Parameters:port_namespace – the inputs PortNamespace for which to construct the builder
__iter__()[source]
__len__()[source]
__module__ = 'aiida.work.process_builder'
__repr__() <==> repr(x)[source]
__setattr__(attr, value)[source]

Any attribute set without a leading underscore corresponds to an input and is therefore validated against the corresponding input port from the process spec

_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 117
_abc_registry = <_weakrefset.WeakSet object>
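The __setattr__ contract described above can be sketched in isolation. This is an illustrative simplification: the `ports` mapping of attribute names to valid types stands in for a real PortNamespace, and `BuilderNamespace` is a hypothetical name, not the AiiDA class:

```python
class BuilderNamespace:
    def __init__(self, ports):
        # Bypass our own __setattr__ for internal (underscore-prefixed) state
        object.__setattr__(self, '_ports', ports)
        object.__setattr__(self, '_data', {})

    def __setattr__(self, attr, value):
        if attr.startswith('_'):
            # Internal attributes are stored directly, without validation
            object.__setattr__(self, attr, value)
            return
        # Non-underscore attributes correspond to inputs: validate against the port
        if attr not in self._ports:
            raise AttributeError('unknown input port: {}'.format(attr))
        if not isinstance(value, self._ports[attr]):
            raise TypeError('invalid type for port {}'.format(attr))
        self._data[attr] = value
```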

AiiDA specific implementation of plumpy’s ProcessSpec.

class aiida.work.process_spec.ProcessSpec[source]

Bases: plumpy.process_spec.ProcessSpec

Subclasses plumpy.ProcessSpec to define the INPUT_PORT_TYPE and PORT_NAMESPACE_TYPE with the variants implemented in AiiDA

INPUT_PORT_TYPE

alias of aiida.work.ports.InputPort

PORT_NAMESPACE_TYPE

alias of aiida.work.ports.PortNamespace

__init__()[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.work.process_spec'
exit_code(status, label, message)[source]

Add an exit code to the ProcessSpec

Parameters:
  • status – the exit status integer
  • label – a label by which the exit code can be addressed
  • message – a more detailed description of the exit code
exit_codes

Return the namespace of exit codes defined for this ProcessSpec

Returns:ExitCodesNamespace of ExitCode named tuples
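How exit_code() populates the exit_codes namespace can be sketched standalone. `MiniProcessSpec` is a hypothetical simplification written for this example; the real ProcessSpec returns an ExitCodesNamespace rather than a plain dict:

```python
from collections import namedtuple

ExitCode = namedtuple('ExitCode', ['status', 'message'])


class MiniProcessSpec:
    def __init__(self):
        self._exit_codes = {}

    def exit_code(self, status, label, message):
        # The label addresses the code; status is the integer exit status
        self._exit_codes[label] = ExitCode(status, message)

    @property
    def exit_codes(self):
        return dict(self._exit_codes)
```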
class aiida.work.processes.Process(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

Bases: plumpy.processes.Process

This class represents an AiiDA process which can be executed and will have full provenance saved in the database.

SINGLE_RETURN_LINKNAME = 'result'
class SaveKeys[source]

Bases: enum.Enum

Keys used to identify things in the saved instance state bundle.

CALC_ID = 'calc_id'
__module__ = 'aiida.work.processes'
__abstractmethods__ = frozenset([])
__init__(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

The signature of the constructor should not be changed by subclassing processes.

Parameters:
  • inputs (dict) – A dictionary of the process inputs
  • pid – The process ID, can be manually set, if not a unique pid will be chosen
  • logger (logging.Logger) – An optional logger for the process to use
  • loop – The event loop
  • communicator (plumpy.Communicator) – The (optional) communicator
__metaclass__

alias of abc.ABCMeta

__module__ = 'aiida.work.processes'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_add_description_and_label()[source]
_auto_persist = set(['_paused_flag', '_enable_persistence', '_pid', '_future', '_parent_pid', '_CREATION_TIME'])
_calc_class

alias of aiida.orm.implementation.general.calculation.work.WorkCalculation

_create_and_setup_db_record()[source]
_flat_inputs()[source]

Return a flattened version of the parsed inputs dictionary. The eventual keys will be a concatenation of the nested keys

Returns:flat dictionary of parsed inputs
_flatten_inputs(port, port_value, parent_name='', separator='_')[source]

Function that will recursively flatten the inputs dictionary, omitting inputs for ports that are marked as not database storable

Parameters:
  • port – port against which to map the port value, can be InputPort or PortNamespace
  • port_value – value for the current port, can be a Mapping
  • parent_name – the parent key with which to prefix the keys
  • separator – character to use for the concatenation of keys
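The recursive flattening described for _flatten_inputs can be sketched as follows. This is an illustrative standalone function: the `non_db` tuple of key names stands in for ports flagged with non_db=True, whereas the real method inspects actual InputPort and PortNamespace objects.

```python
def flatten_inputs(mapping, non_db=(), parent_name='', separator='_'):
    """Flatten a nested mapping, concatenating nested keys with the separator."""
    flat = []
    for key, value in mapping.items():
        if key in non_db:
            # Skip inputs that should not be stored in the database
            continue
        prefixed = parent_name + separator + key if parent_name else key
        if isinstance(value, dict):
            # Recurse into nested namespaces, prefixing with the parent key
            flat.extend(flatten_inputs(value, non_db, prefixed, separator))
        else:
            flat.append((prefixed, value))
    return flat
```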
static _get_namespace_list(namespace=None, agglomerate=True)[source]
_setup_db_record()[source]
_spec_type

alias of aiida.work.process_spec.ProcessSpec

calc
decode_input_args(encoded)[source]

Decode saved input arguments as they came from the saved instance state Bundle

Parameters:encoded
Returns:The decoded input args
classmethod define(spec)[source]
encode_input_args(inputs)[source]

Encode input arguments such that they may be saved in a Bundle

Parameters:inputs – A mapping of the inputs as passed to the process
Returns:The encoded inputs
exposed_inputs(process_class, namespace=None, agglomerate=True)[source]

Gather a dictionary of the inputs that were exposed for a given Process class under an optional namespace.

Parameters:
  • process_class – Process class whose inputs to try and retrieve
  • namespace (str) – PortNamespace in which to look for the inputs
  • agglomerate (bool) – If set to true, all parent namespaces of the given namespace will also be searched for inputs. Inputs in lower-lying namespaces take precedence.
exposed_outputs(process_instance, process_class, namespace=None, agglomerate=True)[source]

Gather the outputs which were exposed from the process_class and emitted by the specific process_instance in a dictionary.

Parameters:
  • namespace (str) – Namespace in which to search for exposed outputs.
  • agglomerate (bool) – If set to true, all parent namespaces of the given namespace will also be searched for outputs. Outputs in lower-lying namespaces take precedence.
classmethod get_builder()[source]
classmethod get_or_create_db_record()[source]

Create a database calculation node that represents what happened in this process.

Returns:A calculation

get_parent_calc()[source]
get_provenance_inputs_iterator()[source]
has_finished()[source]

Has the process finished, i.e. completed running normally, without abort or an exception.

Returns:True if finished, False otherwise
Return type:bool
init()[source]
kill(msg=None)[source]

Kill the process and all the children calculations it called

load_instance_state(saved_state, load_context)[source]
on_create()[source]
on_entered(from_state)[source]
on_entering(state)[source]
on_except(exc_info)[source]

Log the exception by calling the report method with formatted stack trace from exception info object and store the exception string as a node attribute

Parameters:exc_info – the sys.exc_info() object
on_finish(result, successful)[source]

Set the finish status on the Calculation node

on_output_emitting(output_port, value)[source]

The process has emitted a value on the given output port.

Parameters:
  • output_port – The output port name the value was emitted on
  • value – The value emitted
on_terminated()[source]

Called when a Process enters a terminal state.

out(output_port, value=None)[source]

Record an output value for a specific output port. If the output port matches an explicitly defined Port it will be validated against that. If not it will be validated against the PortNamespace, which means it will be checked for dynamicity and whether the type of the value is valid

Parameters:
  • output_port – the name of the output port, can be namespaced
  • value – the value for the output port
Raises:

TypeError if the output value is not validated against the port

out_many(out_dict)[source]

Add all values given in out_dict to the outputs. The keys of the dictionary will be used as output names.

report(msg, *args, **kwargs)[source]

Log a message to the logger, which should get saved to the database through the attached DbLogHandler. The class name and function name of the caller are prepended to the given message

run_process(process, *args, **inputs)[source]
runner
save_instance_state(out_state, save_context)[source]

Ask the process to save its current instance state.

Parameters:
  • out_state (plumpy.Bundle) – A bundle to save the state to
  • save_context – The save context
submit(process, *args, **kwargs)[source]
update_node_state(state)[source]
update_outputs()[source]
class aiida.work.processes.ProcessState[source]

Bases: enum.Enum

The possible states that a Process can be in.

CREATED = 'created'
EXCEPTED = 'excepted'
FINISHED = 'finished'
KILLED = 'killed'
RUNNING = 'running'
WAITING = 'waiting'
__module__ = 'plumpy.process_states'
class aiida.work.processes.FunctionProcess(*args, **kwargs)[source]

Bases: aiida.work.processes.Process

__abstractmethods__ = frozenset([])
__init__(*args, **kwargs)[source]

The signature of the constructor should not be changed by subclassing processes.

Parameters:
  • inputs (dict) – A dictionary of the process inputs
  • pid – The process ID, can be manually set, if not a unique pid will be chosen
  • logger (logging.Logger) – An optional logger for the process to use
  • loop – The event loop
  • communicator (plumpy.Communicator) – The (optional) communicator
__module__ = 'aiida.work.processes'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_calc_node_class

alias of aiida.orm.implementation.general.calculation.function.FunctionCalculation

static _func(*args, **kwargs)[source]

This is used internally to store the actual function that is being wrapped and will be replaced by the build method.

_func_args = None
_run()[source]
_setup_db_record()[source]
classmethod args_to_dict(*args)[source]

Create an input dictionary (i.e. label: value) from supplied args.

Parameters:args – The values to use
Returns:A label: value dictionary
static build(func, calc_node_class=None)[source]

Build a Process from the given function. All function arguments will be assigned as process inputs. If keyword arguments are specified then these will also become inputs.

Parameters:
  • func – The function to build a process from
  • calc_node_class (aiida.orm.calculation.Calculation) – Provide a custom calculation class to be used, has to be constructable with no arguments
Returns:

A Process class that represents the function

Return type:

FunctionProcess
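The args_to_dict mapping that build() relies on can be sketched with stdlib introspection. This is a self-contained illustration of the idea (positional values zipped against the wrapped function's argument names); `add` is a hypothetical example function, and the historical implementation used the Python 2 era getargspec rather than getfullargspec:

```python
import inspect


def args_to_dict(func, *args):
    """Create a label: value input dictionary from positional args."""
    names = inspect.getfullargspec(func).args
    return dict(zip(names, args))


def add(x, y):
    return x + y
```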

classmethod create_inputs(*args, **kwargs)[source]
execute()[source]

Execute the process. This will return if the process terminates or is paused.

Returns:None if not terminated, otherwise self.outputs
classmethod get_or_create_db_record()[source]

Create a database calculation node that represents what happened in this process.

Returns:A calculation

Components to communicate tasks to RabbitMQ.

aiida.work.rmq.new_control_panel()[source]

Create a new control panel based on the current profile configuration

Returns:A new control panel instance
Return type:aiida.work.rmq.ProcessControlPanel
aiida.work.rmq.new_blocking_control_panel()[source]

Create a new blocking control panel based on the current profile configuration

Returns:A new control panel instance
Return type:aiida.work.rmq.BlockingProcessControlPanel
class aiida.work.rmq.BlockingProcessControlPanel(prefix, testing_mode=False)[source]

Bases: aiida.work.rmq.ProcessControlPanel

A blocking adapter for the ProcessControlPanel.

__enter__()[source]
__exit__(exc_type, exc_val, exc_tb)[source]
__init__(prefix, testing_mode=False)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.work.rmq'
close()[source]
execute_action(action)[source]
execute_process_start(process_class, init_args=None, init_kwargs=None)[source]
exception aiida.work.rmq.RemoteException[source]

Bases: exceptions.Exception

An exception occurred at the remote end of the call

__module__ = 'kiwipy.communications'
__weakref__

list of weak references to the object (if defined)

exception aiida.work.rmq.DeliveryFailed[source]

Bases: exceptions.Exception

Failed to deliver a message

__module__ = 'kiwipy.communications'
__weakref__

list of weak references to the object (if defined)

class aiida.work.rmq.ProcessLauncher(loop=None, persister=None, load_context=None, loader=None)[source]

Bases: plumpy.process_comms.ProcessLauncher

A sub class of plumpy.ProcessLauncher to launch a Process

It overrides the _continue method to make sure that the node corresponding to the task can be loaded and that, if the node is already marked as terminated, the process is not continued; instead the future is reconstructed and returned

__module__ = 'aiida.work.rmq'
_continue(task)[source]

Continue the task

Note that the task may already have been completed, as indicated by the corresponding node, in which case it is not continued; instead, the corresponding future is reconstructed and returned. This scenario may occur when the Process was already completed by another worker that failed, however, to send the acknowledgment.

Parameters:task – the task to continue
Raises:plumpy.TaskRejected – if the node corresponding to the task cannot be loaded
class aiida.work.rmq.ProcessControlPanel(prefix, rmq_connector, testing_mode=False)[source]

Bases: object

RMQ control panel for launching, controlling and getting status of Processes over the RMQ protocol.

__dict__ = dict_proxy({'execute_action': <function execute_action>, '__module__': 'aiida.work.rmq', '__exit__': <function __exit__>, 'execute_process': <function execute_process>, 'pause_process': <function pause_process>, 'connect': <function connect>, 'communicator': <property object>, '__dict__': <attribute '__dict__' of 'ProcessControlPanel' objects>, 'close': <function close>, 'kill_process': <function kill_process>, '__weakref__': <attribute '__weakref__' of 'ProcessControlPanel' objects>, 'launch_process': <function launch_process>, '__init__': <function __init__>, '__enter__': <function __enter__>, '__doc__': '\n RMQ control panel for launching, controlling and getting status of\n Processes over the RMQ protocol.\n ', 'play_process': <function play_process>, 'request_status': <function request_status>, 'continue_process': <function continue_process>})
__enter__()[source]
__exit__(exc_type, exc_val, exc_tb)[source]
__init__(prefix, rmq_connector, testing_mode=False)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.work.rmq'
__weakref__

list of weak references to the object (if defined)

close()[source]
communicator
connect()[source]
continue_process(pid)[source]
execute_action(action)[source]
execute_process(process_class, init_args=None, init_kwargs=None)[source]
kill_process(pid, msg=None)[source]
launch_process(process_class, init_args=None, init_kwargs=None)[source]
pause_process(pid)[source]
play_process(pid)[source]
request_status(pid)[source]
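The BlockingProcessControlPanel above follows a common pattern: wrap an asynchronous control panel in a context manager so the underlying connection is opened on entry and reliably closed on exit. A minimal sketch of that pattern, with illustrative stand-in classes rather than the actual aiida.work.rmq API:

```python
class FakeConnection(object):
    """Stand-in for an RMQ connection; purely illustrative."""

    def __init__(self):
        self.open = True

    def close(self):
        self.open = False


class BlockingPanel(object):
    """Context manager that guarantees the connection is closed on exit."""

    def __init__(self):
        self.connection = FakeConnection()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()
        return False  # do not suppress exceptions raised in the with-block

    def close(self):
        self.connection.close()


with BlockingPanel() as panel:
    assert panel.connection.open
# here the connection has been closed, even if the body had raised
```

The same guarantee is what makes `with new_blocking_control_panel() as panel:` the idiomatic way to use the real class.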

Runners that can run and submit processes.

class aiida.work.runners.Runner(rmq_config=None, poll_interval=0.0, loop=None, rmq_submit=False, enable_persistence=True, persister=None)[source]

Bases: object

Class that can launch processes by running in the current interpreter or by submitting them to the daemon.

__dict__ = dict_proxy({'__module__': 'aiida.work.runners', 'rmq': <property object>, '_closed': False, '_poll_calculation': <function _poll_calculation>, '_run': <function _run>, '_create_child_runner': <function _create_child_runner>, 'run_get_pid': <function run_get_pid>, 'get_calculation_future': <function get_calculation_future>, 'child_runner': <function child_runner>, '__dict__': <attribute '__dict__' of 'Runner' objects>, 'close': <function close>, '__weakref__': <attribute '__weakref__' of 'Runner' objects>, 'transport': <property object>, '_setup_rmq': <function _setup_rmq>, '_rmq_connector': None, 'call_on_legacy_workflow_finish': <function call_on_legacy_workflow_finish>, 'run_get_node': <function run_get_node>, '__enter__': <function __enter__>, '_poll_legacy_wf': <function _poll_legacy_wf>, 'submit': <function submit>, '_communicator': None, 'start': <function start>, 'is_closed': <function is_closed>, '__doc__': 'Class that can launch processes by running in the current interpreter or by submitting them to the daemon.', '__exit__': <function __exit__>, '_persister': None, 'run': <function run>, 'run_until_complete': <function run_until_complete>, 'stop': <function stop>, 'instantiate_process': <function instantiate_process>, 'communicator': <property object>, 'persister': <property object>, '__init__': <function __init__>, 'call_on_calculation_finish': <function call_on_calculation_finish>, 'loop': <property object>})
__enter__()[source]
__exit__(exc_type, exc_val, exc_tb)[source]
__init__(rmq_config=None, poll_interval=0.0, loop=None, rmq_submit=False, enable_persistence=True, persister=None)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.work.runners'
__weakref__

list of weak references to the object (if defined)

_closed = False
_communicator = None
_create_child_runner()[source]
_persister = None
_poll_calculation(calc_node, callback)[source]
_poll_legacy_wf(workflow, callback)[source]
_rmq_connector = None
_run(process, *args, **inputs)[source]

Run the process with the supplied inputs in this runner, blocking until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

tuple of the outputs of the process and the calculation node

_setup_rmq(url, prefix=None, task_prefetch_count=None, testing_mode=False)[source]

Set up the RabbitMQ connection by creating a connector, communicator and control panel

Parameters:
  • url – the url to use for the connector
  • prefix – the rmq prefix to use for the communicator
  • task_prefetch_count – the maximum number of tasks the communicator may retrieve at a given time
  • testing_mode – whether to create a communicator in testing mode
call_on_calculation_finish(pk, callback)[source]

Register a callback to be called when the calculation with the given pk is terminated

Parameters:
  • pk – the pk of the calculation
  • callback – the function to be called upon calculation termination
call_on_legacy_workflow_finish(pk, callback)[source]

Register a callback to be called when the workflow with the given pk is terminated

Parameters:
  • pk – the pk of the workflow
  • callback – the function to be called upon workflow termination
child_runner(**kwds)[source]

Context manager that will yield a runner that is a child of this runner

Returns:a Runner instance that inherits the attributes of this runner
close()[source]

Close the runner by stopping the loop and disconnecting the RmqConnector if it has one.

communicator
get_calculation_future(pk)[source]

Get a future for an orm Calculation. The future will have the calculation node as the result when finished.

Returns:A future representing the completion of the calculation node
instantiate_process(process, *args, **inputs)[source]

Return an instance of the process with the given runner and inputs. The function can deal with various types for the process argument:

  • Process instance: will simply return the instance
  • JobCalculation class: will construct the JobProcess and instantiate it
  • ProcessBuilder instance: will instantiate the Process from the class and inputs defined within it
  • Process class: will instantiate with the specified inputs

If anything else is passed, a ValueError will be raised

Parameters:
  • self – instance of a Runner
  • process – Process instance or class, JobCalculation class or ProcessBuilder instance
  • inputs – the inputs for the process to be instantiated with
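The dispatch performed by instantiate_process can be summarized with a small sketch; the classes below are simplified stand-ins, not the real Process hierarchy:

```python
class Process(object):
    """Stand-in for a process class; stores its inputs."""

    def __init__(self, **inputs):
        self.inputs = inputs


class Builder(object):
    """Stand-in for a ProcessBuilder: carries a class plus predefined inputs."""

    def __init__(self, process_class, **inputs):
        self.process_class = process_class
        self.inputs = inputs


def instantiate(process, **inputs):
    # Process instance: simply return the instance
    if isinstance(process, Process):
        return process
    # Builder instance: instantiate its class with the inputs defined within it
    if isinstance(process, Builder):
        return process.process_class(**process.inputs)
    # Process class: instantiate with the specified inputs
    if isinstance(process, type) and issubclass(process, Process):
        return process(**inputs)
    # Anything else is rejected
    raise ValueError('invalid process: {}'.format(process))
```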
is_closed()[source]
loop
persister
rmq
run(process, *args, **inputs)[source]

Run the process with the supplied inputs in this runner, blocking until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

the outputs of the process

run_get_node(process, *args, **inputs)[source]

Run the process with the supplied inputs in this runner, blocking until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

tuple of the outputs of the process and the calculation node

run_get_pid(process, *args, **inputs)[source]

Run the process with the supplied inputs in this runner, blocking until the process is completed. The return value will be the results of the completed process

Parameters:
  • process – the process class or workfunction to run
  • inputs – the inputs to be passed to the process
Returns:

tuple of the outputs of the process and process pid

run_until_complete(future)[source]

Run the loop until the future has finished and return the result.

start()[source]

Start the internal event loop.

stop()[source]

Stop the internal event loop.

submit(process, *args, **inputs)[source]

Submit the process with the supplied inputs to this runner, immediately returning control to the interpreter. The return value will be the calculation node of the submitted process

Parameters:
  • process – the process class to submit
  • inputs – the inputs to be passed to the process
Returns:

the calculation node of the process

transport
class aiida.work.runners.DaemonRunner(*args, **kwargs)[source]

Bases: aiida.work.runners.Runner

A subclass of Runner suited for use as a daemon runner

__init__(*args, **kwargs)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.work.runners'
_setup_rmq(url, prefix=None, task_prefetch_count=None, testing_mode=False)[source]

Set up the RabbitMQ connection by creating a connector, communicator and control panel

Parameters:
  • url – the url to use for the connector
  • prefix – the rmq prefix to use for the communicator
  • task_prefetch_count – the maximum number of tasks the communicator may retrieve at a given time
  • testing_mode – whether to create a communicator in testing mode
aiida.work.runners.new_runner(**kwargs)[source]

Create a default runner optionally passing keyword arguments

Parameters:kwargs – arguments to be passed to Runner constructor
Returns:a new runner instance
aiida.work.runners.set_runner(runner)[source]

Set the global runner instance

Parameters:runner – the runner instance to set as the global runner
aiida.work.runners.get_runner()[source]

Get the global runner instance

Returns:the global runner
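new_runner, set_runner and get_runner together implement a lazily created module-level singleton. The mechanism can be sketched as follows (the Runner class here is a stand-in with none of the real constructor arguments):

```python
class Runner(object):
    """Stand-in for aiida.work.runners.Runner."""

    def __init__(self, **kwargs):
        self.options = kwargs


_RUNNER = None  # module-level cache for the global runner


def new_runner(**kwargs):
    """Create a fresh runner, forwarding keyword arguments to the constructor."""
    return Runner(**kwargs)


def set_runner(runner):
    """Replace the global runner instance."""
    global _RUNNER
    _RUNNER = runner


def get_runner():
    """Return the global runner, creating a default one on first use."""
    global _RUNNER
    if _RUNNER is None:
        _RUNNER = new_runner()
    return _RUNNER
```

Repeated calls to get_runner return the same instance until set_runner installs a different one.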

Utilities for testing components from the workflow engine

class aiida.work.test_utils.AddProcess(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

Bases: aiida.work.processes.Process

A simple Process that adds two integers.

__abstractmethods__ = frozenset([])
__module__ = 'aiida.work.test_utils'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_run()[source]
classmethod define(spec)[source]
class aiida.work.test_utils.BadOutput(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

Bases: aiida.work.processes.Process

A Process that emits an output that isn’t part of the spec, thereby raising an exception.

__abstractmethods__ = frozenset([])
__module__ = 'aiida.work.test_utils'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_run()[source]
classmethod define(spec)[source]
class aiida.work.test_utils.DummyProcess(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

Bases: aiida.work.processes.Process

A Process that does nothing when it runs.

__abstractmethods__ = frozenset([])
__module__ = 'aiida.work.test_utils'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_run()[source]
classmethod define(spec)[source]
class aiida.work.test_utils.ExceptionProcess(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

Bases: aiida.work.processes.Process

A Process that raises a RuntimeError when run.

__abstractmethods__ = frozenset([])
__module__ = 'aiida.work.test_utils'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_run()[source]
class aiida.work.test_utils.WaitProcess(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

Bases: aiida.work.processes.Process

A Process that waits until it is asked to continue.

__abstractmethods__ = frozenset([])
__module__ = 'aiida.work.test_utils'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_run()[source]
next_step()[source]

A transport queue to batch process multiple tasks that require a Transport.

class aiida.work.transports.TransportQueue(loop=None, interval=30.0)[source]

Bases: object

A queue to get transport objects from authinfo. This class allows clients to register their interest in a transport object which will be provided at some point in the future using a callback.

Internally the class will wait for a specific interval at the end of which it will open the transport and give it to all the clients that asked for it up to that point. This way opening of transports (a costly operation) can be minimised.

class AuthInfoEntry(authinfo, transport, callbacks, callback_handle)

Bases: tuple

__dict__ = dict_proxy({'__module__': 'aiida.work.transports', '_make': <classmethod object>, 'callback_handle': <property object>, 'callbacks': <property object>, '_asdict': <function _asdict>, '__dict__': <property object>, '__getnewargs__': <function __getnewargs__>, 'transport': <property object>, '_fields': ('authinfo', 'transport', 'callbacks', 'callback_handle'), '__new__': <staticmethod object>, '_replace': <function _replace>, 'authinfo': <property object>, '__slots__': (), '__repr__': <function __repr__>, '__getstate__': <function __getstate__>, '__doc__': 'AuthInfoEntry(authinfo, transport, callbacks, callback_handle)'})
__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__module__ = 'aiida.work.transports'
static __new__(_cls, authinfo, transport, callbacks, callback_handle)

Create new instance of AuthInfoEntry(authinfo, transport, callbacks, callback_handle)

__repr__()

Return a nicely formatted representation string

__slots__ = ()
_asdict()

Return a new OrderedDict which maps field names to their values

_fields = ('authinfo', 'transport', 'callbacks', 'callback_handle')
classmethod _make(iterable, new=<built-in method __new__ of type object at 0x906d60>, len=<built-in function len>)

Make a new AuthInfoEntry object from a sequence or iterable

_replace(**kwds)

Return a new AuthInfoEntry object replacing specified fields with new values

authinfo

Alias for field number 0

callback_handle

Alias for field number 3

callbacks

Alias for field number 2

transport

Alias for field number 1

__dict__ = dict_proxy({'call_me_with_transport': <function call_me_with_transport>, '__module__': 'aiida.work.transports', '__doc__': '\n A queue to get transport objects from authinfo. This class allows clients\n to register their interest in a transport object which will be provided at\n some point in the future using a callback.\n\n Internally the class will wait for a specific interval at the end of which\n it will open the transport and give it to all the clients that asked for it\n up to that point. This way opening of transports (a costly operation) can\n be minimised.\n ', '__dict__': <attribute '__dict__' of 'TransportQueue' objects>, '_get_or_create_entry': <function _get_or_create_entry>, '__weakref__': <attribute '__weakref__' of 'TransportQueue' objects>, '_do_callback': <function _do_callback>, '__init__': <function __init__>, 'AuthInfoEntry': <class 'aiida.work.transports.AuthInfoEntry'>})
__init__(loop=None, interval=30.0)[source]
Parameters:
  • loop – The io loop
  • interval – The callback interval in seconds
__module__ = 'aiida.work.transports'
__weakref__

list of weak references to the object (if defined)

_do_callback(authinfo_id)[source]

Perform the callback for the given AuthInfoEntry id.

_get_or_create_entry(authinfo)[source]

Create a callback entry for the given authinfo from which the appropriate Transport will be retrieved

Parameters:authinfo – the AuthInfo from which the Transport is to be retrieved
Returns:the constructed AuthInfoEntry
call_me_with_transport(authinfo, callback)[source]

Add a callback to the queue

Parameters:
  • authinfo – the AuthInfo that the callback should use for the Transport
  • callback – the function to be called with Transport
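The batching idea described above — collect all interested callbacks during an interval, then open the transport once and serve everyone — can be sketched without an event loop. This is a toy stand-in, not the real TransportQueue:

```python
class TransportQueueSketch(object):
    """Toy version of the batching logic: all callbacks registered between
    flushes share a single (costly) transport opening."""

    def __init__(self):
        self._callbacks = []
        self.open_count = 0  # how many times the costly open happened

    def call_me_with_transport(self, callback):
        # Register interest; the transport is NOT opened yet
        self._callbacks.append(callback)

    def flush(self):
        # In the real class this runs once per interval on the io loop
        self.open_count += 1
        transport = 'transport-{}'.format(self.open_count)
        callbacks, self._callbacks = self._callbacks, []
        for callback in callbacks:
            callback(transport)
```

All callbacks registered before a flush receive the same transport, so the expensive open happens once per interval rather than once per client.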

Utilities for the workflow engine.

Components for the WorkChain concept of the workflow engine.

class aiida.work.workchain.WorkChain(inputs=None, logger=None, runner=None, enable_persistence=True)[source]

Bases: aiida.work.processes.Process

A WorkChain, the base class for AiiDA workflows.

_CONTEXT = 'CONTEXT'
_Process__called = True
_STEPPER_STATE = 'stepper_state'
__abstractmethods__ = frozenset([])
__init__(inputs=None, logger=None, runner=None, enable_persistence=True)[source]

The signature of the constructor should not be changed by subclassing processes.

Parameters:
  • inputs (dict) – A dictionary of the process inputs
  • pid – The process ID, can be manually set, if not a unique pid will be chosen
  • logger (logging.Logger) – An optional logger for the process to use
  • loop – The event loop
  • communicator (plumpy.Communicator) – The (optional) communicator
__module__ = 'aiida.work.workchain'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 92
_abc_registry = <_weakrefset.WeakSet object>
_auto_persist = set(['_parent_pid', '_paused_flag', '_enable_persistence', '_pid', '_future', '_CREATION_TIME', '_awaitables'])
_do_step()[source]

Execute the next step in the outline and return the result.

If the stepper returns a non-finished status and the return value is of type ToContext, the contents of the ToContext container will be turned into awaitables if necessary. If any awaitables were created, the process will enter the Wait state, otherwise it will continue. When the stepper returns that it is done, the stepper result will be converted to None and returned, unless it is an integer or an instance of ExitCode.

_run()[source]
_spec = <aiida.work.workchain._WorkChainSpec object>
_spec_type

alias of _WorkChainSpec

action_awaitables()[source]

Handle the awaitables that are currently registered with the workchain

Depending on the class type of the awaitable’s target, a different callback function will be bound to the awaitable, and the runner will be asked to call it when the target is completed

ctx
classmethod define(spec)[source]
exit_codes = {}
insert_awaitable(awaitable)[source]

Insert an awaitable that will cause the workchain to wait until it is finished before continuing to the next step.

Parameters:awaitable (aiida.work.awaitable.Awaitable) – The thing to await
load_instance_state(saved_state, load_context)[source]
on_calculation_finished(awaitable, pk)[source]

Callback function called by the runner when the calculation instance identified by pk is completed. The awaitable will be effectuated on the context of the workchain and removed from the internal list. If all awaitables have been dealt with, the workchain process is resumed

Parameters:
  • awaitable – an Awaitable instance
  • pk – the pk of the awaitable’s target
on_legacy_workflow_finished(awaitable, pk)[source]

Callback function called by the runner when the legacy workflow instance identified by pk is completed. The awaitable will be effectuated on the context of the workchain and removed from the internal list. If all awaitables have been dealt with, the workchain process is resumed

Parameters:
  • awaitable – an Awaitable instance
  • pk – the pk of the awaitable’s target
on_run()[source]
on_wait(awaitables)[source]
remove_awaitable(awaitable)[source]

Remove an awaitable.

Precondition: it must be an awaitable that was previously inserted

Parameters:awaitable – The awaitable to remove
save_instance_state(out_state, save_context)[source]

Ask the process to save its current instance state.

Parameters:
  • out_state (plumpy.Bundle) – A bundle to save the state to
  • save_context – The save context
to_context(**kwargs)[source]

A convenience method that provides syntactic sugar for a user to add multiple intersteps that will assign a certain value to the corresponding key in the context of the workchain

aiida.work.workchain.assign_(target)[source]

Convenience function that will construct an Awaitable for a given class instance with the context action set to ASSIGN. When the awaitable target is completed it will be assigned to the context for a key that is to be defined later

Parameters:target – an instance of Calculation, Workflow or Awaitable
Returns:the awaitable
Return type:Awaitable
aiida.work.workchain.append_(target)[source]

Convenience function that will construct an Awaitable for a given class instance with the context action set to APPEND. When the awaitable target is completed it will be appended to a list in the context for a key that is to be defined later

Parameters:target – an instance of Calculation, Workflow or Awaitable
Returns:the awaitable
Return type:Awaitable
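The difference between the two context actions can be shown with a toy resolver: ASSIGN stores the completed target directly under the context key, while APPEND accumulates completed targets in a list. The Awaitable here is a simplified stand-in for aiida.work.awaitable.Awaitable:

```python
import collections

# Simplified awaitable: the key is only defined later, via to_context(key=...)
Awaitable = collections.namedtuple('Awaitable', ['action', 'key', 'result'])

ASSIGN, APPEND = 'assign', 'append'


def assign_(result):
    """Construct an awaitable with the context action set to ASSIGN."""
    return Awaitable(ASSIGN, None, result)


def append_(result):
    """Construct an awaitable with the context action set to APPEND."""
    return Awaitable(APPEND, None, result)


def resolve(ctx, awaitable, key):
    """Effectuate a completed awaitable on the workchain context."""
    if awaitable.action == ASSIGN:
        ctx[key] = awaitable.result
    elif awaitable.action == APPEND:
        ctx.setdefault(key, []).append(awaitable.result)
```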
aiida.work.workchain.if_(condition)[source]

A conditional that can be used in a workchain outline.

Use as:

if_(cls.conditional)(
  cls.step1,
  cls.step2
)

Each step can, of course, also be any valid workchain step, e.g. another conditional.

Parameters:condition – The workchain method that will return True or False
aiida.work.workchain.while_(condition)[source]

A while loop that can be used in a workchain outline.

Use as:

while_(cls.conditional)(
  cls.step1,
  cls.step2
)

Each step can, of course, also be any valid workchain step, e.g. another conditional.

Parameters:condition – The workchain method that will return True or False
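Both constructs are higher-order: calling if_(condition) or while_(condition) returns a callable that accepts the steps of the body. A toy version with a naive interpreter shows the shape (the real implementation lives in plumpy.workchains and builds stepper objects instead of tuples):

```python
def if_(condition):
    # Calling the result with the body steps captures them for later execution
    def body(*steps):
        return ('if', condition, steps)
    return body


def while_(condition):
    def body(*steps):
        return ('while', condition, steps)
    return body


def run_outline(outline, obj):
    """Naively execute a flat outline of plain steps and control constructs."""
    for entry in outline:
        if callable(entry):
            entry(obj)  # a plain step: just call it
        elif entry[0] == 'if':
            _, condition, steps = entry
            if condition(obj):
                run_outline(steps, obj)
        elif entry[0] == 'while':
            _, condition, steps = entry
            while condition(obj):
                run_outline(steps, obj)
```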
aiida.work.workchain.ToContext

alias of __builtin__.dict

class aiida.work.workchain._WorkChainSpec[source]

Bases: aiida.work.process_spec.ProcessSpec, plumpy.workchains.WorkChainSpec

__module__ = 'aiida.work.workchain'

Function decorator that will turn a normal function into an AiiDA workfunction.

aiida.work.workfunctions.workfunction(func, calc_node_class=None)[source]

A decorator to turn a standard Python function into a workfunction. Example usage:

>>> from aiida.orm.data.int import Int
>>>
>>> # Define the workfunction
>>> @workfunction
... def sum(a, b):
...     return a + b
>>>
>>> # Run it with some input
>>> r = sum(Int(4), Int(5))
>>> print(r)
9
>>> r.get_inputs_dict()
{u'result': <FunctionCalculation: uuid: ce0c63b3-1c84-4bb8-ba64-7b70a36adf34 (pk: 3567)>}
>>> r.get_inputs_dict()['result'].get_inputs()
[4, 5]