aiida.orm package

class aiida.orm.JobCalculation(**kwargs)[source]

Bases: aiida.orm.implementation.general.calculation.job.AbstractJobCalculation, aiida.orm.implementation.django.calculation.Calculation

__abstractmethods__ = frozenset([])
__module__ = 'aiida.orm.implementation.django.calculation.job'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 39
_abc_registry = <_weakrefset.WeakSet object>
_logger = <logging.Logger object>
_plugin_type_string = 'calculation.job.JobCalculation.'
_query_type_string = 'calculation.job.'
_set_state(state)[source]

Set the state of the calculation.

Set it in the DbCalcState table to also have the uniqueness check. Moreover (except for the IMPORTED state) the state is also stored in the ‘state’ attribute, which is useful to know it also after importing, and allows faster querying.

Todo

Add further checks to enforce that the states are set in order?

Parameters:state – a string with the state. This must be a valid string, from aiida.common.datastructures.calc_states.
Raise:ModificationNotAllowed if the given state was already set.
get_state(from_attribute=False)[source]

Get the state of the calculation.

Note

this method returns the NOTFOUND state if no state is found in the DB.

Note

the ‘most recent’ state is obtained using the logic in the aiida.common.datastructures.sort_states function.

Todo

Understand if the state returned when no state entry is found in the DB is the best choice.

Parameters:from_attribute – if set to True, read it from the attributes (the attribute is also set with set_state, unless the state is set to IMPORTED; in this way we can also see the state before storing).
Returns:a string. If from_attribute is True and no attribute is found, return None. If from_attribute is False and no entry is found in the DB, return the “NOTFOUND” state.
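A minimal usage sketch (calc is assumed here to be an existing, stored JobCalculation):

state = calc.get_state()                          # most recent state from the DB, e.g. 'FINISHED'
state_attr = calc.get_state(from_attribute=True)  # state from the attributes, or None if not set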
class aiida.orm.WorkCalculation(**kwargs)[source]

Bases: aiida.orm.implementation.django.calculation.Calculation

Calculation node to record the results of an aiida.work.processes.Process from the workflow system in the database

STEPPER_STATE_INFO_KEY = 'stepper_state_info'
__abstractmethods__ = frozenset([])
__module__ = 'aiida.orm.implementation.general.calculation.work'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 39
_abc_registry = <_weakrefset.WeakSet object>
_logger = <logging.Logger object>
_plugin_type_string = 'calculation.work.WorkCalculation.'
_query_type_string = 'calculation.work.'
_updatable_attributes = ('sealed', 'paused', 'checkpoints', 'exception', 'exit_message', 'exit_status', '_process_label', 'process_state', 'process_status', 'stepper_state_info')
set_stepper_state_info(stepper_state_info)[source]

Set the stepper state info of the Calculation

Parameters:stepper_state_info – string representation of the stepper state info
stepper_state_info

Return the stepper state info of the Calculation

Returns:string representation of the stepper state info
class aiida.orm.Code(**kwargs)[source]

Bases: aiida.orm.implementation.general.code.AbstractCode

__abstractmethods__ = frozenset([])
__module__ = 'aiida.orm.implementation.django.code'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 39
_abc_registry = <_weakrefset.WeakSet object>
_logger = <logging.Logger object>
_plugin_type_string = 'code.Code.'
_query_type_string = 'code.'
_set_local()[source]

Set the code as a ‘local’ code, meaning that all the files belonging to the code will be copied to the cluster, and the file set with set_exec_filename will be run.

It also deletes the flags related to the remote case (if any)

can_run_on(computer)[source]

Return True if this code can run on the given computer, False otherwise.

Local codes can run on any machine; remote codes can run only on the machine on which they reside.

TODO: add filters to mask the remote machines on which a local code can run.

classmethod get(pk=None, label=None, machinename=None)[source]

Get a Code object with given identifier string, that can either be the numeric ID (pk), or the label (and machine name) (if unique).

Parameters:
  • pk – the numeric ID (pk) for code
  • label – the code label identifying the code to load
  • machinename – the machine name where code is setup
Raises:
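A usage sketch (the label, machine name and pk are hypothetical):

from aiida.orm import Code
code = Code.get(label='pw', machinename='localhost')
# or, by numeric ID:
code = Code.get(pk=42)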
classmethod get_from_string(code_string)[source]

Get a Code object with given identifier string in the format label@machinename. See the note below for details on the string detection algorithm.

Note

the (leftmost) ‘@’ symbol is always used to split code and computer name. Therefore do not use ‘@’ in the code name if you want to use this function (‘@’ in the computer name is instead valid).

Parameters:

code_string – the code string identifying the code to load

Raises:
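For example (code and computer names are hypothetical):

code = Code.get_from_string('pw@localhost')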
classmethod list_for_plugin(plugin, labels=True)[source]

Return a list of valid code strings for a given plugin.

Parameters:
  • plugin – The string of the plugin.
  • labels – if True, return a list of code names, otherwise return the code PKs (integers).
Returns:

a list of strings with the code names if labels is True, otherwise a list of integers with the code PKs.

set_remote_computer_exec(remote_computer_exec)[source]

Set the code as remote, and pass the computer on which it resides and the absolute path on that computer.

Args:
remote_computer_exec: a tuple (computer, remote_exec_path), where computer is an aiida.orm.Computer or an aiida.backends.djsite.db.models.DbComputer object, and remote_exec_path is the absolute path of the main executable on the remote computer.
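A short sketch, assuming computer is a stored aiida.orm.Computer and using a hypothetical executable path:

code = Code()
code.set_remote_computer_exec((computer, '/usr/local/bin/pw.x'))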
aiida.orm.CalculationFactory(entry_point)[source]

Return the Calculation plugin class for a given entry point

Parameters:entry_point – the entry point name of the Calculation plugin
aiida.orm.DataFactory(entry_point)[source]

Return the Data plugin class for a given entry point

Parameters:entry_point – the entry point name of the Data plugin
aiida.orm.WorkflowFactory(entry_point)[source]

Return the Workflow plugin class for a given entry point

Parameters:entry_point – the entry point name of the Workflow plugin
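A usage sketch for the three factories: 'structure' is the entry point of the StructureData plugin shipped with AiiDA, while the calculation and workflow entry points below are hypothetical names of installed plugins:

from aiida.orm import CalculationFactory, DataFactory, WorkflowFactory

StructureData = DataFactory('structure')
PwCalculation = CalculationFactory('quantumespresso.pw')  # hypothetical plugin entry point
MyWorkflow = WorkflowFactory('myplugin.workflow')         # hypothetical plugin entry point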
class aiida.orm.QueryBuilder(*args, **kwargs)[source]

Bases: object

The class to query the AiiDA database.

Usage:

from aiida.orm.querybuilder import QueryBuilder
qb = QueryBuilder()
# Querying nodes:
qb.append(Node)
# retrieving the results:
results = qb.all()
_EDGE_TAG_DELIM = '--'
_VALID_PROJECTION_KEYS = ('func', 'cast')
__init__(*args, **kwargs)[source]

Instantiates a QueryBuilder instance.

Which backend is used is decided here based on the backend settings (taken from the user profile). This cannot currently be overridden by the user.

Parameters:
  • debug (bool) – Turn on debug mode. This feature prints information on the screen about the stages of the QueryBuilder. Does not affect results.
  • path (list) – A list of the vertices to traverse. Leave empty if you plan on using the method QueryBuilder.append().
  • filters – The filters to apply. You can specify the filters here, when appending to the query using QueryBuilder.append() or even later using QueryBuilder.add_filter(). The latter gives API details.
  • project – The projections to apply. You can specify the projections here, when appending to the query using QueryBuilder.append() or even later using QueryBuilder.add_projection(). The latter gives API details.
  • limit (int) – Limit the number of rows to this number. Check QueryBuilder.limit() for more information.
  • offset (int) – Set an offset for the results returned. Details in QueryBuilder.offset().
  • order_by – How to order the results. Like the two parameters above, this can also be set at a later stage; check QueryBuilder.order_by() for more information.
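The same query can thus be specified entirely at instantiation; a sketch equivalent to building it step by step with QueryBuilder.append():

from aiida.orm import Node
from aiida.orm.querybuilder import QueryBuilder

qb = QueryBuilder(
    path=[{'cls': Node, 'tag': 'node'}],  # one vertex in the path
    filters={'node': {'id': {'>': 12}}},  # filters keyed by tag
    project={'node': ['id', 'uuid']},     # projections keyed by tag
    limit=10,
)
results = qb.all()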
__module__ = 'aiida.orm.querybuilder'
__str__()[source]

When somebody hits: print(QueryBuilder) or print(str(QueryBuilder)) I want to print the SQL-query. Because it looks cool…

__weakref__

list of weak references to the object (if defined)

_add_to_projections(alias, projectable_entity_name, cast=None, func=None)[source]
Parameters:
  • alias – An instance of sqlalchemy.orm.util.AliasedClass, alias for an ormclass
  • projectable_entity_name – User specification of what to project. Appends to the query’s entities what the user wants to have returned by the query
_add_type_filter(tagspec, query_type_string, plugin_type_string, subclassing)[source]

Add a filter on the type based on the query_type_string

_build()[source]

Build the query and return a sqlalchemy.Query instance.

_build_filters(alias, filter_spec)[source]

Recurse through the filter specification and apply filter operations.

Parameters:
  • alias – The alias of the ORM class the filter will be applied on
  • filter_spec – the specification as given by the queryhelp
Returns:

an instance of sqlalchemy.sql.elements.BinaryExpression.

_build_order(alias, entitytag, entityspec)[source]
_build_projections(tag, items_to_project=None)[source]
static _check_dbentities(entities_cls_joined, entities_cls_to_join, relationship)[source]
Parameters:
  • entities_cls_joined (list) – A list (tuple) of the aliased class passed as joined_entity and the ormclass that was expected
  • entities_cls_to_join (list) – A list (tuple) of the aliased class passed as entity_to_join and the ormclass that was expected
  • relationship (str) – The relationship between the two entities to make the Exception comprehensible
_get_connecting_node(index, joining_keyword=None, joining_value=None, **kwargs)[source]
Parameters:
  • querydict – A dictionary specifying how the current node is linked to other nodes.
  • index – Index of this node within the path specification

Valid (currently implemented) keys are:

  • input_of
  • output_of
  • descendant_of
  • ancestor_of
  • direction
  • group_of
  • member_of
  • has_computer
  • computer_of
  • created_by
  • creator_of
  • owner_of
  • belongs_to

Future:

  • master_of
  • slave_of
_get_function_map()[source]
_get_json_compatible(inp)[source]
Parameters:inp – The input value that will be converted. Recurses into each value if inp is an iterable.
_get_ormclass(cls, ormclasstype)[source]

For testing purposes, I want to check whether the implementation gives the correct ormclass back. Just relays to the implementation; details for this function are in the interface.

_get_projectable_entity(alias, column_name, attrpath, **entityspec)[source]
_get_tag_from_specification(specification)[source]
Parameters:specification – If that is a string, I assume the user has deliberately specified it with tag=specification. In that case, I simply check that it’s not a duplicate. If it is a class, I check if it’s in the _cls_to_tag_map!
_get_unique_tag(ormclasstype)[source]

Using the function get_tag_from_type, I get a tag. I increment an index that is appended to that tag until I have an unused tag. This function is called in QueryBuilder.append() when autotag is set to True.

Parameters:ormclasstype (str) – The string that defines the type of the AiiDA ORM class. For subclasses of Node, this is the Node._plugin_type_string; for others it is as returned by QueryBuilder._get_ormclass().
Returns:A tag, as a string.
_join_ancestors_recursive(joined_entity, entity_to_join, isouterjoin, filter_dict, expand_path=False)[source]

Join ancestors using the recursive functionality. TODO: move the filters inside the recursive query (for example on depth). TODO: pass an option to also show the path, if this is wanted.

_join_computer(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – An entity that can use a computer (eg a node)
  • entity_to_join – aliased dbcomputer entity
_join_created_by(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – the aliased user you want to join to
  • entity_to_join – the (aliased) node or group in the DB to join with
_join_creator_of(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – the aliased node
  • entity_to_join – the aliased user to join to that node
_join_descendants_recursive(joined_entity, entity_to_join, isouterjoin, filter_dict, expand_path=False)[source]

Join descendants using the recursive functionality. TODO: move the filters inside the recursive query (for example on depth). TODO: pass an option to also show the path, if this is wanted.

_join_group_members(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – The (aliased) ORMclass that is a group in the database
  • entity_to_join – The (aliased) ORMClass that is a node and member of the group

joined_entity and entity_to_join are joined via the table_groups_nodes table, from joined_entity as group to entity_to_join as node (entity_to_join is a member_of joined_entity).

_join_group_user(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – An aliased dbgroup
  • entity_to_join – aliased dbuser
_join_groups(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – The (aliased) node in the database
  • entity_to_join – The (aliased) Group

joined_entity and entity_to_join are joined via the table_groups_nodes table, from joined_entity as node to entity_to_join as group (entity_to_join is a group_of joined_entity).

_join_inputs(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – The (aliased) ORMclass that is an output
  • entity_to_join – The (aliased) ORMClass that is an input.

joined_entity and entity_to_join are joined with a link from joined_entity as output to entity_to_join as input (entity_to_join is an input_of joined_entity).

_join_masters(joined_entity, entity_to_join)[source]
_join_outputs(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – The (aliased) ORMclass that is an input
  • entity_to_join – The (aliased) ORMClass that is an output.

joined_entity and entity_to_join are joined with a link from joined_entity as input to entity_to_join as output (entity_to_join is an output_of joined_entity).

_join_slaves(joined_entity, entity_to_join)[source]
_join_to_computer_used(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – the (aliased) computer entity
  • entity_to_join – the (aliased) node entity
_join_user_group(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – An aliased user
  • entity_to_join – aliased group
add_filter(tagspec, filter_spec)[source]

Adding a filter to my filters.

Parameters:
  • tagspec – The tag, which has to exist already as a key in self._filters
  • filter_spec – The specifications for the filter, has to be a dictionary

Usage:

qb = QueryBuilder()         # Instantiating the QueryBuilder instance
qb.append(Node, tag='node') # Appending a Node
#let's put some filters:
qb.add_filter('node',{'id':{'>':12}})
# 2 filters together:
qb.add_filter('node',{'label':'foo', 'uuid':{'like':'ab%'}})
# Now I am overriding the first filter I set:
qb.add_filter('node',{'id':13})
add_projection(tag_spec, projection_spec)[source]

Adds a projection

Parameters:
  • tag_spec – A valid specification for a tag
  • projection_spec – The specification for the projection. A projection is a list of dictionaries, with each dictionary containing key-value pairs where the key is database entity (e.g. a column / an attribute) and the value is (optional) additional information on how to process this database entity.

If the given projection_spec is not a list, it will be expanded to a list. If the list items are not dictionaries but strings (no additional processing of the projected results desired), they will be expanded to dictionaries.

Usage:

qb = QueryBuilder()
qb.append(StructureData, tag='struc')

# Will project the uuid and the kinds
qb.add_projection('struc', ['uuid', 'attributes.kinds'])

The above example will project the uuid and the kinds-attribute of all matching structures. There are 2 (so far) special keys.

The single star * will project the ORM-instance:

qb = QueryBuilder()
qb.append(StructureData, tag='struc')
# Will project the ORM instance
qb.add_projection('struc', '*')
print type(qb.first()[0])
# >>> aiida.orm.data.structure.StructureData

The double star ** projects all possible projections of this entity:

QueryBuilder().append(StructureData, tag='s', project='**').limit(1).dict()[0]['s'].keys()

# >>> u'user_id, description, ctime, label, extras, mtime, id, attributes, dbcomputer_id, nodeversion, type, public, uuid'

Be aware that the result of ** depends on the backend implementation.

all(batch_size=None)[source]

Executes the full query, with the order of the rows as returned by the backend. The order inside each row is given by the order of the vertices in the path and the order of the projections for each vertex in the path.

Parameters:batch_size (int) – The size of the batches in which the backend is asked to return the results. You can optimize the speed of the query by tuning this parameter. Leave the default (None) if speed is not critical or if you don’t know what you’re doing!
Returns:a list of lists of all projected entities.
append(cls=None, type=None, tag=None, filters=None, project=None, subclassing=True, edge_tag=None, edge_filters=None, edge_project=None, outerjoin=False, **kwargs)[source]

Any iterative procedure to build the path for a graph query needs to invoke this method to append to the path.

Parameters:
  • cls

The AiiDA class (or backend class) defining the appended vertex. Also supports a tuple/list of classes, which results in all instances of these classes being accepted in the query. However, the classes have to have the same ORM base class for the joining to work, i.e. both have to be subclasses of Node. Valid is:

    cls=(StructureData, ParameterData)
    

    This is invalid:

    cls=(Group, Node)
  • type (str) – The type of the class, if cls is not given. Also here, a tuple or list is accepted.
  • autotag (bool) – Whether to automatically find a unique tag (default False).
  • tag (str) – A unique tag. If none is given, I will create a unique tag myself.
  • filters – Filters to apply for this vertex. See add_filter(), the method invoked in the background, or usage examples for details.
  • project – Projections to apply. See usage examples for details. More information also in add_projection().
  • subclassing (bool) – Whether to include subclasses of the given class (default True). E.g. specifying Calculation as cls will include JobCalculations, InlineCalculations, etc.
  • outerjoin (bool) – If True (default is False), will do a left outer join instead of an inner join.
  • edge_tag (str) – The tag that the edge will get. If nothing is specified (and there is a meaningful edge), the default is tag1--tag2, with tag1 being the entity joining from and tag2 being the entity joining to (this entity).
  • edge_filters (str) – The filters to apply on the edge. Also here, details in add_filter().
  • edge_project (str) – The project from the edges. API details in add_projection().

A small usage example of how this can be invoked:

qb = QueryBuilder()             # Instantiating empty querybuilder instance
qb.append(cls=StructureData)    # First item is StructureData node
# The next node in the path is a PwCalculation,
# with the structure joined as an input
qb.append(
    cls=PwCalculation,
    output_of=StructureData
)
Returns:self
children(**kwargs)[source]

Join to children/descendants of previous vertex in path.

Returns:self
count()[source]

Counts the number of rows returned by the backend.

Returns:the number of rows as an integer
dict(batch_size=None)[source]

Executes the full query, with the order of the rows as returned by the backend. The order inside each row is given by the order of the vertices in the path and the order of the projections for each vertex in the path.

Parameters:batch_size (int) – The size of the batches in which the backend is asked to return the results. You can optimize the speed of the query by tuning this parameter. Leave the default (None) if speed is not critical or if you don’t know what you’re doing!
Returns:a list of dictionaries of all projected entities. Each dictionary consists of key-value pairs, where the key is the tag of the vertex and the value a dictionary of key-value pairs, where the key is the entity description (a column name or attribute path) and the value the value in the DB.

Usage:

qb = QueryBuilder()
qb.append(
    StructureData,
    tag='structure',
    filters={'uuid':{'==':myuuid}},
)
qb.append(
    Node,
    descendant_of='structure',
    project=['type', 'id'],  # returns type (string) and id (integer)
    tag='descendant'
)

# Return the dictionaries:
print "qb.iterdict()"
for d in qb.iterdict():
    print '>>>', d

results in the following output:

qb.iterdict()
>>> {'descendant': {
        'type': u'calculation.job.quantumespresso.pw.PwCalculation.',
        'id': 7716}
    }
>>> {'descendant': {
        'type': u'data.remote.RemoteData.',
        'id': 8510}
    }
distinct()[source]

Asks for distinct rows, which is the same as asking the backend to remove duplicates. Does not execute the query!

If you want a distinct query:

qb = QueryBuilder()
# append stuff!
qb.append(...)
qb.append(...)
...
qb.distinct().all() #or
qb.distinct().dict()
Returns:self
except_if_input_to(calc_class)[source]

Makes a counterquery based on the current path, only selecting entries that have been input to calc_class.

Parameters:calc_class – The calculation class to check against
Returns:self
first()[source]

Executes the query asking for one instance. Use as follows:

qb = QueryBuilder(**queryhelp)
qb.first()
Returns:One row of results as a list
get_alias(tag)[source]

In order to allow the user to continue building on a query, this utility function returns the aliased ORM class for a given tag.

Parameters:tag – The tag for a vertex in the path
Returns:the alias given for that vertex
get_aliases()[source]
Returns:the list of aliases
get_json_compatible_queryhelp()[source]

Makes the queryhelp a JSON-compatible dictionary. In this way, the queryhelp can be stored in the database or in a JSON object, retrieved or shared, and used later. See this usage:

qb = QueryBuilder(limit=3).append(StructureData, project='id').order_by({StructureData:'id'})
queryhelp  = qb.get_json_compatible_queryhelp()

# Now I could save this dictionary somewhere and use it later:

qb2=QueryBuilder(**queryhelp)

# This is True if no change has been made to the database.
# Note that such a comparison can only be True if the order of results is enforced
qb.all()==qb2.all()
Returns:the json-compatible queryhelp
get_query()[source]

Instantiates and manipulates a sqlalchemy.orm.Query instance if this is needed. First, I check if the query instance is still valid by hashing the queryhelp. In this way, if a user asks for the same query twice, I am not recreating an instance.

Returns:an instance of sqlalchemy.orm.Query that is specific to the backend used.
get_results_dict()[source]

Deprecated, use dict() instead

get_used_tags(vertices=True, edges=True)[source]

Returns a list of all the tags of vertices and edges that are being used. The parameters allow to select only subsets. :param bool vertices: Defaults to True. If True, adds the tags of vertices to the returned list. :param bool edges: Defaults to True. If True, adds the tags of edges to the returned list.

Returns:A list of all tags, including (if present) the tags given to the edges
inject_query(query)[source]

Manipulate the query and inject it back. This can be done to add custom filters using SQLA. :param query: A sqlalchemy.orm.Query instance

inputs(**kwargs)[source]

Join to inputs of previous vertex in path.

Returns:self
iterall(batch_size=100)[source]

Same as all(), but returns a generator. Be aware that this is only safe if no commit will take place during this transaction. You might also want to read the SQLAlchemy documentation on http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.yield_per

Parameters:batch_size (int) – The size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter.
Returns:a generator of lists
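A sketch that consumes the results batch-wise instead of loading them all into memory at once:

for row in qb.iterall(batch_size=500):
    # each row is a list with one value per projection
    handle(row)  # hypothetical handler function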
iterdict(batch_size=100)[source]

Same as dict(), but returns a generator. Be aware that this is only safe if no commit will take place during this transaction. You might also want to read the SQLAlchemy documentation on http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.yield_per

Parameters:batch_size (int) – The size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter.
Returns:a generator of dictionaries
limit(limit)[source]

Set the limit (number of rows to return).

Parameters:limit (int) – the number of rows to return
offset(offset)[source]

Set the offset. If offset is set, that many rows are skipped before returning. offset = 0 is the same as omitting setting the offset. If both offset and limit appear, then offset rows are skipped before starting to count the limit rows that are returned.

Parameters:offset (int) – the number of rows to skip
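Together with limit(), this allows simple pagination; a sketch (the page size is hypothetical):

page_size = 50
page = 2
qb.limit(page_size)
qb.offset(page * page_size)
results = qb.all()  # rows 100..149 of the full result set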
one()[source]

Executes the query asking for exactly one result. Will raise an exception if this is not the case. :raises: MultipleObjectsError if more than one row can be returned :raises: NotExistent if no result was found

order_by(order_by)[source]

Set the entity to order by

Parameters:order_by – This is a list of items, where each item is a dictionary that specifies what to sort for an entity

In each dictionary in that list, keys represent valid tags of entities (tables), and values are lists of columns.

Usage:

#Sorting by id (ascending):
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node':['id']})

# or
#Sorting by id (ascending):
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node':[{'id':{'order':'asc'}}]})

# for descending order:
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node':[{'id':{'order':'desc'}}]})

# or (shorter)
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node':[{'id':'desc'}]})
outputs(**kwargs)[source]

Join to outputs of previous vertex in path.

Returns:self
parents(**kwargs)[source]

Join to parents/ancestors of previous vertex in path.

Returns:self
set_debug(debug)[source]

Run in debug mode. This does not affect functionality, but prints intermediate stages when creating a query on screen.

Parameters:debug (bool) – Turn debug on or off
class aiida.orm.Workflow(**kwargs)[source]

Bases: aiida.orm.implementation.general.workflow.AbstractWorkflow

__init__(**kwargs)[source]

Initializes the Workflow super class, stores the instance in the DB and, if given, stores the starting parameters.

If initialized with a uuid, the Workflow is loaded from the DB; if not, a new workflow is generated and added to the DB following the stack frameworks. This means that only modules inside aiida.workflows are allowed to implement the workflow super calls and be stored. The caller names, modules and files are retrieved from the stack.

Parameters:
  • uuid – a string with the uuid of the object to be loaded.
  • params – a dictionary of storable objects to initialize the specific workflow
Raise:

NotExistent: if there is no entry of the desired workflow kind with the given uuid.

__module__ = 'aiida.orm.implementation.django.workflow'
_get_dbworkflowinstance()[source]
_increment_version_number_db()[source]

This function increments the version number in the DB. This should be called every time you need to increment the version (e.g. on adding an extra or attribute).

_update_db_description_field(field_value)[source]

Safety method to store the description of the workflow

Returns:a string
_update_db_label_field(field_value)[source]

Safety method to store the label of the workflow

Returns:a string
add_attribute(_name, _value)[source]

Add one attribute to the Workflow. If another attribute is present with the same name it will be overwritten. :param name: a string with the attribute name to store :param value: a storable object to store

add_attributes(_params)[source]

Add a set of attributes to the Workflow. If another attribute is present with the same name it will be overwritten. :param params: a dictionary mapping each attribute name to a storable object

add_result(_name, _value)[source]

Add one result to the Workflow. If another result is present with the same name it will be overwritten. :param name: a string with the result name to store :param value: a storable object to store

add_results(_params)[source]

Add a set of results to the Workflow. If another result is present with the same name it will be overwritten. :param params: a dictionary mapping each result name to a storable object

append_to_report(text)[source]

Adds text to the Workflow report.

Note:Previously, if the workflow was a subworkflow of another Workflow, this method would call the parent append_to_report method; this is no longer the case.
clear_report()[source]

Wipe the Workflow report. In case the workflow is a subworkflow of any other Workflow, this method calls the parent clear_report method.

ctime
dbworkflowinstance

Get the DbWorkflow object stored in the super class.

Returns:DbWorkflow object from the database
description

Get the description of the workflow.

Returns:a string
get_attribute(_name)[source]

Get one Workflow attribute :param name: a string with the attribute name to retrieve :return: a dictionary of storable objects

get_attributes()[source]

Get the Workflow attributes :return: a dictionary of storable objects

get_parameter(_name)[source]

Get one Workflow parameter :param name: a string with the parameter name to retrieve :return: a dictionary of storable objects

get_parameters()[source]

Get the Workflow parameters :return: a dictionary of storable objects

get_report()[source]

Return the Workflow report.

Note:previously, if the workflow was a subworkflow of another Workflow, this method would call the parent get_report method. This is no longer the case.
Returns:a list of strings
get_result(_name)[source]

Get one Workflow result :param name: a string with the result name to retrieve :return: a dictionary of storable objects

get_results()[source]

Get the Workflow results :return: a dictionary of storable objects

get_state()[source]

Get the Workflow’s state :return: a state from wf_states in aiida.common.datastructures

get_step(step_method)[source]

Retrieves by name a step from the Workflow. :param step_method: a string with the name of the step to retrieve or a method :raise: ObjectDoesNotExist: if there is no step with the specific name. :return: a DbWorkflowStep object.

get_steps(state=None)[source]

Retrieves all the steps of a specific Workflow, with the possibility to limit the list to steps in a specific state. :param state: a state from wf_states in aiida.common.datastructures :return: a list of DbWorkflowStep objects.

classmethod get_subclass_from_dbnode(wf_db)[source]

Loads the workflow object and reloads the python script in memory with the importlib library; the main class is searched for and then loaded. :param wf_db: a specific DbWorkflowNode object representing the Workflow :return: a Workflow subclass from the specific source code

classmethod get_subclass_from_pk(pk)[source]

Calls the get_subclass_from_dbnode selecting the DbWorkflowNode from the input pk. :param pk: a primary key index for the DbWorkflowNode :return: a Workflow subclass from the specific source code

classmethod get_subclass_from_uuid(uuid)[source]

Calls the get_subclass_from_dbnode selecting the DbWorkflowNode from the input uuid. :param uuid: a uuid for the DbWorkflowNode :return: a Workflow subclass from the specific source code

has_failed()[source]

Returns True if the Workflow’s state is ERROR

has_finished_ok()[source]

Returns True if the Workflow’s state is FINISHED

info()[source]

Returns an array with all the information about the modules, file and class needed to locate the workflow source code

is_new()[source]

Returns True if the Workflow’s state is CREATED

is_running()[source]

Returns True if the Workflow’s state is RUNNING

is_subworkflow()[source]

Return True if this is a subworkflow (i.e., if it has a parent), False otherwise.

label

Get the label of the workflow.

Returns:a string
logger

Get the logger of the Workflow object, so that it also logs to the DB.

Returns:LoggerAdapter object, that works like a logger, but also has the ‘extra’ embedded
pk

Returns the DbWorkflow pk

classmethod query(*args, **kwargs)[source]

Map to the aiidaobjects manager of the DbWorkflow, that returns Workflow objects instead of DbWorkflow entities.

set_params(params, force=False)[source]

Adds parameters to the Workflow that are both stored and used every time the workflow engine re-initializes the specific workflow to launch the new methods.

set_state(state)[source]

Set the Workflow’s state :param state: a state from wf_states in aiida.common.datastructures

store()[source]

Stores the DbWorkflow object data in the database

uuid

Returns the DbWorkflow uuid

class aiida.orm.Group(**kwargs)[source]

Bases: aiida.orm.implementation.general.group.AbstractGroup

__abstractmethods__ = frozenset([])
__init__(**kwargs)[source]

Create a new group. Either pass a dbgroup parameter, to reload a group from the DB (in which case no further parameters are allowed), or pass the parameters for the Group creation.

Parameters:
  • dbgroup – the dbgroup object, if you want to reload the group from the DB rather than creating a new one.
  • name – The group name, required on creation
  • description – The group description (by default, an empty string)
  • user – The owner of the group (by default, the automatic user)
  • type_string – a string identifying the type of group (by default, an empty string, indicating a user-defined group)
__int__()[source]

Convert the class to an integer. This is needed to allow querying with Django. Be careful, though, not to pass it to a wrong field! This only returns the local DB primary key (pk) value.

Returns:the integer pk of the group, or None if not stored.
__module__ = 'aiida.orm.implementation.django.group'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 39
_abc_registry = <_weakrefset.WeakSet object>
add_nodes(nodes)[source]

Add a node or a set of nodes to the group.

Note:The group must be already stored.
Note:each of the nodes passed to add_nodes must be already stored.
Parameters:nodes – a Node or DbNode object to add to the group, or a list of Nodes or DbNodes to add.
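A sketch, assuming group is a stored Group and node1, node2 are stored nodes:

group.add_nodes(node1)           # a single node
group.add_nodes([node1, node2])  # or a list of nodes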
dbgroup
delete()[source]

Delete the group from the DB

description
id
is_stored
name
nodes
pk
classmethod query(name=None, type_string='', pk=None, uuid=None, nodes=None, user=None, node_attributes=None, past_days=None, name_filters=None, **kwargs)[source]

Query for groups.

Note:

By default, query for user-defined groups only (type_string==''). If you want to query for all types of groups, pass type_string=None. If you want to query for a specific type of group, pass a specific string as the type_string argument.

Parameters:
  • name – the name of the group
  • nodes – a node or list of nodes that belongs to the group (alternatively, you can also pass a DbNode or list of DbNodes)
  • pk – the pk of the group
  • uuid – the uuid of the group
  • type_string – the string for the type of group; by default, look only for user-defined groups (see note above).
  • user – by default, query for groups of all users; if specified, must be a DbUser object, or a string for the user email.
  • past_days – by default, query for all groups; if specified, query the groups created in the last past_days. Must be a datetime object.
  • node_attributes – if not None, must be a dictionary with format {k: v}. It will filter and return only groups where there is at least a node with an attribute with key=k and value=v. Different keys of the dictionary are joined with AND (that is, the group should satisfy all requirements). v can be a base data type (str, bool, int, float, …). If it is a list or iterable, the condition is checked so that there should be at least one node in the group with key=k and value equal to each of the values of the iterable.
  • kwargs

    any other filter to be passed to DbGroup.objects.filter

    Example: if node_attributes = {'elements': ['Ba', 'Ti'], 'md5sum': 'xxx'},
    it will find groups that contain the node with md5sum = ‘xxx’, and moreover contain at least one node for element ‘Ba’ and one node for element ‘Ti’.
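A query sketch (the group name and the attribute filter are hypothetical):

groups = Group.query(name='my_group')
groups = Group.query(node_attributes={'element': 'Ba'})  # groups with at least one node having this attribute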
remove_nodes(nodes)[source]

Remove a node or a set of nodes from the group.

Note:The group must be already stored.
Note:each of the nodes passed to remove_nodes must be already stored.
Parameters:nodes – a Node or DbNode object to remove from the group, or a list of Nodes or DbNodes to remove.
store()[source]
type_string
user
uuid
class aiida.orm.Calculation(**kwargs)[source]

Bases: aiida.orm.implementation.general.calculation.AbstractCalculation, aiida.orm.implementation.django.node.Node

__abstractmethods__ = frozenset([])
__module__ = 'aiida.orm.implementation.django.calculation'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 39
_abc_registry = <_weakrefset.WeakSet object>
_logger = <logging.Logger object>
_plugin_type_string = 'calculation.Calculation.'
_query_type_string = 'calculation.'
class aiida.orm.FunctionCalculation(**kwargs)[source]

Bases: aiida.orm.mixins.FunctionCalculationMixin, aiida.orm.implementation.django.calculation.Calculation

Calculation node to record the results of a function that was run wrapped by the aiida.work.workfunctions.workfunction decorator

__abstractmethods__ = frozenset([])
__module__ = 'aiida.orm.implementation.general.calculation.function'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 39
_abc_registry = <_weakrefset.WeakSet object>
_logger = <logging.Logger object>
_plugin_type_string = 'calculation.function.FunctionCalculation.'
_query_type_string = 'calculation.function.'
class aiida.orm.InlineCalculation(**kwargs)[source]

Bases: aiida.orm.mixins.FunctionCalculationMixin, aiida.orm.implementation.django.calculation.Calculation

Calculation node to record the results of a function that was run wrapped by the aiida.orm.calculation.inline.make_inline decorator

__abstractmethods__ = frozenset([])
__module__ = 'aiida.orm.implementation.general.calculation.inline'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 39
_abc_registry = <_weakrefset.WeakSet object>
_cacheable = True
_logger = <logging.Logger object>
_plugin_type_string = 'calculation.inline.InlineCalculation.'
_query_type_string = 'calculation.inline.'
get_desc()[source]

Returns a string with the name of the function that was wrapped by the make_inline decorator that resulted in the creation of this InlineCalculation instance

Returns:description string or None if no function name can be retrieved from the node
aiida.orm.make_inline(func)[source]

This make_inline wrapper/decorator takes a function with specific requirements, runs it and stores the result as an InlineCalculation node. It will also store all other nodes, including any possibly unstored input node! The return value of the wrapped calculation will also be slightly changed, see below.

The wrapper:

  • checks that the function name ends with the string '_inline'
  • checks that each input parameter is a valid Data node (can be stored or unstored)
  • runs the actual function
  • gets the result values
  • checks that the result value is a dictionary, where the keys are all strings and the values are all unstored data nodes
  • creates an InlineCalculation node, links all the kwargs as inputs and the returned nodes as outputs, using the keys as link labels
  • stores all the nodes (including, possibly, unstored input nodes given as kwargs)
  • returns a length-two tuple, where the first element is the InlineCalculation node, and the second is the dictionary returned by the wrapped function

To use this function, you can use it as a decorator of a wrapped function:

@make_inline
def copy_inline(source):
    return {'copy': source.copy()}

In this way, every time you call copy_inline, the wrapped version is actually called, and the return value will be a tuple with the InlineCalculation instance, and the returned dictionary. For instance, if s is a valid Data node, with the following lines:

c, s_copy_dict = copy_inline(source=s)
s_copy = s_copy_dict['copy']

c will contain the new InlineCalculation instance, s_copy the (stored) copy of s (with the side effect that, if s was not stored, after the function call it will be automatically stored).

Note:If you use a wrapper, make sure to write explicitly in the docstrings that the function is going to store the nodes.

The second possibility, if you want the function by default not to store anything, but to be wrappable when necessary, is the following. You simply define the function you want to wrap (copy_inline in the example above) without the decorator:

def copy_inline(source):
    return {'copy': source.copy()}

This is a normal function, so to call it you will normally do:

s_copy_dict = copy_inline(s)

while if you want to wrap it, so that an InlineCalculation is created, and everything is stored, you will run:

c, s_copy_dict = make_inline(copy_inline)(source=s)

Note that, with the wrapper, all the parameters to copy_inline() have to be passed as keyword arguments. Moreover, the return value is different, i.e. (c, s_copy_dict) instead of simply s_copy_dict.

Note

EXTREMELY IMPORTANT! The wrapped function MUST have the following requirements in order to be reproducible. These requirements cannot be enforced, but must be followed when writing the wrapped function.

  • The function MUST NOT USE information that is not passed in the kwargs. In particular, it cannot read files from the hard-drive (that will not be present in another user’s computer), it cannot connect to external databases and retrieve the current entries in that database (that could change over time), etc.
  • The only exception to the above rule is the access to the AiiDA database for the parents of the input nodes. That is, you can take the input nodes passed as kwargs, and use also the data given in their inputs, the inputs of their inputs, … but you CANNOT use any output of any of the above-mentioned nodes (that could change over time).
  • The function MUST NOT have side effects (creating files on the disk, adding entries to an external database, …).

Note

The function will also store:

  • in the attributes: the function name, function namespace and the starting line number of the function in the source file
  • in the repository: the full source file, if it is possible to retrieve it (otherwise it will be set to None, e.g. if the function was defined in the interactive shell).

For this reason, try to keep, if possible, all the code to be run within the same file, so that it is possible to keep the provenance of the functions that were run (if you instead call a function in a different file, you will never know in the future what that function did). If you call external modules and you care about provenance, it would be good to also return in a suitable dictionary the version of these modules (e.g., after importing a module XXX, you can check if the module defines a variable XXX.__version__ or XXX.VERSION or something similar, and store it in an output node).

Note:

All nodes will be stored, including unstored input nodes!!

Parameters:

kwargs – all kwargs are passed to the wrapped function

Returns:

a length-two tuple, where the first element is the InlineCalculation node, and the second is the dictionary returned by the wrapped function. All nodes are stored.

Raises:
  • TypeError – if the return value is not a dictionary, the keys are not strings, or the values are not data nodes. Also raised if the input values are not data nodes.
  • ModificationNotAllowed – if the returned Data nodes are already stored.
  • Exception – All other exceptions from the wrapped function are not caught.
aiida.orm.optional_inline(func)[source]

The optional_inline wrapper/decorator takes a function, which can be called either wrapped in an InlineCalculation or as a simple function, depending on the ‘store’ keyword argument (True stands for InlineCalculation, False for a simple function). The wrapped function has to adhere to the requirements of the make_inline wrapper/decorator.

Usage example:

@optional_inline
def copy_inline(source=None):
    return {'copy': source.copy()}

The function copy_inline will be wrapped in an InlineCalculation when invoked in the following way:

copy_inline(source=node, store=True)

while it will be called as a simple function when invoked:

copy_inline(source=node)

Either way, copy_inline will return the same results.

aiida.orm.load_group(identifier=None, pk=None, uuid=None, label=None, query_with_dashes=True)[source]

Load a group by one of its identifiers: pk, uuid or label. If the type of the identifier is unknown, simply pass it without a keyword and the loader will attempt to infer the type

Parameters:
  • identifier – pk (integer), uuid (string) or label (string) of a group
  • pk – pk of a group
  • uuid – uuid of a group, or the beginning of the uuid
  • label – label of a group
  • query_with_dashes (bool) – allow to query for a uuid with dashes
Returns:

the group instance

Raises:
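A usage sketch (the identifiers are hypothetical):

from aiida.orm import load_group
group = load_group(42)                # interpreted as a pk
group = load_group(label='my_group')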
aiida.orm.load_node(identifier=None, pk=None, uuid=None, sub_class=None, query_with_dashes=True)[source]

Load a node by one of its identifiers: pk or uuid. If the type of the identifier is unknown, simply pass it without a keyword and the loader will attempt to infer the type

Parameters:
  • identifier – pk (integer) or uuid (string)
  • pk – pk of a node
  • uuid – uuid of a node, or the beginning of the uuid
  • sub_class – an optional tuple of orm classes, each of which should be a strict subclass of Node, to narrow the queryset
  • query_with_dashes (bool) – allow to query for a uuid with dashes
Returns:

the node instance

Raises:
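A usage sketch (the identifiers are hypothetical):

from aiida.orm import load_node
node = load_node(1234)              # interpreted as a pk
node = load_node(uuid='20f8c9f6')   # a full uuid or its beginning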
aiida.orm.load_workflow(wf_id=None, pk=None, uuid=None)[source]

Return an AiiDA workflow given PK or UUID.

Parameters:
  • wf_id – PK (integer) or UUID (string) or UUID instance or a workflow
  • pk – PK of a workflow
  • uuid – UUID of a workflow
Returns:

an AiiDA workflow

Raises:

ValueError – if none or more than one of the parameters is supplied, or if the type of wf_id is neither a string nor an integer

class aiida.orm.BackendDelegateWithDefault(backend)[source]

Bases: object

This class is a helper to implement the delegation pattern [1] by delegating functionality (i.e. calling through) to the backend class which will do the actual work.

[1] https://en.wikipedia.org/wiki/Delegation_pattern

_DEFAULT = None
__abstractmethods__ = frozenset(['create_default'])
__init__(backend)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.orm.utils'
__weakref__

list of weak references to the object (if defined)

_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 39
_abc_registry = <_weakrefset.WeakSet object>
classmethod create_default()[source]
classmethod get_default()[source]
class aiida.orm.User(backend)[source]

Bases: aiida.orm.backend.CollectionEntry

This is the base class for User information in AiiDA. An implementing backend needs to provide a concrete version.

REQUIRED_FIELDS = ['first_name', 'last_name', 'institution']
__abstractmethods__ = frozenset(['first_name', 'last_name', 'is_active', 'email', '_set_password', 'is_stored', 'last_login', '_get_password', 'id', 'pk', 'institution', 'store', 'date_joined'])
__module__ = 'aiida.orm.user'
__str__() <==> str(x)[source]
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 39
_abc_registry = <_weakrefset.WeakSet object>
_get_password()[source]
_set_password(new_pass)[source]
date_joined
email
first_name
get_full_name()[source]

Return the user full name

Returns:the user full name
static get_schema()[source]

Every node property contains:

  • display_name: display name of the property
  • help text: short help text of the property
  • is_foreign_key: whether the property is a foreign key to another type of node
  • type: type of the property. e.g. str, dict, int
Returns:schema of the user
get_short_name()[source]

Return the user short name (typically, this returns the email)

Returns:The short name
has_usable_password()[source]
id
institution
is_active
is_stored

Is the user stored

Returns:True if stored, False otherwise
Return type:bool
last_login
last_name
password
pk
store()[source]
verify_password(password)[source]
class aiida.orm.UserCollection(backend)[source]

Bases: aiida.orm.backend.Collection

The collection of users stored in a backend

__abstractmethods__ = frozenset(['create'])
__module__ = 'aiida.orm.user'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 39
_abc_registry = <_weakrefset.WeakSet object>
all()[source]

Get all users

Returns:A collection of users matching the criteria
create(email, first_name='', last_name='', institution='')[source]

Create a user with the provided email address

Parameters:
  • email – An email address for the user
  • first_name (str) – The user's first name
  • last_name (str) – The user's last name
  • institution (str) – The user's institution
Returns:A new user object
Return type:User

find(email=None, id=None)[source]

Find all users matching the given criteria

Parameters:
  • email – An email address to search for
  • id – The id of the user to search for
Returns:A collection of users matching the criteria
get(email)[source]

Get a user using the email address

Parameters:email – The user’s email address
Returns:The corresponding user object
Raises:aiida.common.exceptions.MultipleObjectsError, aiida.common.exceptions.NotExistent

get_automatic_user()[source]

Get the current automatic (default) user

Returns:The automatic user

get_or_create(email)[source]

Get the existing user with a given email address or create an unstored one

Parameters:email – The user’s email address
Returns:The corresponding user object
Raises:aiida.common.exceptions.MultipleObjectsError, aiida.common.exceptions.NotExistent
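A minimal sketch of the collection in use (the email address and names are illustrative):

from aiida.orm import construct_backend

backend = construct_backend()
user = backend.users.get_or_create(email='jane.doe@example.com')
if not user.is_stored:
    user.first_name = 'Jane'
    user.last_name = 'Doe'
    user.store()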

class aiida.orm.AuthInfo(backend)[source]

Bases: aiida.orm.backend.CollectionEntry

Base class to map a DbAuthInfo, that contains computer configuration specific to a given user (authorization info and other metadata, like how often to check on a given computer etc.)

__abstractmethods__ = frozenset(['enabled', 'is_stored', 'computer', 'set_metadata', 'user', 'get_metadata', 'get_auth_params', 'id', 'set_auth_params'])
__module__ = 'aiida.orm.authinfo'
__str__() <==> str(x)[source]
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 40
_abc_registry = <_weakrefset.WeakSet object>
computer
enabled

Is the computer enabled for this user?

Returns:Boolean
get_auth_params()[source]

Get the dictionary of auth_params

Returns:a dictionary
get_metadata()[source]

Get the metadata dictionary

Returns:a dictionary
get_transport()[source]

Return a configured transport to connect to the computer.

get_workdir()[source]

Get the workdir; defaults to the value of the corresponding computer, if not explicitly set

Returns:a string
id

Return the ID in the DB.

is_stored

Is it already stored or not?

Returns:Boolean
pk()[source]

Return the primary key in the DB.

set_auth_params(auth_params)[source]

Set the dictionary of auth_params

Parameters:auth_params – a dictionary with the new auth_params
set_metadata(metadata)[source]

Replace the metadata dictionary in the DB with the provided dictionary

user
class aiida.orm.AuthInfoCollection(backend)[source]

Bases: aiida.orm.backend.Collection

The collection of AuthInfo entries.

__abstractmethods__ = frozenset(['create', 'remove', 'get'])
__module__ = 'aiida.orm.authinfo'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 40
_abc_registry = <_weakrefset.WeakSet object>
create(computer, user)[source]

Create an AuthInfo given a computer and a user

Parameters:
  • computer – a Computer instance
  • user – a User instance
Returns:

an AuthInfo object associated with the given computer and user

get(computer, user)[source]

Return an AuthInfo given a computer and a user

Parameters:
  • computer – a Computer instance
  • user – a User instance
Returns:

an AuthInfo object associated with the given computer and user

Raises:
  • NotExistent – if the user is not configured to use the computer
  • sqlalchemy.orm.exc.MultipleResultsFound – if the user is configured more than once to use the computer; this should never happen
remove(authinfo_id)[source]

Remove an AuthInfo from the collection with the given id

Parameters:authinfo_id – The ID of the authinfo to delete
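Combined with Computer.get_transport() (documented below), a typical flow looks roughly like this; computer and user stand for stored, configured instances:

from aiida.orm import construct_backend

backend = construct_backend()
# raises NotExistent if the user is not configured for the computer
authinfo = backend.authinfos.get(computer, user)
transport = authinfo.get_transport()
with transport:
    print(transport.whoami())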

class aiida.orm.Computer(backend)[source]

Bases: aiida.orm.backend.CollectionEntry

Base class to map a node in the DB + its permanent repository counterpart.

Stores attributes starting with an underscore.

Caches files and attributes before the first save, and saves everything only on store(). After the call to store(), attributes cannot be changed.

Only after storing (or upon loading from uuid) can the metadata be modified, and in this case it is set directly on the db.

In the plugin, also set the _plugin_type_string, to be set in the DB in the ‘type’ field.

__abstractmethods__ = frozenset(['is_enabled', 'hostname', 'set_description', 'set_enabled_state', 'get_shebang', 'set_scheduler_type', 'set_workdir', 'set_transport_type', 'get_description', 'get_workdir', 'is_stored', 'get_calculations_on_computer', 'get_transport_params', '_get_metadata', 'set_transport_params', 'id', '_set_metadata', 'description', 'set_name', 'copy', 'uuid', 'get_scheduler_type', 'get_transport_type', 'get_name', 'name', 'set_hostname', 'set', 'full_text_info', 'store'])
__module__ = 'aiida.orm.computer'
__repr__() <==> repr(x)[source]
__str__() <==> str(x)[source]
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 35
_abc_registry = <_weakrefset.WeakSet object>
classmethod _append_text_validator(append_text)[source]

Validates the append text string.

classmethod _default_mpiprocs_per_machine_validator(def_cpus_per_machine)[source]

Validates the default number of CPUs per machine (node)

_del_property(k, raise_exception=True)[source]
classmethod _description_validator(description)[source]

Validates the description.

classmethod _enabled_state_validator(enabled_state)[source]

Validates the enabled state.

_get_metadata()[source]
_get_property(k, *args)[source]
classmethod _hostname_validator(hostname)[source]

Validates the hostname.

_logger = <logging.Logger object>
_mpirun_command_validator(mpirun_cmd)[source]

Validates the mpirun_command variable. MUST be called after properly checking for a valid scheduler.

classmethod _name_validator(name)[source]

Validates the name.

classmethod _prepend_text_validator(prepend_text)[source]

Validates the prepend text string.

classmethod _scheduler_type_validator(scheduler_type)[source]

Validates the scheduler type string.

_set_metadata(metadata_dict)[source]

Set the metadata.

_set_property(k, v)[source]
classmethod _transport_type_validator(transport_type)[source]

Validates the transport string.

classmethod _workdir_validator(workdir)[source]

Validates the workdir string.

copy()[source]

Return a copy of the current object to work with, not stored yet.

description
full_text_info

Return a (multiline) string with human-readable, detailed information on this computer.

Return type:str
get_append_text()[source]
get_authinfo(user)[source]

Return the aiida.orm.authinfo.AuthInfo instance for the given user on this computer, if the computer is configured for the given user.

Parameters:user – a User instance.
Returns:a AuthInfo instance
Raises:NotExistent – if the computer is not configured for the given user.
get_calculations_on_computer()[source]
get_default_mpiprocs_per_machine()[source]

Return the default number of CPUs per machine (node) for this computer, or None if it was not set.

get_description()[source]
get_hostname()[source]

Get this computer’s hostname

Return type:str

get_mpirun_command()[source]

Return the mpirun command. Must be a list of strings that will then be joined with spaces when submitting.

A sensible default that may be suitable in many cases is also provided.

get_name()[source]
get_prepend_text()[source]
get_scheduler()[source]
get_scheduler_type()[source]
static get_schema()[source]
Every node property contains:
  • display_name: display name of the property
  • help text: short help text of the property
  • is_foreign_key: whether the property is a foreign key to another type of node
  • type: type of the property, e.g. str, dict, int
Returns:schema of the computer
get_shebang()[source]
get_transport(user=None)[source]

Return a Transport class, configured with all correct parameters. The transport is returned closed: if you want to run any operation with it, you have to open it first (e.g., for an SSH transport, you have to open a connection). To do this you can call transport.open(), or simply run within a with statement:

transport = computer.get_transport()
with transport:
    print(transport.whoami())
Parameters:user – if None, try to obtain a transport for the default user. Otherwise, pass a valid User.
Returns:a (closed) Transport, already configured with the connection parameters to the supercomputer, as set up with verdi computer configure for the user specified via the user parameter.
get_transport_class()[source]
get_transport_params()[source]
get_transport_type()[source]
get_workdir()[source]

Get the working directory for this computer

Returns:The currently configured working directory
Return type:str

hostname
id

Return the ID in the DB.

is_enabled()[source]
is_stored

Is the computer stored?

Returns:True if stored, False otherwise
Return type:bool
is_user_configured(user)[source]
is_user_enabled(user)[source]
label

The computer label

logger
name
pk()[source]

Return the primary key in the DB.

set(**kwargs)[source]
set_append_text(val)[source]
set_default_mpiprocs_per_machine(def_cpus_per_machine)[source]

Set the default number of CPUs per machine (node) for this computer. Accepts None if you do not want to set this value.

set_description(val)[source]
set_enabled_state(enabled)[source]
set_hostname(val)[source]

Set the hostname of this computer

Parameters:val (str) – The new hostname

set_mpirun_command(val)[source]

Set the mpirun command. It must be a list of strings (you can use string.split() if you have a single, space-separated string).
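For example (a minimal sketch: computer stands for a Computer instance that can still be modified, and the {tot_num_mpiprocs} placeholder follows the scheduler-substitution convention used by AiiDA):

# a single space-separated string must be split into a list of strings
computer.set_mpirun_command('mpirun -np {tot_num_mpiprocs}'.split())
# equivalent to passing the list directly:
computer.set_mpirun_command(['mpirun', '-np', '{tot_num_mpiprocs}'])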

set_name(val)[source]
set_prepend_text(val)[source]
set_scheduler_type(val)[source]
set_shebang(val)[source]
Parameters:val (str) – A valid shebang line
set_transport_params(val)[source]
set_transport_type(val)[source]
set_workdir(val)[source]
store()[source]

Store the computer in the DB.

Unlike nodes, a computer can be stored again if its properties need to be changed (e.g. a new mpirun command, etc.)

uuid

Return the UUID in the DB.

validate()[source]

Check if the attributes and files retrieved from the DB are valid. Raise a ValidationError if something is wrong.

Must be able to work even before storing: therefore, use the get_attr and similar methods that automatically read either from the DB or from the internal attribute cache.

For the base class, this is always valid. Subclasses will reimplement this. In the subclass, always call the super().validate() method first!

class aiida.orm.ComputerCollection(backend)[source]

Bases: aiida.orm.backend.Collection

The collection of Computer entries.

__abstractmethods__ = frozenset(['create', 'list_names', 'delete'])
__module__ = 'aiida.orm.computer'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 35
_abc_registry = <_weakrefset.WeakSet object>
create(name, hostname, description='', transport_type='', scheduler_type='', workdir=None, enabled_state=True)[source]

Create a new computer

Returns:the newly created computer
Return type:aiida.orm.Computer
delete(id)[source]

Delete the computer with the given id

get(id=None, name=None, uuid=None)[source]

Get a computer from one of its unique identifiers

Parameters:
  • id – the computer’s id
  • name (str) – the name of the computer
  • uuid – the uuid of the computer
Returns:

the corresponding computer

Return type:

aiida.orm.Computer

list_names()[source]

Return a list with all the names of the computers in the DB.
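A minimal sketch that exercises the collection; the entry point names 'local' and 'direct' are assumed to be available transport and scheduler plugins:

from aiida.orm import construct_backend

backend = construct_backend()
computer = backend.computers.create(
    name='localhost',
    hostname='localhost',
    description='my local machine',
    transport_type='local',
    scheduler_type='direct',
    workdir='/tmp/aiida_run/',
)
computer.store()

assert 'localhost' in backend.computers.list_names()
same = backend.computers.get(name='localhost')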

class aiida.orm.Backend[source]

Bases: object

The public interface that defines a backend factory that creates backend specific concrete objects.

__abstractmethods__ = frozenset(['computers', 'authinfos', 'users', 'query_manager', 'logs'])
__dict__ = dict_proxy({'__module__': 'aiida.orm.backend', 'computers': <abc.abstractproperty object>, 'logs': <abc.abstractproperty object>, '_abc_negative_cache': <_weakrefset.WeakSet object>, 'authinfos': <abc.abstractproperty object>, '__dict__': <attribute '__dict__' of 'Backend' objects>, '__weakref__': <attribute '__weakref__' of 'Backend' objects>, 'users': <abc.abstractproperty object>, '_abc_cache': <_weakrefset.WeakSet object>, '__abstractmethods__': frozenset(['computers', 'authinfos', 'users', 'query_manager', 'logs']), 'query_manager': <abc.abstractproperty object>, '_abc_negative_cache_version': 35, '_abc_registry': <_weakrefset.WeakSet object>, '__doc__': 'The public interface that defines a backend factory that creates backend specific concrete objects.'})
__module__ = 'aiida.orm.backend'
__weakref__

list of weak references to the object (if defined)

_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 35
_abc_registry = <_weakrefset.WeakSet object>
authinfos

Return the collection of authorisation information objects

Returns:the authinfo collection
Return type:aiida.orm.authinfo.AuthInfoCollection
computers

Return the collection of computer objects

Returns:the computers collection
Return type:aiida.orm.computer.ComputerCollection
logs

Return the collection of log entries

Returns:the log collection
Return type:aiida.orm.log.Log
query_manager

Return the query manager for the objects stored in the backend

Returns:The query manager
Return type:aiida.backends.general.abstractqueries.AbstractQueryManager
users

Return the collection of users

Returns:the users collection
Return type:aiida.orm.user.UserCollection
aiida.orm.construct_backend(backend_type=None)[source]

Construct a concrete backend instance based on the backend_type or use the global backend value if not specified.

Parameters:backend_type – get a backend instance based on the specified type (or default)
Returns:aiida.orm.backend.Backend
class aiida.orm.Collection(backend)[source]

Bases: object

Container class that represents a collection of entries of a particular backend entity.

__dict__ = dict_proxy({'__module__': 'aiida.orm.backend', '__dict__': <attribute '__dict__' of 'Collection' objects>, '__weakref__': <attribute '__weakref__' of 'Collection' objects>, '__doc__': 'Container class that represents a collection of entries of a particular backend entity.', '__init__': <function __init__>, 'backend': <property object>})
__init__(backend)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.orm.backend'
__weakref__

list of weak references to the object (if defined)

backend

Return the backend.

class aiida.orm.CollectionEntry(backend)[source]

Bases: object

Class that represents an entry within a collection of entries of a particular backend entity.

__dict__ = dict_proxy({'__module__': 'aiida.orm.backend', '__dict__': <attribute '__dict__' of 'CollectionEntry' objects>, '__weakref__': <attribute '__weakref__' of 'CollectionEntry' objects>, '__doc__': 'Class that represents an entry within a collection of entries of a particular backend entity.', '__init__': <function __init__>, 'backend': <property object>})
__init__(backend)[source]
Parameters:backend (aiida.orm.backend.Backend) – The backend instance
__module__ = 'aiida.orm.backend'
__weakref__

list of weak references to the object (if defined)

backend

Return the backend.

Submodules

class aiida.orm.authinfo.AuthInfo(backend)[source]

Bases: aiida.orm.backend.CollectionEntry

Base class to map a DbAuthInfo, that contains computer configuration specific to a given user (authorization info and other metadata, like how often to check on a given computer etc.)

__abstractmethods__ = frozenset(['enabled', 'is_stored', 'computer', 'set_metadata', 'user', 'get_metadata', 'get_auth_params', 'id', 'set_auth_params'])
__module__ = 'aiida.orm.authinfo'
__str__() <==> str(x)[source]
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 40
_abc_registry = <_weakrefset.WeakSet object>
computer
enabled

Is the computer enabled for this user?

Returns:Boolean
get_auth_params()[source]

Get the dictionary of auth_params

Returns:a dictionary
get_metadata()[source]

Get the metadata dictionary

Returns:a dictionary
get_transport()[source]

Return a configured transport to connect to the computer.

get_workdir()[source]

Get the workdir; defaults to the value of the corresponding computer, if not explicitly set

Returns:a string
id

Return the ID in the DB.

is_stored

Is it already stored or not?

Returns:Boolean
pk()[source]

Return the primary key in the DB.

set_auth_params(auth_params)[source]

Set the dictionary of auth_params

Parameters:auth_params – a dictionary with the new auth_params
set_metadata(metadata)[source]

Replace the metadata dictionary in the DB with the provided dictionary

user
class aiida.orm.authinfo.AuthInfoCollection(backend)[source]

Bases: aiida.orm.backend.Collection

The collection of AuthInfo entries.

__abstractmethods__ = frozenset(['create', 'remove', 'get'])
__module__ = 'aiida.orm.authinfo'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 40
_abc_registry = <_weakrefset.WeakSet object>
create(computer, user)[source]

Create an AuthInfo given a computer and a user

Parameters:
  • computer – a Computer instance
  • user – a User instance
Returns:

an AuthInfo object associated with the given computer and user

get(computer, user)[source]

Return an AuthInfo given a computer and a user

Parameters:
  • computer – a Computer instance
  • user – a User instance
Returns:

an AuthInfo object associated with the given computer and user

Raises:
  • NotExistent – if the user is not configured to use the computer
  • sqlalchemy.orm.exc.MultipleResultsFound – if the user is configured more than once to use the computer; this should never happen
remove(authinfo_id)[source]

Remove an AuthInfo from the collection with the given id

Parameters:authinfo_id – The ID of the authinfo to delete

class aiida.orm.autogroup.Autogroup[source]

Bases: object

An object used for the autogrouping of objects. The autogrouping is checked by the Node.store() method. In store(), the Node will check whether current_autogroup is not None; if so, it will call Autogroup.is_to_be_grouped and decide whether to put the node in a group. Such autogroups are going to be of the VERDIAUTOGROUP_TYPE.

The exclude/include lists can have the value ‘all’ if you want to include/exclude all classes. Otherwise, they are lists of strings like calculation.quantumespresso.pw, data.array.kpoints, …, i.e. a string identifying the base class, followed by the path to the class as used in the Calculation/Data factories.

__dict__ = dict_proxy({'get_exclude_with_subclasses': <function get_exclude_with_subclasses>, 'set_exclude': <function set_exclude>, '__module__': 'aiida.orm.autogroup', 'get_group_name': <function get_group_name>, 'set_include_with_subclasses': <function set_include_with_subclasses>, '_validate': <function _validate>, 'get_include': <function get_include>, 'set_include': <function set_include>, 'set_group_name': <function set_group_name>, 'is_to_be_grouped': <function is_to_be_grouped>, 'get_exclude': <function get_exclude>, '__dict__': <attribute '__dict__' of 'Autogroup' objects>, 'set_exclude_with_subclasses': <function set_exclude_with_subclasses>, '__weakref__': <attribute '__weakref__' of 'Autogroup' objects>, '__doc__': "\n An object used for the autogrouping of objects.\n The autogrouping is checked by the Node.store() method.\n In the store(), the Node will check if current_autogroup is != None.\n If so, it will call Autogroup.is_to_be_grouped, and decide whether to put it in a group.\n Such autogroups are going to be of the VERDIAUTOGROUP_TYPE.\n\n The exclude/include lists, can have values 'all' if you want to include/exclude all classes.\n Otherwise, they are lists of strings like: calculation.quantumespresso.pw, data.array.kpoints, ...\n i.e.: a string identifying the base class, than the path to the class as in Calculation/Data -Factories\n ", 'get_include_with_subclasses': <function get_include_with_subclasses>})
__module__ = 'aiida.orm.autogroup'
__weakref__

list of weak references to the object (if defined)

_validate(param, is_exact=True)[source]

Used internally to verify the sanity of the exclude and include lists

get_exclude()[source]

Return the list of classes to exclude from autogrouping

get_exclude_with_subclasses()[source]

Return the list of classes to exclude from autogrouping. Will also exclude their derived subclasses

get_group_name()[source]

Get the name of the group. If no group name was set, it will set a default one by itself.

get_include()[source]

Return the list of classes to include in the autogrouping

get_include_with_subclasses()[source]

Return the list of classes to include in the autogrouping. Will also include their derived subclasses

is_to_be_grouped(the_class)[source]

Return whether the given class has to be included in the autogroup according to include/exclude list

Returns:True if the_class is to be included in the autogroup
Return type:bool
set_exclude(exclude)[source]

Set the list of classes to exclude from autogrouping.

set_exclude_with_subclasses(exclude)[source]

Set the list of classes to exclude from autogrouping. Will also exclude their derived subclasses

set_group_name(gname)[source]

Set the name of the group to be created

set_include(include)[source]

Set the list of classes to include in the autogrouping.

set_include_with_subclasses(include)[source]

Set the list of classes to include in the autogrouping. Will also include their derived subclasses.
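A minimal sketch of configuring an Autogroup; the class strings follow the pattern described in the class docstring and are illustrative, and current_autogroup is assumed to be the module-level variable checked by Node.store():

from aiida.orm import autogroup

ag = autogroup.Autogroup()
ag.set_group_name('my_autogroup')
ag.set_include(['data'])
ag.set_exclude(['calculation.job'])
# assumption: Node.store() consults this module-level variable
autogroup.current_autogroup = ag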

class aiida.orm.backend.Backend[source]

Bases: object

The public interface that defines a backend factory that creates backend specific concrete objects.

__abstractmethods__ = frozenset(['computers', 'authinfos', 'users', 'query_manager', 'logs'])
__dict__ = dict_proxy({'__module__': 'aiida.orm.backend', 'computers': <abc.abstractproperty object>, 'logs': <abc.abstractproperty object>, '_abc_negative_cache': <_weakrefset.WeakSet object>, 'authinfos': <abc.abstractproperty object>, '__dict__': <attribute '__dict__' of 'Backend' objects>, '__weakref__': <attribute '__weakref__' of 'Backend' objects>, 'users': <abc.abstractproperty object>, '_abc_cache': <_weakrefset.WeakSet object>, '__abstractmethods__': frozenset(['computers', 'authinfos', 'users', 'query_manager', 'logs']), 'query_manager': <abc.abstractproperty object>, '_abc_negative_cache_version': 35, '_abc_registry': <_weakrefset.WeakSet object>, '__doc__': 'The public interface that defines a backend factory that creates backend specific concrete objects.'})
__module__ = 'aiida.orm.backend'
__weakref__

list of weak references to the object (if defined)

_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 35
_abc_registry = <_weakrefset.WeakSet object>
authinfos

Return the collection of authorisation information objects

Returns:the authinfo collection
Return type:aiida.orm.authinfo.AuthInfoCollection
computers

Return the collection of computer objects

Returns:the computers collection
Return type:aiida.orm.computer.ComputerCollection
logs

Return the collection of log entries

Returns:the log collection
Return type:aiida.orm.log.Log
query_manager

Return the query manager for the objects stored in the backend

Returns:The query manager
Return type:aiida.backends.general.abstractqueries.AbstractQueryManager
users

Return the collection of users

Returns:the users collection
Return type:aiida.orm.user.UserCollection
aiida.orm.backend.construct_backend(backend_type=None)[source]

Construct a concrete backend instance based on the backend_type or use the global backend value if not specified.

Parameters:backend_type – get a backend instance based on the specified type (or default)
Returns:aiida.orm.backend.Backend
class aiida.orm.backend.Collection(backend)[source]

Bases: object

Container class that represents a collection of entries of a particular backend entity.

__dict__ = dict_proxy({'__module__': 'aiida.orm.backend', '__dict__': <attribute '__dict__' of 'Collection' objects>, '__weakref__': <attribute '__weakref__' of 'Collection' objects>, '__doc__': 'Container class that represents a collection of entries of a particular backend entity.', '__init__': <function __init__>, 'backend': <property object>})
__init__(backend)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.orm.backend'
__weakref__

list of weak references to the object (if defined)

backend

Return the backend.

class aiida.orm.backend.CollectionEntry(backend)[source]

Bases: object

Class that represents an entry within a collection of entries of a particular backend entity.

__dict__ = dict_proxy({'__module__': 'aiida.orm.backend', '__dict__': <attribute '__dict__' of 'CollectionEntry' objects>, '__weakref__': <attribute '__weakref__' of 'CollectionEntry' objects>, '__doc__': 'Class that represents an entry within a collection of entries of a particular backend entity.', '__init__': <function __init__>, 'backend': <property object>})
__init__(backend)[source]
Parameters:backend (aiida.orm.backend.Backend) – The backend instance
__module__ = 'aiida.orm.backend'
__weakref__

list of weak references to the object (if defined)

backend

Return the backend.

class aiida.orm.computer.Computer(backend)[source]

Bases: aiida.orm.backend.CollectionEntry

Base class to map a node in the DB + its permanent repository counterpart.

Stores attributes starting with an underscore.

Caches files and attributes before the first save, and saves everything only on store(). After the call to store(), attributes cannot be changed.

Only after storing (or upon loading from uuid) can the metadata be modified, and in this case it is set directly on the db.

In the plugin, also set the _plugin_type_string, to be set in the DB in the ‘type’ field.

__abstractmethods__ = frozenset(['is_enabled', 'hostname', 'set_description', 'set_enabled_state', 'get_shebang', 'set_scheduler_type', 'set_workdir', 'set_transport_type', 'get_description', 'get_workdir', 'is_stored', 'get_calculations_on_computer', 'get_transport_params', '_get_metadata', 'set_transport_params', 'id', '_set_metadata', 'description', 'set_name', 'copy', 'uuid', 'get_scheduler_type', 'get_transport_type', 'get_name', 'name', 'set_hostname', 'set', 'full_text_info', 'store'])
__module__ = 'aiida.orm.computer'
__repr__() <==> repr(x)[source]
__str__() <==> str(x)[source]
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 35
_abc_registry = <_weakrefset.WeakSet object>
classmethod _append_text_validator(append_text)[source]

Validates the append text string.

classmethod _default_mpiprocs_per_machine_validator(def_cpus_per_machine)[source]

Validates the default number of CPUs per machine (node)

_del_property(k, raise_exception=True)[source]
classmethod _description_validator(description)[source]

Validates the description.

classmethod _enabled_state_validator(enabled_state)[source]

Validates the enabled state.

_get_metadata()[source]
_get_property(k, *args)[source]
classmethod _hostname_validator(hostname)[source]

Validates the hostname.

_logger = <logging.Logger object>
_mpirun_command_validator(mpirun_cmd)[source]

Validates the mpirun_command variable. MUST be called after properly checking for a valid scheduler.

classmethod _name_validator(name)[source]

Validates the name.

classmethod _prepend_text_validator(prepend_text)[source]

Validates the prepend text string.

classmethod _scheduler_type_validator(scheduler_type)[source]

Validates the scheduler type string.

_set_metadata(metadata_dict)[source]

Set the metadata.

_set_property(k, v)[source]
classmethod _transport_type_validator(transport_type)[source]

Validates the transport string.

classmethod _workdir_validator(workdir)[source]

Validates the workdir string.

copy()[source]

Return a copy of the current object to work with, not stored yet.

description
full_text_info

Return a (multiline) string with human-readable, detailed information on this computer.

Return type:str
get_append_text()[source]
get_authinfo(user)[source]

Return the aiida.orm.authinfo.AuthInfo instance for the given user on this computer, if the computer is configured for the given user.

Parameters:user – a User instance.
Returns:a AuthInfo instance
Raises:NotExistent – if the computer is not configured for the given user.
get_calculations_on_computer()[source]
get_default_mpiprocs_per_machine()[source]

Return the default number of CPUs per machine (node) for this computer, or None if it was not set.

get_description()[source]
get_hostname()[source]

Get this computer’s hostname

Return type:str

get_mpirun_command()[source]

Return the mpirun command. Must be a list of strings that will then be joined with spaces when submitting.

A sensible default that may be suitable in many cases is also provided.

get_name()[source]
get_prepend_text()[source]
get_scheduler()[source]
get_scheduler_type()[source]
static get_schema()[source]
Every node property contains:
  • display_name: display name of the property
  • help text: short help text of the property
  • is_foreign_key: whether the property is a foreign key to another type of node
  • type: type of the property, e.g. str, dict, int
Returns:schema of the computer
get_shebang()[source]
get_transport(user=None)[source]

Return a Transport class, configured with all correct parameters. The transport is returned closed: if you want to run any operation with it, you have to open it first (e.g., for an SSH transport, you have to open a connection). To do this you can call transport.open(), or simply run within a with statement:

transport = computer.get_transport()
with transport:
    print(transport.whoami())
Parameters:user – if None, try to obtain a transport for the default user. Otherwise, pass a valid User.
Returns:a (closed) Transport, already configured with the connection parameters to the supercomputer, as set up with verdi computer configure for the user specified via the user parameter.
get_transport_class()[source]
get_transport_params()[source]
get_transport_type()[source]
get_workdir()[source]

Get the working directory for this computer

Returns:The currently configured working directory
Return type:str

hostname
id

Return the ID in the DB.

is_enabled()[source]
is_stored

Is the computer stored?

Returns:True if stored, False otherwise
Return type:bool
is_user_configured(user)[source]
is_user_enabled(user)[source]
label

The computer label

logger
name
pk()[source]

Return the primary key in the DB.

set(**kwargs)[source]
set_append_text(val)[source]
set_default_mpiprocs_per_machine(def_cpus_per_machine)[source]

Set the default number of CPUs per machine (node) for this computer. Accepts None if you do not want to set this value.

set_description(val)[source]
set_enabled_state(enabled)[source]
set_hostname(val)[source]

Set the hostname of this computer

Parameters:val (str) – The new hostname

set_mpirun_command(val)[source]

Set the mpirun command. It must be a list of strings (you can use string.split() if you have a single, space-separated string).

set_name(val)[source]
set_prepend_text(val)[source]
set_scheduler_type(val)[source]
set_shebang(val)[source]
Parameters:val (str) – A valid shebang line
set_transport_params(val)[source]
set_transport_type(val)[source]
set_workdir(val)[source]
store()[source]

Store the computer in the DB.

Unlike nodes, a computer can be stored again if its properties need to be changed (e.g. a new mpirun command, etc.)

uuid

Return the UUID in the DB.

validate()[source]

Check if the attributes and files retrieved from the DB are valid. Raise a ValidationError if something is wrong.

Must be able to work even before storing: therefore, use the get_attr and similar methods that automatically read either from the DB or from the internal attribute cache.

For the base class, this is always valid. Subclasses will reimplement this. In the subclass, always call the super().validate() method first!

class aiida.orm.computer.ComputerCollection(backend)[source]

Bases: aiida.orm.backend.Collection

The collection of Computer entries.

__abstractmethods__ = frozenset(['create', 'list_names', 'delete'])
__module__ = 'aiida.orm.computer'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 35
_abc_registry = <_weakrefset.WeakSet object>
create(name, hostname, description='', transport_type='', scheduler_type='', workdir=None, enabled_state=True)[source]

Create a new computer

Returns:the newly created computer
Return type:aiida.orm.Computer
delete(id)[source]

Delete the computer with the given id

get(id=None, name=None, uuid=None)[source]

Get a computer from one of its unique identifiers

Parameters:
  • id – the computer’s id
  • name (str) – the name of the computer
  • uuid – the uuid of the computer
Returns:

the corresponding computer

Return type:

aiida.orm.Computer

list_names()[source]

Return a list with all the names of the computers in the DB.

class aiida.orm.importexport.HTMLGetLinksParser(filter_extension=None)[source]

Bases: HTMLParser.HTMLParser

__init__(filter_extension=None)[source]

If a filter_extension is passed, only links with extension matching the given one will be returned.

__module__ = 'aiida.orm.importexport'

get_links()[source]

Return the links that were found during the parsing phase.

handle_starttag(tag, attrs)[source]

Store the urls encountered, if they match the request.

class aiida.orm.importexport.MyWritingZipFile(zipfile, fname)[source]

Bases: object

__dict__ = dict_proxy({'write': <function write>, '__module__': 'aiida.orm.importexport', '__weakref__': <attribute '__weakref__' of 'MyWritingZipFile' objects>, '__exit__': <function __exit__>, '__dict__': <attribute '__dict__' of 'MyWritingZipFile' objects>, 'close': <function close>, '__enter__': <function __enter__>, 'open': <function open>, '__doc__': None, '__init__': <function __init__>})
__enter__()[source]
__exit__(type, value, traceback)[source]
__init__(zipfile, fname)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

__module__ = 'aiida.orm.importexport'
__weakref__

list of weak references to the object (if defined)

close()[source]
open()[source]
write(data)[source]
class aiida.orm.importexport.ZipFolder(zipfolder_or_fname, mode=None, subfolder='.', use_compression=True, allowZip64=True)[source]

Bases: object

To improve: if zipfile is closed, do something (e.g. add explicit open method, rename open to openfile, set _zipfile to None, …)

__dict__ = dict_proxy({'__module__': 'aiida.orm.importexport', '__exit__': <function __exit__>, 'open': <function open>, '__enter__': <function __enter__>, '_get_internal_path': <function _get_internal_path>, 'pwd': <property object>, '__weakref__': <attribute '__weakref__' of 'ZipFolder' objects>, '__init__': <function __init__>, '__dict__': <attribute '__dict__' of 'ZipFolder' objects>, 'close': <function close>, 'insert_path': <function insert_path>, '__doc__': '\n To improve: if zipfile is closed, do something\n (e.g. add explicit open method, rename open to openfile,\n set _zipfile to None, ...)\n ', 'get_subfolder': <function get_subfolder>})
__enter__()[source]
__exit__(type, value, traceback)[source]
__init__(zipfolder_or_fname, mode=None, subfolder='.', use_compression=True, allowZip64=True)[source]
Parameters:
  • zipfolder_or_fname – either another ZipFolder instance, of which you want to get a subfolder, or a filename to create.
  • mode – the file mode; see the zipfile.ZipFile docs for valid strings. Note: can be specified only if zipfolder_or_fname is a string (the filename to generate)
  • subfolder – the subfolder that specifies the “current working directory” in the zip file. If zipfolder_or_fname is a ZipFolder, subfolder is a relative path from zipfolder_or_fname.subfolder
  • use_compression – either True, to compress files in the Zip, or False if you just want to pack them together without compressing. It is ignored if zipfolder_or_fname is a ZipFolder instance.
__module__ = 'aiida.orm.importexport'
__weakref__

list of weak references to the object (if defined)

_get_internal_path(filename)[source]
close()[source]
get_subfolder(subfolder, create=False, reset_limit=False)[source]
insert_path(src, dest_name=None, overwrite=True)[source]
open(fname, mode='r')[source]
pwd
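A minimal sketch of writing into a ZipFolder (the file paths are illustrative):

from aiida.orm.importexport import ZipFolder

with ZipFolder('archive.zip', mode='w', use_compression=True) as folder:
    subfolder = folder.get_subfolder('nodes', create=True)
    subfolder.insert_path('/tmp/somefile.txt', dest_name='somefile.txt')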
aiida.orm.importexport.check_licences(node_licenses, allowed_licenses, forbidden_licenses)[source]
aiida.orm.importexport.deserialize_attributes(attributes_data, conversion_data)[source]
aiida.orm.importexport.deserialize_field(k, v, fields_info, import_unique_ids_mappings, foreign_ids_reverse_mappings)[source]
aiida.orm.importexport.export(what, outfile='export_data.aiida.tar.gz', overwrite=False, silent=False, **kwargs)[source]

Export the entries passed in the ‘what’ list to a file tree.

Todo:limit the export to finished or failed calculations.
Parameters:
  • what – a list of entity instances; they can belong to different models/entities.
  • input_forward – Follow forward INPUT links (recursively) when calculating the node set to export.
  • create_reversed – Follow reversed CREATE links (recursively) when calculating the node set to export.
  • return_reversed – Follow reversed RETURN links (recursively) when calculating the node set to export.
  • call_reversed – Follow reversed CALL links (recursively) when calculating the node set to export.
  • allowed_licenses – a list or a function. If a list, then checks whether all licenses of Data nodes are in the list. If a function, then calls the function for the licenses of Data nodes, expecting True if the license is allowed, False otherwise.
  • forbidden_licenses – a list or a function. If a list, then checks whether all licenses of Data nodes are in the list. If a function, then calls the function for the licenses of Data nodes, expecting True if the license is allowed, False otherwise.
  • outfile – the filename of the file on which to export
  • overwrite – if True, overwrite the output file without asking; if False, raise an IOError in this case.
  • silent – suppress debug print
Raises:IOError – if overwrite==False and the filename already exists.
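A minimal sketch of a call (node1 and node2 stand for stored entity instances):

from aiida.orm.importexport import export

# node1 and node2 are assumed: already-stored nodes to be exported
export([node1, node2], outfile='export_data.aiida.tar.gz', overwrite=True, silent=True)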
aiida.orm.importexport.export_tree(what, folder, allowed_licenses=None, forbidden_licenses=None, silent=False, input_forward=False, create_reversed=True, return_reversed=False, call_reversed=False, **kwargs)[source]

Export the entries passed in the ‘what’ list to a file tree.

Todo:limit the export to finished or failed calculations.
Parameters:
  • what – a list of entity instances; they can belong to different models/entities.
  • folder – a Folder object
  • input_forward – Follow forward INPUT links (recursively) when calculating the node set to export.
  • create_reversed – Follow reversed CREATE links (recursively) when calculating the node set to export.
  • return_reversed – Follow reversed RETURN links (recursively) when calculating the node set to export.
  • call_reversed – Follow reversed CALL links (recursively) when calculating the node set to export.
  • allowed_licenses – a list or a function. If a list, then checks whether all licenses of Data nodes are in the list. If a function, then calls the function for the licenses of Data nodes, expecting True if the license is allowed, False otherwise.
  • forbidden_licenses – a list or a function. If a list, then checks whether all licenses of Data nodes are in the list. If a function, then calls the function for the licenses of Data nodes, expecting True if the license is allowed, False otherwise.
  • silent – suppress debug prints
Raises:LicensingException – if any node is licensed under a forbidden license

aiida.orm.importexport.export_zip(what, outfile='testzip', overwrite=False, silent=False, use_compression=True, **kwargs)[source]
aiida.orm.importexport.fill_in_query(partial_query, originating_entity_str, current_entity_str, tag_suffixes=[], entity_separator='_')[source]

This function recursively constructs the QueryBuilder queries that are needed for the SQLA export function. To construct such queries, the relationship dictionary is consulted (which shows how to reference different AiiDA entities in QueryBuilder). To find the dependencies of the relationships of the exported data, get_all_fields_info_sqla (which describes the exported schema and its dependencies) is consulted.

aiida.orm.importexport.get_all_fields_info()[source]

This method returns a description of the field names that should be used to describe the entity properties. Apart from listing the fields per property, it also shows the dependencies among different entities (and on which fields). It also returns the unique identifiers used per entity.

aiida.orm.importexport.get_all_parents_dj(node_pks)[source]

Get all the parents of given nodes

Parameters:node_pks – one node pk or an iterable of node pks
Returns:a list of aiida objects with all the parents of the nodes

aiida.orm.importexport.get_valid_import_links(url)[source]

Open the given URL, parse the HTML and return a list of valid links where the link file has a .aiida extension.

aiida.orm.importexport.import_data(in_path, ignore_unknown_nodes=False, silent=False)[source]
aiida.orm.importexport.import_data_dj(in_path, ignore_unknown_nodes=False, silent=False)[source]

Import exported AiiDA environment to the AiiDA database. If the ‘in_path’ is a folder, calls export_tree; otherwise, tries to detect the compression format (zip, tar.gz, tar.bz2, …) and calls the correct function.

Parameters:in_path – the path to a file or folder that can be imported in AiiDA
aiida.orm.importexport.import_data_sqla(in_path, ignore_unknown_nodes=False, silent=False)[source]

Import exported AiiDA environment to the AiiDA database. If the ‘in_path’ is a folder, calls export_tree; otherwise, tries to detect the compression format (zip, tar.gz, tar.bz2, …) and calls the correct function.

Parameters:in_path – the path to a file or folder that can be imported in AiiDA
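A minimal sketch of an import (the archive path is illustrative; judging from the two backend-specific variants above, import_data is the backend-agnostic entry point):

from aiida.orm.importexport import import_data

import_data('/path/to/export_data.aiida.tar.gz', silent=True)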
aiida.orm.importexport.schema_to_entity_names(class_string)[source]

Mapping from class paths to entity names (used by the SQLA import/export). This could have been written much more simply if it were only for SQLA, but there is an attempt to make the SQLA import/export code usable for Django too.

aiida.orm.importexport.serialize_dict(datadict, remove_fields=[], rename_fields={}, track_conversion=False)[source]

Serialize the dict using the serialize_field function to serialize each field.

Parameters:
  • remove_fields

    a list of strings. If a field with key inside the remove_fields list is found, it is removed from the dict.

    This is only used at level-0, no removal is possible at deeper levels.

  • rename_fields

    a dictionary in the format {"oldname": "newname"}.

    If the “oldname” key is found, it is replaced with the “newname” string in the output dictionary.

    This is only used at level-0, no renaming is possible at deeper levels.

  • track_conversion – if True, a tuple is returned, where the first element is the serialized dictionary, and the second element is a dictionary with the information on the serialized fields.
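An illustrative sketch of the level-0 removal and renaming (the values here are plain strings, so serialization is expected to leave them unchanged):

from aiida.orm.importexport import serialize_dict

data = {'name': 'pw', 'password': 'secret', 'nested': {'name': 'kept'}}
result = serialize_dict(data, remove_fields=['password'], rename_fields={'name': 'label'})
# expected shape: {'label': 'pw', 'nested': {'name': 'kept'}}
# (removal and renaming apply at level-0 only)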
aiida.orm.importexport.serialize_field(data, track_conversion=False)[source]

Serialize a single field.

Todo:Generalize such that the proper function is selected also during import
aiida.orm.importexport.validate_uuid(given_uuid)[source]

A simple check for the UUID validity.

class aiida.orm.log.Log(backend)[source]

Bases: aiida.orm.backend.CollectionEntry

__abstractmethods__ = frozenset(['objpk', 'loggername', 'objname', 'time', 'message', 'save', 'id', 'levelname', 'metadata'])
__module__ = 'aiida.orm.log'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 40
_abc_registry = <_weakrefset.WeakSet object>
id

Get the primary key of the entry

Returns:The entry primary key
Return type:int
levelname

The name of the log level

Returns:The entry log level name
Return type:basestring
loggername

The name of the logger that created this entry

Returns:The entry loggername
Return type:basestring
message

Get the message corresponding to the entry

Returns:The entry message
Return type:basestring
metadata

Get the metadata corresponding to the entry

Returns:The entry metadata
Return type:json.json
objname

Get the name of the object that created the log entry

Returns:The entry object name
Return type:basestring
objpk

Get the id of the object that created the log entry

Returns:The id of the object that created the log entry
Return type:int
save()[source]

Persist the log entry to the database

Returns:reference to self
Return type:aiida.orm.log.Log
time

Get the time corresponding to the entry

Returns:The entry timestamp
Return type:datetime.datetime
class aiida.orm.log.LogCollection(backend)[source]

Bases: aiida.orm.backend.Collection

This class represents the collection of logs and can be used to create and retrieve logs.

__abstractmethods__ = frozenset(['create_entry', 'delete_many', 'find'])
__module__ = 'aiida.orm.log'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 40
_abc_registry = <_weakrefset.WeakSet object>
create_entry(time, loggername, levelname, objname, objpk=None, message='', metadata=None)[source]

Create a log entry.

Parameters:
  • time (datetime.datetime) – The time of creation for the entry
  • loggername (basestring) – The name of the logger that generated the entry
  • levelname (basestring) – The log level
  • objname (basestring) – The object name (if any) that emitted the entry
  • objpk (int) – The object id that emitted the entry
  • message (basestring) – The message to log
  • metadata (dict) – Any (optional) metadata, should be JSON serializable dictionary
Returns:

An object implementing the log entry interface

Return type:

aiida.orm.log.Log

create_entry_from_record(record)[source]

Helper function to create a log entry from a record created by the Python logging library

Parameters:record (logging.record) – The record created by the logging module
Returns:An object implementing the log entry interface
Return type:aiida.orm.log.Log
delete_many(filter)[source]

Delete all the log entries matching the given filter

find(filter_by=None, order_by=None, limit=None)[source]

Find all entries in the Log collection that conform to the filter and optionally sort and/or apply a limit.

Parameters:
  • filter_by (dict) – A dictionary of key value pairs where the entries have to match all the criteria (i.e. an AND operation)
  • order_by (list) – A list of tuples of type OrderSpecifier
  • limit – The number of entries to retrieve
Returns:

An iterable of the matching entries
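A minimal sketch of querying the collection (the filter value is illustrative, and the 'desc' direction string is an assumption; OrderSpecifier is documented below):

from aiida.orm import construct_backend
from aiida.orm.log import OrderSpecifier

backend = construct_backend()
entries = backend.logs.find(
    filter_by={'objpk': 1234},                   # illustrative object pk
    order_by=[OrderSpecifier('time', 'desc')],   # direction string assumed
    limit=10,
)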

class aiida.orm.log.OrderSpecifier(field, direction)

Bases: tuple

__dict__ = dict_proxy({'__module__': 'aiida.orm.log', '__getstate__': <function __getstate__>, '__new__': <staticmethod object>, '_replace': <function _replace>, '_make': <classmethod object>, 'direction': <property object>, 'field': <property object>, '__slots__': (), '_asdict': <function _asdict>, '__repr__': <function __repr__>, '__dict__': <property object>, '_fields': ('field', 'direction'), '__getnewargs__': <function __getnewargs__>, '__doc__': 'OrderSpecifier(field, direction)'})
__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__module__ = 'aiida.orm.log'
static __new__(_cls, field, direction)

Create new instance of OrderSpecifier(field, direction)

__repr__()

Return a nicely formatted representation string

__slots__ = ()
_asdict()

Return a new OrderedDict which maps field names to their values

_fields = ('field', 'direction')
classmethod _make(iterable, new=<built-in method __new__ of type object at 0x906d60>, len=<built-in function len>)

Make a new OrderSpecifier object from a sequence or iterable

_replace(**kwds)

Return a new OrderSpecifier object replacing specified fields with new values

direction

Alias for field number 1

field

Alias for field number 0

class aiida.orm.mixins.FunctionCalculationMixin[source]

Bases: object

This mixin should be used for Calculation subclasses that are used to record the execution of a python function. For example the Calculation nodes that are used for a function that was wrapped by the workfunction or make_inline function decorators. The store_source_info method can then be called with the wrapped function to store information about that function in the calculation node through the inspect module. Various property getters are defined to later retrieve that information from the node

FUNCTION_NAMESPACE_KEY = 'function_namespace'
FUNCTION_NAME_KEY = 'function_name'
FUNCTION_SOURCE_FILE_PATH = 'source_file'
FUNCTION_STARTING_LINE_NUMBER_KEY = 'function_starting_line_number'
__dict__ = dict_proxy({'__module__': 'aiida.orm.mixins', 'FUNCTION_STARTING_LINE_NUMBER_KEY': 'function_starting_line_number', '_set_function_starting_line_number': <function _set_function_starting_line_number>, 'FUNCTION_SOURCE_FILE_PATH': 'source_file', '__weakref__': <attribute '__weakref__' of 'FunctionCalculationMixin' objects>, '_set_function_name': <function _set_function_name>, 'store_source_info': <function store_source_info>, 'function_name': <property object>, 'function_namespace': <property object>, 'function_source_file': <property object>, 'FUNCTION_NAME_KEY': 'function_name', '__dict__': <attribute '__dict__' of 'FunctionCalculationMixin' objects>, 'FUNCTION_NAMESPACE_KEY': 'function_namespace', '_set_function_namespace': <function _set_function_namespace>, '_set_source_file': <function _set_source_file>, '__doc__': '\n This mixin should be used for Calculation subclasses that are used to record the execution\n of a python function. For example the Calculation nodes that are used for a function that\n was wrapped by the `workfunction` or `make_inline` function decorators. The `store_source_info`\n method can then be called with the wrapped function to store information about that function\n in the calculation node through the inspect module. Various property getters are defined to\n later retrieve that information from the node\n ', 'function_starting_line_number': <property object>})
__module__ = 'aiida.orm.mixins'
__weakref__

list of weak references to the object (if defined)

_set_function_name(function_name)[source]

Set the function name of the wrapped function

Parameters:function_name – the function name
_set_function_namespace(function_namespace)[source]

Set the function namespace of the wrapped function

Parameters:function_namespace – the function namespace
_set_function_starting_line_number(function_starting_line_number)[source]

Set the starting line number of the wrapped function in its source file

Parameters:function_starting_line_number – the starting line number
_set_source_file(source_file_handle)[source]

Store a copy of the source file from source_file_handle in the repository

Parameters:source_file_handle – a file like object with the source file
function_name

Return the function name of the wrapped function

Returns:the function name or None
function_namespace

Return the function namespace of the wrapped function

Returns:the function namespace or None
function_source_file

Return the absolute path to the source file in the repository

Returns:the absolute path of the source file in the repository, or None if it does not exist
function_starting_line_number

Return the starting line number of the wrapped function in its source file

Returns:the starting line number or None
store_source_info(func)[source]

Retrieve source information about the wrapped function func through the inspect module, and store it in the attributes and repository of the node. The function name, namespace and the starting line number in the source file will be stored in the attributes. The source file itself will be copied into the repository

Parameters:func – the function to inspect and whose information to store in the node
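A rough sketch of the intended flow; calc stands for an unstored calculation node whose class uses this mixin (e.g. one created by the workfunction decorator), and the assertions describe the expected behaviour rather than a guaranteed contract:

def add(x, y):
    return x + y

# calc is assumed: a Calculation node using FunctionCalculationMixin
calc.store_source_info(add)
assert calc.function_name == 'add'
assert calc.function_starting_line_number is not None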
class aiida.orm.mixins.Sealable[source]

Bases: object

SEALED_KEY = 'sealed'
__dict__ = dict_proxy({'__module__': 'aiida.orm.mixins', '_del_attr': <function _del_attr>, '_set_attr': <function _set_attr>, 'is_sealed': <property object>, 'add_link_from': <function add_link_from>, '_updatable_attributes': <aiida.common.utils.classproperty object>, 'SEALED_KEY': 'sealed', 'seal': <function seal>, '__dict__': <attribute '__dict__' of 'Sealable' objects>, '__weakref__': <attribute '__weakref__' of 'Sealable' objects>, '__doc__': None})
__module__ = 'aiida.orm.mixins'
__weakref__

list of weak references to the object (if defined)

_del_attr(key)[source]

Delete an attribute

Parameters:

key – attribute name

Raises:
  • AttributeError – if key does not exist
  • ModificationNotAllowed – if the node is already sealed or if the node is already stored and the attribute is not updatable
_set_attr(key, value, **kwargs)[source]

Set a new attribute

Parameters:
  • key – attribute name
  • value – attribute value
Raises:

ModificationNotAllowed – if the node is already sealed or if the node is already stored and the attribute is not updatable

_updatable_attributes = ('sealed',)

add_link_from(src, label=None, link_type=LinkType.UNSPECIFIED)[source]

Add a link from a node

You can use the parameters of the base Node class, in particular the label parameter to label the link.

Parameters:
  • src – the node to add a link from
  • label (str) – name of the link
  • link_type – type of the link, must be one of the enum values from LinkType
is_sealed

Returns whether the node is sealed, i.e. whether the sealed attribute has been set to True

seal()[source]

Seal the node by setting the sealed attribute to True
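A minimal sketch (calc stands for a stored node whose class uses the Sealable mixin):

from aiida.common.exceptions import ModificationNotAllowed

# calc is assumed: a stored calculation node mixing in Sealable
calc.seal()
assert calc.is_sealed

try:
    calc._set_attr('some_attr', 1)
except ModificationNotAllowed:
    pass  # only attributes in _updatable_attributes may change after sealing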

The QueryBuilder: A class that allows you to query the AiiDA database, independent of the backend. Note that the backend implementation is enforced and handled with a composition model! QueryBuilder() is the frontend class that the user can use. It inherits from object and gains its backend-specific functionality through composition: backend-specific functionality is provided by the implementation classes.

These inherit from aiida.backends.general.querybuilder_interface.QueryBuilderInterface(), an interface class which enforces the implementation of its defined methods. An instance of one of the implementation classes becomes a member of the QueryBuilder() instance when instantiated by the user.

class aiida.orm.querybuilder.QueryBuilder(*args, **kwargs)[source]

Bases: object

The class to query the AiiDA database.

Usage:

from aiida.orm.querybuilder import QueryBuilder
qb = QueryBuilder()
# Querying nodes:
qb.append(Node)
# retrieving the results:
results = qb.all()
_EDGE_TAG_DELIM = '--'
_VALID_PROJECTION_KEYS = ('func', 'cast')
__dict__ = dict_proxy({'_add_to_projections': <function _add_to_projections>, '_get_json_compatible': <function _get_json_compatible>, '__module__': 'aiida.orm.querybuilder', '_join_outputs': <function _join_outputs>, '_join_ancestors_recursive': <function _join_ancestors_recursive>, '__str__': <function __str__>, 'get_query': <function get_query>, 'all': <function all>, '_EDGE_TAG_DELIM': '--', 'one': <function one>, '_join_group_members': <function _join_group_members>, '__dict__': <attribute '__dict__' of 'QueryBuilder' objects>, 'get_aliases': <function get_aliases>, '_build_filters': <function _build_filters>, 'add_filter': <function add_filter>, '_get_function_map': <function _get_function_map>, '__weakref__': <attribute '__weakref__' of 'QueryBuilder' objects>, 'children': <function children>, 'append': <function append>, 'get_used_tags': <function get_used_tags>, 'order_by': <function order_by>, '_get_ormclass': <function _get_ormclass>, 'distinct': <function distinct>, 'set_debug': <function set_debug>, '_join_to_computer_used': <function _join_to_computer_used>, '_build_projections': <function _build_projections>, 'dict': <function dict>, '__init__': <function __init__>, 'outputs': <function outputs>, 'iterall': <function iterall>, 'parents': <function parents>, '__doc__': '\n The class to query the AiiDA database. \n \n Usage::\n\n from aiida.orm.querybuilder import QueryBuilder\n qb = QueryBuilder()\n # Querying nodes:\n qb.append(Node)\n # retrieving the results:\n results = qb.all()\n\n ', 'iterdict': <function iterdict>, '_build_order': <function _build_order>, 'inputs': <function inputs>, '_VALID_PROJECTION_KEYS': ('func', 'cast'), '_join_group_user': <function _join_group_user>, '_join_masters': <function _join_masters>, '_join_inputs': <function _join_inputs>, '_join_slaves': <function _join_slaves>, 'get_results_dict': <function get_results_dict>, 'add_projection': <function add_projection>, '_join_user_group': <function _join_user_group>, '_join_descendants_recursive': <function _join_descendants_recursive>, 'inject_query': <function inject_query>, 'offset': <function offset>, '_get_projectable_entity': <function _get_projectable_entity>, '_join_creator_of': <function _join_creator_of>, 'except_if_input_to': <function except_if_input_to>, '_build': <function _build>, 'count': <function count>, '_join_computer': <function _join_computer>, 'get_json_compatible_queryhelp': <function get_json_compatible_queryhelp>, '_join_created_by': <function _join_created_by>, '_get_unique_tag': <function _get_unique_tag>, '_get_tag_from_specification': <function _get_tag_from_specification>, 'get_alias': <function get_alias>, '_get_connecting_node': <function _get_connecting_node>, '_join_groups': <function _join_groups>, 'limit': <function limit>, '_check_dbentities': <staticmethod object>, '_add_type_filter': <function _add_type_filter>, 'first': <function first>})
__init__(*args, **kwargs)[source]

Instantiates a QueryBuilder instance.

Which backend is used is decided here, based on the backend settings (taken from the user profile). So far, this cannot be overridden by the user.

Parameters:
  • debug (bool) – Turn on debug mode. This feature prints information on the screen about the stages of the QueryBuilder. Does not affect results.
  • path (list) – A list of the vertices to traverse. Leave empty if you plan on using the method QueryBuilder.append().
  • filters – The filters to apply. You can specify the filters here, when appending to the query using QueryBuilder.append(), or even later using QueryBuilder.add_filter(). The latter gives API details.
  • project – The projections to apply. You can specify the projections here, when appending to the query using QueryBuilder.append(), or even later using QueryBuilder.add_projection(). The latter gives API details.
  • limit (int) – Limit the number of rows to this number. Check QueryBuilder.limit() for more information.
  • offset (int) – Set an offset for the results returned. Details in QueryBuilder.offset().
  • order_by – How to order the results. Like the two parameters above, this can also be set at a later stage; check QueryBuilder.order_by() for more information.
__module__ = 'aiida.orm.querybuilder'
__str__()[source]

When somebody hits print(qb) or print(str(qb)) on a QueryBuilder instance, I want to print the SQL query. Because it looks cool…

__weakref__

list of weak references to the object (if defined)

_add_to_projections(alias, projectable_entity_name, cast=None, func=None)[source]
Parameters:
  • alias – An instance of sqlalchemy.orm.util.AliasedClass, alias for an ormclass
  • projectable_entity_name – User specification of what to project. Appends to the query's entities what the user wants to project (i.e. have returned by the query)
_add_type_filter(tagspec, query_type_string, plugin_type_string, subclassing)[source]

Add a filter on the type based on the query_type_string

_build()[source]

Build the query and return a sqlalchemy.Query instance.

_build_filters(alias, filter_spec)[source]

Recurse through the filter specification and apply filter operations.

Parameters:
  • alias – The alias of the ORM class the filter will be applied on
  • filter_spec – the specification as given by the queryhelp
Returns:

an instance of sqlalchemy.sql.elements.BinaryExpression.

_build_order(alias, entitytag, entityspec)[source]
_build_projections(tag, items_to_project=None)[source]
static _check_dbentities(entities_cls_joined, entities_cls_to_join, relationship)[source]
Parameters:
  • entities_cls_joined (list) – A list (tuple) of the aliased class passed as joined_entity and the ormclass that was expected
  • entities_cls_to_join (list) – A list (tuple) of the aliased class passed as entity_to_join and the ormclass that was expected
  • relationship (str) – The relationship between the two entities to make the Exception comprehensible
_get_connecting_node(index, joining_keyword=None, joining_value=None, **kwargs)[source]
Parameters:
  • index – Index of this node within the path specification
  • joining_keyword – the relationship keyword specifying how this vertex is linked to another one (valid keys are listed below)
  • joining_value – the value for the joining keyword, e.g. the tag of the vertex to join to

Valid (currently implemented) keys are:

  • input_of
  • output_of
  • descendant_of
  • ancestor_of
  • direction
  • group_of
  • member_of
  • has_computer
  • computer_of
  • created_by
  • creator_of
  • owner_of
  • belongs_to

Future:

  • master_of
  • slave_of
_get_function_map()[source]
_get_json_compatible(inp)[source]
Parameters:inp – The input value that will be converted. Recurses into each value if inp is an iterable.
_get_ormclass(cls, ormclasstype)[source]

For testing purposes, I want to check whether the implementation gives the correct ormclass back. This just relays to the implementation; details for this function are in the interface.

_get_projectable_entity(alias, column_name, attrpath, **entityspec)[source]
_get_tag_from_specification(specification)[source]
Parameters:specification – If that is a string, I assume the user has deliberately specified it with tag=specification. In that case, I simply check that it’s not a duplicate. If it is a class, I check if it’s in the _cls_to_tag_map!
_get_unique_tag(ormclasstype)[source]

Using the function get_tag_from_type, I get a tag. I increment an index that is appended to that tag until I have an unused tag. This function is called in QueryBuilder.append() when autotag is set to True.

Parameters:ormclasstype (str) – The string that defines the type of the AiiDA ORM class. For subclasses of Node, this is the Node._plugin_type_string; for others, it is as returned by QueryBuilder._get_ormclass().
Returns:A tag, as a string.
_join_ancestors_recursive(joined_entity, entity_to_join, isouterjoin, filter_dict, expand_path=False)[source]

Joining ancestors using the recursive functionality.

Todo

Move the filters to be done inside the recursive query (for example on depth); pass an option to also show the path, if this is wanted.

_join_computer(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – An entity that can use a computer (eg a node)
  • entity_to_join – aliased dbcomputer entity
_join_created_by(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – the aliased user you want to join to
  • entity_to_join – the (aliased) node or group in the DB to join with
_join_creator_of(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – the aliased node
  • entity_to_join – the aliased user to join to that node
_join_descendants_recursive(joined_entity, entity_to_join, isouterjoin, filter_dict, expand_path=False)[source]

Joining descendants using the recursive functionality.

Todo

Move the filters to be done inside the recursive query (for example on depth); pass an option to also show the path, if this is wanted.

_join_group_members(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – The (aliased) ORMclass that is a group in the database
  • entity_to_join – The (aliased) ORMClass that is a node and member of the group

joined_entity and entity_to_join are joined via the table_groups_nodes table, from joined_entity as group to entity_to_join as node (entity_to_join is a member_of joined_entity).

_join_group_user(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – An aliased dbgroup
  • entity_to_join – aliased dbuser
_join_groups(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – The (aliased) node in the database
  • entity_to_join – The (aliased) Group

joined_entity and entity_to_join are joined via the table_groups_nodes table, from joined_entity as node to entity_to_join as group (entity_to_join is a group_of joined_entity).

_join_inputs(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – The (aliased) ORMclass that is an output
  • entity_to_join – The (aliased) ORMClass that is an input.

joined_entity and entity_to_join are joined with a link from joined_entity as output to entity_to_join as input (entity_to_join is an input_of joined_entity).

_join_masters(joined_entity, entity_to_join)[source]
_join_outputs(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – The (aliased) ORMclass that is an input
  • entity_to_join – The (aliased) ORMClass that is an output.

joined_entity and entity_to_join are joined with a link from joined_entity as input to entity_to_join as output (entity_to_join is an output_of joined_entity).

_join_slaves(joined_entity, entity_to_join)[source]
_join_to_computer_used(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – the (aliased) computer entity
  • entity_to_join – the (aliased) node entity
_join_user_group(joined_entity, entity_to_join, isouterjoin)[source]
Parameters:
  • joined_entity – An aliased user
  • entity_to_join – aliased group
add_filter(tagspec, filter_spec)[source]

Adding a filter to my filters.

Parameters:
  • tagspec – The tag, which has to exist already as a key in self._filters
  • filter_spec – The specifications for the filter, has to be a dictionary

Usage:

qb = QueryBuilder()         # Instantiating the QueryBuilder instance
qb.append(Node, tag='node') # Appending a Node
#let's put some filters:
qb.add_filter('node',{'id':{'>':12}})
# 2 filters together:
qb.add_filter('node',{'label':'foo', 'uuid':{'like':'ab%'}})
# Now I am overriding the first filter I set:
qb.add_filter('node',{'id':13})
add_projection(tag_spec, projection_spec)[source]

Adds a projection

Parameters:
  • tag_spec – A valid specification for a tag
  • projection_spec – The specification for the projection. A projection is a list of dictionaries, with each dictionary containing key-value pairs, where the key is a database entity (e.g. a column / an attribute) and the value is (optional) additional information on how to process this database entity.

If the given projection_spec is not a list, it will be expanded to a list. If the list items are not dictionaries but strings (i.e. no additional processing of the projected results is desired), they will be expanded to dictionaries.

Usage:

qb = QueryBuilder()
qb.append(StructureData, tag='struc')

# Will project the uuid and the kinds
qb.add_projection('struc', ['uuid', 'attributes.kinds'])

The above example will project the uuid and the kinds attribute of all matching structures. There are (so far) two special keys.

The single star * will project the ORM-instance:

qb = QueryBuilder()
qb.append(StructureData, tag='struc')
# Will project the ORM instance
qb.add_projection('struc', '*')
print(type(qb.first()[0]))
# >>> aiida.orm.data.structure.StructureData

The double star ** projects all possible projections of this entity:

QueryBuilder().append(StructureData, tag='s', project='**').limit(1).dict()[0]['s'].keys()

# >>> u'user_id, description, ctime, label, extras, mtime, id, attributes, dbcomputer_id, nodeversion, type, public, uuid'

Be aware that the result of ** depends on the backend implementation.
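
Besides the star projections, each projection dictionary may carry the additional keys listed in _VALID_PROJECTION_KEYS, i.e. 'func' and 'cast'. A minimal sketch; the 'energy' attribute and the single-letter cast code 'f' (float) are assumptions of mine, not guaranteed by this page:

qb = QueryBuilder()
qb.append(Node, tag='node')
# Project an attribute, asking the backend to cast the value to a float:
qb.add_projection('node', {'attributes.energy': {'cast': 'f'}})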

all(batch_size=None)[source]

Executes the full query, with the order of the rows as returned by the backend. The order inside each row is given by the order of the vertices in the path and the order of the projections for each vertex in the path.

Parameters:batch_size (int) – The size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter. Leave the default (None) if speed is not critical or if you don’t know what you’re doing!
Returns:a list of lists of all projected entities.
append(cls=None, type=None, tag=None, filters=None, project=None, subclassing=True, edge_tag=None, edge_filters=None, edge_project=None, outerjoin=False, **kwargs)[source]

Any iterative procedure to build the path for a graph query needs to invoke this method to append to the path.

Parameters:
  • cls

    The AiiDA class (or backend class) defining the appended vertex. Also supports a tuple/list of classes, in which case all instances of these classes are accepted in the query. However, the classes have to share the same ORM class for the joining to work, i.e. both have to be subclasses of Node. Valid is:

    cls=(StructureData, ParameterData)
    

    This is invalid:

    cls=(Group, Node)
  • type (str) – The type of the class, if cls is not given. Also here, a tuple or list is accepted.
  • autotag (bool) – Whether to find a unique tag automatically. Defaults to False.
  • tag (str) – A unique tag. If none is given, I will create a unique tag myself.
  • filters – Filters to apply for this vertex. See add_filter(), the method invoked in the background, or the usage examples for details.
  • project – Projections to apply. See usage examples for details. More information also in add_projection().
  • subclassing (bool) – Whether to include subclasses of the given class (default True), e.g. specifying Calculation as cls will also match JobCalculation, InlineCalculation, etc.
  • outerjoin (bool) – If True (default is False), will do a left outer join instead of an inner join.
  • edge_tag (str) – The tag that the edge will get. If nothing is specified (and there is a meaningful edge), the default is tag1--tag2, with tag1 being the entity joining from and tag2 being the entity joining to (this entity).
  • edge_filters (str) – The filters to apply on the edge. Also here, details in add_filter().
  • edge_project (str) – The project from the edges. API-details in add_projection().

A small usage example of how this can be invoked:

qb = QueryBuilder()             # Instantiating empty querybuilder instance
qb.append(cls=StructureData)    # First item is StructureData node
# The next node in the path is a PwCalculation,
# with the structure joined as an input
qb.append(
    cls=PwCalculation,
    output_of=StructureData
)
Returns:self
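
The edge-related keywords deserve a short sketch of their own. The following is an assumption-laden example: the 'label' column on the link, the explicit edge tag, and the classes used are illustrative, not prescriptive:

qb = QueryBuilder()
qb.append(StructureData, tag='structure')
qb.append(
    PwCalculation,
    output_of='structure',
    edge_tag='structure--calc',                   # explicit tag instead of the default
    edge_filters={'label': {'like': 'input_%'}},  # filter on the link label
    edge_project='label',                         # also return the link label
)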
children(**kwargs)[source]

Join to children/descendants of the previous vertex in the path.

Returns:self
count()[source]

Counts the number of rows returned by the backend.

Returns:the number of rows as an integer
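
For example, to count the matches of a filtered query without fetching the rows (a minimal sketch):

qb = QueryBuilder()
qb.append(Node, filters={'id': {'>': 12}})
n_rows = qb.count()   # the number of matching rows, as an integer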
dict(batch_size=None)[source]

Executes the full query, with the order of the rows as returned by the backend. The order inside each row is given by the order of the vertices in the path and the order of the projections for each vertex in the path.

Parameters:batch_size (int) – The size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter. Leave the default (None) if speed is not critical or if you don’t know what you’re doing!
Returns:a list of dictionaries of all projected entities. Each dictionary consists of key-value pairs, where the key is the tag of the vertex and the value is a dictionary of key-value pairs where the key is the entity description (a column name or attribute path) and the value is the value in the DB.

Usage:

qb = QueryBuilder()
qb.append(
    StructureData,
    tag='structure',
    filters={'uuid':{'==':myuuid}},
)
qb.append(
    Node,
    descendant_of='structure',
    project=['type', 'id'],  # returns type (string) and id (integer)
    tag='descendant'
)

# Return the dictionaries:
print "qb.iterdict()"
for d in qb.iterdict():
    print '>>>', d

results in the following output:

qb.iterdict()
>>> {'descendant': {
        'type': u'calculation.job.quantumespresso.pw.PwCalculation.',
        'id': 7716}
    }
>>> {'descendant': {
        'type': u'data.remote.RemoteData.',
        'id': 8510}
    }
distinct()[source]

Asks for distinct rows, which is the same as asking the backend to remove duplicates. Does not execute the query!

If you want a distinct query:

qb = QueryBuilder()
# append stuff!
qb.append(...)
qb.append(...)
...
qb.distinct().all() #or
qb.distinct().dict()
Returns:self
except_if_input_to(calc_class)[source]

Makes a counterquery based on the current path, selecting only entries that have been input to calc_class.

Parameters:calc_class – The calculation class to check against
Returns:self
first()[source]

Executes query asking for one instance. Use as follows:

qb = QueryBuilder(**queryhelp)
qb.first()
Returns:One row of results as a list
get_alias(tag)[source]

In order to allow the user to continue building on a query, this utility function returns the aliased ormclass.

Parameters:tag – The tag for a vertex in the path
Returns:the alias given for that vertex
get_aliases()[source]
Returns:the list of aliases
get_json_compatible_queryhelp()[source]

Makes the queryhelp a JSON-compatible dictionary. In this way, the queryhelp can be stored in the database or in a JSON object, retrieved or shared, and used later. See this usage:

qb = QueryBuilder(limit=3).append(StructureData, project='id').order_by({StructureData:'id'})
queryhelp  = qb.get_json_compatible_queryhelp()

# Now I could save this dictionary somewhere and use it later:

qb2=QueryBuilder(**queryhelp)

# This is True if no change has been made to the database.
# Note that such a comparison can only be True if the order of results is enforced
qb.all()==qb2.all()
Returns:the json-compatible queryhelp
get_query()[source]

Instantiates and manipulates a sqlalchemy.orm.Query instance if this is needed. First, I check if the query instance is still valid by hashing the queryhelp. In this way, if a user asks for the same query twice, I am not recreating an instance.

Returns:an instance of sqlalchemy.orm.Query that is specific to the backend used.
get_results_dict()[source]

Deprecated, use dict() instead

get_used_tags(vertices=True, edges=True)[source]

Returns a list of all the tags that are being used. The parameters allow selecting subsets:

Parameters:
  • vertices (bool) – Defaults to True. If True, adds the tags of vertices to the returned list.
  • edges (bool) – Defaults to True. If True, adds the tags of edges to the returned list.

Returns:A list of all tags, including (if present) the tags given to the edges
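
A short sketch of what to expect; the exact content and ordering of the returned list, in particular the edge tag built with the '--' delimiter, is my assumption:

qb = QueryBuilder()
qb.append(StructureData, tag='structure')
qb.append(PwCalculation, output_of='structure', tag='calc')
qb.get_used_tags()
# e.g. ['structure', 'calc', 'structure--calc']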
inject_query(query)[source]

Manipulate the query and inject it back. This can be done to add custom filters using SQLA.

Parameters:query – A sqlalchemy.orm.Query instance

inputs(**kwargs)[source]

Join to inputs of the previous vertex in the path.

Returns:self
iterall(batch_size=100)[source]

Same as all(), but returns a generator. Be aware that this is only safe if no commit will take place during this transaction. You might also want to read the SQLAlchemy documentation on http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.yield_per

Parameters:batch_size (int) – The size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter.
Returns:a generator of lists
iterdict(batch_size=100)[source]

Same as dict(), but returns a generator. Be aware that this is only safe if no commit will take place during this transaction. You might also want to read the SQLAlchemy documentation on http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.yield_per

Parameters:batch_size (int) – The size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter.
Returns:a generator of dictionaries
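
A minimal sketch of streaming over a large result set; the projections used here are illustrative:

qb = QueryBuilder()
qb.append(Node, project=['id', 'ctime'])
for pk, ctime in qb.iterall(batch_size=500):
    # each row is a list, ordered like the projections above
    print('{} {}'.format(pk, ctime))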
limit(limit)[source]

Set the limit (the number of rows to return).

Parameters:limit (int) – the number of rows to return
offset(offset)[source]

Set the offset. If offset is set, that many rows are skipped before returning. Setting offset = 0 is the same as not setting an offset. If both offset and limit are set, offset rows are skipped before starting to count the limit rows that are returned.

Parameters:offset (int) – the number of rows to skip
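
Combining limit() and offset() gives simple pagination. A sketch; the page-numbering convention is my own, and a stable ordering is enforced first so that pages do not overlap:

page_size = 20
page = 3                       # zero-based page index
qb = QueryBuilder()
qb.append(Node, project='id')
qb.order_by({Node: ['id']})    # stable ordering before paginating
qb.offset(page * page_size)
qb.limit(page_size)
results = qb.all()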
one()[source]

Executes the query, asking for exactly one result. Will raise an exception if this is not the case.

Raises:MultipleObjectsError if more than one row can be returned
Raises:NotExistent if no result was found
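
A sketch of the typical defensive pattern around one(), assuming myuuid holds the UUID of interest (as in the dict() example above):

from aiida.common.exceptions import MultipleObjectsError, NotExistent

qb = QueryBuilder()
qb.append(Node, project='*', filters={'uuid': {'==': myuuid}})
try:
    node, = qb.one()           # unpack the single projected entity
except MultipleObjectsError:
    print('more than one row matches')
except NotExistent:
    print('no row matches')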

order_by(order_by)[source]

Set the entity to order by

Parameters:order_by – This is a list of items, where each item is a dictionary that specifies what to sort for an entity

In each dictionary in that list, keys represent valid tags of entities (tables), and values are list of columns.

Usage:

#Sorting by id (ascending):
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node':['id']})

# or
#Sorting by id (ascending):
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node':[{'id':{'order':'asc'}}]})

# for descending order:
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node':[{'id':{'order':'desc'}}]})

# or (shorter)
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node':[{'id':'desc'}]})
outputs(**kwargs)[source]

Join to outputs of the previous vertex in the path.

Returns:self
parents(**kwargs)[source]

Join to parents/ancestors of the previous vertex in the path.

Returns:self
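
A minimal sketch of walking from a structure to its descendants with these convenience methods, under the assumption that their keyword arguments are forwarded to append():

qb = QueryBuilder()
qb.append(StructureData, tag='structure')
qb.children(project=['id', 'type'])   # joins the children/descendants of 'structure'
for row in qb.all():
    print(row)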
set_debug(debug)[source]

Run in debug mode. This does not affect functionality, but prints intermediate stages of the query creation to the screen.

Parameters:debug (bool) – Turn debug on or off
aiida.orm.querybuilder.get_querybuilder_classifiers_from_cls(cls, obj)[source]

This utility function returns the correct classifiers for the QueryBuilder, given a class.

Parameters:
  • cls – a class or tuple/set/list of classes that are either AiiDA ORM classes or backend ORM classes
  • obj – The implementation of the QueryBuilder, with all the attributes correctly set

aiida.orm.querybuilder.get_querybuilder_classifiers_from_type(ormclasstype, obj)[source]

Same as get_querybuilder_classifiers_from_cls above, but accepts a string instead of a class.

Module for the ORM user classes

class aiida.orm.user.User(backend)[source]

Bases: aiida.orm.backend.CollectionEntry

This is the base class for User information in AiiDA. An implementing backend needs to provide a concrete version.

REQUIRED_FIELDS = ['first_name', 'last_name', 'institution']
__abstractmethods__ = frozenset(['first_name', 'last_name', 'is_active', 'email', '_set_password', 'is_stored', 'last_login', '_get_password', 'id', 'pk', 'institution', 'store', 'date_joined'])
__module__ = 'aiida.orm.user'
__str__() <==> str(x)[source]
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 39
_abc_registry = <_weakrefset.WeakSet object>
_get_password()[source]
_set_password(new_pass)[source]
date_joined
email
first_name
get_full_name()[source]

Return the user full name

Returns:the user full name
static get_schema()[source]

Every property of the schema contains:

  • display_name: display name of the property
  • help text: short help text of the property
  • is_foreign_key: whether the property is a foreign key to another type of node
  • type: type of the property. e.g. str, dict, int
Returns:schema of the user
get_short_name()[source]

Return the user short name (typically, this returns the email)

Returns:The short name
has_usable_password()[source]
id
institution
is_active
is_stored

Is the user stored

Returns:True if stored, False otherwise
Return type:bool
last_login
last_name
password
pk
store()[source]
verify_password(password)[source]
class aiida.orm.user.UserCollection(backend)[source]

Bases: aiida.orm.backend.Collection

The collection of users stored in a backend

__abstractmethods__ = frozenset(['create'])
__module__ = 'aiida.orm.user'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 39
_abc_registry = <_weakrefset.WeakSet object>
all()[source]

Get all users

Returns:A collection of users matching the criteria
create(email, first_name='', last_name='', institution='')[source]

Create a user with the provided email address

Parameters:
  • email – An email address for the user
  • first_name (str) – The user's first name
  • last_name (str) – The user's last name
  • institution (str) – The user's institution
Returns:A new user object
Return type:User

find(email=None, id=None)[source]

Find all users matching the given criteria.

Parameters:
  • email – An email address to search for
  • id – A user id to search for
Returns:A collection of users matching the criteria
get(email)[source]

Get a user using the email address.

Parameters:email – The user's email address
Returns:The corresponding user object
Raises:aiida.common.exceptions.MultipleObjectsError, aiida.common.exceptions.NotExistent

get_automatic_user()[source]

Get the current automatic (default) user.

Returns:The automatic user

get_or_create(email)[source]

Get the existing user with a given email address, or create an unstored one.

Parameters:email – The user's email address
Returns:The corresponding user object
Raises:aiida.common.exceptions.MultipleObjectsError, aiida.common.exceptions.NotExistent
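
A sketch of typical collection usage. How the collection is reached (the users attribute on a previously constructed backend instance) and the email address are assumptions of mine:

# backend: a previously constructed backend instance (assumption)
user = backend.users.get_or_create(email='jdoe@example.com')  # hypothetical address
if not user.is_stored:
    user.first_name = 'Jane'   # illustrative only
    user.store()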

aiida.orm.workflow.kill_from_pk(pk, verbose=False)[source]

Kills a workflow without loading the class, useful when there was a problem and the workflow definition module was changed/deleted (and the workflow cannot be reloaded).

Parameters:
  • pk – the principal key (id) of the workflow to kill
  • verbose – True to print the pk of each subworkflow killed
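
A minimal sketch; the pk is a placeholder:

from aiida.orm.workflow import kill_from_pk

kill_from_pk(1234, verbose=True)   # 1234: placeholder pk of the workflow to kill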