aiida.orm package
class aiida.orm.JobCalculation(**kwargs)
Bases: aiida.orm.implementation.general.calculation.job.AbstractJobCalculation, aiida.orm.implementation.django.calculation.Calculation
__abstractmethods__ = frozenset([])
__module__ = 'aiida.orm.implementation.django.calculation.job'
classmethod _list_calculations_old(states=None, past_days=None, group=None, group_pk=None, all_users=False, pks=[], relative_ctime=True)
Return a string with a description of the AiiDA calculations.

Todo: does not support the query for the IMPORTED state (since it checks the state in the Attributes, not in the DbCalcState table). Decide which is the correct logic and implement the correct query.

Parameters:
- states – a list of strings with states. If set, print only the calculations in those states, otherwise show all. Default = None.
- past_days – if specified, show only calculations that were created in the given number of past days.
- group – if specified, show only calculations belonging to a user-defined group with the given name. Colons can be used to separate the group name from the type, as specified in the aiida.orm.group.Group.get_from_string() method.
- group_pk – if specified, show only calculations belonging to a user-defined group with the given PK.
- pks – if specified, must be a list of integers; only calculations within that list are shown. Otherwise, all calculations are shown. If specified, sets states to None and ignores the value of the past_days option.
- relative_ctime – if True, print the creation time relative to now (like "2 days ago"). Default = True.
- all_users – if True, list calculations belonging to all users. Default = False.

Returns: a string with a description of the calculations.
_logger = <logging.Logger object>
_plugin_type_string = 'calculation.job.JobCalculation.'
_query_type_string = 'calculation.job.'
_set_state(state)
Set the state of the calculation.

Set it in the DbCalcState table, which also provides the uniqueness check. Moreover (except for the IMPORTED state), also store it in the 'state' attribute; this is useful to know the state also after importing, and allows faster querying.

Todo: add further checks to enforce that the states are set in order?

Parameters: state – a string with the state. This must be a valid string, from aiida.common.datastructures.calc_states.
Raises: ModificationNotAllowed – if the given state was already set.
get_state(from_attribute=False)
Get the state of the calculation.

Note: this method returns the NOTFOUND state if no state is found in the DB.
Note: the 'most recent' state is obtained using the logic in the aiida.common.datastructures.sort_states function.
Todo: understand whether the state returned when no state entry is found in the DB is the best choice.

Parameters: from_attribute – if set to True, read the state from the attributes (the attribute is also set with set_state, unless the state is set to IMPORTED; in this way we can also see the state before storing).
Returns: a string. If from_attribute is True and no attribute is found, return None. If from_attribute is False and no entry is found in the DB, return the "NOTFOUND" state.
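The documented return rules can be summarized in plain Python. The helper below is a hypothetical illustration (not part of AiiDA): reading from the attributes yields None when nothing was stored, while reading from the DB yields the 'NOTFOUND' state.

```python
def lookup_state(db_state, attr_state, from_attribute=False):
    """Sketch of get_state's documented return behaviour (hypothetical
    helper): attributes fall back to None, the DB falls back to 'NOTFOUND'."""
    if from_attribute:
        return attr_state          # None if no attribute was stored
    return db_state if db_state is not None else 'NOTFOUND'

# No state stored anywhere:
lookup_state(None, None)                        # 'NOTFOUND'
lookup_state(None, None, from_attribute=True)   # None
# A state stored in the DbCalcState table:
lookup_state('FINISHED', 'FINISHED')            # 'FINISHED'
```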
class aiida.orm.WorkCalculation(**kwargs)
Bases: aiida.orm.implementation.django.calculation.Calculation

Used to represent a calculation generated by a Process from the new workflows system.
ABORTED_KEY = '_aborted'
DO_ABORT_KEY = '_do_abort'
__abstractmethods__ = frozenset([])
__module__ = 'aiida.orm.implementation.general.calculation.work'
_logger = <logging.Logger object>
_plugin_type_string = 'calculation.work.WorkCalculation.'
_query_type_string = 'calculation.work.'
_updatable_attributes = ('_sealed', 'state', '_finished', '_failed', '_aborted', '_do_abort')
has_aborted()
Returns True if the work calculation was killed and is marked as aborted.

Returns: True if the calculation was killed, False otherwise.
Return type: bool
has_failed()
Returns True if the work calculation failed because of an exception, False otherwise.

Returns: True if the calculation has failed, False otherwise.
Return type: bool
has_finished()
Determine if the calculation is finished for whatever reason. This may be because it finished successfully or because of a failure.

Returns: True if the job has finished running, False otherwise.
Return type: bool
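The three predicates can be summarized with a small sketch. The attribute names are taken from _updatable_attributes above, but the helper itself is illustrative, not the actual implementation:

```python
def work_calc_status(attributes):
    """Illustrative summary of the documented predicates, reading a
    hypothetical attribute dict of a work calculation."""
    failed = bool(attributes.get('_failed'))
    aborted = bool(attributes.get('_aborted'))
    succeeded = bool(attributes.get('_finished'))
    # has_finished is True whether the run succeeded or failed:
    finished = succeeded or failed
    return {'has_failed': failed, 'has_aborted': aborted, 'has_finished': finished}

work_calc_status({'_failed': True})    # finished because of a failure
work_calc_status({'_finished': True})  # finished successfully
```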
class aiida.orm.Code(**kwargs)
Bases: aiida.orm.implementation.general.code.AbstractCode
__abstractmethods__ = frozenset([])
__module__ = 'aiida.orm.implementation.django.code'
_logger = <logging.Logger object>
_plugin_type_string = 'code.Code.'
_query_type_string = 'code.'
_set_local()
Set the code as a 'local' code, meaning that all the files belonging to the code will be copied to the cluster, and the file set with set_exec_filename will be run.

It also deletes the flags related to the local case (if any).
can_run_on(computer)
Return True if this code can run on the given computer, False otherwise.

Local codes can run on any machine; remote codes can run only on the machine on which they reside.

TODO: add filters to mask the remote machines on which a local code can run.
classmethod get(pk=None, label=None, machinename=None)
Get a Code object with a given identifier, which can be either the numeric ID (pk) or the label (and machine name), if unique.

Parameters:
- pk – the numeric ID (pk) of the code
- label – the code label identifying the code to load
- machinename – the machine name where the code is set up

Raises:
- NotExistent – if no code identified by the given string is found
- MultipleObjectsError – if the string cannot identify the code uniquely
- InputValidationError – if neither a pk nor a label was passed in
classmethod get_from_string(code_string)
Get a Code object with a given identifier string in the format label@machinename. See the note below for details on the string parsing.

Note: the (leftmost) '@' symbol is always used to split the code label from the computer name. Therefore do not use '@' in the code label if you want to use this function ('@' in the computer name is instead valid).

Parameters: code_string – the code string identifying the code to load

Raises:
- NotExistent – if no code identified by the given string is found
- MultipleObjectsError – if the string cannot identify the code uniquely
- InputValidationError – if code_string is not of string type
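The leftmost-'@' rule in the note above can be illustrated with plain string handling. split_code_string is a hypothetical helper, not the actual AiiDA parser:

```python
def split_code_string(code_string):
    """Split label@machinename on the leftmost '@', as documented.
    Hypothetical illustration, not the AiiDA implementation."""
    if '@' in code_string:
        label, machinename = code_string.split('@', 1)
    else:
        label, machinename = code_string, None
    return label, machinename

split_code_string('pw@localhost')    # ('pw', 'localhost')
split_code_string('pw@my@cluster')   # ('pw', 'my@cluster'): '@' in the computer name is valid
```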
classmethod list_for_plugin(plugin, labels=True)
Return a list of valid code strings for a given plugin.

Parameters:
- plugin – the string of the plugin.
- labels – if True, return a list of code names; otherwise, return the code PKs (integers).

Returns: a list of strings with the code names if labels is True, otherwise a list of integers with the code PKs.
set_remote_computer_exec(remote_computer_exec)
Set the code as remote, and pass the computer on which it resides and the absolute path of the executable on that computer.

Parameters: remote_computer_exec – a tuple (computer, remote_exec_path), where computer is an aiida.orm.Computer or an aiida.backends.djsite.db.models.DbComputer object, and remote_exec_path is the absolute path of the main executable on the remote computer.
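The expected argument shape can be sketched with a small validation helper. This is hypothetical and covers only the constraints documented above (a two-element tuple whose second item is an absolute path), not the full checks the real method performs:

```python
def check_remote_computer_exec(remote_computer_exec):
    """Validate the (computer, remote_exec_path) tuple described above
    (hypothetical helper; a sketch of the documented constraints only)."""
    if not (isinstance(remote_computer_exec, (tuple, list))
            and len(remote_computer_exec) == 2):
        raise ValueError('expected a (computer, remote_exec_path) tuple')
    computer, remote_exec_path = remote_computer_exec
    if not remote_exec_path.startswith('/'):
        raise ValueError('remote_exec_path must be an absolute path')
    return computer, remote_exec_path

check_remote_computer_exec(('my_computer', '/usr/bin/pw.x'))  # passes
```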
class aiida.orm.Computer(**kwargs)
Bases: aiida.orm.implementation.general.computer.AbstractComputer
__module__ = 'aiida.orm.implementation.django.computer'
dbcomputer
description
full_text_info
classmethod get(computer)
Return a computer from its name (or from another Computer or DbComputer instance).
get_dbauthinfo(user)
Return the aiida.backends.djsite.db.models.DbAuthInfo instance for the given user on this computer, if the computer is configured for the given user.

Parameters: user – a DbUser instance.
Returns: an aiida.backends.djsite.db.models.DbAuthInfo instance.
Raises: NotExistent – if the computer is not configured for the given user.
hostname
id
is_user_configured(user)
Return True if the computer is configured for the given user, False otherwise.

Parameters: user – a DbUser instance.
Returns: a boolean.
is_user_enabled(user)
Return True if the computer is enabled for the given user (looking only at the per-user setting: the computer could still be globally disabled).

Note: returns False also if the user is not configured for the computer.
Parameters: user – a DbUser instance.
Returns: a boolean.
name
pk
store()
Store the computer in the DB.

Differently from Nodes, a computer can be re-stored if its properties are to be changed (e.g. a new mpirun command, etc.).
to_be_stored
uuid
aiida.orm.CalculationFactory(module, from_abstract=False)
Return a suitable JobCalculation subclass.

Parameters:
- module – a valid string recognized as a Calculation plugin
- from_abstract – a boolean. If False (default), look only at subclasses of JobCalculation, not at the base Calculation class. If True, check for valid strings for plugins of the Calculation base class.
class aiida.orm.QueryBuilder(*args, **kwargs)
Bases: object

The class to query the AiiDA database.

Usage:

    from aiida.orm.querybuilder import QueryBuilder
    qb = QueryBuilder()
    # Querying nodes:
    qb.append(Node)
    # retrieving the results:
    results = qb.all()
_EDGE_TAG_DELIM = '--'
_VALID_PROJECTION_KEYS = ('func', 'cast')
__init__(*args, **kwargs)
Instantiates a QueryBuilder instance.

Which backend is used is decided here, based on the backend settings (taken from the user profile). So far this cannot be overridden by the user.

Parameters:
- debug (bool) – turn on debug mode. This feature prints information on the screen about the stages of the QueryBuilder. Does not affect results.
- path (list) – a list of the vertices to traverse. Leave empty if you plan on using the method QueryBuilder.append().
- filters – the filters to apply. You can specify the filters here, when appending to the query using QueryBuilder.append(), or even later using QueryBuilder.add_filter(). The latter gives API details.
- project – the projections to apply. You can specify the projections here, when appending to the query using QueryBuilder.append(), or even later using QueryBuilder.add_projection(). The latter gives API details.
- limit (int) – limit the number of rows to this number. Check QueryBuilder.limit() for more information.
- offset (int) – set an offset for the results returned. Details in QueryBuilder.offset().
- order_by – how to order the results. As the two above, can also be set at a later stage; check QueryBuilder.order_by() for more information.
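The keyword arguments above can be collected in a plain dictionary and passed with **. The sketch below only shows the shape; the tag names, filter values, and the 'path' entry contents are illustrative examples, not a schema:

```python
# Illustrative queryhelp-style dictionary matching the parameters above
# (example values throughout; an actual QueryBuilder needs a configured
# AiiDA profile, so it is not instantiated here).
queryhelp = {
    'path': [{'cls': 'Node', 'tag': 'node'}],        # vertices to traverse
    'filters': {'node': {'id': {'>': 12}}},          # per-tag filters
    'project': {'node': ['id', 'uuid']},             # per-tag projections
    'limit': 10,
    'offset': 0,
    'order_by': {'node': [{'id': {'order': 'desc'}}]},
}
# QueryBuilder(**queryhelp) would consume these keywords.
```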
__module__ = 'aiida.orm.querybuilder'
__str__()
When somebody calls print(qb) or print(str(qb)), I want to print the SQL query. Because it looks cool…
_add_to_projections(alias, projectable_entity_name, cast=None, func=None)
Parameters:
- alias – an instance of sqlalchemy.orm.util.AliasedClass, alias for an ormclass
- projectable_entity_name – user specification of what to project. Appends to the query's entities what the user wants to project (have returned by the query).

_add_type_filter(tagspec, query_type_string, plugin_type_string, subclassing)
Add a filter on the type based on the query_type_string.
_build_filters(alias, filter_spec)
Recurse through the filter specification and apply filter operations.

Parameters:
- alias – the alias of the ORM class the filter will be applied on
- filter_spec – the specification as given by the queryhelp

Returns: an instance of sqlalchemy.sql.elements.BinaryExpression.
static _check_dbentities(entities_cls_to_join, relationship)
Parameters:
- entities_cls_joined (list) – a list (tuple) of the aliased class passed as joined_entity and the ormclass that was expected
- entities_cls_to_join – a list (tuple) of the aliased class passed as entity_to_join and the ormclass that was expected
- relationship (str) – the relationship between the two entities, to make the exception comprehensible
_get_connecting_node(index, joining_keyword=None, joining_value=None, **kwargs)
Parameters:
- querydict – a dictionary specifying how the current node is linked to other nodes.
- index – index of this node within the path specification.

Valid (currently implemented) keys are:
- input_of
- output_of
- descendant_of
- ancestor_of
- direction
- group_of
- member_of
- has_computer
- computer_of
- created_by
- creator_of
- owner_of
- belongs_to

Future:
- master_of
- slave_of
_get_json_compatible(inp)
Parameters: inp – the input value that will be converted. Recurses into each value if inp is an iterable.

_get_ormclass(cls, ormclasstype)
For testing purposes, I want to check whether the implementation gives the correct ormclass back. Just relaying to the implementation; details for this function are in the interface.

_get_tag_from_specification(specification)
Parameters: specification – if this is a string, I assume the user has deliberately specified it with tag=specification. In that case, I simply check that it is not a duplicate. If it is a class, I check whether it is in the _cls_to_tag_map!
_get_unique_tag(ormclasstype)
Using the function get_tag_from_type, I get a tag. I increment an index that is appended to that tag until I have an unused tag. This function is called in QueryBuilder.append() when autotag is set to True.

Parameters: ormclasstype (str) – the string that defines the type of the AiiDA ORM class. For subclasses of Node, this is Node._plugin_type_string; for others, it is as returned by QueryBuilder._get_ormclass().
Returns: a tag, as a string.
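The increment-until-unused strategy described above can be sketched in a few lines. The helper and the exact tag format are hypothetical; only the strategy is taken from the docstring:

```python
def make_unique_tag(base_tag, used_tags):
    """Sketch of the documented strategy: append an incrementing index to
    the base tag until the result is unused (hypothetical helper; the real
    tag format may differ)."""
    if base_tag not in used_tags:
        return base_tag
    index = 1
    while '{}_{}'.format(base_tag, index) in used_tags:
        index += 1
    return '{}_{}'.format(base_tag, index)

make_unique_tag('node', set())               # 'node'
make_unique_tag('node', {'node', 'node_1'})  # 'node_2'
```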
_join_ancestors_recursive(joined_entity, entity_to_join, isouterjoin, filter_dict, expand_path=False)
Join ancestors using the recursive functionality.

TODO: move the filters to be done inside the recursive query (for example on depth).
TODO: pass an option to also show the path, if this is wanted.
_join_computer(joined_entity, entity_to_join, isouterjoin)
Parameters:
- joined_entity – an entity that can use a computer (e.g. a node)
- entity_to_join – the aliased dbcomputer entity

_join_created_by(joined_entity, entity_to_join, isouterjoin)
Parameters:
- joined_entity – the aliased user you want to join to
- entity_to_join – the (aliased) node or group in the DB to join with

_join_creator_of(joined_entity, entity_to_join, isouterjoin)
Parameters:
- joined_entity – the aliased node
- entity_to_join – the aliased user to join to that node
_join_descendants_recursive(joined_entity, entity_to_join, isouterjoin, filter_dict, expand_path=False)
Join descendants using the recursive functionality.

TODO: move the filters to be done inside the recursive query (for example on depth).
TODO: pass an option to also show the path, if this is wanted.
_join_group_members(joined_entity, entity_to_join, isouterjoin)
Parameters:
- joined_entity – the (aliased) ORMclass that is a group in the database
- entity_to_join – the (aliased) ORMClass that is a node and member of the group

joined_entity and entity_to_join are joined via the table_groups_nodes table, from joined_entity as group to entity_to_join as node (entity_to_join is a member_of joined_entity).
_join_group_user(joined_entity, entity_to_join, isouterjoin)
Parameters:
- joined_entity – an aliased dbgroup
- entity_to_join – the aliased dbuser

_join_groups(joined_entity, entity_to_join, isouterjoin)
Parameters:
- joined_entity – the (aliased) node in the database
- entity_to_join – the (aliased) Group

joined_entity and entity_to_join are joined via the table_groups_nodes table, from joined_entity as node to entity_to_join as group (entity_to_join is a group_of joined_entity).
_join_inputs(joined_entity, entity_to_join, isouterjoin)
Parameters:
- joined_entity – the (aliased) ORMclass that is an output
- entity_to_join – the (aliased) ORMClass that is an input.

joined_entity and entity_to_join are joined with a link from joined_entity as output to entity_to_join as input (entity_to_join is an input_of joined_entity).

_join_outputs(joined_entity, entity_to_join, isouterjoin)
Parameters:
- joined_entity – the (aliased) ORMclass that is an input
- entity_to_join – the (aliased) ORMClass that is an output.

joined_entity and entity_to_join are joined with a link from joined_entity as input to entity_to_join as output (entity_to_join is an output_of joined_entity).
_join_to_computer_used(joined_entity, entity_to_join, isouterjoin)
Parameters:
- joined_entity – the (aliased) computer entity
- entity_to_join – the (aliased) node entity

_join_user_group(joined_entity, entity_to_join, isouterjoin)
Parameters:
- joined_entity – an aliased user
- entity_to_join – the aliased group
add_filter(tagspec, filter_spec)
Add a filter to the query.

Parameters:
- tagspec – the tag, which has to exist already as a key in self._filters
- filter_spec – the specifications for the filter; has to be a dictionary

Usage:

    qb = QueryBuilder()          # Instantiating the QueryBuilder instance
    qb.append(Node, tag='node')  # Appending a Node
    # let's put some filters:
    qb.add_filter('node', {'id': {'>': 12}})
    # 2 filters together:
    qb.add_filter('node', {'label': 'foo', 'uuid': {'like': 'ab%'}})
    # Now I am overriding the first filter I set:
    qb.add_filter('node', {'id': 13})
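The overriding behaviour in the last step above can be illustrated with plain dictionaries, assuming add_filter merges each new spec into the tag's existing filter dict (an illustrative sketch, not the actual QueryBuilder internals):

```python
# Per-tag filter storage, as a plain dict (illustrative).
filters = {'node': {}}          # created when the 'node' vertex is appended

def add_filter(tagspec, filter_spec):
    # Merge the new spec into the existing one; same keys are overridden.
    filters[tagspec].update(filter_spec)

add_filter('node', {'id': {'>': 12}})
add_filter('node', {'label': 'foo', 'uuid': {'like': 'ab%'}})
add_filter('node', {'id': 13})  # overrides the earlier 'id' filter

filters['node']['id']           # 13
filters['node']['label']        # 'foo'
```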
add_projection(tag_spec, projection_spec)
Adds a projection.

Parameters:
- tag_spec – a valid specification for a tag
- projection_spec – the specification for the projection. A projection is a list of dictionaries, with each dictionary containing key-value pairs where the key is a database entity (e.g. a column / an attribute) and the value is (optional) additional information on how to process this database entity.

If the given projection_spec is not a list, it will be expanded to a list. If the list items are not dictionaries but strings (no additional processing of the projected results desired), they will be expanded to dictionaries.

Usage:

    qb = QueryBuilder()
    qb.append(StructureData, tag='struc')
    # Will project the uuid and the kinds
    qb.add_projection('struc', ['uuid', 'attributes.kinds'])

The above example will project the uuid and the kinds attribute of all matching structures. There are (so far) two special keys.

The single star * will project the ORM instance:

    qb = QueryBuilder()
    qb.append(StructureData, tag='struc')
    # Will project the ORM instance
    qb.add_projection('struc', '*')
    print type(qb.first()[0])
    # >>> aiida.orm.data.structure.StructureData

The double star ** projects all possible projections of this entity:

    QueryBuilder().append(StructureData, tag='s', project='**').limit(1).dict()[0]['s'].keys()
    # >>> u'user_id, description, ctime, label, extras, mtime, id, attributes, dbcomputer_id, nodeversion, type, public, uuid'

Be aware that the result of ** depends on the backend implementation.
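The expansion rules described above (non-list becomes a one-element list, bare strings become dicts) can be sketched as a small normalization function. This is a hypothetical helper, not the actual QueryBuilder code:

```python
def normalize_projections(projection_spec):
    """Sketch of the documented expansion: wrap a non-list spec in a list,
    and expand bare strings to dicts with no extra processing info."""
    if not isinstance(projection_spec, (list, tuple)):
        projection_spec = [projection_spec]
    return [item if isinstance(item, dict) else {item: {}}
            for item in projection_spec]

normalize_projections('uuid')                        # [{'uuid': {}}]
normalize_projections(['uuid', 'attributes.kinds'])  # [{'uuid': {}}, {'attributes.kinds': {}}]
```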
all(batch_size=None)
Executes the full query with the order of the rows as returned by the backend. The order inside each row is given by the order of the vertices in the path and the order of the projections for each vertex in the path.

Parameters: batch_size (int) – the size of the batches in which to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter. Leave the default (None) if speed is not critical or if you don't know what you're doing!
Returns: a list of lists of all projected entities.
append(cls=None, type=None, tag=None, filters=None, project=None, subclassing=True, edge_tag=None, edge_filters=None, edge_project=None, outerjoin=False, **kwargs)
Any iterative procedure to build the path for a graph query needs to invoke this method to append to the path.

Parameters:
- cls – the AiiDA class (or backend class) defining the appended vertex
- type (str) – the type of the class, if cls is not given
- autotag (bool) – whether to automatically find a unique tag (default False).
- tag (str) – a unique tag. If none is given, I will create a unique tag myself.
- filters – filters to apply for this vertex. See add_filter(), the method invoked in the background, or the usage examples for details.
- project – projections to apply. See usage examples for details. More information also in add_projection().
- subclassing (bool) – whether to include subclasses of the given class (default True). E.g. specifying a Calculation as cls will include JobCalculations, InlineCalculations, etc.
- outerjoin (bool) – if True (default is False), will do a left outer join instead of an inner join
- edge_tag (str) – the tag that the edge will get. If nothing is specified (and there is a meaningful edge), the default is tag1--tag2, with tag1 being the entity joined from and tag2 being the entity joined to (this entity).
- edge_filters (str) – the filters to apply on the edge. Also here, details in add_filter().
- edge_project (str) – the projections of the edge. API details in add_projection().

A small usage example of how this can be invoked:

    qb = QueryBuilder()           # Instantiating empty querybuilder instance
    qb.append(cls=StructureData)  # First item is StructureData node
    # The next node in the path is a PwCalculation, with
    # the structure joined as an input
    qb.append(cls=PwCalculation, output_of=StructureData)

Returns: self
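The default edge tag described above can be reconstructed from the _EDGE_TAG_DELIM class attribute documented earlier. default_edge_tag is a hypothetical helper; the real naming happens inside append():

```python
_EDGE_TAG_DELIM = '--'   # value of the class attribute documented above

def default_edge_tag(tag_from, tag_to):
    """Sketch of the documented default edge tag, tag1--tag2."""
    return tag_from + _EDGE_TAG_DELIM + tag_to

default_edge_tag('struc', 'calc')   # 'struc--calc'
```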
count()
Counts the number of rows returned by the backend.

Returns: the number of rows as an integer.
dict(batch_size=None)
Executes the full query with the order of the rows as returned by the backend. The order inside each row is given by the order of the vertices in the path and the order of the projections for each vertex in the path.

Parameters: batch_size (int) – the size of the batches in which to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter. Leave the default (None) if speed is not critical or if you don't know what you're doing!
Returns: a list of dictionaries of all projected entities. Each dictionary consists of key-value pairs, where the key is the tag of the vertex and the value a dictionary of key-value pairs, where the key is the entity description (a column name or attribute path) and the value the value in the DB.

Usage:

    qb = QueryBuilder()
    qb.append(
        StructureData,
        tag='structure',
        filters={'uuid': {'==': myuuid}},
    )
    qb.append(
        Node,
        descendant_of='structure',
        project=['type', 'id'],  # returns type (string) and id (string)
        tag='descendant'
    )
    # Return the dictionaries:
    print "qb.iterdict()"
    for d in qb.iterdict():
        print '>>>', d

results in the following output:

    qb.iterdict()
    >>> {'descendant': {'type': u'calculation.job.quantumespresso.pw.PwCalculation.', 'id': 7716}}
    >>> {'descendant': {'type': u'data.remote.RemoteData.', 'id': 8510}}
distinct()
Asks for distinct rows, which is the same as asking the backend to remove duplicates. Does not execute the query!

If you want a distinct query:

    qb = QueryBuilder()
    # append stuff!
    qb.append(...)
    qb.append(...)
    ...
    qb.distinct().all()
    # or
    qb.distinct().dict()

Returns: self
except_if_input_to(calc_class)
Makes a counterquery based on the own path, selecting only entries that have been input to calc_class.

Parameters: calc_class – the calculation class to check against
Returns: self
first()
Executes the query asking for one instance. Use as follows:

    qb = QueryBuilder(**queryhelp)
    qb.first()

Returns: one row of results as a list
get_alias(tag)
In order for the user to continue a query, this utility function returns the aliased ormclass.

Parameters: tag – the tag for a vertex in the path
Returns: the alias given for that vertex
get_json_compatible_queryhelp()
Makes the queryhelp a JSON-compatible dictionary. In this way, the queryhelp can be stored in the database or in a JSON object, retrieved or shared, and used later. See this usage:

    qb = QueryBuilder(limit=3).append(StructureData, project='id').order_by({StructureData: 'id'})
    queryhelp = qb.get_json_compatible_queryhelp()
    # Now I could save this dictionary somewhere and use it later:
    qb2 = QueryBuilder(**queryhelp)
    # This is True if no change has been made to the database.
    # Note that such a comparison can only be True if the order of results is enforced
    qb.all() == qb2.all()

Returns: the JSON-compatible queryhelp
get_query()
Instantiates and manipulates a sqlalchemy.orm.Query instance if this is needed. First, I check whether the query instance is still valid by hashing the queryhelp. In this way, if a user asks for the same query twice, I am not recreating an instance.

Returns: an instance of sqlalchemy.orm.Query that is specific to the backend used.

get_used_tags(vertices=True, edges=True)
Returns a list of all the vertices that are being used. Some parameters allow to select only subsets:

Parameters:
- vertices (bool) – defaults to True. If True, adds the tags of vertices to the returned list.
- edges (bool) – defaults to True. If True, adds the tags of edges to the returned list.

Returns: a list of all tags, including (if there are any) also the tags given to the edges.
inject_query(query)
Manipulate the query and inject it back. This can be done to add custom filters using SQLA.

Parameters: query – a sqlalchemy.orm.Query instance
iterall(batch_size=100)
Same as all(), but returns a generator. Be aware that this is only safe if no commit will take place during this transaction. You might also want to read the SQLAlchemy documentation on http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.yield_per

Parameters: batch_size (int) – the size of the batches in which to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter.
Returns: a generator of lists
iterdict(batch_size=100)
Same as dict(), but returns a generator. Be aware that this is only safe if no commit will take place during this transaction. You might also want to read the SQLAlchemy documentation on http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.yield_per

Parameters: batch_size (int) – the size of the batches in which to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter.
Returns: a generator of dictionaries
limit(limit)
Set the limit (number of rows to return).

Parameters: limit (int) – the number of rows to return
offset(offset)
Set the offset. If offset is set, that many rows are skipped before returning. offset = 0 is the same as omitting the offset. If both offset and limit appear, then offset rows are skipped before starting to count the limit rows that are returned.

Parameters: offset (int) – the number of rows to skip
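The combined effect of offset and limit matches plain list slicing; a sketch of the documented semantics on a stand-in result list:

```python
rows = list(range(100))   # stand-in for the rows a query would return
offset, limit = 10, 5
# offset rows are skipped first, then limit rows are counted:
page = rows[offset:offset + limit]
page                      # [10, 11, 12, 13, 14]
```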
one()
Executes the query asking for exactly one result. Will raise an exception if this is not the case.

Raises:
- MultipleObjectsError – if more than one row can be returned
- NotExistent – if no result was found
order_by(order_by)
Set the entity to order by.

Parameters: order_by – this is a list of items, where each item is a dictionary that specifies what to sort for an entity. In each dictionary in that list, keys represent valid tags of entities (tables), and values are lists of columns.

Usage:

    # Sorting by id (ascending):
    qb = QueryBuilder()
    qb.append(Node, tag='node')
    qb.order_by({'node': ['id']})

    # or
    # Sorting by id (ascending):
    qb = QueryBuilder()
    qb.append(Node, tag='node')
    qb.order_by({'node': [{'id': {'order': 'asc'}}]})

    # for descending order:
    qb = QueryBuilder()
    qb.append(Node, tag='node')
    qb.order_by({'node': [{'id': {'order': 'desc'}}]})

    # or (shorter)
    qb = QueryBuilder()
    qb.append(Node, tag='node')
    qb.order_by({'node': [{'id': 'desc'}]})
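The shorthands shown above can be normalized to the full form with a small sketch, assuming the documented equivalences ('id' means ascending, {'id': 'desc'} means {'id': {'order': 'desc'}}). normalize_order_item is a hypothetical helper, not the actual implementation:

```python
def normalize_order_item(item):
    """Expand the documented order_by shorthands to the full form."""
    if isinstance(item, str):
        return {item: {'order': 'asc'}}
    out = {}
    for column, spec in item.items():
        if isinstance(spec, str):
            out[column] = {'order': spec}
        else:
            out[column] = {'order': spec.get('order', 'asc')}
    return out

normalize_order_item('id')            # {'id': {'order': 'asc'}}
normalize_order_item({'id': 'desc'})  # {'id': {'order': 'desc'}}
```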
class aiida.orm.Workflow(**kwargs)
Bases: aiida.orm.implementation.general.workflow.AbstractWorkflow
__init__(**kwargs)
Initializes the Workflow superclass, stores the instance in the DB and, if needed, stores the starting parameters.

If initialized with a uuid, the Workflow is loaded from the DB; if not, a new workflow is generated and added to the DB following the stack framework. This means that only modules inside aiida.workflows are allowed to implement the workflow super calls and be stored. The caller names, modules and files are retrieved from the stack.

Parameters:
- uuid – a string with the uuid of the object to be loaded.
- params – a dictionary of storable objects to initialize the specific workflow

Raises: NotExistent – if there is no entry of the desired workflow kind with the given uuid.
-
__module__
= 'aiida.orm.implementation.django.workflow'¶
-
_increment_version_number_db
()[source]¶ This function increments the version number in the DB. This should be called every time you need to increment the version (e.g. on adding an extra or attribute).
-
_update_db_description_field
(field_value)[source]¶ Safety method to store the description of the workflow
Returns: a string
-
_update_db_label_field
(field_value)[source]¶ Safety method to store the label of the workflow
Returns: a string
-
add_attribute
(_name, _value)[source]¶ Add one attribute to the Workflow. If another attribute is present with the same name, it will be overwritten. :param name: a string with the attribute name to store :param value: a storable object to store
-
add_attributes
(_params)[source]¶ Add a set of attributes to the Workflow. If another attribute is present with the same name it will be overwritten. :param name: a string with the attribute name to store :param value: a storable object to store
-
add_result
(_name, _value)[source]¶ Add one result to the Workflow. If another result is present with the same name it will be overwritten. :param name: a string with the result name to store :param value: a storable object to store
-
add_results
(_params)[source]¶ Add a set of results to the Workflow. If another result is present with the same name it will be overwritten. :param name: a string with the result name to store :param value: a storable object to store
-
append_to_report
(text)[source]¶ Adds text to the Workflow report.
Note: previously, if the workflow was a subworkflow of any other Workflow, this method called the parent append_to_report method; this is no longer the case.
-
clear_report
()[source]¶ Wipe the Workflow report. If the workflow is a subworkflow of any other Workflow, this method calls the parent
clear_report
method.
-
ctime
¶
-
dbworkflowinstance
¶ Get the DbWorkflow object stored in the super class.
Returns: DbWorkflow object from the database
-
description
¶ Get the description of the workflow.
Returns: a string
-
get_attribute
(_name)[source]¶ Get one Workflow attribute :param name: a string with the attribute name to retrieve :return: a dictionary of storable objects
-
get_parameter
(_name)[source]¶ Get one Workflow parameter :param name: a string with the parameter name to retrieve :return: a dictionary of storable objects
-
get_report
()[source]¶ Return the Workflow report.
Note: previously, if the workflow was a subworkflow of any other Workflow, this method called the parent get_report method. This is no longer the case. Returns: a list of strings
-
get_result
(_name)[source]¶ Get one Workflow result :param name: a string with the result name to retrieve :return: a dictionary of storable objects
-
get_state
()[source]¶ Get the Workflow’s state :return: a state from wf_states in aiida.common.datastructures
-
get_step
(step_method)[source]¶ Retrieves by name a step from the Workflow. :param step_method: a string with the name of the step to retrieve or a method :raise: ObjectDoesNotExist: if there is no step with the specific name. :return: a DbWorkflowStep object.
-
get_steps
(state=None)[source]¶ Retrieves all the steps from a specific Workflow, with the possibility to limit the list to steps in a specific state. :param state: a state from wf_states in aiida.common.datastructures :return: a list of DbWorkflowStep objects.
-
classmethod
get_subclass_from_dbnode
(wf_db)[source]¶ Loads the workflow object and reloads the python script in memory with the importlib library; the main class is searched for and then loaded. :param wf_db: a specific DbWorkflowNode object representing the Workflow :return: a Workflow subclass from the specific source code
-
classmethod
get_subclass_from_pk
(pk)[source]¶ Calls the
get_subclass_from_dbnode
selecting the DbWorkflowNode from the input pk. :param pk: a primary key index for the DbWorkflowNode :return: a Workflow subclass from the specific source code
-
classmethod
get_subclass_from_uuid
(uuid)[source]¶ Calls the
get_subclass_from_dbnode
selecting the DbWorkflowNode from the input uuid. :param uuid: a uuid for the DbWorkflowNode :return: a Workflow subclass from the specific source code
-
info
()[source]¶ Returns an array with all the information about the module, file and class needed to locate the workflow source code
-
is_subworkflow
()[source]¶ Return True if this is a subworkflow (i.e., if it has a parent), False otherwise.
-
label
¶ Get the label of the workflow.
Returns: a string
-
logger
¶ Get the logger of the Workflow object, so that it also logs to the DB.
Returns: LoggerAdapter object, that works like a logger, but also has the ‘extra’ embedded
-
pk
¶ Returns the DbWorkflow pk
-
classmethod
query
(*args, **kwargs)[source]¶ Map to the aiidaobjects manager of the DbWorkflow, that returns Workflow objects instead of DbWorkflow entities.
-
set_params
(params, force=False)[source]¶ Adds parameters to the Workflow that are both stored and used every time the workflow engine re-initializes the specific workflow to launch new methods.
-
set_state
(state)[source]¶ Set the Workflow’s state :param state: a state from wf_states in aiida.common.datastructures
-
uuid
¶ Returns the DbWorkflow uuid
-
-
class
aiida.orm.
User
(**kwargs)[source]¶ Bases:
aiida.orm.implementation.general.user.AbstractUser
-
__abstractmethods__
= frozenset([])¶
-
__module__
= 'aiida.orm.implementation.django.user'¶
-
_abc_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache_version
= 39¶
-
_abc_registry
= <_weakrefset.WeakSet object>¶
-
date_joined
¶
-
email
¶
-
first_name
¶
-
id
¶
-
institution
¶
-
is_active
¶
-
is_staff
¶
-
is_superuser
¶
-
last_login
¶
-
last_name
¶
-
pk
¶
-
classmethod
search_for_users
(**kwargs)[source]¶ Search for a user matching the passed keys.
Parameters: kwargs – The keys to search for the user with. Returns: A list of users matching the search criteria.
-
to_be_stored
¶
-
-
class
aiida.orm.
Group
(**kwargs)[source]¶ Bases:
aiida.orm.implementation.general.group.AbstractGroup
-
__abstractmethods__
= frozenset([])¶
-
__init__
(**kwargs)[source]¶ Create a new group. Either pass a dbgroup parameter, to reload a group from the DB (in which case no further parameters are allowed), or pass the parameters for the Group creation.
Parameters: - dbgroup – the dbgroup object, if you want to reload the group from the DB rather than creating a new one.
- name – The group name, required on creation
- description – The group description (by default, an empty string)
- user – The owner of the group (by default, the automatic user)
- type_string – a string identifying the type of group (by default, an empty string, indicating a user-defined group).
-
__int__
()[source]¶ Convert the class to an integer. This is needed to allow querying with Django. Be careful, though, not to pass it to a wrong field! This only returns the local DB primary key (pk) value.
Returns: the integer pk of the node or None if not stored.
-
__module__
= 'aiida.orm.implementation.django.group'¶
-
_abc_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache_version
= 39¶
-
_abc_registry
= <_weakrefset.WeakSet object>¶
-
add_nodes
(nodes)[source]¶ Add a node or a set of nodes to the group.
Note: The group must be already stored. Note: each of the nodes passed to add_nodes must be already stored. Parameters: nodes – a Node or DbNode object to add to the group, or a list of Nodes or DbNodes to add.
-
dbgroup
¶
-
description
¶
-
id
¶
-
is_stored
¶
-
name
¶
-
nodes
¶
-
pk
¶
-
classmethod
query
(name=None, type_string='', pk=None, uuid=None, nodes=None, user=None, node_attributes=None, past_days=None, name_filters=None, **kwargs)[source]¶ Query for groups.
Note: By default, query for user-defined groups only (type_string==""). If you want to query for all types of groups, pass type_string=None. If you want to query for a specific type of group, pass a specific string as the type_string argument.
Parameters: - name – the name of the group
- nodes – a node or list of nodes that belongs to the group (alternatively, you can also pass a DbNode or list of DbNodes)
- pk – the pk of the group
- uuid – the uuid of the group
- type_string – the string for the type of node; by default, look only for user-defined groups (see note above).
- user – by default, query for groups of all users; if specified, must be a DbUser object, or a string for the user email.
- past_days – by default, query for all groups; if specified, query the groups created in the last past_days. Must be a datetime object.
- node_attributes – if not None, must be a dictionary with format {k: v}. It will filter and return only groups where there is at least one node with an attribute with key=k and value=v. Different keys of the dictionary are joined with AND (that is, the group should satisfy all requirements). v can be a base data type (str, bool, int, float, …). If it is a list or iterable, the condition is checked so that for each of the values of the iterable there is at least one node in the group with key=k and that value.
- kwargs – any other filter to be passed to DbGroup.objects.filter
- Example: if node_attributes = {'elements': ['Ba', 'Ti'], 'md5sum': 'xxx'}, it will find groups that contain the node with md5sum = 'xxx', and moreover contain at least one node for element 'Ba' and one node for element 'Ti'.
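The AND/iterable semantics of node_attributes described above can be modeled in plain Python. This is an illustrative sketch, not AiiDA code; plain attribute dicts stand in for the nodes of a group:

```python
def group_matches(node_attrs_list, node_attributes):
    """Model of the `node_attributes` filter: `node_attrs_list` holds one
    attribute dict per node in a group. For each key k, if v is a scalar
    the group must contain a node with attribute k == v; if v is a list,
    for EACH value there must be some node carrying it. Keys are ANDed."""
    for k, v in node_attributes.items():
        wanted = v if isinstance(v, (list, tuple)) else [v]
        for value in wanted:
            if not any(attrs.get(k) == value for attrs in node_attrs_list):
                return False
    return True

group = [{'elements': 'Ba'}, {'elements': 'Ti'}, {'md5sum': 'xxx'}]
print(group_matches(group, {'elements': ['Ba', 'Ti'], 'md5sum': 'xxx'}))  # True
print(group_matches(group, {'elements': ['Ba', 'O']}))                    # False
```

This mirrors the example above: the group must satisfy every key, and every value of an iterable, each via at least one node.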
-
remove_nodes
(nodes)[source]¶ Remove a node or a set of nodes from the group.
Note: The group must be already stored. Note: each of the nodes passed to remove_nodes must be already stored. Parameters: nodes – a Node or DbNode object to remove from the group, or a list of Nodes or DbNodes to remove.
-
type_string
¶
-
user
¶
-
uuid
¶
-
-
class
aiida.orm.
Calculation
(**kwargs)[source]¶ Bases:
aiida.orm.implementation.general.calculation.AbstractCalculation
,aiida.orm.implementation.django.node.Node
-
__abstractmethods__
= frozenset([])¶
-
__module__
= 'aiida.orm.implementation.django.calculation'¶
-
_abc_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache_version
= 39¶
-
_abc_registry
= <_weakrefset.WeakSet object>¶
-
_logger
= <logging.Logger object>¶
-
_plugin_type_string
= 'calculation.Calculation.'¶
-
_query_type_string
= 'calculation.'¶
-
-
class
aiida.orm.
JobCalculation
(**kwargs)[source] Bases:
aiida.orm.implementation.general.calculation.job.AbstractJobCalculation
,aiida.orm.implementation.django.calculation.Calculation
-
__abstractmethods__
= frozenset([])
-
__module__
= 'aiida.orm.implementation.django.calculation.job'
-
_abc_cache
= <_weakrefset.WeakSet object>
-
_abc_negative_cache
= <_weakrefset.WeakSet object>
-
_abc_negative_cache_version
= 39
-
_abc_registry
= <_weakrefset.WeakSet object>
-
classmethod
_list_calculations_old
(states=None, past_days=None, group=None, group_pk=None, all_users=False, pks=[], relative_ctime=True)[source] Return a string with a description of the AiiDA calculations.
Todo
does not support the query for the IMPORTED state (since it checks the state in the Attributes, not in the DbCalcState table). Decide which is the correct logic and implement the correct query.
Parameters: - states – a list of strings with states. If set, print only the calculations in the given states, otherwise show all. Default = None.
- past_days – If specified, show only calculations that were created in the given number of past days.
- group – If specified, show only calculations belonging to a
user-defined group with the given name.
Can use colons to separate the group name from the type,
as specified in
aiida.orm.group.Group.get_from_string()
method. - group_pk – If specified, show only calculations belonging to a user-defined group with the given PK.
- pks – if specified, must be a list of integers, and only
calculations within that list are shown. Otherwise, all
calculations are shown.
If specified, sets state to None and ignores the
value of the
past_days
option. - relative_ctime – if true, prints the creation time relative to now (like "2 days ago"). Default = True
- all_users – if True, list calculations belonging to all users. Default = False
Returns: a string with description of calculations.
-
_logger
= <logging.Logger object>
-
_plugin_type_string
= 'calculation.job.JobCalculation.'
-
_query_type_string
= 'calculation.job.'
-
_set_state
(state)[source] Set the state of the calculation.
Set it in the DbCalcState to also have the uniqueness check. Moreover (except for the IMPORTED state), also store it in the ‘state’ attribute, which is useful to know the state also after importing, and for faster querying.
Todo
Add further checks to enforce that the states are set in order?
Parameters: state – a string with the state. This must be a valid string from aiida.common.datastructures.calc_states.
Raise: ModificationNotAllowed if the given state was already set.
-
get_state
(from_attribute=False)[source] Get the state of the calculation.
Note
this method returns the NOTFOUND state if no state is found in the DB.
Note
the ‘most recent’ state is obtained using the logic in the
aiida.common.datastructures.sort_states
function.Todo
Understand if the state returned when no state entry is found in the DB is the best choice.
Parameters: from_attribute – if set to True, read it from the attributes (the attribute is also set with set_state, unless the state is set to IMPORTED; in this way we can also see the state before storing). Returns: a string. If from_attribute is True and no attribute is found, return None. If from_attribute is False and no entry is found in the DB, return the “NOTFOUND” state.
-
-
class
aiida.orm.
InlineCalculation
(**kwargs)[source]¶ Bases:
aiida.orm.implementation.django.calculation.Calculation
Subclass used for calculations that are automatically generated using the make_inline wrapper/decorator.
This is used to automatically create a calculation node for a simple calculation
-
__abstractmethods__
= frozenset([])¶
-
__module__
= 'aiida.orm.implementation.general.calculation.inline'¶
-
_abc_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache_version
= 39¶
-
_abc_registry
= <_weakrefset.WeakSet object>¶
-
_cacheable
= True¶
-
_logger
= <logging.Logger object>¶
-
_plugin_type_string
= 'calculation.inline.InlineCalculation.'¶
-
_query_type_string
= 'calculation.inline.'¶
-
-
aiida.orm.
CalculationFactory
(module, from_abstract=False)[source] Return a suitable JobCalculation subclass.
Parameters: - module – a valid string recognized as a Calculation plugin
- from_abstract – A boolean. If False (default), look only at subclasses of JobCalculation, not at the base Calculation class. If True, check for valid strings for plugins of the Calculation base class.
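The lookup-by-plugin-string contract of CalculationFactory can be sketched with a minimal registry. The class names and the registry below are hypothetical; real AiiDA resolves plugin strings through its plugin-loading machinery:

```python
class JobCalculation(object):
    """Stand-in for the AiiDA base class."""

class PwCalculation(JobCalculation):
    """Stand-in for a hypothetical calculation plugin class."""

# Hypothetical registry mapping plugin strings to classes.
_CALC_PLUGINS = {'quantumespresso.pw': PwCalculation}

def calculation_factory(module):
    """Return the JobCalculation subclass registered under `module`,
    mirroring the contract documented for CalculationFactory."""
    try:
        return _CALC_PLUGINS[module]
    except KeyError:
        raise ValueError('no calculation plugin registered for %r' % module)

print(calculation_factory('quantumespresso.pw').__name__)  # PwCalculation
```

An unknown plugin string raises an error rather than returning a class, matching the "valid string recognized as a Calculation plugin" requirement.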
-
aiida.orm.
DataFactory
(module)[source] Return a suitable Data subclass.
-
aiida.orm.
WorkflowFactory
(module)[source] Return a suitable Workflow subclass.
-
aiida.orm.
load_group
(group_id=None, pk=None, uuid=None, query_with_dashes=True)[source]¶ Load a group by its pk or uuid
Parameters: - group_id – pk (integer) or uuid (string) of a group
- pk – pk of a group
- uuid – uuid of a group, or the beginning of the uuid
- query_with_dashes (bool) – allow to query for a uuid with dashes (default=True)
Returns: the requested group if existing and unique
Raises: - InputValidationError – if none or more than one of the arguments are supplied
- TypeError – if the wrong types are provided
- NotExistent – if no matching Node is found.
- MultipleObjectsError – if more than one Node was found
-
aiida.orm.
load_node
(node_id=None, pk=None, uuid=None, parent_class=None, query_with_dashes=True)[source]¶ Returns an AiiDA node given its PK or UUID.
Parameters: - node_id – PK (integer) or UUID (string) or a node
- pk – PK of a node
- uuid – UUID of a node, or the beginning of the uuid
- parent_class – if specified, checks whether the node loaded is a subclass of parent_class
- query_with_dashes (bool) – if a uuid is passed, allows it to be given with or without dashes. Default=True
- return_node (bool) – lets the function return the AiiDA node referred by the input. Default=False
Returns: the required AiiDA node if existing, unique, and (sub)instance of parent_class
Raises: - InputValidationError – if none or more than one of parameters is supplied
- TypeError – if the wrong types are provided
- NotExistent – if no matching Node is found.
- MultipleObjectsError – If more than one Node was found
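The argument validation described above (exactly one identifier, with node_id routed to pk or uuid depending on its type) can be modeled as follows. This is a sketch of the documented contract, not the actual load_node implementation:

```python
def resolve_identifier(node_id=None, pk=None, uuid=None):
    """Model of load_node's validation: exactly one of node_id, pk, uuid
    must be supplied; an integer node_id is treated as a PK and a string
    node_id as a UUID, otherwise a TypeError is raised."""
    given = [x is not None for x in (node_id, pk, uuid)]
    if sum(given) != 1:
        raise ValueError('one and only one of node_id, pk, uuid must be supplied')
    if node_id is not None:
        if isinstance(node_id, int):
            pk = node_id
        elif isinstance(node_id, str):
            uuid = node_id
        else:
            raise TypeError('node_id must be an integer PK or a string UUID')
    return ('pk', pk) if pk is not None else ('uuid', uuid)

print(resolve_identifier(node_id=42))       # ('pk', 42)
print(resolve_identifier(uuid='deadbeef'))  # ('uuid', 'deadbeef')
```

Supplying none or more than one identifier raises, matching the InputValidationError behavior documented above.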
-
aiida.orm.
load_workflow
(wf_id=None, pk=None, uuid=None)[source]¶ Return an AiiDA workflow given PK or UUID.
Parameters: - wf_id – PK (integer) or UUID (string) or UUID instance or a workflow
- pk – PK of a workflow
- uuid – UUID of a workflow
Returns: an AiiDA workflow
Raises: ValueError – if none or more than one of the parameters is supplied, or if the type of wf_id is neither string nor integer
Subpackages¶
Submodules¶
-
class
aiida.orm.autogroup.
Autogroup
[source]¶ Bases:
object
An object used for the autogrouping of objects. The autogrouping is checked by the Node.store() method. In the store(), the Node will check if current_autogroup is != None. If so, it will call Autogroup.is_to_be_grouped, and decide whether to put it in a group. Such autogroups are going to be of the VERDIAUTOGROUP_TYPE.
The exclude/include lists can have the value ‘all’ if you want to include/exclude all classes. Otherwise, they are lists of strings like: calculation.quantumespresso.pw, data.array.kpoints, … i.e.: a string identifying the base class, followed by the path to the class, as in the Calculation/Data factories.
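A minimal model of the include/exclude filtering described above. This is illustrative only; the real Autogroup also handles subclass matching via the *_with_subclasses variants:

```python
def is_to_be_grouped(class_string, include, exclude):
    """Model of Autogroup's decision: `include`/`exclude` are either the
    string 'all' or lists of plugin strings such as
    'calculation.quantumespresso.pw'. A class is grouped when it matches
    the include rules and does not match the exclude rules."""
    def matches(rules):
        if rules == 'all':
            return True
        return class_string in rules
    return matches(include) and not matches(exclude)

print(is_to_be_grouped('data.array.kpoints', 'all', []))                        # True
print(is_to_be_grouped('data.array.kpoints', 'all', ['data.array.kpoints']))    # False
```

With include='all' everything is grouped unless explicitly excluded, which matches the 'all' semantics documented above.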
-
__dict__
= dict_proxy({'get_exclude_with_subclasses': <function get_exclude_with_subclasses>, 'set_exclude': <function set_exclude>, '__module__': 'aiida.orm.autogroup', 'get_group_name': <function get_group_name>, 'set_include_with_subclasses': <function set_include_with_subclasses>, '_validate': <function _validate>, 'get_include': <function get_include>, 'set_include': <function set_include>, 'set_group_name': <function set_group_name>, 'is_to_be_grouped': <function is_to_be_grouped>, 'get_exclude': <function get_exclude>, '__dict__': <attribute '__dict__' of 'Autogroup' objects>, 'set_exclude_with_subclasses': <function set_exclude_with_subclasses>, '__weakref__': <attribute '__weakref__' of 'Autogroup' objects>, '__doc__': "\n An object used for the autogrouping of objects.\n The autogrouping is checked by the Node.store() method.\n In the store(), the Node will check if current_autogroup is != None.\n If so, it will call Autogroup.is_to_be_grouped, and decide whether to put it in a group.\n Such autogroups are going to be of the VERDIAUTOGROUP_TYPE.\n\n The exclude/include lists, can have values 'all' if you want to include/exclude all classes.\n Otherwise, they are lists of strings like: calculation.quantumespresso.pw, data.array.kpoints, ...\n i.e.: a string identifying the base class, than the path to the class as in Calculation/Data -Factories\n ", 'get_include_with_subclasses': <function get_include_with_subclasses>})¶
-
__module__
= 'aiida.orm.autogroup'¶
-
__weakref__
¶ list of weak references to the object (if defined)
-
_validate
(param, is_exact=True)[source]¶ Used internally to verify the sanity of exclude, include lists
-
get_exclude_with_subclasses
()[source]¶ Return the list of classes to exclude from autogrouping. Will also exclude their derived subclasses
-
get_group_name
()[source]¶ Get the name of the group. If no group name was set, it will set a default one by itself.
-
get_include_with_subclasses
()[source]¶ Return the list of classes to include in the autogrouping. Will also include their derived subclasses
-
is_to_be_grouped
(the_class)[source]¶ Return whether the given class has to be included in the autogroup according to include/exclude list
Return (bool): True if the_class is to be included in the autogroup
-
-
class
aiida.orm.backend.
Backend
[source]¶ Bases:
object
The public interface that defines a backend factory that creates backend specific concrete objects.
-
__abstractmethods__
= frozenset(['log'])¶
-
__dict__
= dict_proxy({'_abc_cache': <_weakrefset.WeakSet object>, '__module__': 'aiida.orm.backend', '__metaclass__': <class 'abc.ABCMeta'>, 'log': <abc.abstractproperty object>, '_abc_registry': <_weakrefset.WeakSet object>, '__abstractmethods__': frozenset(['log']), '_abc_negative_cache_version': 95, '_abc_negative_cache': <_weakrefset.WeakSet object>, '__dict__': <attribute '__dict__' of 'Backend' objects>, '__weakref__': <attribute '__weakref__' of 'Backend' objects>, '__doc__': '\n The public interface that defines a backend factory that creates backend\n specific concrete objects.\n '})¶
-
__metaclass__
¶ alias of
abc.ABCMeta
-
__module__
= 'aiida.orm.backend'¶
-
__weakref__
¶ list of weak references to the object (if defined)
-
_abc_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache_version
= 95¶
-
_abc_registry
= <_weakrefset.WeakSet object>¶
-
log
¶ Get an object that implements the logging utilities interface.
Returns: A concrete log utils object Return type: aiida.orm.log.Log
-
-
aiida.orm.backend.
construct
(backend_type=None)[source]¶ Construct a concrete backend instance based on the backend_type or use the global backend value if not specified.
Parameters: backend_type – Get a backend instance based on the specified type (or default) Returns: Backend
-
class
aiida.orm.computer.
Util
(backend)[source]¶ Bases:
aiida.orm.utils.BackendDelegateWithDefault
-
__abstractmethods__
= frozenset([])¶
-
__module__
= 'aiida.orm.computer'¶
-
_abc_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache_version
= 39¶
-
_abc_registry
= <_weakrefset.WeakSet object>¶
-
-
class
aiida.orm.importexport.
HTMLGetLinksParser
(filter_extension=None)[source]¶ Bases:
HTMLParser.HTMLParser
-
__init__
(filter_extension=None)[source]¶ If a filter_extension is passed, only links with an extension matching the given one will be returned.
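A self-contained sketch of this behavior using Python 3's html.parser. The class below is illustrative, not the actual HTMLGetLinksParser (which subclasses the Python 2 HTMLParser.HTMLParser):

```python
from html.parser import HTMLParser

class GetLinksParser(HTMLParser):
    """Collect href targets from <a> tags, keeping only those whose
    extension matches `filter_extension` (if one is given)."""
    def __init__(self, filter_extension=None):
        super().__init__()
        self.filter_extension = filter_extension
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != 'a':
            return
        for name, value in attrs:
            if name == 'href' and value is not None:
                if (self.filter_extension is None
                        or value.endswith('.' + self.filter_extension)):
                    self.links.append(value)

parser = GetLinksParser(filter_extension='aiida')
parser.feed('<a href="db1.aiida">db</a> <a href="readme.txt">txt</a>')
print(parser.links)  # ['db1.aiida']
```

This is the mechanism get_valid_import_links (documented below) relies on: parse the HTML of a page and keep only links ending in the wanted extension.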
-
__module__
= 'aiida.orm.importexport'¶
-
-
class
aiida.orm.importexport.
MyWritingZipFile
(zipfile, fname)[source]¶ Bases:
object
-
__dict__
= dict_proxy({'write': <function write>, '__module__': 'aiida.orm.importexport', '__weakref__': <attribute '__weakref__' of 'MyWritingZipFile' objects>, '__exit__': <function __exit__>, '__dict__': <attribute '__dict__' of 'MyWritingZipFile' objects>, 'close': <function close>, '__enter__': <function __enter__>, 'open': <function open>, '__doc__': None, '__init__': <function __init__>})¶
-
__module__
= 'aiida.orm.importexport'¶
-
__weakref__
¶ list of weak references to the object (if defined)
-
-
class
aiida.orm.importexport.
ZipFolder
(zipfolder_or_fname, mode=None, subfolder='.', use_compression=True, allowZip64=True)[source]¶ Bases:
object
To improve: if zipfile is closed, do something (e.g. add explicit open method, rename open to openfile, set _zipfile to None, …)
-
__dict__
= dict_proxy({'__module__': 'aiida.orm.importexport', '__exit__': <function __exit__>, 'open': <function open>, '__enter__': <function __enter__>, '_get_internal_path': <function _get_internal_path>, 'pwd': <property object>, '__weakref__': <attribute '__weakref__' of 'ZipFolder' objects>, '__init__': <function __init__>, '__dict__': <attribute '__dict__' of 'ZipFolder' objects>, 'close': <function close>, 'insert_path': <function insert_path>, '__doc__': '\n To improve: if zipfile is closed, do something\n (e.g. add explicit open method, rename open to openfile,\n set _zipfile to None, ...)\n ', 'get_subfolder': <function get_subfolder>})¶
-
__init__
(zipfolder_or_fname, mode=None, subfolder='.', use_compression=True, allowZip64=True)[source]¶ Parameters: - zipfolder_or_fname – either another ZipFolder instance, of which you want to get a subfolder, or a filename to create.
- mode – the file mode; see the zipfile.ZipFile docs for valid strings. Note: can be specified only if zipfolder_or_fname is a string (the filename to generate)
- subfolder – the subfolder that specified the “current working directory” in the zip file. If zipfolder_or_fname is a ZipFolder, subfolder is a relative path from zipfolder_or_fname.subfolder
- use_compression – either True, to compress files in the Zip, or False if you just want to pack them together without compressing. It is ignored if zipfolder_or_fname is a ZipFolder instance.
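The idea behind ZipFolder, treating a zip archive as a folder you can write files into, can be sketched with the standard zipfile module. This is illustrative only, not the ZipFolder implementation:

```python
import io
import zipfile

# Write a file under a subfolder path inside an in-memory zip archive,
# with compression enabled (the use_compression=True case above).
buf = io.BytesIO()
with zipfile.ZipFile(buf, mode='w', compression=zipfile.ZIP_DEFLATED,
                     allowZip64=True) as zf:
    zf.writestr('subfolder/data.json', '{"x": 1}')

# Reopen the archive and list its contents.
with zipfile.ZipFile(buf) as zf:
    print(zf.namelist())  # ['subfolder/data.json']
```

The subfolder argument of ZipFolder plays the role of the 'subfolder/' prefix here: it defines the current working directory inside the archive.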
-
__module__
= 'aiida.orm.importexport'¶
-
__weakref__
¶ list of weak references to the object (if defined)
-
pwd
¶
-
-
aiida.orm.importexport.
deserialize_field
(k, v, fields_info, import_unique_ids_mappings, foreign_ids_reverse_mappings)[source]¶
-
aiida.orm.importexport.
export
(what, outfile='export_data.aiida.tar.gz', overwrite=False, silent=False, **kwargs)[source]¶ Export the DB entries passed in the ‘what’ list on a file.
Todo: limit the export to finished or failed calculations.
Parameters: - what – a list of Django database entries; they can belong to different models.
- also_parents – if True, also all the parents are stored (from the transitive closure)
- also_calc_outputs – if True, any output of a calculation is also exported
- outfile – the filename of the file on which to export
- overwrite – if True, overwrite the output file without asking. if False, raise an IOError in this case.
- silent – suppress debug print
Raises: IOError – if overwrite==False and the filename already exists.
-
aiida.orm.importexport.
export_tree
(what, folder, also_parents=True, also_calc_outputs=True, allowed_licenses=None, forbidden_licenses=None, silent=False, use_querybuilder_ancestors=False)[source]¶ Export the DB entries passed in the ‘what’ list to a file tree.
Todo: limit the export to finished or failed calculations.
Parameters: - what – a list of Django database entries; they can belong to different models.
- folder – a
Folder
object - also_parents – if True, also all the parents are stored (from the transitive closure)
- also_calc_outputs – if True, any output of a calculation is also exported
- allowed_licenses – a list or a function. If a list, then checks whether all licenses of Data nodes are in the list. If a function, then calls function for licenses of Data nodes expecting True if license is allowed, False otherwise.
- forbidden_licenses – a list or a function. If a list, then checks whether all licenses of Data nodes are in the list. If a function, then calls function for licenses of Data nodes expecting True if license is allowed, False otherwise.
- silent – suppress debug prints
Raises: LicensingException – if any node is licensed under forbidden license
-
aiida.orm.importexport.
export_zip
(what, outfile='testzip', overwrite=False, silent=False, use_compression=True, **kwargs)[source]¶
-
aiida.orm.importexport.
fill_in_query
(partial_query, originating_entity_str, current_entity_str, tag_suffixes=[], entity_separator='_')[source]¶ This function recursively constructs the QueryBuilder queries that are needed by the SQLA export function. To construct such queries, the relationship dictionary is consulted (which shows how to reference different AiiDA entities in QueryBuilder). To find the dependencies of the relationships of the exported data, get_all_fields_info_sqla (which describes the exported schema and its dependencies) is consulted.
-
aiida.orm.importexport.
get_all_fields_info
()[source]¶ This method returns a description of the field names that should be used to describe the entity properties. Apart from the listing of the fields per property, it also shows the dependencies among different entities (and on which fields). The unique identifiers used per entity are also shown/returned.
-
aiida.orm.importexport.
get_all_parents_dj
(node_pks)[source]¶ Get all the parents of given nodes :param node_pks: one node pk or an iterable of node pks :return: a list of aiida objects with all the parents of the nodes
-
aiida.orm.importexport.
get_valid_import_links
(url)[source]¶ Open the given URL, parse the HTML and return a list of valid links where the link file has a .aiida extension.
-
aiida.orm.importexport.
import_data_dj
(in_path, ignore_unknown_nodes=False, silent=False)[source]¶ Import an exported AiiDA environment into the AiiDA database. If ‘in_path’ is a folder, calls extract_tree; otherwise, tries to detect the compression format (zip, tar.gz, tar.bz2, …) and calls the correct function.
Parameters: in_path – the path to a file or folder that can be imported in AiiDA
-
aiida.orm.importexport.
import_data_sqla
(in_path, ignore_unknown_nodes=False, silent=False)[source]¶ Import an exported AiiDA environment into the AiiDA database. If ‘in_path’ is a folder, calls extract_tree; otherwise, tries to detect the compression format (zip, tar.gz, tar.bz2, …) and calls the correct function.
Parameters: in_path – the path to a file or folder that can be imported in AiiDA
-
aiida.orm.importexport.
schema_to_entity_names
(class_string)[source]¶ Mapping from class paths to entity names (used by the SQLA import/export). This could have been written much more simply if it were only for SQLA, but there is an attempt to make the SQLA import/export code usable for Django too.
-
aiida.orm.importexport.
serialize_dict
(datadict, remove_fields=[], rename_fields={}, track_conversion=False)[source]¶ Serialize the dict using the serialize_field function to serialize each field.
Parameters: - remove_fields –
a list of strings. If a field with a key inside the remove_fields list is found, it is removed from the dict.
This is only used at level 0; no removal is possible at deeper levels.
- rename_fields –
a dictionary in the format
{"oldname": "newname"}
.If the “oldname” key is found, it is replaced with the “newname” string in the output dictionary.
This is only used at level 0; no renaming is possible at deeper levels.
- track_conversion – if True, a tuple is returned, where the first element is the serialized dictionary, and the second element is a dictionary with the information on the serialized fields.
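The level-0 remove/rename behavior can be modeled in a few lines. This is a sketch of the documented contract only, not the actual serialize_dict (which also serializes each field via serialize_field):

```python
def remove_and_rename(datadict, remove_fields=(), rename_fields=None):
    """Model of serialize_dict's level-0 handling: drop keys listed in
    `remove_fields`, rename keys via `rename_fields`; nested dicts are
    passed through untouched, since removal/renaming only acts at level 0."""
    rename_fields = rename_fields or {}
    out = {}
    for k, v in datadict.items():
        if k in remove_fields:
            continue
        out[rename_fields.get(k, k)] = v
    return out

data = {'id': 1, 'oldname': 'x', 'nested': {'oldname': 'kept'}}
print(remove_and_rename(data, remove_fields=['id'],
                        rename_fields={'oldname': 'newname'}))
# {'newname': 'x', 'nested': {'oldname': 'kept'}}
```

Note that the nested dict keeps its 'oldname' key: renaming does not recurse, matching the level-0 restriction stated above.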
-
aiida.orm.importexport.
serialize_field
(data, track_conversion=False)[source]¶ Serialize a single field.
Todo: Generalize such that the proper function is selected also during import
-
class
aiida.orm.log.
Log
[source]¶ Bases:
object
-
__abstractmethods__
= frozenset(['create_entry', 'delete_many', 'find'])¶
-
__dict__
= dict_proxy({'__module__': 'aiida.orm.log', '__metaclass__': <class 'abc.ABCMeta'>, 'create_entry': <function create_entry>, 'create_entry_from_record': <function create_entry_from_record>, '__dict__': <attribute '__dict__' of 'Log' objects>, 'delete_many': <function delete_many>, '__weakref__': <attribute '__weakref__' of 'Log' objects>, 'find': <function find>, '_abc_cache': <_weakrefset.WeakSet object>, '__abstractmethods__': frozenset(['create_entry', 'delete_many', 'find']), '_abc_negative_cache_version': 95, '_abc_negative_cache': <_weakrefset.WeakSet object>, '_abc_registry': <_weakrefset.WeakSet object>, '__doc__': None})¶
-
__metaclass__
¶ alias of
abc.ABCMeta
-
__module__
= 'aiida.orm.log'¶
-
__weakref__
¶ list of weak references to the object (if defined)
-
_abc_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache_version
= 95¶
-
_abc_registry
= <_weakrefset.WeakSet object>¶
-
create_entry
(time, loggername, levelname, objname, objpk=None, message='', metadata=None)[source]¶ Create a log entry.
Parameters: - time (
datetime.datetime
) – The time of creation for the entry - loggername (basestring) – The name of the logger that generated the entry
- levelname (basestring) – The log level
- objname (basestring) – The object name (if any) that emitted the entry
- objpk (int) – The object id that emitted the entry
- message (basestring) – The message to log
- metadata (
dict
) – Any (optional) metadata, should be JSON serializable dictionary
Returns: An object implementing the log entry interface
Return type: aiida.orm.log.LogEntry
-
create_entry_from_record
(record)[source]¶ Helper function to create a log entry from a record created by the Python logging library
Parameters: record ( logging.LogRecord
) – The record created by the logging module Returns: An object implementing the log entry interface Return type: aiida.orm.log.LogEntry
-
find
(filter_by=None, order_by=None, limit=None)[source]¶ Find all entries in the Log collection that conform to the filter and optionally sort and/or apply a limit.
Parameters: - filter_by (
dict
) – A dictionary of key value pairs where the entries have to match all the criteria (i.e. an AND operation) - order_by (list) – A list of tuples of type
OrderSpecifier
- limit – The number of entries to retrieve
Returns: An iterable of the matching entries
-
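The filter_by dictionary combines its criteria with a logical AND. A minimal pure-Python sketch of that matching rule (the helper matches and the FakeEntry class are invented for illustration, not part of the AiiDA API):

```python
def matches(entry, filter_by):
    """Return True when every key/value criterion holds (AND semantics)."""
    return all(getattr(entry, key) == value for key, value in filter_by.items())


# Tiny stand-in for a log entry, for illustration only.
class FakeEntry(object):
    loggername = 'aiida.orm'
    levelname = 'ERROR'


entry = FakeEntry()
print(matches(entry, {'loggername': 'aiida.orm', 'levelname': 'ERROR'}))  # True
print(matches(entry, {'loggername': 'aiida.orm', 'levelname': 'INFO'}))   # False
```

The real find() delegates this matching to the backend database, so all criteria are evaluated in a single query rather than in Python.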
-
-
class
aiida.orm.log.
LogEntry
[source]¶ Bases:
object
-
__abstractmethods__
= frozenset(['objpk', 'loggername', 'objname', 'time', 'message', 'save', 'id', 'levelname', 'metadata'])¶
-
__dict__
= dict_proxy({'_abc_cache': <_weakrefset.WeakSet object>, 'objpk': <abc.abstractproperty object>, '__module__': 'aiida.orm.log', '__abstractmethods__': frozenset(['objpk', 'loggername', 'objname', 'time', 'message', 'save', 'id', 'levelname', 'metadata']), '__metaclass__': <class 'abc.ABCMeta'>, '_abc_negative_cache': <_weakrefset.WeakSet object>, '_abc_registry': <_weakrefset.WeakSet object>, '__doc__': None, 'loggername': <abc.abstractproperty object>, '_abc_negative_cache_version': 95, 'objname': <abc.abstractproperty object>, 'time': <abc.abstractproperty object>, '__dict__': <attribute '__dict__' of 'LogEntry' objects>, 'message': <abc.abstractproperty object>, 'save': <function save>, '__weakref__': <attribute '__weakref__' of 'LogEntry' objects>, 'id': <abc.abstractproperty object>, 'levelname': <abc.abstractproperty object>, 'metadata': <abc.abstractproperty object>})¶
-
__metaclass__
¶ alias of
abc.ABCMeta
-
__module__
= 'aiida.orm.log'¶
-
__weakref__
¶ list of weak references to the object (if defined)
-
_abc_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache_version
= 95¶
-
_abc_registry
= <_weakrefset.WeakSet object>¶
-
id
¶ Get the primary key of the entry
Returns: The entry primary key Return type: int
-
levelname
¶ The name of the log level
Returns: The entry log level name Return type: basestring
-
loggername
¶ The name of the logger that created this entry
Returns: The entry loggername Return type: basestring
-
message
¶ Get the message corresponding to the entry
Returns: The entry message Return type: basestring
-
metadata
¶ Get the metadata corresponding to the entry
Returns: The entry metadata Return type: dict (JSON-serializable)
-
objname
¶ Get the name of the object that created the log entry
Returns: The entry object name Return type: basestring
-
objpk
¶ Get the id of the object that created the log entry
Returns: The id of the object that created the log entry Return type: int
-
save
()[source]¶ Persist the log entry to the database
Returns: a reference to self Return type: LogEntry
-
time
¶ Get the time corresponding to the entry
Returns: The entry timestamp Return type: datetime.datetime
-
-
class
aiida.orm.log.
OrderSpecifier
(field, direction)¶ Bases:
tuple
-
__dict__
= dict_proxy({'__module__': 'aiida.orm.log', '__getstate__': <function __getstate__>, '__new__': <staticmethod object>, '_replace': <function _replace>, '_make': <classmethod object>, 'direction': <property object>, 'field': <property object>, '__slots__': (), '_asdict': <function _asdict>, '__repr__': <function __repr__>, '__dict__': <property object>, '_fields': ('field', 'direction'), '__getnewargs__': <function __getnewargs__>, '__doc__': 'OrderSpecifier(field, direction)'})¶
-
__getnewargs__
()¶ Return self as a plain tuple. Used by copy and pickle.
-
__getstate__
()¶ Exclude the OrderedDict from pickling
-
__module__
= 'aiida.orm.log'¶
-
static
__new__
(field, direction)¶ Create new instance of OrderSpecifier(field, direction)
-
__repr__
()¶ Return a nicely formatted representation string
-
__slots__
= ()¶
-
_asdict
()¶ Return a new OrderedDict which maps field names to their values
-
_fields
= ('field', 'direction')¶
-
classmethod
_make
(iterable, new=<built-in method __new__ of type object at 0x906d60>, len=<built-in function len>)¶ Make a new OrderSpecifier object from a sequence or iterable
-
_replace
(**kwds)¶ Return a new OrderSpecifier object replacing specified fields with new values
-
direction
¶ Alias for field number 1
-
field
¶ Alias for field number 0
-
-
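As the listing above shows, OrderSpecifier is a plain namedtuple with the fields field and direction. The following stand-in re-creates it for illustration; in real code it would be imported from aiida.orm.log:

```python
from collections import namedtuple

# Equivalent re-creation of the namedtuple documented above.
OrderSpecifier = namedtuple('OrderSpecifier', ('field', 'direction'))

spec = OrderSpecifier('time', 'desc')
print(spec.field)      # 'time'  (alias for field number 0)
print(spec.direction)  # 'desc'  (alias for field number 1)
print(tuple(spec))     # ('time', 'desc')
```

Because it is a tuple subclass, an OrderSpecifier can be passed anywhere a (field, direction) pair is expected, e.g. in the order_by list of Log.find().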
class
aiida.orm.mixins.
Sealable
[source]¶ Bases:
object
-
SEALED_KEY
= '_sealed'¶
-
__dict__
= dict_proxy({'__module__': 'aiida.orm.mixins', '_del_attr': <function _del_attr>, '_set_attr': <function _set_attr>, 'is_sealed': <property object>, 'add_link_from': <function add_link_from>, '_updatable_attributes': ('_sealed',), '_iter_updatable_attributes': <function _iter_updatable_attributes>, 'SEALED_KEY': '_sealed', 'seal': <function seal>, '__dict__': <attribute '__dict__' of 'Sealable' objects>, 'copy': <function copy>, '__weakref__': <attribute '__weakref__' of 'Sealable' objects>, '__doc__': None})¶
-
__module__
= 'aiida.orm.mixins'¶
-
__weakref__
¶ list of weak references to the object (if defined)
-
_del_attr
(key)[source]¶ Delete an attribute
Parameters: key – attribute name
Raises: - AttributeError – if key does not exist
- ModificationNotAllowed – if the node is already sealed or if the node is already stored and the attribute is not updatable
-
_iter_updatable_attributes
()[source]¶ Iterate over the updatable attributes and yield key value pairs
-
_set_attr
(key, value, **kwargs)[source]¶ Set a new attribute
Parameters: - key – attribute name
- value – attribute value
Raises: ModificationNotAllowed – if the node is already sealed or if the node is already stored and the attribute is not updatable
-
_updatable_attributes
= ('_sealed',)¶
-
add_link_from
(src, label=None, link_type=<LinkType.UNSPECIFIED: 'unspecified'>)[source]¶ Add a link from a node
You can use the parameters of the base Node class, in particular the label parameter to label the link.
Parameters: - src – the node to add a link from
- label (str) – name of the link
- link_type – type of the link, must be one of the enum values from
LinkType
-
copy
(include_updatable_attrs=False)[source]¶ Create a copy of the node minus the updatable attributes
-
is_sealed
¶ Returns whether the node is sealed, i.e. whether the sealed attribute has been set to True
-
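The seal pattern that the mixin implements (attributes become read-only once the node is sealed, except for a whitelist of updatable ones) can be sketched in plain Python. This is a simplified stand-in, not the actual AiiDA implementation:

```python
class MiniSealable(object):
    """Simplified illustration of the Sealable mixin's gating logic."""
    SEALED_KEY = '_sealed'
    _updatable_attributes = (SEALED_KEY,)

    def __init__(self):
        self._attrs = {}

    @property
    def is_sealed(self):
        return self._attrs.get(self.SEALED_KEY, False)

    def seal(self):
        # The sealed flag itself is updatable, so sealing always succeeds.
        self._attrs[self.SEALED_KEY] = True

    def _set_attr(self, key, value):
        if self.is_sealed and key not in self._updatable_attributes:
            # AiiDA raises ModificationNotAllowed here.
            raise RuntimeError('node is sealed')
        self._attrs[key] = value


node = MiniSealable()
node._set_attr('energy', -1.5)   # fine: not sealed yet
node.seal()
# node._set_attr('energy', 0.0)  # would now raise
```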
The QueryBuilder: a class that allows you to query the AiiDA database, independently of the backend.
Note that the backend implementation is enforced and handled with a composition model!
QueryBuilder()
is the frontend class that the user can use. It inherits from object and contains no
backend-specific functionality; that functionality is provided by the implementation classes.
These inherit from aiida.backends.general.querybuilder_interface.QueryBuilderInterface()
,
an interface class that enforces the implementation of its defined methods.
An instance of one of the implementation classes becomes a member of the QueryBuilder()
instance
when instantiated by the user.
-
class
aiida.orm.querybuilder.
QueryBuilder
(*args, **kwargs)[source]¶ Bases:
object
The class to query the AiiDA database.
Usage:
from aiida.orm.querybuilder import QueryBuilder
qb = QueryBuilder()
# Querying nodes:
qb.append(Node)
# retrieving the results:
results = qb.all()
-
_EDGE_TAG_DELIM
= '--'¶
-
_VALID_PROJECTION_KEYS
= ('func', 'cast')¶
-
__dict__
= dict_proxy({'_add_to_projections': <function _add_to_projections>, '_get_json_compatible': <function _get_json_compatible>, '__module__': 'aiida.orm.querybuilder', '_join_outputs': <function _join_outputs>, '_join_ancestors_recursive': <function _join_ancestors_recursive>, '__str__': <function __str__>, 'get_query': <function get_query>, 'all': <function all>, '_EDGE_TAG_DELIM': '--', 'one': <function one>, '_join_group_members': <function _join_group_members>, '__dict__': <attribute '__dict__' of 'QueryBuilder' objects>, 'get_aliases': <function get_aliases>, '_build_filters': <function _build_filters>, 'add_filter': <function add_filter>, '_get_function_map': <function _get_function_map>, '__weakref__': <attribute '__weakref__' of 'QueryBuilder' objects>, 'children': <function children>, 'append': <function append>, 'get_used_tags': <function get_used_tags>, 'order_by': <function order_by>, '_get_ormclass': <function _get_ormclass>, 'distinct': <function distinct>, 'set_debug': <function set_debug>, '_join_to_computer_used': <function _join_to_computer_used>, '_build_projections': <function _build_projections>, 'dict': <function dict>, '__init__': <function __init__>, 'outputs': <function outputs>, 'iterall': <function iterall>, 'parents': <function parents>, '__doc__': '\n The class to query the AiiDA database. 
\n \n Usage::\n\n from aiida.orm.querybuilder import QueryBuilder\n qb = QueryBuilder()\n # Querying nodes:\n qb.append(Node)\n # retrieving the results:\n results = qb.all()\n\n ', 'iterdict': <function iterdict>, '_build_order': <function _build_order>, 'inputs': <function inputs>, '_VALID_PROJECTION_KEYS': ('func', 'cast'), '_join_group_user': <function _join_group_user>, '_join_masters': <function _join_masters>, '_join_inputs': <function _join_inputs>, '_join_slaves': <function _join_slaves>, 'get_results_dict': <function get_results_dict>, 'add_projection': <function add_projection>, '_join_user_group': <function _join_user_group>, '_join_descendants_recursive': <function _join_descendants_recursive>, 'inject_query': <function inject_query>, 'offset': <function offset>, '_get_projectable_entity': <function _get_projectable_entity>, '_join_creator_of': <function _join_creator_of>, 'except_if_input_to': <function except_if_input_to>, '_build': <function _build>, 'count': <function count>, '_join_computer': <function _join_computer>, 'get_json_compatible_queryhelp': <function get_json_compatible_queryhelp>, '_join_created_by': <function _join_created_by>, '_get_unique_tag': <function _get_unique_tag>, '_get_tag_from_specification': <function _get_tag_from_specification>, 'get_alias': <function get_alias>, '_get_connecting_node': <function _get_connecting_node>, '_join_groups': <function _join_groups>, 'limit': <function limit>, '_check_dbentities': <staticmethod object>, '_add_type_filter': <function _add_type_filter>, 'first': <function first>})¶
-
__init__
(*args, **kwargs)[source]¶ Instantiates a QueryBuilder instance.
Which backend is used is decided here, based on the backend settings (taken from the user profile). So far, this cannot be overridden by the user.
Parameters: - debug (bool) – Turn on debug mode. This feature prints information on the screen about the stages of the QueryBuilder. Does not affect results.
- path (list) – A list of the vertices to traverse. Leave empty if you plan on using the method
QueryBuilder.append()
. - filters – The filters to apply. You can specify the filters here, when appending to the query
using
QueryBuilder.append()
or even later using QueryBuilder.add_filter()
. The latter gives API details. - project – The projections to apply. You can specify the projections here, when appending to the query
using
QueryBuilder.append()
or even later using QueryBuilder.add_projection()
. The latter gives API details. - limit (int) – Limit the number of rows to this number. Check
QueryBuilder.limit()
for more information. - offset (int) – Set an offset for the results returned. Details in
QueryBuilder.offset()
- order_by – How to order the results. Like the two options above, this can also be set at a later stage;
check
QueryBuilder.order_by()
for more information.
-
__module__
= 'aiida.orm.querybuilder'¶
-
__str__
()[source]¶ When somebody hits: print(QueryBuilder) or print(str(QueryBuilder)) I want to print the SQL-query. Because it looks cool…
-
__weakref__
¶ list of weak references to the object (if defined)
-
_add_to_projections
(alias, projectable_entity_name, cast=None, func=None)[source]¶ Parameters: - alias – An instance of sqlalchemy.orm.util.AliasedClass, alias for an ormclass
- projectable_entity_name – User specification of what to project. Appends to query’s entities what the user wants to project (have returned by the query)
-
_add_type_filter
(tagspec, query_type_string, plugin_type_string, subclassing)[source]¶ Add a filter on the type based on the query_type_string
-
_build_filters
(alias, filter_spec)[source]¶ Recurse through the filter specification and apply filter operations.
Parameters: - alias – The alias of the ORM class the filter will be applied on
- filter_spec – the specification as given by the queryhelp
Returns: an instance of sqlalchemy.sql.elements.BinaryExpression.
-
static
_check_dbentities
(entities_cls_to_join, relationship)[source]¶ Parameters: - entities_cls_joined (list) – A list (tuple) of the aliased class passed as joined_entity and the ormclass that was expected
- entities_cls_joined – A list (tuple) of the aliased class passed as entity_to_join and the ormclass that was expected
- relationship (str) – The relationship between the two entities to make the Exception comprehensible
-
_get_connecting_node
(index, joining_keyword=None, joining_value=None, **kwargs)[source]¶ Parameters: - querydict – A dictionary specifying how the current node is linked to other nodes.
- index – Index of this node within the path specification
Valid (currently implemented) keys are:
- input_of
- output_of
- descendant_of
- ancestor_of
- direction
- group_of
- member_of
- has_computer
- computer_of
- created_by
- creator_of
- owner_of
- belongs_to
Future:
- master_of
- slave_of
-
_get_json_compatible
(inp)[source]¶ Parameters: inp – The input value that will be converted. Recurses into each value if inp is an iterable.
-
_get_ormclass
(cls, ormclasstype)[source]¶ For testing purposes, I want to check whether the implementation gives the correct ormclass back. Just relaying to the implementation; details for this function are in the interface.
-
_get_tag_from_specification
(specification)[source]¶ Parameters: specification – If that is a string, I assume the user has deliberately specified it with tag=specification. In that case, I simply check that it’s not a duplicate. If it is a class, I check if it’s in the _cls_to_tag_map!
-
_get_unique_tag
(ormclasstype)[source]¶ Using the function get_tag_from_type, I get a tag. I increment an index that is appended to that tag until I have an unused tag. This function is called in
QueryBuilder.append()
when autotag is set to True.Parameters: ormclasstype (str) – The string that defines the type of the AiiDA ORM class. For subclasses of Node, this is the Node._plugin_type_string, for other they are as defined as returned by QueryBuilder._get_ormclass()
.Returns: A tag, as a string.
-
_join_ancestors_recursive
(joined_entity, entity_to_join, isouterjoin, filter_dict, expand_path=False)[source]¶ Join ancestors using the recursive functionality. TODO: Move the filters inside the recursive query (for example on depth). TODO: Pass an option to also show the path, if this is wanted.
-
_join_computer
(joined_entity, entity_to_join, isouterjoin)[source]¶ Parameters: - joined_entity – An entity that can use a computer (eg a node)
- entity_to_join – aliased dbcomputer entity
-
_join_created_by
(joined_entity, entity_to_join, isouterjoin)[source]¶ Parameters: - joined_entity – the aliased user you want to join to
- entity_to_join – the (aliased) node or group in the DB to join with
-
_join_creator_of
(joined_entity, entity_to_join, isouterjoin)[source]¶ Parameters: - joined_entity – the aliased node
- entity_to_join – the aliased user to join to that node
-
_join_descendants_recursive
(joined_entity, entity_to_join, isouterjoin, filter_dict, expand_path=False)[source]¶ Join descendants using the recursive functionality. TODO: Move the filters inside the recursive query (for example on depth). TODO: Pass an option to also show the path, if this is wanted.
-
_join_group_members
(joined_entity, entity_to_join, isouterjoin)[source]¶ Parameters: - joined_entity – The (aliased) ORMclass that is a group in the database
- entity_to_join – The (aliased) ORMClass that is a node and member of the group
joined_entity and entity_to_join are joined via the table_groups_nodes table, from joined_entity as group to entity_to_join as node (entity_to_join is a member_of joined_entity).
-
_join_group_user
(joined_entity, entity_to_join, isouterjoin)[source]¶ Parameters: - joined_entity – An aliased dbgroup
- entity_to_join – aliased dbuser
-
_join_groups
(joined_entity, entity_to_join, isouterjoin)[source]¶ Parameters: - joined_entity – The (aliased) node in the database
- entity_to_join – The (aliased) Group
joined_entity and entity_to_join are joined via the table_groups_nodes table, from joined_entity as node to entity_to_join as group (entity_to_join is a group_of joined_entity).
-
_join_inputs
(joined_entity, entity_to_join, isouterjoin)[source]¶ Parameters: - joined_entity – The (aliased) ORMclass that is an output
- entity_to_join – The (aliased) ORMClass that is an input.
joined_entity and entity_to_join are joined with a link, from joined_entity as output to entity_to_join as input (entity_to_join is an input_of joined_entity).
-
_join_outputs
(joined_entity, entity_to_join, isouterjoin)[source]¶ Parameters: - joined_entity – The (aliased) ORMclass that is an input
- entity_to_join – The (aliased) ORMClass that is an output.
joined_entity and entity_to_join are joined with a link, from joined_entity as input to entity_to_join as output (entity_to_join is an output_of joined_entity).
-
_join_to_computer_used
(joined_entity, entity_to_join, isouterjoin)[source]¶ Parameters: - joined_entity – the (aliased) computer entity
- entity_to_join – the (aliased) node entity
-
_join_user_group
(joined_entity, entity_to_join, isouterjoin)[source]¶ Parameters: - joined_entity – An aliased user
- entity_to_join – aliased group
-
add_filter
(tagspec, filter_spec)[source]¶ Adding a filter to my filters.
Parameters: - tagspec – The tag, which has to exist already as a key in self._filters
- filter_spec – The specifications for the filter, has to be a dictionary
Usage:
qb = QueryBuilder()          # Instantiating the QueryBuilder instance
qb.append(Node, tag='node')  # Appending a Node
# let's put some filters:
qb.add_filter('node', {'id': {'>': 12}})
# 2 filters together:
qb.add_filter('node', {'label': 'foo', 'uuid': {'like': 'ab%'}})
# Now I am overriding the first filter I set:
qb.add_filter('node', {'id': 13})
-
add_projection
(tag_spec, projection_spec)[source]¶ Adds a projection
Parameters: - tag_spec – A valid specification for a tag
- projection_spec – The specification for the projection. A projection is a list of dictionaries, with each dictionary containing key-value pairs where the key is database entity (e.g. a column / an attribute) and the value is (optional) additional information on how to process this database entity.
If the given projection_spec is not a list, it will be expanded to a list. If the list items are not dictionaries but strings (no additional processing of the projected results desired), they will be expanded to dictionaries.
Usage:
qb = QueryBuilder()
qb.append(StructureData, tag='struc')
# Will project the uuid and the kinds
qb.add_projection('struc', ['uuid', 'attributes.kinds'])
The above example will project the uuid and the kinds-attribute of all matching structures. There are 2 (so far) special keys.
The single star * will project the ORM-instance:
qb = QueryBuilder()
qb.append(StructureData, tag='struc')
# Will project the ORM instance
qb.add_projection('struc', '*')
print type(qb.first()[0])
# >>> aiida.orm.data.structure.StructureData
The double star ** projects all possible projections of this entity:
QueryBuilder().append(StructureData, tag='s', project='**').limit(1).dict()[0]['s'].keys()
# >>> u'user_id, description, ctime, label, extras, mtime, id, attributes, dbcomputer_id, nodeversion, type, public, uuid'
Be aware that the result of ** depends on the backend implementation.
-
all
(batch_size=None)[source]¶ Executes the full query, with the order of the rows as returned by the backend. The order inside each row is given by the order of the vertices in the path and the order of the projections for each vertex in the path.
Parameters: batch_size (int) – The size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter. Leave the default (None) if speed is not critical or if you don’t know what you’re doing! Returns: a list of lists of all projected entities.
-
append
(cls=None, type=None, tag=None, filters=None, project=None, subclassing=True, edge_tag=None, edge_filters=None, edge_project=None, outerjoin=False, **kwargs)[source]¶ Any iterative procedure to build the path for a graph query needs to invoke this method to append to the path.
Parameters: - cls – The Aiida-class (or backend-class) defining the appended vertex
- type (str) – The type of the class, if cls is not given
- autotag (bool) – Whether to find automatically a unique tag. If this is set to True (default False),
- tag (str) – A unique tag. If none is given, I will create a unique tag myself.
- filters – Filters to apply for this vertex.
See
add_filter()
, the method invoked in the background, or usage examples for details. - project – Projections to apply. See usage examples for details.
More information also in
add_projection()
. - subclassing (bool) – Whether to include subclasses of the given class (default True). E.g. Specifying a Calculation as cls will include JobCalculations, InlineCalculations, etc..
- outerjoin (bool) – If True, (default is False), will do a left outerjoin instead of an inner join
- edge_tag (str) – The tag that the edge will get. If nothing is specified (and there is a meaningful edge) the default is tag1–tag2 with tag1 being the entity joining from and tag2 being the entity joining to (this entity).
- edge_filters (str) – The filters to apply on the edge. Also here, details in
add_filter()
. - edge_project (str) – The project from the edges. API-details in
add_projection()
.
A small usage example how this can be invoked:
qb = QueryBuilder()           # Instantiating empty querybuilder instance
qb.append(cls=StructureData)  # First item is StructureData node
# The next node in the path is a PwCalculation,
# with the structure joined as an input
qb.append(
    cls=PwCalculation,
    output_of=StructureData
)
Returns: self
-
count
()[source]¶ Counts the number of rows returned by the backend.
Returns: the number of rows as an integer
-
dict
(batch_size=None)[source]¶ Executes the full query, with the order of the rows as returned by the backend. The order inside each row is given by the order of the vertices in the path and the order of the projections for each vertex in the path.
Parameters: batch_size (int) – The size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter. Leave the default (None) if speed is not critical or if you don’t know what you’re doing! Returns: a list of dictionaries of all projected entities. Each dictionary consists of key-value pairs, where the key is the tag of the vertex and the value is a dictionary of key-value pairs, where the key is the entity description (a column name or attribute path) and the value is the value in the DB. Usage:
qb = QueryBuilder()
qb.append(
    StructureData,
    tag='structure',
    filters={'uuid': {'==': myuuid}},
)
qb.append(
    Node,
    descendant_of='structure',
    project=['type', 'id'],  # returns type (string) and id (string)
    tag='descendant'
)
# Return the dictionaries:
print "qb.iterdict()"
for d in qb.iterdict():
    print '>>>', d
results in the following output:
qb.iterdict()
>>> {'descendant': {'type': u'calculation.job.quantumespresso.pw.PwCalculation.', 'id': 7716}}
>>> {'descendant': {'type': u'data.remote.RemoteData.', 'id': 8510}}
-
distinct
()[source]¶ Asks for distinct rows, which is the same as asking the backend to remove duplicates. Does not execute the query!
If you want a distinct query:
qb = QueryBuilder()
# append stuff!
qb.append(...)
qb.append(...)
...
qb.distinct().all()
# or
qb.distinct().dict()
Returns: self
-
except_if_input_to
(calc_class)[source]¶ Makes a counterquery based on its own path, only selecting entries that have been input to calc_class
Parameters: calc_class – The calculation class to check against Returns: self
-
first
()[source]¶ Executes query asking for one instance. Use as follows:
qb = QueryBuilder(**queryhelp)
qb.first()
Returns: One row of results as a list
-
get_alias
(tag)[source]¶ In order to continue a query by the user, this utility function returns the aliased ormclasses.
Parameters: tag – The tag for a vertice in the path Returns: the alias given for that vertice
-
get_json_compatible_queryhelp
()[source]¶ Makes the queryhelp a JSON-compatible dictionary. In this way, the queryhelp can be stored in the database or in a json object, retrieved or shared, and used later. See this usage:
qb = QueryBuilder(limit=3).append(StructureData, project='id').order_by({StructureData: 'id'})
queryhelp = qb.get_json_compatible_queryhelp()

# Now I could save this dictionary somewhere and use it later:
qb2 = QueryBuilder(**queryhelp)

# This is True if no change has been made to the database.
# Note that such a comparison can only be True if the order of results is enforced.
qb.all() == qb2.all()
Returns: the json-compatible queryhelp
-
get_query
()[source]¶ Instantiates and manipulates a sqlalchemy.orm.Query instance if this is needed. First, I check if the query instance is still valid by hashing the queryhelp. In this way, if a user asks for the same query twice, I am not recreating an instance.
Returns: an instance of sqlalchemy.orm.Query that is specific to the backend used.
-
get_used_tags
(vertices=True, edges=True)[source]¶ Returns a list of all the vertices that are being used. Some parameters allow selecting only subsets. :param bool vertices: Defaults to True. If True, adds the tags of vertices to the returned list. :param bool edges: Defaults to True. If True, adds the tags of edges to the returned list.
Returns: A list of all tags, including (if there are any) the tags given for the edges
-
inject_query
(query)[source]¶ Manipulate the query and inject it back. This can be done to add custom filters using SQLA. :param query: A sqlalchemy.orm.Query instance
-
iterall
(batch_size=100)[source]¶ Same as
all()
, but returns a generator. Be aware that this is only safe if no commit will take place during this transaction. You might also want to read the SQLAlchemy documentation on http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.yield_perParameters: batch_size (int) – The size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter. Returns: a generator of lists
-
iterdict
(batch_size=100)[source]¶ Same as
dict()
, but returns a generator. Be aware that this is only safe if no commit will take place during this transaction. You might also want to read the SQLAlchemy documentation on http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.yield_perParameters: batch_size (int) – The size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter. Returns: a generator of dictionaries
-
limit
(limit)[source]¶ Set the limit (the number of rows to return)
Parameters: limit (int) – the number of rows to return
-
offset
(offset)[source]¶ Set the offset. If offset is set, that many rows are skipped before returning. offset = 0 is the same as omitting the offset. If both offset and limit appear, then offset rows are skipped before starting to count the limit rows that are returned.
Parameters: offset (int) – the number of rows to skip
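The combined effect of offset and limit on an ordered result set can be illustrated with plain list slicing. This is a conceptual sketch; the real query applies the skip and the limit inside the database backend:

```python
rows = list(range(10))  # stand-in for an ordered query result

offset, limit = 3, 4
# offset rows are skipped first, then up to limit rows are returned:
page = rows[offset:offset + limit]
print(page)  # [3, 4, 5, 6]
```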
-
one
()[source]¶ Executes the query asking for exactly one result. Will raise an exception if this is not the case. :raises: MultipleObjectsError if more than one row can be returned :raises: NotExistent if no result was found
-
order_by
(order_by)[source]¶ Set the entity to order by
Parameters: order_by – This is a list of items, where each item is a dictionary that specifies what to sort for an entity. In each dictionary in that list, keys represent valid tags of entities (tables), and values are lists of columns.
Usage:
# Sorting by id (ascending):
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node': ['id']})

# or
# Sorting by id (ascending):
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node': [{'id': {'order': 'asc'}}]})

# for descending order:
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node': [{'id': {'order': 'desc'}}]})

# or (shorter)
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node': [{'id': 'desc'}]})
-
-
class
aiida.orm.user.
User
(**kwargs)[source]¶ Bases:
aiida.orm.implementation.general.user.AbstractUser
-
__abstractmethods__
= frozenset([])¶
-
__module__
= 'aiida.orm.implementation.django.user'¶
-
_abc_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache_version
= 39¶
-
_abc_registry
= <_weakrefset.WeakSet object>¶
-
date_joined
¶
-
email
¶
-
first_name
¶
-
id
¶
-
institution
¶
-
is_active
¶
-
is_staff
¶
-
is_superuser
¶
-
last_login
¶
-
last_name
¶
-
pk
¶
-
classmethod
search_for_users
(**kwargs)[source]¶ Search for a user with the passed keys.
Parameters: kwargs – The keys to search for the user with. Returns: A list of users matching the search criteria.
-
to_be_stored
¶
-
-
class
aiida.orm.user.
Util
(backend)[source]¶ Bases:
aiida.orm.utils.BackendDelegateWithDefault
-
__abstractmethods__
= frozenset([])¶
-
__module__
= 'aiida.orm.user'¶
-
_abc_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache
= <_weakrefset.WeakSet object>¶
-
_abc_negative_cache_version
= 40¶
-
_abc_registry
= <_weakrefset.WeakSet object>¶
-
-
aiida.orm.utils.
CalculationFactory
(module, from_abstract=False)[source]¶ Return a suitable JobCalculation subclass.
Parameters: - module – a valid string recognized as a Calculation plugin
- from_abstract – A boolean. If False (default), actually look only to subclasses of JobCalculation, not to the base Calculation class. If True, check for valid strings for plugins of the Calculation base class.
-
aiida.orm.utils.
load_group
(group_id=None, pk=None, uuid=None, query_with_dashes=True)[source]¶ Load a group by its pk or uuid
Parameters: - group_id – pk (integer) or uuid (string) of a group
- pk – pk of a group
- uuid – uuid of a group, or the beginning of the uuid
- query_with_dashes (bool) – allow to query for a uuid with dashes (default=True)
Returns: the requested group if existing and unique
Raises: - InputValidationError – if none or more than one of the arguments are supplied
- TypeError – if the wrong types are provided
- NotExistent – if no matching Node is found.
- MultipleObjectsError – if more than one Node was found
-
aiida.orm.utils.
load_node
(node_id=None, pk=None, uuid=None, parent_class=None, query_with_dashes=True)[source]¶ Returns an AiiDA node given its PK or UUID.
Parameters: - node_id – PK (integer) or UUID (string) or a node
- pk – PK of a node
- uuid – UUID of a node, or the beginning of the uuid
- parent_class – if specified, checks whether the node loaded is a subclass of parent_class
- query_with_dashes (bool) – Only relevant if a uuid is passed; allows the uuid to be given with or without dashes. Default=True
- return_node (bool) – lets the function return the AiiDA node referred by the input. Default=False
Returns: the required AiiDA node if existing, unique, and (sub)instance of parent_class
Raises: - InputValidationError – if none or more than one of parameters is supplied
- TypeError – if the wrong types are provided
- NotExistent – if no matching Node is found.
- MultipleObjectsError – If more than one Node was found
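The identifier handling described above, where a single node_id is interpreted as a PK when it is an integer and as a UUID when it is a string, can be sketched as follows. This is a simplified illustration of the dispatch (the helper name dispatch_identifier is invented), not the actual implementation:

```python
def dispatch_identifier(node_id=None, pk=None, uuid=None):
    """Map the three mutually exclusive arguments onto a (pk, uuid) pair."""
    supplied = [x for x in (node_id, pk, uuid) if x is not None]
    if len(supplied) != 1:
        # AiiDA raises InputValidationError here.
        raise ValueError('exactly one of node_id, pk, uuid must be supplied')
    if node_id is not None:
        if isinstance(node_id, int):
            pk = node_id      # integers are interpreted as PKs
        elif isinstance(node_id, str):
            uuid = node_id    # strings are interpreted as (partial) UUIDs
        else:
            raise TypeError('node_id must be an integer or a string')
    return pk, uuid


print(dispatch_identifier(node_id=42))   # (42, None)
print(dispatch_identifier(uuid='ab12'))  # (None, 'ab12')
```

The resulting pk or uuid is then used to look up the node, raising NotExistent or MultipleObjectsError if the match is not unique.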
-
aiida.orm.utils.
load_workflow
(wf_id=None, pk=None, uuid=None)[source]¶ Return an AiiDA workflow given PK or UUID.
Parameters: - wf_id – PK (integer) or UUID (string) or UUID instance or a workflow
- pk – PK of a workflow
- uuid – UUID of a workflow
Returns: an AiiDA workflow
Raises: ValueError – if none or more than one of the parameters is supplied, or if the type of wf_id is neither a string nor an integer
-
aiida.orm.workflow.
kill_from_pk
(pk, verbose=False)[source]¶ Kills a workflow without loading the class, useful when there was a problem and the workflow definition module was changed/deleted (and the workflow cannot be reloaded).
Parameters: - pk – the principal key (id) of the workflow to kill
- verbose – True to print the pk of each subworkflow killed