Common data structures, utility classes and functions
Note
Modules in this subpackage have to run without a loaded database environment.
CalcJobState
Bases: enum.Enum
The sub state of a CalcJobNode while its Process is in an active state (i.e. Running or Waiting).
PARSING
RETRIEVING
SUBMITTING
UPLOADING
WITHSCHEDULER
__module__
CalcInfo
Bases: aiida.common.extendeddicts.DefaultFieldsAttributeDict
This object stores the data returned by the calculation plugin, to be passed to the ExecManager.
In the following descriptions all paths have to be considered relative.
retrieve_list: a list of strings or tuples indicating files and folders to be retrieved from the remote machine after the calculation has finished, and stored in the repository in a FolderData. If the entry in the list is just a string, it is assumed to be the filepath on the remote and it will be copied to ‘.’ of the repository with name os.path.split(item)[1]. If the entry is a tuple, it is expected to have the following format:
(‘remotepath’, ‘localpath’, depth)
If the ‘remotepath’ is a file or folder, it will be copied in the repository to ‘localpath’. However, if the ‘remotepath’ contains file patterns with wildcards, the ‘localpath’ should be set to ‘.’ and the depth parameter should be an integer that decides the localname. The ‘remotepath’ will be split on file separators and the local filename will be determined by joining the N last elements, where N is given by the depth variable.
Example: (‘some/remote/path/files/pattern*[0-9].xml’, ‘.’, 2)
Will result in all files that match the pattern to be copied to the local repository with path
‘files/pattern*[0-9].xml’
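As an illustration of the depth rule, a hypothetical helper (not part of AiiDA, the name is made up) that derives the local name by joining the last N path elements might look like:

```python
def local_name_from_remote(remotepath, depth):
    # Hypothetical helper illustrating the depth rule described above:
    # split the remote path on the separator and keep the last `depth`
    # elements as the local filename.
    parts = remotepath.split('/')
    return '/'.join(parts[-depth:])
```

With depth 2, a matched file such as ‘some/remote/path/files/pattern1.xml’ would be stored locally as ‘files/pattern1.xml’.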
retrieve_temporary_list: a list of files or patterns to be retrieved from the remote and stored temporarily in a FolderData, which will be available only during the parsing call. The format of the list is the same as that of ‘retrieve_list’.
retrieve_singlefile_list: a list of tuples with format (‘linkname_from calc to singlefile’, ‘subclass of singlefile’, ‘filename’). Each tuple represents a file that will be retrieved from the cluster and saved in a SinglefileData node.
Deprecated since version 1.0.0: Will be removed in v2.0.0, use retrieve_temporary_list instead.
local_copy_list: a list of tuples with format (‘node_uuid’, ‘filename’, ‘relativedestpath’)
remote_copy_list: a list of tuples with format (‘remotemachinename’, ‘remoteabspath’, ‘relativedestpath’)
remote_symlink_list: a list of tuples with format (‘remotemachinename’, ‘remoteabspath’, ‘relativedestpath’)
provenance_exclude_list: a list of file paths that should not be stored permanently in the repository folder of the corresponding CalcJobNode that will be created, but should only be copied to the remote working directory on the target computer. This is useful for input files that should be copied to the working directory but not to the repository, for example, because they contain proprietary information or because they are big and their content is already indirectly present in the repository through one of the data nodes passed as input to the calculation.
codes_info: a list of dictionaries used to pass the info of the execution of a code
codes_run_mode: a string used to specify the order in which multi codes can be executed
_default_fields
CodeInfo
This attribute-dictionary contains the information needed to execute a code. Possible attributes are:
cmdline_params: a list of strings, containing parameters to be written on the command line right after the call to the code, as for example:
code.x cmdline_params[0] cmdline_params[1] ... < stdin > stdout
stdin_name: (optional) the name of the standard input file. Note, it is only possible to use the stdin with the syntax:
code.x < stdin_name
If no stdin_name is specified, the string “< stdin_name” will not be passed to the code. Note: it is not possible to substitute/remove the ‘<’ if stdin_name is specified; if that is needed, avoid stdin_name and use instead the cmdline_params to specify a suitable syntax.
stdout_name: (optional) the name of the standard output file. Note, it is only possible to pass output to stdout_name with the syntax:
code.x ... > stdout_name
If no stdout_name is specified, the string “> stdout_name” will not be passed to the code. Note: it is not possible to substitute/remove the ‘>’ if stdout_name is specified; if that is needed, avoid stdout_name and use instead the cmdline_params to specify a suitable syntax.
stderr_name: (optional) a string, the name of the error file of the code.
join_files: (optional) if True, redirects the error to the output file. If join_files=True, the code will be called as:
code.x ... > stdout_name 2>&1
otherwise, if join_files=False and stderr is passed:
code.x ... > stdout_name 2> stderr_name
withmpi: if True, executes the code with mpirun (or another MPI installed on the remote computer)
code_uuid: the uuid of the code associated to the CodeInfo
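Taken together, these attributes determine a command line of the shape shown above. A rough sketch of how they might combine (a hypothetical helper, not the actual AiiDA submission code):

```python
def build_command_line(code_exec, cmdline_params, stdin_name=None,
                       stdout_name=None, stderr_name=None, join_files=False):
    # Hypothetical sketch of how the CodeInfo attributes described above
    # translate into a single shell command line.
    cmd = ' '.join([code_exec] + list(cmdline_params))
    if stdin_name:
        cmd += ' < ' + stdin_name
    if stdout_name:
        cmd += ' > ' + stdout_name
        if join_files:
            cmd += ' 2>&1'          # redirect stderr to the output file
        elif stderr_name:
            cmd += ' 2> ' + stderr_name
    return cmd
```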
CodeRunMode
Bases: enum.IntEnum
Enum to indicate the way the codes of a calculation should be run.
For PARALLEL, the codes for a given calculation will be run in parallel by running them in the background:
code1.x & code2.x &
For the SERIAL option, codes will be executed sequentially by running for example the following:
code1.x
code2.x
PARALLEL
SERIAL
AiidaException
Bases: Exception
Base class for all AiiDA exceptions.
Each module will have its own subclass, inherited from this (e.g. ExecManagerException, TransportException, …)
__weakref__
list of weak references to the object (if defined)
NotExistent
Bases: aiida.common.exceptions.AiidaException
Raised when the required entity does not exist.
NotExistentAttributeError
Bases: AttributeError, aiida.common.exceptions.NotExistent
Raised when the required entity does not exist, when fetched as an attribute.
NotExistentKeyError
Bases: KeyError, aiida.common.exceptions.NotExistent
Raised when the required entity does not exist, when fetched as a dictionary key.
MultipleObjectsError
Raised when more than one entity is found in the DB, but only one was expected.
RemoteOperationError
Raised when an error in a remote operation occurs, as in a failed kill() of a scheduler job.
ContentNotExistent
Bases: aiida.common.exceptions.NotExistent
Raised when trying to access an attribute, a key or a file in the result nodes that is not present
FailedError
Raised when accessing a calculation that is in the FAILED status
StoringNotAllowed
Raised when the user tries to store an unstorable node (e.g. a base Node class)
ModificationNotAllowed
Raised when the user tries to modify a field, object, property, … that should not be modified.
IntegrityError
Raised when there is an underlying data integrity error. This can be database related or a general data integrity error. This can happen if, e.g., a foreign key check fails. See PEP 249 for details.
UniquenessError
Raised when the user tries to violate a uniqueness constraint (on the DB, for instance).
EntryPointError
Raised when an entry point cannot be uniquely resolved and imported.
MissingEntryPointError
Bases: aiida.common.exceptions.EntryPointError
Raised when the requested entry point is not registered with the entry point manager.
MultipleEntryPointError
Raised when the requested entry point cannot uniquely be resolved by the entry point manager.
LoadingEntryPointError
Raised when the resource corresponding to requested entry point cannot be imported.
InvalidEntryPointTypeError
Raised when a loaded entry point has a type that is not supported by the corresponding entry point group.
InvalidOperation
The allowed operation is not valid (e.g., when trying to add a non-internal attribute before saving the entry), or deleting an entry that is protected (e.g., because it is referenced by foreign keys)
ParsingError
Generic error raised when there is a parsing error
InternalError
Error raised when there is an internal error of AiiDA.
PluginInternalError
Bases: aiida.common.exceptions.InternalError
Error raised when there is an internal error which is due to a plugin and not to the AiiDA infrastructure.
ValidationError
Error raised when there is an error during the validation phase of a property.
ConfigurationError
Error raised when there is a configuration error in AiiDA.
ProfileConfigurationError
Bases: aiida.common.exceptions.ConfigurationError
Configuration error raised when a wrong/inexistent profile is requested.
MissingConfigurationError
Configuration error raised when the configuration file is missing.
ConfigurationVersionError
Configuration error raised when the configuration file version is not compatible with the current version.
IncompatibleDatabaseSchema
Raised when the database schema is incompatible with that of the code.
DbContentError
Raised when the content of the DB is not valid. This should never happen if the user does not play directly with the DB.
InputValidationError
Bases: aiida.common.exceptions.ValidationError
The input data for a calculation did not validate (e.g., missing required input data, wrong data, …)
FeatureNotAvailable
Raised when a feature is requested from a plugin, that is not available.
FeatureDisabled
Raised when a feature is requested, but the user has chosen to disable it (e.g., for submissions on disabled computers).
LicensingException
Raised when requirements for data licensing are not met.
TestsNotAllowedError
Raised when tests are required to be run/loaded, but we are not in a testing environment.
This is to prevent data loss.
UnsupportedSpeciesError
Bases: ValueError
Raised when StructureData operations are fed species that are not supported by AiiDA such as Deuterium
TransportTaskException
Raised when a TransportTask, a task to be completed by the engine that requires transport, fails.
OutputParsingError
Bases: aiida.common.exceptions.ParsingError
Can be raised by a Parser when it fails to parse the output generated by a CalcJob process.
AttributeDict
Bases: dict
This class internally stores values in a dictionary, but exposes the keys also as attributes, i.e. asking for attrdict.key will return the value of attrdict[‘key’] and so on.
Raises an AttributeError if the key does not exist, when called as an attribute, while the usual KeyError if the key does not exist and the dictionary syntax is used.
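A minimal sketch of the described behavior (not the actual aiida.common.extendeddicts implementation, which also handles nesting and pickling):

```python
class AttrDict(dict):
    # Minimal sketch of the AttributeDict behavior described above:
    # keys are exposed as attributes; missing attributes raise
    # AttributeError, while the dictionary syntax keeps raising KeyError.
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self[name] = value

    def __delattr__(self, name):
        try:
            del self[name]
        except KeyError:
            raise AttributeError(name)
```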
__deepcopy__
Deep copy.
__delattr__
Delete a key as an attribute.
AttributeError – if the attribute does not correspond to an existing key.
__dict__
__dir__
Default dir() implementation.
__getattr__
Read a key as an attribute.
__getstate__
Needed for pickling this class.
__init__
Recursively turn the dict and all its nested dictionaries into AttributeDict instance.
__repr__
Representation of the object.
__setattr__
Set a key as an attribute.
__setstate__
FixedFieldsAttributeDict
Bases: aiida.common.extendeddicts.AttributeDict
A dictionary with access to the keys as attributes, and with filtering of valid attributes. This is only the base class, without valid attributes; use a derived class to do the actual work. E.g.:
class TestExample(FixedFieldsAttributeDict):
    _valid_fields = ('a', 'b', 'c')
Overridden to allow direct access to fields with underscore.
__setitem__
_valid_fields
get_valid_fields
Return the list of valid fields.
DefaultFieldsAttributeDict
A dictionary with access to the keys as attributes, and with an internal value storing the ‘default’ keys to be distinguished from extra fields.
Extra methods defaultkeys() and extrakeys() divide the set returned by keys() in default keys (i.e. those defined at definition time) and other keys. There is also a method get_default_fields() to return the internal list.
Moreover, for undefined default keys, it returns None instead of raising a KeyError/AttributeError exception.
Remember to define the _default_fields in a subclass! E.g.:
class TestExample(DefaultFieldsAttributeDict):
    _default_fields = ('a', 'b', 'c')
When the validate() method is called, it calls in turn all validate_KEY methods, where KEY is one of the default keys. If the method is not present, the field is considered to be always valid. Each validate_KEY method should accept a single argument ‘value’ that will contain the value to be checked.
It raises a ValidationError if any of the validate_KEY function raises an exception, otherwise it simply returns. NOTE: the validate_* functions are called also for unset fields, so if the field can be empty on validation, you have to start your validation function with something similar to:
if value is None:
    return
__getitem__
Return None instead of raising an exception if the key does not exist but is in the list of default fields.
defaultkeys
Return the default keys defined in the instance.
extrakeys
Return the extra keys defined in the instance.
get_default_fields
Return the list of default fields, either defined in the instance or not.
validate
Validate the keys, if any validate_* method is available.
GraphTraversalRule
Bases: tuple
A namedtuple that defines a graph traversal rule.
When starting from a certain sub set of nodes, the graph traversal rules specify which links should be followed to add adjacent nodes to finally arrive at a set of nodes that represent a valid and consistent sub graph.
link_type – the LinkType that the rule applies to
direction – whether the link type should be followed backwards or forwards
toggleable – boolean to indicate whether the rule can be changed from the default value. If this is False it means the default value can never be changed as it will result in an inconsistent graph.
default – boolean, the default value of the rule, if True means that the link type for the given direction should be followed.
__getnewargs__
Return self as a plain tuple. Used by copy and pickle.
__new__
Create new instance of GraphTraversalRule(link_type, direction, toggleable, default)
__repr__
Return a nicely formatted representation string.
__slots__
_asdict
Return a new dict which maps field names to their values.
_field_defaults
_fields
_fields_defaults
_make
Make a new GraphTraversalRule object from a sequence or iterable
_replace
Return a new GraphTraversalRule object replacing specified fields with new values
default
Alias for field number 3
direction
Alias for field number 1
link_type
Alias for field number 0
toggleable
Alias for field number 2
GraphTraversalRules
Graph traversal rules when deleting or exporting nodes.
DEFAULT
DELETE
EXPORT
LinkType
A simple enum of allowed link types.
CALL_CALC
CALL_WORK
CREATE
INPUT_CALC
INPUT_WORK
RETURN
validate_link_label
Validate the given link label.
Valid link labels adhere to the following restrictions:
Has to be a valid python identifier
Can only contain alphanumeric characters and underscores
Can not start or end with an underscore
TypeError – if the link label is not a string type
ValueError – if the link label is invalid
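A sketch of these rules (not the actual implementation; note that str.isidentifier is slightly more permissive than the alphanumeric rule, e.g. for non-ASCII letters):

```python
def validate_link_label_sketch(link_label):
    # Sketch of the restrictions listed above.
    if not isinstance(link_label, str):
        raise TypeError('link label should be a string')
    valid = (link_label.isidentifier()
             and not link_label.startswith('_')
             and not link_label.endswith('_'))
    if not valid:
        raise ValueError('invalid link label `{}`'.format(link_label))
```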
override_log_level
Temporarily adjust the log-level of logger.
override_log_formatter
Temporarily use a different formatter for all handlers.
NOTE: One can _only_ set fmt (not datefmt or style). Be aware! This may fail if the number of handlers is changed within the decorated function/method.
Module to define the (physical) constants used throughout the code.
Module to define commonly used data structures.
aiida.common.datastructures.
Miscellaneous functions for escaping strings.
aiida.common.escaping.
escape_for_bash
This function takes any string and escapes it in a way that bash will interpret it as a single string.
Explanation:
At the end, in the return statement, the string is put within single quotes. Therefore, the only thing that I have to escape in bash is the single quote character. To do this, I substitute every single quote ' with '"'"' which means:
First single quote: exit from the enclosing single quotes
Second, third and fourth character: “’” is a single quote character, escaped by double quotes
Last single quote: reopen the single quote to continue the string
Finally, note that for python I have to enclose the string '"'"' within triple quotes to make it work, getting finally the complicated string found below.
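A sketch of the quoting scheme just described:

```python
def escape_for_bash_sketch(value):
    # Sketch of the scheme above (not the actual aiida implementation):
    # wrap the whole string in single quotes, after replacing every
    # embedded single quote with the five-character sequence '"'"'
    escaped = str(value).replace("'", "'\"'\"'")
    return "'" + escaped + "'"
```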
escape_for_sql_like
Function that escapes the % and _ symbols provided by the user.
SQL LIKE syntax summary:
% -> match any number of characters
_ -> match exactly one character
get_regex_pattern_from_sql
Convert a string providing a pattern to match in SQL syntax into a string performing the same match as a regex.
Moreover, \ is the escape character (by default), so:
\\ -> single backslash
\% -> literal % symbol
\_ -> literal _ symbol
and moreover the string should begin at the beginning of the line and end at the end of the line.
sql_pattern – the string with the pattern in SQL syntax
a string with the pattern in regex syntax
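A sketch of the described conversion (not the actual implementation):

```python
import re

def sql_to_regex_sketch(sql_pattern):
    # Sketch of the conversion described above: % -> .* , _ -> . ,
    # \% and \_ -> literal symbols, everything else regex-escaped,
    # and the result anchored to the whole line.
    out = []
    i = 0
    while i < len(sql_pattern):
        ch = sql_pattern[i]
        if ch == '\\' and i + 1 < len(sql_pattern):
            out.append(re.escape(sql_pattern[i + 1]))  # escaped literal
            i += 2
            continue
        if ch == '%':
            out.append('.*')
        elif ch == '_':
            out.append('.')
        else:
            out.append(re.escape(ch))
        i += 1
    return '^' + ''.join(out) + '$'
```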
sql_string_match
Check if the string matches the provided pattern, specified using SQL syntax.
See documentation of get_regex_pattern_from_sql() for an explanation of the syntax.
string – the string to check
pattern – the SQL pattern
True if the string matches, False otherwise
Module that define the exceptions that are thrown by AiiDA’s internal code.
aiida.common.exceptions.
Various dictionary types with extended functionality.
aiida.common.extendeddicts.
Utility functions to operate on filesystem files.
aiida.common.files.
md5_file
Create the hexdigested md5 checksum of the contents of the file at the given filepath.
filepath – the filepath of the file for which we want the md5sum
block_size_factor – the file is read at chunks of size block_size_factor * md5.block_size, where md5.block_size is the block_size used internally by the hashlib module.
a string with the hexdigest md5.
No checks are done on the file, so if it doesn’t exist it may raise IOError.
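A sketch of the chunked hashing described above (not the actual implementation):

```python
import hashlib

def md5_file_sketch(filepath, block_size_factor=128):
    # Read the file in chunks of block_size_factor * md5.block_size,
    # so the whole content never has to be loaded into memory at once.
    md5 = hashlib.md5()
    with open(filepath, 'rb') as handle:
        for chunk in iter(lambda: handle.read(block_size_factor * md5.block_size), b''):
            md5.update(chunk)
    return md5.hexdigest()
```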
md5_from_filelike
Create the hexdigested md5 checksum of the contents from a filelike object.
filelike – the filelike object for whose contents to generate the md5 checksum
no checks are done on the filelike object, so it may raise IOError if it cannot be read from.
sha1_file
Open a file and return its sha1sum (hexdigested).
filename – the filename of the file for which we want the sha1sum
block_size_factor – the file is read at chunks of size block_size_factor * sha1.block_size, where sha1.block_size is the block_size used internally by the hashlib module.
a string with the hexdigest sha1.
Utility functions to operate on filesystem folders.
aiida.common.folders.
Folder
Bases: object
A class to manage generic folders, preventing access outside of the given folder borders.
Initialize self. See help(type(self)) for accurate signature.
abspath
The absolute path of the folder.
create
Creates the folder, if it does not exist on the disk yet.
It will also create top directories, if absent.
It is always safe to call it, it will do nothing if the folder already exists.
create_file_from_filelike
Create a file with the given filename from a filelike object.
filelike – a filelike object whose contents to copy
filename – the filename for the file that is to be created
mode – the mode with which the target file will be written
encoding – the encoding with which the target file will be written
the absolute filepath of the created file
create_symlink
Create a symlink inside the folder to the location ‘src’.
src – the location to which the symlink must point. Can be either a relative or an absolute path. Should, however, be relative to work properly also when the repository is moved!
name – the filename of the symlink to be created.
erase
Erases the folder. Should be called only in very specific cases, in general folder should not be erased!
Doesn’t complain if the folder does not exist.
create_empty_folder – if True, after erasing, creates an empty dir.
exists
Return True if the folder exists, False otherwise.
folder_limit
The folder limit that cannot be crossed when creating files and folders.
get_abs_path
Return an absolute path for a file or folder in this folder.
The advantage of using this method is that it checks that filename is a valid filename within this folder, and not something e.g. containing slashes.
filename – The file or directory.
check_existence – if False, just return the file path. Otherwise, also check if the file or directory actually exists. Raise OSError if it does not.
get_content_list
Return a list of files (and subfolders) in the folder, matching a given pattern.
Example: If you want to exclude files starting with a dot, you can call this method with pattern='[!.]*'
pattern – a pattern for the file/folder names, using Unix filename pattern matching (see Python standard module fnmatch). By default, pattern is ‘*’, matching all files and folders.
only_paths – if False (default), return pairs (name, is_file). if True, return only a flat list.
a list of tuples of two elements, the first is the file name and the second is True if the element is a file, False if it is a directory.
get_subfolder
Return a Folder object pointing to a subfolder.
subfolder – a string with the relative path of the subfolder, relative to the absolute path of this object. Note that this may also contain ‘..’ parts, as far as this does not go beyond the folder_limit.
create – if True, the new subfolder is created, if it does not exist.
reset_limit – when doing b = a.get_subfolder('xxx', reset_limit=False), the limit of b will be the same limit of a. if True, the limit will be set to the boundaries of folder b.
a Folder object pointing to the subfolder.
insert_path
Copy a file to the folder.
src – the source filename to copy
dest_name – if None, the same basename of src is used. Otherwise, the destination filename will have this file name.
overwrite – if False, raises an error on existing destination; otherwise, delete it first.
isdir
Return True if ‘relpath’ exists inside the folder and is a directory, False otherwise.
isfile
Return True if ‘relpath’ exists inside the folder and is a file, False otherwise.
mode_dir
Return the mode with which the folders should be created
mode_file
Return the mode with which the files should be created
open
Open a file in the current folder and return the corresponding file object.
remove_path
Remove a file or folder from the folder.
filename – the relative path name to remove
replace_with_folder
This routine copies or moves the source folder ‘srcdir’ to the local folder pointed to by this Folder.
srcdir (str) – the source folder on the disk; this must be an absolute path
move (bool) – if True, the srcdir is moved to the repository. Otherwise, it is only copied.
overwrite (bool) – if True, the folder will be erased first. if False, an IOError is raised if the folder already exists. Whatever the value of this flag, parent directories will be created, if needed.
IOError – in case of problems accessing or writing the files.
OSError – in case of problems accessing or writing the files (from shutil module).
ValueError – if the section is not recognized.
RepositoryFolder
Bases: aiida.common.folders.Folder
A class to manage the local AiiDA repository folders.
Initializes the object by pointing it to a folder in the repository.
Pass the uuid as a string.
get_topdir
Returns the top directory, i.e., the section/uuid folder object.
section
The section to which this folder belongs.
subfolder
The subfolder within the section/uuid folder.
uuid
The uuid to which this folder belongs.
SandboxFolder
A class to manage the creation and management of a sandbox folder.
Note: this class must be used within a context manager, i.e.:
with SandboxFolder() as f:
    ## do something with f
In this way, the sandbox folder is removed from disk (if it wasn’t removed already) when exiting the ‘with’ block.
__enter__
Called when entering the with statement.
__exit__
In exit, I remove the sandbox folder from disk, if it still exists
Initializes the object by creating a new temporary folder in the sandbox.
sandbox_in_repo (bool) – If True (default), creates the folder in the repository. If False, relies on the defaults of tempfile.mkdtemp
SubmitTestFolder
Sandbox folder that can be used for the test submission of CalcJobs.
The directory will be created in the current working directory with a configurable basename. Then a sub folder will be created within this base folder based on the current date and an index in order to not overwrite already existing created test folders.
__enter__
Called when entering the with statement; returns the sub folder that should be used.
__exit__
When the context manager is exited, do not delete the folder.
Construct and create the sandbox folder.
The directory will be created in the current working directory with the name given by basepath. Then a sub folder will be created within this base folder based on the current date and an index in order to not overwrite already existing created test folders.
basepath – name of the base directory that will be created in the current working directory
_sub_folder
Common password and hash generation functions.
aiida.common.hashing.
Hash the content of a Folder object. The name of the folder itself is ignored.
ignored_folder_content – list of filenames to be ignored for the hashing
_make_hash
Implementation of the make_hash function. The hash is created as a 28 byte integer, and only later converted to a string.
make_hash
_single_digest
float_to_text
Convert float to text string for computing hash. Preserve up to N significant digits, where N is given by sig.
value – the float value to convert
sig – choose how many digits after the comma should be output
get_random_string
Returns a securely generated random string.
The default length of 12 with the a-z, A-Z, 0-9 character set returns a 71-bit value. log_2((26+26+10)^12) =~ 71 bits
Makes a hash from a dictionary, list, tuple or set, nested to any level, that contains only other hashable or nonhashable types (including lists, tuples, sets, and dictionaries).
object_to_hash – the object to hash
a unique hash
There are a lot of modules providing functionalities to create unique hashes for hashable values. However, getting hashes for nonhashable items like sets or dictionaries is not easily doable because order is not fixed. This leads to the peril of getting different hashes for the same dictionary.
This function avoids this by recursing through nonhashable items and hashing iteratively. Uses python’s sorted function to sort unsorted sets and dictionaries by sorting the hashed keys.
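A simplified sketch of this idea (not the actual aiida implementation, which also handles floats, datetimes, and Folder objects):

```python
import hashlib

def make_hash_sketch(obj):
    # Order-independent recursive hashing: dicts and sets are hashed by
    # sorting the hashes of their items, so equal containers always yield
    # the same digest regardless of iteration order. Ordered containers
    # (lists, tuples) keep their element order.
    if isinstance(obj, dict):
        inner = sorted(make_hash_sketch(item) for item in obj.items())
    elif isinstance(obj, (set, frozenset)):
        inner = sorted(make_hash_sketch(item) for item in obj)
    elif isinstance(obj, (list, tuple)):
        inner = [make_hash_sketch(item) for item in obj]
    else:
        inner = [repr(obj)]
    # Tag with the type name so e.g. a list and a tuple of the same
    # elements do not collide.
    payload = type(obj).__name__ + '|' + '|'.join(inner)
    return hashlib.sha256(payload.encode('utf-8')).hexdigest()
```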
Abstracts JSON usage to ensure compatibility with Python2 and Python3.
Use this module preferentially over standard json to ensure compatibility.
aiida.common.json.
dump
Write JSON encoded ‘data’ to a file-like object, fhandle. Use open(filename, ‘wb’) to write. The utf8write object is used to ensure that the resulting serialised data is encoded as UTF8. Any strings with non-ASCII characters need to be unicode strings. We use ensure_ascii=False to write unicode characters specifically, as this improves the readability of the json and reduces the file size.
dumps
Write JSON encoded ‘data’ to a string. simplejson is useful here as it always returns unicode if ensure_ascii=False is used, unlike the standard library json, rather than being dependent on the input. We also use ensure_ascii=False to write unicode characters specifically, as this improves the readability of the json and reduces the file size. When writing to file, use open(filename, ‘w’, encoding=’utf8’)
load
Deserialise a JSON file.
For encoding consistency, open(filename, ‘r’, encoding=’utf8’) should be used.
ValueError – if no valid JSON object could be decoded
loads
Deserialise a JSON string.
Utilities that extend the basic python language.
aiida.common.lang.
classproperty
A class that, when used as a decorator, works as if the two decorators @property and @classmethod were applied together (i.e., the object works as a property, both for the class and for any of its instances, and is called with the class cls rather than with the instance as its first argument).
__get__
isidentifier
Return whether the given string is a valid python identifier.
boolean, True if identifier is valid, False otherwise
TypeError – if identifier is not string type
override
override_decorator
Decorator to signal that a method from a base class is being overridden completely.
type_check
Verify that object ‘what’ is of type ‘of_type’ and if not the case, raise a TypeError.
what – the object to check
of_type – the type (or tuple of types) to compare to
msg – if specified, allows to customize the message that is passed within the TypeError exception
allow_none – boolean, if True will not raise if the passed what is None
what or None
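A sketch of the described check (not the actual aiida.common.lang implementation):

```python
def type_check_sketch(what, of_type, msg=None, allow_none=False):
    # Verify that `what` is an instance of `of_type`; raise TypeError
    # otherwise. If allow_none is True, None passes through unchecked.
    if allow_none and what is None:
        return None
    if not isinstance(what, of_type):
        if msg is None:
            msg = "got object of type '{}', expecting '{}'".format(type(what), of_type)
        raise TypeError(msg)
    return what
```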
Module with utilities and data structures pertaining to links between nodes in the provenance graph.
aiida.common.links.
Module for all logging methods/classes that don’t need the ORM.
aiida.common.log.
Utility functions to operate on datetime objects.
aiida.common.timezone.
datetime_to_isoformat
Convert a datetime object to string representations in ISO format.
value – a datetime object
delta
Return the datetime object representing the difference between two datetime objects.
from_time – starting datetime object
to_time – end datetime object
the delta datetime object
get_current_timezone
Return the current timezone.
current timezone
is_aware
Return whether the given datetime object is timezone aware
boolean, True if aware, False otherwise
is_naive
Return whether the given datetime object is timezone naive
boolean, True if naive, False otherwise
isoformat_to_datetime
Convert string representation of a datetime in ISO format to a datetime object.
value – an ISO format string representation of a datetime object
localtime
Converts an aware datetime.datetime to local time.
Local time is defined by the current time zone, unless another time zone is specified.
make_aware
Make the given datetime object timezone aware.
value – datetime object to make aware
timezone –
is_dst –
now
Return the datetime object of the current time.
datetime object representing current time
Miscellaneous generic utility functions and classes.
aiida.common.utils.
ArrayCounter
A counter & a method that increments it and returns its value. It is used in various tests.
array_counter
seq
Capturing
This class captures stdout and returns it (as a list, split by lines).
Note: if you raise a SystemExit, you have to catch it outside. E.g., in our tests, this works:
import sys
with self.assertRaises(SystemExit):
    with Capturing() as output:
        sys.exit()
But out of the testing environment, the code instead just exits.
To use it, access the obj.stdout_lines, or just iterate over the object
capture_stderr – if True, also captures sys.stderr. To access the lines, use obj.stderr_lines. If False, obj.stderr_lines is None.
Enter the context where all output is captured.
Exit the context where all output is captured.
__iter__
__str__
Return str(self).
ErrorAccumulator
Allows running a number of functions and collecting all the errors they raise.
This allows validating multiple things and telling the user about all the errors encountered at once. Works best if the individual functions do not depend on each other.
Does not allow tracing the stack of each error; therefore do not use it for debugging, but for semantic checking with user-friendly error messages.
raise_errors
result
run
success
Prettifier
Class to manage prettifiers (typically for labels of kpoints in band plots)
Create a class to prettify strings of a given format
format – a string with the format to use to prettify. Valid formats are obtained from self.prettifiers
_prettify_label_agr
Prettifier for XMGrace
label – a string to prettify
_prettify_label_agr_simple
Prettifier for XMGrace (for old label names)
_prettify_label_gnuplot
Prettifier for Gnuplot
Uses Unicode and returns Unicode strings where needed.
_prettify_label_gnuplot_simple
Prettifier for Gnuplot (for old label names)
_prettify_label_latex
Prettifier for matplotlib, using LaTeX syntax
_prettify_label_latex_simple
Prettifier for matplotlib, using LaTeX syntax (for old label names)
_prettify_label_pass
No-op prettifier, simply returns the same label
get_prettifiers
Return a list of valid prettifier strings
a list of strings
prettifiers
prettify
Prettify a label using the format passed in the initializer
label – the string to prettify
a prettified string
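The dispatch pattern described above can be sketched like this; the class layout mirrors the documentation, but the substitution rules and format names shown are illustrative, not AiiDA's complete set:

```python
class Prettifier:
    """Map format names to prettifier callables; dispatch in __init__."""

    prettifiers = {
        'pass': lambda label: label,
        'latex_simple': lambda label: label.replace('GAMMA', r'$\Gamma$'),
        'agr_simple': lambda label: label.replace('GAMMA', r'\xG'),
    }

    def __init__(self, format):
        try:
            self._prettify = self.prettifiers[format]
        except KeyError:
            raise ValueError(
                'unknown format {!r}; valid formats: {}'.format(
                    format, sorted(self.prettifiers)))

    def prettify(self, label):
        return self._prettify(label)

pretty = Prettifier('latex_simple').prettify('GAMMA')
```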
are_dir_trees_equal
Compare two directories recursively. Files in each directory are assumed to be equal if their names and contents are equal.
dir1 – first directory path
dir2 – second directory path
True if the directory trees are equal and there were no errors while accessing the directories or files, False otherwise.
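A possible implementation sketch using the standard filecmp module (a deep, content-based comparison; the actual function may differ in detail):

```python
import filecmp
import os
import tempfile

def are_dir_trees_equal(dir1, dir2):
    """Recursively compare two directory trees by name and file content."""
    comparison = filecmp.dircmp(dir1, dir2)
    if comparison.left_only or comparison.right_only or comparison.funny_files:
        return False
    # shallow=False forces a byte-by-byte comparison of file contents
    _, mismatch, errors = filecmp.cmpfiles(
        dir1, dir2, comparison.common_files, shallow=False)
    if mismatch or errors:
        return False
    return all(
        are_dir_trees_equal(os.path.join(dir1, sub), os.path.join(dir2, sub))
        for sub in comparison.common_dirs)

with tempfile.TemporaryDirectory() as left, tempfile.TemporaryDirectory() as right:
    for directory in (left, right):
        with open(os.path.join(directory, 'data.txt'), 'w') as handle:
            handle.write('identical content')
    equal = are_dir_trees_equal(left, right)
```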
get_class_string
Return the string identifying the class of the object (module + object name, joined by dots).
It works both for classes and for class instances.
get_new_uuid
Return a new UUID (typically to be used for new nodes). It uses the UUID version specified in aiida.backends.settings.AIIDANODES_UUID_VERSION
get_object_from_string
Given a string identifying an object (as returned by the get_class_string method) load and return the actual object.
get_unique_filename
Return a unique filename that can be added to the list_of_filenames.
If filename is not in list_of_filenames, it simply returns the filename string itself. Otherwise, it appends an integer number to the filename (before the extension) until it finds a unique filename.
filename – the filename to add
list_of_filenames – the list of filenames to which filename should be added, without name duplicates
Either filename or its modification, with a number appended between the name and the extension.
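A sketch of the described behavior; the exact separator between name and counter is an assumption here (a hyphen), and may differ in the real function:

```python
import os

def get_unique_filename(filename, list_of_filenames):
    """Return filename, or a variant with a counter inserted before the
    extension, such that the result is not in list_of_filenames."""
    if filename not in list_of_filenames:
        return filename
    basename, extension = os.path.splitext(filename)
    counter = 1
    while True:
        # hyphen separator is illustrative, not necessarily AiiDA's choice
        candidate = '{}-{}{}'.format(basename, counter, extension)
        if candidate not in list_of_filenames:
            return candidate
        counter += 1

result = get_unique_filename('data.txt', ['data.txt', 'data-1.txt'])
```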
grouper
Given an iterable, return an iterable that yields tuples of n elements each, except for the last tuple, which has whatever length is required to exhaust the iterable (i.e., no padding is applied).
n – length of each tuple (except the last one, which will have length <= n)
iterable – the iterable to divide in groups
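The described chunking (no padding on the final group) can be sketched with itertools.islice:

```python
import itertools

def grouper(n, iterable):
    """Yield tuples of n consecutive elements; the last tuple may be shorter."""
    iterator = iter(iterable)
    while True:
        chunk = tuple(itertools.islice(iterator, n))
        if not chunk:
            return
        yield chunk

groups = list(grouper(3, range(8)))
```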
join_labels
Join labels with a joining symbol when they are very close
labels – a list of length-2 tuples, in the format (position, label)
join_symbol – the string to use to join different paths. By default, a pipe
threshold – the threshold to decide if two float values are the same and should be joined
the same list as labels, but with the second value possibly replaced with strings joined when close enough
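A sketch of the joining logic described above, assuming the documented defaults (pipe symbol, a small float threshold):

```python
def join_labels(labels, join_symbol='|', threshold=1e-6):
    """Merge consecutive labels whose positions differ by less than threshold."""
    if not labels:
        return []
    joined = [list(labels[0])]
    for position, label in labels[1:]:
        if abs(position - joined[-1][0]) < threshold:
            # positions coincide: concatenate the labels with the join symbol
            joined[-1][1] += join_symbol + label
        else:
            joined.append([position, label])
    return [tuple(item) for item in joined]

result = join_labels([(0.0, 'X'), (0.0, 'Y'), (1.0, 'G')])
```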
prettify_labels
Prettify label for typesetting in various formats
format – a string with the format for the prettifier (e.g. ‘agr’, ‘matplotlib’, …)
the same list as labels, but with the second value possibly replaced with a prettified version that typesets nicely in the selected format
str_timedelta
Given a timedelta dt, return it formatted as a human-readable string (e.g. in an HH:MM:SS-like format).
dt – a datetime.timedelta object
max_num_fields – maximum number of non-zero fields to show (for instance if the number of days is non-zero, shows only days, hours and minutes, but not seconds)
short – if False, always print max_num_fields fields, even if they are zero. If True, do not print the leading fields if they are zero.
negative_to_zero – if True, set dt = 0 if dt < 0.
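One way the described field-selection logic could look; this is a simplified sketch, and the exact field labels and separators of the real function are assumptions:

```python
from datetime import timedelta

def str_timedelta(dt, max_num_fields=3, short=False, negative_to_zero=False):
    """Render a timedelta as a coarse human-readable duration string."""
    seconds = int(dt.total_seconds())
    if negative_to_zero and seconds < 0:
        seconds = 0
    negative = seconds < 0
    seconds = abs(seconds)
    pieces = []
    for label, size in (('D', 86400), ('h', 3600), ('m', 60), ('s', 1)):
        value, seconds = divmod(seconds, size)
        pieces.append((value, label))
    # start at the most significant non-zero field, keeping max_num_fields
    first = next((i for i, (value, _) in enumerate(pieces) if value),
                 len(pieces) - 1)
    first = min(first, len(pieces) - max_num_fields)
    selected = pieces[first:first + max_num_fields]
    if short:
        # drop leading zero fields, but keep at least one field
        while len(selected) > 1 and selected[0][0] == 0:
            selected = selected[1:]
    text = ':'.join('{}{}'.format(value, label) for value, label in selected)
    return ('-' if negative else '') + text
```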
strip_prefix
Strip the prefix from the given string and return it. If the prefix is not present, the original string is returned unaltered.
full_string – the string from which to remove the prefix
prefix – the prefix to remove
the string with prefix removed
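The described behavior is equivalent to the following sketch:

```python
def strip_prefix(full_string, prefix):
    """Return full_string with prefix removed, or unchanged if absent."""
    if full_string.startswith(prefix):
        return full_string[len(prefix):]
    return full_string

stripped = strip_prefix('calculation:pw', 'calculation:')
```

On Python 3.9+, str.removeprefix provides the same behavior in the standard library.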
validate_list_of_string_tuples
Check that:
val is a list or tuple
each element of the list:
is a list or tuple
is of length equal to the parameter tuple_length
each of the two elements is a string
Return True if valid; raise ValidationError if invalid.
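A sketch of such a validator; the ValidationError class below is a hypothetical stand-in for AiiDA's own exception:

```python
class ValidationError(ValueError):
    """Hypothetical stand-in for aiida.common.exceptions.ValidationError."""

def validate_list_of_string_tuples(val, tuple_length=2):
    """Validate that val is a list/tuple of length-tuple_length string tuples."""
    message = ('value must be a list (or tuple) of length-{} lists (or tuples) '
               'of strings'.format(tuple_length))
    if not isinstance(val, (list, tuple)):
        raise ValidationError(message)
    for element in val:
        if (not isinstance(element, (list, tuple))
                or len(element) != tuple_length
                or not all(isinstance(item, str) for item in element)):
            raise ValidationError(message)
    return True
```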
Define warnings that can be thrown by AiiDA.
aiida.common.warnings.
AiidaDeprecationWarning
Bases: Warning
Warning
Class for AiiDA deprecations.
It does not inherit, on purpose, from DeprecationWarning, as that would be filtered out by default. It is enabled by default; you can disable it by running in the shell:
verdi config warnings.showdeprecations False
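The design choice can be demonstrated with the standard warnings module: subclassing Warning directly means the message is shown by default, while still allowing it to be filtered explicitly (the filter below is the programmatic analogue of the verdi config option):

```python
import warnings

class AiidaDeprecationWarning(Warning):
    """Subclasses Warning, not DeprecationWarning, so it is shown by default."""

# shown by default: recorded here to demonstrate without printing
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    warnings.warn('this feature is deprecated', AiidaDeprecationWarning)

# explicitly silencing the category suppresses it
with warnings.catch_warnings(record=True) as silenced:
    warnings.simplefilter('ignore', AiidaDeprecationWarning)
    warnings.warn('still deprecated', AiidaDeprecationWarning)
```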
AiidaEntryPointWarning
Class for warnings concerning AiiDA entry points.
AiidaTestWarning
Class for warnings concerning the AiiDA testing infrastructure.