aiida.storage.sqlite_temp package#

A temporary backend, using an in-memory sqlite database.

This backend is intended for testing and demonstration purposes. Whenever it is instantiated, it creates a fresh storage backend, and destroys it when it is garbage collected.
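As a stdlib-only illustration of this lifecycle (not the backend itself), an in-memory SQLite database likewise exists only for as long as its connection:

```python
import sqlite3

# An in-memory SQLite database lives only as long as its connection,
# mirroring how this backend discards all of its data once it is
# garbage collected.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, label TEXT)")
conn.execute("INSERT INTO node (label) VALUES (?)", ("demo",))
rows = conn.execute("SELECT label FROM node").fetchall()  # [('demo',)]
conn.close()  # the database, and all data in it, are gone
```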

Submodules#

Definition of the SqliteTempBackend backend.

class aiida.storage.sqlite_temp.backend.SandboxShaRepositoryBackend(filepath: str | None = None)[source]#

Bases: SandboxRepositoryBackend

A sandbox repository backend that uses the sha256 of the file as the key.

This allows for compatibility with the archive format (i.e. SqliteZipBackend), which in turn allows temporary profiles to be exported and imported.
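A minimal sketch of such content-addressing, using only hashlib (the helper name sha256_key is hypothetical, not part of this backend):

```python
import hashlib
import io
from typing import BinaryIO

def sha256_key(handle: BinaryIO, chunk_size: int = 65536) -> str:
    """Return the sha256 hex digest of a byte stream, read in chunks."""
    hasher = hashlib.sha256()
    for chunk in iter(lambda: handle.read(chunk_size), b""):
        hasher.update(chunk)
    return hasher.hexdigest()

# Identical content always maps to the identical key; this is what makes
# keys portable between a temporary profile and an exported archive.
key = sha256_key(io.BytesIO(b"hello"))
assert key == sha256_key(io.BytesIO(b"hello"))
```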

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_temp.backend'#
_abc_impl = <_abc._abc_data object>#
_filepath: str | None#
_put_object_from_filelike(handle: BinaryIO) str[source]#

Store the byte contents of a file in the repository.

Parameters:

handle – filelike object with the byte content to be stored.

Returns:

the generated fully qualified identifier for the object within the repository.

Raises:

TypeError – if the handle is not a byte stream.

_sandbox: SandboxFolder | None#
get_info(detailed: bool = False, **kwargs) dict[source]#

Returns relevant information about the content of the repository.

Parameters:

detailed – flag to enable extra information (detailed=False by default, only returns basic information).

Returns:

a dictionary with the information.

get_object_hash(key: str) str[source]#

Return the SHA-256 hash of an object stored under the given key.

Important

A SHA-256 hash should always be returned, to ensure consistency across different repository implementations.

Parameters:

key – fully qualified identifier for the object within the repository.

property key_format: str | None#

Return the format for the keys of the repository.

This is important when migrating between backends (e.g. archive -> main): if the key formats are not equal, it is necessary to re-compute all the Node.base.repository.metadata before importing, otherwise the keys will not match those in the repository.

maintain(dry_run: bool = False, live: bool = True, **kwargs) None[source]#

Perform maintenance operations.

Parameters:
  • dry_run – flag to only print the actions that would be taken without actually executing them.

  • live – flag to indicate to the backend whether AiiDA is live or not (i.e. whether the profile of the backend is currently being used/accessed). The backend is then expected to only allow (and thus set by default) the operations that are safe to perform in this state.

class aiida.storage.sqlite_temp.backend.SqliteTempBackend(profile: Profile)[source]#

Bases: StorageBackend

A temporary backend, using an in-memory sqlite database.

This backend is intended for testing and demonstration purposes. Whenever it is instantiated, it creates a fresh storage backend, and destroys it when it is garbage collected.

class Configuration(*, filepath: str = None)[source]#

Bases: BaseModel

__abstractmethods__ = frozenset({})#
__annotations__ = {'__class_vars__': 'ClassVar[set[str]]', '__private_attributes__': 'ClassVar[dict[str, ModelPrivateAttr]]', '__pydantic_complete__': 'ClassVar[bool]', '__pydantic_core_schema__': 'ClassVar[CoreSchema]', '__pydantic_custom_init__': 'ClassVar[bool]', '__pydantic_decorators__': 'ClassVar[_decorators.DecoratorInfos]', '__pydantic_extra__': 'dict[str, Any] | None', '__pydantic_fields_set__': 'set[str]', '__pydantic_generic_metadata__': 'ClassVar[_generics.PydanticGenericMetadata]', '__pydantic_parent_namespace__': 'ClassVar[dict[str, Any] | None]', '__pydantic_post_init__': "ClassVar[None | Literal['model_post_init']]", '__pydantic_private__': 'dict[str, Any] | None', '__pydantic_root_model__': 'ClassVar[bool]', '__pydantic_serializer__': 'ClassVar[SchemaSerializer]', '__pydantic_validator__': 'ClassVar[SchemaValidator]', '__signature__': 'ClassVar[Signature]', 'filepath': 'str', 'model_computed_fields': 'ClassVar[dict[str, ComputedFieldInfo]]', 'model_config': 'ClassVar[ConfigDict]', 'model_fields': 'ClassVar[dict[str, FieldInfo]]'}#
__class_vars__: ClassVar[set[str]] = {}#
__dict__#
__module__ = 'aiida.storage.sqlite_temp.backend'#
__private_attributes__: ClassVar[dict[str, ModelPrivateAttr]] = {}#
__pydantic_complete__: ClassVar[bool] = True#
__pydantic_core_schema__: ClassVar[CoreSchema]#
__pydantic_custom_init__: ClassVar[bool] = False#
__pydantic_decorators__: ClassVar[_decorators.DecoratorInfos] = DecoratorInfos(validators={}, field_validators={}, root_validators={}, field_serializers={}, model_serializers={}, model_validators={}, computed_fields={})#
__pydantic_extra__: dict[str, Any] | None#
__pydantic_fields_set__: set[str]#
__pydantic_generic_metadata__: ClassVar[_generics.PydanticGenericMetadata] = {'args': (), 'origin': None, 'parameters': ()}#
__pydantic_parent_namespace__: ClassVar[dict[str, Any] | None] = {'__doc__': 'A temporary backend, using an in-memory sqlite database.\n\n    This backend is intended for testing and demonstration purposes.\n    Whenever it is instantiated, it creates a fresh storage backend,\n    and destroys it when it is garbage collected.\n    ', '__module__': 'aiida.storage.sqlite_temp.backend', '__qualname__': 'SqliteTempBackend'}#
__pydantic_post_init__: ClassVar[None | Literal['model_post_init']] = None#
__pydantic_private__: dict[str, Any] | None#
__pydantic_serializer__: ClassVar[SchemaSerializer]#
__pydantic_validator__: ClassVar[SchemaValidator]#
__signature__: ClassVar[Signature] = <Signature (*, filepath: str = None) -> None>#
__weakref__#

list of weak references to the object (if defined)

_abc_impl = <_abc._abc_data object>#
filepath: str#
model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'filepath': FieldInfo(annotation=str, required=False, default_factory=mkdtemp, title='Temporary directory', description='Temporary directory in which to store data for this backend.')}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

__abstractmethods__ = frozenset({})#
__init__(profile: Profile)[source]#

Initialize the backend, for this profile.

Raises:
  • aiida.common.exceptions.UnreachableStorage – if the storage cannot be accessed.

  • aiida.common.exceptions.IncompatibleStorageSchema – if the profile’s storage schema is not at the latest version (and thus should be migrated).

  • aiida.common.exceptions.CorruptStorage – if the storage is internally inconsistent.

__module__ = 'aiida.storage.sqlite_temp.backend'#
__str__() str[source]#

Return a string showing connection details for this instance.

_abc_impl = <_abc._abc_data object>#
_clear() None[source]#

Clear the storage, removing all data.

Warning

This is a destructive operation, and should only be used for testing purposes.

_default_user: 'User' | None#
static _get_mapper_from_entity(entity_type: EntityTypes, with_pk: bool)[source]#

Return the Sqlalchemy mapper and fields corresponding to the given entity.

Parameters:

with_pk – if True, the fields returned will include the primary key

property authinfos#

Return the collection of authorisation information objects

bulk_insert(entity_type: EntityTypes, rows: list[dict], allow_defaults: bool = False) list[int][source]#

Insert a list of entities into the database, directly into a backend transaction.

Parameters:
  • entity_type – The type of the entity

  • rows – A list of dictionaries, containing all fields of the backend model, except the id field (a.k.a. the primary key), which will be generated dynamically

  • allow_defaults – If False, assert that each row contains all fields (except primary key(s)), otherwise, allow default values for missing fields.

Raises:

IntegrityError if the keys in a row are not a subset of the columns in the table

Returns:

The list of generated primary keys for the entities
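The contract above (rows without primary keys in, generated primary keys out) can be sketched with plain sqlite3; the node table and label column here are illustrative, not the backend's actual schema:

```python
import sqlite3

def bulk_insert(conn: sqlite3.Connection, rows: list[dict]) -> list[int]:
    """Insert rows that lack the primary key; return the generated pks."""
    pks = []
    for row in rows:
        # the id column is generated by the database, not supplied in the row
        cursor = conn.execute("INSERT INTO node (label) VALUES (?)", (row["label"],))
        pks.append(cursor.lastrowid)
    return pks

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, label TEXT)")
pks = bulk_insert(conn, [{"label": "a"}, {"label": "b"}])  # [1, 2]
```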

bulk_update(entity_type: EntityTypes, rows: list[dict]) None[source]#

Update a list of entities in the database, directly with a backend transaction.

Parameters:
  • entity_type – The type of the entity

  • rows – A list of dictionaries, containing the fields of the backend model to update, and the id field (a.k.a. the primary key)

Raises:

IntegrityError if the keys in a row are not a subset of the columns in the table

cli_exposed = False#

Ensure this plugin is not exposed in verdi profile setup.

close()[source]#

Close the storage access.

property comments#

Return the collection of comments

property computers#

Return the collection of computers

static create_profile(name: str = 'temp', default_user_email='user@email.com', options: dict | None = None, debug: bool = False, filepath: str | Path | None = None) Profile[source]#

Create a new profile instance for this backend.

delete() None[source]#

Delete the storage and all the data.

delete_nodes_and_connections(pks_to_delete: Sequence[int])[source]#

Delete all nodes corresponding to pks in the input and any links to/from them.

This method is intended to be used within a transaction context.

Parameters:

pks_to_delete – a sequence of node pks to delete

Raises:

AssertionError if a transaction is not active

get_backend_entity(model) BackendEntity[source]#

Return the backend entity that corresponds to the given Model instance.

get_global_variable(key: str)[source]#

Return a global variable from the storage.

Parameters:

key – the key of the setting

Raises:

KeyError if the setting does not exist

get_info(detailed: bool = False) dict[source]#

Return general information on the storage.

Parameters:

detailed – flag to request more detailed information about the content of the storage.

Returns:

a nested dict with the relevant information.

get_repository() SandboxShaRepositoryBackend[source]#

Return the object repository configured for this backend.

get_session() Session[source]#

Return an SQLAlchemy session.

property groups#

Return the collection of groups

property in_transaction: bool#

Return whether a transaction is currently active.

classmethod initialise(profile: Profile, reset: bool = False) bool[source]#

Initialise the storage backend.

This is typically used once, when a new storage backend is created. If this method returns without exceptions, the storage backend is ready for use. If the backend already appears initialised, this method is a no-op.

Parameters:

reset – if True, destroy the backend if it already exists, including all of its data, before recreating and initialising it. This is useful, for example, for test profiles that need to be reset before or after a test run.

Returns:

True if the storage was initialised by the function call, False if it was already initialised.
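A minimal sketch of these initialise semantics (idempotent unless reset=True), using plain sqlite3 and an illustrative single-table schema rather than the backend's actual one:

```python
import sqlite3
from pathlib import Path

def initialise(filepath: Path, reset: bool = False) -> bool:
    """Create the schema if absent; return True only if this call initialised it."""
    if reset and filepath.exists():
        filepath.unlink()  # destroy the existing storage, including all data
    conn = sqlite3.connect(filepath)
    try:
        already = conn.execute(
            "SELECT name FROM sqlite_master WHERE name = 'node'"
        ).fetchone()
        if already:
            return False  # already initialised: no-op
        conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY)")
        conn.commit()
        return True
    finally:
        conn.close()
```

The first call on a fresh path returns True; repeated calls return False until reset=True destroys and recreates the storage.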

property is_closed: bool#

Return whether the storage is closed.

property logs#

Return the collection of logs

maintain(dry_run: bool = False, live: bool = True, **kwargs) None[source]#

Perform maintenance tasks on the storage.

If live is False, this method may attempt to block the profile associated with the storage to guarantee the safety of its procedures. This will not only prevent any other subsequent process from accessing that profile, but will also first check whether any process is already using it and raise if that is the case. The user will have to manually stop any processes that are currently accessing the profile, or wait for them to finish on their own.

Parameters:
  • dry_run – flag to only print the actions that would be taken without actually executing them.

  • live – flag to indicate to the backend whether AiiDA is live or not (i.e. whether the profile of the backend is currently being used/accessed). The backend is then expected to only allow (and thus set by default) the operations that are safe to perform in this state.

classmethod migrate(profile: Profile)[source]#

Migrate the storage of a profile to the latest schema version.

If the schema version is already the latest version, this method does nothing. If the storage is uninitialised, this method will raise an exception.

Raises:
  • aiida.common.exceptions.UnreachableStorage – if the storage cannot be accessed.

  • StorageMigrationError – if the storage is not initialised.

property nodes#

Return the collection of nodes

query() SqliteQueryBuilder[source]#

Return an instance of a query builder implementation for this backend

set_global_variable(key: str, value, description: str | None = None, overwrite=True) None[source]#

Set a global variable in the storage.

Parameters:
  • key – the key of the setting

  • value – the value of the setting

  • description – the description of the setting (optional)

  • overwrite – if True, overwrite the setting if it already exists

Raises:

ValueError if the key already exists and overwrite is False
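The overwrite semantics can be sketched as an upsert against a hypothetical setting table, using plain sqlite3 (this is not the backend's actual implementation):

```python
import sqlite3

def set_global_variable(conn: sqlite3.Connection, key: str, value: str,
                        overwrite: bool = True) -> None:
    """Insert or update a setting, refusing to clobber unless overwrite=True."""
    exists = conn.execute(
        "SELECT 1 FROM setting WHERE key = ?", (key,)
    ).fetchone()
    if exists and not overwrite:
        raise ValueError(f"setting {key!r} already exists")
    conn.execute(
        "INSERT OR REPLACE INTO setting (key, value) VALUES (?, ?)", (key, value)
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE setting (key TEXT PRIMARY KEY, value TEXT)")
set_global_variable(conn, "schema_version", "1")
set_global_variable(conn, "schema_version", "2")  # overwrite=True by default
```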

transaction() Iterator[Session][source]#

Open a transaction to be used as a context manager.

If there is an exception within the context then the changes will be rolled back and the state will be as before entering. Transactions can be nested.
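The commit-on-success / roll-back-on-exception behaviour can be sketched with a generic context manager over a sqlite3 connection (nesting, which the real backend supports, would need savepoints and is omitted from this sketch):

```python
import sqlite3
from contextlib import contextmanager
from typing import Iterator

@contextmanager
def transaction(conn: sqlite3.Connection) -> Iterator[sqlite3.Connection]:
    """Commit on a clean exit; roll back if the body raises."""
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE setting (key TEXT PRIMARY KEY)")
conn.commit()
try:
    with transaction(conn):
        conn.execute("INSERT INTO setting VALUES ('a')")
        raise RuntimeError("abort")  # triggers the rollback
except RuntimeError:
    pass
# the insert was rolled back, so the table is empty again
count = conn.execute("SELECT COUNT(*) FROM setting").fetchone()[0]
```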

property users#

Return the collection of users

classmethod version_head() str[source]#

Return the head schema version of this storage backend type.

classmethod version_profile(profile: Profile) str | None[source]#

Return the schema version of the given profile’s storage, or None for empty/uninitialised storage.

Raises:

aiida.common.exceptions.UnreachableStorage – if the storage cannot be accessed.