aiida.schedulers.plugins package¶
Submodules¶
Plugin for direct execution.
-
class
aiida.schedulers.plugins.direct.
DirectJobResource
(**kwargs)[source]¶ Bases:
aiida.schedulers.datastructures.NodeNumberJobResource
-
__module__
= 'aiida.schedulers.plugins.direct'¶
-
-
class
aiida.schedulers.plugins.direct.
DirectScheduler
[source]¶ Bases:
aiida.schedulers.scheduler.Scheduler
Support for direct execution, bypassing a batch scheduler.
-
__abstractmethods__
= frozenset({})¶
-
__module__
= 'aiida.schedulers.plugins.direct'¶
-
_abc_impl
= <_abc_data object>¶
-
_features
= {'can_query_by_user': True}¶
-
_get_joblist_command
(jobs=None, user=None)[source]¶ The command to report full information on existing jobs.
- TODO: in the case of job arrays, decide what to do (i.e., if we want
to pass the -t options to list each subjob).
-
_get_submit_command
(submit_script)[source]¶ Return the string to execute to submit a given script.
Note
One needs to redirect stdout and stderr to /dev/null, otherwise the daemon hangs waiting for the script to finish.
- Parameters
submit_script – the path of the submit script relative to the working directory. IMPORTANT: submit_script should be already escaped.
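The note above can be illustrated with a minimal sketch. The helper below is hypothetical (not the plugin's actual code) and only shows the shape such a submit command could take: run the already-escaped script in the background, redirect stdout and stderr to /dev/null so the daemon does not block, and echo the shell's PID to stand in for a job id.

```python
def build_direct_submit_command(submit_script):
    """Hypothetical sketch of a direct-execution submit command.

    The script path is assumed to be already escaped; redirecting to
    /dev/null and backgrounding with '&' prevents the daemon from
    hanging, and 'echo $!' reports the background PID as the job id.
    """
    return 'bash -e {} > /dev/null 2>&1 & echo $!'.format(submit_script)
```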
-
_get_submit_script_header
(job_tmpl)[source]¶ Return the submit script header, using the parameters from the job_tmpl.
- Args:
job_tmpl: a JobTemplate instance with relevant parameters set.
-
_job_resource_class
¶ alias of
DirectJobResource
-
_logger
= <Logger aiida.scheduler.direct (REPORT)>¶
-
_parse_joblist_output
(retval, stdout, stderr)[source]¶ Parse the queue output string, as returned by executing the command from _get_joblist_command.
Return a list of JobInfo objects, one for each job, with the relevant parameters implemented.
Note
Depending on the scheduler configuration, finished jobs may or may not appear here. This function returns one element for each job found in the joblist output; missing jobs (for whatever reason) simply will not appear here.
-
_parse_kill_output
(retval, stdout, stderr)[source]¶ Parse the output of the kill command.
To be implemented by the plugin.
- Returns
True if everything seems ok, False otherwise.
-
Plugin for LSF. This has been tested on the CERN lxplus cluster (LSF 9.1.3)
-
class
aiida.schedulers.plugins.lsf.
LsfJobResource
(**kwargs)[source]¶ Bases:
aiida.schedulers.datastructures.JobResource
An implementation of JobResource for LSF, which supports the OPTIONAL specification of a parallel environment (a string) plus the total number of processors.
‘parallel_env’ should contain a string of the form “host1 host2! hostgroupA! host3 host4” where the “!” symbol indicates the first execution host candidates. Other hosts are added only if the number of processors asked is more than those of the first execution host. See https://www-01.ibm.com/support/knowledgecenter/SSETD4_9.1.2/lsf_command_ref/bsub.1.dita?lang=en for more details about the parallel environment definition (the -m option of bsub).
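The host-list format described above can be made concrete with a small parser. This helper is only an illustration of the string format, not part of the plugin: hosts marked with a trailing "!" are taken as first-execution-host candidates, the rest as additional hosts.

```python
def parse_parallel_env(parallel_env):
    """Split an LSF-style host list such as "host1 host2! hostgroupA! host3 host4"
    into (first_execution_candidates, other_hosts).

    A trailing "!" marks a first-execution-host candidate, mirroring the
    syntax of bsub's -m option.  Illustrative helper only.
    """
    first, others = [], []
    for token in parallel_env.split():
        if token.endswith('!'):
            first.append(token.rstrip('!'))
        else:
            others.append(token)
    return first, others
```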
-
__init__
(**kwargs)[source]¶ Initialize the job resources from the passed arguments (the valid keys can be obtained with the function self.get_valid_keys()).
- Raises
ValueError – on invalid parameters.
TypeError – on invalid parameters.
aiida.common.ConfigurationError – if default_mpiprocs_per_machine was set for this computer, since LsfJobResource cannot accept this parameter.
-
__module__
= 'aiida.schedulers.plugins.lsf'¶
-
_default_fields
= ('parallel_env', 'tot_num_mpiprocs', 'default_mpiprocs_per_machine')¶
-
-
class
aiida.schedulers.plugins.lsf.
LsfScheduler
[source]¶ Bases:
aiida.schedulers.scheduler.Scheduler
Support for the IBM LSF scheduler ‘https://www-01.ibm.com/support/knowledgecenter/SSETD4_9.1.2/lsf_welcome.html’
-
__abstractmethods__
= frozenset({})¶
-
__module__
= 'aiida.schedulers.plugins.lsf'¶
-
_abc_impl
= <_abc_data object>¶
-
_features
= {'can_query_by_user': False}¶
-
_get_detailed_jobinfo_command
(jobid)[source]¶ Return the command to run to get the detailed information on a job, even after the job has finished.
The output text is just retrieved, and returned for logging purposes.
-
_get_joblist_command
(jobs=None, user=None)[source]¶ The command to report full information on existing jobs.
Separates the fields with the _field_separator string. Field order: jobnum, state, walltime, queue[=partition], user, numnodes, numcores, title.
-
_get_submit_command
(submit_script)[source]¶ Return the string to execute to submit a given script.
- Parameters
submit_script – the path of the submit script relative to the working directory. IMPORTANT: submit_script should be already escaped.
-
_get_submit_script_footer
(job_tmpl)[source]¶ Return the submit script final part, using the parameters from the job_tmpl.
- Parameters
job_tmpl – a JobTemplate instance with relevant parameters set.
-
_get_submit_script_header
(job_tmpl)[source]¶ Return the submit script header, using the parameters from the job_tmpl. See the following manual https://www-01.ibm.com/support/knowledgecenter/SSETD4_9.1.2/lsf_command_ref/bsub.1.dita?lang=en for more details about the possible options to bsub, in particular for the parallel environment definition (with the -m option).
- Parameters
job_tmpl – a JobTemplate instance with relevant parameters set.
-
_job_resource_class
¶ alias of
LsfJobResource
-
_joblist_fields
= ['id', 'stat', 'exit_reason', 'exec_host', 'user', 'slots', 'max_req_proc', 'exec_host', 'queue', 'finish_time', 'start_time', '%complete', 'submit_time', 'name']¶
-
_logger
= <Logger aiida.scheduler.lsf (REPORT)>¶
-
_parse_joblist_output
(retval, stdout, stderr)[source]¶ Parse the queue output string, as returned by executing the command from _get_joblist_command, which is here implemented as a list of lines, one for each job, with _field_separator as separator. The order is described in the _get_joblist_command function.
Return a list of JobInfo objects, one for each job, with the relevant parameters implemented.
- Note: depending on the scheduler configuration, finished jobs may
or may not appear here. This function returns one element for each job found in the joblist output; missing jobs (for whatever reason) simply will not appear here.
-
_parse_kill_output
(retval, stdout, stderr)[source]¶ Parse the output of the kill command.
- Returns
True if everything seems ok, False otherwise.
-
Base classes for PBSPro and PBS/Torque plugins.
-
class
aiida.schedulers.plugins.pbsbaseclasses.
PbsBaseClass
[source]¶ Bases:
aiida.schedulers.scheduler.Scheduler
Base class with support for the PBSPro scheduler (http://www.pbsworks.com/) and for PBS and Torque (http://www.adaptivecomputing.com/products/open-source/torque/).
Only a few properties need to be redefined; see the pbspro and torque plugins for examples.
-
__abstractmethods__
= frozenset({})¶
-
__module__
= 'aiida.schedulers.plugins.pbsbaseclasses'¶
-
_abc_impl
= <_abc_data object>¶
-
static
_convert_time
(string)[source]¶ Convert a string in the format HH:MM:SS to a number of seconds.
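The conversion described above is straightforward; a minimal sketch of what such a helper does (the name is reused here only for illustration):

```python
def convert_time(string):
    """Convert a string in the format HH:MM:SS to a number of seconds."""
    hours, minutes, seconds = (int(part) for part in string.split(':'))
    return hours * 3600 + minutes * 60 + seconds
```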
-
_features
= {'can_query_by_user': False}¶
-
_get_detailed_jobinfo_command
(jobid)[source]¶ Return the command to run to get the detailed information on a job, even after the job has finished.
The output text is just retrieved, and returned for logging purposes.
-
_get_joblist_command
(jobs=None, user=None)[source]¶ The command to report full information on existing jobs.
- TODO: in the case of job arrays, decide what to do (i.e., if we want
to pass the -t options to list each subjob).
-
_get_resource_lines
(num_machines, num_mpiprocs_per_machine, num_cores_per_machine, max_memory_kb, max_wallclock_seconds)[source]¶ Return a list of lines (possibly empty) with the header lines relative to:
num_machines
num_mpiprocs_per_machine
num_cores_per_machine
max_memory_kb
max_wallclock_seconds
This is done in an external function because it may change in different subclasses.
-
_get_submit_command
(submit_script)[source]¶ Return the string to execute to submit a given script.
- Args:
- submit_script: the path of the submit script relative to the working
directory. IMPORTANT: submit_script should be already escaped.
-
_get_submit_script_header
(job_tmpl)[source]¶ Return the submit script header, using the parameters from the job_tmpl.
- Args:
job_tmpl: a JobTemplate instance with relevant parameters set.
TODO: truncate the title if too long
-
_job_resource_class
¶ alias of
PbsJobResource
-
_map_status
= {'B': <JobState.RUNNING: 'running'>, 'C': <JobState.DONE: 'done'>, 'E': <JobState.RUNNING: 'running'>, 'F': <JobState.DONE: 'done'>, 'H': <JobState.QUEUED_HELD: 'queued held'>, 'M': <JobState.UNDETERMINED: 'undetermined'>, 'Q': <JobState.QUEUED: 'queued'>, 'R': <JobState.RUNNING: 'running'>, 'S': <JobState.SUSPENDED: 'suspended'>, 'T': <JobState.QUEUED: 'queued'>, 'U': <JobState.SUSPENDED: 'suspended'>, 'W': <JobState.QUEUED: 'queued'>, 'X': <JobState.DONE: 'done'>}¶
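The mapping above can be sketched as a plain dictionary, with the JobState enum members replaced by their string values for a self-contained example. The fallback to 'undetermined' for unknown letters is a defensive assumption of this sketch, not necessarily what the plugin does.

```python
# Plain-dict sketch of the PBS status-letter mapping shown above.
PBS_STATE_MAP = {
    'B': 'running', 'C': 'done', 'E': 'running', 'F': 'done',
    'H': 'queued held', 'M': 'undetermined', 'Q': 'queued',
    'R': 'running', 'S': 'suspended', 'T': 'queued',
    'U': 'suspended', 'W': 'queued', 'X': 'done',
}

def map_pbs_status(letter):
    """Map a one-letter PBS job status to a job-state string.

    Unknown letters fall back to 'undetermined' (an assumption made
    here for illustration).
    """
    return PBS_STATE_MAP.get(letter, 'undetermined')
```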
-
_parse_joblist_output
(retval, stdout, stderr)[source]¶ Parse the queue output string, as returned by executing the command from _get_joblist_command (qstat -f).
Return a list of JobInfo objects, one for each job, with the relevant parameters implemented.
- Note: depending on the scheduler configuration, finished jobs may
or may not appear here. This function returns one element for each job found in the qstat output; missing jobs (for whatever reason) simply will not appear here.
-
_parse_kill_output
(retval, stdout, stderr)[source]¶ Parse the output of the kill command.
To be implemented by the plugin.
- Returns
True if everything seems ok, False otherwise.
-
-
class
aiida.schedulers.plugins.pbsbaseclasses.
PbsJobResource
(**kwargs)[source]¶ Bases:
aiida.schedulers.datastructures.NodeNumberJobResource
Base class for PBS job resources
-
__init__
(**kwargs)[source]¶ It extends the base class init method and calculates the num_cores_per_machine field to pass to PBS-like schedulers.
Checks that num_cores_per_machine is a multiple of num_cores_per_mpiproc and/or num_mpiprocs_per_machine.
Check sequence:
If both num_cores_per_mpiproc and num_cores_per_machine are specified, check that they are consistent
If only num_cores_per_mpiproc is passed, calculate num_cores_per_machine
If only num_cores_per_machine is passed, use it as-is
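The check sequence above can be sketched as a standalone function. This is a hypothetical helper, not the plugin's actual code; it assumes the multiplicative relation num_cores_per_machine = num_cores_per_mpiproc * num_mpiprocs_per_machine.

```python
def resolve_num_cores_per_machine(num_mpiprocs_per_machine,
                                  num_cores_per_machine=None,
                                  num_cores_per_mpiproc=None):
    """Sketch of the PBS-side consistency check described above.

    Returns the resolved num_cores_per_machine, or raises ValueError
    on inconsistent input.
    """
    if num_cores_per_mpiproc is not None and num_cores_per_machine is not None:
        # Both given: they must satisfy the multiplicative relation.
        if num_cores_per_mpiproc * num_mpiprocs_per_machine != num_cores_per_machine:
            raise ValueError(
                'num_cores_per_machine must equal '
                'num_cores_per_mpiproc * num_mpiprocs_per_machine')
        return num_cores_per_machine
    if num_cores_per_mpiproc is not None:
        # Only cores-per-MPI-process given: derive cores per machine.
        return num_cores_per_mpiproc * num_mpiprocs_per_machine
    # Only num_cores_per_machine given (or neither): use it as-is.
    return num_cores_per_machine
```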
-
__module__
= 'aiida.schedulers.plugins.pbsbaseclasses'¶
-
Plugin for PBSPro. This has been tested on PBSPro v. 12.
-
class
aiida.schedulers.plugins.pbspro.
PbsproScheduler
[source]¶ Bases:
aiida.schedulers.plugins.pbsbaseclasses.PbsBaseClass
Subclass to support the PBSPro scheduler (http://www.pbsworks.com/).
Redefines only what needs to change from the base class.
-
__abstractmethods__
= frozenset({})¶
-
__module__
= 'aiida.schedulers.plugins.pbspro'¶
-
_abc_impl
= <_abc_data object>¶
-
Plugin for SGE. This has been tested on GE 6.2u3.
Plugin originally written by Marco Dorigo. Email: marco(DOT)dorigo(AT)rub(DOT)de
-
class
aiida.schedulers.plugins.sge.
SgeJobResource
(**kwargs)[source]¶ Bases:
aiida.schedulers.datastructures.ParEnvJobResource
-
__module__
= 'aiida.schedulers.plugins.sge'¶
-
-
class
aiida.schedulers.plugins.sge.
SgeScheduler
[source]¶ Bases:
aiida.schedulers.scheduler.Scheduler
Support for the Sun Grid Engine scheduler and its variants/forks (Son of Grid Engine, Oracle Grid Engine, …)
-
__abstractmethods__
= frozenset({})¶
-
__module__
= 'aiida.schedulers.plugins.sge'¶
-
_abc_impl
= <_abc_data object>¶
-
_features
= {'can_query_by_user': True}¶
-
_get_detailed_jobinfo_command
(jobid)[source]¶ Return the command to run to get the detailed information on a job. This is typically called after the job has finished, to retrieve the most detailed information possible about the job. This is done because most schedulers just make finished jobs disappear from the ‘qstat’ command, and instead sometimes it is useful to know some more detailed information about the job exit status, etc.
-
_get_joblist_command
(jobs=None, user=None)[source]¶ The command to report full information on existing jobs.
- TODO: in the case of job arrays, decide what to do (i.e., if we want
to pass the -t options to list each subjob).
TODO: decide whether it is worth escaping the username, or rather leave it unescaped so that $USER can be passed. (Note: currently copied from the PBSPro plugin.)
-
_get_submit_command
(submit_script)[source]¶ Return the string to execute to submit a given script.
- Args:
- submit_script: the path of the submit script relative to the working
directory. IMPORTANT: submit_script should be already escaped.
-
_get_submit_script_header
(job_tmpl)[source]¶ Return the submit script header, using the parameters from the job_tmpl.
- Args:
job_tmpl: a JobTemplate instance with relevant parameters set.
TODO: truncate the title if too long
-
_job_resource_class
¶ alias of
SgeJobResource
-
_logger
= <Logger aiida.scheduler.sge (REPORT)>¶
-
_parse_joblist_output
(retval, stdout, stderr)[source]¶ Parse the joblist output (‘qstat’), as returned by executing the command returned by _get_joblist_command method.
To be implemented by the plugin.
Return a list of JobInfo objects, one for each job, each with at least its default parameters implemented.
-
_parse_kill_output
(retval, stdout, stderr)[source]¶ Parse the output of the kill command.
To be implemented by the plugin.
- Returns
True if everything seems ok, False otherwise.
-
Plugin for SLURM. This has been tested on SLURM 14.03.7 on the CSCS.ch machines.
-
class
aiida.schedulers.plugins.slurm.
SlurmJobResource
(*args, **kwargs)[source]¶ Bases:
aiida.schedulers.datastructures.NodeNumberJobResource
Slurm job resources object
-
__init__
(*args, **kwargs)[source]¶ It extends the base class init method and calculates the num_cores_per_mpiproc field to pass to SLURM schedulers.
Checks that num_cores_per_machine is a multiple of num_cores_per_mpiproc and/or num_mpiprocs_per_machine.
Check sequence:
If both num_cores_per_mpiproc and num_cores_per_machine are specified, check that they are consistent
If only num_cores_per_machine is passed, calculate num_cores_per_mpiproc, which must be an integer value
If only num_cores_per_mpiproc is passed, use it as-is
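The SLURM-side check sequence differs from the PBS case in that it derives num_cores_per_mpiproc (rather than num_cores_per_machine) and must verify that the division is exact. A hypothetical sketch, not the plugin's actual code:

```python
def resolve_num_cores_per_mpiproc(num_mpiprocs_per_machine,
                                  num_cores_per_machine=None,
                                  num_cores_per_mpiproc=None):
    """Sketch of the SLURM-side check sequence described above.

    Returns the resolved num_cores_per_mpiproc, or raises ValueError
    on inconsistent input.
    """
    if num_cores_per_mpiproc is not None and num_cores_per_machine is not None:
        # Both given: they must satisfy the multiplicative relation.
        if num_cores_per_mpiproc * num_mpiprocs_per_machine != num_cores_per_machine:
            raise ValueError('inconsistent core counts')
        return num_cores_per_mpiproc
    if num_cores_per_machine is not None:
        # Derive cores per MPI process; it must be an integer.
        quotient, remainder = divmod(num_cores_per_machine, num_mpiprocs_per_machine)
        if remainder:
            raise ValueError('num_cores_per_machine must be a multiple of '
                             'num_mpiprocs_per_machine')
        return quotient
    # Only num_cores_per_mpiproc given (or neither): use it as-is.
    return num_cores_per_mpiproc
```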
-
__module__
= 'aiida.schedulers.plugins.slurm'¶
-
-
class
aiida.schedulers.plugins.slurm.
SlurmScheduler
[source]¶ Bases:
aiida.schedulers.scheduler.Scheduler
Support for the SLURM scheduler (http://slurm.schedmd.com/).
-
__abstractmethods__
= frozenset({})¶
-
__module__
= 'aiida.schedulers.plugins.slurm'¶
-
_abc_impl
= <_abc_data object>¶
-
_features
= {'can_query_by_user': False}¶
-
_get_detailed_jobinfo_command
(jobid)[source]¶ Return the command to run to get the detailed information on a job, even after the job has finished.
The output text is just retrieved, and returned for logging purposes. --parsable splits the fields with a pipe (|), adding a pipe also at the end.
-
_get_joblist_command
(jobs=None, user=None)[source]¶ The command to report full information on existing jobs.
Separates the fields with the _field_separator string. Field order: jobnum, state, walltime, queue[=partition], user, numnodes, numcores, title.
-
_get_submit_command
(submit_script)[source]¶ Return the string to execute to submit a given script.
- Args:
- submit_script: the path of the submit script relative to the working
directory. IMPORTANT: submit_script should be already escaped.
-
_get_submit_script_header
(job_tmpl)[source]¶ Return the submit script header, using the parameters from the job_tmpl.
- Args:
job_tmpl: a JobTemplate instance with relevant parameters set.
TODO: truncate the title if too long
-
_job_resource_class
¶ alias of
SlurmJobResource
-
_logger
= <Logger aiida.scheduler.slurm (REPORT)>¶
-
_parse_joblist_output
(retval, stdout, stderr)[source]¶ Parse the queue output string, as returned by executing the command from _get_joblist_command, which is here implemented as a list of lines, one for each job, with _field_separator as separator. The order is described in the _get_joblist_command function.
Return a list of JobInfo objects, one for each job, with the relevant parameters implemented.
- Note: depending on the scheduler configuration, finished jobs may
or may not appear here. This function returns one element for each job found in the joblist output; missing jobs (for whatever reason) simply will not appear here.
-
_parse_kill_output
(retval, stdout, stderr)[source]¶ Parse the output of the kill command.
To be implemented by the plugin.
- Returns
True if everything seems ok, False otherwise.
-
_parse_submit_output
(retval, stdout, stderr)[source]¶ Parse the output of the submit command, as returned by executing the command returned by _get_submit_command command.
To be implemented by the plugin.
Return a string with the JobID.
-
_parse_time_string
(string, fmt='%Y-%m-%dT%H:%M:%S')[source]¶ Parse a time string in the format returned by the scheduler and return a datetime object.
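With the default format shown above, such parsing reduces to a stdlib call; a minimal sketch (the function name is reused only for illustration):

```python
from datetime import datetime

def parse_time_string(string, fmt='%Y-%m-%dT%H:%M:%S'):
    """Parse a scheduler time string into a datetime object.

    The default format matches ISO-like timestamps such as
    '2019-05-02T13:45:00'.
    """
    return datetime.strptime(string, fmt)
```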
-
fields
= [('%i', 'job_id'), ('%t', 'state_raw'), ('%r', 'annotation'), ('%B', 'executing_host'), ('%u', 'username'), ('%D', 'number_nodes'), ('%C', 'number_cpus'), ('%R', 'allocated_machines'), ('%P', 'partition'), ('%l', 'time_limit'), ('%M', 'time_used'), ('%S', 'dispatch_time'), ('%j', 'job_name'), ('%V', 'submission_time')]¶
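The (format-code, attribute) pairs above can be joined into an output-format string for squeue's -o option. A sketch under the assumption that a multi-character field separator is used; the separator value here is illustrative, not the plugin's actual one.

```python
# The (squeue %-code, JobInfo attribute) pairs listed above.
FIELDS = [
    ('%i', 'job_id'), ('%t', 'state_raw'), ('%r', 'annotation'),
    ('%B', 'executing_host'), ('%u', 'username'), ('%D', 'number_nodes'),
    ('%C', 'number_cpus'), ('%R', 'allocated_machines'), ('%P', 'partition'),
    ('%l', 'time_limit'), ('%M', 'time_used'), ('%S', 'dispatch_time'),
    ('%j', 'job_name'), ('%V', 'submission_time'),
]

def build_squeue_format(separator='^^^'):
    """Join the squeue %-codes with the field separator.

    The resulting string would be passed to `squeue -o`; output lines
    can then be split on the same separator to recover the fields.
    """
    return separator.join(code for code, _attr in FIELDS)
```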
-
-
class
aiida.schedulers.plugins.test_direct.
TestParserGetJobList
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
Tests to verify that the function _parse_joblist_output behaves correctly. The tests parse a string defined above, so they can run offline.
-
__module__
= 'aiida.schedulers.plugins.test_direct'¶
-
-
class
aiida.schedulers.plugins.test_lsf.
TestParserBjobs
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
Tests to verify that the function _parse_joblist_output behaves correctly. The tests parse a string defined above, so they can run offline.
-
__module__
= 'aiida.schedulers.plugins.test_lsf'¶
-
-
class
aiida.schedulers.plugins.test_lsf.
TestParserBkill
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
-
__module__
= 'aiida.schedulers.plugins.test_lsf'¶
-
-
class
aiida.schedulers.plugins.test_lsf.
TestParserSubmit
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
-
__module__
= 'aiida.schedulers.plugins.test_lsf'¶
-
-
class
aiida.schedulers.plugins.test_lsf.
TestSubmitScript
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
-
__module__
= 'aiida.schedulers.plugins.test_lsf'¶
-
-
class
aiida.schedulers.plugins.test_pbspro.
TestParserQstat
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
Tests to verify that the function _parse_joblist_output behaves correctly. The tests parse a string defined above, so they can run offline.
-
__module__
= 'aiida.schedulers.plugins.test_pbspro'¶
-
-
class
aiida.schedulers.plugins.test_pbspro.
TestSubmitScript
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
-
__module__
= 'aiida.schedulers.plugins.test_pbspro'¶
-
test_submit_script_with_num_cores_per_machine
()[source]¶ Test to verify that the submit script is generated correctly when only num_cores_per_machine is specified.
-
test_submit_script_with_num_cores_per_machine_and_mpiproc1
()[source]¶ Test to verify that the submit script is generated correctly when both num_cores_per_machine and num_cores_per_mpiproc are passed with consistent values. It should pass the check: res.num_cores_per_mpiproc * res.num_mpiprocs_per_machine = res.num_cores_per_machine
-
-
class
aiida.schedulers.plugins.test_sge.
TestCommand
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
-
__module__
= 'aiida.schedulers.plugins.test_sge'¶
-
-
class
aiida.schedulers.plugins.test_slurm.
TestParserSqueue
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
Tests to verify that the function _parse_joblist_output behaves correctly. The tests parse a string defined above, so they can run offline.
-
__module__
= 'aiida.schedulers.plugins.test_slurm'¶
-
-
class
aiida.schedulers.plugins.test_slurm.
TestSubmitScript
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
-
__module__
= 'aiida.schedulers.plugins.test_slurm'¶
-
test_submit_script_with_num_cores_per_machine
()[source]¶ Test to verify that the submit script is generated correctly when only num_cores_per_machine is specified.
-
test_submit_script_with_num_cores_per_machine_and_mpiproc1
()[source]¶ Test to verify that the submit script is generated correctly when both num_cores_per_machine and num_cores_per_mpiproc are passed with consistent values. It should pass the check: res.num_cores_per_mpiproc * res.num_mpiprocs_per_machine = res.num_cores_per_machine
-
-
class
aiida.schedulers.plugins.test_slurm.
TestTimes
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
-
__module__
= 'aiida.schedulers.plugins.test_slurm'¶
-
-
class
aiida.schedulers.plugins.test_torque.
TestParserQstat
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
Tests to verify that the function _parse_joblist_output behaves correctly. The tests parse a string defined above, so they can run offline.
-
__module__
= 'aiida.schedulers.plugins.test_torque'¶
-
-
class
aiida.schedulers.plugins.test_torque.
TestSubmitScript
(methodName='runTest')[source]¶ Bases:
unittest.case.TestCase
-
__module__
= 'aiida.schedulers.plugins.test_torque'¶
-
test_submit_script_with_num_cores_per_machine
()[source]¶ Test to verify that the submit script is generated correctly when only num_cores_per_machine is specified.
-
test_submit_script_with_num_cores_per_machine_and_mpiproc1
()[source]¶ Test to verify that the submit script is generated correctly when both num_cores_per_machine and num_cores_per_mpiproc are passed with consistent values. It should pass the check: res.num_cores_per_mpiproc * res.num_mpiprocs_per_machine = res.num_cores_per_machine
-
Plugin for PBS/Torque. This has been tested on Torque v.2.4.16 (from Ubuntu).
-
class
aiida.schedulers.plugins.torque.
TorqueScheduler
[source]¶ Bases:
aiida.schedulers.plugins.pbsbaseclasses.PbsBaseClass
Subclass to support the Torque scheduler.
Redefines only what needs to change from the base class.
-
__abstractmethods__
= frozenset({})¶
-
__module__
= 'aiida.schedulers.plugins.torque'¶
-
_abc_impl
= <_abc_data object>¶
-