Queue Management & Job History#

This tutorial covers the Python API for interacting with the SLURM job queue and accounting system:

  • ``SQueue`` — inspect and filter the live queue, wait for jobs to finish

  • ``SAcct`` — query job history and CPU-hour usage via sacct

These classes are especially useful for automating multi-job workflows directly from Python: generate scripts, submit them, monitor progress, and analyse accounting data — all without leaving your notebook or script.

Demo mode — the cell below automatically detects whether SLURM is available. If squeue is not on your PATH (e.g. when building the docs or running locally), it installs lightweight patches that return realistic synthetic data so every cell executes without a real cluster. On an actual cluster the patches are skipped and all calls hit your live queue.

[1]:

# ── Demo mode ─────────────────────────────────────────────────────────────────
# Patches subprocess.run and the blocking-wait methods so this notebook runs
# end-to-end without a real SLURM installation.
#
# Detection is automatic: if `squeue` is on PATH the patch is skipped and
# the notebook hits your live cluster instead. Remove this cell entirely if
# you prefer to control the behaviour manually.
# ─────────────────────────────────────────────────────────────────────────────
import shutil
from unittest.mock import MagicMock, patch

from slurm_script_generator.squeue import SAcct, SQueue, SQueueJob

_SLURM_AVAILABLE = shutil.which("squeue") is not None

if _SLURM_AVAILABLE:
    print("SLURM detected — using live data. Demo patches are inactive.")
else:
    # ── Fake squeue output ────────────────────────────────────────────────────
    # squeue --format uses ASCII unit-separator (\x1f) between 11 fields:
    #   job_id user name state partition num_nodes num_cpus
    #   time_used time_limit reason priority
    _S = "\x1f"

    def _sq(*fields):
        return _S.join(str(f) for f in fields)

    _SQUEUE_OUTPUT = "\n".join(
        [
            _sq(1001, "alice", "train_resnet", "R", "gpu", 2, 64, "2:13:05", "24:00:00", "(None)", 100),
            _sq(1002, "alice", "train_bert", "R", "gpu", 2, 64, "1:07:22", "24:00:00", "(None)", 90),
            _sq(1003, "alice", "train_vit", "PD", "gpu", 2, 64, "0:00:00", "24:00:00", "Priority", 80),
            _sq(1004, "bob", "preprocess", "R", "cpu", 1, 8, "0:45:11", "2:00:00", "(None)", 70),
            _sq(1005, "bob", "postprocess", "PD", "cpu", 1, 8, "0:00:00", "2:00:00", "Resources", 60),
            _sq(1006, "carol", "eval_run", "R", "gpu", 1, 32, "0:44:11", "8:00:00", "(None)", 50),
            _sq(1007, "carol", "sweep_01", "PD", "gpu", 1, 16, "0:00:00", "4:00:00", "Priority", 40),
            _sq(1008, "dave", "mpi_benchmark", "R", "cpu", 4, 128, "5:02:00", "12:00:00", "(None)", 30),
        ]
    )

    # ── Fake sacct output ─────────────────────────────────────────────────────
    # sacct --parsable2 uses "|" between 10 fields:
    #   JobID User JobName State Partition AllocNodes AllocCPUS
    #   Elapsed CPUTimeRAW ExitCode
    # CPUTimeRAW = AllocCPUS * elapsed_seconds
    _SACCT_OUTPUT = "\n".join(
        [
            "2001|alice|train_resnet|COMPLETED|gpu|2|64|3:15:00|748800|0:0",
            "2002|alice|train_bert|COMPLETED|gpu|2|64|2:30:00|576000|0:0",
            "2003|alice|train_vit|FAILED|gpu|2|64|0:45:00|172800|1:0",
            "2004|alice|train_resnet|COMPLETED|gpu|2|64|3:00:00|691200|0:0",
            "2005|bob|preprocess|COMPLETED|cpu|1|8|0:55:00|26400|0:0",
            "2006|bob|postprocess|TIMEOUT|cpu|1|8|2:00:00|57600|0:0",
            "2007|bob|mpi_job|COMPLETED|cpu|4|64|6:10:00|1416960|0:0",
            "2008|carol|eval_run|COMPLETED|gpu|1|32|1:22:00|157440|0:0",
            "2009|carol|sweep_01|COMPLETED|gpu|1|16|1:05:00|62400|0:0",
            "2010|carol|sweep_02|CANCELLED by 1234|gpu|1|16|0:12:00|11520|0:0",
            "2011|dave|mpi_benchmark|COMPLETED|cpu|4|128|5:02:00|2319360|0:0",
        ]
    )

    _job_counter = [9999]

    def _fake_run(cmd, **kwargs):
        result = MagicMock(returncode=0, stderr="")
        name = cmd[0] if cmd else ""
        if name == "squeue":
            result.stdout = _SQUEUE_OUTPUT
        elif name == "sacct":
            result.stdout = _SACCT_OUTPUT
        elif name == "sbatch":
            _job_counter[0] += 1
            result.stdout = f"Submitted batch job {_job_counter[0]}\n"
        else:
            result.stdout = ""
        return result

    def _fake_sq_wait(
        self,
        *,
        job_id=None,
        job_name=None,
        user=None,
        poll_interval=30,
        timeout=None,
        verbose=True,
    ):
        desc = job_name or (f"job {job_id}" if job_id else f"user {user}")
        if verbose:
            print(f"~ [demo] Waiting for {desc!r} — 0 active jobs found.")
            print("✓ All matching jobs have finished.")

    def _fake_sqjob_wait(self, poll_interval=30, timeout=None, verbose=True):
        if verbose:
            print(f"~ [demo] Waiting for job {self.job_id} — already finished.")
            print("✓ Done.")

    patch("subprocess.run", side_effect=_fake_run).start()
    patch.object(SQueue, "wait_until_done", _fake_sq_wait).start()
    patch.object(SQueueJob, "wait_until_done", _fake_sqjob_wait).start()

    print("Demo mode active — synthetic SLURM data in use (no real cluster needed).")
Demo mode active — synthetic SLURM data in use (no real cluster needed).
[2]:
from slurm_script_generator.squeue import SQueue, SAcct, SQueueJob, SAcctJob

1. Inspecting the live queue#

SQueue() runs squeue immediately and caches the result. Printing it renders a coloured per-user summary table.

[3]:
q = SQueue()
print(q)
SLURM Queue  ·  8 jobs total  ·  5 running  ·  3 pending
═════════════════════════════════════════════════════════
  User    Jobs   Running   Pending   Nodes (R)   CPUs (R)
─────────────────────────────────────────────────────────
  alice      3         2         1           4        128
  dave       1         1         0           4        128
  bob        2         1         1           1          8
  carol      2         1         1           1         32
─────────────────────────────────────────────────────────
  TOTAL      8         5         3          10        296
═════════════════════════════════════════════════════════

Basic counts are available directly:

[4]:
print(f"Total jobs in queue : {len(q)}")
print(f"Running             : {len(q.running_jobs())}")
print(f"Pending             : {len(q.pending_jobs())}")
print(f"Users with jobs     : {q.users()}")
Total jobs in queue : 8
Running             : 5
Pending             : 3
Users with jobs     : ['alice', 'bob', 'carol', 'dave']

Use refresh() to re-poll squeue and update the cached data in-place:

[5]:
q.refresh()
print(f"{len(q)} jobs after refresh")
8 jobs after refresh

Restrict all squeue calls to a single user at construction time — useful when you only care about your own jobs. (In demo mode the patched squeue ignores the filter, so the synthetic table below still shows every user; on a real cluster only that user's jobs appear.)

[6]:
import os

my_user = os.environ.get("USER", "alice")
my_queue = SQueue(user=my_user)
print(my_queue)
SLURM Queue  ·  8 jobs total  ·  5 running  ·  3 pending
═════════════════════════════════════════════════════════
  User    Jobs   Running   Pending   Nodes (R)   CPUs (R)
─────────────────────────────────────────────────────────
  alice      3         2         1           4        128
  dave       1         1         0           4        128
  bob        2         1         1           1          8
  carol      2         1         1           1         32
─────────────────────────────────────────────────────────
  TOTAL      8         5         3          10        296
═════════════════════════════════════════════════════════

2. Filtering jobs#

q.jobs() returns a list of SQueueJob objects and accepts any combination of filters.

[7]:
# All jobs from a specific user
alice_jobs = q.jobs(user="alice")
print(f"alice has {len(alice_jobs)} job(s) in the queue")

# Running jobs only
running = q.jobs(state="R")
print(f"{len(running)} job(s) currently running")

# Pending jobs on the GPU partition
gpu_pending = q.jobs(partition="gpu", state="PD")
print(f"{len(gpu_pending)} job(s) pending on GPU partition")

# Name glob — match all 'train_*' jobs
training = q.jobs(job_name="train_*")
print(f"{len(training)} training job(s) in queue")

# Combine filters: alice's running training jobs
alice_running_training = q.jobs(user="alice", job_name="train_*", state="R")
print(f"alice has {len(alice_running_training)} running training job(s)")
alice has 3 job(s) in the queue
5 job(s) currently running
2 job(s) pending on GPU partition
3 training job(s) in queue
alice has 2 running training job(s)

Each result is an SQueueJob with all the fields you’d get from squeue:

[8]:
for job in q.jobs(user="alice"):
    print(
        f"  [{job.job_id}] {job.name:30s}  {job.state_name:12s}"
        f"  {job.partition:8s}  nodes={job.num_nodes}  cpus={job.num_cpus}"
        f"  used={job.time_used}  limit={job.time_limit}"
    )
  [1001] train_resnet                    Running       gpu       nodes=2  cpus=64  used=2:13:05  limit=24:00:00
  [1002] train_bert                      Running       gpu       nodes=2  cpus=64  used=1:07:22  limit=24:00:00
  [1003] train_vit                       Pending       gpu       nodes=2  cpus=64  used=0:00:00  limit=24:00:00

Convenience boolean properties on each job:

[9]:
for job in q.jobs(user="alice"):
    print(f"  {job.name}: running={job.is_running}, pending={job.is_pending}, active={job.is_active}")
  train_resnet: running=True, pending=False, active=True
  train_bert: running=True, pending=False, active=True
  train_vit: running=False, pending=True, active=True
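The time_used and time_limit strings follow SLURM's elapsed format ([days-]hours:minutes:seconds, shortened to mm:ss for jobs under an hour). When you need to sort or threshold on them, a small stdlib-only converter is enough; the helper name below is illustrative, not part of the library:

```python
def slurm_elapsed_to_seconds(elapsed: str) -> int:
    """Convert SLURM elapsed strings like '2:13:05', '45:11', or '1-04:00:00' to seconds."""
    days = 0
    if "-" in elapsed:                 # optional leading 'D-' day component
        day_part, elapsed = elapsed.split("-", 1)
        days = int(day_part)
    parts = [int(p) for p in elapsed.split(":")]
    while len(parts) < 3:              # pad '45:11' (mm:ss) or '30' (ss) forms
        parts.insert(0, 0)
    hours, minutes, seconds = parts
    return (days * 24 + hours) * 3600 + minutes * 60 + seconds
```

For example, `sorted(q.jobs(state="R"), key=lambda j: slurm_elapsed_to_seconds(j.time_used), reverse=True)` lists running jobs longest-first.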

3. Per-user and per-partition breakdowns#

The grouping methods return dicts mapping a key to a list of SQueueJob objects — handy for building custom reports.

[10]:
# Per-user summary
print(f"{'User':<12} {'Jobs':>6} {'Running':>9} {'Pending':>9} {'Nodes':>7} {'CPUs':>6}")
print("-" * 52)
for user, jobs in q.jobs_by_user().items():
    running = [j for j in jobs if j.is_running]
    pending = [j for j in jobs if j.is_pending]
    nodes   = sum(j.num_nodes for j in running)
    cpus    = sum(j.num_cpus  for j in running)
    print(f"{user:<12} {len(jobs):>6} {len(running):>9} {len(pending):>9} {nodes:>7} {cpus:>6}")
User           Jobs   Running   Pending   Nodes   CPUs
----------------------------------------------------
alice             3         2         1       4    128
bob               2         1         1       1      8
carol             2         1         1       1     32
dave              1         1         0       4    128
[11]:
# Per-partition breakdown
for partition, jobs in q.jobs_by_partition().items():
    running = [j for j in jobs if j.is_running]
    nodes   = sum(j.num_nodes for j in running)
    print(f"  {partition}: {len(jobs)} jobs, {len(running)} running, {nodes} nodes in use")
  gpu: 5 jobs, 3 running, 5 nodes in use
  cpu: 3 jobs, 2 running, 5 nodes in use
[12]:
# Per-state breakdown
for state, jobs in q.jobs_by_state().items():
    print(f"  {state}: {len(jobs)} job(s)")
  R: 5 job(s)
  PD: 3 job(s)
[13]:
# summary() collects the same counts into a flat dict
s = q.summary()
print(s)
{'total_jobs': 8, 'running': 5, 'pending': 3, 'users': {'alice': 3, 'bob': 2, 'carol': 2, 'dave': 1}, 'by_state': {'PD': 3, 'R': 5}}
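Because summary() returns plain Python data, it drops straight into logging or a one-line status report. A sketch against the dict printed above (the values are hard-coded here only so the snippet stands alone):

```python
# Flat summary dict as returned by q.summary() in the demo queue
s = {"total_jobs": 8, "running": 5, "pending": 3,
     "users": {"alice": 3, "bob": 2, "carol": 2, "dave": 1},
     "by_state": {"PD": 3, "R": 5}}

running_pct = 100 * s["running"] / s["total_jobs"]
busiest = max(s["users"], key=s["users"].get)    # user with the most queued jobs
status = f"{running_pct:.0f}% running ({s['running']}/{s['total_jobs']}), busiest user: {busiest}"
print(status)
```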

4. Submitting a job from Python and waiting for it#

The simplest workflow: generate a script, submit it with sbatch, capture the job ID, then block until it finishes.

[14]:
import subprocess
from slurm_script_generator.slurm_script import SlurmScript

# 1. Build the script
script = SlurmScript(
    job_name="my_run",
    nodes=1,
    ntasks_per_node=8,
    time="02:00:00",
    custom_commands=["python train.py --config config.yaml"],
)
script.save("my_run.sh")

# 2. Submit and capture the job ID
result = subprocess.run(
    ["sbatch", "my_run.sh"],
    capture_output=True, text=True, check=True,
)
job_id = int(result.stdout.strip().split()[-1])  # "Submitted batch job 12345"
print(f"Submitted job {job_id}")

# 3. Wait for it to finish (polls every 30 s by default)
q = SQueue()
q.wait_until_done(job_id=job_id, poll_interval=30)
print("Job finished — proceeding with post-processing")
Submitted job 10000
~ [demo] Waiting for 'job 10000' — 0 active jobs found.
✓ All matching jobs have finished.
Job finished — proceeding with post-processing
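The split()[-1] one-liner works for the default "Submitted batch job <id>" message but misparses if any extra text trails the ID. A regex-based alternative is more robust (a sketch; passing --parsable to sbatch, which makes it print the bare job ID, is another option):

```python
import re

def parse_sbatch_output(stdout: str) -> int:
    """Extract the job ID from sbatch's 'Submitted batch job <id>' message."""
    m = re.search(r"Submitted batch job (\d+)", stdout)
    if m is None:
        raise ValueError(f"no job ID found in sbatch output: {stdout!r}")
    return int(m.group(1))
```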

5. Submitting a parameter sweep and waiting for all jobs#

Give every job in the sweep a common name prefix so you can wait for the whole batch with a single glob pattern.

[15]:
import subprocess
from slurm_script_generator.slurm_script import SlurmScript
from slurm_script_generator.squeue import SQueue

learning_rates = [1e-4, 5e-4, 1e-3, 5e-3]
job_ids = []

for i, lr in enumerate(learning_rates):
    job_name = f"sweep_{i:02d}"
    script = SlurmScript(
        job_name=job_name,
        nodes=1,
        ntasks_per_node=8,
        time="04:00:00",
        custom_commands=[f"python train.py --lr {lr} --output results/{job_name}"],
    )
    script.save(f"{job_name}.sh")

    result = subprocess.run(
        ["sbatch", f"{job_name}.sh"],
        capture_output=True, text=True, check=True,
    )
    job_id = int(result.stdout.strip().split()[-1])
    job_ids.append(job_id)
    print(f"  {job_name}  (lr={lr})  →  job {job_id}")

print(f"\n{len(job_ids)} jobs submitted. Waiting for all sweep_* jobs to finish...")

q = SQueue()
q.wait_until_done(job_name="sweep_*", poll_interval=60)
print("Sweep complete — all jobs finished.")
  sweep_00  (lr=0.0001)  →  job 10001
  sweep_01  (lr=0.0005)  →  job 10002
  sweep_02  (lr=0.001)  →  job 10003
  sweep_03  (lr=0.005)  →  job 10004

4 jobs submitted. Waiting for all sweep_* jobs to finish...
~ [demo] Waiting for 'sweep_*' — 0 active jobs found.
✓ All matching jobs have finished.
Sweep complete — all jobs finished.

6. Waiting on individual job objects#

If you already have an SQueueJob object (e.g. from filtering), call .wait_until_done() on it directly — no need to pass the ID separately.

[16]:
q = SQueue()

# Find the specific job you care about
matches = q.jobs(job_name="my_run")
if not matches:
    print("Job not found in queue — it may have already finished.")
else:
    job = matches[0]
    print(repr(job))
    print(f"State: {job.state_name}")

    # Block until this specific job is done
    job.wait_until_done(poll_interval=30)
Job not found in queue — it may have already finished.

7. Waiting with a timeout#

Pass timeout= (seconds) to avoid waiting indefinitely. A TimeoutError is raised if the jobs are still active when the limit is reached.

[17]:
q = SQueue()

try:
    q.wait_until_done(
        job_name="train_*",
        poll_interval=60,
        timeout=7200,  # give up after 2 hours
    )
    print("All training jobs finished within the time limit.")
except TimeoutError as e:
    print(f"Warning: some jobs did not finish in time — {e}")
    # You can inspect the queue to see what's still running
    still_active = q.jobs(job_name="train_*")
    print(f"{len(still_active)} job(s) still active: {[j.job_id for j in still_active]}")
~ [demo] Waiting for 'train_*' — 0 active jobs found.
✓ All matching jobs have finished.
All training jobs finished within the time limit.

8. Waiting silently (for use in scripts)#

Pass verbose=False to suppress all progress messages — useful when wait_until_done is embedded inside a larger automated pipeline.

[18]:
q = SQueue()
q.wait_until_done(job_name="preprocess_*", poll_interval=30, verbose=False)
# Execution continues here once all matching jobs have left the queue
print("Preprocessing done — loading results")
Preprocessing done — loading results

9. Querying job history with SAcct#

SAcct wraps sacct to give you accounting records for completed, failed, and cancelled jobs. It is read-only and does not affect the live queue.

[19]:
from slurm_script_generator.squeue import SAcct

# Look back 30 days (default is 7)
acct = SAcct(days=30)

s = acct.summary()
print(f"Jobs in last 30 days : {s['total']}")
print(f"  Completed          : {s['completed']}")
print(f"  Failed             : {s['failed']}")
print(f"  Timeout            : {s['timeout']}")
print(f"  Cancelled          : {s['cancelled']}")
print(f"Total CPU-hours      : {s['cpu_hours']:.1f}")
Jobs in last 30 days : 11
  Completed          : 8
  Failed             : 1
  Timeout            : 1
  Cancelled          : 1
Total CPU-hours      : 1733.5
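The cpu_hours figure comes from sacct's CPUTimeRAW field (allocated CPUs × elapsed seconds, as noted in the demo cell); dividing by 3600 yields CPU-hours. You can reproduce the 1733.5 total directly from the demo records:

```python
# CPUTimeRAW column from the eleven demo sacct records, in CPU-seconds
cputime_raw = [748800, 576000, 172800, 691200, 26400, 57600,
               1416960, 157440, 62400, 11520, 2319360]

total_cpu_hours = sum(cputime_raw) / 3600
print(f"{total_cpu_hours:.1f}")   # 1733.5, matching the summary above
```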
[20]:
# Per-partition CPU-hour breakdown
print(f"{'Partition':<12} {'Jobs':>6} {'CPU-hours':>12}")
print("-" * 32)
for partition, jobs in acct.jobs_by_partition().items():
    cpu_h = sum(j.cpu_hours for j in jobs)
    print(f"{partition:<12} {len(jobs):>6} {cpu_h:>12.1f}")
Partition      Jobs    CPU-hours
--------------------------------
gpu               7        672.3
cpu               4       1061.2
[21]:
# Find failed jobs and see their exit codes
failed = acct.jobs(state="FAILED")
print(f"{len(failed)} failed job(s) in the last 30 days:")
for job in failed[:10]:  # show at most 10
    print(f"  [{job.job_id}] {job.name:30s}  exit={job.exit_code}  elapsed={job.elapsed}")
1 failed job(s) in the last 30 days:
  [2003] train_vit                       exit=1:0  elapsed=0:45:00
[22]:
# Filter by partition
gpu_jobs = acct.jobs(partition="gpu")
gpu_cpu_h = sum(j.cpu_hours for j in gpu_jobs)
print(f"GPU partition: {len(gpu_jobs)} jobs, {gpu_cpu_h:.1f} CPU-hours over the last 30 days")
GPU partition: 7 jobs, 672.3 CPU-hours over the last 30 days

Iterate directly over an SAcct instance to access every SAcctJob:

[23]:
# Compute success rate
total     = len(acct)
completed = sum(1 for j in acct if j.is_completed)
failed    = sum(1 for j in acct if j.is_failed)
timeout   = sum(1 for j in acct if j.is_timeout)

if total:
    print(f"Success rate : {100 * completed / total:.1f}%")
    print(f"Failure rate : {100 * failed    / total:.1f}%")
    print(f"Timeout rate : {100 * timeout   / total:.1f}%")
Success rate : 72.7%
Failure rate : 9.1%
Timeout rate : 9.1%
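Aggregate rates hide which jobs keep failing. Grouping history by job name makes recurring failures visible; the helper below is a hypothetical sketch that works on (name, is_failed) pairs so it stays independent of the SAcctJob class:

```python
from collections import defaultdict

def failure_counts(records):
    """Map job name -> (runs, failures) from (name, failed) pairs."""
    stats = defaultdict(lambda: [0, 0])
    for name, failed in records:
        stats[name][0] += 1            # total runs under this name
        stats[name][1] += int(failed)  # how many of them failed
    return {name: tuple(v) for name, v in stats.items()}
```

Feed it `[(j.name, j.is_failed) for j in acct]` and sort by failures to get a quick flakiness report.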

10. History across multiple users (admin)#

If you have SLURM operator or admin privileges, you can query any user’s history. Without those privileges sacct only returns your own jobs.

[24]:
# Per-user CPU-hour leaderboard for the last 7 days
acct_all = SAcct(days=7)

rows = []
for user, jobs in acct_all.jobs_by_user().items():
    cpu_h     = sum(j.cpu_hours for j in jobs)
    completed = sum(1 for j in jobs if j.is_completed)
    failed    = sum(1 for j in jobs if j.is_failed)
    rows.append((user, len(jobs), completed, failed, cpu_h))

rows.sort(key=lambda r: -r[4])  # sort by CPU-hours

print(f"{'User':<12} {'Jobs':>6} {'Done':>6} {'Failed':>7} {'CPU-hours':>12}")
print("-" * 48)
for user, total, done, failed, cpu_h in rows:
    print(f"{user:<12} {total:>6} {done:>6} {failed:>7} {cpu_h:>12.1f}")
User           Jobs   Done  Failed    CPU-hours
------------------------------------------------
dave              1      1       0        644.3
alice             4      3       1        608.0
bob               3      2       0        416.9
carol             3      2       0         64.3

11. Complete end-to-end workflow#

This example puts everything together: define a set of experiments, generate scripts for each, submit them, monitor progress, wait for completion, and finally summarise the accounting data.

[25]:
import subprocess
from slurm_script_generator.slurm_script import SlurmScript
from slurm_script_generator.squeue import SQueue, SAcct

# ── 1. Define experiments ──────────────────────────────────────────────────
experiments = [
    {"name": "exp_small",  "size": "small",  "nodes": 1, "time": "01:00:00"},
    {"name": "exp_medium", "size": "medium", "nodes": 2, "time": "04:00:00"},
    {"name": "exp_large",  "size": "large",  "nodes": 4, "time": "08:00:00"},
]

# ── 2. Generate scripts and submit ────────────────────────────────────────
job_ids = []
for exp in experiments:
    script = SlurmScript(
        job_name=exp["name"],
        nodes=exp["nodes"],
        ntasks_per_node=8,
        time=exp["time"],
        partition="gpu",
        custom_commands=[
            f"python run.py --size {exp['size']} --output results/{exp['name']}"
        ],
    )
    script.save(f"{exp['name']}.sh")

    result = subprocess.run(
        ["sbatch", f"{exp['name']}.sh"],
        capture_output=True, text=True, check=True,
    )
    job_id = int(result.stdout.strip().split()[-1])
    job_ids.append(job_id)
    print(f"  {exp['name']:15s} → job {job_id}")

# ── 3. Snapshot the queue immediately after submission ────────────────────
q = SQueue()
my_jobs = q.jobs(job_name="exp_*")
print(f"\nQueue has {len(q)} total jobs; {len(my_jobs)} are our experiments.")
for job in my_jobs:
    print(f"  [{job.job_id}] {job.name:15s}  {job.state_name}  {job.reason}")

# ── 4. Wait for all experiments to finish ─────────────────────────────────
print("\nWaiting for all exp_* jobs to complete...")
q.wait_until_done(job_name="exp_*", poll_interval=120, timeout=86400)
print("All experiments finished!")

# ── 5. Review accounting data ──────────────────────────────────────────────
acct = SAcct(days=1)  # look back 24 h to capture today's runs
exp_jobs = acct.jobs()  # filter further if needed

total_cpu_h = sum(j.cpu_hours for j in exp_jobs)
completed   = [j for j in exp_jobs if j.is_completed]
failed      = [j for j in exp_jobs if j.is_failed]

print(f"\nAccounting summary (last 24 h):")
print(f"  Completed   : {len(completed)}")
print(f"  Failed      : {len(failed)}")
print(f"  CPU-hours   : {total_cpu_h:.1f}")

if failed:
    print("\nFailed jobs:")
    for job in failed:
        print(f"  [{job.job_id}] {job.name}  exit={job.exit_code}  elapsed={job.elapsed}")
  exp_small       → job 10005
  exp_medium      → job 10006
  exp_large       → job 10007

Queue has 8 total jobs; 0 are our experiments.

Waiting for all exp_* jobs to complete...
~ [demo] Waiting for 'exp_*' — 0 active jobs found.
✓ All matching jobs have finished.
All experiments finished!

Accounting summary (last 24 h):
  Completed   : 8
  Failed      : 1
  CPU-hours   : 1733.5

Failed jobs:
  [2003] train_vit  exit=1:0  elapsed=0:45:00

12. Combining SlurmScript + SQueue for reproducible pipelines#

A common pattern for multi-stage pipelines: run stage 1, wait, run stage 2.

[26]:
import subprocess
from slurm_script_generator.slurm_script import SlurmScript
from slurm_script_generator.squeue import SQueue


# TODO: Submission of a job should be done with script.submit() or similar,
# which returns the job ID.
# This is a placeholder
def submit(script: SlurmScript) -> int:
    """Save, submit, and return the SLURM job ID."""
    job_name = next((p.value for p in script.pragmas if p.arg_varname == "job_name"), None)
    path = f"{job_name}.sh"
    script.save(path)
    result = subprocess.run(
        ["sbatch", path], capture_output=True, text=True, check=True
    )
    return int(result.stdout.strip().split()[-1])


# ── Stage 1: preprocessing ────────────────────────────────────────────────
preprocess = SlurmScript(
    job_name="stage1_preprocess",
    nodes=1, ntasks_per_node=16,
    time="01:00:00",
    custom_commands=["python preprocess.py --input raw/ --output processed/"],
)
jid1 = submit(preprocess)
print(f"Stage 1 submitted → job {jid1}")

SQueue().wait_until_done(job_id=jid1, poll_interval=60)
print("Stage 1 done.")

# ── Stage 2: training (depends on stage 1 output) ─────────────────────────
train = SlurmScript(
    job_name="stage2_train",
    nodes=2, ntasks_per_node=8,
    time="08:00:00",
    partition="gpu",
    custom_commands=["python train.py --data processed/ --output models/"],
)
jid2 = submit(train)
print(f"Stage 2 submitted → job {jid2}")

SQueue().wait_until_done(job_id=jid2, poll_interval=120)
print("Stage 2 done. Pipeline complete!")
Stage 1 submitted → job 10008
~ [demo] Waiting for 'job 10008' — 0 active jobs found.
✓ All matching jobs have finished.
Stage 1 done.
Stage 2 submitted → job 10009
~ [demo] Waiting for 'job 10009' — 0 active jobs found.
✓ All matching jobs have finished.
Stage 2 done. Pipeline complete!
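An alternative to polling between stages is SLURM's native job dependencies: submitting stage 2 with --dependency=afterok:<stage-1-id> lets the scheduler hold it until stage 1 exits successfully, so your Python process need not stay alive in between. A small hypothetical helper that builds the sbatch command:

```python
def sbatch_cmd(script_path, depends_on=None):
    """Build an sbatch argv, optionally gated on earlier jobs via --dependency=afterok."""
    cmd = ["sbatch"]
    if depends_on:
        ids = ":".join(str(j) for j in depends_on)
        cmd.append(f"--dependency=afterok:{ids}")
    cmd.append(script_path)
    return cmd
```

For example, `subprocess.run(sbatch_cmd("stage2_train.sh", depends_on=[jid1]), capture_output=True, text=True, check=True)` submits stage 2 immediately; it sits pending with reason Dependency until job jid1 finishes with exit code 0.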