Publisher Tools

The publisher tools section of pdView lets you monitor model runs, check their status, and debug failures. It’s available to users whose organisation has the model:publish permission.

Accessing publisher tools

Navigate to the /publisher path in pdView. If your account has publishing permissions, you’re redirected to the model runs view. If you don’t have the required permission, you see a “Forbidden” page.

Page layout

The publisher tools page has two areas:

  • Left sidebar — lists all models belonging to your organisation as clickable buttons. Select a model to view its run history.
  • Main panel — displays the run history for the selected model, including the model’s display name and ID at the top.

Run history

The main panel shows a live-updating list of model runs for the selected model. The list refreshes automatically every 10 seconds, so you don’t need to manually reload the page to see new runs.

Runs are displayed 25 at a time. Click Load More at the bottom to fetch older runs.
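Because the list both refreshes on a timer and grows via Load More, a client needs to combine newly fetched runs with those already on screen without duplicating entries. A minimal sketch of that merge, assuming hypothetical `id` and `createdAt` field names (not the actual pdView API schema):

```typescript
// Hypothetical shape of a run record; the field names here are
// assumptions for illustration, not a documented schema.
interface Run {
  id: string;
  createdAt: string; // ISO 8601 timestamp
}

// Merge a freshly fetched page into the runs already on screen,
// deduplicating by run ID so the 10-second refresh and "Load More"
// can feed the same list. Newest runs sort first.
function mergeRuns(existing: Run[], fetched: Run[]): Run[] {
  const byId = new Map<string, Run>();
  for (const run of [...existing, ...fetched]) {
    byId.set(run.id, run);
  }
  return [...byId.values()].sort(
    (a, b) => b.createdAt.localeCompare(a.createdAt),
  );
}
```

Deduplicating by ID means a run that appears in both a refresh and an older page is shown only once.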

Run cards

Each run appears as a card showing:

  • Status badge — a colour-coded label indicating the current state of the run:

    Status                 Colour
    Pending                Grey
    Input Files Prepared   Purple
    Running                Blue
    Completed              Green
    Failed                 Red
  • Created timestamp — when the run was queued.

  • Run ID — a unique identifier displayed in monospace text.

Click the expand arrow on the right side of a card to reveal the full run details.
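The badge states above can be modelled as a small enumeration. The status values and colours mirror the table; the key spellings and the notion of a “terminal” status are assumptions for illustration:

```typescript
// The five run statuses and their badge colours, as listed above.
// Key spellings are assumptions, not pdView's actual identifiers.
type RunStatus =
  | "pending"
  | "input_files_prepared"
  | "running"
  | "completed"
  | "failed";

const STATUS_COLOUR: Record<RunStatus, string> = {
  pending: "grey",
  input_files_prepared: "purple",
  running: "blue",
  completed: "green",
  failed: "red",
};

// A run has finished once it reaches a terminal status; only
// Completed and Failed are terminal in the lifecycle above.
function isTerminal(status: RunStatus): boolean {
  return status === "completed" || status === "failed";
}
```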

Run details

The expanded view shows additional information about the run:

  • Created At — when the run was queued.
  • Run Datetime — the temporal context of the forecast (the date and time the forecast is “for”).
  • Started At — when the model container began executing.
  • Completed At — when output processing finished.
  • Errored At — when the run failed, if applicable.
  • Sensitivity — which sensitivity scenario this run belongs to, or empty for base runs.
  • Container ID — the Docker container identifier, useful for cross-referencing with infrastructure logs.
  • Error — for failed runs, the error message is displayed in a monospace block for easy reading.
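The timestamp fields above are enough to eyeball where a run spent its time. A sketch of deriving queue wait and execution time, assuming hypothetical camel-case field names based on the labels listed (not a documented schema):

```typescript
// Hypothetical timestamp fields on an expanded run; names are
// assumptions based on the labels above.
interface RunDetails {
  createdAt: string;   // when the run was queued
  startedAt?: string;  // when the container began executing
  completedAt?: string; // when output processing finished
  erroredAt?: string;  // when the run failed, if applicable
}

// Rough queue wait and execution time in seconds, where the
// relevant timestamps are present.
function runTimings(run: RunDetails): { queuedSecs?: number; execSecs?: number } {
  const secs = (a: string, b: string) =>
    (Date.parse(b) - Date.parse(a)) / 1000;
  const endedAt = run.completedAt ?? run.erroredAt;
  return {
    queuedSecs: run.startedAt ? secs(run.createdAt, run.startedAt) : undefined,
    execSecs: run.startedAt && endedAt ? secs(run.startedAt, endedAt) : undefined,
  };
}
```

A long queue wait points at scheduling or capacity, while a long execution time points at the model container itself.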

Common workflows

Verify a newly published model

After publishing a model for the first time, open publisher tools and select your model from the sidebar. Check that the first run has moved through the status stages and reached Completed.

Debug a failed run

If a run shows a Failed badge, expand the card to read the error message. The error text and container ID can help you identify whether the failure was caused by your model code, a missing input file, or an infrastructure issue.

Monitor automatic run cadence

For models using the automatic run mode, use the run history to verify that runs are triggering as expected. Check that new runs appear at the expected frequency and that sensitivity runs complete alongside base runs.
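The cadence check above can be automated from run creation times. A sketch that flags gaps larger than the expected interval; the tolerance multiplier and function shape are assumptions for illustration:

```typescript
// Flag gaps in a run history that exceed the expected cadence.
// The tolerance default is an assumption; adjust it to your
// model's configured schedule and acceptable jitter.
function findCadenceGaps(
  createdAts: string[],      // run creation times, any order
  expectedEverySecs: number, // e.g. 3600 for hourly runs
  tolerance = 1.5,           // allow 50% jitter before flagging
): Array<[string, string]> {
  const sorted = [...createdAts].sort();
  const gaps: Array<[string, string]> = [];
  for (let i = 1; i < sorted.length; i++) {
    const gapSecs =
      (Date.parse(sorted[i]) - Date.parse(sorted[i - 1])) / 1000;
    if (gapSecs > expectedEverySecs * tolerance) {
      gaps.push([sorted[i - 1], sorted[i]]);
    }
  }
  return gaps;
}
```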

Next steps

To learn how models get published, see Publishing. For details on how automatic runs are triggered, see Run Modes & Scheduling.