Comparing Models & Sensitivities

pdView lets you overlay multiple models on the same chart, enable “what if” sensitivity scenarios, and view input aggregations alongside your forecast. These features help you benchmark model revisions, explore alternative assumptions, and understand the data driving your predictions.

Model comparison

Model comparison lets you view forecast output from multiple models side by side on the same chart.
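pdView renders the overlay for you; the short Python sketch below is only a generic illustration of what "side by side on the same chart" means. The model names and values are made up for the example and are not part of pdView.

  # Illustration only: overlaying two forecast series on one chart,
  # similar in spirit to pdView's comparison overlay.
  import pandas as pd
  import matplotlib.pyplot as plt

  index = pd.date_range("2024-01-01", periods=24, freq="h")
  forecasts = pd.DataFrame(
      {
          "price_model_v1": 50 + 0.4 * pd.Series(range(24), index=index),
          "price_model_v2": 52 + 0.3 * pd.Series(range(24), index=index),
      }
  )

  ax = forecasts.plot(title="Forecast comparison (illustrative)")
  ax.set_xlabel("Delivery period")
  ax.set_ylabel("Forecast value")
  plt.show()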

To compare models:

  1. On the forecast dashboard, click the Compare Model button in the sub-navigation bar. This button appears next to the category selector.
  2. In the modal that opens, select one or more models using the checkboxes. The list shows models that share the same time horizon as your current model.
  3. Close the modal. The selected comparison models are now overlaid on the forecast chart and table.

Comparison models appear as tags below the model title in the title area. To remove a model from the comparison, click the X on its tag.

Note: When model comparison is active, you can select only one run datetime at a time. The run datetime dropdown limits your selection so that every model is compared at the same run datetime.

When to use model comparison

  • Compare two revisions of the same model to validate improvements.
  • Benchmark your model against a different forecasting approach.
  • Evaluate how output differs across models for the same time horizon.

Sensitivities

Sensitivities are alternative “what if” runs of your model. Each sensitivity applies modified assumptions to the input data (for example, a 10% increase in demand) and produces a separate forecast for the same run datetime.
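If it helps to picture what a sensitivity run does, the Python sketch below shows the idea in generic terms: the same model logic is run twice for one run datetime, once on the base inputs and once with demand scaled up by 10%. The run_forecast function, column names, and numbers are invented for illustration and are not pdView's API.

  import pandas as pd

  def run_forecast(inputs: pd.DataFrame) -> pd.Series:
      # Hypothetical model: forecast price rises with demand.
      return 20 + 0.05 * inputs["demand_mw"]

  run_datetime = pd.Timestamp("2024-01-01 06:00")
  periods = pd.date_range(run_datetime, periods=12, freq="h")
  base_inputs = pd.DataFrame({"demand_mw": 900.0}, index=periods)

  base_case = run_forecast(base_inputs)

  # Sensitivity: same run datetime, modified assumption (+10% demand).
  high_demand_inputs = base_inputs.assign(demand_mw=base_inputs["demand_mw"] * 1.10)
  high_demand = run_forecast(high_demand_inputs)

  # Same index, so the scenarios can be compared period by period.
  impact = high_demand - base_case
  print(impact.head())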

To view sensitivity scenarios:

  1. Open the options sidebar by clicking the options icon in the sub-navigation bar.
  2. Expand the Sensitivities section. Each available sensitivity is listed as a checkbox.
  3. Check one or more sensitivities. Their forecast data is overlaid on the chart alongside the base case.

Sensitivities share the same run datetime as their base run, so you can directly compare how different assumptions affect the forecast.

Note: If no sensitivities are configured for the current model, this section displays “No sensitivities found.”

When to use sensitivities

  • See how a high-demand scenario changes price forecasts.
  • Test the impact of modified supply assumptions on output.
  • Explore the range of outcomes under different conditions.

To learn how to configure sensitivities as a model author, see Sensitivities & Scenarios.

Input aggregations

Input aggregations show summary views of the data your model consumed during a run. They help you understand what input conditions drove a particular forecast.
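As a rough mental model, an input aggregation is the kind of roll-up shown in the Python sketch below: raw input observations summarised into a time series that can be charted next to the forecast. The column name and the daily-mean choice are illustrative assumptions, not pdView internals.

  import numpy as np
  import pandas as pd

  # Made-up raw input data: hourly demand observations for one week.
  rng = np.random.default_rng(0)
  hours = pd.date_range("2024-01-01", periods=7 * 24, freq="h")
  raw_inputs = pd.DataFrame(
      {"demand_mw": 800 + 50 * rng.standard_normal(len(hours))},
      index=hours,
  )

  # One plausible aggregation: mean demand per delivery day.
  daily_mean_demand = raw_inputs["demand_mw"].resample("D").mean()
  print(daily_mean_demand)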

To display input aggregations:

  1. Open the options sidebar and expand the Inputs section.
  2. Check one or more input aggregations. Each aggregation is labelled with a name and may include an info icon — hover over it to see a description of what the aggregation represents.
  3. Review the secondary chart that appears below the main forecast chart; it shows the selected aggregation data as time-series lines.

Input aggregations are colour-coded and update based on your selected categories. If no input aggregations are configured for the current model, the section displays “No inputs available.”

When to use input aggregations

  • View demand or weather data alongside a price forecast to understand causality.
  • Check whether unusual input conditions explain an unexpected forecast.
  • Present input context alongside forecast output during model reviews.

Next steps

To review how your model performs against actuals over time, see Performance Metrics. To dig into the raw data behind any run, see Exploring Data with SQL.