This repository was archived by the owner on May 28, 2024. It is now read-only.

Time series benchmark model #200

@amcarter

Description


To assess the performance of the benchmark AR model, I generated forecasts for the validation datasets. The model with slope and temperature as covariates performed very poorly, even at the sites with lots of data:
[figure: forecast performance at the validation sites]

The RMSEs for predicting mean DO are high across all sites, averaging ~3.6.

I think the model performs worse here than when we fit a single site because its parameters are pooled across all of these sites, so they cannot be optimized for any one site, even a site with lots of data.
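To illustrate the pooling argument, here is a minimal sketch (not the actual benchmark model in this repo) using synthetic data: a linear AR(1) model of mean DO with water temperature and a static site slope as covariates, fit once with parameters pooled across sites and once per site, then scored by one-step-ahead RMSE on a held-out validation window. All data, column names, and coefficients below are made up; the real model, covariates, and fitting procedure presumably differ.

```python
import numpy as np

def design_matrix(do, temp, slope):
    """Lag-1 AR design: predict DO[t] from DO[t-1], temperature[t], and a static site slope."""
    X = np.column_stack([
        np.ones(len(do) - 1),          # intercept
        do[:-1],                        # DO at t-1 (autoregressive term)
        temp[1:],                       # water temperature at t
        np.full(len(do) - 1, slope),    # static site slope
    ])
    return X, do[1:]

def fit_ols(X, y):
    """Least-squares fit of the linear AR(1) + covariate model."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def rmse(obs, pred):
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

rng = np.random.default_rng(0)

# Fake data for three sites with different dynamics, standing in for the
# real training/validation splits (hypothetical structure).
sites = {}
for i, slope in enumerate([0.001, 0.01, 0.05]):
    n = 400
    temp = 15 + 5 * np.sin(np.linspace(0, 8 * np.pi, n)) + rng.normal(0, 1, n)
    do = np.empty(n)
    do[0] = 8.0
    for t in range(1, n):
        # each site gets its own "true" coefficients, so pooling is a compromise
        do[t] = (4.0 + 0.6 * do[t - 1] - (0.05 + 2 * slope) * temp[t]
                 + rng.normal(0, 0.3))
    sites[f"site_{i}"] = dict(do=do, temp=temp, slope=slope, split=300)

# Pooled fit: one coefficient vector shared across all sites.
X_all, y_all = [], []
for s in sites.values():
    X, y = design_matrix(s["do"][: s["split"]], s["temp"][: s["split"]], s["slope"])
    X_all.append(X)
    y_all.append(y)
beta_pooled = fit_ols(np.vstack(X_all), np.concatenate(y_all))

print("site      pooled-RMSE  per-site-RMSE")
for name, s in sites.items():
    split = s["split"]
    # per-site fit on the same training window
    X_tr, y_tr = design_matrix(s["do"][:split], s["temp"][:split], s["slope"])
    beta_site = fit_ols(X_tr, y_tr)
    # one-step-ahead predictions over the validation window
    X_val, y_val = design_matrix(s["do"][split - 1:], s["temp"][split - 1:], s["slope"])
    print(f"{name}   {rmse(y_val, X_val @ beta_pooled):11.2f}"
          f"  {rmse(y_val, X_val @ beta_site):13.2f}")
```

Even in this toy setting, the pooled coefficients are a compromise across sites, so the per-site fits tend to produce lower validation RMSEs, which mirrors the pattern we are seeing.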

Given how poorly this model performs, I don't think it makes sense to try optimizing it further with different variables.
