Retry 429 errors with exponential backoff #36

@jwadolowski

Description

As mentioned in #35, it's quite easy to lock yourself out and end up with a permanently broken terraform plan / terraform apply. After a bit of thinking, I came to the conclusion that it'd be great to introduce retries when the requests-per-minute threshold is hit. Even a slow-ish plan/apply is much better than the current behaviour.

Current state

  1. Terraform immediately fails when the requests-per-minute threshold is hit

Expected state

  1. Terraform should respect the API limit and keep retrying in the background with exponential backoff

Noteworthy considerations

  • it'd be great if the API informed the user about remaining requests (i.e. in the form of x-ratelimit-* HTTP headers; this is how the GitHub and OpenAI APIs implement it). As far as I can see, there's no such thing at the time of writing
  • does it make sense to track "in-flight" requests on the client (provider) side? Is this even feasible? Would it be reliable?
  • a feature flag (part of the provider block?) that enables/disables the retry logic (opt-out? It's hard for me to imagine that'd be the case, but I bet some users may prefer the current behaviour)
  • customizable timeouts and deadline exceeded errors (docs)
