Description
TLDR
- ngrok's API is rate limited to 120 requests/minute
- every single CIDR block has to be added individually to a given IP policy (there's no batch option)
- each addition (or even a check whether a given entry exists) issues a separate API request
- a Terraform codebase with 2 IP policies (50 entries each) followed by terraform plan generates 100+ requests, which results in terraform apply failure (see the .tf snippet below)
Context
Quite often I find myself in a situation that forces me to use an ngrok endpoint as a CDN's origin to see what exactly is being exchanged between the edge server and my app. To make it as close to a production setup as possible, I'd like to protect my ngrok endpoint with an IP policy and/or additional rules.
CDNs (and many other service providers) usually come with a wide range of public CIDRs (from a few dozen to thousands). Here are a few examples:
- Fastly
- Cloudfront
- Github
- etc
At first the problem seems trivial - a combination of data sources from the aforementioned providers (e.g. aws_ip_ranges) and ngrok_ip_policy / ngrok_ip_policy_rule should do the trick. Unfortunately, the longer the list the harder it gets. In fact, a few slightly longer IP policies kick off a vicious cycle you cannot break out of.
Steps to reproduce
The following code snippet would be the best way to explain the situation:
terraform {
  required_version = "~> 1.9"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    ngrok = {
      source  = "ngrok/ngrok"
      version = "~> 0.3"
    }
  }
}

provider "aws" {}
provider "ngrok" {}

locals {
  ngrok_ip_policy_max_rules = 50
  cloudfront_ip_range_url   = "https://ip-ranges.amazonaws.com/ip-ranges.json"

  cloudfront_ipv4_range_buckets = chunklist(
    toset(data.aws_ip_ranges.cloudfront.cidr_blocks),
    local.ngrok_ip_policy_max_rules
  )

  cloudfront_ip_ranges = flatten(
    [
      for index, cidrs in local.cloudfront_ipv4_range_buckets : [
        for cidr in cidrs : {
          bucket_id = index
          cidr      = cidr
          type      = "ipv4"
        }
      ]
    ]
  )
}

data "aws_ip_ranges" "cloudfront" {
  services = ["cloudfront"]
}

resource "ngrok_ip_policy" "cloudfront_ipv4" {
  for_each = {
    for index, cidrs in local.cloudfront_ipv4_range_buckets : index => cidrs
  }

  description = length(local.cloudfront_ipv4_range_buckets) == 1 ? "Cloudfront IPv4" : "Cloudfront IPv4 (${each.key})"
  metadata    = jsonencode({ url = local.cloudfront_ip_range_url })
}

resource "ngrok_ip_policy_rule" "cloudfront" {
  for_each = {
    for item in local.cloudfront_ip_ranges : "${item.type}_${item.bucket_id}_${item.cidr}" => item
  }

  action       = "allow"
  cidr         = each.value.cidr
  ip_policy_id = ngrok_ip_policy.cloudfront_ipv4[each.value.bucket_id].id
}
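For clarity, here's a minimal, standalone illustration (toy CIDR values, not the real Cloudfront ranges) of how chunklist and flatten shape the data in the locals block above - each object in the flattened list ends up as one ngrok_ip_policy_rule resource, i.e. one API call:

locals {
  # Toy input: 5 CIDRs with a bucket size of 2 (the real config uses 50)
  example_cidrs   = ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"]
  example_buckets = chunklist(local.example_cidrs, 2)
  # => [["10.0.0.0/24", "10.0.1.0/24"], ["10.0.2.0/24", "10.0.3.0/24"], ["10.0.4.0/24"]]

  example_rules = flatten([
    for index, cidrs in local.example_buckets : [
      for cidr in cidrs : {
        bucket_id = index
        cidr      = cidr
        type      = "ipv4"
      }
    ]
  ])
}

output "policy_count" {
  value = length(local.example_buckets) # 3 IP policies
}

output "rule_count" {
  value = length(local.example_rules) # 5 rules, i.e. 5 individual API calls
}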
- Create an ngrok.tf file from the snippet above
- Run AWS_PROFILE="foo" AWS_REGION="eu-central-1" NGROK_API_KEY="bar" terraform plan
- Run AWS_PROFILE="foo" AWS_REGION="eu-central-1" NGROK_API_KEY="bar" terraform apply
Current state
- The very first plan triggers 0 requests to the ngrok API (expected, there's no TF state yet); the first apply resulted in 247 API requests and a never-ending stream of Error: HTTP 429: Your account is rate limited to 120 API requests per minute. [ERR_NGROK_226] messages
- A subsequent plan initiates even more API requests and the vicious cycle begins: some resources have just been created, therefore Terraform needs to check whether they're correctly configured (in this particular case Terraform triggered 62 GET calls to the ngrok API)
- Obviously the next apply (if you run it right after the plan) has just 120-62 requests available. Eventually that results in a "blocked" plan, because Terraform cannot even compare its state data to the actual situation (with 120+ rules the plan alone exhausts the 120 req/min limit)
Expected state
- Expose a batch IP policy update (according to the API reference there's no such thing at all) in order to reduce the overall number of API calls - e.g. something along the lines of the sketch after this list
- It'd be nice to see some clever rate limit handling (e.g. expose available/remaining limits via HTTP headers in 429 responses + retry with exponential backoff based on that data) (see Retry 429 errors with exponential backoff #36)
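To make the batch idea concrete, here's a purely hypothetical sketch of what an inline-rules interface could look like on the Terraform side (the rules argument does not exist in the ngrok provider today - that's exactly the ask); one policy with all of its rules would then map to a single batch API call, similar to the AWS example further below:

# Hypothetical resource shape - `rules` is NOT a real ngrok provider argument
resource "ngrok_ip_policy" "cloudfront_ipv4" {
  for_each = {
    for index, cidrs in local.cloudfront_ipv4_range_buckets : index => cidrs
  }

  description = "Cloudfront IPv4 (${each.key})"
  metadata    = jsonencode({ url = local.cloudfront_ip_range_url })

  # The whole 50-entry bucket would be sent in one create/update request
  # instead of 50 separate ngrok_ip_policy_rule calls.
  rules = [
    for cidr in each.value : {
      action = "allow"
      cidr   = cidr
    }
  ]
}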
Noteworthy facts
- A higher requests/minute limit doesn't solve the problem in the long run (it'd have to go into the 1000s to support larger IP policies, and I'm quite sure you don't want to increase it that much)
- An IP policy can store up to 50 rules
- Here's how AWS does it - there's just 1 AWS API request per security group (the entire cidr_blocks list gets transformed into the API request payload):
resource "aws_security_group" "allow_cloudfront_ipv4" {
for_each = {
for index, cidrs in local.cloudfront_ipv4_range_buckets : index => cidrs
}
name = "allow_cloudfront_ipv4_${each.key}"
description = "Allow inbound traffic from cloudfront"
vpc_id = var.aws_default_vpc_id
tags = {
Name = "allow_cloudfront_ipv4_${each.key}"
}
}
resource "aws_security_group_rule" "cloudfront_ipv4" {
for_each = {
for index, cidrs in local.cloudfront_ipv4_range_buckets : index => cidrs
}
type = "ingress"
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = each.value
security_group_id = aws_security_group.allow_cloudfront_ipv4[each.key].id
}
- An example of an AWS security group batch update request:
POST https://ec2.eu-central-1.amazonaws.com/
Host: ec2.eu-central-1.amazonaws.com
Content-Length: 2829
...
Action: AuthorizeSecurityGroupIngress
GroupId: <id_of_allow_cloudfront_ipv4_0>
IpPermissions.1.FromPort: 0
IpPermissions.1.IpProtocol: tcp
IpPermissions.1.IpRanges.1.CidrIp: 108.138.0.0/15
IpPermissions.1.IpRanges.10.CidrIp: 118.193.97.64/26
IpPermissions.1.IpRanges.11.CidrIp: 119.147.182.0/25
IpPermissions.1.IpRanges.12.CidrIp: 119.147.182.128/26
IpPermissions.1.IpRanges.13.CidrIp: 120.232.236.0/25
...
IpPermissions.1.ToPort: 65535
Version: 2016-11-15