dr_rate_limit() adds a per-route or per-app cap on how
many requests are allowed in a rolling time window. Over-budget requests
are rejected with HTTP 429 Too Many Requests and a
Retry-After header, before the request is
dispatched to R — so a flood of clients can’t saturate the dispatcher or
your handler.
library(drogonR)
app <- dr_app() |>
dr_get("/health", function(req) "ok") |>
dr_get("/api/users", function(req) "users") |>
# 100 requests per 60 s, applied to every route under /api/
dr_rate_limit(capacity = 100L, window = 60, routes = "/api/")
dr_serve(app, port = 8080L)

/health is unaffected (it is outside the
/api/ prefix). /api/users is allowed up to 100
hits per 60 s; subsequent hits get 429 until the window slides
forward.
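To make the window semantics concrete, here is a plain-R sketch of a sliding-window counter. It is illustrative only: make_sliding_window is a hypothetical helper written for this page, not drogonR's actual (C++-side) implementation.

```r
make_sliding_window <- function(capacity, window) {
  hits <- numeric(0)                    # timestamps of accepted requests
  function(now) {
    hits <<- hits[hits > now - window]  # forget hits outside the trailing window
    if (length(hits) < capacity) {
      hits <<- c(hits, now)
      TRUE                              # request passes through
    } else {
      FALSE                             # request would be answered with 429
    }
  }
}

allow <- make_sliding_window(capacity = 3, window = 60)
c(allow(0), allow(1), allow(2), allow(3))  # TRUE TRUE TRUE FALSE
allow(62)                                  # TRUE: the hit at t = 2 has left the window
```

Once the oldest hit falls out of the trailing 60 s, capacity frees up again, which is exactly the "until the window slides forward" behaviour described above.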
The check runs on Drogon’s I/O thread, immediately after route matching and before the request enters the R-side dispatch pipeline. That means an over-budget request is rejected without ever touching the R session: neither the dispatcher nor your handler sees it. It also means requests that match no registered route are never counted by dr_rate_limit() in the current release; throttle them with a reverse proxy. The Retry-After header carries the rule’s window (rounded up to seconds), a conservative upper bound on how long the client should back off.
Algorithm (type =)

dr_rate_limit(app, capacity = 10L, window = 1, type = "sliding_window")
dr_rate_limit(app, capacity = 10L, window = 1, type = "fixed_window")
dr_rate_limit(app, capacity = 10L, window = 1, type = "token_bucket")

"sliding_window" (default) — counts requests in the trailing window seconds. Smooth: no clock-edge bursts, but slightly more bookkeeping than a fixed window.

"fixed_window" — capacity per fixed wall-clock interval of window seconds. Cheapest, but allows a burst of 2 * capacity across a window boundary.

"token_bucket" — refills at capacity / window tokens per second, with capacity being the maximum burst. Use this when steady throughput matters more than tight per-window limits.

All three are implemented by Drogon’s RateLimiter class; drogonR just wraps each instance in a small mutex so the I/O threads can call isAllowed() concurrently without racing.
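The token-bucket refill arithmetic is easy to sketch in plain R. make_token_bucket below is a hypothetical illustration of the rule described above (refill at capacity / window tokens per second, burst capped at capacity), not the RateLimiter internals.

```r
make_token_bucket <- function(capacity, window) {
  tokens <- capacity                # start full: `capacity` is the maximum burst
  last   <- 0
  function(now) {
    # refill at capacity / window tokens per second, capped at capacity
    tokens <<- min(capacity, tokens + (now - last) * capacity / window)
    last   <<- now
    if (tokens >= 1) {
      tokens <<- tokens - 1
      TRUE
    } else {
      FALSE
    }
  }
}

allow <- make_token_bucket(capacity = 2, window = 1)  # refill rate: 2 tokens/s
c(allow(0), allow(0), allow(0))  # TRUE TRUE FALSE: the burst budget is spent
allow(0.5)                       # TRUE: half a second refilled one token
```

Note how the limit is enforced on average throughput rather than per window: a client that paces itself at the refill rate is never rejected, regardless of window boundaries.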
Scope (scope =)

dr_rate_limit(app, capacity = 10L, window = 1, routes = "/api/",
              scope = "per_route")  # default
dr_rate_limit(app, capacity = 10L, window = 1, routes = "/api/",
              scope = "global")

"per_route" — each matched route gets its own bucket. /api/a and /api/b are throttled independently.

"global" — one bucket shared across every matched route. 10 hits to /api/a plus 0 hits to /api/b already exhaust the bucket; both routes start returning 429 until the window opens up.

A route may match several rules at once (dr_rate_limit() is additive). To pass through, the request must satisfy every matching rule.
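The global-scope arithmetic can be sketched with a single shared counter. This is a plain-R toy (the time window is omitted to keep it short), not the package's implementation.

```r
# One shared budget for every matched route, as with scope = "global"
bucket_left <- 10L

hit <- function(route) {
  if (bucket_left > 0L) {
    bucket_left <<- bucket_left - 1L
    200L                          # dispatched to the handler
  } else {
    429L                          # rejected before reaching R
  }
}

vapply(rep("/api/a", 10), hit, integer(1))  # ten 200s: the shared budget is gone
hit("/api/b")                               # 429: /api/b never had its own bucket
```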
Routes (routes =)

routes is a character vector of path prefixes:

# all of /api/ AND /admin/, with separate budgets per route
dr_rate_limit(app, capacity = 100L, window = 60,
              routes = c("/api/", "/admin/"))

NULL (the default) matches every registered route — useful for an app-wide cap layered on top of per-area rules.
Per-client limits

Per-client (per-IP) caps are not provided by dr_rate_limit(). The bucket is shared
across all clients of the matched routes; if you need a per-client cap,
do it in front of drogonR (nginx limit_req, Caddy
rate_limit, Cloudflare, an API gateway, etc.). Doing per-IP
enforcement at the application layer would require maintaining a hash
table of clients keyed by source IP and pruning it under contention — a
lot of overhead for something a reverse proxy already does well.
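As a point of reference, a per-client cap in an nginx reverse proxy in front of drogonR could look like the sketch below; the zone name, rate, burst size, and upstream port are placeholders to adapt to your deployment.

```nginx
# 10 requests/s per client IP; bursts of up to 20 absorbed, extras get 429
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location /api/ {
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:8080;
    }
}
```

nginx then handles the per-IP hash table and pruning, and drogonR's own dr_rate_limit() rules still apply behind it as an aggregate cap.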
Rules are fixed at dr_serve() time. Add all dr_rate_limit() calls before starting the server; they cannot be modified live. Call dr_rate_limit() after registering routes so prefix matches resolve correctly.