Description
I have a use case where a third-party API I'm interacting with has a strict rate limit, and rather than responding with a 429 for each request past the limit, it responds with 429 for every request for ten seconds whenever the limit is exceeded.
Thus, I want to eagerly throttle my workers according to this limit. I imagine this is also useful in other cases where a dependency can only handle a certain level of throughput and lacks good resilience characteristics, so the consumer needs to handle load management itself.
Current Workaround
By setting `:throttle-ms` to a function backed by a Redis rate-limiter, which returns the number of milliseconds to wait before picking up a job, you can get close to adhering to the rate limit. However, `:throttle-ms` is currently considered after picking up a message, so each worker dequeues one message before it waits and you end up throttling to the configured limit plus the number of worker threads. Ideally, the rate-limit settings should not need to know the number of workers.
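For concreteness, the workaround looks roughly like the sketch below. The carmine-based fixed-window limiter and the options map at the bottom are illustrative assumptions, not the library's documented API; the only part taken from the description above is that `:throttle-ms` accepts a function returning the number of milliseconds to wait before picking up the next job.

```clojure
(ns example.throttle
  (:require [taoensso.carmine :as car]))

(def redis-conn {:pool {} :spec {:uri "redis://localhost:6379"}})

(def rate-limit 100)   ; max messages per window (assumed value)
(def window-ms 10000)  ; 10-second window, matching the API's penalty period

(defn rate-limit-wait-ms
  "Fixed-window limiter backed by Redis. Returns 0 when the next message may
  be picked up now, otherwise the milliseconds remaining in the current window."
  []
  (let [k "third-party-api:requests"
        n (car/wcar redis-conn (car/incr k))]
    (when (= n 1)
      ;; First request of a new window: start the window's expiry clock.
      (car/wcar redis-conn (car/pexpire k window-ms)))
    (if (<= n rate-limit)
      0
      (max 0 (car/wcar redis-conn (car/pttl k))))))

;; Hypothetical worker configuration; the option names besides :throttle-ms
;; are placeholders for whatever the library actually exposes.
(def worker-opts
  {:threads 5
   :throttle-ms rate-limit-wait-ms})
```

Because the function is only consulted after a message has already been dequeued, each of the worker threads can hold one message before the wait kicks in, which is the overshoot described above.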
Possible Solution Suggestions
- Move the `:throttle-ms` check to happen before picking up a message
  - Possibly make this configurable via a `:throttle-consideration` option of `:before-dequeue` vs. `:after-processing` (the latter being the default to maintain previous behavior)
- Add an option when creating a worker, `:work-rate-limiter`, that accepts a single argument (a map with the keys `:qname`, `:queue-size`, and `:worker`) and returns true if the next message should be dequeued and processed. If it returns false, the work loop would progress as if processing were instantaneous, skipping the dequeue and handler call (see the sketch after this list)
  - Another possible option here would be for `work-rate-limiter` to return a number of milliseconds to wait before checking again, and processing the message would only occur if the limiter function returned a value <= 0
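As a rough illustration of the second suggestion, a `:work-rate-limiter` function might look like the token-bucket sketch below. The in-process atom, the constants, and the refill logic are assumptions made for the example; only the option name, the argument-map keys, and the boolean return contract come from the suggestion itself. A real multi-process deployment would keep the bucket state in Redis instead.

```clojure
(ns example.work-rate-limiter)

(def ^:private max-tokens 100)                 ; burst capacity (assumed)
(def ^:private refill-per-ms (/ 100 10000.0))  ; 100 messages per 10 s (assumed)

;; In-process token bucket, purely for illustration.
(def ^:private bucket
  (atom {:tokens max-tokens :last-refill (System/currentTimeMillis)}))

(defn- refill [{:keys [tokens last-refill]}]
  (let [now (System/currentTimeMillis)]
    {:tokens      (min max-tokens (+ tokens (* refill-per-ms (- now last-refill))))
     :last-refill now}))

(defn work-rate-limiter
  "Returns true when the next message should be dequeued and processed.
  Receives a map with :qname, :queue-size, and :worker, which could drive
  per-queue limits; this sketch applies one global limit."
  [{:keys [qname queue-size worker]}]
  (:allowed? (swap! bucket
                    (fn [state]
                      (let [{:keys [tokens] :as state} (refill state)]
                        (if (>= tokens 1)
                          (-> state (update :tokens dec) (assoc :allowed? true))
                          (assoc state :allowed? false)))))))
```

The milliseconds-returning variant from the last sub-item would have the same shape, except the function would return the remaining wait time and the work loop would only dequeue when that value is <= 0.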
Clojurians' Slack thread for extra context: https://clojurians.slack.com/archives/C05M8GRL65Q/p1717003308524139?thread_ts=1716841423.174899&cid=C05M8GRL65Q