Limiter


Dead simple rate limit middleware for Go.

  • Simple API
  • "Store" approach for backend
  • Redis support (but not tied to it)
  • Middlewares: HTTP, FastHTTP and Gin

Installation

Using Go Modules

$ go get github.com/ulule/limiter/v3@v3.11.0

Usage

In five steps:

  • Create a limiter.Rate instance (the number of requests per period)
  • Create a limiter.Store instance (see Redis or In-Memory)
  • Create a limiter.Limiter instance that takes store and rate instances as arguments
  • Create a middleware instance using the middleware of your choice
  • Give the limiter instance to your middleware initializer

Example:

// Create a rate with the given limit (number of requests) for the given
// period (a time.Duration of your choice).
import "github.com/ulule/limiter/v3"

rate := limiter.Rate{
    Period: 1 * time.Hour,
    Limit:  1000,
}

// You can also use the simplified format "<limit>-<period>", with the given
// periods:
//
// * "S": second
// * "M": minute
// * "H": hour
// * "D": day
//
// Examples:
//
// * 5 reqs/second: "5-S"
// * 10 reqs/minute: "10-M"
// * 1000 reqs/hour: "1000-H"
// * 2000 reqs/day: "2000-D"
//
rate, err := limiter.NewRateFromFormatted("1000-H")
if err != nil {
    panic(err)
}

// Then, create a store. Here, we use the bundled Redis store. Any store
// compliant with the limiter.Store interface will do the job. The defaults are
// "limiter" as the Redis key prefix and a maximum of 3 retries for the key
// under race conditions.
import "github.com/ulule/limiter/v3/drivers/store/redis"

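// Note: "client" here is assumed to be an already configured go-redis client
// (see the Redis driver for the exact client type it expects).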
store, err := redis.NewStore(client)
if err != nil {
    panic(err)
}

// Alternatively, you can pass options to the store with the "WithOptions"
// function. For example, for the Redis store:
import "github.com/ulule/limiter/v3/drivers/store/redis"

store, err := redis.NewStoreWithOptions(client, limiter.StoreOptions{
    Prefix:   "your_own_prefix",
})
if err != nil {
    panic(err)
}

// Or use an in-memory store with a goroutine which clears expired keys.
import "github.com/ulule/limiter/v3/drivers/store/memory"

store := memory.NewStore()

// Then, create the limiter instance which takes the store and the rate as arguments.
// Now, you can give this instance to any supported middleware.
instance := limiter.New(store, rate)

// Alternatively, you can configure the limiter instance with several options.
instance := limiter.New(store, rate, limiter.WithClientIPHeader("True-Client-IP"), limiter.WithIPv6Mask(mask))

// Finally, give the limiter instance to your middleware initializer.
import "github.com/ulule/limiter/v3/drivers/middleware/stdlib"

middleware := stdlib.NewMiddleware(instance)

See the middleware examples for HTTP, FastHTTP and Gin in the drivers/middleware directory.
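
For example, a minimal end-to-end sketch using the bundled stdlib (net/http) middleware and the in-memory store (the rate, address and handler below are arbitrary choices, not prescribed by the library):

package main

import (
	"log"
	"net/http"

	"github.com/ulule/limiter/v3"
	"github.com/ulule/limiter/v3/drivers/middleware/stdlib"
	"github.com/ulule/limiter/v3/drivers/store/memory"
)

func main() {
	// 5 requests per second per client, using the "<limit>-<period>" format.
	rate, err := limiter.NewRateFromFormatted("5-S")
	if err != nil {
		log.Fatal(err)
	}

	// In-memory store; use the Redis store instead if you run several instances.
	store := memory.NewStore()

	// Create the limiter instance and wrap any http.Handler with the middleware.
	instance := limiter.New(store, rate)
	middleware := stdlib.NewMiddleware(instance)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})

	log.Fatal(http.ListenAndServe(":8080", middleware.Handler(handler)))
}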

How it works

The IP address of the request is used as a key in the store.

If the key does not exist in the store, we set a default value with an expiration period.

You will find two stores:

  • Redis: relies on TTL and incrementing the rate limit on each request.
  • In-Memory: relies on a fork of go-cache with a goroutine to clear expired keys using a default interval.

When the limit is reached, a 429 HTTP status code is sent.
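
For illustration, this is roughly what a middleware does under the hood, using Limiter.Get with the client IP as the key. This sketch resolves the IP directly from the remote address (ignoring the library's proxy-header options) and mirrors the X-RateLimit-* headers the bundled middlewares set; the handler wiring itself is just an example:

import (
	"net"
	"net/http"
	"strconv"

	"github.com/ulule/limiter/v3"
)

func rateLimit(instance *limiter.Limiter, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Resolve the key: here, the client IP taken from the remote address.
		ip, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			ip = r.RemoteAddr
		}

		// Get increments the counter for this key and sets the expiration on first use.
		context, err := instance.Get(r.Context(), ip)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}

		w.Header().Set("X-RateLimit-Limit", strconv.FormatInt(context.Limit, 10))
		w.Header().Set("X-RateLimit-Remaining", strconv.FormatInt(context.Remaining, 10))
		w.Header().Set("X-RateLimit-Reset", strconv.FormatInt(context.Reset, 10))

		// When the limit is reached, a 429 HTTP status code is sent.
		if context.Reached {
			http.Error(w, "Limit exceeded", http.StatusTooManyRequests)
			return
		}

		next.ServeHTTP(w, r)
	})
}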

Limiter behind a reverse proxy

Introduction

If your limiter is behind a reverse proxy, it could be difficult to obtain the "real" client IP.

Some reverse proxies, such as AWS ALB, pass through any header values they do not set themselves, for example True-Client-IP and X-Real-IP. Similarly, X-Forwarded-For is a list of comma-separated IPs that each traversed proxy appends to. The idea is that the first IP (added by the first proxy) is the true client IP, and each subsequent IP is another proxy along the path.

An attacker can spoof any of those headers, and the spoofed value could then be reported as the client IP.

By default, limiter doesn't trust any of those headers: you have to explicitly enable them in order to use them. If you enable them, always be aware that any header added by a (reverse) proxy not controlled by you is completely unreliable.

X-Forwarded-For

For example, if you make this request to your load balancer:

curl -X POST https://example.com/login -H "X-Forwarded-For: 1.2.3.4, 11.22.33.44"

And your server behind the load balancer receives this:

X-Forwarded-For: 1.2.3.4, 11.22.33.44, <actual client IP>

That means you can't use the X-Forwarded-For header, because it's unreliable and untrustworthy. So keep TrustForwardHeader disabled in your limiter options.

However, if you have configured your reverse proxy to always remove or overwrite the X-Forwarded-For and/or X-Real-IP headers, so that when you execute this (same) request:

curl -X POST https://example.com/login -H "X-Forwarded-For: 1.2.3.4, 11.22.33.44"

And your server behind the load balancer receives this:

X-Forwarded-For: <actual client IP>

Then, you can enable TrustForwardHeader in your limiter options.
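
For example (assuming the store and rate from the usage section above):

// Only enable this when the proxy in front of you always strips or overwrites
// the X-Forwarded-For / X-Real-IP headers.
instance := limiter.New(store, rate, limiter.WithTrustForwardHeader(true))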

Custom header

Many CDN and cloud providers add a custom header containing the client IP, for example (a non-exhaustive list):

  • Fastly-Client-IP from Fastly
  • CF-Connecting-IP from Cloudflare
  • X-Azure-ClientIP from Azure

You can use one of these headers via the ClientIPHeader limiter option.
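
For example, behind Cloudflare (again assuming the store and rate from the usage section above):

// Trust Cloudflare's CF-Connecting-IP header to identify the client.
instance := limiter.New(store, rate, limiter.WithClientIPHeader("CF-Connecting-IP"))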

None of the above

If none of the above solutions works for you, use a custom KeyGetter in your middleware.
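
For example, with the stdlib middleware you can key on something other than the IP address; userIDFromRequest below is a hypothetical helper you would provide yourself, not part of limiter:

import (
	"net/http"

	"github.com/ulule/limiter/v3/drivers/middleware/stdlib"
)

// Rate limit per authenticated user instead of per IP address.
// userIDFromRequest is a hypothetical helper, not provided by limiter.
middleware := stdlib.NewMiddleware(instance, stdlib.WithKeyGetter(func(r *http.Request) string {
	return userIDFromRequest(r)
}))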

You can use this excellent article to help you define the best strategy depending on your network topology and your security needs: https://adam-p.ca/blog/2022/03/x-forwarded-for/

If you have any ideas or suggestions on how we could simplify these steps, don't hesitate to raise an issue. We would like some feedback on how we could implement this in the Limiter API.

Thank you.

Why Yet Another Package

You could ask us: why yet another rate limit package?

Because existing packages did not suit our needs.

We tried a lot of alternatives:

  1. Throttled. This package uses the generic cell rate algorithm. To cite the documentation: "The algorithm has been slightly modified from its usual form to support limiting with an additional quantity parameter, such as for limiting the number of bytes uploaded". It is brilliant in terms of algorithm, but the documentation is quite unclear at the moment, we don't need the burst feature for now, it is impossible to get a correct Retry-After (when the limit is exceeded, we can still make a few requests because of the max burst), and it only supports http.Handler middleware (we use Gin). Currently, we only need to return 429 and X-Ratelimit-* headers for n reqs/duration.

  2. Speedbump. A good package, but maybe too lightweight. No Reset support, only one middleware for the Gin framework, and too Redis-coupled. We prefer a "store" approach.

  3. Tollbooth. A good one too, but it does both too much and too little. It limits by remote IP, path, methods, custom headers and basic auth usernames... but it does not provide any Redis support (only in-memory) or a ready-to-go middleware that sets X-Ratelimit-* headers. tollbooth.LimitByRequest(limiter, r) only returns an HTTP code.

  4. ratelimit. Probably the closest to our needs but, once again, too lightweight, with no middleware available and not active (the last commit was in August 2014). Some parts of the code (Redis) come from this project. It deserves much more love.

There are many other packages on GitHub, but most are either too lightweight, too old (they only support old Go versions) or unmaintained. So that's why we decided to create yet another one.

Contributing

Don't hesitate ;)