# Contributing
## Prerequisites
1. [Install Go][go-install].
2. Download the sources and switch to the working directory:
```bash
go get -u -d github.com/go-chi/chi
cd $GOPATH/src/github.com/go-chi/chi
```
## Submitting a Pull Request
A typical workflow is:
1. [Fork the repository.][fork]
2. [Create a topic branch.][branch]
3. Add tests for your change.
4. Run `go test`. If your tests pass, return to step 3 (a new test should fail before the change is implemented).
5. Implement the change and ensure the tests from step 4 now pass.
6. Run `goimports -w .` to ensure the new code conforms to the Go formatting guidelines.
7. [Add, commit and push your changes.][git-help]
8. [Submit a pull request.][pull-req]
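For reference, here is a condensed sketch of steps 2-7 as shell commands; the branch name and commit message are illustrative:
```bash
git checkout -b my-topic-branch   # step 2: create a topic branch (name is illustrative)
# step 3: add tests for your change, then:
go test ./...                     # step 4: new tests should fail first
# step 5: implement the change, then re-run go test until it passes
goimports -w .                    # step 6: format the new code
git add . && git commit -m "describe your change"   # step 7
git push origin my-topic-branch
```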
[go-install]: https://golang.org/doc/install
[fork]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo
[branch]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-branches
[git-help]: https://docs.github.com/en
[pull-req]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests
# <img alt="chi" src="https://cdn.rawgit.com/go-chi/chi/master/_examples/chi.svg" width="220" />
[![GoDoc Widget]][GoDoc]
`chi` is a lightweight, idiomatic and composable router for building Go HTTP services. It's
especially good at helping you write large REST API services that are kept maintainable as your
project grows and changes. `chi` is built on the new `context` package introduced in Go 1.7 to
handle signaling, cancelation and request-scoped values across a handler chain.
The focus of the project has been to seek out an elegant and comfortable design for writing
REST API servers. chi was written during the development of the Pressly API service that powers our
public API, which in turn powers all of our client-side applications.
The key considerations of chi's design are: project structure, maintainability, standard http
handlers (stdlib-only), developer productivity, and deconstructing a large system into many small
parts. The core router `github.com/go-chi/chi` is quite small (less than 1000 LOC), but we've also
included some useful/optional subpackages: [middleware](/middleware), [render](https://github.com/go-chi/render)
and [docgen](https://github.com/go-chi/docgen). We hope you enjoy it too!
## Install
`go get -u github.com/go-chi/chi/v5`
## Features
* **Lightweight** - cloc'd in ~1000 LOC for the chi router
* **Fast** - yes, see [benchmarks](#benchmarks)
* **100% compatible with net/http** - use any http or middleware pkg in the ecosystem that is also compatible with `net/http`
* **Designed for modular/composable APIs** - middlewares, inline middlewares, route groups and sub-router mounting
* **Context control** - built on new `context` package, providing value chaining, cancellations and timeouts
* **Robust** - in production at Pressly, Cloudflare, Heroku, 99Designs, and many others (see [discussion](https://github.com/go-chi/chi/issues/91))
* **Doc generation** - `docgen` auto-generates routing documentation from your source to JSON or Markdown
* **Go.mod support** - as of v5, go.mod support (see [CHANGELOG](https://github.com/go-chi/chi/blob/master/CHANGELOG.md))
* **No external dependencies** - plain ol' Go stdlib + net/http
## Examples
See [_examples/](https://github.com/go-chi/chi/blob/master/_examples/) for a variety of examples.
**As easy as:**
```go
package main
import (
"net/http"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
)
func main() {
r := chi.NewRouter()
r.Use(middleware.Logger)
r.Get("/", func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("welcome"))
})
http.ListenAndServe(":3000", r)
}
```
**REST Preview:**
Here is a little preview of what routing looks like with chi. Also take a look at the generated routing docs
in JSON ([routes.json](https://github.com/go-chi/chi/blob/master/_examples/rest/routes.json)) and in
Markdown ([routes.md](https://github.com/go-chi/chi/blob/master/_examples/rest/routes.md)).
I highly recommend reading the source of the [examples](https://github.com/go-chi/chi/blob/master/_examples/) listed
above; they show you all the features of chi and serve as a good form of documentation.
```go
import (
//...
"context"
"fmt"
"net/http"
"time"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
)
func main() {
r := chi.NewRouter()
// A good base middleware stack
r.Use(middleware.RequestID)
r.Use(middleware.RealIP)
r.Use(middleware.Logger)
r.Use(middleware.Recoverer)
// Set a timeout value on the request context (ctx), that will signal
// through ctx.Done() that the request has timed out and further
// processing should be stopped.
r.Use(middleware.Timeout(60 * time.Second))
r.Get("/", func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("hi"))
})
// RESTy routes for "articles" resource
r.Route("/articles", func(r chi.Router) {
r.With(paginate).Get("/", listArticles) // GET /articles
r.With(paginate).Get("/{month}-{day}-{year}", listArticlesByDate) // GET /articles/01-16-2017
r.Post("/", createArticle) // POST /articles
r.Get("/search", searchArticles) // GET /articles/search
// Regexp url parameters:
r.Get("/{articleSlug:[a-z-]+}", getArticleBySlug) // GET /articles/home-is-toronto
// Subrouters:
r.Route("/{articleID}", func(r chi.Router) {
r.Use(ArticleCtx)
r.Get("/", getArticle) // GET /articles/123
r.Put("/", updateArticle) // PUT /articles/123
r.Delete("/", deleteArticle) // DELETE /articles/123
})
})
// Mount the admin sub-router
r.Mount("/admin", adminRouter())
http.ListenAndServe(":3333", r)
}
func ArticleCtx(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
articleID := chi.URLParam(r, "articleID")
article, err := dbGetArticle(articleID)
if err != nil {
http.Error(w, http.StatusText(404), 404)
return
}
ctx := context.WithValue(r.Context(), "article", article)
next.ServeHTTP(w, r.WithContext(ctx))
})
}
func getArticle(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
article, ok := ctx.Value("article").(*Article)
if !ok {
http.Error(w, http.StatusText(422), 422)
return
}
w.Write([]byte(fmt.Sprintf("title:%s", article.Title)))
}
// A completely separate router for administrator routes
func adminRouter() http.Handler {
r := chi.NewRouter()
r.Use(AdminOnly)
r.Get("/", adminIndex)
r.Get("/accounts", adminListAccounts)
return r
}
func AdminOnly(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
perm, ok := ctx.Value("acl.permission").(YourPermissionType)
if !ok || !perm.IsAdmin() {
http.Error(w, http.StatusText(403), 403)
return
}
next.ServeHTTP(w, r)
})
}
```
## Router interface
chi's router is based on a kind of [Patricia Radix trie](https://en.wikipedia.org/wiki/Radix_tree).
The router is fully compatible with `net/http`.
Built on top of the tree is the `Router` interface:
```go
// Router consisting of the core routing methods used by chi's Mux,
// using only the standard net/http.
type Router interface {
http.Handler
Routes
// Use appends one or more middlewares onto the Router stack.
Use(middlewares ...func(http.Handler) http.Handler)
// With adds inline middlewares for an endpoint handler.
With(middlewares ...func(http.Handler) http.Handler) Router
// Group adds a new inline-Router along the current routing
// path, with a fresh middleware stack for the inline-Router.
Group(fn func(r Router)) Router
// Route mounts a sub-Router along a `pattern` string.
Route(pattern string, fn func(r Router)) Router
// Mount attaches another http.Handler along ./pattern/*
Mount(pattern string, h http.Handler)
// Handle and HandleFunc add routes for `pattern` that match
// all HTTP methods.
Handle(pattern string, h http.Handler)
HandleFunc(pattern string, h http.HandlerFunc)
// Method and MethodFunc add routes for `pattern` that match
// the `method` HTTP method.
Method(method, pattern string, h http.Handler)
MethodFunc(method, pattern string, h http.HandlerFunc)
// HTTP-method routing along `pattern`
Connect(pattern string, h http.HandlerFunc)
Delete(pattern string, h http.HandlerFunc)
Get(pattern string, h http.HandlerFunc)
Head(pattern string, h http.HandlerFunc)
Options(pattern string, h http.HandlerFunc)
Patch(pattern string, h http.HandlerFunc)
Post(pattern string, h http.HandlerFunc)
Put(pattern string, h http.HandlerFunc)
Trace(pattern string, h http.HandlerFunc)
// NotFound defines a handler to respond whenever a route could
// not be found.
NotFound(h http.HandlerFunc)
// MethodNotAllowed defines a handler to respond whenever a method is
// not allowed.
MethodNotAllowed(h http.HandlerFunc)
}
// Routes interface adds methods for router traversal, which are also
// used by the github.com/go-chi/docgen package to generate documentation for Routers.
type Routes interface {
// Routes returns the routing tree in an easily traversable structure.
Routes() []Route
// Middlewares returns the list of middlewares in use by the router.
Middlewares() Middlewares
// Match searches the routing tree for a handler that matches
// the method/path - similar to routing a http request, but without
// executing the handler thereafter.
Match(rctx *Context, method, path string) bool
}
```
Each routing method accepts a URL `pattern` and a chain of `handlers`. The URL pattern
supports named params (e.g. `/users/{userID}`) and wildcards (e.g. `/admin/*`). URL parameters
can be fetched at runtime by calling `chi.URLParam(r, "userID")` for named parameters
and `chi.URLParam(r, "*")` for a wildcard parameter.
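Here is a minimal, runnable sketch of both kinds of parameters; the routes themselves (`/users/{userID}`, `/static/*`) are illustrative:
```go
package main

import (
    "net/http"

    "github.com/go-chi/chi/v5"
)

func main() {
    r := chi.NewRouter()

    // Named parameter: GET /users/123 responds with "user: 123".
    r.Get("/users/{userID}", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("user: " + chi.URLParam(r, "userID")))
    })

    // Wildcard: GET /static/css/site.css responds with "path: css/site.css".
    r.Get("/static/*", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("path: " + chi.URLParam(r, "*")))
    })

    http.ListenAndServe(":3000", r)
}
```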
### Middleware handlers
chi's middlewares are just stdlib net/http middleware handlers. There is nothing special
about them, which means the router and all the tooling is designed to be compatible and
friendly with any middleware in the community. This offers much better extensibility and reuse
of packages and is at the heart of chi's purpose.
Here is an example of a standard net/http middleware that assigns the value `"123"` to the
context key `"user"`. The middleware sets a hypothetical user identifier on the request
context and calls the next handler in the chain.
```go
// HTTP middleware setting a value on the request context
func MyMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// create new context from `r` request context, and assign key `"user"`
// to value of `"123"`
ctx := context.WithValue(r.Context(), "user", "123")
// call the next handler in the chain, passing the response writer and
// the updated request object with the new context value.
//
// note: context.Context values are nested, so any previously set
// values will be accessible as well, and the new `"user"` key
// will be accessible from this point forward.
next.ServeHTTP(w, r.WithContext(ctx))
})
}
```
### Request handlers
chi uses standard net/http request handlers. This little snippet is an example of an http.Handler
func that reads a user identifier from the request context - hypothetically, identifying
the user sending an authenticated request, validated and set by a previous middleware handler.
```go
// HTTP handler accessing data from the request context.
func MyRequestHandler(w http.ResponseWriter, r *http.Request) {
// here we read from the request context and fetch out `"user"` key set in
// the MyMiddleware example above.
user := r.Context().Value("user").(string)
// respond to the client
w.Write([]byte(fmt.Sprintf("hi %s", user)))
}
```
### URL parameters
chi's router parses and stores URL parameters right onto the request context. Here is
an example of how to access URL params in your net/http handlers. And of course, middlewares
are able to access the same information.
```go
// HTTP handler accessing the url routing parameters.
func MyRequestHandler(w http.ResponseWriter, r *http.Request) {
// fetch the url parameter `"userID"` from the request of a matching
// routing pattern. An example routing pattern could be: /users/{userID}
userID := chi.URLParam(r, "userID")
// fetch `"key"` from the request context
ctx := r.Context()
key := ctx.Value("key").(string)
// respond to the client
w.Write([]byte(fmt.Sprintf("hi %v, %v", userID, key)))
}
```
## Middlewares
chi comes equipped with an optional `middleware` package, providing a suite of standard
`net/http` middlewares. Please note, any middleware in the ecosystem that is also compatible
with `net/http` can be used with chi's mux.
### Core middlewares
----------------------------------------------------------------------------------------------------
| chi/middleware Handler | description |
| :--------------------- | :---------------------------------------------------------------------- |
| [AllowContentEncoding] | Enforces a whitelist of request Content-Encoding headers |
| [AllowContentType] | Explicit whitelist of accepted request Content-Types |
| [BasicAuth] | Basic HTTP authentication |
| [Compress] | Gzip compression for clients that accept compressed responses |
| [ContentCharset] | Ensure charset for Content-Type request headers |
| [CleanPath] | Clean double slashes from request path |
| [GetHead] | Automatically route undefined HEAD requests to GET handlers |
| [Heartbeat] | Monitoring endpoint to check the server's pulse |
| [Logger] | Logs the start and end of each request with the elapsed processing time |
| [NoCache] | Sets response headers to prevent clients from caching |
| [Profiler] | Easily attach net/http/pprof to your routers |
| [RealIP] | Sets a http.Request's RemoteAddr to either X-Real-IP or X-Forwarded-For |
| [Recoverer] | Gracefully absorbs panics and prints the stack trace |
| [RequestID] | Injects a request ID into the context of each request |
| [RedirectSlashes] | Redirect slashes on routing paths |
| [RouteHeaders] | Route handling for request headers |
| [SetHeader] | Short-hand middleware to set a response header key/value |
| [StripSlashes] | Strip slashes on routing paths |
| [Sunset] | Sets the Deprecation/Sunset headers on the response |
| [Throttle] | Puts a ceiling on the number of concurrent requests |
| [Timeout] | Signals to the request context when the timeout deadline is reached |
| [URLFormat] | Parse extension from url and put it on request context |
| [WithValue] | Short-hand middleware to set a key/value on the request context |
----------------------------------------------------------------------------------------------------
[AllowContentEncoding]: https://pkg.go.dev/github.com/go-chi/chi/middleware#AllowContentEncoding
[AllowContentType]: https://pkg.go.dev/github.com/go-chi/chi/middleware#AllowContentType
[BasicAuth]: https://pkg.go.dev/github.com/go-chi/chi/middleware#BasicAuth
[Compress]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Compress
[ContentCharset]: https://pkg.go.dev/github.com/go-chi/chi/middleware#ContentCharset
[CleanPath]: https://pkg.go.dev/github.com/go-chi/chi/middleware#CleanPath
[GetHead]: https://pkg.go.dev/github.com/go-chi/chi/middleware#GetHead
[GetReqID]: https://pkg.go.dev/github.com/go-chi/chi/middleware#GetReqID
[Heartbeat]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Heartbeat
[Logger]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Logger
[NoCache]: https://pkg.go.dev/github.com/go-chi/chi/middleware#NoCache
[Profiler]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Profiler
[RealIP]: https://pkg.go.dev/github.com/go-chi/chi/middleware#RealIP
[Recoverer]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Recoverer
[RedirectSlashes]: https://pkg.go.dev/github.com/go-chi/chi/middleware#RedirectSlashes
[RequestLogger]: https://pkg.go.dev/github.com/go-chi/chi/middleware#RequestLogger
[RequestID]: https://pkg.go.dev/github.com/go-chi/chi/middleware#RequestID
[RouteHeaders]: https://pkg.go.dev/github.com/go-chi/chi/middleware#RouteHeaders
[SetHeader]: https://pkg.go.dev/github.com/go-chi/chi/middleware#SetHeader
[StripSlashes]: https://pkg.go.dev/github.com/go-chi/chi/middleware#StripSlashes
[Sunset]: https://pkg.go.dev/github.com/go-chi/chi/v5/middleware#Sunset
[Throttle]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Throttle
[ThrottleBacklog]: https://pkg.go.dev/github.com/go-chi/chi/middleware#ThrottleBacklog
[ThrottleWithOpts]: https://pkg.go.dev/github.com/go-chi/chi/middleware#ThrottleWithOpts
[Timeout]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Timeout
[URLFormat]: https://pkg.go.dev/github.com/go-chi/chi/middleware#URLFormat
[WithLogEntry]: https://pkg.go.dev/github.com/go-chi/chi/middleware#WithLogEntry
[WithValue]: https://pkg.go.dev/github.com/go-chi/chi/middleware#WithValue
[Compressor]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Compressor
[DefaultLogFormatter]: https://pkg.go.dev/github.com/go-chi/chi/middleware#DefaultLogFormatter
[EncoderFunc]: https://pkg.go.dev/github.com/go-chi/chi/middleware#EncoderFunc
[HeaderRoute]: https://pkg.go.dev/github.com/go-chi/chi/middleware#HeaderRoute
[HeaderRouter]: https://pkg.go.dev/github.com/go-chi/chi/middleware#HeaderRouter
[LogEntry]: https://pkg.go.dev/github.com/go-chi/chi/middleware#LogEntry
[LogFormatter]: https://pkg.go.dev/github.com/go-chi/chi/middleware#LogFormatter
[LoggerInterface]: https://pkg.go.dev/github.com/go-chi/chi/middleware#LoggerInterface
[ThrottleOpts]: https://pkg.go.dev/github.com/go-chi/chi/middleware#ThrottleOpts
[WrapResponseWriter]: https://pkg.go.dev/github.com/go-chi/chi/middleware#WrapResponseWriter
### Extra middlewares & packages
Please see https://github.com/go-chi for additional packages.
--------------------------------------------------------------------------------------------------------------------
| package | description |
|:---------------------------------------------------|:-------------------------------------------------------------|
| [cors](https://github.com/go-chi/cors) | Cross-origin resource sharing (CORS) |
| [docgen](https://github.com/go-chi/docgen) | Print chi.Router routes at runtime |
| [jwtauth](https://github.com/go-chi/jwtauth) | JWT authentication |
| [hostrouter](https://github.com/go-chi/hostrouter) | Domain/host based request routing |
| [httplog](https://github.com/go-chi/httplog) | Small but powerful structured HTTP request logging |
| [httprate](https://github.com/go-chi/httprate) | HTTP request rate limiter |
| [httptracer](https://github.com/go-chi/httptracer) | HTTP request performance tracing library |
| [httpvcr](https://github.com/go-chi/httpvcr) | Write deterministic tests for external sources |
| [stampede](https://github.com/go-chi/stampede) | HTTP request coalescer |
--------------------------------------------------------------------------------------------------------------------
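As one example of wiring in one of these packages, here is a minimal sketch using [cors](https://github.com/go-chi/cors); the options shown are illustrative for the example, not a recommended policy:
```go
package main

import (
    "net/http"

    "github.com/go-chi/chi/v5"
    "github.com/go-chi/cors"
)

func main() {
    r := chi.NewRouter()

    // cors.Handler returns a standard net/http middleware,
    // so it plugs into chi like any other middleware.
    r.Use(cors.Handler(cors.Options{
        AllowedOrigins: []string{"https://example.com"}, // illustrative origin
        AllowedMethods: []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
        AllowedHeaders: []string{"Accept", "Authorization", "Content-Type"},
    }))

    r.Get("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("cors-enabled"))
    })

    http.ListenAndServe(":3000", r)
}
```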
## context?
`context` is a tiny package that provides a simple interface to signal context across call stacks
and goroutines. It was originally written by [Sameer Ajmani](https://github.com/Sajmani)
and has been available in the stdlib since Go 1.7.
Learn more at https://blog.golang.org/context
and:
* Docs: https://golang.org/pkg/context
* Source: https://github.com/golang/go/tree/master/src/context
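To see what that signaling looks like in a handler, here is a minimal, hypothetical sketch: the handler does work in steps and checks `ctx.Done()` between them, so a timeout (for example from `middleware.Timeout`) or a client disconnect stops processing early:
```go
package main

import (
    "net/http"
    "time"
)

// slowHandler simulates work in small steps, checking the request
// context between steps. When the context is canceled or times out,
// it stops instead of wasting further effort.
func slowHandler(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    for i := 0; i < 10; i++ {
        select {
        case <-ctx.Done():
            // context.DeadlineExceeded or context.Canceled
            http.Error(w, ctx.Err().Error(), http.StatusServiceUnavailable)
            return
        case <-time.After(100 * time.Millisecond): // one unit of "work"
        }
    }
    w.Write([]byte("done"))
}

func main() {
    http.ListenAndServe(":3000", http.HandlerFunc(slowHandler))
}
```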
## Benchmarks
The benchmark suite: https://github.com/pkieltyka/go-http-routing-benchmark
Results as of Nov 29, 2020 with Go 1.15.5 on Linux AMD 3950x
```shell
BenchmarkChi_Param 3075895 384 ns/op 400 B/op 2 allocs/op
BenchmarkChi_Param5 2116603 566 ns/op 400 B/op 2 allocs/op
BenchmarkChi_Param20 964117 1227 ns/op 400 B/op 2 allocs/op
BenchmarkChi_ParamWrite 2863413 420 ns/op 400 B/op 2 allocs/op
BenchmarkChi_GithubStatic 3045488 395 ns/op 400 B/op 2 allocs/op
BenchmarkChi_GithubParam 2204115 540 ns/op 400 B/op 2 allocs/op
BenchmarkChi_GithubAll 10000 113811 ns/op 81203 B/op 406 allocs/op
BenchmarkChi_GPlusStatic 3337485 359 ns/op 400 B/op 2 allocs/op
BenchmarkChi_GPlusParam 2825853 423 ns/op 400 B/op 2 allocs/op
BenchmarkChi_GPlus2Params 2471697 483 ns/op 400 B/op 2 allocs/op
BenchmarkChi_GPlusAll 194220 5950 ns/op 5200 B/op 26 allocs/op
BenchmarkChi_ParseStatic 3365324 356 ns/op 400 B/op 2 allocs/op
BenchmarkChi_ParseParam 2976614 404 ns/op 400 B/op 2 allocs/op
BenchmarkChi_Parse2Params 2638084 439 ns/op 400 B/op 2 allocs/op
BenchmarkChi_ParseAll 109567 11295 ns/op 10400 B/op 52 allocs/op
BenchmarkChi_StaticAll 16846 71308 ns/op 62802 B/op 314 allocs/op
```
Comparison with other routers: https://gist.github.com/pkieltyka/123032f12052520aaccab752bd3e78cc
NOTE: the allocs in the benchmark above are from the calls to http.Request's
`WithContext(context.Context)` method that clones the http.Request, sets the `Context()`
on the duplicated (alloc'd) request and returns the new request object. This is just
how setting context on a request in Go works.
## Credits
* Carl Jackson for https://github.com/zenazn/goji
* Parts of chi's thinking come from goji, and chi's middleware package
sources from goji.
* Armon Dadgar for https://github.com/armon/go-radix
* Contributions: [@VojtechVitek](https://github.com/VojtechVitek)
We'll be more than happy to see [your contributions](./CONTRIBUTING.md)!
## Beyond REST
chi is just an HTTP router that lets you decompose request handling into many smaller layers.
Many companies use chi to write REST services for their public APIs. But REST is just a convention
for managing state via HTTP, and there are a lot of other pieces required to write a complete client-server
system or network of microservices.
Looking beyond REST, I also recommend some newer works in the field:
* [webrpc](https://github.com/webrpc/webrpc) - Web-focused RPC client+server framework with code-gen
* [gRPC](https://github.com/grpc/grpc-go) - Google's RPC framework via protobufs
* [graphql](https://github.com/99designs/gqlgen) - Declarative query language
* [NATS](https://nats.io) - lightweight pub-sub
## License
Copyright (c) 2015-present [Peter Kieltyka](https://github.com/pkieltyka)
Licensed under [MIT License](./LICENSE)
[GoDoc]: https://pkg.go.dev/github.com/go-chi/chi/v5
[GoDoc Widget]: https://godoc.org/github.com/go-chi/chi?status.svg
[Travis]: https://travis-ci.org/go-chi/chi
[Travis Widget]: https://travis-ci.org/go-chi/chi.svg?branch=master
# Benchmarking logr
Any major changes to the logr library must be benchmarked before and after the
change.
## Running the benchmark
```
$ go test -bench='.' -test.benchmem ./benchmark/
```
## Fixing the benchmark
If you think this benchmark can be improved, you are probably correct! PRs are
very welcome.
<p align="center">
<h1 align="center">Resty</h1>
<p align="center">Simple HTTP and REST client library for Go (inspired by Ruby rest-client)</p>
<p align="center"><a href="#features">Features</a> section describes in detail about Resty capabilities</p>
</p>
<p align="center">
<p align="center"><a href="https://github.com/go-resty/resty/actions/workflows/ci.yml?query=branch%3Av2"><img src="https://github.com/go-resty/resty/actions/workflows/ci.yml/badge.svg?branch=v2" alt="Build Status"></a> <a href="https://app.codecov.io/gh/go-resty/resty/tree/v2"><img src="https://codecov.io/gh/go-resty/resty/branch/v2/graph/badge.svg" alt="Code Coverage"></a> <a href="https://goreportcard.com/report/go-resty/resty"><img src="https://goreportcard.com/badge/go-resty/resty" alt="Go Report Card"></a> <a href="https://github.com/go-resty/resty/releases/latest"><img src="https://img.shields.io/badge/version-2.15.3-blue.svg" alt="Release Version"></a> <a href="https://pkg.go.dev/github.com/go-resty/resty/v2"><img src="https://pkg.go.dev/badge/github.com/go-resty/resty" alt="GoDoc"></a> <a href="LICENSE"><img src="https://img.shields.io/github/license/go-resty/resty.svg" alt="License"></a> <a href="https://github.com/avelino/awesome-go"><img src="https://awesome.re/mentioned-badge.svg" alt="Mentioned in Awesome Go"></a></p>
</p>
## News
* v2.15.3 [released](https://github.com/go-resty/resty/releases/tag/v2.15.3) and tagged on Sep 26, 2024.
* v2.0.0 [released](https://github.com/go-resty/resty/releases/tag/v2.0.0) and tagged on Jul 16, 2019.
* v1.12.0 [released](https://github.com/go-resty/resty/releases/tag/v1.12.0) and tagged on Feb 27, 2019.
* v1.0 released and tagged on Sep 25, 2017. Resty's first version was released on Sep 15, 2015, and it has grown gradually into a very handy and helpful library. It's been two years since the first release. I'm very thankful to Resty users and its [contributors](https://github.com/go-resty/resty/graphs/contributors).
## Features
* GET, POST, PUT, DELETE, HEAD, PATCH, OPTIONS, etc.
* Simple and chainable methods for settings and requests
* [Request](https://pkg.go.dev/github.com/go-resty/resty/v2#Request) Body can be `string`, `[]byte`, `struct`, `map`, `slice` and `io.Reader` too
* Auto detects `Content-Type`
* Bufferless processing for `io.Reader`
* Native `*http.Request` instance may be accessed during middleware and request execution via `Request.RawRequest`
* Request Body can be read multiple times via `Request.RawRequest.GetBody()`
* [Response](https://pkg.go.dev/github.com/go-resty/resty/v2#Response) object gives you more possibilities
* Access as `[]byte` array - `response.Body()` OR Access as `string` - `response.String()`
* Know your `response.Time()` and when we `response.ReceivedAt()`
* Automatic marshal and unmarshal for `JSON` and `XML` content type
* The default is `JSON` if you supply a `struct`/`map` without a `Content-Type` header
* For auto-unmarshal, refer to -
- Success scenario [Request.SetResult()](https://pkg.go.dev/github.com/go-resty/resty/v2#Request.SetResult) and [Response.Result()](https://pkg.go.dev/github.com/go-resty/resty/v2#Response.Result).
- Error scenario [Request.SetError()](https://pkg.go.dev/github.com/go-resty/resty/v2#Request.SetError) and [Response.Error()](https://pkg.go.dev/github.com/go-resty/resty/v2#Response.Error).
- Supports [RFC7807](https://tools.ietf.org/html/rfc7807) - `application/problem+json` & `application/problem+xml`
* Resty provides an option to override [JSON Marshal/Unmarshal and XML Marshal/Unmarshal](#override-json--xml-marshalunmarshal)
* Easy to upload one or more file(s) via `multipart/form-data`
* Auto detects file content type
* Request URL [Path Params (aka URI Params)](https://pkg.go.dev/github.com/go-resty/resty/v2#Request.SetPathParams)
* Backoff Retry Mechanism with retry condition function [reference](retry_test.go)
* Resty client HTTP & REST [Request](https://pkg.go.dev/github.com/go-resty/resty/v2#Client.OnBeforeRequest) and [Response](https://pkg.go.dev/github.com/go-resty/resty/v2#Client.OnAfterResponse) middlewares
* `Request.SetContext` supported
* Authorization option of `BasicAuth` and `Bearer` token
* Set request `ContentLength` value for all requests or a particular request
* Custom [Root Certificates](https://pkg.go.dev/github.com/go-resty/resty/v2#Client.SetRootCertificate) and Client [Certificates](https://pkg.go.dev/github.com/go-resty/resty/v2#Client.SetCertificates)
* Download/Save HTTP response directly into File, like `curl -o` flag. See [SetOutputDirectory](https://pkg.go.dev/github.com/go-resty/resty/v2#Client.SetOutputDirectory) & [SetOutput](https://pkg.go.dev/github.com/go-resty/resty/v2#Request.SetOutput).
* Cookies for your request and CookieJar support
* SRV Record based request instead of Host URL
* Client settings like `Timeout`, `RedirectPolicy`, `Proxy`, `TLSClientConfig`, `Transport`, etc.
* Optionally allows GET request with payload, see [SetAllowGetMethodPayload](https://pkg.go.dev/github.com/go-resty/resty/v2#Client.SetAllowGetMethodPayload)
* Supports registering external JSON library into resty, see [how to use](https://github.com/go-resty/resty/issues/76#issuecomment-314015250)
* Exposes Response reader without reading response (no auto-unmarshaling) if need be, see [how to use](https://github.com/go-resty/resty/issues/87#issuecomment-322100604)
* Option to specify expected `Content-Type` when response `Content-Type` header missing. Refer to [#92](https://github.com/go-resty/resty/issues/92)
* Resty design
* Have client level settings & options and also override at Request level if you want to
* Request and Response middleware
* Create Multiple clients if you want to `resty.New()`
* Supports `http.RoundTripper` implementation, see [SetTransport](https://pkg.go.dev/github.com/go-resty/resty/v2#Client.SetTransport)
* goroutine concurrent safe
* Resty Client trace, see [Client.EnableTrace](https://pkg.go.dev/github.com/go-resty/resty/v2#Client.EnableTrace) and [Request.EnableTrace](https://pkg.go.dev/github.com/go-resty/resty/v2#Request.EnableTrace)
* Since v2.4.0, trace info contains a `RequestAttempt` value, and the `Request` object contains an `Attempt` attribute
* Supports on-demand CURL command generation, see [Client.EnableGenerateCurlOnDebug](https://pkg.go.dev/github.com/go-resty/resty/v2#Client.EnableGenerateCurlOnDebug), [Request.EnableGenerateCurlOnDebug](https://pkg.go.dev/github.com/go-resty/resty/v2#Request.EnableGenerateCurlOnDebug). It requires debug mode to be enabled.
* Debug mode - clean and informative logging presentation
* Gzip - Go does it automatically, and resty has fallback handling too
* Works fine with `HTTP/2` and `HTTP/1.1`, also `HTTP/3` can be used with Resty, see this [comment](https://github.com/go-resty/resty/issues/846#issuecomment-2329696110)
* [Bazel support](#bazel-support)
* Easily mock Resty for testing, [for e.g.](#mocking-http-requests-using-httpmock-library)
* Well tested client library
### Included Batteries
* Redirect Policies - see [how to use](#redirect-policy)
* NoRedirectPolicy
* FlexibleRedirectPolicy
* DomainCheckRedirectPolicy
* etc. [more info](redirect.go)
* Retry Mechanism [how to use](#retries)
* Backoff Retry
* Conditional Retry
* Since v2.6.0, Retry Hooks - [Client](https://pkg.go.dev/github.com/go-resty/resty/v2#Client.AddRetryHook), [Request](https://pkg.go.dev/github.com/go-resty/resty/v2#Request.AddRetryHook)
* SRV Record based request instead of Host URL [how to use](resty_test.go#L1412)
* etc. (upcoming - throw your ideas [here](https://github.com/go-resty/resty/issues)).
#### Supported Go Versions
Recommended to use `go1.20` and above.
Initially, Resty started supporting Go modules with the `v1.10.0` release.
From v2 onwards, Resty fully embraces [go modules](https://github.com/golang/go/wiki/Modules) package releases. It requires a Go version capable of understanding `/vN` suffixed imports:
- 1.9.7+
- 1.10.3+
- 1.11+
## It might be beneficial for your project :smile:
The Resty author has also published the following projects for the Go community.
* [go-model](https://github.com/jeevatkm/go-model) - Robust & Easy to use model mapper and utility methods for Go `struct`.
## Installation
```bash
# Go Modules
require github.com/go-resty/resty/v2 v2.15.3
```
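Alternatively, with a module-aware Go toolchain you can fetch it directly:
```bash
go get github.com/go-resty/resty/v2
```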
## Usage
The following samples will assist you in becoming as comfortable as possible with the resty library.
```go
// Import resty into your code and refer it as `resty`.
import "github.com/go-resty/resty/v2"
```
#### Simple GET
```go
// Create a Resty Client
client := resty.New()
resp, err := client.R().
EnableTrace().
Get("https://httpbin.org/get")
// Explore response object
fmt.Println("Response Info:")
fmt.Println(" Error :", err)
fmt.Println(" Status Code:", resp.StatusCode())
fmt.Println(" Status :", resp.Status())
fmt.Println(" Proto :", resp.Proto())
fmt.Println(" Time :", resp.Time())
fmt.Println(" Received At:", resp.ReceivedAt())
fmt.Println(" Body :\n", resp)
fmt.Println()
// Explore trace info
fmt.Println("Request Trace Info:")
ti := resp.Request.TraceInfo()
fmt.Println(" DNSLookup :", ti.DNSLookup)
fmt.Println(" ConnTime :", ti.ConnTime)
fmt.Println(" TCPConnTime :", ti.TCPConnTime)
fmt.Println(" TLSHandshake :", ti.TLSHandshake)
fmt.Println(" ServerTime :", ti.ServerTime)
fmt.Println(" ResponseTime :", ti.ResponseTime)
fmt.Println(" TotalTime :", ti.TotalTime)
fmt.Println(" IsConnReused :", ti.IsConnReused)
fmt.Println(" IsConnWasIdle :", ti.IsConnWasIdle)
fmt.Println(" ConnIdleTime :", ti.ConnIdleTime)
fmt.Println(" RequestAttempt:", ti.RequestAttempt)
fmt.Println(" RemoteAddr :", ti.RemoteAddr.String())
/* Output
Response Info:
Error : <nil>
Status Code: 200
Status : 200 OK
Proto : HTTP/2.0
Time : 457.034718ms
Received At: 2020-09-14 15:35:29.784681 -0700 PDT m=+0.458137045
Body :
{
"args": {},
"headers": {
"Accept-Encoding": "gzip",
"Host": "httpbin.org",
"User-Agent": "go-resty/2.4.0 (https://github.com/go-resty/resty)",
"X-Amzn-Trace-Id": "Root=1-5f5ff031-000ff6292204aa6898e4de49"
},
"origin": "0.0.0.0",
"url": "https://httpbin.org/get"
}
Request Trace Info:
DNSLookup : 4.074657ms
ConnTime : 381.709936ms
TCPConnTime : 77.428048ms
TLSHandshake : 299.623597ms
ServerTime : 75.414703ms
ResponseTime : 79.337µs
TotalTime : 457.034718ms
IsConnReused : false
IsConnWasIdle : false
ConnIdleTime : 0s
RequestAttempt: 1
RemoteAddr : 3.221.81.55:443
*/
```
#### Enhanced GET
```go
// Create a Resty Client
client := resty.New()
resp, err := client.R().
SetQueryParams(map[string]string{
"page_no": "1",
"limit": "20",
"sort":"name",
"order": "asc",
"random":strconv.FormatInt(time.Now().Unix(), 10),
}).
SetHeader("Accept", "application/json").
SetAuthToken("BC594900518B4F7EAC75BD37F019E08FBC594900518B4F7EAC75BD37F019E08F").
Get("/search_result")
// Sample of using Request.SetQueryString method
resp, err := client.R().
SetQueryString("productId=232&template=fresh-sample&cat=resty&source=google&kw=buy a lot more").
SetHeader("Accept", "application/json").
SetAuthToken("BC594900518B4F7EAC75BD37F019E08FBC594900518B4F7EAC75BD37F019E08F").
Get("/show_product")
// If necessary, you can force response content type to tell Resty to parse a JSON response into your struct
resp, err := client.R().
SetResult(result).
ForceContentType("application/json").
Get("v2/alpine/manifests/latest")
```
#### Various POST method combinations
```go
// Create a Resty Client
client := resty.New()
// POST JSON string
// No need to set content type, if you have client level setting
resp, err := client.R().
SetHeader("Content-Type", "application/json").
SetBody(`{"username":"testuser", "password":"testpass"}`).
SetResult(&AuthSuccess{}). // or SetResult(AuthSuccess{}).
Post("https://myapp.com/login")
// POST []byte array
// No need to set content type, if you have client level setting
resp, err := client.R().
SetHeader("Content-Type", "application/json").
SetBody([]byte(`{"username":"testuser", "password":"testpass"}`)).
SetResult(&AuthSuccess{}). // or SetResult(AuthSuccess{}).
Post("https://myapp.com/login")
// POST Struct, default is JSON content type. No need to set one
resp, err := client.R().
SetBody(User{Username: "testuser", Password: "testpass"}).
SetResult(&AuthSuccess{}). // or SetResult(AuthSuccess{}).
SetError(&AuthError{}). // or SetError(AuthError{}).
Post("https://myapp.com/login")
// POST Map, default is JSON content type. No need to set one
resp, err := client.R().
SetBody(map[string]interface{}{"username": "testuser", "password": "testpass"}).
SetResult(&AuthSuccess{}). // or SetResult(AuthSuccess{}).
SetError(&AuthError{}). // or SetError(AuthError{}).
Post("https://myapp.com/login")
// POST of raw bytes for file upload. For example: upload file to Dropbox
fileBytes, _ := os.ReadFile("/Users/jeeva/mydocument.pdf")
// See we are not setting content-type header, since go-resty automatically detects Content-Type for you
resp, err := client.R().
SetBody(fileBytes).
SetContentLength(true). // Dropbox expects this value
SetAuthToken("<your-auth-token>").
SetError(&DropboxError{}). // or SetError(DropboxError{}).
Post("https://content.dropboxapi.com/1/files_put/auto/resty/mydocument.pdf") // for upload Dropbox supports PUT too
// Note: resty detects Content-Type for request body/payload if content type header is not set.
// * For struct and map data type defaults to 'application/json'
// * Fallback is plain text content type
```
#### Sample PUT
You can use various combinations of `PUT` method calls, as demonstrated for `POST`.
```go
// Note: This is one sample of PUT method usage, refer to POST for more combinations
// Create a Resty Client
client := resty.New()
// Request goes as JSON content type
// No need to set auth token, error, if you have client level settings
resp, err := client.R().
SetBody(Article{
Title: "go-resty",
Content: "This is my article content, oh ya!",
Author: "Jeevanandam M",
Tags: []string{"article", "sample", "resty"},
}).
SetAuthToken("C6A79608-782F-4ED0-A11D-BD82FAD829CD").
SetError(&Error{}). // or SetError(Error{}).
Put("https://myapp.com/article/1234")
```
#### Sample PATCH
You can use various combinations of `PATCH` method calls, as demonstrated for `POST`.
```go
// Note: This is one sample of PATCH method usage, refer to POST for more combinations
// Create a Resty Client
client := resty.New()
// Request goes as JSON content type
// No need to set auth token, error, if you have client level settings
resp, err := client.R().
SetBody(Article{
Tags: []string{"new tag1", "new tag2"},
}).
SetAuthToken("C6A79608-782F-4ED0-A11D-BD82FAD829CD").
SetError(&Error{}). // or SetError(Error{}).
Patch("https://myapp.com/articles/1234")
```
#### Sample DELETE, HEAD, OPTIONS
```go
// Create a Resty Client
client := resty.New()
// DELETE an article
// No need to set auth token, error, if you have client level settings
resp, err := client.R().
SetAuthToken("C6A79608-782F-4ED0-A11D-BD82FAD829CD").
SetError(&Error{}). // or SetError(Error{}).
Delete("https://myapp.com/articles/1234")
// DELETE articles with a payload/body as a JSON string
// No need to set auth token, error, if you have client level settings
resp, err := client.R().
SetAuthToken("C6A79608-782F-4ED0-A11D-BD82FAD829CD").
SetError(&Error{}). // or SetError(Error{}).
SetHeader("Content-Type", "application/json").
SetBody(`{article_ids: [1002, 1006, 1007, 87683, 45432] }`).
Delete("https://myapp.com/articles")
// HEAD of resource
// No need to set auth token, if you have client level settings
resp, err := client.R().
SetAuthToken("C6A79608-782F-4ED0-A11D-BD82FAD829CD").
Head("https://myapp.com/videos/hi-res-video")
// OPTIONS of resource
// No need to set auth token, if you have client level settings
resp, err := client.R().
SetAuthToken("C6A79608-782F-4ED0-A11D-BD82FAD829CD").
Options("https://myapp.com/servers/nyc-dc-01")
```
#### Override JSON & XML Marshal/Unmarshal
You can register your choice of JSON/XML library with Resty, or write your own. By default, Resty registers the standard `encoding/json` and `encoding/xml`, respectively.
```go
// Example of registering json-iterator
import jsoniter "github.com/json-iterator/go"
json := jsoniter.ConfigCompatibleWithStandardLibrary
client := resty.New().
SetJSONMarshaler(json.Marshal).
SetJSONUnmarshaler(json.Unmarshal)
// similarly, you could do the same for XML with:
client.SetXMLMarshaler(xml.Marshal).
SetXMLUnmarshaler(xml.Unmarshal)
```
### Multipart File(s) upload
#### Using io.Reader
```go
profileImgBytes, _ := os.ReadFile("/Users/jeeva/test-img.png")
notesBytes, _ := os.ReadFile("/Users/jeeva/text-file.txt")
// Create a Resty Client
client := resty.New()
resp, err := client.R().
SetFileReader("profile_img", "test-img.png", bytes.NewReader(profileImgBytes)).
SetFileReader("notes", "text-file.txt", bytes.NewReader(notesBytes)).
SetFormData(map[string]string{
"first_name": "Jeevanandam",
"last_name": "M",
}).
Post("http://myapp.com/upload")
```
#### Using File directly from Path
```go
// Create a Resty Client
client := resty.New()
// Single file scenario
resp, err := client.R().
SetFile("profile_img", "/Users/jeeva/test-img.png").
Post("http://myapp.com/upload")
// Multiple files scenario
resp, err := client.R().
SetFiles(map[string]string{
"profile_img": "/Users/jeeva/test-img.png",
"notes": "/Users/jeeva/text-file.txt",
}).
Post("http://myapp.com/upload")
// Multipart of form fields and files
resp, err := client.R().
SetFiles(map[string]string{
"profile_img": "/Users/jeeva/test-img.png",
"notes": "/Users/jeeva/text-file.txt",
}).
SetFormData(map[string]string{
"first_name": "Jeevanandam",
"last_name": "M",
"zip_code": "00001",
"city": "my city",
"access_token": "C6A79608-782F-4ED0-A11D-BD82FAD829CD",
}).
Post("http://myapp.com/profile")
```
#### Sample Form submission
```go
// Create a Resty Client
client := resty.New()
// just mentioning about POST as an example with simple flow
// User Login
resp, err := client.R().
SetFormData(map[string]string{
"username": "jeeva",
"password": "mypass",
}).
Post("http://myapp.com/login")
// Followed by profile update
resp, err := client.R().
SetFormData(map[string]string{
"first_name": "Jeevanandam",
"last_name": "M",
"zip_code": "00001",
"city": "new city update",
}).
Post("http://myapp.com/profile")
// Multi value form data
criteria := url.Values{
"search_criteria": []string{"book", "glass", "pencil"},
}
resp, err := client.R().
SetFormDataFromValues(criteria).
Post("http://myapp.com/search")
```
#### Save HTTP Response into File
```go
// Create a Resty Client
client := resty.New()
// Set the output directory path. If the directory does not exist, resty creates it!
// This is optional; a relative path in `Request.SetOutput` is combined with it,
// while absolute paths are used as-is.
client.SetOutputDirectory("/Users/jeeva/Downloads")
// HTTP response gets saved into file, similar to curl -o flag
_, err := client.R().
SetOutput("plugin/ReplyWithHeader-v5.1-beta.zip").
Get("http://bit.ly/1LouEKr")
// OR using absolute path
// Note: output directory path is not used for absolute path
_, err := client.R().
SetOutput("/MyDownloads/plugin/ReplyWithHeader-v5.1-beta.zip").
Get("http://bit.ly/1LouEKr")
```
#### Request URL Path Params
Resty provides easy-to-use dynamic request URL path params. Params can be set at the client and request levels; client-level param values can be overridden at the request level.
```go
// Create a Resty Client
client := resty.New()
client.R().SetPathParams(map[string]string{
"userId": "[email protected]",
"subAccountId": "100002",
}).
Get("/v1/users/{userId}/{subAccountId}/details")
// Result:
// Composed URL - /v1/users/[email protected]/100002/details
```
#### Request and Response Middleware
Resty provides middleware to manipulate the Request and Response. It is more flexible than a callback approach.
```go
// Create a Resty Client
client := resty.New()
// Registering Request Middleware
client.OnBeforeRequest(func(c *resty.Client, req *resty.Request) error {
// Now you have access to Client and current Request object
// manipulate it as per your need
return nil // if its success otherwise return error
})
// Registering Response Middleware
client.OnAfterResponse(func(c *resty.Client, resp *resty.Response) error {
// Now you have access to Client and current Response object
// manipulate it as per your need
return nil // if its success otherwise return error
})
```
#### OnError Hooks
Resty provides OnError hooks that may be called because:
- The client failed to send the request due to connection timeout, TLS handshake failure, etc...
- The request was retried the maximum number of times and still failed.
If there was a response from the server, the original error will be wrapped in `*resty.ResponseError` which contains the last response received.
```go
// Create a Resty Client
client := resty.New()
client.OnError(func(req *resty.Request, err error) {
if v, ok := err.(*resty.ResponseError); ok {
// v.Response contains the last response from the server
// v.Err contains the original error
}
// Log the error, increment a metric, etc...
})
```
#### Generate CURL Command
>Refer: [curl_cmd_test.go](https://github.com/go-resty/resty/blob/v2/curl_cmd_test.go)
```go
// Create a Resty Client
client := resty.New()
resp, err := client.R().
SetDebug(true).
EnableGenerateCurlOnDebug(). // CURL command generated when debug mode enabled with this option
SetBody(map[string]string{"name": "Alex"}).
Post("https://httpbin.org/post")
curlCmdExecuted := resp.Request.GenerateCurlCommand()
// Explore curl command
fmt.Println("Curl Command:\n ", curlCmdExecuted+"\n")
/* Output
Curl Command:
curl -X POST -H 'Content-Type: application/json' -H 'User-Agent: go-resty/2.14.0 (https://github.com/go-resty/resty)' -d '{"name":"Alex"}' https://httpbin.org/post
*/
```
#### Redirect Policy
Resty provides a few ready-to-use redirect policies, and it supports combining multiple policies.
```go
// Create a Resty Client
client := resty.New()
// Assign Client Redirect Policy. Create one as per your need
client.SetRedirectPolicy(resty.FlexibleRedirectPolicy(15))
// Wanna multiple policies such as redirect count, domain name check, etc
client.SetRedirectPolicy(resty.FlexibleRedirectPolicy(20),
resty.DomainCheckRedirectPolicy("host1.com", "host2.org", "host3.net"))
```
##### Custom Redirect Policy
Implement [RedirectPolicy](redirect.go#L20) interface and register it with resty client. Have a look [redirect.go](redirect.go) for more information.
```go
// Create a Resty Client
client := resty.New()
// Using raw func into resty.SetRedirectPolicy
client.SetRedirectPolicy(resty.RedirectPolicyFunc(func(req *http.Request, via []*http.Request) error {
// Implement your logic here
// return nil for continue redirect otherwise return error to stop/prevent redirect
return nil
}))
//---------------------------------------------------
// Using a struct to create a more flexible redirect policy
type CustomRedirectPolicy struct {
// variables goes here
}
func (c *CustomRedirectPolicy) Apply(req *http.Request, via []*http.Request) error {
// Implement your logic here
// return nil for continue redirect otherwise return error to stop/prevent redirect
return nil
}
// Registering in resty
client.SetRedirectPolicy(CustomRedirectPolicy{/* initialize variables */})
```
#### Custom Root Certificates and Client Certificates
```go
// Create a Resty Client
client := resty.New()
// Custom Root certificates, just supply .pem file.
// you can add one or more root certificates; they get appended
client.SetRootCertificate("/path/to/root/pemFile1.pem")
client.SetRootCertificate("/path/to/root/pemFile2.pem")
// ... and so on!
// Adding Client Certificates, you add one or more certificates
// Sample for creating certificate object
// Parsing public/private key pair from a pair of files. The files must contain PEM encoded data.
cert1, err := tls.LoadX509KeyPair("certs/client.pem", "certs/client.key")
if err != nil {
log.Fatalf("ERROR client certificate: %s", err)
}
// ...
// You add one or more certificates
client.SetCertificates(cert1, cert2, cert3)
```
#### Custom Root Certificates and Client Certificates from string
```go
// Custom Root certificates from string
// You can pass your certificates through env variables as strings
// you can add one or more root certificates; they get appended
client.SetRootCertificateFromString("-----BEGIN CERTIFICATE-----content-----END CERTIFICATE-----")
client.SetRootCertificateFromString("-----BEGIN CERTIFICATE-----content-----END CERTIFICATE-----")
// ... and so on!
// Adding Client Certificates, you add one or more certificates
// Sample for creating certificate object
// Parsing public/private key pair from a pair of files. The files must contain PEM encoded data.
cert1, err := tls.X509KeyPair([]byte("-----BEGIN CERTIFICATE-----content-----END CERTIFICATE-----"), []byte("-----BEGIN CERTIFICATE-----content-----END CERTIFICATE-----"))
if err != nil {
log.Fatalf("ERROR client certificate: %s", err)
}
// ...
// You add one or more certificates
client.SetCertificates(cert1, cert2, cert3)
```
#### Proxy Settings
By default, Go supports proxies via the environment variable `HTTP_PROXY`. Resty provides proxy support via `SetProxy` & `RemoveProxy`.
Choose as per your need.
**Client Level Proxy** settings apply to all requests:
```go
// Create a Resty Client
client := resty.New()
// Setting a Proxy URL and Port
client.SetProxy("http://proxyserver:8888")
// Want to remove proxy setting
client.RemoveProxy()
```
#### Retries
Resty uses [backoff](http://www.awsarchitectureblog.com/2015/03/backoff.html)
to increase retry intervals after each attempt.
Usage example:
```go
// Create a Resty Client
client := resty.New()
// Retries are configured per client
client.
// Set retry count to non zero to enable retries
SetRetryCount(3).
// You can override initial retry wait time.
// Default is 100 milliseconds.
SetRetryWaitTime(5 * time.Second).
// MaxWaitTime can be overridden as well.
// Default is 2 seconds.
SetRetryMaxWaitTime(20 * time.Second).
// SetRetryAfter sets callback to calculate wait time between retries.
// Default (nil) implies exponential backoff with jitter
SetRetryAfter(func(client *resty.Client, resp *resty.Response) (time.Duration, error) {
return 0, errors.New("quota exceeded")
})
```
By default, resty will retry requests that return a non-nil error during execution.
Therefore, the above setup will result in resty retrying requests with non-nil errors up to 3 times,
with the delay increasing after each attempt.
You can optionally provide client with [custom retry conditions](https://pkg.go.dev/github.com/go-resty/resty/v2#RetryConditionFunc):
```go
// Create a Resty Client
client := resty.New()
client.AddRetryCondition(
// RetryConditionFunc type is for retry condition function
// input: non-nil Response OR request execution error
func(r *resty.Response, err error) bool {
return r.StatusCode() == http.StatusTooManyRequests
},
)
```
The above example will make resty retry requests that end with a `429 Too Many Requests` status code.
It's important to note that when you specify conditions using `AddRetryCondition`,
it will override the default retry behavior, which retries on errors encountered during the request.
If you want to retry on errors encountered during the request, similar to the default behavior,
you'll need to configure it as follows:
```go
// Create a Resty Client
client := resty.New()
client.AddRetryCondition(
func(r *resty.Response, err error) bool {
// Including "err != nil" emulates the default retry behavior for errors encountered during the request.
return err != nil || r.StatusCode() == http.StatusTooManyRequests
},
)
```
Multiple retry conditions can be added.
Note that if multiple conditions are specified, a retry will occur if any of the conditions are met.
It is also possible to use `resty.Backoff(...)` to get arbitrary retry scenarios
implemented. [Reference](retry_test.go).
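For instance, here is a minimal sketch stacking two conditions; the status-code choices are purely illustrative, and `net/http` is assumed to be imported for the constants:
```go
// Create a Resty Client
client := resty.New()

// Retry on request execution errors or 429 responses...
client.AddRetryCondition(func(r *resty.Response, err error) bool {
    return err != nil || r.StatusCode() == http.StatusTooManyRequests
})

// ...and additionally on any 5xx response. A retry happens if ANY
// registered condition returns true.
client.AddRetryCondition(func(r *resty.Response, err error) bool {
    return r.StatusCode() >= http.StatusInternalServerError
})
```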
#### Allow GET request with Payload
```go
// Create a Resty Client
client := resty.New()
// Allow GET request with Payload. This is disabled by default.
client.SetAllowGetMethodPayload(true)
```
#### Wanna Multiple Clients
```go
// Here you go!
// Client 1
client1 := resty.New()
client1.R().Get("http://httpbin.org")
// ...
// Client 2
client2 := resty.New()
client2.R().Head("http://httpbin.org")
// ...
// Bend it as per your need!!!
```
#### Remaining Client Settings & its Options
```go
// Create a Resty Client
client := resty.New()
// Unique settings at Client level
//--------------------------------
// Enable debug mode
client.SetDebug(true)
// Assign Client TLSClientConfig
// One can set custom root-certificate. Refer: http://golang.org/pkg/crypto/tls/#example_Dial
client.SetTLSClientConfig(&tls.Config{ RootCAs: roots })
// or One can disable security check (https)
client.SetTLSClientConfig(&tls.Config{ InsecureSkipVerify: true })
// Set client timeout as per your need
client.SetTimeout(1 * time.Minute)
// You can override all below settings and options at request level if you want to
//--------------------------------------------------------------------------------
// Host URL for all requests. So you can use relative URLs in the request
client.SetBaseURL("http://httpbin.org")
// Headers for all requests
client.SetHeader("Accept", "application/json")
client.SetHeaders(map[string]string{
"Content-Type": "application/json",
"User-Agent": "My custom User Agent String",
})
// Cookies for all requests
client.SetCookie(&http.Cookie{
Name:"go-resty",
Value:"This is cookie value",
Path: "/",
Domain: "sample.com",
MaxAge: 36000,
HttpOnly: true,
Secure: false,
})
client.SetCookies(cookies)
// URL query parameters for all requests
client.SetQueryParam("user_id", "00001")
client.SetQueryParams(map[string]string{ // sample for those who prefer this style
"api_key": "api-key-here",
"api_secret": "api-secret",
})
client.R().SetQueryString("productId=232&template=fresh-sample&cat=resty&source=google&kw=buy a lot more")
// Form data for all requests. Typically used with POST and PUT
client.SetFormData(map[string]string{
"access_token": "BC594900-518B-4F7E-AC75-BD37F019E08F",
})
// Basic Auth for all requests
client.SetBasicAuth("myuser", "mypass")
// Bearer Auth Token for all requests
client.SetAuthToken("BC594900518B4F7EAC75BD37F019E08FBC594900518B4F7EAC75BD37F019E08F")
// Enabling Content-Length value for all requests
client.SetContentLength(true)
// Registering global Error object structure for JSON/XML request
client.SetError(&Error{}) // or resty.SetError(Error{})
```
#### Unix Socket
```go
unixSocket := "/var/run/my_socket.sock"
// Create a Go http.Transport so we can set it in resty.
transport := http.Transport{
Dial: func(_, _ string) (net.Conn, error) {
return net.Dial("unix", unixSocket)
},
}
// Create a Resty Client
client := resty.New()
// Set the previous transport that we created, set the scheme of the communication to the
// socket and set the unixSocket as the HostURL.
client.SetTransport(&transport).SetScheme("http").SetBaseURL(unixSocket)
// No need to write the host's URL on the request, just the path.
client.R().Get("http://localhost/index.html")
```
#### Bazel Support
Resty can be built, tested and depended upon via [Bazel](https://bazel.build).
For example, to run all tests:
```shell
bazel test :resty_test
```
#### Mocking http requests using [httpmock](https://github.com/jarcoal/httpmock) library
In order to mock the http requests when testing your application, you
could use the `httpmock` library.
When using the default resty client, you should pass the client to the library as follows:
```go
// Create a Resty Client
client := resty.New()
// Get the underlying HTTP Client and set it to Mock
httpmock.ActivateNonDefault(client.GetClient())
```
A more detailed example of mocking resty http requests using Ginkgo can be found [here](https://github.com/jarcoal/httpmock#ginkgo--resty-example).
## Versioning
Resty releases versions according to [Semantic Versioning](http://semver.org)
* Resty v2 does not use `gopkg.in` service for library versioning.
* Resty has been fully adapted to `go mod` capabilities since the `v1.10.0` release.
* The Resty v1 series used `gopkg.in` to provide versioning. `gopkg.in/resty.vX` points to the appropriate tagged version; `X` denotes the version series number, a stable release for production use, e.g. `gopkg.in/resty.v0`.
* Development takes place on the master branch. Although the code in master should always compile and test successfully, it might break APIs. I aim to maintain backwards compatibility, but sometimes APIs and behavior might be changed to fix a bug.
## Contribution
I would welcome your contribution! If you find any improvement or issue you want to fix, feel free to send a pull request. I like pull requests that include test cases for the fix/enhancement. I have done my best to bring pretty good code coverage. Feel free to write tests.
BTW, I'd like to know what you think about `Resty`. Kindly open an issue or send me an email; it'd mean a lot to me.
## Creator
[Jeevanandam M.](https://github.com/jeevatkm) ([email protected])
## Core Team
Have a look on [Members](https://github.com/orgs/go-resty/people) page.
## Contributors
Have a look on [Contributors](https://github.com/go-resty/resty/graphs/contributors) page.
## License
Resty is released under the MIT license; refer to the [LICENSE](LICENSE) file.
"source": "yandex/perforator",
"title": "vendor/github.com/go-resty/resty/v2/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/go-resty/resty/v2/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 35012
} |
# Development, Testing and Contributing
1. Make sure you have a running Docker daemon
(Install for [MacOS](https://docs.docker.com/docker-for-mac/))
1. Use a version of Go that supports [modules](https://golang.org/cmd/go/#hdr-Modules__module_versions__and_more) (e.g. Go 1.11+)
1. Fork this repo and `git clone` it to `$GOPATH/src/github.com/golang-migrate/migrate`
* Ensure that [Go modules are enabled](https://golang.org/cmd/go/#hdr-Preliminary_module_support) (e.g. your repo path or the `GO111MODULE` environment variable are set correctly)
1. Install [golangci-lint](https://github.com/golangci/golangci-lint#install)
1. Run the linter: `golangci-lint run`
1. Confirm tests are working: `make test-short`
1. Write awesome code ...
1. `make test` to run all tests against all database versions
1. Push code and open Pull Request
Some more helpful commands:
* You can specify which database/source tests to run:
`make test-short SOURCE='file go_bindata' DATABASE='postgres cassandra'`
* After `make test`, run `make html-coverage` which opens a shiny test coverage overview.
* `make build-cli` builds the CLI in directory `cli/build/`.
* `make list-external-deps` lists all external dependencies for each package
* `make docs && make open-docs` opens godoc in your browser, `make kill-docs` kills the godoc server.
Repeatedly call `make docs` to refresh the server.
* Set the `DOCKER_API_VERSION` environment variable to the latest supported version if you get errors regarding the docker client API version being too new. | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1588
} |
[](https://github.com/golang-migrate/migrate/actions/workflows/ci.yaml?query=branch%3Amaster)
[](https://pkg.go.dev/github.com/golang-migrate/migrate/v4)
[](https://coveralls.io/github/golang-migrate/migrate?branch=master)
[](https://packagecloud.io/golang-migrate/migrate?filter=debs)
[](https://hub.docker.com/r/migrate/migrate/)

[](https://github.com/golang-migrate/migrate/releases)
[](https://goreportcard.com/report/github.com/golang-migrate/migrate)
# migrate
__Database migrations written in Go. Use as [CLI](#cli-usage) or import as [library](#use-in-your-go-project).__
* Migrate reads migrations from [sources](#migration-sources)
and applies them in correct order to a [database](#databases).
* Drivers are "dumb", migrate glues everything together and makes sure the logic is bulletproof.
(Keeps the drivers lightweight, too.)
* Database drivers don't assume things or try to correct user input. When in doubt, fail.
Forked from [mattes/migrate](https://github.com/mattes/migrate)
## Databases
Database drivers run migrations. [Add a new database?](database/driver.go)
* [PostgreSQL](database/postgres)
* [PGX](database/pgx)
* [Redshift](database/redshift)
* [Ql](database/ql)
* [Cassandra](database/cassandra)
* [SQLite](database/sqlite)
* [SQLite3](database/sqlite3) ([todo #165](https://github.com/mattes/migrate/issues/165))
* [SQLCipher](database/sqlcipher)
* [MySQL/ MariaDB](database/mysql)
* [Neo4j](database/neo4j)
* [MongoDB](database/mongodb)
* [CrateDB](database/crate) ([todo #170](https://github.com/mattes/migrate/issues/170))
* [Shell](database/shell) ([todo #171](https://github.com/mattes/migrate/issues/171))
* [Google Cloud Spanner](database/spanner)
* [CockroachDB](database/cockroachdb)
* [ClickHouse](database/clickhouse)
* [Firebird](database/firebird)
* [MS SQL Server](database/sqlserver)
### Database URLs
Database connection strings are specified via URLs. The URL format is driver dependent but generally has the form: `dbdriver://username:password@host:port/dbname?param1=true&param2=false`
Any [reserved URL characters](https://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_reserved_characters) need to be escaped. Note that the `%` character also [needs to be escaped](https://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_the_percent_character).
Explicitly, the following characters need to be escaped:
`!`, `#`, `$`, `%`, `&`, `'`, `(`, `)`, `*`, `+`, `,`, `/`, `:`, `;`, `=`, `?`, `@`, `[`, `]`
It's easiest to always run the URL parts of your DB connection URL (e.g. username, password, etc.) through a URL encoder. See the example Python snippets below:
```bash
$ python3 -c 'import urllib.parse; print(urllib.parse.quote(input("String to encode: "), ""))'
String to encode: FAKEpassword!#$%&'()*+,/:;=?@[]
FAKEpassword%21%23%24%25%26%27%28%29%2A%2B%2C%2F%3A%3B%3D%3F%40%5B%5D
$ python2 -c 'import urllib; print urllib.quote(raw_input("String to encode: "), "")'
String to encode: FAKEpassword!#$%&'()*+,/:;=?@[]
FAKEpassword%21%23%24%25%26%27%28%29%2A%2B%2C%2F%3A%3B%3D%3F%40%5B%5D
$
```
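If you assemble the connection URL in Go instead, the standard library's `net/url` package can do the escaping for you. A minimal sketch (the credentials are, of course, fake):
```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// url.UserPassword percent-encodes reserved characters when the URL is rendered
	dbURL := url.URL{
		Scheme:   "postgres",
		User:     url.UserPassword("user", "FAKEpassword!#$%&'()*+,/:;=?@[]"),
		Host:     "localhost:5432",
		Path:     "/database",
		RawQuery: "sslmode=disable",
	}
	fmt.Println(dbURL.String()) // password is printed percent-encoded
}
```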
## Migration Sources
Source drivers read migrations from local or remote sources. [Add a new source?](source/driver.go)
* [Filesystem](source/file) - read from filesystem
* [io/fs](source/iofs) - read from a Go [io/fs](https://pkg.go.dev/io/fs#FS)
* [Go-Bindata](source/go_bindata) - read from embedded binary data ([jteeuwen/go-bindata](https://github.com/jteeuwen/go-bindata))
* [pkger](source/pkger) - read from embedded binary data ([markbates/pkger](https://github.com/markbates/pkger))
* [GitHub](source/github) - read from remote GitHub repositories
* [GitHub Enterprise](source/github_ee) - read from remote GitHub Enterprise repositories
* [Bitbucket](source/bitbucket) - read from remote Bitbucket repositories
* [Gitlab](source/gitlab) - read from remote Gitlab repositories
* [AWS S3](source/aws_s3) - read from Amazon Web Services S3
* [Google Cloud Storage](source/google_cloud_storage) - read from Google Cloud Platform Storage
## CLI usage
* Simple wrapper around this library.
* Handles ctrl+c (SIGINT) gracefully.
* No config search paths, no config files, no magic ENV var injections.
__[CLI Documentation](cmd/migrate)__
### Basic usage
```bash
$ migrate -source file://path/to/migrations -database postgres://localhost:5432/database up 2
```
### Docker usage
```bash
$ docker run -v {{ migration dir }}:/migrations --network host migrate/migrate \
    -path=/migrations/ -database postgres://localhost:5432/database up 2
```
## Use in your Go project
* API is stable and frozen for this release (v3 & v4).
* Uses [Go modules](https://golang.org/cmd/go/#hdr-Modules__module_versions__and_more) to manage dependencies.
* To help prevent database corruptions, it supports graceful stops via `GracefulStop chan bool`.
* Bring your own logger.
* Uses `io.Reader` streams internally for low memory overhead.
* Thread-safe and no goroutine leaks.
__[Go Documentation](https://godoc.org/github.com/golang-migrate/migrate)__
```go
import (
"github.com/golang-migrate/migrate/v4"
_ "github.com/golang-migrate/migrate/v4/database/postgres"
_ "github.com/golang-migrate/migrate/v4/source/github"
)
func main() {
m, err := migrate.New(
"github://mattes:personal-access-token@mattes/migrate_test",
"postgres://localhost:5432/database?sslmode=enable")
m.Steps(2)
}
```
Want to use an existing database client?
```go
import (
"database/sql"
_ "github.com/lib/pq"
"github.com/golang-migrate/migrate/v4"
"github.com/golang-migrate/migrate/v4/database/postgres"
_ "github.com/golang-migrate/migrate/v4/source/file"
)
func main() {
	db, err := sql.Open("postgres", "postgres://localhost:5432/database?sslmode=disable")
driver, err := postgres.WithInstance(db, &postgres.Config{})
m, err := migrate.NewWithDatabaseInstance(
"file:///migrations",
"postgres", driver)
m.Up() // or m.Step(2) if you want to explicitly set the number of migrations to run
}
```
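The [io/fs](source/iofs) source driver also combines nicely with Go's `embed` package, so migrations ship inside the binary. A minimal sketch, assuming the migrations live in a local `migrations/` directory:
```go
package main

import (
	"embed"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	"github.com/golang-migrate/migrate/v4/source/iofs"
)

//go:embed migrations/*.sql
var migrationsFS embed.FS

func main() {
	// Wrap the embedded filesystem in a source driver
	d, err := iofs.New(migrationsFS, "migrations")
	if err != nil {
		panic(err)
	}
	m, err := migrate.NewWithSourceInstance("iofs", d, "postgres://localhost:5432/database?sslmode=disable")
	if err != nil {
		panic(err)
	}
	// ErrNoChange simply means the database is already up to date
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		panic(err)
	}
}
```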
## Getting started
Go to [getting started](GETTING_STARTED.md)
## Tutorials
* [CockroachDB](database/cockroachdb/TUTORIAL.md)
* [PostgreSQL](database/postgres/TUTORIAL.md)
(more tutorials to come)
## Migration files
Each migration has an up and down migration. [Why?](FAQ.md#why-two-separate-files-up-and-down-for-a-migration)
```bash
1481574547_create_users_table.up.sql
1481574547_create_users_table.down.sql
```
[Best practices: How to write migrations.](MIGRATIONS.md)
## Versions
Version | Supported? | Import | Notes
--------|------------|--------|------
**master** | :white_check_mark: | `import "github.com/golang-migrate/migrate/v4"` | New features and bug fixes arrive here first
**v4** | :white_check_mark: | `import "github.com/golang-migrate/migrate/v4"` | Used for stable releases
**v3** | :x: | `import "github.com/golang-migrate/migrate"` (with package manager) or `import "gopkg.in/golang-migrate/migrate.v3"` (not recommended) | **DO NOT USE** - No longer supported
## Development and Contributing
Yes, please! [`Makefile`](Makefile) is your friend,
read the [development guide](CONTRIBUTING.md).
Also have a look at the [FAQ](FAQ.md).
---
Looking for alternatives? [https://awesome-go.com/#database](https://awesome-go.com/#database). | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 7986
} |
# Changelog
## [2.12.3](https://github.com/googleapis/gax-go/compare/v2.12.2...v2.12.3) (2024-03-14)
### Bug Fixes
* bump protobuf dep to v1.33 ([#333](https://github.com/googleapis/gax-go/issues/333)) ([2892b22](https://github.com/googleapis/gax-go/commit/2892b22c1ae8a70dec3448d82e634643fe6c1be2))
## [2.12.2](https://github.com/googleapis/gax-go/compare/v2.12.1...v2.12.2) (2024-02-23)
### Bug Fixes
* **v2/callctx:** fix SetHeader race by cloning header map ([#326](https://github.com/googleapis/gax-go/issues/326)) ([534311f](https://github.com/googleapis/gax-go/commit/534311f0f163d101f30657736c0e6f860e9c39dc))
## [2.12.1](https://github.com/googleapis/gax-go/compare/v2.12.0...v2.12.1) (2024-02-13)
### Bug Fixes
* add XGoogFieldMaskHeader constant ([#321](https://github.com/googleapis/gax-go/issues/321)) ([666ee08](https://github.com/googleapis/gax-go/commit/666ee08931041b7fed56bed7132649785b2d3dfe))
## [2.12.0](https://github.com/googleapis/gax-go/compare/v2.11.0...v2.12.0) (2023-06-26)
### Features
* **v2/callctx:** add new callctx package ([#291](https://github.com/googleapis/gax-go/issues/291)) ([11503ed](https://github.com/googleapis/gax-go/commit/11503ed98df4ae1bbdedf91ff64d47e63f187d68))
* **v2:** add BuildHeaders and InsertMetadataIntoOutgoingContext to header ([#290](https://github.com/googleapis/gax-go/issues/290)) ([6a4b89f](https://github.com/googleapis/gax-go/commit/6a4b89f5551a40262e7c3caf2e1bdc7321b76ea1))
## [2.11.0](https://github.com/googleapis/gax-go/compare/v2.10.0...v2.11.0) (2023-06-13)
### Features
* **v2:** add GoVersion package variable ([#283](https://github.com/googleapis/gax-go/issues/283)) ([26553cc](https://github.com/googleapis/gax-go/commit/26553ccadb4016b189881f52e6c253b68bb3e3d5))
### Bug Fixes
* **v2:** handle space in non-devel go version ([#288](https://github.com/googleapis/gax-go/issues/288)) ([fd7bca0](https://github.com/googleapis/gax-go/commit/fd7bca029a1c5e63def8f0a5fd1ec3f725d92f75))
## [2.10.0](https://github.com/googleapis/gax-go/compare/v2.9.1...v2.10.0) (2023-05-30)
### Features
* update dependencies ([#280](https://github.com/googleapis/gax-go/issues/280)) ([4514281](https://github.com/googleapis/gax-go/commit/4514281058590f3637c36bfd49baa65c4d3cfb21))
## [2.9.1](https://github.com/googleapis/gax-go/compare/v2.9.0...v2.9.1) (2023-05-23)
### Bug Fixes
* **v2:** drop cloud lro test dep ([#276](https://github.com/googleapis/gax-go/issues/276)) ([c67eeba](https://github.com/googleapis/gax-go/commit/c67eeba0f10a3294b1d93c1b8fbe40211a55ae5f)), refs [#270](https://github.com/googleapis/gax-go/issues/270)
## [2.9.0](https://github.com/googleapis/gax-go/compare/v2.8.0...v2.9.0) (2023-05-22)
### Features
* **apierror:** add method to return HTTP status code conditionally ([#274](https://github.com/googleapis/gax-go/issues/274)) ([5874431](https://github.com/googleapis/gax-go/commit/587443169acd10f7f86d1989dc8aaf189e645e98)), refs [#229](https://github.com/googleapis/gax-go/issues/229)
### Documentation
* add ref to usage with clients ([#272](https://github.com/googleapis/gax-go/issues/272)) ([ea4d72d](https://github.com/googleapis/gax-go/commit/ea4d72d514beba4de450868b5fb028601a29164e)), refs [#228](https://github.com/googleapis/gax-go/issues/228)
## [2.8.0](https://github.com/googleapis/gax-go/compare/v2.7.1...v2.8.0) (2023-03-15)
### Features
* **v2:** add WithTimeout option ([#259](https://github.com/googleapis/gax-go/issues/259)) ([9a8da43](https://github.com/googleapis/gax-go/commit/9a8da43693002448b1e8758023699387481866d1))
## [2.7.1](https://github.com/googleapis/gax-go/compare/v2.7.0...v2.7.1) (2023-03-06)
### Bug Fixes
* **v2/apierror:** return Unknown GRPCStatus when err source is HTTP ([#260](https://github.com/googleapis/gax-go/issues/260)) ([043b734](https://github.com/googleapis/gax-go/commit/043b73437a240a91229207fb3ee52a9935a36f23)), refs [#254](https://github.com/googleapis/gax-go/issues/254)
## [2.7.0](https://github.com/googleapis/gax-go/compare/v2.6.0...v2.7.0) (2022-11-02)
### Features
* update google.golang.org/api to latest ([#240](https://github.com/googleapis/gax-go/issues/240)) ([f690a02](https://github.com/googleapis/gax-go/commit/f690a02c806a2903bdee943ede3a58e3a331ebd6))
* **v2/apierror:** add apierror.FromWrappingError ([#238](https://github.com/googleapis/gax-go/issues/238)) ([9dbd96d](https://github.com/googleapis/gax-go/commit/9dbd96d59b9d54ceb7c025513aa8c1a9d727382f))
## [2.6.0](https://github.com/googleapis/gax-go/compare/v2.5.1...v2.6.0) (2022-10-13)
### Features
* **v2:** copy DetermineContentType functionality ([#230](https://github.com/googleapis/gax-go/issues/230)) ([2c52a70](https://github.com/googleapis/gax-go/commit/2c52a70bae965397f740ed27d46aabe89ff249b3))
## [2.5.1](https://github.com/googleapis/gax-go/compare/v2.5.0...v2.5.1) (2022-08-04)
### Bug Fixes
* **v2:** resolve bad genproto pseudoversion in go.mod ([#218](https://github.com/googleapis/gax-go/issues/218)) ([1379b27](https://github.com/googleapis/gax-go/commit/1379b27e9846d959f7e1163b9ef298b3c92c8d23))
## [2.5.0](https://github.com/googleapis/gax-go/compare/v2.4.0...v2.5.0) (2022-08-04)
### Features
* add ExtractProtoMessage to apierror ([#213](https://github.com/googleapis/gax-go/issues/213)) ([a6ce70c](https://github.com/googleapis/gax-go/commit/a6ce70c725c890533a9de6272d3b5ba2e336d6bb))
## [2.4.0](https://github.com/googleapis/gax-go/compare/v2.3.0...v2.4.0) (2022-05-09)
### Features
* **v2:** add OnHTTPCodes CallOption ([#188](https://github.com/googleapis/gax-go/issues/188)) ([ba7c534](https://github.com/googleapis/gax-go/commit/ba7c5348363ab6c33e1cee3c03c0be68a46ca07c))
### Bug Fixes
* **v2/apierror:** use errors.As in FromError ([#189](https://github.com/googleapis/gax-go/issues/189)) ([f30f05b](https://github.com/googleapis/gax-go/commit/f30f05be583828f4c09cca4091333ea88ff8d79e))
### Miscellaneous Chores
* **v2:** bump release-please processing ([#192](https://github.com/googleapis/gax-go/issues/192)) ([56172f9](https://github.com/googleapis/gax-go/commit/56172f971d1141d7687edaac053ad3470af76719)) | {
"source": "yandex/perforator",
"title": "vendor/github.com/googleapis/gax-go/v2/CHANGES.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/googleapis/gax-go/v2/CHANGES.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 6132
} |
golang-lru
==========
This provides the `lru` package which implements a fixed-size,
thread-safe LRU cache. It is based on the cache in Groupcache.
Documentation
=============
Full docs are available on [Go Packages](https://pkg.go.dev/github.com/hashicorp/golang-lru/v2)
LRU cache example
=================
```go
package main
import (
"fmt"
"github.com/hashicorp/golang-lru/v2"
)
func main() {
l, _ := lru.New[int, any](128)
for i := 0; i < 256; i++ {
l.Add(i, nil)
}
if l.Len() != 128 {
panic(fmt.Sprintf("bad len: %v", l.Len()))
}
}
```
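The package also provides `NewWithEvict`, which runs a callback whenever an entry is evicted; a minimal sketch:
```go
package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru/v2"
)

func main() {
	// Cache of size 2 with an eviction callback
	l, _ := lru.NewWithEvict[int, string](2, func(key int, value string) {
		fmt.Printf("evicted: %d=%q\n", key, value)
	})
	l.Add(1, "one")
	l.Add(2, "two")
	l.Add(3, "three")    // evicts key 1, the least recently used entry
	fmt.Println(l.Len()) // 2
}
```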
Expirable LRU cache example
===========================
```go
package main
import (
"fmt"
"time"
"github.com/hashicorp/golang-lru/v2/expirable"
)
func main() {
// make cache with 10ms TTL and 5 max keys
cache := expirable.NewLRU[string, string](5, nil, time.Millisecond*10)
// set value under key1.
cache.Add("key1", "val1")
// get value under key1
r, ok := cache.Get("key1")
// check for OK value
if ok {
fmt.Printf("value before expiration is found: %v, value: %q\n", ok, r)
}
// wait for cache to expire
time.Sleep(time.Millisecond * 12)
// get value under key1 after key expiration
r, ok = cache.Get("key1")
fmt.Printf("value after expiration is found: %v, value: %q\n", ok, r)
// set value under key2, would evict old entry because it is already expired.
cache.Add("key2", "val2")
fmt.Printf("Cache len: %d\n", cache.Len())
// Output:
// value before expiration is found: true, value: "val1"
// value after expiration is found: false, value: ""
// Cache len: 1
}
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/hashicorp/golang-lru/v2/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/hashicorp/golang-lru/v2/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1567
} |
[](https://godoc.org/github.com/jackc/chunkreader)
[](https://travis-ci.org/jackc/chunkreader)
# chunkreader
Package chunkreader provides an io.Reader wrapper that minimizes IO reads and memory allocations.
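A minimal usage sketch, assuming the v2 API in which `New` wraps an `io.Reader` and `Next(n)` returns exactly `n` bytes:
```go
package main

import (
	"fmt"
	"strings"

	chunkreader "github.com/jackc/chunkreader/v2"
)

func main() {
	cr := chunkreader.New(strings.NewReader("hello world"))

	// Next returns exactly n bytes. The returned buffer is only valid
	// until the following call to Next, which is what lets the reader
	// reuse memory and avoid per-read allocations.
	buf, err := cr.Next(5)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", buf) // hello
}
```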
Extracted from original implementation in https://github.com/jackc/pgx. | {
"source": "yandex/perforator",
"title": "vendor/github.com/jackc/chunkreader/v2/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/jackc/chunkreader/v2/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 403
} |
[](https://godoc.org/github.com/jackc/pgproto3)
[](https://travis-ci.org/jackc/pgproto3)
---
This version is used with pgx `v4`. In pgx `v5` it is part of the https://github.com/jackc/pgx repository.
---
# pgproto3
Package pgproto3 is an encoder and decoder of the PostgreSQL wire protocol version 3.
pgproto3 can be used as a foundation for PostgreSQL drivers, proxies, mock servers, load balancers and more.
See example/pgfortune for a playful example of a fake PostgreSQL server.
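As a rough illustration of the server side, the sketch below accepts one TCP connection and decodes the client's startup message. It assumes the v2 `NewBackend`/`NewChunkReader` API used by the pgfortune example:
```go
package main

import (
	"fmt"
	"net"

	"github.com/jackc/pgproto3/v2"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:15432")
	if err != nil {
		panic(err)
	}
	conn, err := ln.Accept()
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// A Backend speaks the server side of the PostgreSQL wire protocol
	backend := pgproto3.NewBackend(pgproto3.NewChunkReader(conn), conn)

	startup, err := backend.ReceiveStartupMessage()
	if err != nil {
		panic(err)
	}
	fmt.Printf("startup message: %#v\n", startup)
}
```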
Extracted from original implementation in https://github.com/jackc/pgx. | {
"source": "yandex/perforator",
"title": "vendor/github.com/jackc/pgproto3/v2/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/jackc/pgproto3/v2/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 677
} |
# 4.18.3 (March 9, 2024)
Use spaces instead of parentheses for SQL sanitization.
This still solves the problem of negative numbers creating a line comment, but this avoids breaking edge cases such as
`set foo to $1` where the substitution is taking place in a location where an arbitrary expression is not allowed.
# 4.18.2 (March 4, 2024)
Fix CVE-2024-27289
SQL injection can occur when all of the following conditions are met:
1. The non-default simple protocol is used.
2. A placeholder for a numeric value must be immediately preceded by a minus.
3. There must be a second placeholder for a string value after the first placeholder; both must be on the same line.
4. Both parameter values must be user-controlled.
Thanks to Paul Gerste for reporting this issue.
Fix CVE-2024-27304
SQL injection can occur if an attacker can cause a single query or bind message to exceed 4 GB in size. An integer
overflow in the calculated message size can cause the one large message to be sent as multiple messages under the
attacker's control.
Thanks to Paul Gerste for reporting this issue.
* Fix *dbTx.Exec not checking if it is already closed
# 4.18.1 (February 27, 2023)
* Fix: Support pgx v4 and v5 stdlib in same program (Tomáš Procházka)
# 4.18.0 (February 11, 2023)
* Upgrade pgconn to v1.14.0
* Upgrade pgproto3 to v2.3.2
* Upgrade pgtype to v1.14.0
* Fix query sanitizer when query text contains Unicode replacement character
* Fix context with value in BeforeConnect (David Harju)
* Support pgx v4 and v5 stdlib in same program (Vitalii Solodilov)
# 4.17.2 (September 3, 2022)
* Fix panic when logging batch error (Tom Möller)
# 4.17.1 (August 27, 2022)
* Upgrade puddle to v1.3.0 - fixes context failing to cancel Acquire when acquire is creating resource which was introduced in v4.17.0 (James Hartig)
* Fix atomic alignment on 32-bit platforms
# 4.17.0 (August 6, 2022)
* Upgrade pgconn to v1.13.0
* Upgrade pgproto3 to v2.3.1
* Upgrade pgtype to v1.12.0
* Allow background pool connections to continue even if cause is canceled (James Hartig)
* Add LoggerFunc (Gabor Szabad)
* pgxpool: health check should avoid going below minConns (James Hartig)
* Add pgxpool.Conn.Hijack()
* Logging improvements (Stepan Rabotkin)
# 4.16.1 (May 7, 2022)
* Upgrade pgconn to v1.12.1
* Fix explicitly prepared statements with describe statement cache mode
# 4.16.0 (April 21, 2022)
* Upgrade pgconn to v1.12.0
* Upgrade pgproto3 to v2.3.0
* Upgrade pgtype to v1.11.0
* Fix: Do not panic when context cancelled while getting statement from cache.
* Fix: Less memory pinning from old Rows.
* Fix: Support '\r' line ending when sanitizing SQL comment.
* Add pluggable GSSAPI support (Oliver Tan)
# 4.15.0 (February 7, 2022)
* Upgrade to pgconn v1.11.0
* Upgrade to pgtype v1.10.0
* Upgrade puddle to v1.2.1
* Make BatchResults.Close safe to be called multiple times
# 4.14.1 (November 28, 2021)
* Upgrade pgtype to v1.9.1 (fixes unintentional change to timestamp binary decoding)
* Start pgxpool background health check after initial connections
# 4.14.0 (November 20, 2021)
* Upgrade pgconn to v1.10.1
* Upgrade pgproto3 to v2.2.0
* Upgrade pgtype to v1.9.0
* Upgrade puddle to v1.2.0
* Add QueryFunc to BatchResults
* Add context options to zerologadapter (Thomas Frössman)
* Add zerologadapter.NewContextLogger (urso)
* Eager initialize minpoolsize on connect (Daniel)
* Unpin memory used by large queries immediately after use
# 4.13.0 (July 24, 2021)
* Trimmed pseudo-dependencies in Go modules from other packages tests
* Upgrade pgconn -- context cancellation no longer will return a net.Error
* Support time durations for simple protocol (Michael Darr)
# 4.12.0 (July 10, 2021)
* ResetSession hook is called before a connection is reused from pool for another query (Dmytro Haranzha)
* stdlib: Add RandomizeHostOrderFunc (dkinder)
* stdlib: add OptionBeforeConnect (dkinder)
* stdlib: Do not reuse ConnConfig strings (Andrew Kimball)
* stdlib: implement Conn.ResetSession (Jonathan Amsterdam)
* Upgrade pgconn to v1.9.0
* Upgrade pgtype to v1.8.0
# 4.11.0 (March 25, 2021)
* Add BeforeConnect callback to pgxpool.Config (Robert Froehlich)
* Add Ping method to pgxpool.Conn (davidsbond)
* Added a kitlog level log adapter (Fabrice Aneche)
* Make ScanArgError public to allow identification of offending column (Pau Sanchez)
* Add *pgxpool.AcquireFunc
* Add BeginFunc and BeginTxFunc
* Add prefer_simple_protocol to connection string
* Add logging on CopyFrom (Patrick Hemmer)
* Add comment support when sanitizing SQL queries (Rusakow Andrew)
* Do not panic on double close of pgxpool.Pool (Matt Schultz)
* Avoid panic on SendBatch on closed Tx (Matt Schultz)
* Update pgconn to v1.8.1
* Update pgtype to v1.7.0
# 4.10.1 (December 19, 2020)
* Fix panic on Query error with nil stmtcache.
# 4.10.0 (December 3, 2020)
* Add CopyFromSlice to simplify CopyFrom usage (Egon Elbre)
* Remove broken prepared statements from stmtcache (Ethan Pailes)
* stdlib: consider any Ping error as fatal
* Update puddle to v1.1.3 - this fixes an issue where concurrent Acquires can hang when a connection cannot be established
* Update pgtype to v1.6.2
# 4.9.2 (November 3, 2020)
The underlying library updates fix an issue where appending to a scanned slice could corrupt other data.
* Update pgconn to v1.7.2
* Update pgproto3 to v2.0.6
# 4.9.1 (October 31, 2020)
* Update pgconn to v1.7.1
* Update pgtype to v1.6.1
* Fix SendBatch of all prepared statements with statement cache disabled
# 4.9.0 (September 26, 2020)
* pgxpool now waits for connection cleanup to finish before making room in pool for another connection. This prevents temporarily exceeding max pool size.
* Fix when scanning a column to nil to skip it on the first row but scanning it to a real value on a subsequent row.
* Fix prefer simple protocol with prepared statements. (Jinzhu)
* Fix FieldDescriptions not being available on Rows before calling Next the first time.
* Various minor fixes in updated versions of pgconn, pgtype, and puddle.
# 4.8.1 (July 29, 2020)
* Update pgconn to v1.6.4
* Fix deadlock on error after CommandComplete but before ReadyForQuery
* Fix panic on parsing DSN with trailing '='
# 4.8.0 (July 22, 2020)
* All argument types supported by native pgx should now also work through database/sql
* Update pgconn to v1.6.3
* Update pgtype to v1.4.2
# 4.7.2 (July 14, 2020)
* Improve performance of Columns() (zikaeroh)
* Fix fatal Commit() failure not being considered fatal
* Update pgconn to v1.6.2
* Update pgtype to v1.4.1
# 4.7.1 (June 29, 2020)
* Fix stdlib decoding error with certain order and combination of fields
# 4.7.0 (June 27, 2020)
* Update pgtype to v1.4.0
* Update pgconn to v1.6.1
* Update puddle to v1.1.1
* Fix context propagation with Tx commit and Rollback (georgysavva)
* Add lazy connect option to pgxpool (georgysavva)
* Fix connection leak if pgxpool.BeginTx() fail (Jean-Baptiste Bronisz)
* Add native Go slice support for strings and numbers to simple protocol
* stdlib add default timeouts for Conn.Close() and Stmt.Close() (georgysavva)
* Assorted performance improvements especially with large result sets
* Fix close pool on not lazy connect failure (Yegor Myskin)
* Add Config copy (georgysavva)
* Support SendBatch with Simple Protocol (Jordan Lewis)
* Better error logging on rows close (Igor V. Kozinov)
* Expose stdlib.Conn.Conn() to enable database/sql.Conn.Raw()
* Improve unknown type support for database/sql
* Fix transaction commit failure closing connection
# 4.6.0 (March 30, 2020)
* stdlib: Bail early if preloading rows.Next() results in rows.Err() (Bas van Beek)
* Sanitize time to microsecond accuracy (Andrew Nicoll)
* Update pgtype to v1.3.0
* Update pgconn to v1.5.0
* Update golang.org/x/crypto for security fix
* Implement "verify-ca" SSL mode
# 4.5.0 (March 7, 2020)
* Update to pgconn v1.4.0
* Fixes QueryRow with empty SQL
* Adds PostgreSQL service file support
* Add Len() to *pgx.Batch (WGH)
* Better logging for individual batch items (Ben Bader)
# 4.4.1 (February 14, 2020)
* Update pgconn to v1.3.2 - better default read buffer size
* Fix race in CopyFrom
# 4.4.0 (February 5, 2020)
* Update puddle to v1.1.0 - fixes possible deadlock when acquire is cancelled
* Update pgconn to v1.3.1 - fixes CopyFrom deadlock when multiple NoticeResponse received during copy
* Update pgtype to v1.2.0
* Add MaxConnIdleTime to pgxpool (Patrick Ellul)
* Add MinConns to pgxpool (Patrick Ellul)
* Fix: stdlib.ReleaseConn closes connections left in invalid state
# 4.3.0 (January 23, 2020)
* Fix Rows.Values panic when unable to decode
* Add Rows.Values support for unknown types
* Add DriverContext support for stdlib (Alex Gaynor)
* Update pgproto3 to v2.0.1 to never return an io.EOF as it would be misinterpreted by database/sql. Instead return io.UnexpectedEOF.
# 4.2.1 (January 13, 2020)
* Update pgconn to v1.2.1 (fixes context cancellation data race introduced in v1.2.0))
# 4.2.0 (January 11, 2020)
* Update pgconn to v1.2.0.
* Update pgtype to v1.1.0.
* Return error instead of panic when wrong number of arguments passed to Exec. (malstoun)
* Fix large objects functionality when PreferSimpleProtocol = true.
* Restore GetDefaultDriver which existed in v3. (Johan Brandhorst)
* Add RegisterConnConfig to stdlib which replaces the removed RegisterDriverConfig from v3.
# 4.1.2 (October 22, 2019)
* Fix dbSavepoint.Begin recursive self call
* Upgrade pgtype to v1.0.2 - fix scan pointer to pointer
# 4.1.1 (October 21, 2019)
* Fix pgxpool Rows.CommandTag() infinite loop / typo
# 4.1.0 (October 12, 2019)
## Potentially Breaking Changes
Technically, two changes are breaking changes, but in practice these are extremely unlikely to break existing code.
* Conn.Begin and Conn.BeginTx return a Tx interface instead of the internal dbTx struct. This is necessary for the Conn.Begin method to have the same signature as other methods that begin a transaction.
* Add Conn() to Tx interface. This is necessary to allow code using a Tx to access the *Conn (and pgconn.PgConn) on which the Tx is executing.
## Fixes
* Releasing a busy connection closes the connection instead of returning an unusable connection to the pool
* Do not mutate config.Config.OnNotification in connect
# 4.0.1 (September 19, 2019)
* Fix statement cache cleanup.
* Corrected daterange OID.
* Fix Tx when committing or rolling back multiple times in certain cases.
* Improve documentation.
# 4.0.0 (September 14, 2019)
v4 is a major release with many significant changes some of which are breaking changes. The most significant are
included below.
* Simplified establishing a connection with a connection string.
* All potentially blocking operations now require a context.Context. The non-context aware functions have been removed.
* OIDs are hard-coded for known types. This saves the query on connection.
* Context cancellations while network activity is in progress are now always fatal. Previously, they were sometimes recoverable. This led to increased complexity in pgx itself and in application code.
* Go modules are required.
* Errors are now implemented in the Go 1.13 style.
* `Rows` and `Tx` are now interfaces.
* The connection pool has been decoupled from pgx and is now a separate, included package (github.com/jackc/pgx/v4/pgxpool).
* pgtype has been spun off to a separate package (github.com/jackc/pgtype).
* pgproto3 has been spun off to a separate package (github.com/jackc/pgproto3/v2).
* Logical replication support has been spun off to a separate package (github.com/jackc/pglogrepl).
* Lower level PostgreSQL functionality is now implemented in a separate package (github.com/jackc/pgconn).
* Tests are now configured with environment variables.
* Conn has an automatic statement cache by default.
* Batch interface has been simplified.
* QueryArgs has been removed. | {
"source": "yandex/perforator",
"title": "vendor/github.com/jackc/pgx/v4/CHANGELOG.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/jackc/pgx/v4/CHANGELOG.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 11879
} |
[](https://pkg.go.dev/github.com/jackc/pgx/v4)
[](https://travis-ci.org/jackc/pgx)
---
This is the previous stable `v4` release. `v5` has been released.
---
# pgx - PostgreSQL Driver and Toolkit
pgx is a pure Go driver and toolkit for PostgreSQL.
pgx aims to be low-level, fast, and performant, while also enabling PostgreSQL-specific features that the standard `database/sql` package does not allow for.
The driver component of pgx can be used alongside the standard `database/sql` package.
The toolkit component is a related set of packages that implement PostgreSQL functionality such as parsing the wire protocol
and type mapping between PostgreSQL and Go. These underlying packages can be used to implement alternative drivers,
proxies, load balancers, logical replication clients, etc.
The current release of `pgx v4` requires Go modules. To use the previous version, checkout and vendor the `v3` branch.
## Example Usage
```go
package main
import (
"context"
"fmt"
"os"
"github.com/jackc/pgx/v4"
)
func main() {
// urlExample := "postgres://username:password@localhost:5432/database_name"
conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
if err != nil {
fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
os.Exit(1)
}
defer conn.Close(context.Background())
var name string
var weight int64
err = conn.QueryRow(context.Background(), "select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err)
os.Exit(1)
}
fmt.Println(name, weight)
}
```
See the [getting started guide](https://github.com/jackc/pgx/wiki/Getting-started-with-pgx) for more information.
## Choosing Between the pgx and database/sql Interfaces
It is recommended to use the pgx interface if:
1. The application only targets PostgreSQL.
2. No other libraries that require `database/sql` are in use.
The pgx interface is faster and exposes more features.
The `database/sql` interface only allows the underlying driver to return or receive the following types: `int64`,
`float64`, `bool`, `[]byte`, `string`, `time.Time`, or `nil`. Handling other types requires implementing the
`database/sql.Scanner` and the `database/sql/driver.Valuer` interfaces, which require transmission of values in text format. The binary format can be substantially faster, which is what the pgx interface uses.
## Features
pgx supports many features beyond what is available through `database/sql`:
* Support for approximately 70 different PostgreSQL types
* Automatic statement preparation and caching
* Batch queries
* Single-round trip query mode
* Full TLS connection control
* Binary format support for custom types (allows for much quicker encoding/decoding)
* COPY protocol support for faster bulk data loads
* Extendable logging support including built-in support for `log15adapter`, [`logrus`](https://github.com/sirupsen/logrus), [`zap`](https://github.com/uber-go/zap), and [`zerolog`](https://github.com/rs/zerolog)
* Connection pool with after-connect hook for arbitrary connection setup
* Listen / notify
* Conversion of PostgreSQL arrays to Go slice mappings for integers, floats, and strings
* Hstore support
* JSON and JSONB support
* Maps `inet` and `cidr` PostgreSQL types to `net.IPNet` and `net.IP`
* Large object support
* NULL mapping to Null* struct or pointer to pointer
* Supports `database/sql.Scanner` and `database/sql/driver.Valuer` interfaces for custom types
* Notice response handling
* Simulated nested transactions with savepoints
## Performance
There are three areas in particular where pgx can provide a significant performance advantage over the standard
`database/sql` interface and other drivers:
1. PostgreSQL specific types - Types such as arrays can be parsed much quicker because pgx uses the binary format.
2. Automatic statement preparation and caching - pgx will prepare and cache statements by default. This can provide an
significant free improvement to code that does not explicitly use prepared statements. Under certain workloads, it can
perform nearly 3x the number of queries per second.
3. Batched queries - Multiple queries can be batched together to minimize network round trips.
## Testing
pgx tests naturally require a PostgreSQL database. It will connect to the database specified in the `PGX_TEST_DATABASE` environment
variable. The `PGX_TEST_DATABASE` environment variable can either be a URL or DSN. In addition, the standard `PG*` environment
variables will be respected. Consider using [direnv](https://github.com/direnv/direnv) to simplify environment variable
handling.
### Example Test Environment
Connect to your PostgreSQL server and run:
```
create database pgx_test;
```
Connect to the newly-created database and run:
```
create domain uint64 as numeric(20,0);
```
Now, you can run the tests:
```
PGX_TEST_DATABASE="host=/var/run/postgresql database=pgx_test" go test ./...
```
In addition, there are tests specific for PgBouncer that will be executed if `PGX_TEST_PGBOUNCER_CONN_STRING` is set.
## Supported Go and PostgreSQL Versions
pgx supports the same versions of Go and PostgreSQL that are supported by their respective teams. For [Go](https://golang.org/doc/devel/release.html#policy) that is the two most recent major releases and for [PostgreSQL](https://www.postgresql.org/support/versioning/) the major releases in the last 5 years. This means pgx supports Go 1.17 and higher and PostgreSQL 10 and higher. pgx also is tested against the latest version of [CockroachDB](https://www.cockroachlabs.com/product/).
## Version Policy
pgx follows semantic versioning for the documented public API on stable releases. `v4` is the latest stable major version.
## PGX Family Libraries
pgx is the head of a family of PostgreSQL libraries. Many of these can be used independently. Many can also be accessed
from pgx for lower-level control.
### [github.com/jackc/pgconn](https://github.com/jackc/pgconn)
`pgconn` is a lower-level PostgreSQL database driver that operates at nearly the same level as the C library `libpq`.
### [github.com/jackc/pgx/v4/pgxpool](https://github.com/jackc/pgx/tree/master/pgxpool)
`pgxpool` is a connection pool for pgx. pgx is entirely decoupled from its default pool implementation. This means that pgx can be used with a different pool or without any pool at all.
### [github.com/jackc/pgx/v4/stdlib](https://github.com/jackc/pgx/tree/master/stdlib)
This is a `database/sql` compatibility layer for pgx. pgx can be used as a normal `database/sql` driver, but at any time, the native interface can be acquired for more performance or PostgreSQL specific functionality.
### [github.com/jackc/pgtype](https://github.com/jackc/pgtype)
Over 70 PostgreSQL types are supported including `uuid`, `hstore`, `json`, `bytea`, `numeric`, `interval`, `inet`, and arrays. These types support `database/sql` interfaces and are usable outside of pgx. They are fully tested in pgx and pq. They also support a higher performance interface when used with the pgx driver.
### [github.com/jackc/pgproto3](https://github.com/jackc/pgproto3)
pgproto3 provides standalone encoding and decoding of the PostgreSQL v3 wire protocol. This is useful for implementing very low level PostgreSQL tooling.
### [github.com/jackc/pglogrepl](https://github.com/jackc/pglogrepl)
pglogrepl provides functionality to act as a client for PostgreSQL logical replication.
### [github.com/jackc/pgmock](https://github.com/jackc/pgmock)
pgmock offers the ability to create a server that mocks the PostgreSQL wire protocol. This is used internally to test pgx by purposely inducing unusual errors. pgproto3 and pgmock together provide most of the foundational tooling required to implement a PostgreSQL proxy or MitM (such as for a custom connection pooler).
### [github.com/jackc/tern](https://github.com/jackc/tern)
tern is a stand-alone SQL migration system.
### [github.com/jackc/pgerrcode](https://github.com/jackc/pgerrcode)
pgerrcode contains constants for the PostgreSQL error codes.
## 3rd Party Libraries with PGX Support
### [github.com/georgysavva/scany](https://github.com/georgysavva/scany)
Library for scanning data from a database into Go structs and more.
### [https://github.com/otan/gopgkrb5](https://github.com/otan/gopgkrb5)
Adds GSSAPI / Kerberos authentication support.
### [https://github.com/vgarvardt/pgx-google-uuid](https://github.com/vgarvardt/pgx-google-uuid)
Adds support for [`github.com/google/uuid`](https://github.com/google/uuid). | {
"source": "yandex/perforator",
"title": "vendor/github.com/jackc/pgx/v4/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/jackc/pgx/v4/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 8696
} |
# 5.7.2 (December 21, 2024)
* Fix prepared statement already exists on batch prepare failure
* Add commit query to tx options (Lucas Hild)
* Fix pgtype.Timestamp json unmarshal (Shean de Montigny-Desautels)
* Add message body size limits in frontend and backend (zene)
* Add xid8 type
* Ensure planning encodes and scans cannot infinitely recurse
* Implement pgtype.UUID.String() (Konstantin Grachev)
* Switch from ExecParams to Exec in ValidateConnectTargetSessionAttrs functions (Alexander Rumyantsev)
* Update golang.org/x/crypto
# 5.7.1 (September 10, 2024)
* Fix data race in tracelog.TraceLog
* Update puddle to v2.2.2. This removes the import of nanotime via linkname.
* Update golang.org/x/crypto and golang.org/x/text
# 5.7.0 (September 7, 2024)
* Add support for sslrootcert=system (Yann Soubeyrand)
* Add LoadTypes to load multiple types in a single SQL query (Nick Farrell)
* Add XMLCodec supports encoding + scanning XML column type like json (nickcruess-soda)
* Add MultiTrace (Stepan Rabotkin)
* Add TraceLogConfig with customizable TimeKey (stringintech)
* pgx.ErrNoRows wraps sql.ErrNoRows to aid in database/sql compatibility with native pgx functions (merlin)
* Support scanning binary formatted uint32 into string / TextScanner (jennifersp)
* Fix interval encoding to allow 0s and avoid extra spaces (Carlos Pérez-Aradros Herce)
* Update pgservicefile - fixes panic when parsing invalid file
* Better error message when reading past end of batch
* Don't print url when url.Parse returns an error (Kevin Biju)
* Fix snake case name normalization collision in RowToStructByName with db tag (nolandseigler)
* Fix: Scan and encode types with underlying types of arrays
# 5.6.0 (May 25, 2024)
* Add StrictNamedArgs (Tomas Zahradnicek)
* Add support for macaddr8 type (Carlos Pérez-Aradros Herce)
* Add SeverityUnlocalized field to PgError / Notice
* Performance optimization of RowToStructByPos/Name (Zach Olstein)
* Allow customizing context canceled behavior for pgconn
* Add ScanLocation to pgtype.Timestamp[tz]Codec
* Add custom data to pgconn.PgConn
* Fix ResultReader.Read() to handle nil values
* Do not encode interval microseconds when they are 0 (Carlos Pérez-Aradros Herce)
* pgconn.SafeToRetry checks for wrapped errors (tjasko)
* Failed connection attempts include all errors
* Optimize LargeObject.Read (Mitar)
* Add tracing for connection acquire and release from pool (ngavinsir)
* Fix encode driver.Valuer not called when nil
* Add support for custom JSON marshal and unmarshal (Mitar)
* Use Go default keepalive for TCP connections (Hans-Joachim Kliemeck)
# 5.5.5 (March 9, 2024)
Use spaces instead of parentheses for SQL sanitization.
This still solves the problem of negative numbers creating a line comment, but this avoids breaking edge cases such as
`set foo to $1` where the substitution is taking place in a location where an arbitrary expression is not allowed.
# 5.5.4 (March 4, 2024)
Fix CVE-2024-27304
SQL injection can occur if an attacker can cause a single query or bind message to exceed 4 GB in size. An integer
overflow in the calculated message size can cause the one large message to be sent as multiple messages under the
attacker's control.
Thanks to Paul Gerste for reporting this issue.
* Fix behavior of CollectRows to return empty slice if Rows are empty (Felix)
* Fix simple protocol encoding of json.RawMessage
* Fix *Pipeline.getResults should close pipeline on error
* Fix panic in TryFindUnderlyingTypeScanPlan (David Kurman)
* Fix deallocation of invalidated cached statements in a transaction
* Handle invalid sslkey file
* Fix scan float4 into sql.Scanner
* Fix pgtype.Bits not making copy of data from read buffer. This would cause the data to be corrupted by future reads.
# 5.5.3 (February 3, 2024)
* Fix: prepared statement already exists
* Improve CopyFrom auto-conversion of text-ish values
* Add ltree type support (Florent Viel)
* Make some properties of Batch and QueuedQuery public (Pavlo Golub)
* Add AppendRows function (Edoardo Spadolini)
* Optimize convert UUID [16]byte to string (Kirill Malikov)
* Fix: LargeObject Read and Write of more than ~1GB at a time (Mitar)
# 5.5.2 (January 13, 2024)
* Allow NamedArgs to start with underscore
* pgproto3: Maximum message body length support (jeremy.spriet)
* Upgrade golang.org/x/crypto to v0.17.0
* Add snake_case support to RowToStructByName (Tikhon Fedulov)
* Fix: update description cache after exec prepare (James Hartig)
* Fix: pipeline checks if it is closed (James Hartig and Ryan Fowler)
* Fix: normalize timeout / context errors during TLS startup (Samuel Stauffer)
* Add OnPgError for easier centralized error handling (James Hartig)
# 5.5.1 (December 9, 2023)
* Add CopyFromFunc helper function. (robford)
* Add PgConn.Deallocate method that uses PostgreSQL protocol Close message.
* pgx uses new PgConn.Deallocate method. This allows deallocating statements to work in a failed transaction. This fixes a case where the prepared statement map could become invalid.
* Fix: Prefer driver.Valuer over json.Marshaler for json fields. (Jacopo)
* Fix: simple protocol SQL sanitizer previously panicked if an invalid $0 placeholder was used. This now returns an error instead. (maksymnevajdev)
* Add pgtype.Numeric.ScanScientific (Eshton Robateau)
# 5.5.0 (November 4, 2023)
* Add CollectExactlyOneRow. (Julien GOTTELAND)
* Add OpenDBFromPool to create *database/sql.DB from *pgxpool.Pool. (Lev Zakharov)
* Prepare can automatically choose statement name based on sql. This makes it easier to explicitly manage prepared statements.
* Statement cache now uses deterministic, stable statement names.
* database/sql prepared statement names are deterministically generated.
* Fix: SendBatch wasn't respecting context cancellation.
* Fix: Timeout error from pipeline is now normalized.
* Fix: database/sql encoding json.RawMessage to []byte.
* CancelRequest: Wait for the cancel request to be acknowledged by the server. This should improve PgBouncer compatibility. (Anton Levakin)
* stdlib: Use Ping instead of CheckConn in ResetSession
* Add json.Marshaler and json.Unmarshaler for Float4, Float8 (Kirill Mironov)
# 5.4.3 (August 5, 2023)
* Fix: QCharArrayOID was defined with the wrong OID (Christoph Engelbert)
* Fix: connect_timeout for sslmode=allow|prefer (smaher-edb)
* Fix: pgxpool: background health check cannot overflow pool
* Fix: Check for nil in defer when sending batch (recover properly from panic)
* Fix: json scan of non-string pointer to pointer
* Fix: zeronull.Timestamptz should use pgtype.Timestamptz
* Fix: NewConnsCount was not correctly counting connections created by Acquire directly. (James Hartig)
* RowTo(AddrOf)StructByPos ignores fields with "-" db tag
* Optimization: improve text format numeric parsing (horpto)
# 5.4.2 (July 11, 2023)
* Fix: RowScanner errors are fatal to Rows
* Fix: Enable failover efforts when pg_hba.conf disallows non-ssl connections (Brandon Kauffman)
* Hstore text codec internal improvements (Evan Jones)
* Fix: Stop timers for background reader when not in use. Fixes memory leak when closing connections (Adrian-Stefan Mares)
* Fix: Stop background reader as soon as possible.
* Add PgConn.SyncConn(). This combined with the above fix makes it safe to directly use the underlying net.Conn.
# 5.4.1 (June 18, 2023)
* Fix: concurrency bug with pgtypeDefaultMap and simple protocol (Lev Zakharov)
* Add TxOptions.BeginQuery to allow overriding the default BEGIN query
# 5.4.0 (June 14, 2023)
* Replace platform specific syscalls for non-blocking IO with more traditional goroutines and deadlines. This returns to the v4 approach with some additional improvements and fixes. This restores the ability to use a pgx.Conn over an ssh.Conn as well as other non-TCP or Unix socket connections. In addition, it is a significantly simpler implementation that is less likely to have cross platform issues.
* Optimization: The default type registrations are now shared among all connections. This saves about 100KB of memory per connection. `pgtype.Type` and `pgtype.Codec` values are now required to be immutable after registration. This was already necessary in most cases but wasn't documented until now. (Lev Zakharov)
* Fix: Ensure pgxpool.Pool.QueryRow.Scan releases connection on panic
* CancelRequest: don't try to read the reply (Nicola Murino)
* Fix: correctly handle bool type aliases (Wichert Akkerman)
* Fix: pgconn.CancelRequest: Fix unix sockets: don't use RemoteAddr()
* Fix: pgx.Conn memory leak with prepared statement caching (Evan Jones)
* Add BeforeClose to pgxpool.Pool (Evan Cordell)
* Fix: various hstore fixes and optimizations (Evan Jones)
* Fix: RowToStructByPos with embedded unexported struct
* Support different bool string representations (Lev Zakharov)
* Fix: error when using BatchResults.Exec on a select that returns an error after some rows.
* Fix: pipelineBatchResults.Exec() not returning error from ResultReader
* Fix: pipeline batch results not closing pipeline when error occurs while reading directly from results instead of using
a callback.
* Fix: scanning a table type into a struct
* Fix: scan array of record to pointer to slice of struct
* Fix: handle null for json (Cemre Mengu)
* Batch Query callback is called even when there is an error
* Add RowTo(AddrOf)StructByNameLax (Audi P. Risa P)
# 5.3.1 (February 27, 2023)
* Fix: Support v4 and v5 stdlib in same program (Tomáš Procházka)
* Fix: sql.Scanner not being used in certain cases
* Add text format jsonpath support
* Fix: fake non-blocking read adaptive wait time
# 5.3.0 (February 11, 2023)
* Fix: json values work with sql.Scanner
* Fixed / improved error messages (Mark Chambers and Yevgeny Pats)
* Fix: support scan into single dimensional arrays
* Fix: MaxConnLifetimeJitter setting actually jitter (Ben Weintraub)
* Fix: driver.Value representation of bytea should be []byte not string
* Fix: better handling of unregistered OIDs
* CopyFrom can use query cache to avoid extra round trip to get OIDs (Alejandro Do Nascimento Mora)
* Fix: encode to json ignoring driver.Valuer
* Support sql.Scanner on renamed base type
* Fix: pgtype.Numeric text encoding of negative numbers (Mark Chambers)
* Fix: connect with multiple hostnames when one can't be resolved
* Upgrade puddle to remove dependency on uber/atomic and fix alignment issue on 32-bit platform
* Fix: scanning json column into **string
* Multiple reductions in memory allocations
* Fake non-blocking read adapts its max wait time
* Improve CopyFrom performance and reduce memory usage
* Fix: encode []any to array
* Fix: LoadType for composite with dropped attributes (Felix Röhrich)
* Support v4 and v5 stdlib in same program
* Fix: text format array decoding with string of "NULL"
* Prefer binary format for arrays
# 5.2.0 (December 5, 2022)
* `tracelog.TraceLog` implements the pgx.PrepareTracer interface. (Vitalii Solodilov)
* Optimize creating begin transaction SQL string (Petr Evdokimov and ksco)
* `Conn.LoadType` supports range and multirange types (Vitalii Solodilov)
* Fix scan `uint` and `uint64` `ScanNumeric`. This resolves a PostgreSQL `numeric` being incorrectly scanned into `uint` and `uint64`.
# 5.1.1 (November 17, 2022)
* Fix simple query sanitizer where query text contains a Unicode replacement character.
* Remove erroneous `name` argument from `DeallocateAll()`. Technically, this is a breaking change, but given that method was only added 5 days ago this change was accepted. (Bodo Kaiser)
# 5.1.0 (November 12, 2022)
* Update puddle to v2.1.2. This resolves a race condition and a deadlock in pgxpool.
* `QueryRewriter.RewriteQuery` now returns an error. Technically, this is a breaking change for any external implementers, but given the minimal likelihood that there are actually any external implementers this change was accepted.
* Expose `GetSSLPassword` support to pgx.
* Fix encode `ErrorResponse` unknown field handling. This would only affect pgproto3 being used directly as a proxy with a non-PostgreSQL server that included additional error fields.
* Fix date text format encoding with 5 digit years.
* Fix date values passed to a `sql.Scanner` as `string` instead of `time.Time`.
* DateCodec.DecodeValue can return `pgtype.InfinityModifier` instead of `string` for infinite values. This now matches the behavior of the timestamp types.
* Add domain type support to `Conn.LoadType()`.
* Add `RowToStructByName` and `RowToAddrOfStructByName`. (Pavlo Golub)
* Add `Conn.DeallocateAll()` to clear all prepared statements including the statement cache. (Bodo Kaiser)
# 5.0.4 (October 24, 2022)
* Fix: CollectOneRow prefers PostgreSQL error over pgx.ErrorNoRows
* Fix: some reflect Kind checks to first check for nil
* Bump golang.org/x/text dependency to placate snyk
* Fix: RowToStructByPos on structs with multiple anonymous sub-structs (Baptiste Fontaine)
* Fix: Exec checks if tx is closed
# 5.0.3 (October 14, 2022)
* Fix `driver.Valuer` handling edge cases that could cause infinite loop or crash
# v5.0.2 (October 8, 2022)
* Fix date encoding in text format to always use 2 digits for month and day
* Prefer driver.Valuer over wrap plans when encoding
* Fix scan to pointer to pointer to renamed type
* Allow scanning NULL even if PG and Go types are incompatible
# v5.0.1 (September 24, 2022)
* Fix 32-bit atomic usage
* Add MarshalJSON for Float8 (yogipristiawan)
* Add `[` and `]` to text encoding of `Lseg`
* Fix sqlScannerWrapper NULL handling
# v5.0.0 (September 17, 2022)
## Merged Packages
`github.com/jackc/pgtype`, `github.com/jackc/pgconn`, and `github.com/jackc/pgproto3` are now included in the main
`github.com/jackc/pgx` repository. Previously there was confusion as to where issues should be reported, additional
release work due to releasing multiple packages, and less clear changelogs.
## pgconn
`CommandTag` is now an opaque type instead of directly exposing an underlying `[]byte`.
The return value `ResultReader.Values()` is no longer safe to retain a reference to after a subsequent call to `NextRow()` or `Close()`.
`Trace()` method adds low level message tracing similar to the `PQtrace` function in `libpq`.
pgconn now uses non-blocking IO. This is a significant internal restructuring, but it should not cause any visible changes on its own. However, it is important in implementing other new features.
`CheckConn()` checks a connection's liveness by doing a non-blocking read. This can be used to detect database restarts or network interruptions without executing a query or a ping.
pgconn now supports pipeline mode.
`*PgConn.ReceiveResults` removed. Use pipeline mode instead.
`Timeout()` no longer considers `context.Canceled` as a timeout error. `context.DeadlineExceeded` still is considered a timeout error.
## pgxpool
`Connect` and `ConnectConfig` have been renamed to `New` and `NewWithConfig` respectively. The `LazyConnect` option has been removed. Pools always lazily connect.
## pgtype
The `pgtype` package has been significantly changed.
### NULL Representation
Previously, types had a `Status` field that could be `Undefined`, `Null`, or `Present`. This has been changed to a
`Valid` `bool` field to harmonize with how `database/sql` represents `NULL` and to make the zero value usable.
Previously, a type that implemented `driver.Valuer` would have the `Value` method called even on a nil pointer. All nils
whether typed or untyped now represent `NULL`.
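For illustration (this sketch is not part of the original changelog), the zero value now naturally represents `NULL`:
```go
package main

import (
	"fmt"

	"github.com/jackc/pgx/v5/pgtype"
)

func main() {
	var n pgtype.Int8    // zero value: Valid == false, i.e. SQL NULL
	fmt.Println(n.Valid) // false

	n = pgtype.Int8{Int64: 42, Valid: true} // a present value
	fmt.Println(n.Valid, n.Int64)           // true 42
}
```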
### Codec and Value Split
Previously, the type system combined decoding and encoding values with the value types. e.g. Type `Int8` both handled
encoding and decoding the PostgreSQL representation and acted as a value object. This caused some difficulties when
there was not an exact 1 to 1 relationship between the Go types and the PostgreSQL types. For example, scanning a
PostgreSQL binary `numeric` into a Go `float64` was awkward (see https://github.com/jackc/pgtype/issues/147). These
concepts have been separated. A `Codec` only has responsibility for encoding and decoding values. Value types are
generally defined by implementing an interface that a particular `Codec` understands (e.g. `PointScanner` and
`PointValuer` for the PostgreSQL `point` type).
### Array Types
All array types are now handled by `ArrayCodec` instead of using code generation for each new array type. This also
means that less common array types such as `point[]` are now supported. `Array[T]` supports PostgreSQL multi-dimensional
arrays.
### Composite Types
Composite types must be registered before use. `CompositeFields` may still be used to construct and destruct composite
values, but any type may now implement `CompositeIndexGetter` and `CompositeIndexScanner` to be used as a composite.
### Range Types
Range types are now handled with types `RangeCodec` and `Range[T]`. This allows additional user defined range types to
easily be handled. Multirange types are handled similarly with `MultirangeCodec` and `Multirange[T]`.
### pgxtype
`LoadDataType` moved to `*Conn` as `LoadType`.
### Bytea
The `Bytea` and `GenericBinary` types have been replaced. Use the following instead:
* `[]byte` - For normal usage directly use `[]byte`.
* `DriverBytes` - Uses driver memory only available until next database method call. Avoids a copy and an allocation.
* `PreallocBytes` - Uses preallocated byte slice to avoid an allocation.
* `UndecodedBytes` - Avoids any decoding. Allows working with raw bytes.
### Dropped lib/pq Support
`pgtype` previously supported and was tested against [lib/pq](https://github.com/lib/pq). While it will continue to work
in most cases this is no longer supported.
### database/sql Scan
Previously, most `Scan` implementations would convert `[]byte` to `string` automatically to decode a text value. Now
only `string` is handled. This is to allow the possibility of future binary support in `database/sql` mode by
considering `[]byte` to be binary format and `string` text format. This change should have no effect for any use with
`pgx`. The previous behavior was only necessary for `lib/pq` compatibility.
Added `*Map.SQLScanner` to create a `sql.Scanner` for types such as `[]int32` and `Range[T]` that do not implement
`sql.Scanner` directly.
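For example, a sketch of scanning a PostgreSQL array through `database/sql` (assuming `db` is a `*sql.DB` opened with the pgx stdlib driver):
```go
m := pgtype.NewMap()
var ids []int32
// []int32 does not implement sql.Scanner; Map.SQLScanner wraps it.
err := db.QueryRow(`select '{1,2,3}'::int4[]`).Scan(m.SQLScanner(&ids))
```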
### Number Type Fields Include Bit size
`Int2`, `Int4`, `Int8`, `Float4`, `Float8`, and `Uint32` fields now include bit size. e.g. `Int` is renamed to `Int64`.
This matches the convention set by `database/sql`. In addition, for comparable types like `pgtype.Int8` and
`sql.NullInt64` the structures are identical. This means they can be directly converted one to another.
### 3rd Party Type Integrations
* Extracted integrations with https://github.com/shopspring/decimal and https://github.com/gofrs/uuid to
https://github.com/jackc/pgx-shopspring-decimal and https://github.com/jackc/pgx-gofrs-uuid respectively. This trims
the pgx dependency tree.
### Other Changes
* `Bit` and `Varbit` are both replaced by the `Bits` type.
* `CID`, `OID`, `OIDValue`, and `XID` are replaced by the `Uint32` type.
* `Hstore` is now defined as `map[string]*string`.
* `JSON` and `JSONB` types removed. Use `[]byte` or `string` directly.
* `QChar` type removed. Use `rune` or `byte` directly.
* `Inet` and `Cidr` types removed. Use `netip.Addr` and `netip.Prefix` directly. These types are more memory efficient than the previous `net.IPNet`.
* `Macaddr` type removed. Use `net.HardwareAddr` directly.
* Renamed `pgtype.ConnInfo` to `pgtype.Map`.
* Renamed `pgtype.DataType` to `pgtype.Type`.
* Renamed `pgtype.None` to `pgtype.Finite`.
* `RegisterType` now accepts a `*Type` instead of `Type`.
* Assorted array helper methods and types made private.
## stdlib
* Removed `AcquireConn` and `ReleaseConn` as that functionality has been built in since Go 1.13.
## Reduced Memory Usage by Reusing Read Buffers
Previously, the connection read buffer would allocate large chunks of memory and never reuse them. This allowed
transferring ownership to anything such as scanned values without incurring an additional allocation and memory copy.
However, this came at the cost of overall increased memory allocation size. But worse it was also possible to pin large
chunks of memory by retaining a reference to a small value that originally came directly from the read buffer. Now
ownership remains with the read buffer and anything needing to retain a value must make a copy.
## Query Execution Modes
Control over automatic prepared statement caching and simple protocol use are now combined into query execution mode.
See documentation for `QueryExecMode`.
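A sketch of selecting a mode on the connection config (assuming a `ctx` and a `DATABASE_URL` environment variable):
```go
cfg, err := pgx.ParseConfig(os.Getenv("DATABASE_URL"))
if err != nil {
	return err
}
// For example, opt out of automatic statement preparation.
cfg.DefaultQueryExecMode = pgx.QueryExecModeSimpleProtocol
conn, err := pgx.ConnectConfig(ctx, cfg)
```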
## QueryRewriter Interface and NamedArgs
pgx now supports named arguments with the `NamedArgs` type. This is implemented via the new `QueryRewriter` interface which
allows arbitrary rewriting of query SQL and arguments.
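A sketch (assuming an established `conn` and `ctx`; `widgets` is an illustrative table):
```go
rows, err := conn.Query(ctx,
	"select id, name from widgets where weight >= @min_weight",
	pgx.NamedArgs{"min_weight": 10},
)
```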
## RowScanner Interface
The `RowScanner` interface allows a single argument to Rows.Scan to scan the entire row.
## Rows Result Helpers
* `CollectRows` and `RowTo*` functions simplify collecting results into a slice.
* `CollectOneRow` collects one row using `RowTo*` functions.
* `ForEachRow` simplifies scanning each row and executing code using the scanned values. `ForEachRow` replaces `QueryFunc`.
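For instance, collecting a single column into a slice might look like this (a sketch assuming an established `conn` and `ctx`):
```go
rows, err := conn.Query(ctx, "select name from widgets")
if err != nil {
	return err
}
// CollectRows reads all rows, closes them, and returns the collected slice.
names, err := pgx.CollectRows(rows, pgx.RowTo[string])
```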
## Tx Helpers
Rather than every type that implemented `Begin` or `BeginTx` methods also needing to implement `BeginFunc` and
`BeginTxFunc` these methods have been converted to functions that take a db that implements `Begin` or `BeginTx`.
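A sketch of the function form (assuming an established `conn` and `ctx`):
```go
// BeginFunc commits if the callback returns nil and rolls back otherwise.
err := pgx.BeginFunc(ctx, conn, func(tx pgx.Tx) error {
	_, err := tx.Exec(ctx, "insert into widgets(name) values($1)", "gear")
	return err
})
```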
## Improved Batch Query Ergonomics
Previously, the code for building a batch went in one place before the call to `SendBatch`, and the code for reading the
results went in one place after the call to `SendBatch`. This could make it difficult to match up the query and the code
to handle the results. Now `Queue` returns a `QueuedQuery` which has methods `Query`, `QueryRow`, and `Exec` which can
be used to register a callback function that will handle the result. Callback functions are called automatically when
`BatchResults.Close` is called.
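A sketch of the callback style (assuming an established `conn`, a `ctx`, and `pgconn` imported from `github.com/jackc/pgx/v5/pgconn`):
```go
batch := &pgx.Batch{}
batch.Queue("insert into widgets(name) values($1)", "lever").Exec(func(ct pgconn.CommandTag) error {
	// Runs when this queued query's result is read.
	return nil
})
batch.Queue("select count(*) from widgets").QueryRow(func(row pgx.Row) error {
	var n int64
	return row.Scan(&n)
})
// Callbacks fire as results are read; Close returns the first error, if any.
err := conn.SendBatch(ctx, batch).Close()
```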
## SendBatch Uses Pipeline Mode When Appropriate
Previously, a batch with 10 unique parameterized statements executed 100 times would entail 11 network round trips. 1
for each prepare / describe and 1 for executing them all. Now pipeline mode is used to prepare / describe all statements
in a single network round trip. So it would only take 2 round trips.
## Tracing and Logging
Internal logging support has been replaced with tracing hooks. This allows custom tracing integration with tools like OpenTelemetry. Package tracelog provides an adapter for pgx v4 loggers to act as a tracer.
All integrations with 3rd party loggers have been extracted to separate repositories. This trims the pgx dependency
tree.
# Contributing
## Discuss Significant Changes
Before you invest a significant amount of time on a change, please create a discussion or issue describing your
proposal. This will help to ensure your proposed change has a reasonable chance of being merged.
## Avoid Dependencies
Adding a dependency is a big deal. While on occasion a new dependency may be accepted, the default answer to any change
that adds a dependency is no.
## Development Environment Setup
pgx tests naturally require a PostgreSQL database. It will connect to the database specified in the `PGX_TEST_DATABASE`
environment variable. The `PGX_TEST_DATABASE` environment variable can either be a URL or key-value pairs. In addition,
the standard `PG*` environment variables will be respected. Consider using [direnv](https://github.com/direnv/direnv) to
simplify environment variable handling.
### Using an Existing PostgreSQL Cluster
If you already have a PostgreSQL development server this is the quickest way to start and run the majority of the pgx
test suite. Some tests will be skipped that require server configuration changes (e.g. those testing different
authentication methods).
Create and setup a test database:
```
export PGDATABASE=pgx_test
createdb
psql -c 'create extension hstore;'
psql -c 'create extension ltree;'
psql -c 'create domain uint64 as numeric(20,0);'
```
Ensure a `postgres` user exists. This happens by default in normal PostgreSQL installs, but some installation methods
such as Homebrew do not.
```
createuser -s postgres
```
Ensure your `PGX_TEST_DATABASE` environment variable points to the database you just created and run the tests.
```
export PGX_TEST_DATABASE="host=/private/tmp database=pgx_test"
go test ./...
```
This will run the vast majority of the tests, but some tests will be skipped (e.g. those testing different connection methods).
### Creating a New PostgreSQL Cluster Exclusively for Testing
The following environment variables need to be set both for initial setup and whenever the tests are run. (direnv is
highly recommended). Depending on your platform, you may need to change the host for `PGX_TEST_UNIX_SOCKET_CONN_STRING`.
```
export PGPORT=5015
export PGUSER=postgres
export PGDATABASE=pgx_test
export POSTGRESQL_DATA_DIR=postgresql
export PGX_TEST_DATABASE="host=127.0.0.1 database=pgx_test user=pgx_md5 password=secret"
export PGX_TEST_UNIX_SOCKET_CONN_STRING="host=/private/tmp database=pgx_test"
export PGX_TEST_TCP_CONN_STRING="host=127.0.0.1 database=pgx_test user=pgx_md5 password=secret"
export PGX_TEST_SCRAM_PASSWORD_CONN_STRING="host=127.0.0.1 user=pgx_scram password=secret database=pgx_test"
export PGX_TEST_MD5_PASSWORD_CONN_STRING="host=127.0.0.1 database=pgx_test user=pgx_md5 password=secret"
export PGX_TEST_PLAIN_PASSWORD_CONN_STRING="host=127.0.0.1 user=pgx_pw password=secret"
export PGX_TEST_TLS_CONN_STRING="host=localhost user=pgx_ssl password=secret sslmode=verify-full sslrootcert=`pwd`/.testdb/ca.pem"
export PGX_SSL_PASSWORD=certpw
export PGX_TEST_TLS_CLIENT_CONN_STRING="host=localhost user=pgx_sslcert sslmode=verify-full sslrootcert=`pwd`/.testdb/ca.pem database=pgx_test sslcert=`pwd`/.testdb/pgx_sslcert.crt sslkey=`pwd`/.testdb/pgx_sslcert.key"
```
Create a new database cluster.
```
initdb --locale=en_US -E UTF-8 --username=postgres .testdb/$POSTGRESQL_DATA_DIR
echo "listen_addresses = '127.0.0.1'" >> .testdb/$POSTGRESQL_DATA_DIR/postgresql.conf
echo "port = $PGPORT" >> .testdb/$POSTGRESQL_DATA_DIR/postgresql.conf
cat testsetup/postgresql_ssl.conf >> .testdb/$POSTGRESQL_DATA_DIR/postgresql.conf
cp testsetup/pg_hba.conf .testdb/$POSTGRESQL_DATA_DIR/pg_hba.conf
cd .testdb
# Generate CA, server, and encrypted client certificates.
go run ../testsetup/generate_certs.go
# Copy certificates to server directory and set permissions.
cp ca.pem $POSTGRESQL_DATA_DIR/root.crt
cp localhost.key $POSTGRESQL_DATA_DIR/server.key
chmod 600 $POSTGRESQL_DATA_DIR/server.key
cp localhost.crt $POSTGRESQL_DATA_DIR/server.crt
cd ..
```
Start the new cluster. This will be necessary whenever you are running pgx tests.
```
postgres -D .testdb/$POSTGRESQL_DATA_DIR
```
Set up the test database in the new cluster.
```
createdb
psql --no-psqlrc -f testsetup/postgresql_setup.sql
```
### PgBouncer
There are tests specific for PgBouncer that will be executed if `PGX_TEST_PGBOUNCER_CONN_STRING` is set.
### Optional Tests
pgx supports multiple connection types and means of authentication. These tests are optional. They will only run if the
appropriate environment variables are set. In addition, there may be tests specific to particular PostgreSQL versions,
non-PostgreSQL servers (e.g. CockroachDB), or connection poolers (e.g. PgBouncer). Run `go test ./... -v | grep SKIP` to see
if any tests are being skipped.
[](https://pkg.go.dev/github.com/jackc/pgx/v5)
[](https://github.com/jackc/pgx/actions/workflows/ci.yml)
# pgx - PostgreSQL Driver and Toolkit
pgx is a pure Go driver and toolkit for PostgreSQL.
The pgx driver is a low-level, high performance interface that exposes PostgreSQL-specific features such as `LISTEN` /
`NOTIFY` and `COPY`. It also includes an adapter for the standard `database/sql` interface.
The toolkit component is a related set of packages that implement PostgreSQL functionality such as parsing the wire protocol
and type mapping between PostgreSQL and Go. These underlying packages can be used to implement alternative drivers,
proxies, load balancers, logical replication clients, etc.
## Example Usage
```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v5"
)

func main() {
	// urlExample := "postgres://username:password@localhost:5432/database_name"
	conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close(context.Background())

	var name string
	var weight int64
	err = conn.QueryRow(context.Background(), "select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
	if err != nil {
		fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err)
		os.Exit(1)
	}

	fmt.Println(name, weight)
}
```
See the [getting started guide](https://github.com/jackc/pgx/wiki/Getting-started-with-pgx) for more information.
## Features
* Support for approximately 70 different PostgreSQL types
* Automatic statement preparation and caching
* Batch queries
* Single-round trip query mode
* Full TLS connection control
* Binary format support for custom types (allows for much quicker encoding/decoding)
* `COPY` protocol support for faster bulk data loads
* Tracing and logging support
* Connection pool with after-connect hook for arbitrary connection setup
* `LISTEN` / `NOTIFY`
* Conversion of PostgreSQL arrays to Go slice mappings for integers, floats, and strings
* `hstore` support
* `json` and `jsonb` support
* Maps `inet` and `cidr` PostgreSQL types to `netip.Addr` and `netip.Prefix`
* Large object support
* NULL mapping to pointer to pointer
* Supports `database/sql.Scanner` and `database/sql/driver.Valuer` interfaces for custom types
* Notice response handling
* Simulated nested transactions with savepoints
## Choosing Between the pgx and database/sql Interfaces
The pgx interface is faster. Many PostgreSQL specific features such as `LISTEN` / `NOTIFY` and `COPY` are not available
through the `database/sql` interface.
The pgx interface is recommended when:
1. The application only targets PostgreSQL.
2. No other libraries that require `database/sql` are in use.
It is also possible to use the `database/sql` interface and convert a connection to the lower-level pgx interface as needed.
## Testing
See [CONTRIBUTING.md](./CONTRIBUTING.md) for setup instructions.
## Architecture
See the presentation at Golang Estonia, [PGX Top to Bottom](https://www.youtube.com/watch?v=sXMSWhcHCf8) for a description of pgx architecture.
## Supported Go and PostgreSQL Versions
pgx supports the same versions of Go and PostgreSQL that are supported by their respective teams. For [Go](https://golang.org/doc/devel/release.html#policy) that is the two most recent major releases and for [PostgreSQL](https://www.postgresql.org/support/versioning/) the major releases in the last 5 years. This means pgx supports Go 1.21 and higher and PostgreSQL 12 and higher. pgx also is tested against the latest version of [CockroachDB](https://www.cockroachlabs.com/product/).
## Version Policy
pgx follows semantic versioning for the documented public API on stable releases. `v5` is the latest stable major version.
## PGX Family Libraries
### [github.com/jackc/pglogrepl](https://github.com/jackc/pglogrepl)
pglogrepl provides functionality to act as a client for PostgreSQL logical replication.
### [github.com/jackc/pgmock](https://github.com/jackc/pgmock)
pgmock offers the ability to create a server that mocks the PostgreSQL wire protocol. This is used internally to test pgx by purposely inducing unusual errors. pgproto3 and pgmock together provide most of the foundational tooling required to implement a PostgreSQL proxy or MitM (such as for a custom connection pooler).
### [github.com/jackc/tern](https://github.com/jackc/tern)
tern is a stand-alone SQL migration system.
### [github.com/jackc/pgerrcode](https://github.com/jackc/pgerrcode)
pgerrcode contains constants for the PostgreSQL error codes.
## Adapters for 3rd Party Types
* [github.com/jackc/pgx-gofrs-uuid](https://github.com/jackc/pgx-gofrs-uuid)
* [github.com/jackc/pgx-shopspring-decimal](https://github.com/jackc/pgx-shopspring-decimal)
* [github.com/twpayne/pgx-geos](https://github.com/twpayne/pgx-geos) ([PostGIS](https://postgis.net/) and [GEOS](https://libgeos.org/) via [go-geos](https://github.com/twpayne/go-geos))
* [github.com/vgarvardt/pgx-google-uuid](https://github.com/vgarvardt/pgx-google-uuid)
## Adapters for 3rd Party Tracers
* [github.com/jackhopner/pgx-xray-tracer](https://github.com/jackhopner/pgx-xray-tracer)
## Adapters for 3rd Party Loggers
These adapters can be used with the tracelog package.
* [github.com/jackc/pgx-go-kit-log](https://github.com/jackc/pgx-go-kit-log)
* [github.com/jackc/pgx-log15](https://github.com/jackc/pgx-log15)
* [github.com/jackc/pgx-logrus](https://github.com/jackc/pgx-logrus)
* [github.com/jackc/pgx-zap](https://github.com/jackc/pgx-zap)
* [github.com/jackc/pgx-zerolog](https://github.com/jackc/pgx-zerolog)
* [github.com/mcosta74/pgx-slog](https://github.com/mcosta74/pgx-slog)
* [github.com/kataras/pgx-golog](https://github.com/kataras/pgx-golog)
## 3rd Party Libraries with PGX Support
### [github.com/pashagolub/pgxmock](https://github.com/pashagolub/pgxmock)
pgxmock is a mock library implementing pgx interfaces.
pgxmock has one and only purpose - to simulate pgx behavior in tests, without needing a real database connection.
### [github.com/georgysavva/scany](https://github.com/georgysavva/scany)
Library for scanning data from a database into Go structs and more.
### [github.com/vingarcia/ksql](https://github.com/vingarcia/ksql)
A carefully designed SQL client for making using SQL easier,
more productive, and less error-prone on Golang.
### [github.com/otan/gopgkrb5](https://github.com/otan/gopgkrb5)
Adds GSSAPI / Kerberos authentication support.
### [github.com/wcamarao/pmx](https://github.com/wcamarao/pmx)
Explicit data mapping and scanning library for Go structs and slices.
### [github.com/stephenafamo/scan](https://github.com/stephenafamo/scan)
Type safe and flexible package for scanning database data into Go types.
Supports, structs, maps, slices and custom mapping functions.
### [github.com/z0ne-dev/mgx](https://github.com/z0ne-dev/mgx)
Code first migration library for native pgx (no database/sql abstraction).
# 2.2.2 (September 10, 2024)
* Add empty acquire time to stats (Maxim Ivanov)
* Stop importing nanotime from runtime via linkname (maypok86)
# 2.2.1 (July 15, 2023)
* Fix: CreateResource cannot overflow pool. This changes documented behavior of CreateResource. Previously,
CreateResource could create a resource even if the pool was full. This could cause the pool to overflow. While this
was documented, it was documenting incorrect behavior. CreateResource now returns an error if the pool is full.
# 2.2.0 (February 11, 2023)
* Use Go 1.19 atomics and drop go.uber.org/atomic dependency
# 2.1.2 (November 12, 2022)
* Restore support to Go 1.18 via go.uber.org/atomic
# 2.1.1 (November 11, 2022)
* Fix create resource concurrently with Stat call race
# 2.1.0 (October 28, 2022)
* Concurrency control is now implemented with a semaphore. This simplifies some internal logic, resolves a few error conditions (including a deadlock), and improves performance. (Jan Dubsky)
* Go 1.19 is now required for the improved atomic support.
# 2.0.1 (October 28, 2022)
* Fix race condition when Close is called concurrently with multiple constructors
# 2.0.0 (September 17, 2022)
* Use generics instead of interface{} (Столяров Владимир Алексеевич)
* Add Reset
* Do not cancel resource construction when Acquire is canceled
* NewPool takes Config
# 1.3.0 (August 27, 2022)
* Acquire creates resources in background to allow creation to continue after Acquire is canceled (James Hartig)
# 1.2.1 (December 2, 2021)
* TryAcquire now does not block when background constructing resource
# 1.2.0 (November 20, 2021)
* Add TryAcquire (A. Jensen)
* Fix: remove memory leak / unintentionally pinned memory when shrinking slices (Alexander Staubo)
* Fix: Do not leave pool locked after panic from nil context
# 1.1.4 (September 11, 2021)
* Fix: Deadlock in CreateResource if pool was closed during resource acquisition (Dmitriy Matrenichev)
# 1.1.3 (December 3, 2020)
* Fix: Failed resource creation could cause concurrent Acquire to hang. (Evgeny Vanslov)
# 1.1.2 (September 26, 2020)
* Fix: Resource.Destroy no longer removes itself from the pool before its destructor has completed.
* Fix: Prevent crash when pool is closed while resource is being created.
# 1.1.1 (April 2, 2020)
* Pool.Close can be safely called multiple times
* AcquireAllIdle immediately returns nil if pool is closed
* CreateResource checks if pool is closed before taking any action
* Fix potential race condition when CreateResource and Close are called concurrently. CreateResource now checks if pool is closed before adding newly created resource to pool.
# 1.1.0 (February 5, 2020)
* Use runtime.nanotime for faster tracking of acquire time and last usage time.
* Track resource idle time to enable client health check logic. (Patrick Ellul)
* Add CreateResource to construct a new resource without acquiring it. (Patrick Ellul)
* Fix deadlock race when acquire is cancelled. (Michael Tharp)
[](https://pkg.go.dev/github.com/jackc/puddle/v2)

# Puddle
Puddle is a tiny generic resource pool library for Go that uses the standard
context library to signal cancellation of acquires. It is designed to contain
the minimum functionality required for a resource pool. It can be used directly
or it can be used as the base for a domain specific resource pool. For example,
a database connection pool may use puddle internally and implement health checks
and keep-alive behavior without needing to implement any concurrent code of its
own.
## Features
* Acquire cancellation via context standard library
* Statistics API for monitoring pool pressure
* No dependencies outside of standard library and golang.org/x/sync
* High performance
* 100% test coverage of reachable code
## Example Usage
```go
package main

import (
	"context"
	"log"
	"net"

	"github.com/jackc/puddle/v2"
)

func main() {
	constructor := func(context.Context) (net.Conn, error) {
		return net.Dial("tcp", "127.0.0.1:8080")
	}
	destructor := func(value net.Conn) {
		value.Close()
	}
	maxPoolSize := int32(10)

	pool, err := puddle.NewPool(&puddle.Config[net.Conn]{Constructor: constructor, Destructor: destructor, MaxSize: maxPoolSize})
	if err != nil {
		log.Fatal(err)
	}

	// Acquire resource from the pool.
	res, err := pool.Acquire(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	// Use resource.
	_, err = res.Value().Write([]byte{1})
	if err != nil {
		log.Fatal(err)
	}

	// Release when done.
	res.Release()
}
```
## Status
Puddle is stable and feature complete.
* Bug reports and fixes are welcome.
* New features will usually not be accepted if they can be feasibly implemented in a wrapper.
* Performance optimizations will usually not be accepted unless the performance issue rises to the level of a bug.
## Supported Go Versions
puddle supports the same versions of Go that are supported by the Go project. For [Go](https://golang.org/doc/devel/release.html#policy) that is the two most recent major releases. This means puddle supports Go 1.19 and higher.
## License
MIT
# reflectx
The sqlx package has special reflect needs. In particular, it needs to:
* be able to map a name to a field
* understand embedded structs
* understand mapping names to fields by a particular tag
* user specified name -> field mapping functions
These behaviors mimic those of the standard library marshallers and also the
behavior of standard Go accessors.
The first two are amply taken care of by `reflect.Value.FieldByName`, and the third is
addressed by `reflect.Value.FieldByNameFunc`, but these don't quite understand struct
tags in the ways that are vital to most marshallers, and they are slow.
This reflectx package extends reflect to achieve these goals.
# types
The types package provides some useful types which implement the `sql.Scanner`
and `driver.Valuer` interfaces, suitable for use as scan and value targets with
database/sql.
# CCache
Generic version is on the way:
https://github.com/karlseguin/ccache/tree/generic
CCache is an LRU Cache, written in Go, focused on supporting high concurrency.
Lock contention on the list is reduced by:
* Introducing a window which limits the frequency that an item can get promoted
* Using a buffered channel to queue promotions for a single worker
* Garbage collecting within the same thread as the worker
Unless otherwise stated, all methods are thread-safe.
The non-generic version of this cache can be imported via `github.com/karlseguin/ccache/`.
## Configuration
Import and create a `Cache` instance:
```go
import (
	"github.com/karlseguin/ccache/v3"
)

// create a cache with string values
var cache = ccache.New(ccache.Configure[string]())
```
`Configure` exposes a chainable API:
```go
// creates a cache with int values
var cache = ccache.New(ccache.Configure[int]().MaxSize(1000).ItemsToPrune(100))
```
The most likely configuration options to tweak are:
* `MaxSize(int)` - the maximum size to store in the cache (default: 5000)
* `GetsPerPromote(int)` - the number of times an item is fetched before we promote it. For large caches with long TTLs, it normally isn't necessary to promote an item after every fetch (default: 3)
* `ItemsToPrune(int)` - the number of items to prune when we hit `MaxSize`. Freeing up more than 1 slot at a time improves performance (default: 500)
Configurations that change the internals of the cache, which aren't as likely to need tweaking:
* `Buckets` - ccache shards its internal map to provide a greater amount of concurrency. Must be a power of 2 (default: 16).
* `PromoteBuffer(int)` - the size of the buffer to use to queue promotions (default: 1024)
* `DeleteBuffer(int)` - the size of the buffer to use to queue deletions (default: 1024)
## Usage
Once the cache is set up, you can `Get`, `Set` and `Delete` items from it. A `Get` returns an `*Item`:
### Get
```go
item := cache.Get("user:4")
if item == nil {
	// handle
} else {
	user := item.Value()
}
```
The returned `*Item` exposes a number of methods:
* `Value() T` - the value cached
* `Expired() bool` - whether the item is expired or not
* `TTL() time.Duration` - the duration before the item expires (will be a negative value for expired items)
* `Expires() time.Time` - the time the item will expire
By returning expired items, CCache lets you decide if you want to serve stale content or not. For example, you might decide to serve up slightly stale content (< 30 seconds old) while re-fetching newer data in the background. You might also decide to serve up infinitely stale content if you're unable to get new data from your source.
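For example, a sketch of serving content that expired less than 30 seconds ago (`refreshUser` is a hypothetical background re-fetch, and `time` must be imported):
```go
func getUser(cache *ccache.Cache[string], key string) (string, bool) {
	item := cache.Get(key)
	// TTL is negative for expired items; accept up to 30s of staleness.
	if item == nil || item.TTL() < -30*time.Second {
		return "", false
	}
	if item.Expired() {
		go refreshUser(key) // hypothetical: fetch newer data in the background
	}
	return item.Value(), true
}
```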
### GetWithoutPromote
Same as `Get` but does not "promote" the value, which is to say it circumvents the "lru" aspect of this cache. Should only be used in limited cases, such as peeking at the value.
### Set
`Set` expects the key, value and ttl:
```go
cache.Set("user:4", user, time.Minute * 10)
```
### Fetch
There's also a `Fetch` which mixes a `Get` and a `Set`:
```go
item, err := cache.Fetch("user:4", time.Minute * 10, func() (*User, error) {
	// code to fetch the data in case of a miss
	// should return the data to cache and the error, if any
})
```
`Fetch` doesn't do anything fancy: it merely uses the public `Get` and `Set` functions. If you want more advanced behavior, such as using a singleflight to protect against thundering herd, support a callback that accepts the key, or returning expired items, you should implement that in your application.
### Delete
`Delete` expects the key to delete. It's ok to call `Delete` on a non-existent key:
```go
cache.Delete("user:4")
```
### DeletePrefix
`DeletePrefix` deletes all keys matching the provided prefix. Returns the number of keys removed.
### DeleteFunc
`DeleteFunc` deletes all items that the provided matches func evaluates to true. Returns the number of keys removed.
### ForEachFunc
`ForEachFunc` iterates through all keys and values in the map and passes them to the provided function. Iteration stops if the function returns false. Iteration order is random.
### Clear
`Clear` clears the cache. If the cache's gc is running, `Clear` waits for it to finish.
### Extend
The life of an item can be changed via the `Extend` method. This will change the expiry of the item by the specified duration relative to the current time.
### Replace
The value of an item can be updated to a new value without renewing the item's TTL or its position in the LRU:
```go
cache.Replace("user:4", user)
```
`Replace` returns true if the item existed (and thus was replaced). In the case where the key was not in the cache, the value *is not* inserted and false is returned.
### GetDropped
You can get the number of keys evicted due to memory pressure by calling `GetDropped`:
```go
dropped := cache.GetDropped()
```
The counter is reset on every call. If the cache's gc is running, `GetDropped` waits for it to finish; it's meant to be called asynchronously for statistics/monitoring purposes.
### Stop
The cache's background worker can be stopped by calling `Stop`. Once `Stop` is called
the cache should not be used (calls are likely to panic). Stop must be called in order to allow the garbage collector to reap the cache.
## Tracking
CCache supports a special tracking mode which is meant to be used in conjunction with other pieces of your code that maintain a long-lived reference to data.
When you configure your cache with `Track()`:
```go
cache = ccache.New(ccache.Configure[int]().Track())
```
The items retrieved via `TrackingGet` will not be eligible for purge until `Release` is called on them:
```go
item := cache.TrackingGet("user:4")
user := item.Value() //will be nil if "user:4" didn't exist in the cache
item.Release() //can be called even if item.Value() returned nil
```
In practice, `Release` wouldn't be called until later, at some other place in your code. `TrackingSet` can be used to set a value to be tracked.
There are a couple of reasons to use the tracking mode if other parts of your code also hold references to objects. First, if you're already going to hold a reference to these objects, there's really no reason not to have them in the cache - the memory is used up anyways.
More importantly, it helps ensure that your code returns consistent data. Without tracking, "user:4" might be purged, and a subsequent `Fetch` would reload the data. This can result in different versions of "user:4" being returned by different parts of your system.
## LayeredCache
CCache's `LayeredCache` stores and retrieves values by both a primary and secondary key. Deletion can happen against either the primary and secondary key, or the primary key only (removing all values that share the same primary key).
`LayeredCache` is useful for HTTP caching, when you want to purge all variations of a request.
`LayeredCache` takes the same configuration object as the main cache, exposes the same optional tracking capabilities, but exposes a slightly different API:
```go
cache := ccache.Layered(ccache.Configure[string]())
cache.Set("/users/goku", "type:json", "{value_to_cache}", time.Minute * 5)
cache.Set("/users/goku", "type:xml", "<value_to_cache>", time.Minute * 5)
json := cache.Get("/users/goku", "type:json")
xml := cache.Get("/users/goku", "type:xml")
cache.Delete("/users/goku", "type:json")
cache.Delete("/users/goku", "type:xml")
// OR
cache.DeleteAll("/users/goku")
```
# SecondaryCache
In some cases, when using a `LayeredCache`, it may be desirable to always be acting on the secondary portion of the cache entry. This could be the case where the primary key is used as a key elsewhere in your code. The `SecondaryCache` is retrieved with:
```go
cache := ccache.Layered(ccache.Configure[string]())
sCache := cache.GetOrCreateSecondaryCache("/users/goku")
sCache.Set("type:json", "{value_to_cache}", time.Minute * 5)
```
The semantics for interacting with the `SecondaryCache` are exactly the same as for a regular `Cache`. However, one difference is that `Get` will not return nil, but will return an empty 'cache' for a non-existent primary key.
## Size
By default, items added to a cache have a size of 1. This means that if you configure `MaxSize(10000)`, you'll be able to store 10000 items in the cache.
However, if the values you set into the cache have a method `Size() int64`, this size will be used. Note that ccache has an overhead of ~350 bytes per entry, which isn't taken into account. In other words, given a filled up cache, with `MaxSize(4096000)` and items that return a `Size() int64` of 2048, we can expect to find 2000 items (4096000/2048) taking a total space of 4796000 bytes.
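For example (a sketch; `CachedResponse` is an illustrative type):
```go
type CachedResponse struct {
	Body []byte
}

// Size makes ccache weigh this item by its payload length instead of 1.
func (r *CachedResponse) Size() int64 {
	return int64(len(r.Body))
}
```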
## Want Something Simpler?
For a simpler cache, check out [rcache](https://github.com/karlseguin/rcache).
# Finite State Entropy
This package provides Finite State Entropy encoding and decoding.
Finite State Entropy (also referenced as [tANS](https://en.wikipedia.org/wiki/Asymmetric_numeral_systems#tANS))
encoding provides a fast near-optimal symbol encoding/decoding
for byte blocks as implemented in [zstandard](https://github.com/facebook/zstd).
This can be used for compressing input with a lot of similar input values to the smallest number of bytes.
This does not perform any multi-byte [dictionary coding](https://en.wikipedia.org/wiki/Dictionary_coder) as LZ coders,
but it can be used as a secondary step to compressors (like Snappy) that does not do entropy encoding.
* [Godoc documentation](https://godoc.org/github.com/klauspost/compress/fse)
## News
* Feb 2018: First implementation released. Consider this beta software for now.
# Usage
This package provides a low level interface that allows compressing single independent blocks.
Each block is separate, and there is no built in integrity checks.
This means that the caller should keep track of block sizes and also do checksums if needed.
Compressing a block is done via the [`Compress`](https://godoc.org/github.com/klauspost/compress/fse#Compress) function.
You must provide input and will receive the output and maybe an error.
These error values can be returned:
| Error | Description |
|---------------------|-----------------------------------------------------------------------------|
| `<nil>` | Everything ok, output is returned |
| `ErrIncompressible` | Returned when input is judged to be too hard to compress |
| `ErrUseRLE` | Returned from the compressor when the input is a single byte value repeated |
| `(error)` | An internal error occurred. |
As can be seen above, there are errors that will be returned even under normal operation, so it is important to handle these.
To reduce allocations you can provide a [`Scratch`](https://godoc.org/github.com/klauspost/compress/fse#Scratch) object
that can be re-used for successive calls. Both compression and decompression accepts a `Scratch` object, and the same
object can be used for both.
Be aware, that when re-using a `Scratch` object that the *output* buffer is also re-used, so if you are still using this
you must set the `Out` field in the scratch to nil. The same buffer is used for compression and decompression output.
Decompressing is done by calling the [`Decompress`](https://godoc.org/github.com/klauspost/compress/fse#Decompress) function.
You must provide the output from the compression stage, at exactly the size you got back. If you receive an error back
your input was likely corrupted.
It is important to note that a successful decoding does *not* mean your output matches your original input.
There are no integrity checks, so relying on errors from the decompressor does not assure your data is valid.
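A minimal round trip sketch with a reused `Scratch`, handling the errors listed above:
```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/klauspost/compress/fse"
)

func main() {
	input := bytes.Repeat([]byte("abbccc"), 100)

	var s fse.Scratch
	comp, err := fse.Compress(input, &s)
	switch err {
	case fse.ErrIncompressible, fse.ErrUseRLE:
		log.Println("store uncompressed / as RLE:", err) // normal outcomes
		return
	case nil:
	default:
		log.Fatal(err)
	}

	// The scratch reuses its output buffer, so copy before reusing it.
	compCopy := append([]byte(nil), comp...)

	decomp, err := fse.Decompress(compCopy, &s)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(bytes.Equal(decomp, input)) // true
}
```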
For more detailed usage, see examples in the [godoc documentation](https://godoc.org/github.com/klauspost/compress/fse#pkg-examples).
# Performance
A lot of factors are affecting speed. Block sizes and compressibility of the material are primary factors.
All compression functions are currently only running on the calling goroutine so only one core will be used per block.
The compressor is significantly faster if symbols are kept as small as possible. The highest byte value of the input
is used to reduce some of the processing, so if all your input is above byte value 64 for instance, it may be
beneficial to transpose all your input values down by 64.
With moderate block sizes around 64k, speeds are typically 200MB/s per core for compression and
around 300MB/s for decompression.
The same hardware typically does Huffman (deflate) encoding at 125MB/s and decompression at 100MB/s.
# Plans
At one point, more internals will be exposed to facilitate more "expert" usage of the components.
A streaming interface is also likely to be implemented. Likely compatible with [FSE stream format](https://github.com/Cyan4973/FiniteStateEntropy/blob/dev/programs/fileio.c#L261).
# Contributing
Contributions are always welcome. Be aware that adding public functions will require good justification and breaking
changes will likely not be accepted. If in doubt open an issue before writing the PR.
Gzip Middleware
===============
This Go package wraps HTTP *server* handlers to transparently gzip the
response body for clients which support it.
For HTTP *clients* we provide a transport wrapper that will do gzip decompression
faster than what the standard library offers.
Both the client and server wrappers are fully compatible with other servers and clients.
This package is forked from the dead [nytimes/gziphandler](https://github.com/nytimes/gziphandler)
and extends functionality for it.
## Install
```bash
go get -u github.com/klauspost/compress
```
## Documentation
[](https://pkg.go.dev/github.com/klauspost/compress/gzhttp)
## Usage
There are 2 main parts, one for http servers and one for http clients.
### Client
The standard library automatically adds gzip compression to most requests
and handles decompression of the responses.
However, by wrapping the transport we are able to override this and provide
our own (faster) decompressor.
Wrapping is done on the Transport of the http client:
```Go
func ExampleTransport() {
	// Get an HTTP client.
	client := http.Client{
		// Wrap the transport:
		Transport: gzhttp.Transport(http.DefaultTransport),
	}

	resp, err := client.Get("https://google.com")
	if err != nil {
		return
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println("body:", string(body))
}
```
Speed compared to standard library `DefaultTransport` for an approximate 127KB JSON payload:
```
BenchmarkTransport
Single core:
BenchmarkTransport/gzhttp-32 1995 609791 ns/op 214.14 MB/s 10129 B/op 73 allocs/op
BenchmarkTransport/stdlib-32 1567 772161 ns/op 169.11 MB/s 53950 B/op 99 allocs/op
BenchmarkTransport/zstd-32 4579 238503 ns/op 547.51 MB/s 5775 B/op 69 allocs/op
Multi Core:
BenchmarkTransport/gzhttp-par-32 29113 36802 ns/op 3548.27 MB/s 11061 B/op 73 allocs/op
BenchmarkTransport/stdlib-par-32 16114 66442 ns/op 1965.38 MB/s 54971 B/op 99 allocs/op
BenchmarkTransport/zstd-par-32 90177 13110 ns/op 9960.83 MB/s 5361 B/op 67 allocs/op
```
This includes serving the http request, parsing the request and decompressing.
### Server
For the simplest usage call `GzipHandler` with any handler (an object which implements the
`http.Handler` interface), and it'll return a new handler which gzips the
response. For example:
```go
package main

import (
	"io"
	"net/http"

	"github.com/klauspost/compress/gzhttp"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		io.WriteString(w, "Hello, World")
	})

	http.Handle("/", gzhttp.GzipHandler(handler))
	http.ListenAndServe("0.0.0.0:8000", nil)
}
```
This will wrap a handler using the default options.
To specify custom options a reusable wrapper can be created that can be used to wrap
any number of handlers.
```Go
package main

import (
	"io"
	"log"
	"net/http"

	"github.com/klauspost/compress/gzhttp"
	"github.com/klauspost/compress/gzip"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		io.WriteString(w, "Hello, World")
	})

	// Create a reusable wrapper with custom options.
	wrapper, err := gzhttp.NewWrapper(gzhttp.MinSize(2000), gzhttp.CompressionLevel(gzip.BestSpeed))
	if err != nil {
		log.Fatalln(err)
	}

	http.Handle("/", wrapper(handler))
	http.ListenAndServe("0.0.0.0:8000", nil)
}
```
### Performance
Speed compared to [nytimes/gziphandler](https://github.com/nytimes/gziphandler) with default settings, 2KB, 20KB and 100KB:
```
λ benchcmp before.txt after.txt
benchmark old ns/op new ns/op delta
BenchmarkGzipHandler_S2k-32 51302 23679 -53.84%
BenchmarkGzipHandler_S20k-32 301426 156331 -48.14%
BenchmarkGzipHandler_S100k-32 1546203 818981 -47.03%
BenchmarkGzipHandler_P2k-32 3973 1522 -61.69%
BenchmarkGzipHandler_P20k-32 20319 9397 -53.75%
BenchmarkGzipHandler_P100k-32 96079 46361 -51.75%
benchmark old MB/s new MB/s speedup
BenchmarkGzipHandler_S2k-32 39.92 86.49 2.17x
BenchmarkGzipHandler_S20k-32 67.94 131.00 1.93x
BenchmarkGzipHandler_S100k-32 66.23 125.03 1.89x
BenchmarkGzipHandler_P2k-32 515.44 1345.31 2.61x
BenchmarkGzipHandler_P20k-32 1007.92 2179.47 2.16x
BenchmarkGzipHandler_P100k-32 1065.79 2208.75 2.07x
benchmark old allocs new allocs delta
BenchmarkGzipHandler_S2k-32 22 16 -27.27%
BenchmarkGzipHandler_S20k-32 25 19 -24.00%
BenchmarkGzipHandler_S100k-32 28 21 -25.00%
BenchmarkGzipHandler_P2k-32 22 16 -27.27%
BenchmarkGzipHandler_P20k-32 25 19 -24.00%
BenchmarkGzipHandler_P100k-32 27 21 -22.22%
benchmark old bytes new bytes delta
BenchmarkGzipHandler_S2k-32 8836 2980 -66.27%
BenchmarkGzipHandler_S20k-32 69034 20562 -70.21%
BenchmarkGzipHandler_S100k-32 356582 86682 -75.69%
BenchmarkGzipHandler_P2k-32 9062 2971 -67.21%
BenchmarkGzipHandler_P20k-32 67799 20051 -70.43%
BenchmarkGzipHandler_P100k-32 300972 83077 -72.40%
```
### Stateless compression
In cases where you expect to run many thousands of compressors concurrently,
but with very little activity, you can use stateless compression.
This is not intended for regular web servers serving individual requests.
Use `CompressionLevel(-3)` or `CompressionLevel(gzip.StatelessCompression)` to enable.
Consider adding a [`bufio.Writer`](https://golang.org/pkg/bufio/#NewWriterSize) with a small buffer.
See [more details on stateless compression](https://github.com/klauspost/compress#stateless-compression).
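A sketch, reusing the `handler` from the server examples above:
```Go
// Stateless compression keeps no per-connection state between writes.
wrapper, err := gzhttp.NewWrapper(
	gzhttp.CompressionLevel(gzip.StatelessCompression),
	gzhttp.MinSize(1024), // assumption: skip tiny responses
)
if err != nil {
	log.Fatalln(err)
}
http.Handle("/stateless", wrapper(handler))
```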
### Migrating from gziphandler
This package removes some of the extra constructors.
When replacing, this can be used to find a replacement.
* `GzipHandler(h)` -> `GzipHandler(h)` (keep as-is)
* `GzipHandlerWithOpts(opts...)` -> `NewWrapper(opts...)`
* `MustNewGzipLevelHandler(n)` -> `NewWrapper(CompressionLevel(n))`
* `NewGzipLevelAndMinSize(n, s)` -> `NewWrapper(CompressionLevel(n), MinSize(s))`
By default, some mime types will now be excluded.
To re-enable compression of all types, use the `ContentTypeFilter(gzhttp.CompressAllContentTypeFilter)` option.
### Range Requests
Ranged requests are not well supported with compression.
Therefore any request with a "Content-Range" header is not compressed.
To signify that range requests are not supported any "Accept-Ranges" header set is removed when data is compressed.
If you do not want this behavior use the `KeepAcceptRanges()` option.
### Flushing data
The wrapper supports the [http.Flusher](https://golang.org/pkg/net/http/#Flusher) interface.
The only caveat is that the writer may not yet have received enough bytes to determine if `MinSize`
has been reached. In this case it will assume that the minimum size has been reached.
If nothing has been written to the response writer, nothing will be flushed.
## BREACH mitigation
[BREACH](http://css.csail.mit.edu/6.858/2020/readings/breach.pdf) is a specialized attack where attacker controlled data
is injected alongside secret data in a response body. This can lead to sidechannel attacks, where observing the compressed response
size can reveal if there are overlaps between the secret data and the injected data.
For more information see https://breachattack.com/
It can be hard to judge if you are vulnerable to BREACH.
In general, if you do not include any user provided content in the response body you are safe,
but if you do, or you are in doubt, you can apply mitigations.
`gzhttp` can apply [Heal the Breach](https://ieeexplore.ieee.org/document/9754554), or improved content aware padding.
```Go
// RandomJitter adds 1->n random bytes to output based on checksum of payload.
// Specify the amount of input to buffer before applying jitter.
// This should cover the sensitive part of your response.
// This can be used to obfuscate the exact compressed size.
// Specifying 0 will use a buffer size of 64KB.
// 'paranoid' will use a slower hashing function, that MAY provide more safety.
// If a negative buffer is given, the amount of jitter will not be content dependent.
// This provides *less* security than applying content based jitter.
func RandomJitter(n, buffer int, paranoid bool) option
...
```
The jitter is added as a "Comment" field. This field has a 1 byte overhead, so actual extra size will be 2 -> n+1 (inclusive).
A good option would be to apply 32 random bytes, with default 64KB buffer: `gzhttp.RandomJitter(32, 0, false)`.
Note that flushing the data forces the padding to be applied, which means that only data before the flush is considered for content aware padding.
The *padding* in the comment is the text `Padding-Padding-Padding-Padding-Pad....`
The *length* is `1 + crc32c(payload) MOD n` or `1 + sha256(payload) MOD n` (paranoid), or just random from `crypto/rand` if buffer < 0.
### Paranoid?
The padding size is determined by the remainder of a CRC32 of the content.
Since the payload contains elements unknown to the attacker, there is no reason to believe they can derive any information
from this remainder, or predict it.
However, for those that feel uncomfortable with a CRC32 being used for this can enable "paranoid" mode which will use SHA256 for determining the padding.
The hashing itself is about 2 orders of magnitude slower, but in overall terms will maybe only reduce speed by 10%.
Paranoid mode has no effect if buffer is < 0 (non-content aware padding).
### Examples
Adding the option `gzhttp.RandomJitter(32, 50000, false)` will apply from 1 up to 32 bytes of random data to the output.
The number of bytes added depends on the content of the first 50000 bytes, or all of them if the output was less than that.
Adding the option `gzhttp.RandomJitter(32, -1, false)` will apply from 1 up to 32 bytes of random data to the output.
Each call will apply a random amount of jitter. This should be considered less secure than content based jitter.
This can be used if responses are very big, deterministic and the buffer size would be too big to cover where the mutation occurs.
## License
[Apache 2.0](LICENSE)
# Huff0 entropy compression
This package provides Huff0 encoding and decoding as used in zstd.
[Huff0](https://github.com/Cyan4973/FiniteStateEntropy#new-generation-entropy-coders) is
a Huffman codec designed for modern CPUs, featuring OoO (Out of Order) operations on multiple ALUs
(Arithmetic Logic Units), achieving extremely fast compression and decompression speeds.
This can be used for compressing input with a lot of similar input values to the smallest number of bytes.
This does not perform any multi-byte [dictionary coding](https://en.wikipedia.org/wiki/Dictionary_coder) as LZ coders,
but it can be used as a secondary step to compressors (like Snappy) that does not do entropy encoding.
* [Godoc documentation](https://godoc.org/github.com/klauspost/compress/huff0)
## News
This is used as part of the [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd) compression and decompression package.
This ensures that most functionality is well tested.
# Usage
This package provides a low level interface that allows compressing single independent blocks.
Each block is separate, and there is no built in integrity checks.
This means that the caller should keep track of block sizes and also do checksums if needed.
Compressing a block is done via the [`Compress1X`](https://godoc.org/github.com/klauspost/compress/huff0#Compress1X) and
[`Compress4X`](https://godoc.org/github.com/klauspost/compress/huff0#Compress4X) functions.
You must provide input and will receive the output and maybe an error.
These error values can be returned:
| Error | Description |
|---------------------|-----------------------------------------------------------------------------|
| `<nil>` | Everything ok, output is returned |
| `ErrIncompressible` | Returned when input is judged to be too hard to compress |
| `ErrUseRLE` | Returned from the compressor when the input is a single byte value repeated |
| `ErrTooBig` | Returned if the input block exceeds the maximum allowed size (128 KiB) |
| `(error)` | An internal error occurred. |
As can be seen above, there are errors that will be returned even under normal operation, so it is important to handle these.
To reduce allocations you can provide a [`Scratch`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch) object
that can be re-used for successive calls. Both compression and decompression accepts a `Scratch` object, and the same
object can be used for both.
Be aware, that when re-using a `Scratch` object that the *output* buffer is also re-used, so if you are still using this
you must set the `Out` field in the scratch to nil. The same buffer is used for compression and decompression output.
The `Scratch` object will retain state that allows to re-use previous tables for encoding and decoding.
## Tables and re-use
Huff0 allows for reusing tables from the previous block to save space if that is expected to give better/faster results.
The Scratch object allows you to set a [`ReusePolicy`](https://godoc.org/github.com/klauspost/compress/huff0#ReusePolicy)
that controls this behaviour. See the documentation for details. This can be altered between each block.
Do however note that this information is *not* stored in the output block and it is up to the users of the package to
record whether [`ReadTable`](https://godoc.org/github.com/klauspost/compress/huff0#ReadTable) should be called,
based on the boolean reported back from the CompressXX call.
If you want to store the table separate from the data, you can access them as `OutData` and `OutTable` on the
[`Scratch`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch) object.
## Decompressing
The first part of decoding is to initialize the decoding table through [`ReadTable`](https://godoc.org/github.com/klauspost/compress/huff0#ReadTable).
This will initialize the decoding tables.
You can supply the complete block to `ReadTable` and it will return the data part of the block
which can be given to the decompressor.
Decompressing is done by calling the [`Decompress1X`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch.Decompress1X)
or [`Decompress4X`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch.Decompress4X) function.
For concurrently decompressing content with a fixed table a stateless [`Decoder`](https://godoc.org/github.com/klauspost/compress/huff0#Decoder) can be requested which will remain correct as long as the scratch is unchanged. The capacity of the provided slice indicates the expected output size.
You must provide the output from the compression stage, at exactly the size you got back. If you receive an error back
your input was likely corrupted.
It is important to note that a successful decoding does *not* mean your output matches your original input.
There are no integrity checks, so relying on errors from the decompressor does not assure your data is valid.
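A minimal round trip sketch of the 1X variant:
```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/klauspost/compress/huff0"
)

func main() {
	input := bytes.Repeat([]byte("abcdabcdabcd0123"), 64)

	var s huff0.Scratch
	comp, _, err := huff0.Compress1X(input, &s)
	if err != nil {
		// ErrIncompressible and ErrUseRLE are normal outcomes to handle.
		log.Fatal(err)
	}

	// The table is stored at the start of the block; ReadTable consumes it.
	dec, remain, err := huff0.ReadTable(comp, nil)
	if err != nil {
		log.Fatal(err)
	}
	out, err := dec.Decompress1X(remain)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(bytes.Equal(out, input)) // true
}
```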
# Contributing
Contributions are always welcome. Be aware that adding public functions will require good justification and breaking
changes will likely not be accepted. If in doubt open an issue before writing the PR.
# S2 Compression
S2 is an extension of [Snappy](https://github.com/google/snappy).
S2 is aimed for high throughput, which is why it features concurrent compression for bigger payloads.
Decoding is compatible with Snappy compressed content, but content compressed with S2 cannot be decompressed by Snappy.
This means that S2 can seamlessly replace Snappy without converting compressed content.
S2 can produce Snappy compatible output, faster and better than Snappy.
If you want full benefit of the changes you should use s2 without Snappy compatibility.
S2 is designed to have high throughput on content that cannot be compressed.
This is important, so you don't have to worry about spending CPU cycles on already compressed data.
## Benefits over Snappy
* Better compression
* Adjustable compression (3 levels)
* Concurrent stream compression
* Faster decompression, even for Snappy compatible content
* Concurrent Snappy/S2 stream decompression
* Skip forward in compressed stream
* Random seeking with indexes
* Compatible with reading Snappy compressed content
* Smaller block size overhead on incompressible blocks
* Block concatenation
* Block Dictionary support
* Uncompressed stream mode
* Automatic stream size padding
* Snappy compatible block compression
## Drawbacks over Snappy
* Not optimized for 32 bit systems
* Streams use slightly more memory due to larger blocks and concurrency (configurable)
# Usage
Installation: `go get -u github.com/klauspost/compress/s2`
Full package documentation:
[![godoc][1]][2]
[1]: https://godoc.org/github.com/klauspost/compress?status.svg
[2]: https://godoc.org/github.com/klauspost/compress/s2
## Compression
```Go
func EncodeStream(src io.Reader, dst io.Writer) error {
enc := s2.NewWriter(dst)
_, err := io.Copy(enc, src)
if err != nil {
enc.Close()
return err
}
// Blocks until compression is done.
return enc.Close()
}
```
You should always call `enc.Close()`, otherwise you will leak resources and your encode will be incomplete.
For the best throughput, you should attempt to reuse the `Writer` using the `Reset()` method.
The Writer in S2 is always buffered, therefore `NewBufferedWriter` in Snappy can be replaced with `NewWriter` in S2.
It is possible to flush any buffered data using the `Flush()` method.
This will block until all data sent to the encoder has been written to the output.
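For illustration, a sketch of the `Reset` reuse pattern mentioned above; the paired `inputs`/`outputs` slices are a hypothetical setup, not part of the package:

```Go
import (
	"io"

	"github.com/klauspost/compress/s2"
)

// compressMany reuses one Writer for several independent streams.
// Reset allows reuse after Close, avoiding per-stream allocations.
func compressMany(inputs []io.Reader, outputs []io.Writer) error {
	enc := s2.NewWriter(nil)
	for i, in := range inputs {
		enc.Reset(outputs[i])
		if _, err := io.Copy(enc, in); err != nil {
			enc.Close()
			return err
		}
		if err := enc.Close(); err != nil {
			return err
		}
	}
	return nil
}
```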
S2 also supports the `io.ReaderFrom` interface, which will consume all input from a reader.
Finally, if you have a single block of data you would like to have encoded as a stream,
a slightly more efficient approach is to use the `EncodeBuffer` method.
This will take ownership of the buffer until the stream is closed.
```Go
func EncodeStream(src []byte, dst io.Writer) error {
enc := s2.NewWriter(dst)
// The encoder owns the buffer until Flush or Close is called.
	err := enc.EncodeBuffer(src)
if err != nil {
enc.Close()
return err
}
// Blocks until compression is done.
return enc.Close()
}
```
Each call to `EncodeBuffer` will result in discrete blocks being created without buffering,
so it should only be used a single time per stream.
If you need to write several blocks, you should use the regular io.Writer interface.
## Decompression
```Go
func DecodeStream(src io.Reader, dst io.Writer) error {
dec := s2.NewReader(src)
_, err := io.Copy(dst, dec)
return err
}
```
Similar to the Writer, a Reader can be reused using the `Reset` method.
For the best possible throughput, there is an `EncodeBuffer(buf []byte)` function available on the compression side.
However, it requires that the provided buffer isn't used after it is handed over to S2 and until the stream is flushed or closed.
For smaller data blocks, there is also a non-streaming interface: `Encode()`, `EncodeBetter()` and `Decode()`.
Do however note that these functions (similar to Snappy) do not provide validation of data,
so data corruption may be undetected. Stream encoding provides CRC checks of data.
It is possible to efficiently skip forward in a compressed stream using the `Skip()` method.
For big skips the decompressor is able to skip blocks without decompressing them.
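A sketch of forward skipping, assuming the goal is to discard the first `skip` uncompressed bytes and copy the remainder:

```Go
import (
	"io"

	"github.com/klauspost/compress/s2"
)

// copyFromOffset discards `skip` uncompressed bytes, then copies the rest.
// Whole blocks are skipped without being decompressed when possible.
func copyFromOffset(src io.Reader, dst io.Writer, skip int64) error {
	dec := s2.NewReader(src)
	if err := dec.Skip(skip); err != nil {
		return err
	}
	_, err := io.Copy(dst, dec)
	return err
}
```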
## Single Blocks
Similar to Snappy, S2 offers single block compression.
Blocks do not offer the same flexibility and safety as streams,
but may be preferable for very small payloads, less than 100K.
Using a simple `dst := s2.Encode(nil, src)` will compress `src` and return the compressed result.
It is possible to provide a destination buffer.
If the buffer has a capacity of `s2.MaxEncodedLen(len(src))` it will be used.
If not, a new one will be allocated.
Alternatively `EncodeBetter`/`EncodeBest` can also be used for better, but slightly slower compression.
Similarly to decompress a block you can use `dst, err := s2.Decode(nil, src)`.
Again an optional destination buffer can be supplied.
The `s2.DecodedLen(src)` can be used to get the minimum capacity needed.
If that is not satisfied a new buffer will be allocated.
Block functions always operate on a single goroutine, since they should only be used for small payloads.
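Putting the block API together, a minimal round-trip sketch with preallocated destination buffers:

```Go
import "github.com/klauspost/compress/s2"

// roundTrip compresses src as a single block and decompresses it again.
func roundTrip(src []byte) ([]byte, error) {
	// A destination with capacity MaxEncodedLen avoids an allocation in Encode.
	comp := s2.Encode(make([]byte, 0, s2.MaxEncodedLen(len(src))), src)

	// DecodedLen reads the length header, so the output can be sized exactly.
	n, err := s2.DecodedLen(comp)
	if err != nil {
		return nil, err
	}
	return s2.Decode(make([]byte, 0, n), comp)
}
```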
# Commandline tools
Some very simple commandline tools are provided; `s2c` for compression and `s2d` for decompression.
Binaries can be downloaded on the [Releases Page](https://github.com/klauspost/compress/releases).
Installing them requires Go to be installed. To install them, use:
`go install github.com/klauspost/compress/s2/cmd/s2c@latest && go install github.com/klauspost/compress/s2/cmd/s2d@latest`
To build binaries to the current folder use:
`go build github.com/klauspost/compress/s2/cmd/s2c && go build github.com/klauspost/compress/s2/cmd/s2d`
## s2c
```
Usage: s2c [options] file1 file2
Compresses all files supplied as input separately.
Output files are written as 'filename.ext.s2' or 'filename.ext.snappy'.
By default output files will be overwritten.
Use - as the only file name to read from stdin and write to stdout.
Wildcards are accepted: testdir/*.txt will compress all files in testdir ending with .txt
Directories can be wildcards as well. testdir/*/*.txt will match testdir/subdir/b.txt
File names beginning with 'http://' and 'https://' will be downloaded and compressed.
Only http response code 200 is accepted.
Options:
-bench int
Run benchmark n times. No output will be written
-blocksize string
Max block size. Examples: 64K, 256K, 1M, 4M. Must be power of two and <= 4MB (default "4M")
-c Write all output to stdout. Multiple input files will be concatenated
-cpu int
Compress using this amount of threads (default 32)
-faster
Compress faster, but with a minor compression loss
-help
Display help
-index
Add seek index (default true)
-o string
Write output to another file. Single input file only
-pad string
Pad size to a multiple of this value, Examples: 500, 64K, 256K, 1M, 4M, etc (default "1")
-q Don't write any output to terminal, except errors
-rm
Delete source file(s) after successful compression
-safe
Do not overwrite output files
-slower
Compress more, but a lot slower
-snappy
Generate Snappy compatible output stream
-verify
Verify written files
```
## s2d
```
Usage: s2d [options] file1 file2
Decompresses all files supplied as input. Input files must end with '.s2' or '.snappy'.
Output file names have the extension removed. By default output files will be overwritten.
Use - as the only file name to read from stdin and write to stdout.
Wildcards are accepted: testdir/*.txt will compress all files in testdir ending with .txt
Directories can be wildcards as well. testdir/*/*.txt will match testdir/subdir/b.txt
File names beginning with 'http://' and 'https://' will be downloaded and decompressed.
Extensions on downloaded files are ignored. Only http response code 200 is accepted.
Options:
-bench int
Run benchmark n times. No output will be written
-c Write all output to stdout. Multiple input files will be concatenated
-help
Display help
-o string
Write output to another file. Single input file only
-offset string
Start at offset. Examples: 92, 64K, 256K, 1M, 4M. Requires Index
-q Don't write any output to terminal, except errors
-rm
Delete source file(s) after successful decompression
-safe
Do not overwrite output files
-tail string
Return last of compressed file. Examples: 92, 64K, 256K, 1M, 4M. Requires Index
-verify
Verify files, but do not write output
```
## s2sx: self-extracting archives
s2sx allows creating self-extracting archives with no dependencies.
By default, executables are created for the same platforms as the host os,
but this can be overridden with `-os` and `-arch` parameters.
Extracted files have 0666 permissions, except when the untar option is used.
```
Usage: s2sx [options] file1 file2
Compresses all files supplied as input separately.
If files have '.s2' extension they are assumed to be compressed already.
Output files are written as 'filename.s2sx' and with '.exe' for windows targets.
If output is big, an additional file with ".more" is written. This must be included as well.
By default output files will be overwritten.
Wildcards are accepted: testdir/*.txt will compress all files in testdir ending with .txt
Directories can be wildcards as well. testdir/*/*.txt will match testdir/subdir/b.txt
Options:
-arch string
Destination architecture (default "amd64")
-c Write all output to stdout. Multiple input files will be concatenated
-cpu int
Compress using this amount of threads (default 32)
-help
Display help
-max string
Maximum executable size. Rest will be written to another file. (default "1G")
-os string
Destination operating system (default "windows")
-q Don't write any output to terminal, except errors
-rm
Delete source file(s) after successful compression
-safe
Do not overwrite output files
-untar
Untar on destination
```
Available platforms are:
* darwin-amd64
* darwin-arm64
* linux-amd64
* linux-arm
* linux-arm64
* linux-mips64
* linux-ppc64le
* windows-386
* windows-amd64
By default, there is a size limit of 1GB for the output executable.
When this is exceeded the remaining file content is written to a file called
output+`.more`. This file must be included and placed alongside the executable
for a successful extraction.
This file *must* have the same name as the executable, so if the executable is renamed,
so must the `.more` file.
This functionality is disabled with stdin/stdout.
### Self-extracting TAR files
If you wrap a TAR file you can specify `-untar` to make it untar on the destination host.
Files are extracted to the current folder with the path specified in the tar file.
Note that tar files are not validated before they are wrapped.
For security reasons files that move below the root folder are not allowed.
# Performance
This section will focus on comparisons to Snappy.
This package is solely aimed at replacing Snappy as a high speed compression package.
If you are mainly looking for better compression [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd)
gives better compression, but typically at speeds slightly below "better" mode in this package.
Compression is increased compared to Snappy, mostly around 5-20% and the throughput is typically 25-40% increased (single threaded) compared to the Snappy Go implementation.
Streams are concurrently compressed. The stream will be distributed among all available CPU cores for the best possible throughput.
A "better" compression mode is also available. This allows to trade a bit of speed for a minor compression gain.
The content compressed in this mode is fully compatible with the standard decoder.
Snappy vs S2 **compression** speed on 16 core (32 thread) computer, using all threads and a single thread (1 CPU):
| File | S2 Speed | S2 Throughput | S2 % smaller | S2 "better" | "better" throughput | "better" % smaller |
|---------------------------------------------------------------------------------------------------------|----------|---------------|--------------|-------------|---------------------|--------------------|
| [rawstudio-mint14.tar](https://files.klauspost.com/compress/rawstudio-mint14.7z) | 16.33x | 10556 MB/s | 8.0% | 6.04x | 5252 MB/s | 14.7% |
| (1 CPU) | 1.08x | 940 MB/s | - | 0.46x | 400 MB/s | - |
| [github-june-2days-2019.json](https://files.klauspost.com/compress/github-june-2days-2019.json.zst) | 16.51x | 15224 MB/s | 31.70% | 9.47x | 8734 MB/s | 37.71% |
| (1 CPU) | 1.26x | 1157 MB/s | - | 0.60x | 556 MB/s | - |
| [github-ranks-backup.bin](https://files.klauspost.com/compress/github-ranks-backup.bin.zst) | 15.14x | 12598 MB/s | -5.76% | 6.23x | 5675 MB/s | 3.62% |
| (1 CPU) | 1.02x | 932 MB/s | - | 0.47x | 432 MB/s | - |
| [consensus.db.10gb](https://files.klauspost.com/compress/consensus.db.10gb.zst) | 11.21x | 12116 MB/s | 15.95% | 3.24x | 3500 MB/s | 18.00% |
| (1 CPU) | 1.05x | 1135 MB/s | - | 0.27x | 292 MB/s | - |
| [apache.log](https://files.klauspost.com/compress/apache.log.zst) | 8.55x | 16673 MB/s | 20.54% | 5.85x | 11420 MB/s | 24.97% |
| (1 CPU) | 1.91x | 1771 MB/s | - | 0.53x | 1041 MB/s | - |
| [gob-stream](https://files.klauspost.com/compress/gob-stream.7z) | 15.76x | 14357 MB/s | 24.01% | 8.67x | 7891 MB/s | 33.68% |
| (1 CPU) | 1.17x | 1064 MB/s | - | 0.65x | 595 MB/s | - |
| [10gb.tar](http://mattmahoney.net/dc/10gb.html) | 13.33x | 9835 MB/s | 2.34% | 6.85x | 4863 MB/s | 9.96% |
| (1 CPU) | 0.97x | 689 MB/s | - | 0.55x | 387 MB/s | - |
| sharnd.out.2gb | 9.11x | 13213 MB/s | 0.01% | 1.49x | 9184 MB/s | 0.01% |
| (1 CPU) | 0.88x | 5418 MB/s | - | 0.77x | 5417 MB/s | - |
| [sofia-air-quality-dataset csv](https://files.klauspost.com/compress/sofia-air-quality-dataset.tar.zst) | 22.00x | 11477 MB/s | 18.73% | 11.15x | 5817 MB/s | 27.88% |
| (1 CPU) | 1.23x | 642 MB/s | - | 0.71x | 642 MB/s | - |
| [silesia.tar](http://sun.aei.polsl.pl/~sdeor/corpus/silesia.zip) | 11.23x | 6520 MB/s | 5.9% | 5.35x | 3109 MB/s | 15.88% |
| (1 CPU) | 1.05x | 607 MB/s | - | 0.52x | 304 MB/s | - |
| [enwik9](https://files.klauspost.com/compress/enwik9.zst) | 19.28x | 8440 MB/s | 4.04% | 9.31x | 4076 MB/s | 18.04% |
| (1 CPU) | 1.12x | 488 MB/s | - | 0.57x | 250 MB/s | - |
### Legend
* `S2 Speed`: Speed of S2 compared to Snappy, using 16 cores and 1 core.
* `S2 Throughput`: Throughput of S2 in MB/s.
* `S2 % smaller`: How much smaller the S2 output is compared to Snappy, in percent.
* `S2 "better"`: Speed of S2 "better" mode compared to Snappy.
* `"better" throughput`: Throughput of S2 "better" mode in MB/s.
* `"better" % smaller`: How much smaller the S2 "better" output is compared to Snappy, in percent.
There is a good speedup across the board when using a single thread and a significant speedup when using multiple threads.
Machine generated data gets by far the biggest compression boost, with size being reduced by up to 35% of Snappy size.
The "better" compression mode sees a good improvement in all cases, but usually at a performance cost.
Incompressible content (`sharnd.out.2gb`, 2GB random data) sees the smallest speedup.
This is likely dominated by synchronization overhead, which is confirmed by the fact that single threaded performance is higher (see above).
## Decompression
S2 attempts to create content that is also fast to decompress, except in "better" mode where the smallest representation is used.
S2 vs Snappy **decompression** speed. Both operating on single core:
| File | S2 Throughput | vs. Snappy | Better Throughput | vs. Snappy |
|-----------------------------------------------------------------------------------------------------|---------------|------------|-------------------|------------|
| [rawstudio-mint14.tar](https://files.klauspost.com/compress/rawstudio-mint14.7z) | 2117 MB/s | 1.14x | 1738 MB/s | 0.94x |
| [github-june-2days-2019.json](https://files.klauspost.com/compress/github-june-2days-2019.json.zst) | 2401 MB/s | 1.25x | 2307 MB/s | 1.20x |
| [github-ranks-backup.bin](https://files.klauspost.com/compress/github-ranks-backup.bin.zst) | 2075 MB/s | 0.98x | 1764 MB/s | 0.83x |
| [consensus.db.10gb](https://files.klauspost.com/compress/consensus.db.10gb.zst) | 2967 MB/s | 1.05x | 2885 MB/s | 1.02x |
| [adresser.json](https://files.klauspost.com/compress/adresser.json.zst) | 4141 MB/s | 1.07x | 4184 MB/s | 1.08x |
| [gob-stream](https://files.klauspost.com/compress/gob-stream.7z) | 2264 MB/s | 1.12x | 2185 MB/s | 1.08x |
| [10gb.tar](http://mattmahoney.net/dc/10gb.html) | 1525 MB/s | 1.03x | 1347 MB/s | 0.91x |
| sharnd.out.2gb | 3813 MB/s | 0.79x | 3900 MB/s | 0.81x |
| [enwik9](http://mattmahoney.net/dc/textdata.html) | 1246 MB/s | 1.29x | 967 MB/s | 1.00x |
| [silesia.tar](http://sun.aei.polsl.pl/~sdeor/corpus/silesia.zip) | 1433 MB/s | 1.12x | 1203 MB/s | 0.94x |
| [enwik10](https://encode.su/threads/3315-enwik10-benchmark-results) | 1284 MB/s | 1.32x | 1010 MB/s | 1.04x |
### Legend
* `S2 Throughput`: Decompression speed of S2 encoded content.
* `Better Throughput`: Decompression speed of S2 "better" encoded content.
* `vs. Snappy`: Speed relative to Snappy decompressing the same data (shown for both regular and "better" mode).
While the decompression code hasn't changed, there is a significant speedup in decompression speed.
S2 prefers longer matches and will typically only find matches that are 6 bytes or longer.
While this reduces compression a bit, it improves decompression speed.
The "better" compression mode will actively look for shorter matches, which is why it has a decompression speed quite similar to Snappy.
Without assembly, decompression is also very fast. Single goroutine decompression speed, no assembly:
| File                           | vs. Snappy | S2 Throughput |
|--------------------------------|------------|---------------|
| consensus.db.10gb.s2 | 1.84x | 2289.8 MB/s |
| 10gb.tar.s2 | 1.30x | 867.07 MB/s |
| rawstudio-mint14.tar.s2 | 1.66x | 1329.65 MB/s |
| github-june-2days-2019.json.s2 | 2.36x | 1831.59 MB/s |
| github-ranks-backup.bin.s2 | 1.73x | 1390.7 MB/s |
| enwik9.s2 | 1.67x | 681.53 MB/s |
| adresser.json.s2 | 3.41x | 4230.53 MB/s |
| silesia.tar.s2                 | 1.52x      | 811.58 MB/s   |
Even though S2 typically compresses better than Snappy, decompression speed is always better.
### Concurrent Stream Decompression
For full stream decompression S2 offers a [DecodeConcurrent](https://pkg.go.dev/github.com/klauspost/compress/s2#Reader.DecodeConcurrent)
that will decode a full stream using multiple goroutines.
Example scaling, AMD Ryzen 3950X, 16 cores, decompression using `s2d -bench=3 <input>`, best of 3:
| Input | `-cpu=1` | `-cpu=2` | `-cpu=4` | `-cpu=8` | `-cpu=16` |
|-------------------------------------------|------------|------------|------------|------------|-------------|
| enwik10.snappy | 1098.6MB/s | 1819.8MB/s | 3625.6MB/s | 6910.6MB/s | 10818.2MB/s |
| enwik10.s2 | 1303.5MB/s | 2606.1MB/s | 4847.9MB/s | 8878.4MB/s | 9592.1MB/s |
| sofia-air-quality-dataset.tar.snappy | 1302.0MB/s | 2165.0MB/s | 4244.5MB/s | 8241.0MB/s | 12920.5MB/s |
| sofia-air-quality-dataset.tar.s2 | 1399.2MB/s | 2463.2MB/s | 5196.5MB/s | 9639.8MB/s | 11439.5MB/s |
| sofia-air-quality-dataset.tar.s2 (no asm) | 837.5MB/s | 1652.6MB/s | 3183.6MB/s | 5945.0MB/s | 9620.7MB/s |
Scaling can be expected to be pretty linear until memory bandwidth is saturated.
For now the DecodeConcurrent can only be used for full streams without seeking or combining with regular reads.
## Block compression
When compressing blocks, no concurrent compression is performed, just as with Snappy.
This is because blocks are for smaller payloads and generally will not benefit from concurrent compression.
An important change is that incompressible blocks will not be more than at most 10 bytes bigger than the input.
In rare, worst case scenario Snappy blocks could be significantly bigger than the input.
### Mixed content blocks
The most reliable benchmark is a wide dataset.
For this we use [`webdevdata.org-2015-01-07-subset`](https://files.klauspost.com/compress/webdevdata.org-2015-01-07-4GB-subset.7z),
53927 files, total input size: 4,014,735,833 bytes. Single goroutine used.
| * | Input | Output | Reduction | MB/s |
|-------------------|------------|------------|------------|------------|
| S2 | 4014735833 | 1059723369 | 73.60% | **936.73** |
| S2 Better | 4014735833 | 961580539 | 76.05% | 451.10 |
| S2 Best | 4014735833 | 899182886 | **77.60%** | 46.84 |
| Snappy | 4014735833 | 1128706759 | 71.89% | 790.15 |
| S2, Snappy Output | 4014735833 | 1093823291 | 72.75% | 936.60 |
| LZ4 | 4014735833 | 1063768713 | 73.50% | 452.02 |
S2 delivers both the best single threaded throughput with regular mode and the best compression rate with "best".
"Better" mode provides the same compression speed as LZ4 with better compression ratio.
When outputting Snappy compatible output it still delivers better throughput (150MB/s more) and better compression.
As can be seen from the other benchmarks decompression should also be easier on the S2 generated output.
Though they cannot be compared directly due to different decompression speeds, here are the speed/size comparisons for
other Go compressors:
| * | Input | Output | Reduction | MB/s |
|-------------------|------------|------------|-----------|--------|
| Zstd Fastest (Go) | 4014735833 | 794608518 | 80.21% | 236.04 |
| Zstd Best (Go) | 4014735833 | 704603356 | 82.45% | 35.63 |
| Deflate (Go) l1 | 4014735833 | 871294239 | 78.30% | 214.04 |
| Deflate (Go) l9 | 4014735833 | 730389060 | 81.81% | 41.17 |
### Standard block compression
Benchmarking single block performance is subject to a lot more variation since it only tests a limited number of file patterns.
So individual benchmarks should only be seen as a guideline and the overall picture is more important.
These micro-benchmarks are with data in cache and trained branch predictors. For a more realistic benchmark see the mixed content above.
Block compression. Parallel benchmark running on 16 cores, 16 goroutines.
AMD64 assembly is used for both S2 and Snappy.
| Absolute Perf | Snappy size | S2 Size | Snappy Speed | S2 Speed | Snappy dec | S2 dec |
|-----------------------|-------------|---------|--------------|-------------|-------------|-------------|
| html | 22843 | 20868 | 16246 MB/s | 18617 MB/s | 40972 MB/s | 49263 MB/s |
| urls.10K | 335492 | 286541 | 7943 MB/s | 10201 MB/s | 22523 MB/s | 26484 MB/s |
| fireworks.jpeg | 123034 | 123100 | 349544 MB/s | 303228 MB/s | 718321 MB/s | 827552 MB/s |
| fireworks.jpeg (200B) | 146 | 155 | 8869 MB/s | 20180 MB/s | 33691 MB/s | 52421 MB/s |
| paper-100k.pdf | 85304 | 84202 | 167546 MB/s | 112988 MB/s | 326905 MB/s | 291944 MB/s |
| html_x_4 | 92234 | 20870 | 15194 MB/s | 54457 MB/s | 30843 MB/s | 32217 MB/s |
| alice29.txt | 88034 | 85934 | 5936 MB/s | 6540 MB/s | 12882 MB/s | 20044 MB/s |
| asyoulik.txt | 77503 | 79575 | 5517 MB/s | 6657 MB/s | 12735 MB/s | 22806 MB/s |
| lcet10.txt | 234661 | 220383 | 6235 MB/s | 6303 MB/s | 14519 MB/s | 18697 MB/s |
| plrabn12.txt | 319267 | 318196 | 5159 MB/s | 6074 MB/s | 11923 MB/s | 19901 MB/s |
| geo.protodata | 23335 | 18606 | 21220 MB/s | 25432 MB/s | 56271 MB/s | 62540 MB/s |
| kppkn.gtb | 69526 | 65019 | 9732 MB/s | 8905 MB/s | 18491 MB/s | 18969 MB/s |
| alice29.txt (128B) | 80 | 82 | 6691 MB/s | 17179 MB/s | 31883 MB/s | 38874 MB/s |
| alice29.txt (1000B) | 774 | 774 | 12204 MB/s | 13273 MB/s | 48056 MB/s | 52341 MB/s |
| alice29.txt (10000B) | 6648 | 6933 | 10044 MB/s | 12824 MB/s | 32378 MB/s | 46322 MB/s |
| alice29.txt (20000B) | 12686 | 13516 | 7733 MB/s | 12160 MB/s | 30566 MB/s | 58969 MB/s |
Speed is generally at or above Snappy. Small blocks gets a significant speedup, although at the expense of size.
Decompression speed is better than Snappy, except in one case.
Since payloads are very small the variance in terms of size is rather big, so they should only be seen as a general guideline.
Size is on average around Snappy, but varies on content type.
In cases where compression is worse, it usually is compensated by a speed boost.
### Better compression
Benchmarking single block performance is subject to a lot more variation since it only tests a limited number of file patterns.
So individual benchmarks should only be seen as a guideline and the overall picture is more important.
| Absolute Perf | Snappy size | Better Size | Snappy Speed | Better Speed | Snappy dec | Better dec |
|-----------------------|-------------|-------------|--------------|--------------|-------------|-------------|
| html | 22843 | 18972 | 16246 MB/s | 8621 MB/s | 40972 MB/s | 40292 MB/s |
| urls.10K | 335492 | 248079 | 7943 MB/s | 5104 MB/s | 22523 MB/s | 20981 MB/s |
| fireworks.jpeg | 123034 | 123100 | 349544 MB/s | 84429 MB/s | 718321 MB/s | 823698 MB/s |
| fireworks.jpeg (200B) | 146 | 149 | 8869 MB/s | 7125 MB/s | 33691 MB/s | 30101 MB/s |
| paper-100k.pdf | 85304 | 82887 | 167546 MB/s | 11087 MB/s | 326905 MB/s | 198869 MB/s |
| html_x_4 | 92234 | 18982 | 15194 MB/s | 29316 MB/s | 30843 MB/s | 30937 MB/s |
| alice29.txt | 88034 | 71611 | 5936 MB/s | 3709 MB/s | 12882 MB/s | 16611 MB/s |
| asyoulik.txt | 77503 | 65941 | 5517 MB/s | 3380 MB/s | 12735 MB/s | 14975 MB/s |
| lcet10.txt | 234661 | 184939 | 6235 MB/s | 3537 MB/s | 14519 MB/s | 16634 MB/s |
| plrabn12.txt | 319267 | 264990 | 5159 MB/s | 2960 MB/s | 11923 MB/s | 13382 MB/s |
| geo.protodata | 23335 | 17689 | 21220 MB/s | 10859 MB/s | 56271 MB/s | 57961 MB/s |
| kppkn.gtb | 69526 | 55398 | 9732 MB/s | 5206 MB/s | 18491 MB/s | 16524 MB/s |
| alice29.txt (128B) | 80 | 78 | 6691 MB/s | 7422 MB/s | 31883 MB/s | 34225 MB/s |
| alice29.txt (1000B) | 774 | 746 | 12204 MB/s | 5734 MB/s | 48056 MB/s | 42068 MB/s |
| alice29.txt (10000B) | 6648 | 6218 | 10044 MB/s | 6055 MB/s | 32378 MB/s | 28813 MB/s |
| alice29.txt (20000B) | 12686 | 11492 | 7733 MB/s | 3143 MB/s | 30566 MB/s | 27315 MB/s |
Except for the mostly incompressible JPEG image, compression is better and usually in the
double digits in terms of percentage reduction over Snappy.
The PDF sample shows a significant slowdown compared to Snappy, as this mode tries harder
to compress the data. Very small blocks are also not favorable for better compression, so throughput is way down.
This mode aims to provide better compression at the expense of performance and achieves that
without a huge performance penalty, except on very small blocks.
Decompression speed suffers a little compared to the regular S2 mode,
but still manages to be close to Snappy in spite of increased compression.
# Best compression mode
S2 offers a "best" compression mode.
This will compress as much as possible with little regard to CPU usage.
Mainly for offline compression, but where decompression speed should still
be high and compatible with other S2 compressed data.
Some examples compared on 16 core CPU, amd64 assembly used:
```
* enwik10
Default... 10000000000 -> 4759950115 [47.60%]; 1.03s, 9263.0MB/s
Better... 10000000000 -> 4084706676 [40.85%]; 2.16s, 4415.4MB/s
Best... 10000000000 -> 3615520079 [36.16%]; 42.259s, 225.7MB/s
* github-june-2days-2019.json
Default... 6273951764 -> 1041700255 [16.60%]; 431ms, 13882.3MB/s
Better... 6273951764 -> 945841238 [15.08%]; 547ms, 10938.4MB/s
Best... 6273951764 -> 826392576 [13.17%]; 9.455s, 632.8MB/s
* nyc-taxi-data-10M.csv
Default... 3325605752 -> 1093516949 [32.88%]; 324ms, 9788.7MB/s
Better... 3325605752 -> 885394158 [26.62%]; 491ms, 6459.4MB/s
Best... 3325605752 -> 773681257 [23.26%]; 8.29s, 412.0MB/s
* 10gb.tar
Default... 10065157632 -> 5915541066 [58.77%]; 1.028s, 9337.4MB/s
Better... 10065157632 -> 5453844650 [54.19%]; 1.597s, 4862.7MB/s
Best... 10065157632 -> 5192495021 [51.59%]; 32.78s, 308.2MB/s
* consensus.db.10gb
Default... 10737418240 -> 4549762344 [42.37%]; 882ms, 12118.4MB/s
Better... 10737418240 -> 4438535064 [41.34%]; 1.533s, 3500.9MB/s
Best... 10737418240 -> 4210602774 [39.21%]; 42.96s, 254.4MB/s
```
Decompression speed should be around the same as using the 'better' compression mode.
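A sketch of using "best" mode for both streams and blocks; `WriterBestCompression()` is the writer option corresponding to `EncodeBest`:

```Go
import (
	"io"

	"github.com/klauspost/compress/s2"
)

// compressBest trades CPU time for the smallest S2 output.
// For single blocks, s2.EncodeBest(nil, src) is the equivalent call.
func compressBest(w io.Writer, src []byte) error {
	enc := s2.NewWriter(w, s2.WriterBestCompression())
	if _, err := enc.Write(src); err != nil {
		enc.Close()
		return err
	}
	return enc.Close()
}
```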
## Dictionaries
*Note: S2 dictionary compression is currently at an early implementation stage, with no assembly for
either encoding or decoding. Performance improvements can be expected in the future.*
Adding dictionaries allow providing a custom dictionary that will serve as lookup in the beginning of blocks.
The same dictionary *must* be used for both encoding and decoding.
S2 does not keep track of whether the same dictionary is used,
and using the wrong dictionary will most often not result in an error when decompressing.
Blocks encoded *without* dictionaries can be decompressed seamlessly *with* a dictionary.
This means it is possible to switch from an encoding without dictionaries to an encoding with dictionaries
and treat the blocks similarly.
Similar to [zStandard dictionaries](https://github.com/facebook/zstd#the-case-for-small-data-compression),
the same usage scenario applies to S2 dictionaries.
> Training works if there is some correlation in a family of small data samples. The more data-specific a dictionary is, the more efficient it is (there is no universal dictionary). Hence, deploying one dictionary per type of data will provide the greatest benefits. Dictionary gains are mostly effective in the first few KB. Then, the compression algorithm will gradually use previously decoded content to better compress the rest of the file.
S2 further limits the dictionary to only be enabled on the first 64KB of a block.
This will remove any negative (speed) impacts of the dictionaries on bigger blocks.
### Compression
Using the [github_users_sample_set](https://github.com/facebook/zstd/releases/download/v1.1.3/github_users_sample_set.tar.zst)
and a 64KB dictionary trained with zStandard, the following sizes can be achieved.
| | Default | Better | Best |
|--------------------|------------------|------------------|-----------------------|
| Without Dictionary | 3362023 (44.92%) | 3083163 (41.19%) | 3057944 (40.86%) |
| With Dictionary    | 921524 (12.31%)  | 873154 (11.67%)  | 785503 (10.49%)       |
So for highly repetitive content, this case provides an almost 3x reduction in size.
For less uniform data we will use the Go source code tree.
Compressing First 64KB of all `.go` files in `go/src`, Go 1.19.5, 8912 files, 51253563 bytes input:
| | Default | Better | Best |
|--------------------|-------------------|-------------------|-------------------|
| Without Dictionary | 22955767 (44.79%) | 20189613 (39.39%) | 19482828 (38.01%) |
| With Dictionary | 19654568 (38.35%) | 16289357 (31.78%) | 15184589 (29.63%) |
| Saving/file | 362 bytes | 428 bytes | 472 bytes |
### Creating Dictionaries
There are no tools to create dictionaries in S2.
However, there are multiple ways to create a useful dictionary:
#### Using a Sample File
If your input is very uniform, you can just use a sample file as the dictionary.
For example in the `github_users_sample_set` above, the average compression only goes up from
10.49% to 11.48% by using the first file as dictionary compared to using a dedicated dictionary.
```Go
// Read a sample
sample, err := os.ReadFile("sample.json")
// Create a dictionary.
dict := s2.MakeDict(sample, nil)
// b := dict.Bytes() will provide a dictionary that can be saved
// and reloaded with s2.NewDict(b).
// To encode:
encoded := dict.Encode(nil, file)
// To decode:
decoded, err := dict.Decode(nil, file)
```
#### Using Zstandard
Zstandard dictionaries can easily be converted to S2 dictionaries.
This can be helpful to generate dictionaries for files that don't have a fixed structure.
Example, with training set files placed in `./training-set`:
`λ zstd -r --train-fastcover training-set/* --maxdict=65536 -o name.dict`
This will create a dictionary of 64KB, that can be converted to a dictionary like this:
```Go
// Decode the Zstandard dictionary.
insp, err := zstd.InspectDictionary(zdict)
if err != nil {
panic(err)
}
// We are only interested in the contents.
// Assume that files start with "// Copyright (c) 2023".
// Search for the longest match for that.
// This may save a few bytes.
dict := s2.MakeDict(insp.Content(), []byte("// Copyright (c) 2023"))
// b := dict.Bytes() will provide a dictionary that can be saved
// and reloaded with s2.NewDict(b).
// We can now encode using this dictionary
encodedWithDict := dict.Encode(nil, payload)
// To decode content:
decoded, err := dict.Decode(nil, encodedWithDict)
```
It is recommended to save the dictionary returned by `b := dict.Bytes()`, since that will contain only the S2 dictionary.
This dictionary can later be loaded using `s2.NewDict(b)`. The dictionary then no longer requires `zstd` to be initialized.
Also note how `s2.MakeDict` allows you to search for a common starting sequence of your files.
This can be omitted, at the expense of a few bytes.
# Snappy Compatibility
S2 now offers full compatibility with Snappy.
This means that the efficient encoders of S2 can be used to generate fully Snappy compatible output.
There is a [snappy](https://github.com/klauspost/compress/tree/master/snappy) package that can be used by
simply changing imports from `github.com/golang/snappy` to `github.com/klauspost/compress/snappy`.
This uses "better" mode for all operations.
If you would like more control, you can use the s2 package as described below:
## Blocks
Snappy compatible blocks can be generated with the S2 encoder.
Compression and speed are typically a bit better, and `MaxEncodedLen` is also smaller, for lower memory usage. Replace:
| Snappy | S2 replacement |
|---------------------------|-----------------------|
| snappy.Encode(...) | s2.EncodeSnappy(...) |
| snappy.MaxEncodedLen(...) | s2.MaxEncodedLen(...) |
`s2.EncodeSnappy` can be replaced with `s2.EncodeSnappyBetter` or `s2.EncodeSnappyBest` to get more efficiently compressed snappy compatible output.
`s2.ConcatBlocks` is compatible with snappy blocks.
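For example, a small sketch producing a Snappy-compatible block:

```Go
import "github.com/klauspost/compress/s2"

// encodeForSnappy produces a block any Snappy decoder can read.
// Swap in EncodeSnappyBetter or EncodeSnappyBest for smaller output.
func encodeForSnappy(src []byte) []byte {
	return s2.EncodeSnappy(make([]byte, 0, s2.MaxEncodedLen(len(src))), src)
}
```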
Comparison of [`webdevdata.org-2015-01-07-subset`](https://files.klauspost.com/compress/webdevdata.org-2015-01-07-4GB-subset.7z),
53927 files, total input size: 4,014,735,833 bytes. amd64, single goroutine used:
| Encoder | Size | MB/s | Reduction |
|-----------------------|------------|------------|------------|
| snappy.Encode | 1128706759 | 725.59 | 71.89% |
| s2.EncodeSnappy | 1093823291 | **899.16** | 72.75% |
| s2.EncodeSnappyBetter | 1001158548 | 578.49 | 75.06% |
| s2.EncodeSnappyBest | 944507998 | 66.00 | **76.47%** |
## Streams
For streams, replace `enc = snappy.NewBufferedWriter(w)` with `enc = s2.NewWriter(w, s2.WriterSnappyCompat())`.
All other options are available, but note that block size limit is different for snappy.
Comparison of different streams, AMD Ryzen 3950x, 16 cores. Size and throughput:
| File | snappy.NewWriter | S2 Snappy | S2 Snappy, Better | S2 Snappy, Best |
|-----------------------------|--------------------------|---------------------------|--------------------------|-------------------------|
| nyc-taxi-data-10M.csv | 1316042016 - 539.47MB/s | 1307003093 - 10132.73MB/s | 1174534014 - 5002.44MB/s | 1115904679 - 177.97MB/s |
| enwik10 (xml) | 5088294643 - 451.13MB/s | 5175840939 - 9440.69MB/s | 4560784526 - 4487.21MB/s | 4340299103 - 158.92MB/s |
| 10gb.tar (mixed) | 6056946612 - 729.73MB/s | 6208571995 - 9978.05MB/s | 5741646126 - 4919.98MB/s | 5548973895 - 180.44MB/s |
| github-june-2days-2019.json | 1525176492 - 933.00MB/s | 1476519054 - 13150.12MB/s | 1400547532 - 5803.40MB/s | 1321887137 - 204.29MB/s |
| consensus.db.10gb (db) | 5412897703 - 1102.14MB/s | 5354073487 - 13562.91MB/s | 5335069899 - 5294.73MB/s | 5201000954 - 175.72MB/s |
# Decompression
All decompression functions map directly to equivalent s2 functions.
| Snappy | S2 replacement |
|------------------------|--------------------|
| snappy.Decode(...) | s2.Decode(...) |
| snappy.DecodedLen(...) | s2.DecodedLen(...) |
| snappy.NewReader(...) | s2.NewReader(...) |
Features like [quick forward skipping without decompression](https://pkg.go.dev/github.com/klauspost/compress/s2#Reader.Skip)
are also available for Snappy streams.
If you know you are only decompressing snappy streams, setting [`ReaderMaxBlockSize(64<<10)`](https://pkg.go.dev/github.com/klauspost/compress/s2#ReaderMaxBlockSize)
on your Reader will reduce memory consumption.
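A small sketch, assuming the input is known to contain only Snappy streams:

```Go
import (
	"io"

	"github.com/klauspost/compress/s2"
)

// newSnappyStreamReader limits block buffers to Snappy's 64KB maximum,
// reducing memory use when no S2 streams (with up to 4MB blocks) are expected.
func newSnappyStreamReader(r io.Reader) *s2.Reader {
	return s2.NewReader(r, s2.ReaderMaxBlockSize(64<<10))
}
```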
# Concatenating blocks and streams.
Concatenating streams will concatenate the output of both without recompressing them.
While this is inefficient in terms of compression it might be usable in certain scenarios.
The 10 byte 'stream identifier' of the second stream can optionally be stripped, but it is not a requirement.
Blocks can be concatenated using the `ConcatBlocks` function.
Snappy blocks/streams can safely be concatenated with S2 blocks and streams.
Streams with indexes (see below) will currently not work on concatenated streams.
# Stream Seek Index
S2 and Snappy streams can have indexes. These indexes will allow random seeking within the compressed data.
The index can either be appended to the stream as a skippable block or returned for separate storage.
When the index is appended to a stream it will be skipped by regular decoders,
so the output remains compatible with other decoders.
## Creating an Index
To automatically add an index to a stream, add `WriterAddIndex()` option to your writer.
Then the index will be added to the stream when `Close()` is called.
```
// Add Index to stream...
enc := s2.NewWriter(w, s2.WriterAddIndex())
io.Copy(enc, r)
enc.Close()
```
If you want to store the index separately, you can use `CloseIndex()` instead of the regular `Close()`.
This will return the index. Note that `CloseIndex()` should only be called once, and you shouldn't call `Close()`.
```
// Get index for separate storage...
enc := s2.NewWriter(w)
io.Copy(enc, r)
index, err := enc.CloseIndex()
```
The `index` can then be used without needing to read from the stream.
This means the index can be used without needing to seek to the end of the stream
or for manually forwarding streams. See below.
Finally, an existing S2/Snappy stream can be indexed using the `s2.IndexStream(r io.Reader)` function.
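A sketch of indexing an already-written stream after the fact; the file path is an illustrative assumption:

```Go
import (
	"os"

	"github.com/klauspost/compress/s2"
)

// indexExisting reads through an existing stream and returns a serialized
// index suitable for separate storage or for passing to ReadSeeker.
func indexExisting(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	return s2.IndexStream(f)
}
```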
## Using Indexes
To use indexes there is a `ReadSeeker(random bool, index []byte) (*ReadSeeker, error)` function available.
Calling ReadSeeker will return an [io.ReadSeeker](https://pkg.go.dev/io#ReadSeeker) compatible version of the reader.
If 'random' is specified the returned io.Seeker can be used for random seeking, otherwise only forward seeking is supported.
Enabling random seeking requires the original input to support the [io.Seeker](https://pkg.go.dev/io#Seeker) interface.
```
dec := s2.NewReader(r)
rs, err := dec.ReadSeeker(false, nil)
rs.Seek(wantOffset, io.SeekStart)
```
Get a seeker to seek forward. Since no index is provided, the index is read from the stream.
This requires that an index was added and that `r` supports the [io.Seeker](https://pkg.go.dev/io#Seeker) interface.
A custom index can be specified which will be used if supplied.
When using a custom index, it will not be read from the input stream.
```
dec := s2.NewReader(r)
rs, err := dec.ReadSeeker(false, index)
rs.Seek(wantOffset, io.SeekStart)
```
This will read the index from `index`. Since we specify non-random (forward-only) seeking, `r` does not have to be an io.Seeker.
```
dec := s2.NewReader(r)
rs, err := dec.ReadSeeker(true, index)
rs.Seek(wantOffset, io.SeekStart)
```
Finally, since we specify that we want to do random seeking, `r` must be an io.Seeker.
The returned [ReadSeeker](https://pkg.go.dev/github.com/klauspost/compress/s2#ReadSeeker) contains a shallow reference to the existing Reader,
meaning changes performed to one are reflected in the other.
To check if a stream contains an index at the end, the `(*Index).LoadStream(rs io.ReadSeeker) error` can be used.
## Manually Forwarding Streams
Indexes can also be read outside the decoder using the [Index](https://pkg.go.dev/github.com/klauspost/compress/s2#Index) type.
This can be used for parsing indexes, either separate or in streams.
In some cases it may not be possible to serve a seekable stream.
This can for instance be an HTTP stream, where the Range request
is sent at the start of the stream.
With a little bit of extra code it is still possible to use indexes
to forward to specific offset with a single forward skip.
It is possible to load the index manually like this:
```
var index s2.Index
_, err = index.Load(idxBytes)
```
This can be used to figure out how much to offset the compressed stream:
```
compressedOffset, uncompressedOffset, err := index.Find(wantOffset)
```
The `compressedOffset` is the number of bytes that should be skipped
from the beginning of the compressed file.
The `uncompressedOffset` will then be offset of the uncompressed bytes returned
when decoding from that position. This will always be <= wantOffset.
When creating a decoder it must be specified that it should *not* expect a stream identifier
at the beginning of the stream. Assuming the io.Reader `r` has been forwarded to `compressedOffset`
we create the decoder like this:
```
dec := s2.NewReader(r, s2.ReaderIgnoreStreamIdentifier())
```
We are not completely done. We still need to forward the stream past the uncompressed bytes we didn't want.
This is done using the regular "Skip" function:
```
err = dec.Skip(wantOffset - uncompressedOffset)
```
This will ensure that we are at exactly the offset we want, and reading from `dec` will start at the requested offset.
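Putting the steps above together, a sketch of the HTTP scenario; the Range handling and error paths are illustrative assumptions:

```Go
import (
	"fmt"
	"io"
	"net/http"

	"github.com/klauspost/compress/s2"
)

// openAtOffset returns a reader positioned at wantOffset of the uncompressed
// content, using a separately stored index and a single HTTP Range request.
// Closing the response body is left to the caller via the returned closer.
func openAtOffset(url string, idxBytes []byte, wantOffset int64) (io.Reader, io.Closer, error) {
	var index s2.Index
	if _, err := index.Load(idxBytes); err != nil {
		return nil, nil, err
	}
	compOff, uOff, err := index.Find(wantOffset)
	if err != nil {
		return nil, nil, err
	}
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, nil, err
	}
	// Let the server forward the compressed stream for us.
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-", compOff))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, nil, err
	}
	// The stream identifier was skipped along with the leading bytes.
	dec := s2.NewReader(resp.Body, s2.ReaderIgnoreStreamIdentifier())
	// Discard uncompressed bytes between the block start and the target offset.
	if err := dec.Skip(wantOffset - uOff); err != nil {
		resp.Body.Close()
		return nil, nil, err
	}
	return dec, resp.Body, nil
}
```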
# Compact storage
For compact storage [RemoveIndexHeaders](https://pkg.go.dev/github.com/klauspost/compress/s2#RemoveIndexHeaders) can be used to remove any redundant info from
a serialized index. If you remove the header it must be restored before [Loading](https://pkg.go.dev/github.com/klauspost/compress/s2#Index.Load).
This is expected to save 20 bytes. These can be restored using [RestoreIndexHeaders](https://pkg.go.dev/github.com/klauspost/compress/s2#RestoreIndexHeaders). This removes a layer of security, but is the most compact representation. Returns nil if the headers contain errors.
## Index Format:
Each block is structured as a snappy skippable block, with the chunk ID 0x99.
The block can be read from the front, but contains information so it can be read from the back as well.
Numbers are stored as fixed size little endian values or [zigzag encoded](https://developers.google.com/protocol-buffers/docs/encoding#signed_integers) [base 128 varints](https://developers.google.com/protocol-buffers/docs/encoding),
with un-encoded value length of 64 bits, unless other limits are specified.
| Content | Format |
|--------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| ID, `[1]byte` | Always 0x99. |
| Data Length, `[3]byte` | 3 byte little-endian length of the chunk in bytes, following this. |
| Header `[6]byte` | Header, must be `[115, 50, 105, 100, 120, 0]` or in text: "s2idx\x00". |
| UncompressedSize, Varint | Total Uncompressed size. |
| CompressedSize, Varint | Total Compressed size if known. Should be -1 if unknown. |
| EstBlockSize, Varint | Block Size, used for guessing uncompressed offsets. Must be >= 0. |
| Entries, Varint | Number of Entries in index, must be < 65536 and >=0. |
| HasUncompressedOffsets `byte` | 0 if no uncompressed offsets are present, 1 if present. Other values are invalid. |
| UncompressedOffsets, [Entries]VarInt | Uncompressed offsets. See below how to decode. |
| CompressedOffsets, [Entries]VarInt | Compressed offsets. See below how to decode. |
| Block Size, `[4]byte` | Little Endian total encoded size (including header and trailer). Can be used for searching backwards to start of block. |
| Trailer `[6]byte` | Trailer, must be `[0, 120, 100, 105, 50, 115]` or in text: "\x00xdi2s". Can be used for identifying block from end of stream. |
For regular streams the uncompressed offsets are fully predictable,
so `HasUncompressedOffsets` allows specifying that compressed blocks all have
exactly `EstBlockSize` bytes of uncompressed content.
Entries *must* be in order, starting with the lowest offset,
and there *must* be no uncompressed offset duplicates.
Entries *may* point to the start of a skippable block,
but it is then not allowed to also have an entry for the next block since
that would give an uncompressed offset duplicate.
There is no requirement for all blocks to be represented in the index.
In fact there is a maximum of 65536 block entries in an index.
The writer can use any method to reduce the number of entries.
An implicit block start at 0,0 can be assumed.
### Decoding entries:
```
// Read Uncompressed entries.
// Each assumes EstBlockSize delta from previous.
for each entry {
uOff = 0
if HasUncompressedOffsets == 1 {
uOff = ReadVarInt // Read value from stream
}
// Except for the first entry, use previous values.
if entryNum == 0 {
entry[entryNum].UncompressedOffset = uOff
continue
}
// Uncompressed uses previous offset and adds EstBlockSize
entry[entryNum].UncompressedOffset = entry[entryNum-1].UncompressedOffset + EstBlockSize + uOff
}
// Guess that the first block will be 50% of uncompressed size.
// Integer truncating division must be used.
CompressGuess := EstBlockSize / 2
// Read Compressed entries.
// Each assumes CompressGuess delta from previous.
// CompressGuess is adjusted for each value.
for each entry {
cOff = ReadVarInt // Read value from stream
// Except for the first entry, use previous values.
if entryNum == 0 {
entry[entryNum].CompressedOffset = cOff
continue
}
// Compressed uses previous and our estimate.
entry[entryNum].CompressedOffset = entry[entryNum-1].CompressedOffset + CompressGuess + cOff
// Adjust compressed offset for next loop, integer truncating division must be used.
CompressGuess += cOff/2
}
```
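For reference, here is the same decoding logic as a Go sketch, assuming the entry data has already been isolated and that `binary.Varint` matches the zigzag varint encoding described above:

```Go
import (
	"encoding/binary"
	"errors"
)

type indexEntry struct {
	UncompressedOffset int64
	CompressedOffset   int64
}

// decodeEntries expands the delta-coded offsets described above.
func decodeEntries(b []byte, entries int, estBlockSize int64, hasUOffs bool) ([]indexEntry, error) {
	readVarint := func() (int64, error) {
		v, n := binary.Varint(b) // zigzag-decoded signed varint
		if n <= 0 {
			return 0, errors.New("index: short or invalid varint")
		}
		b = b[n:]
		return v, nil
	}
	es := make([]indexEntry, entries)

	// Uncompressed offsets: each assumes an EstBlockSize delta from the previous.
	for i := range es {
		var uOff int64
		if hasUOffs {
			var err error
			if uOff, err = readVarint(); err != nil {
				return nil, err
			}
		}
		if i == 0 {
			es[i].UncompressedOffset = uOff
			continue
		}
		es[i].UncompressedOffset = es[i-1].UncompressedOffset + estBlockSize + uOff
	}

	// Compressed offsets: guess 50% of the uncompressed block size, then adapt.
	guess := estBlockSize / 2
	for i := range es {
		cOff, err := readVarint()
		if err != nil {
			return nil, err
		}
		if i == 0 {
			es[i].CompressedOffset = cOff
			continue
		}
		es[i].CompressedOffset = es[i-1].CompressedOffset + guess + cOff
		guess += cOff / 2 // integer truncating division
	}
	return es, nil
}
```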
To decode from any given uncompressed offset `(wantOffset)`:
* Iterate entries until `entry[n].UncompressedOffset > wantOffset`.
* Start decoding from `entry[n-1].CompressedOffset`.
* Discard `wantOffset - entry[n-1].UncompressedOffset` bytes from the decoded stream.
See [using indexes](https://github.com/klauspost/compress/tree/master/s2#using-indexes) for functions that perform the operations with a simpler interface.
# Format Extensions
* Frame [Stream identifier](https://github.com/google/snappy/blob/master/framing_format.txt#L68) changed from `sNaPpY` to `S2sTwO`.
* [Framed compressed blocks](https://github.com/google/snappy/blob/master/format_description.txt) can be up to 4MB (up from 64KB).
* Compressed blocks can have an offset of `0`, which indicates to repeat the last seen offset.
Repeat offsets must be encoded as a [2.2.1. Copy with 1-byte offset (01)](https://github.com/google/snappy/blob/master/format_description.txt#L89), where the offset is 0.
The length is specified by reading the 3-bit length specified in the tag and decoding it using this table:
| Length | Actual Length |
|--------|----------------------|
| 0 | 4 |
| 1 | 5 |
| 2 | 6 |
| 3 | 7 |
| 4 | 8 |
| 5 | 8 + read 1 byte |
| 6 | 260 + read 2 bytes |
| 7 | 65540 + read 3 bytes |
This allows any repeat offset + length to be represented by 2 to 5 bytes.
It also allows emitting matches longer than 64 bytes with one copy + one repeat instead of several 64 byte copies.
Lengths are stored as little endian values.
The first copy of a block cannot be a repeat offset and the offset is reset on every block in streams.
Default streaming block size is 1MB.
# Dictionary Encoding
Adding dictionaries allow providing a custom dictionary that will serve as lookup in the beginning of blocks.
A dictionary provides an initial repeat value that can be used to point to a common header.
Other than that the dictionary contains values that can be used as back-references.
Often used data should be placed at the *end* of the dictionary since offsets < 2048 bytes will be smaller.
## Format
Dictionary *content* must be at least 16 bytes and at most 64KiB (65536 bytes).
Encoding: `[repeat value (uvarint)][dictionary content...]`
Before the dictionary content, an unsigned base-128 (uvarint) encoded value specifies the initial repeat offset.
This value is an offset into the dictionary content and not a back-reference offset,
so setting this to 0 will make the repeat value point to the first value of the dictionary.
The value must be less than the dictionary length minus 8.
## Encoding
From the decoder point of view the dictionary content is seen as preceding the encoded content.
`[dictionary content][decoded output]`
Backreferences to the dictionary are encoded as ordinary backreferences that have an offset before the start of the decoded block.
Matches copying from the dictionary are **not** allowed to cross from the dictionary into the decoded data.
However, if a copy ends at the end of the dictionary the next repeat will point to the start of the decoded buffer, which is allowed.
The first match can be a repeat value, which will use the repeat offset stored in the dictionary.
When 64KB (65536 bytes) has been en/decoded it is no longer allowed to reference the dictionary,
neither by a copy nor repeat operations.
If the boundary is crossed while copying from the dictionary, the operation should complete,
but the next instruction is not allowed to reference the dictionary.
Valid blocks encoded *without* a dictionary can be decoded with any dictionary.
There are no checks whether the supplied dictionary is the correct one for a block.
Because of this there is no overhead to using a dictionary.
## Example
This is the dictionary content. Elements are separated by `[]`.
Dictionary: `[0x0a][Yesterday 25 bananas were added to Benjamins brown bag]`.
Initial repeat offset is set at 10, which is the character `2`.
Encoded `[LIT "10"][REPEAT len=10][LIT "hich"][MATCH off=50 len=6][MATCH off=31 len=6][MATCH off=61 len=10]`
Decoded: `[10][ bananas w][hich][ were ][brown ][were added]`
Output: `10 bananas which were brown were added`
## Streams
For streams, each block can use the dictionary.
The dictionary cannot currently be provided on the stream.
# LICENSE
This code is based on the [Snappy-Go](https://github.com/golang/snappy) implementation.
Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
# snappy
The Snappy compression format in the Go programming language.
This is a drop-in replacement for `github.com/golang/snappy`.
It provides a full, compatible replacement of the Snappy package by simply changing imports.
See [Snappy Compatibility](https://github.com/klauspost/compress/tree/master/s2#snappy-compatibility) in the S2 documentation.
"Better" compression mode is used. For buffered streams concurrent compression is used.
For more options use the [s2 package](https://pkg.go.dev/github.com/klauspost/compress/s2).
# usage
Replace imports `github.com/golang/snappy` with `github.com/klauspost/compress/snappy`.
# zstd
[Zstandard](https://facebook.github.io/zstd/) is a real-time compression algorithm, providing high compression ratios.
It offers a very wide range of compression / speed trade-off, while being backed by a very fast decoder.
A high performance compression algorithm is implemented. For now focused on speed.
This package provides [compression](#Compressor) to and [decompression](#Decompressor) of Zstandard content.
This package is pure Go and without use of "unsafe".
The `zstd` package is provided as open source software using a Go standard license.
Currently the package is heavily optimized for 64 bit processors and will be significantly slower on 32 bit processors.
For seekable zstd streams, see [this excellent package](https://github.com/SaveTheRbtz/zstd-seekable-format-go).
## Installation
Install using `go get -u github.com/klauspost/compress`. The package is located in `github.com/klauspost/compress/zstd`.
[](https://pkg.go.dev/github.com/klauspost/compress/zstd)
## Compressor
### Status:
STABLE - there may always be subtle bugs, a wide variety of content has been tested and the library is actively
used by several projects. This library is being [fuzz-tested](https://github.com/klauspost/compress-fuzz) for all updates.
There may still be specific combinations of data types/size/settings that could lead to edge cases,
so as always, testing is recommended.
For now, a high speed (fastest) and medium-fast (default) compressor has been implemented.
* The "Fastest" compression ratio is roughly equivalent to zstd level 1.
* The "Default" compression ratio is roughly equivalent to zstd level 3 (default).
* The "Better" compression ratio is roughly equivalent to zstd level 7.
* The "Best" compression ratio is roughly equivalent to zstd level 11.
In terms of speed, it is typically 2x as fast as the stdlib deflate/gzip in its fastest mode.
The compression ratio is comparable to stdlib at around level 3, but usually 3x as fast.
### Usage
An Encoder can be used for either compressing a stream via the
`io.WriteCloser` interface supported by the Encoder or as multiple independent
tasks via the `EncodeAll` function.
For smaller encodes, using the `EncodeAll` function is encouraged.
Use `NewWriter` to create a new instance that can be used for both.
To create a writer with default options, do like this:
```Go
// Compress input to output.
func Compress(in io.Reader, out io.Writer) error {
enc, err := zstd.NewWriter(out)
if err != nil {
return err
}
_, err = io.Copy(enc, in)
if err != nil {
enc.Close()
return err
}
return enc.Close()
}
```
Now you can encode by writing data to `enc`. The output will be finished writing when `Close()` is called.
Even if your encode fails, you should still call `Close()` to release any resources that may be held up.
The above is fine for big encodes. However, whenever possible try to *reuse* the writer.
To reuse the encoder, you can use the `Reset(io.Writer)` function to change to another output.
This will allow the encoder to reuse all resources and avoid wasteful allocations.
Currently stream encoding has 'light' concurrency, meaning up to 2 goroutines can be working on part
of a stream. This is independent of the `WithEncoderConcurrency(n)`, but that is likely to change
in the future. So if you want to limit concurrency for future updates, specify the concurrency
you would like.
If you would like stream encoding to be done without spawning async goroutines, use `WithEncoderConcurrency(1)`
which will compress input as each block is completed, blocking on writes until each has completed.
You can specify your desired compression level using `WithEncoderLevel()` option. Currently only pre-defined
compression settings can be specified.
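For example, a sketch of a writer with an explicit level and capped concurrency, so future default changes cannot affect it:

```Go
import (
	"io"

	"github.com/klauspost/compress/zstd"
)

// newTunedWriter pins both the compression level and the concurrency.
func newTunedWriter(w io.Writer) (*zstd.Encoder, error) {
	return zstd.NewWriter(w,
		zstd.WithEncoderLevel(zstd.SpeedBetterCompression),
		zstd.WithEncoderConcurrency(2),
	)
}
```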
#### Future Compatibility Guarantees
This will be an evolving project. When using this package it is important to note that both the compression efficiency and speed may change.
The goal will be to keep the default efficiency at the default zstd (level 3).
However the encoding should never be assumed to remain the same,
and you should not use hashes of compressed output for similarity checks.
The Encoder can be assumed to produce the same output from the exact same code version.
However, there may be modes in the future that break this,
although they will not be enabled without an explicit option.
This encoder is not designed to (and will probably never) output the exact same bitstream as the reference encoder.
Also note, that the cgo decompressor currently does not [report all errors on invalid input](https://github.com/DataDog/zstd/issues/59),
[omits error checks](https://github.com/DataDog/zstd/issues/61), [ignores checksums](https://github.com/DataDog/zstd/issues/43)
and seems to ignore concatenated streams, even though [it is part of the spec](https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md#frames).
#### Blocks
For compressing small blocks, the returned encoder has a function called `EncodeAll(src, dst []byte) []byte`.
`EncodeAll` will encode all input in src and append it to dst.
This function can be called concurrently.
Each call will only run on the same goroutine as the caller.
Encoded blocks can be concatenated and the result will be the combined input stream.
Data compressed with EncodeAll can be decoded with the Decoder, using either a stream or `DecodeAll`.
Especially when encoding blocks you should take special care to reuse the encoder.
This will effectively make it run without allocations after a warmup period.
To make it run completely without allocations, supply a destination buffer with space for all content.
```Go
import "github.com/klauspost/compress/zstd"
// Create a writer that caches compressors.
// For this operation type we supply a nil Reader.
var encoder, _ = zstd.NewWriter(nil)
// Compress a buffer.
// If you have a destination buffer, the allocation in the call can also be eliminated.
func Compress(src []byte) []byte {
return encoder.EncodeAll(src, make([]byte, 0, len(src)))
}
```
You can control the maximum number of concurrent encodes using the `WithEncoderConcurrency(n)`
option when creating the writer.
Using the Encoder for both a stream and individual blocks concurrently is safe.
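The decode side can mirror this pattern; a sketch of a shared decoder for concurrent block decodes:

```Go
import "github.com/klauspost/compress/zstd"

// Create a reader that caches decompressors.
// For this operation type we supply a nil Reader.
var decoder, _ = zstd.NewReader(nil)

// Decompress a block. This function can be called concurrently.
func Decompress(src []byte) ([]byte, error) {
	return decoder.DecodeAll(src, nil)
}
```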
### Performance
I have collected some speed examples to compare speed and compression against other compressors.
* `file` is the input file.
* `out` is the compressor used. `zskp` is this package. `zstd` is the Datadog cgo library. `gzstd/gzkp` is gzip standard and this library.
* `level` is the compression level used. For `zskp` level 1 is "fastest", level 2 is "default"; 3 is "better", 4 is "best".
* `insize`/`outsize` is the input/output size.
* `millis` is the number of milliseconds used for compression.
* `mb/s` is megabytes (2^20 bytes) per second.
```
Silesia Corpus:
http://sun.aei.polsl.pl/~sdeor/corpus/silesia.zip
This package:
file out level insize outsize millis mb/s
silesia.tar zskp 1 211947520 73821326 634 318.47
silesia.tar zskp 2 211947520 67655404 1508 133.96
silesia.tar zskp 3 211947520 64746933 3000 67.37
silesia.tar zskp 4 211947520 60073508 16926 11.94
cgo zstd:
silesia.tar zstd 1 211947520 73605392 543 371.56
silesia.tar zstd 3 211947520 66793289 864 233.68
silesia.tar zstd 6 211947520 62916450 1913 105.66
silesia.tar zstd 9 211947520 60212393 5063 39.92
gzip, stdlib/this package:
silesia.tar gzstd 1 211947520 80007735 1498 134.87
silesia.tar gzkp 1 211947520 80088272 1009 200.31
GOB stream of binary data. Highly compressible.
https://files.klauspost.com/compress/gob-stream.7z
file out level insize outsize millis mb/s
gob-stream zskp 1 1911399616 233948096 3230 564.34
gob-stream zskp 2 1911399616 203997694 4997 364.73
gob-stream zskp 3 1911399616 173526523 13435 135.68
gob-stream zskp 4 1911399616 162195235 47559 38.33
gob-stream zstd 1 1911399616 249810424 2637 691.26
gob-stream zstd 3 1911399616 208192146 3490 522.31
gob-stream zstd 6 1911399616 193632038 6687 272.56
gob-stream zstd 9 1911399616 177620386 16175 112.70
gob-stream gzstd 1 1911399616 357382013 9046 201.49
gob-stream gzkp 1 1911399616 359136669 4885 373.08
The test data for the Large Text Compression Benchmark is the first
10^9 bytes of the English Wikipedia dump on Mar. 3, 2006.
http://mattmahoney.net/dc/textdata.html
file out level insize outsize millis mb/s
enwik9 zskp 1 1000000000 343833605 3687 258.64
enwik9 zskp 2 1000000000 317001237 7672 124.29
enwik9 zskp 3 1000000000 291915823 15923 59.89
enwik9 zskp 4 1000000000 261710291 77697 12.27
enwik9 zstd 1 1000000000 358072021 3110 306.65
enwik9 zstd 3 1000000000 313734672 4784 199.35
enwik9 zstd 6 1000000000 295138875 10290 92.68
enwik9 zstd 9 1000000000 278348700 28549 33.40
enwik9 gzstd 1 1000000000 382578136 8608 110.78
enwik9 gzkp 1 1000000000 382781160 5628 169.45
Highly compressible JSON file.
https://files.klauspost.com/compress/github-june-2days-2019.json.zst
file out level insize outsize millis mb/s
github-june-2days-2019.json zskp 1 6273951764 697439532 9789 611.17
github-june-2days-2019.json zskp 2 6273951764 610876538 18553 322.49
github-june-2days-2019.json zskp 3 6273951764 517662858 44186 135.41
github-june-2days-2019.json zskp 4 6273951764 464617114 165373 36.18
github-june-2days-2019.json zstd 1 6273951764 766284037 8450 708.00
github-june-2days-2019.json zstd 3 6273951764 661889476 10927 547.57
github-june-2days-2019.json zstd 6 6273951764 642756859 22996 260.18
github-june-2days-2019.json zstd 9 6273951764 601974523 52413 114.16
github-june-2days-2019.json gzstd 1 6273951764 1164397768 26793 223.32
github-june-2days-2019.json gzkp 1 6273951764 1120631856 17693 338.16
VM Image, Linux mint with a few installed applications:
https://files.klauspost.com/compress/rawstudio-mint14.7z
file out level insize outsize millis mb/s
rawstudio-mint14.tar zskp 1 8558382592 3718400221 18206 448.29
rawstudio-mint14.tar zskp 2 8558382592 3326118337 37074 220.15
rawstudio-mint14.tar zskp 3 8558382592 3163842361 87306 93.49
rawstudio-mint14.tar zskp 4 8558382592 2970480650 783862 10.41
rawstudio-mint14.tar zstd 1 8558382592 3609250104 17136 476.27
rawstudio-mint14.tar zstd 3 8558382592 3341679997 29262 278.92
rawstudio-mint14.tar zstd 6 8558382592 3235846406 77904 104.77
rawstudio-mint14.tar zstd 9 8558382592 3160778861 140946 57.91
rawstudio-mint14.tar gzstd 1 8558382592 3926234992 51345 158.96
rawstudio-mint14.tar gzkp 1 8558382592 3960117298 36722 222.26
CSV data:
https://files.klauspost.com/compress/nyc-taxi-data-10M.csv.zst
file out level insize outsize millis mb/s
nyc-taxi-data-10M.csv zskp 1 3325605752 641319332 9462 335.17
nyc-taxi-data-10M.csv zskp 2 3325605752 588976126 17570 180.50
nyc-taxi-data-10M.csv zskp 3 3325605752 529329260 32432 97.79
nyc-taxi-data-10M.csv zskp 4 3325605752 474949772 138025 22.98
nyc-taxi-data-10M.csv zstd 1 3325605752 687399637 8233 385.18
nyc-taxi-data-10M.csv zstd 3 3325605752 598514411 10065 315.07
nyc-taxi-data-10M.csv zstd 6 3325605752 570522953 20038 158.27
nyc-taxi-data-10M.csv zstd 9 3325605752 517554797 64565 49.12
nyc-taxi-data-10M.csv gzstd 1 3325605752 928654908 21270 149.11
nyc-taxi-data-10M.csv gzkp 1 3325605752 922273214 13929 227.68
```
## Decompressor
Status: STABLE - there may still be subtle bugs, but a wide variety of content has been tested.
This library is being continuously [fuzz-tested](https://github.com/klauspost/compress-fuzz),
kindly supplied by [fuzzit.dev](https://fuzzit.dev/).
The main purpose of the fuzz testing is to ensure that it is not possible to crash the decoder,
or run it past its limits with ANY input provided.
### Usage
The package has been designed for two main use cases: big streams of data and smaller in-memory buffers.
Both are accessed by creating a `Decoder`.
For streaming use a simple setup could look like this:
```Go
import "github.com/klauspost/compress/zstd"
func Decompress(in io.Reader, out io.Writer) error {
d, err := zstd.NewReader(in)
if err != nil {
return err
}
defer d.Close()
// Copy content...
_, err = io.Copy(out, d)
return err
}
```
When running with default settings, it is important to call `Close` when you no longer need the Reader,
so that its goroutines are stopped.
Goroutines will exit once an error has been returned, including `io.EOF` at the end of a stream.
Streams are decoded concurrently in 4 asynchronous stages to give the best possible throughput.
However, if you prefer synchronous decompression, use `WithDecoderConcurrency(1)` which will decompress data
as it is being requested only.
For decoding buffers, it could look something like this:
```Go
import "github.com/klauspost/compress/zstd"
// Create a reader that caches decompressors.
// For this operation type we supply a nil Reader.
var decoder, _ = zstd.NewReader(nil, zstd.WithDecoderConcurrency(0))
// Decompress a buffer. We don't supply a destination buffer,
// so it will be allocated by the decoder.
func Decompress(src []byte) ([]byte, error) {
return decoder.DecodeAll(src, nil)
}
```
Both of these cases should provide the functionality needed.
The decoder can be used for *concurrent* decompression of multiple buffers.
By default 4 decompressors will be created.
It will only allow a certain number of concurrent operations to run.
To tweak that yourself use the `WithDecoderConcurrency(n)` option when creating the decoder.
It is possible to use `WithDecoderConcurrency(0)` to create GOMAXPROCS decoders.
### Dictionaries
Data compressed with [dictionaries](https://github.com/facebook/zstd#the-case-for-small-data-compression) can be decompressed.
Dictionaries are added individually to Decoders.
Dictionaries are generated by the `zstd --train` command and contain an initial state for the decoder.
To add a dictionary use the `WithDecoderDicts(dicts ...[]byte)` option with the dictionary data.
Several dictionaries can be added at once.
The dictionary will be used automatically for the data that specifies them.
A re-used Decoder will still contain the dictionaries registered.
When registering multiple dictionaries with the same ID, the last one will be used.
It is possible to use dictionaries when compressing data.
To enable a dictionary use `WithEncoderDict(dict []byte)`. Here only one dictionary will be used
and it will likely be used even if it doesn't improve compression.
The same dictionary must then be used to decompress the content.
For any real gains, the dictionary should be built with similar data.
If an unsuitable dictionary is used the output may be slightly larger than using no dictionary.
Use the [zstd commandline tool](https://github.com/facebook/zstd/releases) to build a dictionary from sample data.
For information see [zstd dictionary information](https://github.com/facebook/zstd#the-case-for-small-data-compression).
For now there is a fixed startup performance penalty for compressing content with dictionaries.
This will likely be improved over time. Just be sure to test performance when implementing.
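A minimal sketch of wiring the same dictionary into both sides, assuming `dict` holds the raw output of `zstd --train`:
```Go
import "github.com/klauspost/compress/zstd"

// codecsWithDict returns an encoder/decoder pair sharing one dictionary.
func codecsWithDict(dict []byte) (*zstd.Encoder, *zstd.Decoder, error) {
	enc, err := zstd.NewWriter(nil, zstd.WithEncoderDict(dict))
	if err != nil {
		return nil, nil, err
	}
	dec, err := zstd.NewReader(nil, zstd.WithDecoderDicts(dict))
	if err != nil {
		enc.Close()
		return nil, nil, err
	}
	return enc, dec, nil
}
```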
### Allocation-less operation
The decoder has been designed to operate without allocations after a warmup.
This means that you should *store* the decoder for best performance.
To re-use a stream decoder, use the `Reset(r io.Reader) error` to switch to another stream.
A decoder can safely be re-used even if the previous stream failed.
To release the resources, you must call the `Close()` function on a decoder.
After this it can *no longer be reused*, but all running goroutines will be stopped.
So you *must* call it once you no longer need the Reader.
For decompressing smaller buffers a single decoder can be used.
When decoding buffers, you can supply a destination slice with length 0 and your expected capacity.
In this case no unneeded allocations should be made.
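For example, a sketch that reuses a stored decoder and a caller-supplied buffer (the buffer capacity is the caller's own size estimate):
```Go
import "github.com/klauspost/compress/zstd"

var decoder, _ = zstd.NewReader(nil)

// DecompressInto appends decompressed data to buf[:0]. If cap(buf) is
// large enough for the output, no extra allocation should be needed.
func DecompressInto(src, buf []byte) ([]byte, error) {
	return decoder.DecodeAll(src, buf[:0])
}
```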
### Concurrency
The buffer decoder does everything on the same goroutine and does nothing concurrently.
It can however decode several buffers concurrently. Use `WithDecoderConcurrency(n)` to limit that.
The stream decoder will create goroutines that:
1) Reads input and splits the input into blocks.
2) Decompression of literals.
3) Decompression of sequences.
4) Reconstruction of output stream.
So effectively this also means the decoder will "read ahead" and prepare data to always be available for output.
For streams, the concurrency level determines how many blocks ahead the decompression will start.
Since "blocks" depend heavily on the output of the previous block, stream decoding only has limited concurrency.
In practice this means that concurrency is often limited to utilizing about 3 cores effectively.
### Benchmarks
The first two are streaming decodes; the rest are smaller inputs.
Running on AMD Ryzen 9 3950X 16-Core Processor. AMD64 assembly used.
```
BenchmarkDecoderSilesia-32 5 206878840 ns/op 1024.50 MB/s 49808 B/op 43 allocs/op
BenchmarkDecoderEnwik9-32 1 1271809000 ns/op 786.28 MB/s 72048 B/op 52 allocs/op
Concurrent blocks, performance:
BenchmarkDecoder_DecodeAllParallel/kppkn.gtb.zst-32 67356 17857 ns/op 10321.96 MB/s 22.48 pct 102 B/op 0 allocs/op
BenchmarkDecoder_DecodeAllParallel/geo.protodata.zst-32 266656 4421 ns/op 26823.21 MB/s 11.89 pct 19 B/op 0 allocs/op
BenchmarkDecoder_DecodeAllParallel/plrabn12.txt.zst-32 20992 56842 ns/op 8477.17 MB/s 39.90 pct 754 B/op 0 allocs/op
BenchmarkDecoder_DecodeAllParallel/lcet10.txt.zst-32 27456 43932 ns/op 9714.01 MB/s 33.27 pct 524 B/op 0 allocs/op
BenchmarkDecoder_DecodeAllParallel/asyoulik.txt.zst-32 78432 15047 ns/op 8319.15 MB/s 40.34 pct 66 B/op 0 allocs/op
BenchmarkDecoder_DecodeAllParallel/alice29.txt.zst-32 65800 18436 ns/op 8249.63 MB/s 37.75 pct 88 B/op 0 allocs/op
BenchmarkDecoder_DecodeAllParallel/html_x_4.zst-32 102993 11523 ns/op 35546.09 MB/s 3.637 pct 143 B/op 0 allocs/op
BenchmarkDecoder_DecodeAllParallel/paper-100k.pdf.zst-32 1000000 1070 ns/op 95720.98 MB/s 80.53 pct 3 B/op 0 allocs/op
BenchmarkDecoder_DecodeAllParallel/fireworks.jpeg.zst-32 749802 1752 ns/op 70272.35 MB/s 100.0 pct 5 B/op 0 allocs/op
BenchmarkDecoder_DecodeAllParallel/urls.10K.zst-32 22640 52934 ns/op 13263.37 MB/s 26.25 pct 1014 B/op 0 allocs/op
BenchmarkDecoder_DecodeAllParallel/html.zst-32 226412 5232 ns/op 19572.27 MB/s 14.49 pct 20 B/op 0 allocs/op
BenchmarkDecoder_DecodeAllParallel/comp-data.bin.zst-32 923041 1276 ns/op 3194.71 MB/s 31.26 pct 0 B/op 0 allocs/op
```
This reflects the performance around May 2022, but this may be out of date.
## Zstd inside ZIP files
It is possible to use zstandard to compress individual files inside zip archives.
While this isn't widely supported it can be useful for internal files.
To support the compression and decompression of these files you must register a compressor and decompressor.
It is highly recommended to register the (de)compressors on individual zip Readers/Writers and NOT
to use the global registration functions. The main reason for this is that two registrations from
different packages will result in a panic.
It is a good idea to only have a single compressor and decompressor, since they can be used for multiple zip
files concurrently, and using a single instance will allow reusing some resources.
See [this example](https://pkg.go.dev/github.com/klauspost/compress/zstd#example-ZipCompressor) for
how to compress and decompress files inside zip archives.
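A sketch of per-archive registration using the helpers this package exposes (treat the exact wiring as illustrative):
```Go
import (
	"archive/zip"
	"io"

	"github.com/klauspost/compress/zstd"
)

// newZstdZipWriter registers zstd on a single zip.Writer, avoiding the
// global registry.
func newZstdZipWriter(w io.Writer) *zip.Writer {
	zw := zip.NewWriter(w)
	zw.RegisterCompressor(zstd.ZipMethodWinZip, zstd.ZipCompressor())
	return zw
}

// On the read side, register the decompressor on the zip.Reader:
//   zr.RegisterDecompressor(zstd.ZipMethodWinZip, zstd.ZipDecompressor())
```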
# Contributions
Contributions are always welcome.
For new features/fixes, remember to add tests and for performance enhancements include benchmarks.
For general feedback and experience reports, feel free to open an issue or write me on [Twitter](https://twitter.com/sh0dan).
This package includes the excellent [`github.com/cespare/xxhash`](https://github.com/cespare/xxhash) package Copyright (c) 2016 Caleb Spare. | {
"source": "yandex/perforator",
"title": "vendor/github.com/klauspost/compress/zstd/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/klauspost/compress/zstd/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 21359
} |
# orb/clip [](https://pkg.go.dev/github.com/paulmach/orb/clip)
Package orb/clip provides functions for clipping lines and polygons to a bounding box.
- uses [Cohen-Sutherland algorithm](https://en.wikipedia.org/wiki/Cohen%E2%80%93Sutherland_algorithm) for line clipping
- uses [Sutherland-Hodgman algorithm](https://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm) for polygon clipping
## Example
```go
bound := orb.Bound{Min: orb.Point{0, 0}, Max: orb.Point{30, 30}}
ls := orb.LineString{
{-10, 10}, {10, 10}, {10, -10}, {20, -10}, {20, 10},
{40, 10}, {40, 20}, {20, 20}, {20, 40}, {10, 40},
{10, 20}, {5, 20}, {-10, 20},
}
// works on and returns an orb.Geometry interface.
clipped := clip.Geometry(bound, ls)

// or clip the line string directly
clippedLine := clip.LineString(bound, ls)
```
## List of sub-package utilities
- [`smartclip`](smartclip) - handles partial 2d geometries
## Acknowledgements
This library is based on [mapbox/lineclip](https://github.com/mapbox/lineclip). | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/clip/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/clip/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1087
} |
# orb/geo [](https://pkg.go.dev/github.com/paulmach/orb/geo)
The geometries defined in the `orb` package are generic 2d geometries.
Depending on what projection they're in, e.g. lon/lat or flat on the plane,
area and distance calculations are different. This package implements methods
that assume the lon/lat or WGS84 projection.
## Examples
Area of the [San Francisco Main Library](https://www.openstreetmap.org/way/24446086):
```go
poly := orb.Polygon{
{
{ -122.4163816, 37.7792782 },
{ -122.4162786, 37.7787626 },
{ -122.4151027, 37.7789118 },
{ -122.4152143, 37.7794274 },
{ -122.4163816, 37.7792782 },
},
}
a := geo.Area(poly)
fmt.Printf("%f m^2", a)
// Output:
// 6073.368008 m^2
```
Distance between two points:
```go
oakland := orb.Point{-122.270833, 37.804444}
sf := orb.Point{-122.416667, 37.783333}
d := geo.Distance(oakland, sf)
fmt.Printf("%0.3f meters", d)
// Output:
// 13042.047 meters
```
Circumference of the [San Francisco Main Library](https://www.openstreetmap.org/way/24446086):
```go
poly := orb.Polygon{
{
{ -122.4163816, 37.7792782 },
{ -122.4162786, 37.7787626 },
{ -122.4151027, 37.7789118 },
{ -122.4152143, 37.7794274 },
{ -122.4163816, 37.7792782 },
},
}
l := geo.Length(poly)
fmt.Printf("%0.0f meters", l)
// Output:
// 325 meters
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/geo/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/geo/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1443
} |
# orb/geojson [](https://pkg.go.dev/github.com/paulmach/orb/geojson)
This package **encodes and decodes** [GeoJSON](http://geojson.org/) into Go structs
using the geometries in the [orb](https://github.com/paulmach/orb) package.
Supports both the [json.Marshaler](https://pkg.go.dev/encoding/json#Marshaler) and
[json.Unmarshaler](https://pkg.go.dev/encoding/json#Unmarshaler) interfaces.
The package also provides helper functions such as `UnmarshalFeatureCollection` and `UnmarshalFeature`.
The types also support BSON via the [bson.Marshaler](https://pkg.go.dev/go.mongodb.org/mongo-driver/bson#Marshaler) and
[bson.Unmarshaler](https://pkg.go.dev/go.mongodb.org/mongo-driver/bson#Unmarshaler) interfaces.
These types can be used directly when working with MongoDB.
## Unmarshalling (JSON -> Go)
```go
rawJSON := []byte(`
{ "type": "FeatureCollection",
"features": [
{ "type": "Feature",
"geometry": {"type": "Point", "coordinates": [102.0, 0.5]},
"properties": {"prop0": "value0"}
}
]
}`)
fc, _ := geojson.UnmarshalFeatureCollection(rawJSON)
// or
fc := geojson.NewFeatureCollection()
err := json.Unmarshal(rawJSON, &fc)
// Geometry will be unmarshalled into the correct orb.Geometry type.
point := fc.Features[0].Geometry.(orb.Point)
```
## Marshalling (Go -> JSON)
```go
fc := geojson.NewFeatureCollection()
fc.Append(geojson.NewFeature(orb.Point{1, 2}))
rawJSON, _ := fc.MarshalJSON()
// or
blob, _ := json.Marshal(fc)
```
## Foreign/extra members in a feature collection
```go
rawJSON := []byte(`
{ "type": "FeatureCollection",
"generator": "myapp",
"timestamp": "2020-06-15T01:02:03Z",
"features": [
{ "type": "Feature",
"geometry": {"type": "Point", "coordinates": [102.0, 0.5]},
"properties": {"prop0": "value0"}
}
]
}`)
fc, _ := geojson.UnmarshalFeatureCollection(rawJSON)
fc.ExtraMembers["generator"] // == "myApp"
fc.ExtraMembers["timestamp"] // == "2020-06-15T01:02:03Z"
// Marshalling will include values in `ExtraMembers` in the
// base featureCollection object.
```
## Performance
For performance-critical applications, consider a
third-party replacement for "encoding/json", such as [github.com/json-iterator/go](https://github.com/json-iterator/go).
This can be enabled with something like this:
```go
import (
	jsoniter "github.com/json-iterator/go"
	"github.com/paulmach/orb/geojson"
)

var c = jsoniter.Config{
	EscapeHTML:              true,
	SortMapKeys:             false,
	MarshalFloatWith6Digits: true,
}.Froze()

geojson.CustomJSONMarshaler = c
geojson.CustomJSONUnmarshaler = c
```
The above change can have dramatic performance implications, see the benchmarks below
on a 100k feature collection file:
```
benchmark old ns/op new ns/op delta
BenchmarkFeatureMarshalJSON-12 2694543 733480 -72.78%
BenchmarkFeatureUnmarshalJSON-12 5383825 2738183 -49.14%
BenchmarkGeometryMarshalJSON-12 210107 62789 -70.12%
BenchmarkGeometryUnmarshalJSON-12 691472 144689 -79.08%
benchmark old allocs new allocs delta
BenchmarkFeatureMarshalJSON-12 7818 2316 -70.38%
BenchmarkFeatureUnmarshalJSON-12 23047 31946 +38.61%
BenchmarkGeometryMarshalJSON-12 2 3 +50.00%
BenchmarkGeometryUnmarshalJSON-12 2042 18 -99.12%
benchmark old bytes new bytes delta
BenchmarkFeatureMarshalJSON-12 794088 490251 -38.26%
BenchmarkFeatureUnmarshalJSON-12 766354 1068497 +39.43%
BenchmarkGeometryMarshalJSON-12 24787 18650 -24.76%
BenchmarkGeometryUnmarshalJSON-12 79784 51374 -35.61%
```
## Feature Properties
GeoJSON features can have properties of any type. This can cause issues in a statically typed
language such as Go. Included is a `Properties` type with some helper methods that will try to
force-convert a property. An optional default will be used if the property is missing or of the wrong
type.
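For example, a short sketch using the helpers listed below:

```go
f := geojson.NewFeature(orb.Point{1, 2})
f.Properties["name"] = "point A"

name := f.Properties.MustString("name", "unnamed") // "point A"
rank := f.Properties.MustInt("rank", 0)            // 0, since the key is missing
```

The available helpers are: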
```go
f.Properties.MustBool(key string, def ...bool) bool
f.Properties.MustFloat64(key string, def ...float64) float64
f.Properties.MustInt(key string, def ...int) int
f.Properties.MustString(key string, def ...string) string
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/geojson/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/geojson/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 4461
} |
# orb/maptile [](https://pkg.go.dev/github.com/paulmach/orb/maptile)
Package `maptile` provides types and methods for working with
[web mercator map tiles](https://www.google.com/search?q=web+mercator+map+tiles).
It defines a tile as:
```go
type Tile struct {
X, Y uint32
Z Zoom
}
type Zoom uint32
```
Functions are provided to create tiles from lon/lat points as well as
[quadkeys](https://msdn.microsoft.com/en-us/library/bb259689.aspx).
The tile defines helper methods such as `Parent()`, `Children()`, `Siblings()`, etc.
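For example, a small sketch (the coordinates are illustrative):

```go
import (
	"fmt"

	"github.com/paulmach/orb"
	"github.com/paulmach/orb/maptile"
)

func main() {
	// Tile containing a lon/lat point at zoom 15.
	t := maptile.At(orb.Point{-122.416667, 37.783333}, 15)

	fmt.Println(t.Z, t.X, t.Y) // the tile coordinates
	fmt.Println(t.Bound())     // lon/lat bound covered by the tile
	fmt.Println(t.Parent())    // the containing tile at zoom 14
}
```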
## List of sub-package utilities
- [`tilecover`](tilecover) - computes the covering set of tiles for an `orb.Geometry`.
## Similar libraries in other languages:
- [mercantile](https://github.com/mapbox/mercantile) - Python
- [sphericalmercator](https://github.com/mapbox/sphericalmercator) - Node
- [tilebelt](https://github.com/mapbox/tilebelt) - Node | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/maptile/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/maptile/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 972
} |
# orb/planar [](https://pkg.go.dev/github.com/paulmach/orb/planar)
The geometries defined in the `orb` package are generic 2d geometries.
Depending on what projection they're in, e.g. lon/lat or flat on the plane,
area and distance calculations are different. This package implements methods
that assume the planar or Euclidean context.
## Examples
Area of 3-4-5 triangle:
```go
r := orb.Ring{{0, 0}, {3, 0}, {0, 4}, {0, 0}}
a := planar.Area(r)
fmt.Println(a)
// Output:
// 6
```
Distance between two points:
```go
d := planar.Distance(orb.Point{0, 0}, orb.Point{3, 4})
fmt.Println(d)
// Output:
// 5
```
Length/circumference of a 3-4-5 triangle:
```go
r := orb.Ring{{0, 0}, {3, 0}, {0, 4}, {0, 0}}
l := planar.Length(r)
fmt.Println(l)
// Output:
// 12
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/planar/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/planar/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 835
} |
# orb/project [](https://pkg.go.dev/github.com/paulmach/orb/project)
Package `project` has helper functions for projecting geometries.
### Examples
Project `orb.Point` to Mercator:
```go
sf := orb.Point{-122.416667, 37.783333}
merc := project.Point(sf, project.WGS84.ToMercator)
fmt.Println(merc)
// Output:
// [-1.3627361035049736e+07 4.548863085837512e+06]
```
Find centroid of polygon in Mercator projection:
```go
poly := orb.Polygon{
{
{-122.4163816, 37.7792782},
{-122.4162786, 37.7787626},
{-122.4151027, 37.7789118},
{-122.4152143, 37.7794274},
{-122.4163816, 37.7792782},
},
}
merc := project.Polygon(poly, project.WGS84.ToMercator)
centroid, _ := planar.CentroidArea(merc)
centroid = project.Mercator.ToWGS84(centroid)
fmt.Println(centroid)
// Output:
// [-122.41574403384001 37.77909471899779]
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/project/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/project/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 933
} |
# orb/quadtree [](https://pkg.go.dev/github.com/paulmach/orb/quadtree)
Package `quadtree` implements a quadtree using rectangular partitions.
Each point exists in a unique node. This implementation is based on the
[d3 implementation](https://github.com/mbostock/d3/wiki/Quadtree-Geom).
## API
```go
func New(bound orb.Bound) *Quadtree
func (q *Quadtree) Bound() orb.Bound
func (q *Quadtree) Add(p orb.Pointer) error
func (q *Quadtree) Remove(p orb.Pointer, eq FilterFunc) bool
func (q *Quadtree) Find(p orb.Point) orb.Pointer
func (q *Quadtree) Matching(p orb.Point, f FilterFunc) orb.Pointer
func (q *Quadtree) KNearest(buf []orb.Pointer, p orb.Point, k int, maxDistance ...float64) []orb.Pointer
func (q *Quadtree) KNearestMatching(buf []orb.Pointer, p orb.Point, k int, f FilterFunc, maxDistance ...float64) []orb.Pointer
func (q *Quadtree) InBound(buf []orb.Pointer, b orb.Bound) []orb.Pointer
func (q *Quadtree) InBoundMatching(buf []orb.Pointer, b orb.Bound, f FilterFunc) []orb.Pointer
```
## Examples
```go
func ExampleQuadtree_Find() {
r := rand.New(rand.NewSource(42)) // to make things reproducible
qt := quadtree.New(orb.Bound{Min: orb.Point{0, 0}, Max: orb.Point{1, 1}})
// add 1000 random points
for i := 0; i < 1000; i++ {
qt.Add(orb.Point{r.Float64(), r.Float64()})
}
nearest := qt.Find(orb.Point{0.5, 0.5})
fmt.Printf("nearest: %+v\n", nearest)
// Output:
// nearest: [0.4930591659434973 0.5196585530161364]
}
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/quadtree/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/quadtree/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1559
} |
# orb/resample [](https://pkg.go.dev/github.com/paulmach/orb/resample)
Package `resample` has a couple of functions for resampling line geometry
into more or less evenly spaced points.
```go
func Resample(ls orb.LineString, df orb.DistanceFunc, totalPoints int) orb.LineString
func ToInterval(ls orb.LineString, df orb.DistanceFunc, dist float64) orb.LineString
```
For example, resampling a line string so the points are 1 planar unit apart:
```go
ls := resample.ToInterval(ls, planar.Distance, 1.0)
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/resample/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/resample/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 573
} |
# orb/simplify [](https://pkg.go.dev/github.com/paulmach/orb/simplify)
This package implements several reducing/simplifying functions for `orb.Geometry` types.
Currently implemented:
- [Douglas-Peucker](#dp)
- [Visvalingam](#vis)
- [Radial](#radial)
**Note:** The geometry object CAN be modified; use `Clone()` if a copy is required.
## <a name="dp"></a>Douglas-Peucker
Probably the most popular simplification algorithm. For algorithm details, see
[wikipedia](http://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm).
The algorithm is a pass-through for 1d geometry, e.g. Point and MultiPoint.
The algorithm can modify the original geometry; use `Clone()` if a copy is required.
Usage:

```go
original := orb.LineString{}
reduced := simplify.DouglasPeucker(threshold).Simplify(original.Clone())
```
## <a name="vis"></a>Visvalingam
See Mike Bostock's explanation for
[algorithm details](http://bost.ocks.org/mike/simplify/).
The algorithm is a pass-through for 1d geometry, e.g. Point and MultiPoint.
The algorithm can modify the original geometry; use `Clone()` if a copy is required.
Usage:
```go
original := orb.Ring{}
// will remove all points whose triangle is smaller than `threshold`
reduced := simplify.VisvalingamThreshold(threshold).Simplify(original)
// will remove points until there are only `toKeep` points left.
reduced = simplify.VisvalingamKeep(toKeep).Simplify(original)
// One can also combine the parameters.
// This will continue to remove points until:
// - there are no more below the threshold,
// - or the new path is of length `toKeep`
reduced = simplify.Visvalingam(threshold, toKeep).Simplify(original)
```
## <a name="radial"></a>Radial
Radial reduces the path by removing points that are close together.
A full [algorithm description](http://psimpl.sourceforge.net/radial-distance.html).
The algorithm is a pass-through for 1d geometry, like Point and MultiPoint.
The algorithm can modify the original geometry; use `Clone()` if a copy is required.
Usage:
```go
original := orb.Polygon{}

// this method uses a Euclidean distance measure.
reduced := simplify.Radial(planar.Distance, threshold).Simplify(original)

// if the points are in the lon/lat space, Radial with geo.Distance will
// compute the geo distance between the coordinates.
reduced = simplify.Radial(geo.Distance, meters).Simplify(original)
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/simplify/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/simplify/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2430
} |
# lz4 : LZ4 compression in pure Go
[](https://pkg.go.dev/github.com/pierrec/lz4/v4)
[](https://github.com/pierrec/lz4/actions)
[](https://goreportcard.com/report/github.com/pierrec/lz4)
[](https://github.com/pierrec/lz4/tags)
## Overview
This package provides a streaming interface to [LZ4 data streams](http://fastcompression.blogspot.fr/2013/04/lz4-streaming-format-final.html) as well as low level compress and uncompress functions for LZ4 data blocks.
The implementation is based on the reference C [one](https://github.com/lz4/lz4).
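For example, a sketch of the block-level API (the incompressible-input convention noted in the comment is an assumption worth verifying against the package docs):

```go
import "github.com/pierrec/lz4/v4"

// compressBlock compresses src as a single LZ4 block.
func compressBlock(src []byte) ([]byte, error) {
	dst := make([]byte, lz4.CompressBlockBound(len(src)))
	n, err := lz4.CompressBlock(src, dst)
	if err != nil {
		return nil, err
	}
	if n == 0 {
		// n == 0 signals incompressible input; store src uncompressed.
		return nil, nil
	}
	return dst[:n], nil
}
```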
## Install
Assuming you have the go toolchain installed:
```
go get github.com/pierrec/lz4/v4
```
There is a command line interface tool to compress and decompress LZ4 files.
```
go install github.com/pierrec/lz4/v4/cmd/lz4c@latest
```
Usage
```
Usage of lz4c:
-version
print the program version
Subcommands:
Compress the given files or from stdin to stdout.
compress [arguments] [<file name> ...]
-bc
enable block checksum
-l int
compression level (0=fastest)
-sc
disable stream checksum
-size string
block max size [64K,256K,1M,4M] (default "4M")
Uncompress the given files or from stdin to stdout.
uncompress [arguments] [<file name> ...]
```
## Example
```go
import (
	"io"
	"os"
	"strings"

	"github.com/pierrec/lz4/v4"
)

// Compress and uncompress an input string.
s := "hello world"
r := strings.NewReader(s)

// The pipe will uncompress the data from the writer.
pr, pw := io.Pipe()
zw := lz4.NewWriter(pw)
zr := lz4.NewReader(pr)

go func() {
	// Compress the input string.
	_, _ = io.Copy(zw, r)
	_ = zw.Close() // Make sure the writer is closed
	_ = pw.Close() // Terminate the pipe
}()

_, _ = io.Copy(os.Stdout, zr)
// Output:
// hello world
```
## Contributing
Contributions are very welcome for bug fixing, performance improvements...!
- Open an issue with a proper description
- Send a pull request with appropriate test case(s)
## Contributors
Thanks to all [contributors](https://github.com/pierrec/lz4/graphs/contributors) so far!
Special thanks to [@Zariel](https://github.com/Zariel) for his asm implementation of the decoder.
Special thanks to [@greatroar](https://github.com/greatroar) for his work on the asm implementations of the decoder for amd64 and arm64.
Special thanks to [@klauspost](https://github.com/klauspost) for his work on optimizing the code. | {
"source": "yandex/perforator",
"title": "vendor/github.com/pierrec/lz4/v4/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/pierrec/lz4/v4/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2626
} |
See [](https://pkg.go.dev/github.com/prometheus/client_golang/prometheus). | {
"source": "yandex/perforator",
"title": "vendor/github.com/prometheus/client_golang/prometheus/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/prometheus/client_golang/prometheus/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 166
} |
# OpenTelemetry/OpenTracing Bridge
## Getting started
`go get go.opentelemetry.io/otel/bridge/opentracing`
Assuming you have configured an OpenTelemetry `TracerProvider`, these are the steps to wire up the bridge:
```go
import (
"go.opentelemetry.io/otel"
otelBridge "go.opentelemetry.io/otel/bridge/opentracing"
)
func main() {
/* Create tracerProvider and configure OpenTelemetry ... */
otelTracer := tracerProvider.Tracer("tracer_name")
// Use the bridgeTracer as your OpenTracing tracer.
bridgeTracer, wrapperTracerProvider := otelBridge.NewTracerPair(otelTracer)
// Set the wrapperTracerProvider as the global OpenTelemetry
// TracerProvider so instrumentation will use it by default.
otel.SetTracerProvider(wrapperTracerProvider)
/* ... */
}
```
## Interop from trace context from OpenTracing to OpenTelemetry
In order to get OpenTracing spans properly into the OpenTelemetry context, so they can be propagated (both internally and externally), you will need to explicitly use the `BridgeTracer` for creating your OpenTracing spans, rather than a bare OpenTracing `Tracer` instance.
When you have started an OpenTracing Span, make sure OpenTelemetry knows about it like this:
```go
ctxWithOTSpan := opentracing.ContextWithSpan(ctx, otSpan)
ctxWithOTAndOTelSpan := bridgeTracer.ContextWithSpanHook(ctxWithOTSpan, otSpan)
// Propagate the otSpan to both OpenTracing and OpenTelemetry
// instrumentation by using the ctxWithOTAndOTelSpan context.
```
## Extended Functionality
The bridge functionality can be extended beyond the OpenTracing API.
Any [`trace.SpanContext`](https://pkg.go.dev/go.opentelemetry.io/otel/trace#SpanContext) method can be accessed as following:
```go
type spanContextProvider interface {
IsSampled() bool
TraceID() trace.TraceID
SpanID() trace.SpanID
TraceFlags() trace.TraceFlags
... // any other available method can be added here to access it
}
var sc opentracing.SpanContext = ...
if s, ok := sc.(spanContextProvider); ok {
// Use TraceID by s.TraceID()
// Use SpanID by s.SpanID()
// Use TraceFlags by s.TraceFlags()
...
}
``` | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/bridge/opentracing/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/bridge/opentracing/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2120
} |
# Metric Embedded
[](https://pkg.go.dev/go.opentelemetry.io/otel/metric/embedded) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/metric/embedded/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/metric/embedded/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Metric Noop
[](https://pkg.go.dev/go.opentelemetry.io/otel/metric/noop) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/metric/noop/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/metric/noop/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 148
} |
# SDK Instrumentation
[](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/instrumentation) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/sdk/instrumentation/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/sdk/instrumentation/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 172
} |
# SDK Resource
[](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/sdk/resource/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/sdk/resource/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 151
} |
# SDK Trace
[](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/trace) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/sdk/trace/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/sdk/trace/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 142
} |
# Semconv v1.10.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.10.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.10.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.10.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.11.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.11.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.11.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.11.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.12.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.12.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.12.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.12.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.13.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.13.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.13.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.13.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.14.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.14.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.14.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.14.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.15.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.15.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.15.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.15.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.16.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.16.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.16.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.16.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.17.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.17.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.17.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.17.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.18.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.18.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.18.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.19.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.19.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.19.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.19.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.20.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.20.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.20.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.20.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.21.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.21.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.21.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.21.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.22.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.22.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.22.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.22.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.23.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.23.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.23.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.23.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.23.1
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.23.1) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.23.1/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.23.1/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.24.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.24.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.24.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.24.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.25.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.25.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.25.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.25.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.26.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.26.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.26.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.26.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.27.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.27.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.27.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.27.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 160
} |
# Semconv v1.4.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.4.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.4.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.4.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 157
} |
# Semconv v1.5.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.5.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.5.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.5.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 157
} |
# Semconv v1.6.1
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.6.1) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.6.1/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.6.1/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 157
} |
# Semconv v1.7.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.7.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.7.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.7.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 157
} |
# Semconv v1.8.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.8.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.8.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.8.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 157
} |
# Semconv v1.9.0
[](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.9.0) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.9.0/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.9.0/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 157
} |
# Trace Embedded
[](https://pkg.go.dev/go.opentelemetry.io/otel/trace/embedded) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/trace/embedded/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/trace/embedded/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 157
} |
# Trace Noop
[](https://pkg.go.dev/go.opentelemetry.io/otel/trace/noop) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/trace/noop/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/trace/noop/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 145
} |
# Building `sys/unix`
The sys/unix package provides access to the raw system call interface of the
underlying operating system. See: https://godoc.org/golang.org/x/sys/unix
Porting Go to a new architecture/OS combination or adding syscalls, types, or
constants to an existing architecture/OS pair requires some manual effort;
however, there are tools that automate much of the process.
## Build Systems
There are currently two ways we generate the necessary files. We are
migrating the build system to use containers so the builds are reproducible.
This is being done on an OS-by-OS basis. Please update this documentation as
components of the build system change.
### Old Build System (currently for `GOOS != "linux"`)
The old build system generates the Go files based on the C header files
present on your system. This means that files
for a given GOOS/GOARCH pair must be generated on a system with that OS and
architecture. This also means that the generated code can differ from system
to system, based on differences in the header files.
To avoid this, if you are using the old build system, only generate the Go
files on an installation with unmodified header files. It is also important to
keep track of which version of the OS the files were generated from (ex.
Darwin 14 vs Darwin 15). This makes it easier to track the progress of changes
and have each OS upgrade correspond to a single change.
To build the files for your current OS and architecture, make sure GOOS and
GOARCH are set correctly and run `mkall.sh`. This will generate the files for
your specific system. Running `mkall.sh -n` shows the commands that will be run.
Requirements: bash, go
### New Build System (currently for `GOOS == "linux"`)
The new build system uses a Docker container to generate the Go files directly
from source checkouts of the kernel and various system libraries. This means
that on any platform that supports Docker, all the files using the new build
system can be generated at once, and generated files will not change based on
what the person running the scripts has installed on their computer.
The OS specific files for the new build system are located in the `${GOOS}`
directory, and the build is coordinated by the `${GOOS}/mkall.go` program. When
the kernel or system library updates, modify the Dockerfile at
`${GOOS}/Dockerfile` to checkout the new release of the source.
To build all the files under the new build system, you must be on an amd64/Linux
system and have your GOOS and GOARCH set accordingly. Running `mkall.sh` will
then generate all of the files for all of the GOOS/GOARCH pairs in the new build
system. Running `mkall.sh -n` shows the commands that will be run.
Requirements: bash, go, docker
## Component files
This section describes the various files used in the code generation process.
It also contains instructions on how to modify these files to add a new
architecture/OS or to add additional syscalls, types, or constants. Note that
if you are using the new build system, the scripts/programs cannot be called normally.
They must be called from within the docker container.
### asm files
The hand-written assembly file at `asm_${GOOS}_${GOARCH}.s` implements system
call dispatch. There are three entry points:
```
func Syscall(trap, a1, a2, a3 uintptr) (r1, r2, err uintptr)
func Syscall6(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2, err uintptr)
func RawSyscall(trap, a1, a2, a3 uintptr) (r1, r2, err uintptr)
```
The first and second are the standard ones; they differ only in how many
arguments can be passed to the kernel. The third is for low-level use by the
ForkExec wrapper. Unlike the first two, it does not call into the scheduler to
let it know that a system call is running.
When porting Go to a new architecture/OS, this file must be implemented for
each GOOS/GOARCH pair.
### mksysnum
Mksysnum is a Go program located at `${GOOS}/mksysnum.go` (or `mksysnum_${GOOS}.go`
for the old system). This program takes in a list of header files containing the
syscall number declarations and parses them to produce the corresponding list of
Go numeric constants. See `zsysnum_${GOOS}_${GOARCH}.go` for the generated
constants.
Adding new syscall numbers is mostly done by running the build on a sufficiently
new installation of the target OS (or updating the source checkouts for the
new build system). However, depending on the OS, you may need to update the
parsing in mksysnum.
### mksyscall.go
The `syscall.go`, `syscall_${GOOS}.go`, `syscall_${GOOS}_${GOARCH}.go` are
hand-written Go files which implement system calls (for unix, the specific OS,
or the specific OS/Architecture pair respectively) that need special handling
and list `//sys` comments giving prototypes for ones that can be generated.
The mksyscall.go program takes the `//sys` and `//sysnb` comments and converts
them into syscalls. This requires the name of the prototype in the comment to
match a syscall number in the `zsysnum_${GOOS}_${GOARCH}.go` file. The function
prototype can be exported (capitalized) or not.
Adding a new syscall often just requires adding a new `//sys` function prototype
with the desired arguments and a capitalized name so it is exported. However, if
you want the interface to the syscall to be different, often one will make an
unexported `//sys` prototype, and then write a custom wrapper in
`syscall_${GOOS}.go`.
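For example, `syscall_linux.go` contains prototypes along these lines (illustrative; the real file has many more):
```
//sys	EpollCreate1(flag int) (fd int, err error)
//sysnb	Getpid() (pid int)
```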
### types files
For each OS, there is a hand-written Go file at `${GOOS}/types.go` (or
`types_${GOOS}.go` on the old system). This file includes standard C headers and
creates Go type aliases to the corresponding C types. The file is then fed
through godefs to get the Go-compatible definitions. Finally, the generated code
is fed though mkpost.go to format the code correctly and remove any hidden or
private identifiers. This cleaned-up code is written to
`ztypes_${GOOS}_${GOARCH}.go`.
The hardest part about preparing this file is figuring out which headers to
include and which symbols need to be `#define`d to get the actual data
structures that pass through to the kernel system calls. Some C libraries
present alternate versions for binary compatibility and translate them on the
way in and out of system calls, but there is almost always a `#define` that can
get the real ones.
See `types_darwin.go` and `linux/types.go` for examples.
To add a new type, add in the necessary include statement at the top of the
file (if it is not already there) and add in a type alias line. Note that if
your type is significantly different on different architectures, you may need
some `#if/#elif` macros in your include statements.
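A minimal, abbreviated sketch of what such a types file looks like (processed by godefs, not compiled directly):
```
package unix

/*
#include <time.h>
*/
import "C"

type Timespec C.struct_timespec
```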
### mkerrors.sh
This script is used to generate the system's various constants. This doesn't
just include the error numbers and error strings, but also the signal numbers
and a wide variety of miscellaneous constants. The constants come from the list
of include files in the `includes_${uname}` variable. A regex then picks out
the desired `#define` statements, and generates the corresponding Go constants.
The error numbers and strings are generated from `#include <errno.h>`, and the
signal numbers and strings are generated from `#include <signal.h>`. All of
these constants are written to `zerrors_${GOOS}_${GOARCH}.go` via a C program,
`_errors.c`, which prints out all the constants.
To add a constant, add the header that includes it to the appropriate variable.
Then, edit the regex (if necessary) to match the desired constant. Be careful
not to make the regex too broad, or it may match unintended constants.
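The generated output is plain Go constants, roughly of this shape (a hedged illustration; the values shown are Linux's, and other platforms differ):

```go
package unix

import "syscall"

// Illustrative zerrors-style entries; the real file is generated.
const (
	O_APPEND = 0x400
	EPERM    = syscall.Errno(0x1)
)
```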
### internal/mkmerge
This program is used to extract duplicate const, func, and type declarations
from the generated architecture-specific files listed below, and merge these
into a common file for each OS.
The merge is performed in the following steps:
1. Construct the set of common code that is identical in all architecture-specific files.
2. Write this common code to the merged file.
3. Remove the common code from all architecture-specific files.
## Generated files
### `zerrors_${GOOS}_${GOARCH}.go`
A file containing all of the system's generated error numbers, error strings,
signal numbers, and constants. Generated by `mkerrors.sh` (see above).
### `zsyscall_${GOOS}_${GOARCH}.go`
A file containing all the generated syscalls for a specific GOOS and GOARCH.
Generated by `mksyscall.go` (see above).
### `zsysnum_${GOOS}_${GOARCH}.go`
A list of numeric constants for all the syscall number of the specific GOOS
and GOARCH. Generated by mksysnum (see above).
### `ztypes_${GOOS}_${GOARCH}.go`
A file containing Go types for passing into (or returning from) syscalls.
Generated by godefs and the types file (see above). | {
"source": "yandex/perforator",
"title": "vendor/golang.org/x/sys/unix/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/golang.org/x/sys/unix/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 8685
} |
`testv3.go` was generated with an older version of codegen, to test reflection
behavior with `grpc.SupportPackageIsVersion3`. DO NOT REGENERATE!
`testv3.go` was then manually edited to replace `"golang.org/x/net/context"`
with `"context"`.
`dynamic.go` was generated with a newer protoc and manually edited to remove
everything except the descriptor bytes var, which is renamed and exported. | {
"source": "yandex/perforator",
"title": "vendor/google.golang.org/grpc/reflection/grpc_testing_not_regenerate/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/google.golang.org/grpc/reflection/grpc_testing_not_regenerate/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 393
} |
This directory contains x509 certificates and associated private keys used in
gRPC-Go tests.
How were these test certs/keys generated?
------------------------------------------
Run `./create.sh` | {
"source": "yandex/perforator",
"title": "vendor/google.golang.org/grpc/testdata/x509/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/google.golang.org/grpc/testdata/x509/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 197
} |
# Minimal Go logging using klog
This package implements the [logr interface](https://github.com/go-logr/logr)
in terms of Kubernetes' [klog](https://github.com/kubernetes/klog). This
provides a relatively minimalist API to logging in Go, backed by a well-proven
implementation.
Because klogr was implemented before klog itself added support for
structured logging, the default in klogr is to serialize key/value
pairs with JSON and log the result as text messages via klog. This
does not work well when klog itself forwards output to a structured
logger.
Therefore the recommended approach is to let klogr pass all log
messages through to klog and deal with structured logging there. Just
beware that the output of klog without a structured logger is meant to
be human-readable, in contrast to the JSON-based traditional format.
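A minimal usage sketch, assuming a standalone `main` package (the message and key/value pairs are illustrative):

```go
package main

import (
	"flag"

	"k8s.io/klog/v2"
	"k8s.io/klog/v2/klogr"
)

func main() {
	klog.InitFlags(nil)
	flag.Parse()
	defer klog.Flush()

	// klogr.New returns a logr.Logger backed by klog.
	log := klogr.New()
	log.Info("pod status changed", "pod", "nginx-1", "phase", "Running")
	log.V(2).Info("verbose detail", "attempt", 1)
}
```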
This is a BETA grade implementation. | {
"source": "yandex/perforator",
"title": "vendor/k8s.io/klog/v2/klogr/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/k8s.io/klog/v2/klogr/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 872
} |
# Semi-public headers
The headers listed here all export symbols into the ares namespace as public
symbols, but these headers are NOT included in the distribution. They are
meant to be used by other tools such as `adig` and `ahost`.
These are most likely general-purpose library functions such as data
structures and algorithms. | {
"source": "yandex/perforator",
"title": "contrib/libs/c-ares/src/lib/include/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/c-ares/src/lib/include/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 342
} |
## Logging types
This package contains shared [protocol buffer][protobuf] types that are populated
by the Stackdriver Logging API and consumed by other APIs.
### Key Concepts
- **HttpRequest**: Contains the complete set of information about a particular
HTTP request, such as HTTP method, request URL, status code, and other things.
- **LogSeverity**: The severity of a log entry (e.g. `DEBUG`, `INFO`, `WARNING`).
[protobuf]: https://developers.google.com/protocol-buffers/ | {
"source": "yandex/perforator",
"title": "contrib/libs/googleapis-common-protos/google/logging/type/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/googleapis-common-protos/google/logging/type/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 480
} |
# gRPC EventEngine
An EventEngine handles all cross-platform I/O, task execution, and DNS
resolution for gRPC. A default, cross-platform implementation is provided with
gRPC, but part of the intent here is to provide an interface for external
integrators to bring their own functionality. This allows for integration with
external event loops, siloing I/O and task execution between channels or
servers, and other custom integrations that were previously unsupported.
*WARNING*: This is experimental code and is subject to change.
## High level expectations of an EventEngine implementation
### Provide their own I/O threads
EventEngines are expected to internally create whatever threads are required to
perform I/O and execute callbacks. For example, an EventEngine implementation
may want to spawn separate thread pools for polling and callback execution.
### Provisioning data buffers via Slice allocation
At a high level, gRPC provides a `ResourceQuota` system that allows gRPC to
reclaim memory and degrade gracefully when memory reaches application-defined
thresholds. To enable this feature, the memory allocation of read/write buffers
within an EventEngine must be acquired in the form of Slices from
SliceAllocators. This is covered more fully in the gRFC and code.
### Documenting expectations around callback execution
Some callbacks may be expensive to run. EventEngines should decide on and
document whether callback execution might block polling operations. This way,
application developers can plan accordingly (e.g., run their expensive callbacks
on a separate thread if necessary).
### Handling concurrent usage
Assume that gRPC may use an EventEngine concurrently across multiple threads.
## TODO: documentation
* Example usage
* Link to gRFC | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/include/grpc/event_engine/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/include/grpc/event_engine/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1772
} |
**The APIs in this directory are not stable!**
This directory contains header files that need to be installed but are not part
of the public API. Users should not use these headers directly. | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/include/grpcpp/impl/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/include/grpcpp/impl/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 191
} |
Optional plugins for gRPC Core: Modules in this directory extend gRPC Core in
useful ways. All optional code belongs here.
NOTE: The movement of code between lib and ext is an ongoing effort, so this
directory currently contains too much of the core library. | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/ext/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/ext/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 259
} |
Required elements of gRPC Core: Each module in this directory is required to
build gRPC. If it's possible to envisage a configuration where code is not
required, then that code belongs in ext/ instead.
NOTE: The movement of code between lib and ext is an ongoing effort, so this
directory currently contains too much of the core library. | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/lib/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/lib/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 338
} |
# Transport Security Interface
An abstraction library over crypto and auth modules (typically OpenSSL). | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/tsi/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/tsi/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 102
} |
Implementation of BLAKE3, originating from https://github.com/BLAKE3-team/BLAKE3/tree/1.3.1/c
# Example
An example program that hashes bytes from standard input and prints the
result:
Using the C++ API:
```c++
#include "llvm/Support/BLAKE3.h"
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
int main() {
// Initialize the hasher.
llvm::BLAKE3 hasher;
// Read input bytes from stdin.
char buf[65536];
while (1) {
ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
if (n > 0) {
hasher.update(llvm::StringRef(buf, n));
} else if (n == 0) {
break; // end of file
} else {
fprintf(stderr, "read failed: %s\n", strerror(errno));
exit(1);
}
}
// Finalize the hash. Default output length is 32 bytes.
auto output = hasher.final();
// Print the hash as hexadecimal.
for (uint8_t byte : output) {
printf("%02x", byte);
}
printf("\n");
return 0;
}
```
Using the C API:
```c
#include "llvm-c/blake3.h"
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
int main() {
// Initialize the hasher.
llvm_blake3_hasher hasher;
llvm_blake3_hasher_init(&hasher);
// Read input bytes from stdin.
unsigned char buf[65536];
while (1) {
ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
if (n > 0) {
llvm_blake3_hasher_update(&hasher, buf, n);
} else if (n == 0) {
break; // end of file
} else {
fprintf(stderr, "read failed: %s\n", strerror(errno));
exit(1);
}
}
// Finalize the hash. LLVM_BLAKE3_OUT_LEN is the default output length, 32 bytes.
uint8_t output[LLVM_BLAKE3_OUT_LEN];
llvm_blake3_hasher_finalize(&hasher, output, LLVM_BLAKE3_OUT_LEN);
// Print the hash as hexadecimal.
for (size_t i = 0; i < LLVM_BLAKE3_OUT_LEN; i++) {
printf("%02x", output[i]);
}
printf("\n");
return 0;
}
```
# API
## The Class/Struct
```c++
class BLAKE3 {
// API
private:
llvm_blake3_hasher Hasher;
};
```
```c
typedef struct {
// private fields
} llvm_blake3_hasher;
```
An incremental BLAKE3 hashing state, which can accept any number of
updates. This implementation doesn't allocate any heap memory, but
`sizeof(llvm_blake3_hasher)` itself is relatively large, currently 1912 bytes
on x86-64. This size can be reduced by restricting the maximum input
length, as described in Section 5.4 of [the BLAKE3
spec](https://github.com/BLAKE3-team/BLAKE3-specs/blob/master/blake3.pdf),
but this implementation doesn't currently support that strategy.
## Common API Functions
```c++
BLAKE3::BLAKE3();
void BLAKE3::init();
```
```c
void llvm_blake3_hasher_init(
llvm_blake3_hasher *self);
```
Initialize a `llvm_blake3_hasher` in the default hashing mode.
---
```c++
void BLAKE3::update(ArrayRef<uint8_t> Data);
void BLAKE3::update(StringRef Str);
```
```c
void llvm_blake3_hasher_update(
llvm_blake3_hasher *self,
const void *input,
size_t input_len);
```
Add input to the hasher. This can be called any number of times.
---
```c++
template <size_t NumBytes = LLVM_BLAKE3_OUT_LEN>
using BLAKE3Result = std::array<uint8_t, NumBytes>;
template <size_t NumBytes = LLVM_BLAKE3_OUT_LEN>
void BLAKE3::final(BLAKE3Result<NumBytes> &Result);
template <size_t NumBytes = LLVM_BLAKE3_OUT_LEN>
BLAKE3Result<NumBytes> BLAKE3::final();
```
```c
void llvm_blake3_hasher_finalize(
const llvm_blake3_hasher *self,
uint8_t *out,
size_t out_len);
```
Finalize the hasher and return an output of any length, given in bytes.
This doesn't modify the hasher itself, and it's possible to finalize
again after adding more input. The constant `LLVM_BLAKE3_OUT_LEN` provides
the default output length, 32 bytes, which is recommended for most
callers.
Outputs shorter than the default length of 32 bytes (256 bits) provide
less security. An N-bit BLAKE3 output is intended to provide N bits of
first and second preimage resistance and N/2 bits of collision
resistance, for any N up to 256. Longer outputs don't provide any
additional security.
Shorter BLAKE3 outputs are prefixes of longer ones. Explicitly
requesting a short output is equivalent to truncating the default-length
output. (Note that this differs from BLAKE2, where the output length is a
parameter of the hash, so outputs of different lengths are unrelated.)
## Less Common API Functions
```c
void llvm_blake3_hasher_init_keyed(
llvm_blake3_hasher *self,
const uint8_t key[LLVM_BLAKE3_KEY_LEN]);
```
Initialize a `llvm_blake3_hasher` in the keyed hashing mode. The key must be
exactly 32 bytes.
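A hedged MAC-style sketch (the key and message are placeholders; use a real 32-byte secret in practice):

```c
#include "llvm-c/blake3.h"
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main() {
  uint8_t key[LLVM_BLAKE3_KEY_LEN] = {0}; /* placeholder key */
  const char *msg = "some message";

  llvm_blake3_hasher hasher;
  llvm_blake3_hasher_init_keyed(&hasher, key);
  llvm_blake3_hasher_update(&hasher, msg, strlen(msg));

  uint8_t mac[LLVM_BLAKE3_OUT_LEN];
  llvm_blake3_hasher_finalize(&hasher, mac, LLVM_BLAKE3_OUT_LEN);

  for (size_t i = 0; i < LLVM_BLAKE3_OUT_LEN; i++)
    printf("%02x", mac[i]);
  printf("\n");
  return 0;
}
```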
---
```c
void llvm_blake3_hasher_init_derive_key(
llvm_blake3_hasher *self,
const char *context);
```
Initialize a `llvm_blake3_hasher` in the key derivation mode. The context
string is given as an initialization parameter, and afterwards input key
material should be given with `llvm_blake3_hasher_update`. The context string
is a null-terminated C string which should be **hardcoded, globally
unique, and application-specific**. The context string should not
include any dynamic input like salts, nonces, or identifiers read from a
database at runtime. A good default format for the context string is
`"[application] [commit timestamp] [purpose]"`, e.g., `"example.com
2019-12-25 16:18:03 session tokens v1"`.
This function is intended for application code written in C. For
language bindings, see `llvm_blake3_hasher_init_derive_key_raw` below.
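A hedged usage sketch (the context string follows the recommended format; the key-material buffer is a placeholder):

```c
#include "llvm-c/blake3.h"
#include <stddef.h>
#include <stdint.h>

/* Derive a subkey from input key material (ikm). */
void derive_session_key(const uint8_t *ikm, size_t ikm_len,
                        uint8_t out[LLVM_BLAKE3_OUT_LEN]) {
  llvm_blake3_hasher hasher;
  llvm_blake3_hasher_init_derive_key(
      &hasher, "example.com 2019-12-25 16:18:03 session tokens v1");
  llvm_blake3_hasher_update(&hasher, ikm, ikm_len);
  llvm_blake3_hasher_finalize(&hasher, out, LLVM_BLAKE3_OUT_LEN);
}
```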
---
```c
void llvm_blake3_hasher_init_derive_key_raw(
llvm_blake3_hasher *self,
const void *context,
size_t context_len);
```
As `llvm_blake3_hasher_init_derive_key` above, except that the context string
is given as a pointer to an array of arbitrary bytes with a provided
length. This is intended for writing language bindings, where C string
conversion would add unnecessary overhead and new error cases. Unicode
strings should be encoded as UTF-8.
Application code in C should prefer `llvm_blake3_hasher_init_derive_key`,
which takes the context as a C string. If you need to use arbitrary
bytes as a context string in application code, consider whether you're
violating the requirement that context strings should be hardcoded.
---
```c
void llvm_blake3_hasher_finalize_seek(
const llvm_blake3_hasher *self,
uint64_t seek,
uint8_t *out,
size_t out_len);
```
The same as `llvm_blake3_hasher_finalize`, but with an additional `seek`
parameter for the starting byte position in the output stream. To
efficiently stream a large output without allocating memory, call this
function in a loop, incrementing `seek` by the output length each time.
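For instance, streaming output in fixed-size chunks might look like this (a hedged sketch; `consume` is a placeholder sink, and `total` is assumed to be a multiple of the chunk size):

```c
#include "llvm-c/blake3.h"
#include <stddef.h>
#include <stdint.h>

void consume(const uint8_t *buf, size_t len); /* placeholder sink */

void stream_output(const llvm_blake3_hasher *hasher, uint64_t total) {
  uint8_t chunk[64];
  for (uint64_t seek = 0; seek < total; seek += sizeof(chunk)) {
    llvm_blake3_hasher_finalize_seek(hasher, seek, chunk, sizeof(chunk));
    consume(chunk, sizeof(chunk));
  }
}
```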
---
```c
void llvm_blake3_hasher_reset(
llvm_blake3_hasher *self);
```
Reset the hasher to its initial state, prior to any calls to
`llvm_blake3_hasher_update`. Currently this is no different from calling
`llvm_blake3_hasher_init` or similar again. However, if this implementation gains
multithreading support in the future, and if `llvm_blake3_hasher` holds (optional)
threading resources, this function will reuse those resources.
# Building
This implementation is just C and assembly files.
## x86
Dynamic dispatch is enabled by default on x86. The implementation will
query the CPU at runtime to detect SIMD support, and it will use the
widest instruction set available. By default, `blake3_dispatch.c`
expects to be linked with code for five different instruction sets:
portable C, SSE2, SSE4.1, AVX2, and AVX-512.
For each of the x86 SIMD instruction sets, four versions are available:
three flavors of assembly (Unix, Windows MSVC, and Windows GNU) and one
version using C intrinsics. The assembly versions are generally
preferred. They perform better, they perform more consistently across
different compilers, and they build more quickly. On the other hand, the
assembly versions are x86\_64-only, and you need to select the right
flavor for your target platform.
## ARM NEON
The NEON implementation is enabled by default on AArch64, but not on
other ARM targets, since not all of them support it. To enable it, set
`BLAKE3_USE_NEON=1`.
To explicitly disable NEON instructions on AArch64, set
`BLAKE3_USE_NEON=0`.
## Other Platforms
The portable implementation should work on most other architectures.
# Multithreading
The implementation doesn't currently support multithreading. | {
"source": "yandex/perforator",
"title": "contrib/libs/llvm18/lib/Support/BLAKE3/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/llvm18/lib/Support/BLAKE3/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 8248
} |
# How to use this folder
Edit the existing config files. These files are passed into the `tpl` function, allowing for templating from values.
Note: the `storage.host` field should always be a template, because the storage hostname depends on the release name. | {
"source": "yandex/perforator",
"title": "perforator/deploy/kubernetes/helm/perforator/config/README.md",
"url": "https://github.com/yandex/perforator/blob/main/perforator/deploy/kubernetes/helm/perforator/config/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 240
} |
# testdata
The private key used in these tests is of the correct format, but does not
really allow access to any cloud project. DO NOT put any real credentials in
this folder, it is strictly for use in unit tests. | {
"source": "yandex/perforator",
"title": "vendor/cloud.google.com/go/auth/internal/testdata/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/cloud.google.com/go/auth/internal/testdata/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 214
} |
# YAML reader for CUE
This yaml parser is a heavily modified version of Canonical's go-yaml parser,
which in turn is a port of the [libyaml](http://pyyaml.org/wiki/LibYAML) parser.
License
-------
The yaml package is licensed under the Apache License 2.0. Please see the LICENSE file for details. | {
"source": "yandex/perforator",
"title": "vendor/cuelang.org/go/internal/third_party/yaml/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/cuelang.org/go/internal/third_party/yaml/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 300
} |
bpf2go
===
`bpf2go` compiles a C source file into eBPF bytecode and then emits a
Go file containing the eBPF. The goal is to avoid loading the
eBPF from disk at runtime and to minimise the amount of manual
work required to interact with eBPF programs. It takes inspiration
from `bpftool gen skeleton`.
Invoke the program using go generate:
//go:generate go run github.com/cilium/ebpf/cmd/bpf2go foo path/to/src.c -- -I/path/to/include
This will emit `foo_bpfel.go` and `foo_bpfeb.go`, with types using `foo`
as a stem. The two files contain compiled BPF for little and big
endian systems, respectively.
## Environment Variables
You can use environment variables to affect all bpf2go invocations
across a project, e.g. to set specific C flags:
BPF2GO_FLAGS="-O2 -g -Wall -Werror $(CFLAGS)" go generate ./...
Alternatively, by exporting `$BPF2GO_FLAGS` from your build system, you can
control all builds from a single location.
Most bpf2go arguments can be controlled this way. See `bpf2go -h` for an
up-to-date list.
## Generated types
`bpf2go` generates Go types for all map keys and values by default. You can
disable this behaviour using `-no-global-types`. You can add to the set of
types by specifying `-type foo` for each type you'd like to generate.
## Examples
See [examples/kprobe](../../examples/kprobe/main.go) for a fully worked out example. | {
"source": "yandex/perforator",
"title": "vendor/github.com/cilium/ebpf/cmd/bpf2go/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/cilium/ebpf/cmd/bpf2go/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1373
} |
CRDB
====
`crdb` is a wrapper around the logic for issuing SQL transactions which performs
retries (as required by CockroachDB).
Note that unfortunately there is no generic way of extracting a pg error code;
the library has to recognize driver-dependent error types. We currently use
the `SQLState() string` method that is implemented in both
[`github.com/lib/pq`](https://github.com/lib/pq), since version 1.10.6, and
[`github.com/jackc/pgx`](https://github.com/jackc/pgx) when used in database/sql
driver mode.
Subpackages provide support for [gorm](https://github.com/go-gorm/gorm), [pgx](https://github.com/jackc/pgx), and [sqlx](https://github.com/jmoiron/sqlx) used in standalone-library mode.
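A minimal sketch in database/sql driver mode (the `accounts` table and the transfer logic are hypothetical):

```go
package main

import (
	"context"
	"database/sql"

	"github.com/cockroachdb/cockroach-go/v2/crdb"
)

// transfer moves an amount between two accounts; crdb.ExecuteTx retries
// the whole closure when CockroachDB reports a retryable error.
func transfer(ctx context.Context, db *sql.DB, from, to, amount int) error {
	return crdb.ExecuteTx(ctx, db, nil, func(tx *sql.Tx) error {
		if _, err := tx.ExecContext(ctx,
			"UPDATE accounts SET balance = balance - $1 WHERE id = $2",
			amount, from); err != nil {
			return err
		}
		_, err := tx.ExecContext(ctx,
			"UPDATE accounts SET balance = balance + $1 WHERE id = $2",
			amount, to)
		return err
	})
}
```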
Note for developers: if you make any changes here (especially if they modify public
APIs), please verify that the code in https://github.com/cockroachdb/examples-go
still works and update as necessary. | {
"source": "yandex/perforator",
"title": "vendor/github.com/cockroachdb/cockroach-go/v2/crdb/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/cockroachdb/cockroach-go/v2/crdb/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 905
} |
# Deprecated
Use [cmd/migrate](../cmd/migrate) instead | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/cli/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/cli/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 55
} |
**This directory contains the implementation of S2A v2's gRPC-Go client libraries.** | {
"source": "yandex/perforator",
"title": "vendor/github.com/google/s2a-go/internal/v2/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/google/s2a-go/internal/v2/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 81
} |
# pgconn
Package pgconn is a low-level PostgreSQL database driver. It operates at nearly the same level as the C library libpq.
It is primarily intended to serve as the foundation for higher level libraries such as https://github.com/jackc/pgx.
Applications should handle normal queries with a higher level library and only use pgconn directly when required for
low-level access to PostgreSQL functionality.
## Example Usage
```go
pgConn, err := pgconn.Connect(context.Background(), os.Getenv("DATABASE_URL"))
if err != nil {
log.Fatalln("pgconn failed to connect:", err)
}
defer pgConn.Close(context.Background())
result := pgConn.ExecParams(context.Background(), "SELECT email FROM users WHERE id=$1", [][]byte{[]byte("123")}, nil, nil, nil)
for result.NextRow() {
fmt.Println("User 123 has email:", string(result.Values()[0]))
}
_, err = result.Close()
if err != nil {
log.Fatalln("failed reading result:", err)
}
```
## Testing
See CONTRIBUTING.md for setup instructions. | {
"source": "yandex/perforator",
"title": "vendor/github.com/jackc/pgx/v5/pgconn/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/jackc/pgx/v5/pgconn/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 984
} |
# pgproto3
Package pgproto3 is an encoder and decoder of the PostgreSQL wire protocol version 3.
pgproto3 can be used as a foundation for PostgreSQL drivers, proxies, mock servers, load balancers and more.
See example/pgfortune for a playful example of a fake PostgreSQL server. | {
"source": "yandex/perforator",
"title": "vendor/github.com/jackc/pgx/v5/pgproto3/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/jackc/pgx/v5/pgproto3/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 281
} |
# Test Setup
This directory contains miscellaneous files used to set up a test database. | {
"source": "yandex/perforator",
"title": "vendor/github.com/jackc/pgx/v5/testsetup/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/jackc/pgx/v5/testsetup/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 88
} |
# orb/clip/smartclip [](https://pkg.go.dev/github.com/paulmach/orb/clip/smartclip)
This package extends the clip functionality to handle partial 2D geometries. The input polygon
rings need only intersect the bound. The algorithm uses that, plus the orientation, to
wrap/close the rings around the edge of the bound.
The use case is [OSM multipolygon relations](https://wiki.openstreetmap.org/wiki/Relation#Multipolygon)
where a ring (inner or outer) contains multiple ways but only one is in the current viewport.
With only the ways intersecting the viewport and their orientation the correct shape can be drawn.
## Example
```go
bound := orb.Bound{Min: orb.Point{1, 1}, Max: orb.Point{10, 10}}
// a partial ring cutting the bound down the middle.
ring := orb.Ring{{0, 0}, {11, 11}}
clipped := smartclip.Ring(bound, ring, orb.CCW)
// clipped is a multipolygon with one ring that wraps counter-clockwise
// around the top triangle of the box
// [[[[1 1] [10 10] [5.5 10] [1 10] [1 5.5] [1 1]]]]
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/clip/smartclip/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/clip/smartclip/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1076
} |
# encoding/ewkb [](https://pkg.go.dev/github.com/paulmach/orb/encoding/ewkb)
This package provides encoding and decoding of [extended WKB](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry#Format_variations)
data. This format includes the [SRID](https://en.wikipedia.org/wiki/Spatial_reference_system) in the data.
If the SRID is not needed use the [wkb](../wkb) package for a simpler interface.
The interface is defined as:
```go
func Marshal(geom orb.Geometry, srid int, byteOrder ...binary.ByteOrder) ([]byte, error)
func MarshalToHex(geom orb.Geometry, srid int, byteOrder ...binary.ByteOrder) (string, error)
func MustMarshal(geom orb.Geometry, srid int, byteOrder ...binary.ByteOrder) []byte
func MustMarshalToHex(geom orb.Geometry, srid int, byteOrder ...binary.ByteOrder) string
func NewEncoder(w io.Writer) *Encoder
func (e *Encoder) SetByteOrder(bo binary.ByteOrder) *Encoder
func (e *Encoder) SetSRID(srid int) *Encoder
func (e *Encoder) Encode(geom orb.Geometry) error
func Unmarshal(b []byte) (orb.Geometry, int, error)
func NewDecoder(r io.Reader) *Decoder
func (d *Decoder) Decode() (orb.Geometry, int, error)
```
## Inserting geometry into a database
Depending on the database different formats and functions are supported.
### PostgreSQL and PostGIS
PostGIS stores geometry as EWKB internally. As a result it can be inserted without
a wrapper function.
```go
db.Exec("INSERT INTO geodata(geom) VALUES (ST_GeomFromEWKB($1))", ewkb.Value(coord, 4326))
db.Exec("INSERT INTO geodata(geom) VALUES ($1)", ewkb.Value(coord, 4326))
```
### MySQL/MariaDB
MySQL and MariaDB
[store geometry](https://dev.mysql.com/doc/refman/5.7/en/gis-data-formats.html)
data in WKB format with a 4 byte SRID prefix.
```go
coord := orb.Point{1, 2}
// as WKB in hex format
data := wkb.MustMarshalToHex(coord)
db.Exec("INSERT INTO geodata(geom) VALUES (ST_GeomFromWKB(UNHEX(?), 4326))", data)
// relying on the raw encoding
db.Exec("INSERT INTO geodata(geom) VALUES (?)", ewkb.ValuePrefixSRID(coord, 4326))
```
## Reading geometry from a database query
As stated above, different databases support different formats and functions.
### PostgreSQL and PostGIS
When working with PostGIS the raw format is EWKB, so the wrapper function is not necessary.
```go
// both of these queries return the same data
row := db.QueryRow("SELECT ST_AsEWKB(geom) FROM geodata")
row := db.QueryRow("SELECT geom FROM geodata")
// if you don't need the SRID
p := orb.Point{}
err := row.Scan(ewkb.Scanner(&p))
log.Printf("geom: %v", p)
// if you need the SRID
p := orb.Point{}
gs := ewkb.Scanner(&p)
err := row.Scan(gs)
log.Printf("srid: %v", gs.SRID)
log.Printf("geom: %v", gs.Geometry)
log.Printf("also geom: %v", p)
```
### MySQL/MariaDB
```go
// using the ST_AsBinary function
row := db.QueryRow("SELECT st_srid(geom), ST_AsBinary(geom) FROM geodata")
row.Scan(&srid, ewkb.Scanner(&data))
// relying on the raw encoding
row := db.QueryRow("SELECT geom FROM geodata")
// if you don't need the SRID
p := orb.Point{}
err := row.Scan(ewkb.ScannerPrefixSRID(&p))
log.Printf("geom: %v", p)
// if you need the SRID
p := orb.Point{}
gs := ewkb.ScannerPrefixSRID(&p)
err := row.Scan(gs)
log.Printf("srid: %v", gs.SRID)
log.Printf("geom: %v", gs.Geometry)
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/encoding/ewkb/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/encoding/ewkb/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 3340
} |
# encoding/mvt [](https://pkg.go.dev/github.com/paulmach/orb/encoding/mvt)
Package mvt provides functions for encoding and decoding
[Mapbox Vector Tiles](https://www.mapbox.com/vector-tiles/specification/).
The interface is defined as:
```go
type Layer struct {
Name string
Version uint32
Extent uint32
Features []*geojson.Feature
}
func MarshalGzipped(layers Layers) ([]byte, error)
func Marshal(layers Layers) ([]byte, error)
func UnmarshalGzipped(data []byte) (Layers, error)
func Unmarshal(data []byte) (Layers, error)
```
These functions decode the geometry and leave it in "tile coordinates".
To project it to and from WGS84 (standard lon/lat) use:
```go
func (l Layer) ProjectToTile(tile maptile.Tile)
func (l Layer) ProjectToWGS84(tile maptile.Tile)
```
## Version 1 vs. Version 2
There is no data format difference between v1 and v2. The difference is that v2 requires geometries
to be simple/clean, e.g. lines that are not self-intersecting and polygons that are encoded in the correct winding order.
This library does not do anything to validate the geometry; it just encodes what you give it, so it defaults to v1.
I've seen comments from Mapbox about this, and they only want you to claim your library is a v2 encoder if it does cleanup/validation.
However, if you know your geometry is simple/clean, you can change the [layer version](https://pkg.go.dev/github.com/paulmach/orb/encoding/mvt#Layer) manually.
## Encoding example
```go
// Start with a set of feature collections defining each layer in lon/lat (WGS84).
collections := map[string]*geojson.FeatureCollection{}
// Convert to a layers object and project to tile coordinates.
layers := mvt.NewLayers(collections)
layers.ProjectToTile(maptile.New(x, y, z))
// In order to be used as a source for MapboxGL, geometries need to be clipped
// to the max allowed extent. (uncomment the next line)
// layers.Clip(mvt.MapboxGLDefaultExtentBound)
// Simplify the geometry now that it's in the tile coordinate space.
layers.Simplify(simplify.DouglasPeucker(1.0))
// Depending on the use case, remove empty geometry and geometry too small to
// be represented in this tile space.
// In this case lines shorter than 1, and areas smaller than 1.
layers.RemoveEmpty(1.0, 1.0)
// Encode using the Mapbox Vector Tile protobuf encoding.
data, err := mvt.Marshal(layers) // this data is NOT gzipped.
// Sometimes MVT data is stored and transferred gzip compressed. In that case:
data, err := mvt.MarshalGzipped(layers)
```
## Feature IDs
Since GeoJSON ids can be any number or string they won't necessarily map to vector tile uint64 ids.
This is a common incompatibility between the two types.
During marshaling the code tries to convert the geojson.Feature.ID to a positive integer, possibly parsing a string.
If the number is negative, the id is omitted. If the number is a positive decimal, it is truncated.
For unmarshaling the id will be converted into a float64 to be consistent with how
the encoding/json package decodes numbers.
## Geometry Collections
GeoJSON geometry collections are flattened and their features are encoded individually.
As a result, the "collection" information is lost when encoding, and there could be more
features in the output (mvt) than in the input (GeoJSON). | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/encoding/mvt/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/encoding/mvt/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 3340
} |
# encoding/wkb [](https://pkg.go.dev/github.com/paulmach/orb/encoding/wkb)
This package provides encoding and decoding of [WKB](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry#Well-known_binary)
data. The interface is defined as:
```go
func Marshal(geom orb.Geometry, byteOrder ...binary.ByteOrder) ([]byte, error)
func MarshalToHex(geom orb.Geometry, byteOrder ...binary.ByteOrder) (string, error)
func MustMarshal(geom orb.Geometry, byteOrder ...binary.ByteOrder) []byte
func MustMarshalToHex(geom orb.Geometry, byteOrder ...binary.ByteOrder) string
func NewEncoder(w io.Writer) *Encoder
func (e *Encoder) SetByteOrder(bo binary.ByteOrder)
func (e *Encoder) Encode(geom orb.Geometry) error
func Unmarshal(b []byte) (orb.Geometry, error)
func NewDecoder(r io.Reader) *Decoder
func (d *Decoder) Decode() (orb.Geometry, error)
```
## Reading and Writing to a SQL database
This package provides wrappers for `orb.Geometry` types that implement
`sql.Scanner` and `driver.Value`. For example:
```go
row := db.QueryRow("SELECT ST_AsBinary(point_column) FROM postgis_table")
var p orb.Point
err := row.Scan(wkb.Scanner(&p))
db.Exec("INSERT INTO table (point_column) VALUES (?)", wkb.Value(p))
```
The column can also be wrapped in `ST_AsEWKB`. The SRID will be ignored.
If you don't know the type of the geometry try something like
```go
s := wkb.Scanner(nil)
err := row.Scan(&s)
switch g := s.Geometry.(type) {
case orb.Point:
case orb.LineString:
}
```
Scanning directly from MySQL columns is supported. By default MySQL returns geometry
data as WKB but prefixed with a 4 byte SRID. To support this, if the data is not
valid WKB, the code will strip the first 4 bytes, the SRID, and try again.
This works for most use cases. | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/encoding/wkb/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/encoding/wkb/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1828
} |
# encoding/wkt [](https://pkg.go.dev/github.com/paulmach/orb/encoding/wkt)
This package provides encoding and decoding of [WKT](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry)
data. The interface is defined as:
```go
func MarshalString(orb.Geometry) string
func Unmarshal(string) (orb.Geometry, error)
func UnmarshalPoint(string) (orb.Point, error)
func UnmarshalMultiPoint(string) (orb.MultiPoint, error)
func UnmarshalLineString(string) (orb.LineString, error)
func UnmarshalMultiLineString(string) (orb.MultiLineString, error)
func UnmarshalPolygon(string) (orb.Polygon, error)
func UnmarshalMultiPolygon(string) (orb.MultiPolygon, error)
func UnmarshalCollection(string) (orb.Collection, error)
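
// A short usage sketch (hedged; error handling is elided, and the exact
// output string follows the WKT spec):
//
//	s := wkt.MarshalString(orb.Point{1, 2}) // "POINT(1 2)"
//	g, _ := wkt.Unmarshal("LINESTRING(0 0,1 1)")
//	ls := g.(orb.LineString)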
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/encoding/wkt/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/encoding/wkt/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 827
} |
# orb/maptile/tilecover [](https://pkg.go.dev/github.com/paulmach/orb/maptile/tilecover)
Package `tilecover` computes the covering set of tiles for an `orb.Geometry`.
It is a port of the Node.js library [tile-cover](https://github.com/mapbox/tile-cover),
which is itself a port from Google's S2 library. The same set of tests pass.
## Usage
```go
poly := orb.Polygon{}
tiles, err := tilecover.Geometry(poly, zoom)
if err != nil {
// indicates a non-closed ring
}
for t := range tiles {
// do something with tile
}
// to merge up to as much as possible to a specific zoom
tiles = tilecover.MergeUp(tiles, 0)
```
## Similar libraries in other languages:
- [tilecover](https://github.com/mapbox/tile-cover) - Node | {
"source": "yandex/perforator",
"title": "vendor/github.com/paulmach/orb/maptile/tilecover/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/paulmach/orb/maptile/tilecover/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 785
} |
# OpenTracing Migration
[](https://pkg.go.dev/go.opentelemetry.io/otel/bridge/opentracing/migration) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/bridge/opentracing/migration/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/bridge/opentracing/migration/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 192
} |
# OTLP Trace Exporter
[](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 182
} |