# Financial Analyst with CrewAI and DeepSeek using SambaNova
This project implements a Financial Analyst with CrewAI and DeepSeek using SambaNova.
- [SambaNova](https://fnf.dev/4jH8edk) is used as the inference engine to run the DeepSeek model.
- CrewAI is used to analyze the user query and generate a summary.
- Streamlit is used to create a web interface for the project.
---
## Setup and installations
**Get SambaNova API Key**:
- Go to [SambaNova](https://fnf.dev/4jH8edk) and sign up for an account.
- Once you have an account, go to the API Key page and copy your API key.
- Paste your API key by creating a `.env` file as shown below:
```
SAMBANOVA_API_KEY=your_api_key
```
**Install Dependencies**:
Ensure you have Python 3.11 or later installed.
```bash
pip install streamlit openai crewai crewai-tools
```
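Once the key is in place, the DeepSeek model served by SambaNova can be wired into CrewAI roughly as sketched below. This is a minimal, illustrative sketch: the model identifier, agent, and task definitions are assumptions, not taken from this project's source.
```python
# Minimal sketch: a CrewAI agent backed by DeepSeek served on SambaNova.
# Assumes SAMBANOVA_API_KEY is set in the environment (e.g. exported from .env);
# the model string below is an assumption — check SambaNova's model catalog.
import os
from crewai import Agent, Task, Crew, LLM

llm = LLM(
    model="sambanova/DeepSeek-R1-Distill-Llama-70B",
    api_key=os.environ["SAMBANOVA_API_KEY"],
)

analyst = Agent(
    role="Financial Analyst",
    goal="Analyze the user's query and produce a concise summary",
    backstory="An experienced market analyst.",
    llm=llm,
)

task = Task(
    description="Summarize the outlook for {query}.",
    expected_output="A short, well-structured summary.",
    agent=analyst,
)

crew = Crew(agents=[analyst], tasks=[task])
print(crew.kickoff(inputs={"query": "NVDA"}))
```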
---
---
## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)
[](https://join.dailydoseofds.com)
---
## Contribution
Contributions are welcome! Please fork the repository and submit a pull request with your improvements. | {
"source": "patchy631/ai-engineering-hub",
"title": "financial-analyst-deepseek/README.md",
"url": "https://github.com/patchy631/ai-engineering-hub/blob/main/financial-analyst-deepseek/README.md",
"date": "2024-10-21T10:43:24",
"stars": 2930,
"description": null,
"file_size": 1418
} |
# 100% local RAG app to chat with GitHub!
This project leverages GitIngest to parse a GitHub repo into markdown format and then uses LlamaIndex for RAG orchestration over it.
## Installation and setup
**Install Dependencies**:
Ensure you have Python 3.11 or later installed.
```bash
pip install gitingest llama-index llama-index-llms-ollama llama-index-agent-openai llama-index-llms-openai --upgrade --quiet
```
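For orientation, here is a rough sketch of how the GitIngest and LlamaIndex pieces can fit together. The repo URL and Ollama model name below are illustrative assumptions; the app's own code may differ.
```python
# Sketch: parse a GitHub repo with GitIngest, then build a RAG index over it.
from gitingest import ingest
from llama_index.core import Document, Settings, VectorStoreIndex
from llama_index.llms.ollama import Ollama

# GitIngest returns a summary, a directory tree, and the flattened file contents.
summary, tree, content = ingest("https://github.com/patchy631/ai-engineering-hub")

# Local LLM served by Ollama (model name is an assumption).
Settings.llm = Ollama(model="llama3.2", request_timeout=120.0)
# Note: also point Settings.embed_model at a local embedding model,
# otherwise LlamaIndex falls back to OpenAI embeddings by default.

index = VectorStoreIndex.from_documents([Document(text=content)])
print(index.as_query_engine().query("What does this repository contain?"))
```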
**Running**:
Make sure the Ollama server is running, then start the Streamlit application with ```streamlit run app_local.py```.
---
## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)
[](https://join.dailydoseofds.com)
---
## Contribution
Contributions are welcome! Please fork the repository and submit a pull request with your improvements. | {
"source": "patchy631/ai-engineering-hub",
"title": "github-rag/README.md",
"url": "https://github.com/patchy631/ai-engineering-hub/blob/main/github-rag/README.md",
"date": "2024-10-21T10:43:24",
"stars": 2930,
"description": null,
"file_size": 1157
} |
# Image-gen and multimodal QA app ft. DeepSeek Janus-Pro
This project leverages DeepSeek Janus-Pro 7B and Streamlit to create a 100% locally running image-gen and multimodal QA app.
## Installation and setup
**Install Dependencies**:
Ensure you have Python 3.11 or later installed.
```bash
git clone https://github.com/deepseek-ai/Janus.git
cd Janus
pip install -e .
pip install flash-attn
pip install streamlit
```
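After installation, loading the model generally follows the upstream Janus examples. The sketch below is loosely adapted from them; the exact class names and model id are assumptions and should be checked against the cloned repo.
```python
# Rough sketch (unverified against this project): load Janus-Pro for inference.
import torch
from transformers import AutoModelForCausalLM
from janus.models import MultiModalityCausalLM, VLChatProcessor  # from the cloned Janus repo

model_path = "deepseek-ai/Janus-Pro-7B"  # assumed Hugging Face model id
processor = VLChatProcessor.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True, torch_dtype=torch.bfloat16
).cuda().eval()
```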
---
## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)
[](https://join.dailydoseofds.com)
---
## Contribution
Contributions are welcome! Please fork the repository and submit a pull request with your improvements. | {
"source": "patchy631/ai-engineering-hub",
"title": "imagegen-janus-pro/README.md",
"url": "https://github.com/patchy631/ai-engineering-hub/blob/main/imagegen-janus-pro/README.md",
"date": "2024-10-21T10:43:24",
"stars": 2930,
"description": null,
"file_size": 1002
} |
# Llama3.2-OCR
This project leverages Llama 3.2 vision and Streamlit to create a 100% locally running OCR app.
## Installation and setup
**Setup Ollama**:
```bash
# setup ollama on linux
curl -fsSL https://ollama.com/install.sh | sh
# pull llama 3.2 vision model
ollama run llama3.2-vision
```
**Install Dependencies**:
Ensure you have Python 3.11 or later installed.
```bash
pip install streamlit ollama
```
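The OCR step itself boils down to a single chat request against the vision model via the `ollama` Python client; a minimal sketch (image path and prompt are illustrative, not the app's exact code):
```python
# Minimal sketch: extract text from an image with Llama 3.2 Vision via Ollama.
import ollama

response = ollama.chat(
    model="llama3.2-vision",
    messages=[{
        "role": "user",
        "content": "Extract all readable text from this image.",
        "images": ["sample_receipt.png"],  # illustrative image path
    }],
)
print(response["message"]["content"])
```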
---
## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)
[](https://join.dailydoseofds.com)
---
## Contribution
Contributions are welcome! Please fork the repository and submit a pull request with your improvements. | {
"source": "patchy631/ai-engineering-hub",
"title": "llama-ocr/README.md",
"url": "https://github.com/patchy631/ai-engineering-hub/blob/main/llama-ocr/README.md",
"date": "2024-10-21T10:43:24",
"stars": 2930,
"description": null,
"file_size": 1019
} |
# Local ChatGPT
This project leverages DeepSeek-R1 and Chainlit to create a 100% locally running mini-ChatGPT app.
## Installation and setup
**Setup Ollama**:
```bash
# setup ollama on linux
curl -fsSL https://ollama.com/install.sh | sh
# pull the DeepSeek-R1 model
ollama pull deepseek-r1
```
**Install Dependencies**:
Ensure you have Python 3.11 or later installed.
```bash
pip install pydantic==2.10.1 chainlit ollama
```
**Run the app**:
Run the chainlit app as follows:
```bash
chainlit run app.py -w
```
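For reference, a bare-bones Chainlit handler that streams answers from the local DeepSeek-R1 model might look like the sketch below. This is illustrative only, not the project's actual `app.py`.
```python
# Illustrative app.py sketch: Chainlit UI backed by DeepSeek-R1 via Ollama.
import chainlit as cl
import ollama

@cl.on_message
async def on_message(message: cl.Message):
    stream = ollama.chat(
        model="deepseek-r1",
        messages=[{"role": "user", "content": message.content}],
        stream=True,
    )
    reply = cl.Message(content="")
    for chunk in stream:
        # Stream each generated token back to the UI as it arrives.
        await reply.stream_token(chunk["message"]["content"])
    await reply.send()
```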
## Demo Video
Click below to watch the demo video of the AI Assistant in action:
[Watch the video](deepseek-chatgpt.mp4)
---
## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)
[](https://join.dailydoseofds.com)
---
## Contribution
Contributions are welcome! Please fork the repository and submit a pull request with your improvements. | {
"source": "patchy631/ai-engineering-hub",
"title": "local-chatgpt with DeepSeek/README.md",
"url": "https://github.com/patchy631/ai-engineering-hub/blob/main/local-chatgpt with DeepSeek/README.md",
"date": "2024-10-21T10:43:24",
"stars": 2930,
"description": null,
"file_size": 1258
} |
# Local ChatGPT
This project leverages Llama 3.2 vision and Chainlit to create a 100% locally running ChatGPT app.
## Installation and setup
**Setup Ollama**:
```bash
# setup ollama on linux
curl -fsSL https://ollama.com/install.sh | sh
# pull llama 3.2 vision model
ollama pull llama3.2-vision
```
**Install Dependencies**:
Ensure you have Python 3.11 or later installed.
```bash
pip install pydantic==2.10.1 chainlit ollama
```
**Run the app**:
Run the chainlit app as follows:
```bash
chainlit run app.py -w
```
## Demo Video
Click below to watch the demo video of the AI Assistant in action:
[Watch the video](video-demo.mp4)
---
## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)
[](https://join.dailydoseofds.com)
---
## Contribution
Contributions are welcome! Please fork the repository and submit a pull request with your improvements. | {
"source": "patchy631/ai-engineering-hub",
"title": "local-chatgpt/README.md",
"url": "https://github.com/patchy631/ai-engineering-hub/blob/main/local-chatgpt/README.md",
"date": "2024-10-21T10:43:24",
"stars": 2930,
"description": null,
"file_size": 1257
} |
# Llama3.2-RAG application powered by ModernBERT
This project leverages a locally running Llama 3.2 to build a RAG application to **chat with your docs**, powered by:
- ModernBERT for embeddings.
- Llama 3.2 for the LLM.
- Streamlit to build the UI.
## Demo
Watch the demo video:

## Installation and setup
**Setup Transformers**:
As of now, ModernBERT requires transformers to be installed from the (stable) main branch of the transformers repository. After the next transformers release (4.48.x), it will be supported in the regular PyPI package.
So first, create a new virtual environment.
```bash
python -m venv modernbert-env
source modernbert-env/bin/activate
```
Then, install the latest transformers.
```bash
pip install git+https://github.com/huggingface/transformers
```
**Setup Ollama**:
```bash
# setup ollama on linux
curl -fsSL https://ollama.com/install.sh | sh
# pull llama 3.2
ollama pull llama3.2
```
**Install Dependencies (in the virtual environment)**:
Ensure you have Python 3.11 or later installed.
```bash
pip install streamlit ollama llama-index-llms-ollama llama-index-embeddings-huggingface
```
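For context, wiring ModernBERT embeddings and the Ollama-served Llama 3.2 into LlamaIndex typically looks something like the sketch below. The Hugging Face model id and the documents folder are assumptions; the app's code may differ.
```python
# Sketch: ModernBERT for embeddings, Llama 3.2 (via Ollama) for generation.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

Settings.embed_model = HuggingFaceEmbedding(model_name="answerdotai/ModernBERT-base")
Settings.llm = Ollama(model="llama3.2", request_timeout=120.0)

docs = SimpleDirectoryReader("docs").load_data()  # illustrative folder of your documents
index = VectorStoreIndex.from_documents(docs)
print(index.as_query_engine().query("What are these documents about?"))
```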
## Running the app
Finally, run the app.
```bash
streamlit run rag-modernbert.py
```
---
## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)
[](https://join.dailydoseofds.com)
---
## Contribution
Contributions are welcome! Please fork the repository and submit a pull request with your improvements. | {
"source": "patchy631/ai-engineering-hub",
"title": "modernbert-rag/README.md",
"url": "https://github.com/patchy631/ai-engineering-hub/blob/main/modernbert-rag/README.md",
"date": "2024-10-21T10:43:24",
"stars": 2930,
"description": null,
"file_size": 1878
} |
# Compare Claude 3.7 Sonnet and OpenAI o3 using RAG over code (GitHub).
This project also leverages [CometML Opik](https://github.com/comet-ml/opik) to build an end-to-end evaluation and observability pipeline for a RAG application.
## Installation and setup
**Get API Keys**:
- [Opik API Key](https://www.comet.com/signup)
- [Open AI API Key](https://platform.openai.com/api-keys)
- [Anthropic AI API Key](https://www.anthropic.com/api)
Add these to your `.env` file; refer to ```.env.example```.
**Install Dependencies**:
Ensure you have Python 3.11 or later installed.
```bash
pip install opik llama-index llama-index-agent-openai llama-index-llms-openai llama-index-llms-anthropic --upgrade --quiet
```
**Running the app**:
Run the Streamlit app using ```streamlit run app.py```.
**Running Evaluation**:
You can run the code in notebook ```Opik for LLM evaluation.ipynb```.
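In case it helps, Opik tracing is typically attached by decorating the functions you want to observe. A hedged sketch follows; the decorated function is illustrative, not this project's code.
```python
# Sketch: trace a RAG step with Opik so calls show up in the Comet dashboard.
import opik
from opik import track

opik.configure()  # reads the Opik API key from the environment / local config

@track
def answer_question(question: str) -> str:
    # ... retrieve context and call the LLM here ...
    return "stubbed answer for " + question

answer_question("What does the repository do?")
```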
---
## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)
[](https://join.dailydoseofds.com)
---
## Contribution
Contributions are welcome! Please fork the repository and submit a pull request with your improvements. | {
"source": "patchy631/ai-engineering-hub",
"title": "o3-vs-claude-code/README.md",
"url": "https://github.com/patchy631/ai-engineering-hub/blob/main/o3-vs-claude-code/README.md",
"date": "2024-10-21T10:43:24",
"stars": 2930,
"description": null,
"file_size": 1474
} |
# RAG over Excel sheets
This project leverages LlamaIndex and IBM's Docling for RAG over Excel sheets. You can also use it for PowerPoint decks and other complex documents.
## Installation and setup
**Install Dependencies**:
Ensure you have Python 3.11 or later installed.
```bash
pip install -q --progress-bar off --no-warn-conflicts llama-index-core llama-index-readers-docling llama-index-node-parser-docling llama-index-embeddings-huggingface llama-index-llms-huggingface-api llama-index-readers-file python-dotenv llama-index-llms-ollama
```
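As a rough sketch of how the pieces combine (the file name and query are illustrative; check the project code for the exact setup):
```python
# Sketch: parse an Excel sheet with Docling (via LlamaIndex) and query it.
from llama_index.core import VectorStoreIndex
from llama_index.readers.docling import DoclingReader

reader = DoclingReader()
documents = reader.load_data("quarterly_numbers.xlsx")  # illustrative file

# Embeddings and the LLM come from whatever Settings the app configures
# (e.g. a HuggingFace embedding model plus an Ollama-served LLM).
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("What was revenue in Q2?"))
```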
---
## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)
[](https://join.dailydoseofds.com)
---
## Contribution
Contributions are welcome! Please fork the repository and submit a pull request with your improvements. | {
"source": "patchy631/ai-engineering-hub",
"title": "rag-with-dockling/README.md",
"url": "https://github.com/patchy631/ai-engineering-hub/blob/main/rag-with-dockling/README.md",
"date": "2024-10-21T10:43:24",
"stars": 2930,
"description": null,
"file_size": 1118
} |
# [Realtime Voice Bot](https://blog.dailydoseofds.com/p/assemblyai-voicebot)
This application provides a real-time, conversational travel guide for tourists visiting London, UK. Powered by AssemblyAI, ElevenLabs, and OpenAI, it transcribes your speech, generates AI responses, and plays them back as audio. It serves as a friendly assistant to help plan your trip, providing concise and conversational guidance.
## Demo Video
Click below to watch the demo video of the AI Assistant in action:
[Watch the video](Voicebot%20video.MP4)
## Features
- Real-time speech-to-text transcription using AssemblyAI.
- AI-generated responses using OpenAI's GPT-3.5-Turbo.
- Voice synthesis and playback with ElevenLabs.
## API Key Setup
Before running the application, you need API keys for the following services:
- [Get the API key for AssemblyAI here →](https://www.assemblyai.com/dashboard/signup)
- [Get the API key for OpenAI here →](https://platform.openai.com/api-keys)
- [Get the API key for ElevenLabs here →](https://elevenlabs.io/app/sign-in)
Update the API keys in the code by replacing the placeholders in the `AI_Assistant` class.
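The placeholders generally amount to configuring each SDK's client with its key, along these lines. This is a hedged sketch: variable names are illustrative, and the client constructors should be checked against the SDK versions you have installed.
```python
# Sketch: wiring the three API keys (replace the placeholder strings).
import assemblyai as aai
from openai import OpenAI
from elevenlabs.client import ElevenLabs

aai.settings.api_key = "YOUR_ASSEMBLYAI_API_KEY"
openai_client = OpenAI(api_key="YOUR_OPENAI_API_KEY")
elevenlabs_client = ElevenLabs(api_key="YOUR_ELEVENLABS_API_KEY")
```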
## Run the application
```bash
python app.py
```
---
## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)
[](https://join.dailydoseofds.com)
## Contribution
Contributions are welcome! Please fork the repository and submit a pull request with your improvements. | {
"source": "patchy631/ai-engineering-hub",
"title": "real-time-voicebot/README.md",
"url": "https://github.com/patchy631/ai-engineering-hub/blob/main/real-time-voicebot/README.md",
"date": "2024-10-21T10:43:24",
"stars": 2930,
"description": null,
"file_size": 1758
} |
# Siamese Network
This notebook implements a Siamese Network on the MNIST dataset to detect if two images are of the same digit.
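For readers who want the gist without opening the notebook, a Siamese setup for this task usually looks roughly like the PyTorch sketch below. It is illustrative only; the notebook's actual architecture and loss may differ.
```python
# Hypothetical PyTorch sketch of a Siamese network for MNIST digit-pair matching.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: both images pass through the same weights.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 5 * 5, 128),
        )

    def forward(self, x1, x2):
        e1, e2 = self.encoder(x1), self.encoder(x2)
        # Distance between embeddings: small => same digit, large => different.
        return F.pairwise_distance(e1, e2)

# Example: compare two 28x28 grayscale images.
net = SiameseNetwork()
a, b = torch.randn(1, 1, 28, 28), torch.randn(1, 1, 28, 28)
print(net(a, b))
```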
---
## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)
[](https://join.dailydoseofds.com)
---
## Contribution
Contributions are welcome! Please fork the repository and submit a pull request with your improvements. | {
"source": "patchy631/ai-engineering-hub",
"title": "siamese-network/README.md",
"url": "https://github.com/patchy631/ai-engineering-hub/blob/main/siamese-network/README.md",
"date": "2024-10-21T10:43:24",
"stars": 2930,
"description": null,
"file_size": 704
} |
# Trustworthy RAG over complex documents using TLM and LlamaParse
The project uses Cleanlab's Trustworthy Language Model (TLM), which provides a confidence score and reasoning on the generated output. It also uses [LlamaParse](https://docs.cloud.llamaindex.ai/llamacloud/getting_started/api_key) to parse complex documents into an LLM-ready, clean markdown format.
Before you start, grab your API keys for LlamaParse and TLM:
- [LlamaParse API Key](https://docs.cloud.llamaindex.ai/llamacloud/getting_started/api_key)
- [Cleanlab TLM API Key](https://tlm.cleanlab.ai/)
---
## Setup and installations
**Setup Environment**:
- Paste your API keys into a `.env` file
- Refer to the `.env.example` file
**Install Dependencies**:
Ensure you have Python 3.11 or later installed.
```bash
pip install llama-index-llms-cleanlab llama-index llama-index-embeddings-huggingface
```
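For orientation, plugging TLM and a local embedding model into LlamaIndex can look roughly like this. The `CleanlabTLM` class name, the environment variable name, and the embedding model id are assumptions based on the published integration; verify them against the project code.
```python
# Sketch: LlamaIndex configured with Cleanlab's TLM as the LLM.
import os
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.cleanlab import CleanlabTLM  # assumed integration class

Settings.llm = CleanlabTLM(api_key=os.environ["CLEANLAB_TLM_API_KEY"])  # env var name is an assumption
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")  # illustrative local embedder
```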
**Running the app**:
```bash
streamlit run app.py
```
---
## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)
[](https://join.dailydoseofds.com)
---
## Contribution
Contributions are welcome! Please fork the repository and submit a pull request with your improvements. | {
"source": "patchy631/ai-engineering-hub",
"title": "trustworthy-rag/README.md",
"url": "https://github.com/patchy631/ai-engineering-hub/blob/main/trustworthy-rag/README.md",
"date": "2024-10-21T10:43:24",
"stars": 2930,
"description": null,
"file_size": 1511
} |
# Contributing to Perforator
We always appreciate contributions from the community. Thank you for your interest!
## Reporting bugs and requesting enhancements
We use GitHub Issues for tracking bug reports and feature requests. You can use [this link](https://github.com/yandex/perforator/issues/new) to create a new issue.
Please note that all issues should be in English so that they are accessible to the whole community.
## General discussions
You can use [this link](https://github.com/yandex/perforator/discussions/new/choose) to start a new discussion.
## Contributing patches
We use Pull Requests to receive patches from external contributors.
Each non-trivial pull request should be linked to an issue. Additionally, the issue should have the `accepted` label. This minimizes the risk of the PR being rejected.
### Legal notice to external contributors
#### General info
In order for us (YANDEX LLC) to accept patches and other contributions from you, you will have to adopt our Contributor License Agreement (the “CLA”). You may find the current version of the CLA here:
* https://yandex.ru/legal/cla/?lang=en (in English)
* https://yandex.ru/legal/cla/?lang=ru (in Russian).
By adopting the CLA, you state the following:
* You obviously wish and are willingly licensing your contributions to us for our open source projects under the terms of the CLA,
* You have read the terms and conditions of the CLA and agree with them in full,
* You are legally able to provide and license your contributions as stated,
* We may use your contributions for our open source projects and for any other our project too,
* We rely on your assurances concerning the rights of third parties in relation to your contributions.
If you agree with these principles, please read and adopt our CLA. By providing us your contributions, you hereby declare that you have read and adopted our CLA, and we may freely merge your contributions with our corresponding open source project and use them further in accordance with the terms and conditions of the CLA.
#### Provide contributions
If you have adopted terms and conditions of the CLA, you are able to provide your contributions. When you submit your pull request, please add the following information into it:
```
I hereby agree to the terms of the CLA available at: [link].
```
Replace the bracketed text as follows:
* [link] is a link to the current version of the CLA (you may use https://yandex.ru/legal/cla/?lang=en (in English) or https://yandex.ru/legal/cla/?lang=ru (in Russian)).
It is enough to provide us with such notification once.
## Other questions
If you have any questions, feel free to discuss them in a discussion or an issue.
Alternatively, you may send email to the Yandex Open Source team at [email protected]. | {
"source": "yandex/perforator",
"title": "CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2800
} |
<img width="64" src="docs/_assets/logo.svg" /><br/>
[](https://github.com/yandex/perforator/blob/main/LICENSE)
[](https://github.com/yandex/perforator/tree/main/perforator/agent/collector/progs/unwinder/LICENSE)
[](https://t.me/perforator_ru)
[](https://t.me/perforator_en)
# Perforator
[Documentation](https://perforator.tech/docs/) | [Post on Medium](https://medium.com/yandex/yandexs-high-performance-profiler-is-now-open-source-95e291df9d18) | [Post on Habr](https://habr.com/ru/companies/yandex/articles/875070)
Perforator is a production-ready, open-source Continuous Profiling app that can collect CPU profiles from your production without affecting its performance, made by Yandex and inspired by [Google-Wide Profiling](https://research.google/pubs/google-wide-profiling-a-continuous-profiling-infrastructure-for-data-centers/). Perforator is deployed on tens of thousands of servers at Yandex and has already helped many developers fix performance issues in their services.
## Main features
- Efficient and high-quality collection of kernel + userspace stacks via eBPF.
- Scalable storage for storing profiles and binaries.
- Support of unwinding without frame pointers and debug symbols on host.
- Convenient query language and UI to inspect CPU usage of applications via flamegraphs.
- Support for C++, C, Go, and Rust, with experimental support for Java and Python.
- Generation of sPGO profiles for building applications with Profile Guided Optimization (PGO) via [AutoFDO](https://github.com/google/autofdo).
## Minimal system requirements
Perforator runs on x86-64 Linux platforms, consuming 512 MB of RAM (more on very large hosts with many CPUs) and less than 1% of host CPU.
## Quick start
You can profile your laptop locally using the [perforator record CLI command](https://perforator.tech/docs/en/tutorials/native-profiling).
You can also deploy Perforator on a playground or production Kubernetes cluster using our [Helm chart](https://perforator.tech/docs/en/guides/helm-chart).
## How to build
- Instructions on how to build from source are located [here](https://perforator.tech/docs/en/guides/build).
- If you want to use prebuilt binaries, you can find them [here](https://github.com/yandex/perforator/releases).
## How to Contribute
We welcome contributions! The [contributor's guide](CONTRIBUTING.md) provides more details on how to get started as a contributor.
## License
This project is licensed under the MIT License (MIT). [MIT License](https://github.com/yandex/perforator/tree/main/LICENSE)
The eBPF source code is licensed under the GPL 2.0 license. [GPL 2.0](https://github.com/yandex/perforator/tree/main/perforator/agent/collector/progs/unwinder/LICENSE) | {
"source": "yandex/perforator",
"title": "README.md",
"url": "https://github.com/yandex/perforator/blob/main/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2978
} |
# Coding style
The style guide for the util folder is a stricter version of the
[general style guide](https://docs.yandex-team.ru/arcadia-cpp/cpp_style_guide)
(mostly in terms of ambiguity resolution).
* all `{}` must be in K&R style
* `&` and `*` are tied to the type, not to the variable
* always use `using`, not `typedef`
* `_` at the end of a private data member of a class — `First_`, `Second_`
* every .h file must be accompanied by a corresponding .cpp to avoid leakage and to check that it is self-contained
* `printf`-like functions are prohibited
Things declared in the general style guide, which sometimes are missed:
* `template <`, not `template<`
* `noexcept`, not `throw ()` nor `throw()`, not required for destructors
* indents inside `namespace` same as inside `class`
Requirements for new code in util (and for corrections to old code that change behaviour):
* presence of UNIT-tests
* presence of comments in Doxygen style
* accessors without Get prefix (`Length()`, but not `GetLength()`)
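A small illustrative snippet (not real util code) showing several of the rules above in one place:
```cpp
// Illustrative only: K&R braces, `using`, trailing `_`, accessor without Get.
class TExample {
public:
    using TValue = int; // `using`, not `typedef`

    TValue Length() const noexcept { // accessor without the Get prefix
        return Length_;
    }

private:
    TValue Length_ = 0; // `_` at the end of a private data member
};
```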
Unlike the general style guide, this guide is not mandatory.
Nevertheless, if it is not followed, the next `ya style .` run in the util folder will undeservedly update the authors of some lines of code.
Thus, before committing it is recommended to run `ya style .` in the util folder.
Don't forget to run tests from folder `tests`: `ya make -t tests`
**Note:** tests are designed to run using `autocheck/` solution.
# Submitting a patch
In order to make a commit, you have to get approval from one of
[util](https://arcanum.yandex-team.ru/arc/trunk/arcadia/groups/util) members.
If no comments have been received within 1–2 days, it is OK
to send a graceful ping into [Igni et ferro](https://wiki.yandex-team.ru/ignietferro/) chat.
Certain exceptions apply. The following trivial changes do not need to be reviewed:
* docs, comments, typo fixes,
* renaming of an internal variable to match the styleguide.
Whenever a breaking change happens to accidentally land into trunk, reverting it does not need to be reviewed.
## Stale/abandoned review request policy
Sometimes review requests are neither merged nor discarded, and stay in review request queue forever.
To limit the incoming review request queue size, util reviewers follow these rules:
- A review request is considered stale if it is not updated by its author for at least 3 months, or if its author has left Yandex.
- A stale review request may be marked as discarded by util reviewers.
Review requests discarded as stale may be reopened or resubmitted by any committer willing to push them to completion.
**Note:** It's an author's duty to push the review request to completion.
If util reviewers stop responding to updates, they should be politely pinged via appropriate means of communication. | {
"source": "yandex/perforator",
"title": "util/README.md",
"url": "https://github.com/yandex/perforator/blob/main/util/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2798
} |
# Note
When adding new scripts to this directory, do not forget to specify two things:
1. Explicitly allow importing modules from the current directory, if you need it, using the lines:
```python3
import os.path, sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
```
2. In the commands that invoke the scripts, list all of their dependent modules via `${input:"build/scripts/module_1.py"}`, `${input:"build/scripts/module_2.py"}`, ... | {
"source": "yandex/perforator",
"title": "build/scripts/Readme.md",
"url": "https://github.com/yandex/perforator/blob/main/build/scripts/Readme.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 439
} |
# Perforator
## What is Perforator?
Perforator is a modern profiling tool designed for large data centers. Perforator can be easily deployed onto your Kubernetes cluster to collect performance profiles with negligible overhead. Perforator can also be launched as a standalone replacement for Linux perf without the need to recompile your programs.
The profiler is designed to be as non-invasive as possible, using a beautiful technology called [eBPF](https://ebpf.io). This allows Perforator to profile different languages and runtimes without modification on the build side. Perforator also supports many advanced features such as [sPGO](./guides/autofdo.md) or discriminated profiles for A/B tests.
Perforator is developed by Yandex and used inside Yandex as the main cluster-wide profiling service.
## Quick start
You can start with [tutorial on local usage](./tutorials/native-profiling.md) or delve into [architecture overview](./explanation/architecture/overview.md). Alternatively see a [guide to deploy Perforator on a Kubernetes cluster](guides/helm-chart.md).
## Useful links
- [GitHub repository](https://github.com/yandex/perforator)
- [Documentation](https://perforator.tech/docs)
- [Post on Habr in Russian](https://habr.com/ru/companies/yandex/articles/875070/)
- [Telegram Community chat (RU)](https://t.me/perforator_ru)
- [Telegram Community chat (EN)](https://t.me/perforator_en) | {
"source": "yandex/perforator",
"title": "docs/en/index.md",
"url": "https://github.com/yandex/perforator/blob/main/docs/en/index.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1398
} |
# How to contribute
We'd love to accept your patches and contributions to this project. There are
just a few small guidelines you need to follow.
## Contributor License Agreement
Contributions to this project must be accompanied by a Contributor License
Agreement. You (or your employer) retain the copyright to your contribution,
this simply gives us permission to use and redistribute your contributions as
part of the project. Head over to <https://cla.developers.google.com/> to see
your current agreements on file or to sign a new one.
You generally only need to submit a CLA once, so if you've already submitted one
(even if it was for a different project), you probably don't need to do it
again.
## Code reviews
All submissions, including submissions by project members, require review. We
use GitHub pull requests for this purpose. Consult [GitHub Help] for more
information on using pull requests.
[GitHub Help]: https://help.github.com/articles/about-pull-requests/
## Instructions
Fork the repo, checkout the upstream repo to your GOPATH by:
```
$ go get -d go.opencensus.io
```
Add your fork as an origin:
```
cd $(go env GOPATH)/src/go.opencensus.io
git remote add fork [email protected]:YOUR_GITHUB_USERNAME/opencensus-go.git
```
Run tests:
```
$ make install-tools # Only first time.
$ make
```
Checkout a new branch, make modifications and push the branch to your fork:
```
$ git checkout -b feature
# edit files
$ git commit
$ git push fork feature
```
Open a pull request against the main opencensus-go repo.
## General Notes
This project uses Appveyor and Travis for CI.
The dependencies are managed with `go mod`. If you work with the sources under your
`$GOPATH`, you need to set the environment variable `GO111MODULE=on`. | {
"source": "yandex/perforator",
"title": "vendor/go.opencensus.io/CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opencensus.io/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1759
} |
# OpenCensus Libraries for Go
[![Build Status][travis-image]][travis-url]
[![Windows Build Status][appveyor-image]][appveyor-url]
[![GoDoc][godoc-image]][godoc-url]
[![Gitter chat][gitter-image]][gitter-url]
OpenCensus Go is a Go implementation of OpenCensus, a toolkit for
collecting application performance and behavior monitoring data.
Currently it consists of three major components: tags, stats and tracing.
#### OpenCensus and OpenTracing have merged to form OpenTelemetry, which serves as the next major version of OpenCensus and OpenTracing. OpenTelemetry will offer backwards compatibility with existing OpenCensus integrations, and we will continue to make security patches to existing OpenCensus libraries for two years. Read more about the merger [here](https://medium.com/opentracing/a-roadmap-to-convergence-b074e5815289).
## Installation
```
$ go get -u go.opencensus.io
```
The API of this project is still evolving, see: [Deprecation Policy](#deprecation-policy).
The use of vendoring or a dependency management tool is recommended.
## Prerequisites
OpenCensus Go libraries require Go 1.8 or later.
## Getting Started
The easiest way to get started using OpenCensus in your application is to use an existing
integration with your RPC framework:
* [net/http](https://godoc.org/go.opencensus.io/plugin/ochttp)
* [gRPC](https://godoc.org/go.opencensus.io/plugin/ocgrpc)
* [database/sql](https://godoc.org/github.com/opencensus-integrations/ocsql)
* [Go kit](https://godoc.org/github.com/go-kit/kit/tracing/opencensus)
* [Groupcache](https://godoc.org/github.com/orijtech/groupcache)
* [Caddy webserver](https://godoc.org/github.com/orijtech/caddy)
* [MongoDB](https://godoc.org/github.com/orijtech/mongo-go-driver)
* [Redis gomodule/redigo](https://godoc.org/github.com/orijtech/redigo)
* [Redis goredis/redis](https://godoc.org/github.com/orijtech/redis)
* [Memcache](https://godoc.org/github.com/orijtech/gomemcache)
If you're using a framework not listed here, you could either implement your own middleware for your
framework or use [custom stats](#stats) and [spans](#spans) directly in your application.
## Exporters
OpenCensus can export instrumentation data to various backends.
OpenCensus has exporter implementations for the following; users
can implement their own exporters by implementing the exporter interfaces
([stats](https://godoc.org/go.opencensus.io/stats/view#Exporter),
[trace](https://godoc.org/go.opencensus.io/trace#Exporter)):
* [Prometheus][exporter-prom] for stats
* [OpenZipkin][exporter-zipkin] for traces
* [Stackdriver][exporter-stackdriver] Monitoring for stats and Trace for traces
* [Jaeger][exporter-jaeger] for traces
* [AWS X-Ray][exporter-xray] for traces
* [Datadog][exporter-datadog] for stats and traces
* [Graphite][exporter-graphite] for stats
* [Honeycomb][exporter-honeycomb] for traces
* [New Relic][exporter-newrelic] for stats and traces
## Overview

In a microservices environment, a user request may go through
multiple services until there is a response. OpenCensus allows
you to instrument your services and collect diagnostics data all
through your services end-to-end.
## Tags
Tags represent propagated key-value pairs. They are propagated using `context.Context`
in the same process or can be encoded to be transmitted on the wire. Usually, this will
be handled by an integration plugin, e.g. `ocgrpc.ServerHandler` and `ocgrpc.ClientHandler`
for gRPC.
Package `tag` allows adding or modifying tags in the current context.
[embedmd]:# (internal/readme/tags.go new)
```go
ctx, err := tag.New(ctx,
tag.Insert(osKey, "macOS-10.12.5"),
tag.Upsert(userIDKey, "cde36753ed"),
)
if err != nil {
log.Fatal(err)
}
```
## Stats
OpenCensus is a low-overhead framework even if instrumentation is always enabled.
In order to be so, it is optimized to make recording of data points fast
and separate from the data aggregation.
OpenCensus stats collection happens in two stages:
* Definition of measures and recording of data points
* Definition of views and aggregation of the recorded data
### Recording
Measurements are data points associated with a measure.
Recording implicitly tags the set of Measurements with the tags from the
provided context:
[embedmd]:# (internal/readme/stats.go record)
```go
stats.Record(ctx, videoSize.M(102478))
```
### Views
Views are how Measures are aggregated. You can think of them as queries over the
set of recorded data points (measurements).
Views have two parts: the tags to group by and the aggregation type used.
Currently three types of aggregations are supported:
* CountAggregation is used to count the number of times a sample was recorded.
* DistributionAggregation is used to provide a histogram of the values of the samples.
* SumAggregation is used to sum up all sample values.
[embedmd]:# (internal/readme/stats.go aggs)
```go
distAgg := view.Distribution(1<<32, 2<<32, 3<<32)
countAgg := view.Count()
sumAgg := view.Sum()
```
Here we create a view with the DistributionAggregation over our measure.
[embedmd]:# (internal/readme/stats.go view)
```go
if err := view.Register(&view.View{
Name: "example.com/video_size_distribution",
Description: "distribution of processed video size over time",
Measure: videoSize,
Aggregation: view.Distribution(1<<32, 2<<32, 3<<32),
}); err != nil {
log.Fatalf("Failed to register view: %v", err)
}
```
Register begins collecting data for the view. Registered views' data will be
exported via the registered exporters.
## Traces
A distributed trace tracks the progression of a single user request as
it is handled by the services and processes that make up an application.
Each step is called a span in the trace. Spans include metadata about the step,
including especially the time spent in the step, called the span’s latency.
Below you see a trace and several spans underneath it.

### Spans
Span is the unit step in a trace. Each span has a name, latency, status and
additional metadata.
Below we are starting a span for a cache read and ending it
when we are done:
[embedmd]:# (internal/readme/trace.go startend)
```go
ctx, span := trace.StartSpan(ctx, "cache.Get")
defer span.End()
// Do work to get from cache.
```
### Propagation
Spans can have parents or can be root spans if they don't have any parents.
The current span is propagated in-process and across the network to allow associating
new child spans with the parent.
In the same process, `context.Context` is used to propagate spans.
`trace.StartSpan` creates a new span as a root if the current context
doesn't contain a span. Or, it creates a child of the span that is
already in current context. The returned context can be used to keep
propagating the newly created span in the current context.
[embedmd]:# (internal/readme/trace.go startend)
```go
ctx, span := trace.StartSpan(ctx, "cache.Get")
defer span.End()
// Do work to get from cache.
```
Across the network, OpenCensus provides different propagation
methods for different protocols.
* gRPC integrations use the OpenCensus' [binary propagation format](https://godoc.org/go.opencensus.io/trace/propagation).
* HTTP integrations use Zipkin's [B3](https://github.com/openzipkin/b3-propagation)
by default but can be configured to use a custom propagation method by setting another
[propagation.HTTPFormat](https://godoc.org/go.opencensus.io/trace/propagation#HTTPFormat).
## Execution Tracer
With Go 1.11, OpenCensus Go will support integration with the Go execution tracer.
See [Debugging Latency in Go](https://medium.com/observability/debugging-latency-in-go-1-11-9f97a7910d68)
for an example of their mutual use.
## Profiles
OpenCensus tags can be applied as profiler labels
for users who are on Go 1.9 and above.
[embedmd]:# (internal/readme/tags.go profiler)
```go
ctx, err = tag.New(ctx,
tag.Insert(osKey, "macOS-10.12.5"),
tag.Insert(userIDKey, "fff0989878"),
)
if err != nil {
log.Fatal(err)
}
tag.Do(ctx, func(ctx context.Context) {
// Do work.
// When profiling is on, samples will be
// recorded with the key/values from the tag map.
})
```
A screenshot of the CPU profile from the program above:

## Deprecation Policy
Before version 1.0.0, the following deprecation policy will be observed:
No backwards-incompatible changes will be made except for the removal of symbols that have
been marked as *Deprecated* for at least one minor release (e.g. 0.9.0 to 0.10.0). A release
removing the *Deprecated* functionality will be made no sooner than 28 days after the first
release in which the functionality was marked *Deprecated*.
[travis-image]: https://travis-ci.org/census-instrumentation/opencensus-go.svg?branch=master
[travis-url]: https://travis-ci.org/census-instrumentation/opencensus-go
[appveyor-image]: https://ci.appveyor.com/api/projects/status/vgtt29ps1783ig38?svg=true
[appveyor-url]: https://ci.appveyor.com/project/opencensusgoteam/opencensus-go/branch/master
[godoc-image]: https://godoc.org/go.opencensus.io?status.svg
[godoc-url]: https://godoc.org/go.opencensus.io
[gitter-image]: https://badges.gitter.im/census-instrumentation/lobby.svg
[gitter-url]: https://gitter.im/census-instrumentation/lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
[new-ex]: https://godoc.org/go.opencensus.io/tag#example-NewMap
[new-replace-ex]: https://godoc.org/go.opencensus.io/tag#example-NewMap--Replace
[exporter-prom]: https://godoc.org/contrib.go.opencensus.io/exporter/prometheus
[exporter-stackdriver]: https://godoc.org/contrib.go.opencensus.io/exporter/stackdriver
[exporter-zipkin]: https://godoc.org/contrib.go.opencensus.io/exporter/zipkin
[exporter-jaeger]: https://godoc.org/contrib.go.opencensus.io/exporter/jaeger
[exporter-xray]: https://github.com/census-ecosystem/opencensus-go-exporter-aws
[exporter-datadog]: https://github.com/DataDog/opencensus-go-exporter-datadog
[exporter-graphite]: https://github.com/census-ecosystem/opencensus-go-exporter-graphite
[exporter-honeycomb]: https://github.com/honeycombio/opencensus-exporter
[exporter-newrelic]: https://github.com/newrelic/newrelic-opencensus-exporter-go | {
"source": "yandex/perforator",
"title": "vendor/go.opencensus.io/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opencensus.io/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 10291
} |
### Changes in the Ubuntu 14.04 SDK
* `lib/x86_64-linux-gnu/libc-2.19.so` — removed the dynamic versions of the symbols
* `__cxa_thread_atexit_impl`
* `getauxval`
* `__getauxval`
* `usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19` — removed the dynamic versions of the symbols
* `__cxa_thread_atexit_impl` | {
"source": "yandex/perforator",
"title": "build/platform/linux_sdk/README.md",
"url": "https://github.com/yandex/perforator/blob/main/build/platform/linux_sdk/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 280
} |
This is a supporting library for the DYNAMIC_LIBRARY module.
It sets an LDFLAG that enables dynamic loading from the binary's directory on Linux. On Darwin and Windows this behavior is enabled by default. | {
"source": "yandex/perforator",
"title": "build/platform/local_so/readme.md",
"url": "https://github.com/yandex/perforator/blob/main/build/platform/local_so/readme.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 202
} |
### Usage
Do not use this library directly. Use `library/cpp/svnversion/svnversion.h` instead. | {
"source": "yandex/perforator",
"title": "build/scripts/c_templates/README.md",
"url": "https://github.com/yandex/perforator/blob/main/build/scripts/c_templates/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 110
} |
# libbacktrace
A C library that may be linked into a C/C++ program to produce symbolic backtraces
Initially written by Ian Lance Taylor <[email protected]>.
This is version 1.0.
It is likely that this will always be version 1.0.
The libbacktrace library may be linked into a program or library and
used to produce symbolic backtraces.
Sample uses would be to print a detailed backtrace when an error
occurs or to gather detailed profiling information.
In general the functions provided by this library are async-signal-safe,
meaning that they may be safely called from a signal handler.
That said, on systems that use `dl_iterate_phdr`, such as GNU/Linux,
the first call to a libbacktrace function will call `dl_iterate_phdr`,
which is not in general async-signal-safe. Therefore, programs
that call libbacktrace from a signal handler should ensure that they
make an initial call from outside of a signal handler.
Similar considerations apply when arranging to call libbacktrace
from within malloc; `dl_iterate_phdr` can also call malloc,
so make an initial call to a libbacktrace function outside of
malloc before trying to call libbacktrace functions within malloc.
The libbacktrace library is provided under a BSD license.
See the source files for the exact license text.
The public functions are declared and documented in the header file
backtrace.h, which should be #include'd by a user of the library.
Building libbacktrace will generate a file backtrace-supported.h,
which a user of the library may use to determine whether backtraces
will work.
See the source file backtrace-supported.h.in for the macros that it
defines.
As of July 2024, libbacktrace supports ELF, PE/COFF, Mach-O, and
XCOFF executables with DWARF debugging information.
In other words, it supports GNU/Linux, *BSD, macOS, Windows, and AIX.
The library is written to make it straightforward to add support for
other object file and debugging formats.
The library relies on the C++ unwind API defined at
https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html
This API is provided by GCC and clang. | {
"source": "yandex/perforator",
"title": "contrib/libs/backtrace/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/backtrace/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2080
} |
# libb2
C library providing BLAKE2b, BLAKE2s, BLAKE2bp, BLAKE2sp
Installation:
```
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install
```
Contact: [email protected] | {
"source": "yandex/perforator",
"title": "contrib/libs/blake2/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/blake2/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 174
} |
Contributing to c-ares
======================
To contribute patches to c-ares, please generate a GitHub pull request
and follow these guidelines:
- Check that the CI/CD builds are green for your pull request.
- Please update the test suite to add a test case for any new functionality.
- Build the library on your own machine and ensure there are no new warnings. | {
"source": "yandex/perforator",
"title": "contrib/libs/c-ares/CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/c-ares/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 368
} |
Developer Notes
===============
* The distributed `ares_build.h` in the official release tarballs is only
intended to be used on systems which can also not run the also distributed
`configure` or `CMake` scripts. It is generated as a copy of
`ares_build.h.dist` as can be seen in the code repository.
* If you check out from git on a non-`configure` or `CMake` platform, you must run
the appropriate `buildconf*` script to set up `ares_build.h` and other local
files before being able to compile the library. There are pre-made makefiles
for a subset of such systems such as Watcom, NMake, and MinGW Makefiles.
* On systems capable of running the `configure` or `CMake` scripts, the process
will overwrite the distributed `ares_build.h` file with one that is suitable
and specific to the library being configured and built. This new file is
generated from the `ares_build.h.in` and `ares_build.h.cmake` template files.
* If you intend to distribute an already compiled c-ares library you **MUST**
also distribute along with it the generated `ares_build.h` which has been
used to compile it. Otherwise, the library will be of no use for the users of
the library that you have built. It is **your** responsibility to provide this
file. No one at the c-ares project can know how you have built the library.
The generated file includes platform and configuration dependent info,
and must not be modified by anyone.
* We support both the AutoTools `configure` based build system as well as the
`CMake` build system. Any new code changes must work with both.
* The files that get compiled and are present in the distribution are referenced
in the `Makefile.inc` in the current directory. This file gets included in
every build system supported by c-ares so that the list of files doesn't need
to be maintained per build system. Don't forget to reference new header files
otherwise they won't be included in the official release tarballs.
* We cannot assume anything else but very basic C89 compiler features being
present. The lone exception is the requirement for 64bit integers which is
not a requirement for C89 compilers to support. Please do not use any extended
features released by later standards.
* Newlines must remain unix-style for older compilers' sake.
* Comments must be written in the old-style `/* unnested C-fashion */`
* Try to keep line lengths below 80 columns and formatted as the existing code.
There is a `.clang-format` in the repository that can be used to run the
automated code formatter as such: `clang-format -i */*.c */*.h */*/*.c */*/*.h` | {
"source": "yandex/perforator",
"title": "contrib/libs/c-ares/DEVELOPER-NOTES.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/c-ares/DEVELOPER-NOTES.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2630
} |
# Features
- [Dynamic Server Timeout Calculation](#dynamic-server-timeout-calculation)
- [Failed Server Isolation](#failed-server-isolation)
- [Query Cache](#query-cache)
- [DNS 0x20 Query Name Case Randomization](#dns-0x20-query-name-case-randomization)
- [DNS Cookies](#dns-cookies)
- [TCP FastOpen (0-RTT)](#tcp-fastopen-0-rtt)
- [Event Thread](#event-thread)
- [System Configuration Change Monitoring](#system-configuration-change-monitoring)
## Dynamic Server Timeout Calculation
Metrics are stored for every server in time series buckets for both the current
time span and prior time span in 1 minute, 15 minute, 1 hour, and 1 day
intervals, plus a single since-inception bucket (of the server in the c-ares
channel).
These metrics are then used to calculate the average latency for queries on
each server, which automatically adjusts to network conditions. This average
is then multiplied by 5 to come up with a timeout to use for the query before
re-queuing it. If there is not sufficient data yet to calculate a timeout
(need at least 3 prior queries), then the default of 2000ms is used (or an
administrator-set `ARES_OPT_TIMEOUTMS`).
The timeout is then adjusted to a minimum bound of 250ms which is the
approximate RTT of network traffic half-way around the world, to account for the
upstream server needing to recurse to a DNS server far away. It is also
bounded on the upper end to 5000ms (or an administrator-set
`ARES_OPT_MAXTIMEOUTMS`).
If a server does not reply within the given calculated timeout, the next time
the query is re-queued to the same server, the timeout will approximately
double thus leading to adjustments in timeouts automatically when a successful
reply is recorded.
In order to calculate the optimal timeout, it is highly recommended to ensure
`ARES_OPT_QUERY_CACHE` is enabled with a non-zero `qcache_max_ttl` (which it
is enabled by default with a 3600s default max ttl). The goal is to record
the recursion time as part of query latency as the upstream server will also
cache results.
This feature requires the c-ares channel to persist for the lifetime of the
application.
## Failed Server Isolation
Each server is tracked for failures relating to consecutive connectivity issues
or unrecoverable response codes. Servers are sorted in priority order based
on this metric. Downed servers will be brought back online either when the
current highest priority server has failed, or has been determined to be online
when a query is randomly selected to probe a downed server.
By default a downed server won't be retried for 5 seconds, and queries will
have a 10% chance of being chosen after this timeframe to test a downed server.
When a downed server is selected to be probed, the query will be duplicated
and sent to the downed server independent of the original query itself. This
means that probing a downed server will always use an intended legitimate
query, but not have a negative impact of a delayed response in case that server
is still down.
Administrators may customize these settings via `ARES_OPT_SERVER_FAILOVER`.
Additionally, when using `ARES_OPT_ROTATE` or a system configuration option of
`rotate`, c-ares will randomly select a server from the list of highest priority
servers based on failures. Any servers in any lower priority bracket will be
omitted from the random selection.
This feature requires the c-ares channel to persist for the lifetime of the
application.
## Query Cache
Every successful query response, as well as `NXDOMAIN` responses containing
an `SOA` record are cached using the `TTL` returned or the SOA Minimum as
appropriate. This timeout is bounded by the `ARES_OPT_QUERY_CACHE`
`qcache_max_ttl`, which defaults to 1hr.
The query is cached at the lowest possible layer, meaning a call into
`ares_search_dnsrec()` or `ares_getaddrinfo()` may spawn multiple queries
in order to complete its lookup, each individual backend query result will
be cached.
Any server list change will automatically invalidate the cache in order to
purge any possible stale data. For example, if `NXDOMAIN` is cached but system
configuration has changed due to a VPN connection, the same query might now
result in a valid response.
This feature is not expected to cause any issues that wouldn't already be
present due to the upstream DNS server having substantially similar caching
already. However if desired it can be disabled by setting `qcache_max_ttl` to
`0`.
This feature requires the c-ares channel to persist for the lifetime of the
application.
## DNS 0x20 Query Name Case Randomization
DNS 0x20 is the name of the feature which automatically randomizes the case
of the characters in a UDP query as defined in
[draft-vixie-dnsext-dns0x20-00](https://datatracker.ietf.org/doc/html/draft-vixie-dnsext-dns0x20-00).
For example, if name resolution is performed for `www.example.com`, the actual
query sent to the upstream name server may be `Www.eXaMPlE.cOM`.
The reason to randomize case characters is to provide additional entropy in the
query to be able to detect off-path cache poisoning attacks for UDP. This is
not used for TCP connections which are not known to be vulnerable to such
attacks due to their stateful nature.
Much research has been performed by
[Google](https://groups.google.com/g/public-dns-discuss/c/KxIDPOydA5M)
on case randomization and in general have found it to be effective and widely
supported.
This feature is disabled by default and can be enabled via `ARES_FLAG_DNS0x20`.
There are some instances where servers do not properly facilitate this feature
and unlike in a recursive resolver where it may be possible to determine an
authoritative server is incapable, its much harder to come to any reliable
conclusion as a stub resolver as to where in the path the issue resides. Due to
the recent wide deployment of DNS 0x20 in large public DNS servers, it is
expected compatibility will improve rapidly where this feature, in time, may be
able to be enabled by default.
Another feature which can be used to prevent off-path cache poisoning attacks
is [DNS Cookies](#dns-cookies).
## DNS Cookies
DNS Cookies are a method of learned mutual authentication between a server
and a client as defined in
[RFC7873](https://datatracker.ietf.org/doc/html/rfc7873)
and [RFC9018](https://datatracker.ietf.org/doc/html/rfc9018).
This mutual authentication ensures clients are protected from off-path cache
poisoning attacks, and protects servers from being used as DNS amplification
attack sources. Many servers will disable query throttling limits when DNS
Cookies are in use. It only applies to UDP connections.
Since DNS Cookies are optional and learned dynamically, this is an always-on
feature and will automatically adjust based on the upstream server state. The
only potential issue is if a server has once supported DNS Cookies then stops
supporting them, it must clear a regression timeout of 2 minutes before it can
accept responses without cookies. Such a scenario would be exceedingly rare.
Interestingly, the large public recursive DNS servers such as provided by
[Google](https://developers.google.com/speed/public-dns/docs/using),
[CloudFlare](https://one.one.one.one/), and
[OpenDNS](https://opendns.com) do not have this feature enabled. That said,
most DNS products like [BIND](https://www.isc.org/bind/) enable DNS Cookies
by default.
This feature requires the c-ares channel to persist for the lifetime of the
application.
## TCP FastOpen (0-RTT)
TCP Fast Open is defined in [RFC7413](https://datatracker.ietf.org/doc/html/rfc7413)
and enables data to be sent with the TCP SYN packet when establishing the
connection, thus rivaling the performance of UDP. A previous connection must
already have been established in order to obtain the client cookie, allowing
the server to trust the data sent in the first packet and know it was not an
off-path attack.
TCP FastOpen can only be used with idempotent requests since in timeout
conditions the SYN packet with data may be re-sent which may cause the server
to process the packet more than once. Luckily DNS requests are idempotent by
nature.
TCP FastOpen is supported on Linux, MacOS, and FreeBSD. Most other systems do
not support this feature, or, like Windows, require the use of completion
notifications to use it, whereas c-ares relies on readiness notifications.
Supported systems also need to be configured appropriately on both the client
and server systems.
### Linux TFO
In linux a single sysctl value is used with flags to set the desired fastopen
behavior.
It is recommended to make any changes permanent by creating a file in
`/etc/sysctl.d/` with the appropriate key and value. Legacy Linux systems
might need to update `/etc/sysctl.conf` directly. After modifying the
configuration, it can be loaded via `sysctl -p`.
`net.ipv4.tcp_fastopen`:
- `1` = client only (typically default)
- `2` = server only
- `3` = client and server
### MacOS TFO
In MacOS, TCP FastOpen is enabled by default for clients and servers. You can
verify via the `net.inet.tcp.fastopen` sysctl.
If any change is needed, you should make it persistent as per this guidance:
[Persistent Sysctl Settings](https://discussions.apple.com/thread/253840320?)
`net.inet.tcp.fastopen`
- `1` = client only
- `2` = server only
- `3` = client and server (typically default)
### FreeBSD TFO
In FreeBSD, server mode TCP FastOpen is typically enabled by default but
client mode is disabled. It is recommended to edit `/etc/sysctl.conf` and
place in the values you wish to persist to enable or disable TCP Fast Open.
Once the file is modified, it can be loaded via `sysctl -f /etc/sysctl.conf`.
- `net.inet.tcp.fastopen.server_enable` (boolean) - enable/disable server
- `net.inet.tcp.fastopen.client_enable` (boolean) - enable/disable client
## Event Thread
Historic c-ares integrations required integrators to have their own event loop
which would be required to notify c-ares of read and write events for each
socket. It was also required to notify c-ares at the appropriate timeout if
no events had occurred. This could be difficult to do correctly and could
lead to stalls or other issues.
The Event Thread is currently supported on all systems except DOS which does
not natively support threading (however it could in theory be possible to
enable with something like [FSUpthreads](https://arcb.csc.ncsu.edu/~mueller/pthreads/)).
c-ares is built by default with threading support enabled, however it may be
disabled at compile time. The event thread must also be specifically enabled
via `ARES_OPT_EVENT_THREAD`.
Using the Event Thread feature also facilitates some other features like
[System Configuration Change Monitoring](#system-configuration-change-monitoring),
and automatically enables the `ares_set_pending_write_cb()` feature to optimize
multi-query writing.
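A minimal sketch of enabling the event thread follows. It is not from this
document; the `evsys` field and `ARES_EVSYS_DEFAULT` constant are assumptions
based on the `ares_init_options(3)` documentation.

```c
#include <string.h>
#include <ares.h>

/* Sketch: let c-ares spawn its own event thread so the application does not
 * need to supply an event loop. */
static int make_channel_with_event_thread(ares_channel_t **channel) {
  struct ares_options options;
  int                 optmask = 0;

  memset(&options, 0, sizeof(options));
  options.evsys = ARES_EVSYS_DEFAULT; /* pick the best available backend */
  optmask |= ARES_OPT_EVENT_THREAD;

  return ares_init_options(channel, &options, optmask);
}
```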
## System Configuration Change Monitoring
The system configuration is automatically monitored for changes to the network
and DNS settings. When a change is detected a thread is spawned to read the
new configuration then apply it to the current c-ares configuration.
This feature requires the [Event Thread](#event-thread) to be enabled via
`ARES_OPT_EVENT_THREAD`. Otherwise it is up to the integrator to do their own
configuration monitoring and call `ares_reinit()` to reload the system
configuration.
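For example, an integrator doing its own monitoring might call `ares_reinit()`
from its change handler, as in the sketch below. This is illustrative only; the
error handling shown is an assumption, not prescribed by this document.

```c
#include <ares.h>

/* Sketch: called by the integrator's own watcher (e.g. inotify on
 * /etc/resolv.conf) to have c-ares reload the system configuration. */
static void on_dns_config_changed(ares_channel_t *channel) {
  if (ares_reinit(channel) != ARES_SUCCESS) {
    /* keep the previous configuration; optionally log the failure */
  }
}
```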
It is supported on Windows, MacOS, iOS and any system configuration that uses
`/etc/resolv.conf` and similar files such as Linux and FreeBSD. Specifically
excluded are DOS and Android due to missing mechanisms to support such a
feature. On Linux, file monitoring results in immediate change detection;
on other Unix-like systems a polling mechanism checks for changes every 30s.
This feature requires the c-ares channel to persist for the lifetime of the
application. | {
"source": "yandex/perforator",
"title": "contrib/libs/c-ares/FEATURES.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/c-ares/FEATURES.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 11900
} |
# Fuzzing Hints
## LibFuzzer
1. Set a compiler that supports fuzzing; this is an example on MacOS using
   a homebrew-installed clang/llvm:
```
export CC="/opt/homebrew/Cellar/llvm/18.1.8/bin/clang"
export CXX="/opt/homebrew/Cellar/llvm/18.1.8/bin/clang++"
```
2. Compile c-ares with both ASAN and fuzzing support. We want an optimized
debug build so we will use `RelWithDebInfo`:
```
export CFLAGS="-fsanitize=address,fuzzer-no-link -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION"
export CXXFLAGS="-fsanitize=address,fuzzer-no-link -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION"
export LDFLAGS="-fsanitize=address,fuzzer-no-link"
mkdir buildfuzz
cd buildfuzz
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -G Ninja ..
ninja
```
3. Build the fuzz test itself linked against our fuzzing-enabled build:
```
${CC} -W -Wall -Og -fsanitize=address,fuzzer -I../include -I../src/lib -I. -o ares-test-fuzz ../test/ares-test-fuzz.c -L./lib -Wl,-rpath ./lib -lcares
${CC} -W -Wall -Og -fsanitize=address,fuzzer -I../include -I../src/lib -I. -o ares-test-fuzz-name ../test/ares-test-fuzz-name.c -L./lib -Wl,-rpath ./lib -lcares
```
4. Run the fuzzer. It's better if you can provide seed input, but it does
   pretty well on its own since it uses coverage data to determine how to
   proceed. You can play with other flags, like `-jobs=XX` for parallelism. See
   https://llvm.org/docs/LibFuzzer.html
```
mkdir corpus
cp ../test/fuzzinput/* corpus
./ares-test-fuzz -max_len=65535 corpus
```
or
```
mkdir corpus
cp ../test/fuzznames/* corpus
./ares-test-fuzz-name -max_len=1024 corpus
```
## AFL
To fuzz using AFL, follow the
[AFL quick start guide](http://lcamtuf.coredump.cx/afl/QuickStartGuide.txt):
- Download and build AFL.
- Configure the c-ares library and test tool to use AFL's compiler wrappers:
```console
% export CC=$AFLDIR/afl-gcc
% ./configure --disable-shared && make
% cd test && ./configure && make aresfuzz aresfuzzname
```
- Run the AFL fuzzer against the starting corpus:
```console
% mkdir fuzzoutput
% $AFLDIR/afl-fuzz -i fuzzinput -o fuzzoutput -- ./aresfuzz # OR
% $AFLDIR/afl-fuzz -i fuzznames -o fuzzoutput -- ./aresfuzzname
```
## AFL Persistent Mode
If a recent version of Clang is available, AFL can use its built-in compiler
instrumentation; this configuration also allows the use of a (much) faster
persistent mode, where multiple fuzz inputs are run for each process invocation.
- Download and build a recent AFL, and run `make` in the `llvm_mode`
subdirectory to ensure that `afl-clang-fast` gets built.
- Configure the c-ares library and test tool to use AFL's clang wrappers that
use compiler instrumentation:
```console
% export CC=$AFLDIR/afl-clang-fast
% ./configure --disable-shared && make
% cd test && ./configure && make aresfuzz
```
- Run the AFL fuzzer (in persistent mode) against the starting corpus:
```console
% mkdir fuzzoutput
% $AFLDIR/afl-fuzz -i fuzzinput -o fuzzoutput -- ./aresfuzz
``` | {
"source": "yandex/perforator",
"title": "contrib/libs/c-ares/FUZZING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/c-ares/FUZZING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 3007
} |
```
___ __ _ _ __ ___ ___
/ __| ___ / _` | '__/ _ \/ __|
| (_ |___| (_| | | | __/\__ \
\___| \__,_|_| \___||___/
How To Compile
```
Installing Binary Packages
==========================
Lots of people download binary distributions of c-ares. This document
does not describe how to install c-ares using such a binary package.
This document describes how to compile, build and install c-ares from
source code.
Building from Git
=================
If you get your code off a Git repository rather than an official
release tarball, see the [GIT-INFO](GIT-INFO) file in the root directory
for specific instructions on how to proceed.
In particular, if not using CMake you will need to run `./buildconf` (Unix) or
`buildconf.bat` (Windows) to generate build files, and for the former
you will need a local installation of Autotools. If using CMake the steps are
the same for both Git and official release tarballs.
AutoTools Build
===============
### General Information, works on most Unix Platforms (Linux, FreeBSD, etc.)
A normal Unix installation is made in three or four steps (after you've
unpacked the source archive):
./configure
make
make install
You probably need to be root when doing the last command.
If you have checked out the sources from the git repository, read the
[GIT-INFO](GIT-INFO) on how to proceed.
Get a full listing of all available configure options by invoking it like:
./configure --help
If you want to install c-ares in a different file hierarchy than /usr/local,
you need to specify that already when running configure:
./configure --prefix=/path/to/c-ares/tree
If you happen to have write permission in that directory, you can do `make
install` without being root. An example of this would be to make a local
installation in your own home directory:
./configure --prefix=$HOME
make
make install
### More Options
To force configure to use the standard cc compiler if both cc and gcc are
present, run configure like
CC=cc ./configure
# or
env CC=cc ./configure
To force a static library compile, disable the shared library creation
by running configure like:
./configure --disable-shared
If you're a c-ares developer and use gcc, you might want to enable more
debug options with the `--enable-debug` option.
### Special Cases
Some versions of uClibc require configuring with `CPPFLAGS=-D_GNU_SOURCE=1`
to get correct large file support.
The Open Watcom C compiler on Linux requires configuring with the variables:
./configure CC=owcc AR="$WATCOM/binl/wlib" AR_FLAGS=-q \
RANLIB=/bin/true STRIP="$WATCOM/binl/wstrip" CFLAGS=-Wextra
### CROSS COMPILE
(This section was graciously brought to us by Jim Duey, with additions by
Dan Fandrich)
Download and unpack the c-ares package.
`cd` to the new directory. (e.g. `cd c-ares-1.7.6`)
Set environment variables to point to the cross-compile toolchain and call
configure with any options you need. Be sure and specify the `--host` and
`--build` parameters at configuration time. The following script is an
example of cross-compiling for the IBM 405GP PowerPC processor using the
toolchain from MonteVista for Hardhat Linux.
```sh
#! /bin/sh
export PATH=$PATH:/opt/hardhat/devkit/ppc/405/bin
export CPPFLAGS="-I/opt/hardhat/devkit/ppc/405/target/usr/include"
export AR=ppc_405-ar
export AS=ppc_405-as
export LD=ppc_405-ld
export RANLIB=ppc_405-ranlib
export CC=ppc_405-gcc
export NM=ppc_405-nm
./configure --target=powerpc-hardhat-linux \
--host=powerpc-hardhat-linux \
--build=i586-pc-linux-gnu \
--prefix=/opt/hardhat/devkit/ppc/405/target/usr/local \
--exec-prefix=/usr/local
```
You may also need to provide a parameter like `--with-random=/dev/urandom`
to configure as it cannot detect the presence of a random number
generating device for a target system. The `--prefix` parameter
specifies where c-ares will be installed. If `configure` completes
successfully, do `make` and `make install` as usual.
In some cases, you may be able to simplify the above commands to as
little as:
./configure --host=ARCH-OS
### Cygwin (Windows)
Almost identical to the unix installation. Run the configure script in the
c-ares root with `sh configure`. Make sure you have the sh executable in
`/bin/` or you'll see the configure fail toward the end.
Run `make`
### QNX
(This section was graciously brought to us by David Bentham)
As QNX is targeted for resource constrained environments, the QNX headers
set conservative limits. This includes the `FD_SETSIZE` macro, set by default
to 32. Socket descriptors returned within the c-ares library may exceed this,
resulting in memory faults/SIGSEGV crashes when passed into `select(..)`
calls using `fd_set` macros.
A good all-round solution to this is to override the default when building
c-ares, by overriding `CFLAGS` during configure, example:
# configure CFLAGS='-DFD_SETSIZE=64 -g -O2'
### RISC OS
The library can be cross-compiled using gccsdk as follows:
CC=riscos-gcc AR=riscos-ar RANLIB='riscos-ar -s' ./configure \
--host=arm-riscos-aof --without-random --disable-shared
make
where `riscos-gcc` and `riscos-ar` are links to the gccsdk tools.
You can then link your program with `c-ares/lib/.libs/libcares.a`.
### Android
Method using a configure cross-compile (tested with Android NDK r7b):
- prepare the toolchain of the Android NDK for standalone use; this can
be done by invoking the script:
./tools/make-standalone-toolchain.sh
which creates a usual cross-compile toolchain. Let's assume that you put
this toolchain below `/opt` then invoke configure with something
like:
```
export PATH=/opt/arm-linux-androideabi-4.4.3/bin:$PATH
./configure --host=arm-linux-androideabi [more configure options]
make
```
- if you want to compile directly from our GIT repo you might run into
this issue with older automake stuff:
```
checking host system type...
Invalid configuration `arm-linux-androideabi':
system `androideabi' not recognized
configure: error: /bin/sh ./config.sub arm-linux-androideabi failed
```
this issue can be fixed with using more recent versions of `config.sub`
and `config.guess` which can be obtained here:
http://git.savannah.gnu.org/gitweb/?p=config.git;a=tree
you need to replace your system-own versions which usually can be
found in your automake folder:
`find /usr -name config.sub`
CMake builds
============
Current releases of c-ares introduce a CMake v3+ build system that has been
tested on most platforms including Windows, Linux, FreeBSD, macOS, AIX and
Solaris.
In the most basic form, building with CMake might look like:
```sh
cd /path/to/cmake/source
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local/cares ..
make
sudo make install
```
Options
-------
Options to CMake are passed on the command line using "-D${OPTION}=${VALUE}".
The values defined are all boolean and take values like On, Off, True, False.
| Option Name | Description | Default Value |
|-----------------------------|-----------------------------------------------------------------------|----------------|
| CARES_STATIC | Build the static library | Off |
| CARES_SHARED | Build the shared library | On |
| CARES_INSTALL | Hook in installation, useful to disable if chain building | On |
| CARES_STATIC_PIC | Build the static library as position-independent | Off |
| CARES_BUILD_TESTS | Build and run tests | Off |
| CARES_BUILD_CONTAINER_TESTS | Build and run container tests (implies CARES_BUILD_TESTS, Linux only) | Off |
| CARES_BUILD_TOOLS | Build tools | On |
| CARES_SYMBOL_HIDING | Hide private symbols in shared libraries | Off |
| CARES_THREADS | Build with thread-safety support | On |
Ninja
-----
Ninja is the next-generation build system meant for generators like CMake that
heavily parallelize builds. Its use is very similar to the normal build:
```sh
cd /path/to/cmake/source
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local/cares -G "Ninja" ..
ninja
sudo ninja install
```
Windows MSVC Command Line
-------------------------
```
cd \path\to\cmake\source
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=C:\cares -G "NMake Makefiles" ..
nmake
nmake install
```
Windows MinGW-w64 Command Line via MSYS
---------------------------------------
```
cd \path\to\cmake\source
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=C:\cares -G "MSYS Makefiles" ..
make
make install
```
Platform-specific build systems
===============================
Win32
-----
### Building Windows DLLs and C run-time (CRT) linkage issues
As a general rule, building a DLL with static CRT linkage is highly
discouraged, and intermixing CRTs in the same app is something to
avoid at any cost.
Reading and comprehending the following Microsoft Learn article
is a must for any Windows developer. A full understanding is especially
important if you are not going to follow the advice given above.
- [Use the C Run-Time](https://learn.microsoft.com/en-us/troubleshoot/developer/visualstudio/cpp/libraries/use-c-run-time)
If your app is misbehaving in some strange way, or it is suffering
from memory corruption, before asking for further help, please try
first to rebuild every single library your app uses as well as your
app using the debug multithreaded dynamic C runtime.
### MSYS
Building is supported for native windows via both AutoTools and CMake. When
building with autotools, you can only build either a shared version or a static
version (use `--disable-shared` or `--disable-static`). CMake can build both
simultaneously.
All of the MSYS environments are supported: `MINGW32`, `MINGW64`, `UCRT64`,
`CLANG32`, `CLANG64`, `CLANGARM64`.
### MingW32
Make sure that MinGW32's bin dir is in the search path, for example:
set PATH=c:\mingw32\bin;%PATH%
then run 'make -f Makefile.m32' in the root dir.
### MSVC 6 caveats
If you use MSVC 6 it is required that you use the February 2003 edition PSDK:
http://www.microsoft.com/msdownload/platformsdk/sdkupdate/psdk-full.htm
### MSVC from command line
Run the `vcvars32.bat` file to get a proper environment. The
`vcvars32.bat` file is part of the Microsoft development environment and
you may find it in `C:\Program Files\Microsoft Visual Studio\vc98\bin`
provided that you installed Visual C/C++ 6 in the default directory.
Further details in [README.msvc](README.msvc)
### Important static c-ares usage note
When building an application that uses the static c-ares library, you must
add `-DCARES_STATICLIB` to your `CFLAGS`. Otherwise the linker will look for
dynamic import symbols.
DOS
---
c-ares supports building as a 32bit protected mode application via
[DJGPP](https://www.delorie.com/djgpp/). It is recommended to use a DJGPP
cross compiler from [Andrew Wu](https://github.com/andrewwutw/build-djgpp)
as building directly in a DOS environment can be difficult.
It is required to also have [Watt-32](https://www.watt-32.net/) available
built using the same compiler. It is recommended to build the latest `master`
branch from [GitHub](https://github.com/sezero/watt32/tree/master).
Finally, the `DJ_PREFIX` and `WATT_ROOT` environment variables must be set
appropriately before calling `make Makefile.dj` to build c-ares.
Please refer to our CI
[GitHub Actions Workflow](https://github.com/c-ares/c-ares/blob/main/.github/workflows/djgpp.yml)
for a full build example, including building the latest Watt-32 release.
IBM OS/2
--------
Building under OS/2 is not much different from building under unix.
You need:
- emx 0.9d
- GNU make
- GNU patch
- ksh
- GNU bison
- GNU file utilities
- GNU sed
- autoconf 2.13
If during the linking you get an error about `_errno` being an undefined
symbol referenced from the text segment, you need to add `-D__ST_MT_ERRNO__`
in your definitions.
If you're getting huge binaries, your makefiles probably have `-g` in
`CFLAGS`.
NetWare
-------
To compile `libcares.a` / `libcares.lib` you need:
- either any gcc / nlmconv, or CodeWarrior 7 PDK 4 or later.
- gnu make and awk running on the platform you compile on;
native Win32 versions can be downloaded from:
http://www.gknw.net/development/prgtools/
- recent Novell LibC SDK available from:
http://developer.novell.com/ndk/libc.htm
- or recent Novell CLib SDK available from:
http://developer.novell.com/ndk/clib.htm
Set a search path to your compiler, linker and tools; on Linux make
sure that the var `OSTYPE` contains the string 'linux'; set the var
`NDKBASE` to point to the base of your Novell NDK; and then type
`make -f Makefile.netware` from the top source directory.
VCPKG
=====
You can build and install c-ares using [vcpkg](https://github.com/Microsoft/vcpkg/) dependency manager:
```sh or powershell
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install c-ares
```
The c-ares port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please [create an issue or pull request](https://github.com/Microsoft/vcpkg) on the vcpkg repository.
WATCOM
=====
To build c-ares with OpenWatcom, you need to have at least version 1.9 of OpenWatcom. You can get the latest version from [http://openwatcom.org/ftp/install/](http://openwatcom.org/ftp/install/). Install the version that corresponds to your current host platform.
After installing OpenWatcom, open a new command prompt and execute the following commands:
```
cd \path\to\cmake\source
buildconf.bat
wmake -u -f Makefile.Watcom
```
After running wmake, you should get adig.exe, ahost.exe, and the static and dynamic versions of libcares.
PORTS
=====
This is a probably incomplete list of known hardware and operating systems
that c-ares has been compiled for. If you know a system c-ares compiles and
runs on, that isn't listed, please let us know!
- Linux (i686, x86_64, AARCH64, and more)
- MacOS 10.4+
- iOS
- Windows 8+ (i686, x86_64)
- Android (ARM, AARCH64, x86_64)
- FreeBSD
- NetBSD
- OpenBSD
- Solaris (SPARC, x86_64)
- AIX (POWER)
- Tru64 (Alpha)
- IRIX (MIPS)
- Novell NetWare (i386)
Useful URLs
===========
- c-ares: https://c-ares.org/
- MinGW-w64: http://mingw-w64.sourceforge.net/
- MSYS2: https://msys2.org
- OpenWatcom: http://www.openwatcom.org/ | {
"source": "yandex/perforator",
"title": "contrib/libs/c-ares/INSTALL.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/c-ares/INSTALL.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 15364
} |
MIT License
Copyright (c) 1998 Massachusetts Institute of Technology
Copyright (c) 2007 - 2023 Daniel Stenberg with many contributors, see AUTHORS
file.
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice (including the next
paragraph) shall be included in all copies or substantial portions of the
Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | {
"source": "yandex/perforator",
"title": "contrib/libs/c-ares/LICENSE.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/c-ares/LICENSE.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1210
} |
# [](https://c-ares.org/)
[](https://cirrus-ci.com/github/c-ares/c-ares)
[](https://ci.appveyor.com/project/c-ares/c-ares/branch/main)
[](https://coveralls.io/github/c-ares/c-ares?branch=main)
[](https://bestpractices.coreinfrastructure.org/projects/291)
[](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:c-ares)
[](https://sonarcloud.io/summary/new_code?id=c-ares_c-ares)
[](https://scan.coverity.com/projects/c-ares)
- [Overview](#overview)
- [Code](#code)
- [Communication](#communication)
- [Release Keys](#release-keys)
- [Verifying signatures](#verifying-signatures)
- [Features](#features)
- [RFCs and Proposals](#supported-rfcs-and-proposals)
## Overview
[c-ares](https://c-ares.org) is a modern DNS (stub) resolver library, written in
C. It provides interfaces for asynchronous queries while trying to abstract the
intricacies of the underlying DNS protocol. It was originally intended for
applications which need to perform DNS queries without blocking, or need to
perform multiple DNS queries in parallel.
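To make the asynchronous model concrete, below is a minimal, illustrative
lookup driven by a simple `select()` loop. It is not part of this README; the
function and type names follow the c-ares man pages (`ares_getaddrinfo(3)`,
`ares_process(3)`), and error handling is largely omitted.

```c
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <ares.h>

/* Completion callback: invoked once the lookup finishes or fails. */
static void addrinfo_cb(void *arg, int status, int timeouts,
                        struct ares_addrinfo *result) {
  (void)arg; (void)timeouts;
  if (status == ARES_SUCCESS) {
    printf("lookup succeeded\n");
    ares_freeaddrinfo(result);
  } else {
    printf("lookup failed: %s\n", ares_strerror(status));
  }
}

int main(void) {
  ares_channel_t            *channel = NULL;
  struct ares_addrinfo_hints hints;

  ares_library_init(ARES_LIB_INIT_ALL);
  ares_init(&channel);

  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;
  ares_getaddrinfo(channel, "www.example.com", NULL, &hints, addrinfo_cb, NULL);

  /* Drive the channel until all pending queries complete. */
  for (;;) {
    fd_set          readers, writers;
    struct timeval  tv, *tvp;
    int             nfds;

    FD_ZERO(&readers);
    FD_ZERO(&writers);
    nfds = ares_fds(channel, &readers, &writers);
    if (nfds == 0)
      break; /* nothing left to wait for */
    tvp = ares_timeout(channel, NULL, &tv);
    select(nfds, &readers, &writers, NULL, tvp);
    ares_process(channel, &readers, &writers);
  }

  ares_destroy(channel);
  ares_library_cleanup();
  return 0;
}
```

A production application would typically prefer the Event Thread (see
FEATURES.md) or integrate the file descriptors into its own event loop rather
than calling `select()` directly.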
One of the goals of c-ares is to be a better DNS resolver than is provided by
your system, regardless of which system you use. We recommend using
the c-ares library in all network applications even if the initial goal of
asynchronous resolution is not necessary to your application.
c-ares will build with any C89 compiler and is [MIT licensed](LICENSE.md),
which makes it suitable for both free and commercial software. c-ares runs on
Linux, FreeBSD, OpenBSD, MacOS, Solaris, AIX, Windows, Android, iOS and many
more operating systems.
c-ares has a strong focus on security, implementing safe parsers and data
builders used throughout the code, thus avoiding many of the common pitfalls
of other C libraries. Through automated testing with our extensive testing
framework, c-ares is constantly validated with a range of static and dynamic
analyzers, as well as being constantly fuzzed by [OSS Fuzz](https://github.com/google/oss-fuzz).
While c-ares has been around for over 20 years, it has been actively maintained
both in regards to the latest DNS RFCs as well as updated to follow the latest
best practices in regards to C coding standards.
## Code
The full source code and revision history is available in our
[GitHub repository](https://github.com/c-ares/c-ares). Our signed releases
are available in the [release archives](https://c-ares.org/download/).
See the [INSTALL.md](INSTALL.md) file for build information.
## Communication
**Issues** and **Feature Requests** should be reported to our
[GitHub Issues](https://github.com/c-ares/c-ares/issues) page.
**Discussions** around c-ares and its use, are held on
[GitHub Discussions](https://github.com/c-ares/c-ares/discussions/categories/q-a)
or the [Mailing List](https://lists.haxx.se/mailman/listinfo/c-ares). Mailing
List archive [here](https://lists.haxx.se/pipermail/c-ares/).
Please, do not mail volunteers privately about c-ares.
**Security vulnerabilities** are treated according to our
[Security Procedure](SECURITY.md), please email c-ares-security at
haxx.se if you suspect one.
## Release keys
Primary GPG keys for c-ares Releasers (some Releasers sign with subkeys):
* **Daniel Stenberg** <<[email protected]>>
`27EDEAF22F3ABCEB50DB9A125CC908FDB71E12C2`
* **Brad House** <<[email protected]>>
`DA7D64E4C82C6294CB73A20E22E3D13B5411B7CA`
To import the full set of trusted release keys (including subkeys possibly used
to sign releases):
```bash
gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys 27EDEAF22F3ABCEB50DB9A125CC908FDB71E12C2 # Daniel Stenberg
gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys DA7D64E4C82C6294CB73A20E22E3D13B5411B7CA # Brad House
```
### Verifying signatures
For each release `c-ares-X.Y.Z.tar.gz` there is a corresponding
`c-ares-X.Y.Z.tar.gz.asc` file which contains the detached signature for the
release.
After fetching all of the possible valid signing keys and loading into your
keychain as per the prior section, you can simply run the command below on
the downloaded package and detached signature:
```bash
% gpg -v --verify c-ares-1.29.0.tar.gz.asc c-ares-1.29.0.tar.gz
gpg: enabled compatibility flags:
gpg: Signature made Fri May 24 02:50:38 2024 EDT
gpg: using RSA key 27EDEAF22F3ABCEB50DB9A125CC908FDB71E12C2
gpg: using pgp trust model
gpg: Good signature from "Daniel Stenberg <[email protected]>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 27ED EAF2 2F3A BCEB 50DB 9A12 5CC9 08FD B71E 12C2
gpg: binary signature, digest algorithm SHA512, key algorithm rsa2048
```
## Features
See [Features](FEATURES.md)
### Supported RFCs and Proposals
- [RFC1035](https://datatracker.ietf.org/doc/html/rfc1035).
Initial/Base DNS RFC
- [RFC2671](https://datatracker.ietf.org/doc/html/rfc2671),
[RFC6891](https://datatracker.ietf.org/doc/html/rfc6891).
EDNS0 option (meta-RR)
- [RFC3596](https://datatracker.ietf.org/doc/html/rfc3596).
IPv6 Address. `AAAA` Record.
- [RFC2782](https://datatracker.ietf.org/doc/html/rfc2782).
Server Selection. `SRV` Record.
- [RFC3403](https://datatracker.ietf.org/doc/html/rfc3403).
Naming Authority Pointer. `NAPTR` Record.
- [RFC6698](https://datatracker.ietf.org/doc/html/rfc6698).
DNS-Based Authentication of Named Entities (DANE) Transport Layer Security (TLS) Protocol.
`TLSA` Record.
- [RFC9460](https://datatracker.ietf.org/doc/html/rfc9460).
General Purpose Service Binding, Service Binding type for use with HTTPS.
`SVCB` and `HTTPS` Records.
- [RFC7553](https://datatracker.ietf.org/doc/html/rfc7553).
Uniform Resource Identifier. `URI` Record.
- [RFC6844](https://datatracker.ietf.org/doc/html/rfc6844).
Certification Authority Authorization. `CAA` Record.
- [RFC2535](https://datatracker.ietf.org/doc/html/rfc2535),
[RFC2931](https://datatracker.ietf.org/doc/html/rfc2931).
`SIG0` Record. Only basic parser, not full implementation.
- [RFC7873](https://datatracker.ietf.org/doc/html/rfc7873),
[RFC9018](https://datatracker.ietf.org/doc/html/rfc9018).
DNS Cookie off-path dns poisoning and amplification mitigation.
- [draft-vixie-dnsext-dns0x20-00](https://datatracker.ietf.org/doc/html/draft-vixie-dnsext-dns0x20-00).
  DNS 0x20 query name case randomization to prevent cache poisoning attacks.
- [RFC7686](https://datatracker.ietf.org/doc/html/rfc7686).
Reject queries for `.onion` domain names with `NXDOMAIN`.
- [RFC2606](https://datatracker.ietf.org/doc/html/rfc2606),
[RFC6761](https://datatracker.ietf.org/doc/html/rfc6761).
Special case treatment for `localhost`/`.localhost`.
- [RFC2308](https://datatracker.ietf.org/doc/html/rfc2308),
[RFC9520](https://datatracker.ietf.org/doc/html/rfc9520).
Negative Caching of DNS Resolution Failures.
- [RFC6724](https://datatracker.ietf.org/doc/html/rfc6724).
IPv6 address sorting as used by `ares_getaddrinfo()`.
- [RFC7413](https://datatracker.ietf.org/doc/html/rfc7413).
TCP FastOpen (TFO) for 0-RTT TCP Connection Resumption.
- [RFC3986](https://datatracker.ietf.org/doc/html/rfc3986).
Uniform Resource Identifier (URI). Used for server configuration. | {
"source": "yandex/perforator",
"title": "contrib/libs/c-ares/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/c-ares/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 7911
} |
## c-ares version 1.34.4 - December 14 2024
This is a bugfix release.
Changes:
* QNX Port: Port to QNX 8, add primary config reading support, add CI build. [PR #934](https://github.com/c-ares/c-ares/pull/934), [PR #937](https://github.com/c-ares/c-ares/pull/937), [PR #938](https://github.com/c-ares/c-ares/pull/938)
Bugfixes:
* Empty TXT records were not being preserved. [PR #922](https://github.com/c-ares/c-ares/pull/922)
* docs: update deprecation notices for `ares_create_query()` and `ares_mkquery()`. [PR #910](https://github.com/c-ares/c-ares/pull/910)
* license: some files weren't properly updated. [PR #920](https://github.com/c-ares/c-ares/pull/920)
* Fix bind local device regression from 1.34.0. [PR #929](https://github.com/c-ares/c-ares/pull/929), [PR #931](https://github.com/c-ares/c-ares/pull/931), [PR #935](https://github.com/c-ares/c-ares/pull/935)
* CMake: set policy version to prevent deprecation warnings. [PR #932](https://github.com/c-ares/c-ares/pull/932)
* CMake: shared and static library names should be the same on unix platforms like autotools uses. [PR #933](https://github.com/c-ares/c-ares/pull/933)
* Update to latest autoconf archive macros for enhanced system compatibility. [PR #936](https://github.com/c-ares/c-ares/pull/936)
Thanks go to these friendly people for their efforts and contributions for this
release:
* Brad House (@bradh352)
* Daniel Stenberg (@bagder)
* Gregor Jasny (@gjasny)
* @marcovsz
* Nikolaos Chatzikonstantinou (@createyourpersonalaccount)
* @vlasovsoft1979 | {
"source": "yandex/perforator",
"title": "contrib/libs/c-ares/RELEASE-NOTES.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/c-ares/RELEASE-NOTES.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1529
} |
c-ares release procedure - how to do a release
==============================================
in the source code repo
-----------------------
- edit `RELEASE-NOTES.md` to be accurate
- edit `configure.ac`'s `CARES_VERSION_INFO`, and `CMakeLists.txt`'s
`CARES_LIB_VERSIONINFO` set to the same value to denote the current shared
object versioning.
- edit `include/ares_version.h` and set `ARES_VERSION_*` definitions to reflect
the current version.
- All release tags need to be made off a release branch named `vX.Y`, where `X`
is the Major version number, and `Y` is the minor version number. We also
want to create an empty commit in the branch with a message, this ensures
when we tag a release from the branch, it gets tied to the branch itself and
not a commit which may be shared across this branch and `main`. Create the
branch like:
```
BRANCH=1.35
git pull && \
git checkout main && \
git checkout -b v${BRANCH} main && \
git commit --allow-empty -m "Created release branch v${BRANCH}" && \
git push -u origin v${BRANCH}
```
- make sure all relevant changes are committed on the release branch
- Create a signed tag for the release using a name of `vX.Y.Z` where `X` is the
Major version number, `Y` is the minor version number, and `Z` is the release.
This tag needs to be created from the release branch, for example:
```
BRANCH=1.35
RELEASE=1.35.0
git checkout v${BRANCH} && \
git pull && \
git tag -s v${RELEASE} -m "c-ares release v${RELEASE}" v${BRANCH} && \
git push origin --tags
```
- When a tag is created, it will spawn off a github action to generate a new
draft release based on this workflow: [package.yml](https://github.com/c-ares/c-ares/blob/main/.github/workflows/package.yml).
Wait for this workflow to complete then fetch the generated source tarball:
```
wget https://github.com/c-ares/c-ares/releases/download/v${RELEASE}/c-ares-${RELEASE}.tar.gz
```
- GPG sign the release with a detached signature. Valid signing keys are currently:
- Daniel Stenberg <[email protected]> - 27EDEAF22F3ABCEB50DB9A125CC908FDB71E12C2
- Brad House <[email protected]> - DA7D64E4C82C6294CB73A20E22E3D13B5411B7CA
```
gpg -ab c-ares-${RELEASE}.tar.gz
```
- Upload the generated `c-ares-${RELEASE}.tar.gz.asc` signature as a release
asset, then unmark the release as being a draft.
in the c-ares-www repo
----------------------
- edit `index.md`, change version and date in frontmatter
- edit `changelog.md`, copy `RELEASE-NOTES.md` content
- edit `download.md`, add new version and date in frontmatter
- commit all local changes
- push the git commits
inform
------
- send an email to the c-ares mailing list. Insert the RELEASE-NOTES.md into the
mail.
- Create an announcement in the GitHub Discussions Announcements section:
https://github.com/c-ares/c-ares/discussions/categories/announcements
celebrate
---------
- suitable beverage intake is encouraged for the festivities | {
"source": "yandex/perforator",
"title": "contrib/libs/c-ares/RELEASE-PROCEDURE.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/c-ares/RELEASE-PROCEDURE.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2926
} |
c-ares security
===============
This document is intended to provide guidance on how security vulnerabilities
should be handled in the c-ares project.
Publishing Information
----------------------
All known and public c-ares vulnerabilities will be listed on [the c-ares web
site](https://c-ares.org/vulns.html).
Security vulnerabilities should not be entered in the project's public bug
tracker unless the necessary configuration is in place to limit access to the
issue to only the reporter and the project's security team.
Vulnerability Handling
----------------------
The typical process for handling a new security vulnerability is as follows.
No information should be made public about a vulnerability until it is
formally announced at the end of this process. That means, for example that a
bug tracker entry must NOT be created to track the issue since that will make
the issue public and it should not be discussed on the project's public
mailing list. Also messages associated with any commits should not make any
reference to the security nature of the commit if done prior to the public
announcement.
- The person discovering the issue, the reporter, reports the vulnerability
privately to `[email protected]`. That's an email alias that reaches a
handful of selected and trusted people.
- Messages that do not relate to the reporting or managing of an undisclosed
security vulnerability in c-ares are ignored and no further action is
required.
- A person in the security team sends an e-mail to the original reporter to
acknowledge the report.
- The security team investigates the report and either rejects it or accepts
it.
- If the report is rejected, the team writes to the reporter to explain why.
- If the report is accepted, the team writes to the reporter to let them
know it is accepted and that they are working on a fix.
- The security team discusses the problem, works out a fix, considers the
impact of the problem and suggests a release schedule. This discussion
should involve the reporter as much as possible.
- The release of the information should be "as soon as possible" and is most
often synced with an upcoming release that contains the fix. If the
reporter, or anyone else, thinks the next planned release is too far away
then a separate earlier release for security reasons should be considered.
- Write a security advisory draft about the problem that explains what the
problem is, its impact, which versions it affects, solutions or
workarounds, when the release is out and make sure to credit all
contributors properly.
- Request a CVE number from
[distros@openwall](http://oss-security.openwall.org/wiki/mailing-lists/distros)
when also informing and preparing them for the upcoming public security
vulnerability announcement - attach the advisory draft for information. Note
that 'distros' won't accept an embargo longer than 19 days.
- Update the "security advisory" with the CVE number.
- The security team commits the fix in a private branch. The commit message
should ideally contain the CVE number. This fix is usually also distributed
to the 'distros' mailing list to allow them to use the fix prior to the
public announcement.
- At the day of the next release, the private branch is merged into the master
branch and pushed. Once pushed, the information is accessible to the public
and the actual release should follow suit immediately afterwards.
- The project team creates a release that includes the fix.
- The project team announces the release and the vulnerability to the world in
the same manner we always announce releases. It gets sent to the c-ares
mailing list and the oss-security mailing list.
- The security web page on the web site should get the new vulnerability
mentioned.
C-ARES-SECURITY (at haxx dot se)
--------------------------------
Who is on this list? There are a couple of criteria you must meet, and then we
might ask you to join the list or you can ask to join it. It really isn't very
formal. We basically only require that you have a long-term presence in the
c-ares project and have shown an understanding of the project and its way of
working. You must've been around for a good while and you should have no plans
to vanish in the near future.
We do not make the list of participants public, mostly because it tends to vary
somewhat over time and a list somewhere would only risk getting outdated.
"source": "yandex/perforator",
"title": "contrib/libs/c-ares/SECURITY.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/c-ares/SECURITY.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 4468
} |
Double Conversion
========
https://github.com/google/double-conversion
[](https://securityscorecards.dev/viewer/?uri=github.com/google/double-conversion)
This project (double-conversion) provides binary-decimal and decimal-binary
routines for IEEE doubles.
The library consists of efficient conversion routines that have been extracted
from the V8 JavaScript engine. The code has been refactored and improved so that
it can be used more easily in other projects.
There is extensive documentation in `double-conversion/string-to-double.h` and
`double-conversion/double-to-string.h`. Other examples can be found in
`test/cctest/test-conversions.cc`.
Building
========
This library can be built with [scons][0], [cmake][1] or [bazel][2].
The checked-in Makefile simply forwards to scons, and provides a
shortcut to run all tests:
make
make test
Scons
-----
The easiest way to install this library is to use `scons`. It builds
the static and shared library, and is set up to install those at the
correct locations:
scons install
Use the `DESTDIR` option to change the target directory:
scons DESTDIR=alternative_directory install
Cmake
-----
To use cmake run `cmake .` in the root directory. This overwrites the
existing Makefile.
Use `-DBUILD_SHARED_LIBS=ON` to enable the compilation of shared libraries.
Note that this disables static libraries. There is currently no way to
build both libraries at the same time with cmake.
Use `-DBUILD_TESTING=ON` to build the test executable.
cmake . -DBUILD_TESTING=ON
make
test/cctest/cctest
Bazel
---
The simplest way to adopt this library is through the [Bazel Central Registry](https://registry.bazel.build/modules/double-conversion).
To build the library from the latest repository, run:
```
bazel build //:double-conversion
```
To run the unit test, run:
```
bazel test //:cctest
```
[0]: http://www.scons.org/
[1]: https://cmake.org/
[2]: https://bazel.build/ | {
"source": "yandex/perforator",
"title": "contrib/libs/double-conversion/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/double-conversion/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2063
} |
[](https://github.com/libexpat/libexpat/actions/workflows/linux.yml)
[](https://ci.appveyor.com/project/libexpat/libexpat)
[](https://repology.org/metapackage/expat/versions)
[](https://sourceforge.net/projects/expat/files/)
[](https://github.com/libexpat/libexpat/releases)
> [!CAUTION]
>
> Expat is **understaffed** and without funding.
> There is a [call for help with details](https://github.com/libexpat/libexpat/blob/master/expat/Changes)
> at the top of the `Changes` file.
# Expat, Release 2.6.4
This is Expat, a C99 library for parsing
[XML 1.0 Fourth Edition](https://www.w3.org/TR/2006/REC-xml-20060816/), started by
[James Clark](https://en.wikipedia.org/wiki/James_Clark_%28programmer%29) in 1997.
Expat is a stream-oriented XML parser. This means that you register
handlers with the parser before starting the parse. These handlers
are called when the parser discovers the associated structures in the
document being parsed. A start tag is an example of the kind of
structures for which you may register handlers.
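As a quick illustration (not part of this README), a minimal parse that
registers start and end element handlers could look like the sketch below; see
`doc/reference.html` in the distribution for the authoritative API
documentation.

```c
#include <stdio.h>
#include <string.h>
#include <expat.h>

/* Called for each start tag encountered in the document. */
static void XMLCALL on_start(void *userData, const XML_Char *name,
                             const XML_Char **atts) {
  (void)userData; (void)atts;
  printf("start element: %s\n", name);
}

/* Called for each end tag. */
static void XMLCALL on_end(void *userData, const XML_Char *name) {
  (void)userData;
  printf("end element: %s\n", name);
}

int main(void) {
  const char *doc = "<root><item id='1'/></root>";
  XML_Parser  parser = XML_ParserCreate(NULL);

  XML_SetElementHandler(parser, on_start, on_end);
  if (XML_Parse(parser, doc, (int)strlen(doc), XML_TRUE) == XML_STATUS_ERROR) {
    fprintf(stderr, "parse error: %s\n",
            XML_ErrorString(XML_GetErrorCode(parser)));
  }
  XML_ParserFree(parser);
  return 0;
}
```

Other events, such as character data, are registered the same way, for example
with `XML_SetCharacterDataHandler()`.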
Expat supports the following compilers:
- GNU GCC >=4.5
- LLVM Clang >=3.5
- Microsoft Visual Studio >=16.0/2019 (rolling `${today} minus 5 years`)
Windows users can use the
[`expat-win32bin-*.*.*.{exe,zip}` download](https://github.com/libexpat/libexpat/releases),
which includes both pre-compiled libraries and executables, and source code for
developers.
Expat is [free software](https://www.gnu.org/philosophy/free-sw.en.html).
You may copy, distribute, and modify it under the terms of the License
contained in the file
[`COPYING`](https://github.com/libexpat/libexpat/blob/master/expat/COPYING)
distributed with this package.
This license is the same as the MIT/X Consortium license.
## Using libexpat in your CMake-Based Project
There are three documented ways of using libexpat with CMake:
### a) `find_package` with Module Mode
This approach leverages CMake's own [module `FindEXPAT`](https://cmake.org/cmake/help/latest/module/FindEXPAT.html).
Notice the *uppercase* `EXPAT` in the following example:
```cmake
cmake_minimum_required(VERSION 3.0) # or 3.10, see below
project(hello VERSION 1.0.0)
find_package(EXPAT 2.2.8 MODULE REQUIRED)
add_executable(hello
hello.c
)
# a) for CMake >=3.10 (see CMake's FindEXPAT docs)
target_link_libraries(hello PUBLIC EXPAT::EXPAT)
# b) for CMake >=3.0
target_include_directories(hello PRIVATE ${EXPAT_INCLUDE_DIRS})
target_link_libraries(hello PUBLIC ${EXPAT_LIBRARIES})
```
### b) `find_package` with Config Mode
This approach requires files from…
- libexpat >=2.2.8 where packaging uses the CMake build system
or
- libexpat >=2.3.0 where packaging uses the GNU Autotools build system
on Linux
or
- libexpat >=2.4.0 where packaging uses the GNU Autotools build system
on macOS or MinGW.
Notice the *lowercase* `expat` in the following example:
```cmake
cmake_minimum_required(VERSION 3.0)
project(hello VERSION 1.0.0)
find_package(expat 2.2.8 CONFIG REQUIRED char dtd ns)
add_executable(hello
hello.c
)
target_link_libraries(hello PUBLIC expat::expat)
```
### c) The `FetchContent` module
This approach — as demonstrated below — requires CMake >=3.18 for both the
[`FetchContent` module](https://cmake.org/cmake/help/latest/module/FetchContent.html)
and its support for the `SOURCE_SUBDIR` option to be available.
Please note that:
- Use of the `FetchContent` module with *non-release* SHA1s or `master`
of libexpat is neither advised nor considered officially supported.
- Pinning to a specific commit is great for robust CI.
- Pinning to a specific commit needs updating every time there is a new
release of libexpat — either manually or through automation —,
to not miss out on libexpat security updates.
For an example that pulls in libexpat via Git:
```cmake
cmake_minimum_required(VERSION 3.18)
include(FetchContent)
project(hello VERSION 1.0.0)
FetchContent_Declare(
expat
GIT_REPOSITORY https://github.com/libexpat/libexpat/
GIT_TAG 000000000_GIT_COMMIT_SHA1_HERE_000000000 # i.e. Git tag R_0_Y_Z
SOURCE_SUBDIR expat/
)
FetchContent_MakeAvailable(expat)
add_executable(hello
hello.c
)
target_link_libraries(hello PUBLIC expat)
```
## Building from a Git Clone
If you are building Expat from a check-out from the
[Git repository](https://github.com/libexpat/libexpat/),
you need to run a script that generates the configure script using the
GNU autoconf and libtool tools. To do this, you need to have
autoconf 2.58 or newer. Run the script like this:
```console
./buildconf.sh
```
Once this has been done, follow the same instructions as for building
from a source distribution.
## Building from a Source Distribution
### a) Building with the configure script (i.e. GNU Autotools)
To build Expat from a source distribution, you first run the
configuration shell script in the top level distribution directory:
```console
./configure
```
There are many options which you may provide to configure (which you
can discover by running configure with the `--help` option). But the
one of most interest is the one that sets the installation directory.
By default, the configure script will set things up to install
libexpat into `/usr/local/lib`, `expat.h` into `/usr/local/include`, and
`xmlwf` into `/usr/local/bin`. If, for example, you'd prefer to install
into `/home/me/mystuff/lib`, `/home/me/mystuff/include`, and
`/home/me/mystuff/bin`, you can tell `configure` about that with:
```console
./configure --prefix=/home/me/mystuff
```
Another interesting option is to enable 64-bit integer support for
line and column numbers and the over-all byte index:
```console
./configure CPPFLAGS=-DXML_LARGE_SIZE
```
However, such a modification would be a breaking change to the ABI
and is therefore not recommended for general use — e.g. as part of
a Linux distribution — but rather for builds with special requirements.
After running the configure script, the `make` command will build
things and `make install` will install things into their proper
location. Have a look at the `Makefile` to learn about additional
`make` options. Note that you need to have write permission into
the directories into which things will be installed.
If you are interested in building Expat to provide document
information in UTF-16 encoding rather than the default UTF-8, follow
these instructions (after having run `make distclean`).
Please note that we configure with `--without-xmlwf` as xmlwf does not
support this mode of compilation (yet):
1. Mass-patch `Makefile.am` files to use `libexpatw.la` for a library name:
<br/>
`find . -name Makefile.am -exec sed
-e 's,libexpat\.la,libexpatw.la,'
-e 's,libexpat_la,libexpatw_la,'
-i.bak {} +`
1. Run `automake` to re-write `Makefile.in` files:<br/>
`automake`
1. For UTF-16 output as unsigned short (and version/error strings as char),
run:<br/>
`./configure CPPFLAGS=-DXML_UNICODE --without-xmlwf`<br/>
For UTF-16 output as `wchar_t` (incl. version/error strings), run:<br/>
`./configure CFLAGS="-g -O2 -fshort-wchar" CPPFLAGS=-DXML_UNICODE_WCHAR_T
--without-xmlwf`
<br/>Note: The latter requires libc compiled with `-fshort-wchar`, as well.
1. Run `make` (which excludes xmlwf).
1. Run `make install` (again, excludes xmlwf).
Using `DESTDIR` is supported. It works as follows:
```console
make install DESTDIR=/path/to/image
```
overrides the in-makefile set `DESTDIR`, because variable-setting priority is
1. commandline
1. in-makefile
1. environment
Note: This only applies to the Expat library itself, building UTF-16 versions
of xmlwf and the tests is currently not supported.
When using Expat with a project using autoconf for configuration, you
can use the probing macro in `conftools/expat.m4` to determine how to
include Expat. See the comments at the top of that file for more
information.
A reference manual is available in the file `doc/reference.html` in this
distribution.
### b) Building with CMake
The CMake build system is still *experimental* and may replace the primary
build system based on GNU Autotools at some point when it is ready.
#### Available Options
For an idea of the available (non-advanced) options for building with CMake:
```console
# rm -f CMakeCache.txt ; cmake -D_EXPAT_HELP=ON -LH . | grep -B1 ':.*=' | sed 's,^--$,,'
// Choose the type of build, options are: None Debug Release RelWithDebInfo MinSizeRel ...
CMAKE_BUILD_TYPE:STRING=
// Install path prefix, prepended onto install directories.
CMAKE_INSTALL_PREFIX:PATH=/usr/local
// Path to a program.
DOCBOOK_TO_MAN:FILEPATH=/usr/bin/docbook2x-man
// Build man page for xmlwf
EXPAT_BUILD_DOCS:BOOL=ON
// Build the examples for expat library
EXPAT_BUILD_EXAMPLES:BOOL=ON
// Build fuzzers for the expat library
EXPAT_BUILD_FUZZERS:BOOL=OFF
// Build pkg-config file
EXPAT_BUILD_PKGCONFIG:BOOL=ON
// Build the tests for expat library
EXPAT_BUILD_TESTS:BOOL=ON
// Build the xmlwf tool for expat library
EXPAT_BUILD_TOOLS:BOOL=ON
// Character type to use (char|ushort|wchar_t) [default=char]
EXPAT_CHAR_TYPE:STRING=char
// Install expat files in cmake install target
EXPAT_ENABLE_INSTALL:BOOL=ON
// Use /MT flag (static CRT) when compiling in MSVC
EXPAT_MSVC_STATIC_CRT:BOOL=OFF
// Build fuzzers via ossfuzz for the expat library
EXPAT_OSSFUZZ_BUILD:BOOL=OFF
// Build a shared expat library
EXPAT_SHARED_LIBS:BOOL=ON
// Treat all compiler warnings as errors
EXPAT_WARNINGS_AS_ERRORS:BOOL=OFF
// Make use of getrandom function (ON|OFF|AUTO) [default=AUTO]
EXPAT_WITH_GETRANDOM:STRING=AUTO
// Utilize libbsd (for arc4random_buf)
EXPAT_WITH_LIBBSD:BOOL=OFF
// Make use of syscall SYS_getrandom (ON|OFF|AUTO) [default=AUTO]
EXPAT_WITH_SYS_GETRANDOM:STRING=AUTO
``` | {
"source": "yandex/perforator",
"title": "contrib/libs/expat/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/expat/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 10248
} |
Contributing to {fmt}
=====================
By submitting a pull request or a patch, you represent that you have the right
to license your contribution to the {fmt} project owners and the community,
agree that your contributions are licensed under the {fmt} license, and agree
to future changes to the licensing.
All C++ code must adhere to [Google C++ Style Guide](
https://google.github.io/styleguide/cppguide.html) with the following
exceptions:
* Exceptions are permitted
* snake_case should be used instead of UpperCamelCase for function and type
names
All documentation must adhere to the [Google Developer Documentation Style
Guide](https://developers.google.com/style).
Thanks for contributing! | {
"source": "yandex/perforator",
"title": "contrib/libs/fmt/CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/fmt/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 709
} |
8.1.1 - 2022-01-06
------------------
* Restored ABI compatibility with version 8.0.x
(`#2695 <https://github.com/fmtlib/fmt/issues/2695>`_,
`#2696 <https://github.com/fmtlib/fmt/pull/2696>`_).
Thanks `@saraedum (Julian Rüth) <https://github.com/saraedum>`_.
* Fixed chrono formatting on big endian systems
(`#2698 <https://github.com/fmtlib/fmt/issues/2698>`_,
`#2699 <https://github.com/fmtlib/fmt/pull/2699>`_).
Thanks `@phprus (Vladislav Shchapov) <https://github.com/phprus>`_ and
`@xvitaly (Vitaly Zaitsev) <https://github.com/xvitaly>`_.
* Fixed a linkage error with mingw
(`#2691 <https://github.com/fmtlib/fmt/issues/2691>`_,
`#2692 <https://github.com/fmtlib/fmt/pull/2692>`_).
Thanks `@rbberger (Richard Berger) <https://github.com/rbberger>`_.
8.1.0 - 2022-01-02
------------------
* Optimized chrono formatting
(`#2500 <https://github.com/fmtlib/fmt/pull/2500>`_,
`#2537 <https://github.com/fmtlib/fmt/pull/2537>`_,
`#2541 <https://github.com/fmtlib/fmt/issues/2541>`_,
`#2544 <https://github.com/fmtlib/fmt/pull/2544>`_,
`#2550 <https://github.com/fmtlib/fmt/pull/2550>`_,
`#2551 <https://github.com/fmtlib/fmt/pull/2551>`_,
`#2576 <https://github.com/fmtlib/fmt/pull/2576>`_,
`#2577 <https://github.com/fmtlib/fmt/issues/2577>`_,
`#2586 <https://github.com/fmtlib/fmt/pull/2586>`_,
`#2591 <https://github.com/fmtlib/fmt/pull/2591>`_,
`#2594 <https://github.com/fmtlib/fmt/pull/2594>`_,
`#2602 <https://github.com/fmtlib/fmt/pull/2602>`_,
`#2617 <https://github.com/fmtlib/fmt/pull/2617>`_,
`#2628 <https://github.com/fmtlib/fmt/issues/2628>`_,
`#2633 <https://github.com/fmtlib/fmt/pull/2633>`_,
`#2670 <https://github.com/fmtlib/fmt/issues/2670>`_,
`#2671 <https://github.com/fmtlib/fmt/pull/2671>`_).
Processing of some specifiers such as ``%z`` and ``%Y`` is now up to 10-20
times faster, for example on GCC 11 with libstdc++::
   ----------------------------------------------------------------------------
   Benchmark                                  Before             After
   ----------------------------------------------------------------------------
   FMTFormatter_z                             261 ns             26.3 ns
   FMTFormatterCompile_z                      246 ns             11.6 ns
   FMTFormatter_Y                             263 ns             26.1 ns
   FMTFormatterCompile_Y                      244 ns             10.5 ns
   ----------------------------------------------------------------------------
Thanks `@phprus (Vladislav Shchapov) <https://github.com/phprus>`_ and
`@toughengineer (Pavel Novikov) <https://github.com/toughengineer>`_.
* Implemented subsecond formatting for chrono durations
(`#2623 <https://github.com/fmtlib/fmt/pull/2623>`_).
For example (`godbolt <https://godbolt.org/z/es7vWTETe>`__):
.. code:: c++
#include <fmt/chrono.h>
int main() {
fmt::print("{:%S}", std::chrono::milliseconds(1234));
}
prints "01.234".
Thanks `@matrackif <https://github.com/matrackif>`_.
* Fixed handling of precision 0 when formatting chrono durations
(`#2587 <https://github.com/fmtlib/fmt/issues/2587>`_,
`#2588 <https://github.com/fmtlib/fmt/pull/2588>`_).
Thanks `@lukester1975 <https://github.com/lukester1975>`_.
* Fixed an overflow on invalid inputs in the ``tm`` formatter
(`#2564 <https://github.com/fmtlib/fmt/pull/2564>`_).
Thanks `@phprus (Vladislav Shchapov) <https://github.com/phprus>`_.
* Added ``fmt::group_digits`` that formats integers with a non-localized digit
separator (comma) for groups of three digits.
For example (`godbolt <https://godbolt.org/z/TxGxG9Poq>`__):
.. code:: c++
#include <fmt/format.h>
int main() {
fmt::print("{} dollars", fmt::group_digits(1000000));
}
prints "1,000,000 dollars".
* Added support for faint, conceal, reverse and blink text styles
(`#2394 <https://github.com/fmtlib/fmt/pull/2394>`_):
https://user-images.githubusercontent.com/576385/147710227-c68f5317-f8fa-42c3-9123-7c4ba3c398cb.mp4
Thanks `@benit8 (Benoît Lormeau) <https://github.com/benit8>`_ and
`@data-man (Dmitry Atamanov) <https://github.com/data-man>`_.
* Added experimental support for compile-time floating point formatting
(`#2426 <https://github.com/fmtlib/fmt/pull/2426>`_,
`#2470 <https://github.com/fmtlib/fmt/pull/2470>`_).
It is currently limited to the header-only mode.
Thanks `@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_.
* Added UDL-based named argument support to compile-time format string checks
(`#2640 <https://github.com/fmtlib/fmt/issues/2640>`_,
`#2649 <https://github.com/fmtlib/fmt/pull/2649>`_).
For example (`godbolt <https://godbolt.org/z/ohGbbvonv>`__):
.. code:: c++
#include <fmt/format.h>
int main() {
using namespace fmt::literals;
fmt::print("{answer:s}", "answer"_a=42);
}
gives a compile-time error on compilers with C++20 ``consteval`` and non-type
template parameter support (gcc 10+) because ``s`` is not a valid format
specifier for an integer.
Thanks `@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_.
* Implemented escaping of string range elements.
For example (`godbolt <https://godbolt.org/z/rKvM1vKf3>`__):
.. code:: c++
#include <fmt/ranges.h>
#include <vector>
int main() {
fmt::print("{}", std::vector<std::string>{"\naan"});
}
is now printed as::
["\naan"]
instead of::
["
aan"]
* Switched to JSON-like representation of maps and sets for consistency with
Python's ``str.format``.
For example (`godbolt <https://godbolt.org/z/seKjoY9W5>`__):
.. code:: c++
#include <fmt/ranges.h>
#include <map>
int main() {
fmt::print("{}", std::map<std::string, int>{{"answer", 42}});
}
is now printed as::
{"answer": 42}
* Extended ``fmt::join`` to support C++20-only ranges
(`#2549 <https://github.com/fmtlib/fmt/pull/2549>`_).
Thanks `@BRevzin (Barry Revzin) <https://github.com/BRevzin>`_.
* Optimized handling of non-const-iterable ranges and implemented initial
support for non-const-formattable types.
* Disabled implicit conversions of scoped enums to integers that was
accidentally introduced in earlier versions
(`#1841 <https://github.com/fmtlib/fmt/pull/1841>`_).
* Deprecated implicit conversion of ``[const] signed char*`` and
``[const] unsigned char*`` to C strings.
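  A minimal sketch of the migration (the variable name is illustrative):

  .. code:: c++

     #include <fmt/core.h>

     int main() {
       const unsigned char* p = reinterpret_cast<const unsigned char*>("abc");
       // The implicit conversion to a C string is deprecated; cast explicitly:
       fmt::print("{}\n", reinterpret_cast<const char*>(p));
     }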
* Deprecated ``_format``, a legacy UDL-based format API
(`#2646 <https://github.com/fmtlib/fmt/pull/2646>`_).
Thanks `@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_.
* Marked ``format``, ``formatted_size`` and ``to_string`` as ``[[nodiscard]]``
(`#2612 <https://github.com/fmtlib/fmt/pull/2612>`_).
  Thanks `@0x8000-0000 (Florin Iucha) <https://github.com/0x8000-0000>`_.
* Added missing diagnostic when trying to format function and member pointers
as well as objects convertible to pointers which is explicitly disallowed
(`#2598 <https://github.com/fmtlib/fmt/issues/2598>`_,
`#2609 <https://github.com/fmtlib/fmt/pull/2609>`_,
`#2610 <https://github.com/fmtlib/fmt/pull/2610>`_).
Thanks `@AlexGuteniev (Alex Guteniev) <https://github.com/AlexGuteniev>`_.
* Optimized writing to a contiguous buffer with ``format_to_n``
(`#2489 <https://github.com/fmtlib/fmt/pull/2489>`_).
Thanks `@Roman-Koshelev <https://github.com/Roman-Koshelev>`_.
* Optimized writing to non-``char`` buffers
(`#2477 <https://github.com/fmtlib/fmt/pull/2477>`_).
Thanks `@Roman-Koshelev <https://github.com/Roman-Koshelev>`_.
* Decimal point is now localized when using the ``L`` specifier.
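  For example (a minimal sketch; the locale name is an assumption and must be
  installed on the system):

  .. code:: c++

     #include <fmt/format.h>
     #include <locale>

     int main() {
       std::locale::global(std::locale("de_DE.UTF-8"));
       fmt::print("{:L}\n", 4.2);  // prints "4,2" under a German locale
     }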
* Improved floating point formatter implementation
(`#2498 <https://github.com/fmtlib/fmt/pull/2498>`_,
`#2499 <https://github.com/fmtlib/fmt/pull/2499>`_).
Thanks `@Roman-Koshelev <https://github.com/Roman-Koshelev>`_.
* Fixed handling of very large precision in fixed format
(`#2616 <https://github.com/fmtlib/fmt/pull/2616>`_).
* Made a table of cached powers used in FP formatting static
(`#2509 <https://github.com/fmtlib/fmt/pull/2509>`_).
Thanks `@jk-jeon (Junekey Jeon) <https://github.com/jk-jeon>`_.
* Resolved a lookup ambiguity with C++20 format-related functions due to ADL
(`#2639 <https://github.com/fmtlib/fmt/issues/2639>`_,
`#2641 <https://github.com/fmtlib/fmt/pull/2641>`_).
Thanks `@mkurdej (Marek Kurdej) <https://github.com/mkurdej>`_.
* Removed unnecessary inline namespace qualification
(`#2642 <https://github.com/fmtlib/fmt/issues/2642>`_,
`#2643 <https://github.com/fmtlib/fmt/pull/2643>`_).
Thanks `@mkurdej (Marek Kurdej) <https://github.com/mkurdej>`_.
* Implemented argument forwarding in ``format_to_n``
(`#2462 <https://github.com/fmtlib/fmt/issues/2462>`_,
`#2463 <https://github.com/fmtlib/fmt/pull/2463>`_).
Thanks `@owent (WenTao Ou) <https://github.com/owent>`_.
* Fixed handling of implicit conversions in ``fmt::to_string`` and format string
compilation (`#2565 <https://github.com/fmtlib/fmt/issues/2565>`_).
* Changed the default access mode of files created by ``fmt::output_file`` to
``-rw-r--r--`` for consistency with ``fopen``
(`#2530 <https://github.com/fmtlib/fmt/issues/2530>`_).
* Make ``fmt::ostream::flush`` public
(`#2435 <https://github.com/fmtlib/fmt/issues/2435>`_).
* Improved C++14/17 attribute detection
(`#2615 <https://github.com/fmtlib/fmt/pull/2615>`_).
Thanks `@AlexGuteniev (Alex Guteniev) <https://github.com/AlexGuteniev>`_.
* Improved ``consteval`` detection for MSVC
(`#2559 <https://github.com/fmtlib/fmt/pull/2559>`_).
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
* Improved documentation
(`#2406 <https://github.com/fmtlib/fmt/issues/2406>`_,
`#2446 <https://github.com/fmtlib/fmt/pull/2446>`_,
`#2493 <https://github.com/fmtlib/fmt/issues/2493>`_,
`#2513 <https://github.com/fmtlib/fmt/issues/2513>`_,
`#2515 <https://github.com/fmtlib/fmt/pull/2515>`_,
`#2522 <https://github.com/fmtlib/fmt/issues/2522>`_,
`#2562 <https://github.com/fmtlib/fmt/pull/2562>`_,
`#2575 <https://github.com/fmtlib/fmt/pull/2575>`_,
`#2606 <https://github.com/fmtlib/fmt/pull/2606>`_,
`#2620 <https://github.com/fmtlib/fmt/pull/2620>`_,
`#2676 <https://github.com/fmtlib/fmt/issues/2676>`_).
Thanks `@sobolevn (Nikita Sobolev) <https://github.com/sobolevn>`_,
`@UnePierre (Max FERGER) <https://github.com/UnePierre>`_,
`@zhsj <https://github.com/zhsj>`_,
`@phprus (Vladislav Shchapov) <https://github.com/phprus>`_,
`@ericcurtin (Eric Curtin) <https://github.com/ericcurtin>`_,
`@Lounarok <https://github.com/Lounarok>`_.
* Improved fuzzers and added a fuzzer for chrono timepoint formatting
(`#2461 <https://github.com/fmtlib/fmt/pull/2461>`_,
`#2469 <https://github.com/fmtlib/fmt/pull/2469>`_).
  Thanks `@pauldreik (Paul Dreik) <https://github.com/pauldreik>`_.
* Added the ``FMT_SYSTEM_HEADERS`` CMake option setting which marks {fmt}'s
headers as system. It can be used to suppress warnings
(`#2644 <https://github.com/fmtlib/fmt/issues/2644>`_,
`#2651 <https://github.com/fmtlib/fmt/pull/2651>`_).
Thanks `@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_.
* Added the Bazel build system support
(`#2505 <https://github.com/fmtlib/fmt/pull/2505>`_,
`#2516 <https://github.com/fmtlib/fmt/pull/2516>`_).
Thanks `@Vertexwahn <https://github.com/Vertexwahn>`_.
* Improved build configuration and tests
(`#2437 <https://github.com/fmtlib/fmt/issues/2437>`_,
`#2558 <https://github.com/fmtlib/fmt/pull/2558>`_,
`#2648 <https://github.com/fmtlib/fmt/pull/2648>`_,
`#2650 <https://github.com/fmtlib/fmt/pull/2650>`_,
`#2663 <https://github.com/fmtlib/fmt/pull/2663>`_,
`#2677 <https://github.com/fmtlib/fmt/pull/2677>`_).
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_,
`@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_,
`@phprus (Vladislav Shchapov) <https://github.com/phprus>`_.
* Fixed various warnings and compilation issues
(`#2353 <https://github.com/fmtlib/fmt/pull/2353>`_,
`#2356 <https://github.com/fmtlib/fmt/pull/2356>`_,
`#2399 <https://github.com/fmtlib/fmt/pull/2399>`_,
`#2408 <https://github.com/fmtlib/fmt/issues/2408>`_,
`#2414 <https://github.com/fmtlib/fmt/pull/2414>`_,
`#2427 <https://github.com/fmtlib/fmt/pull/2427>`_,
`#2432 <https://github.com/fmtlib/fmt/pull/2432>`_,
`#2442 <https://github.com/fmtlib/fmt/pull/2442>`_,
`#2434 <https://github.com/fmtlib/fmt/pull/2434>`_,
`#2439 <https://github.com/fmtlib/fmt/issues/2439>`_,
`#2447 <https://github.com/fmtlib/fmt/pull/2447>`_,
`#2450 <https://github.com/fmtlib/fmt/pull/2450>`_,
`#2455 <https://github.com/fmtlib/fmt/issues/2455>`_,
`#2465 <https://github.com/fmtlib/fmt/issues/2465>`_,
`#2472 <https://github.com/fmtlib/fmt/issues/2472>`_,
`#2474 <https://github.com/fmtlib/fmt/issues/2474>`_,
`#2476 <https://github.com/fmtlib/fmt/pull/2476>`_,
`#2478 <https://github.com/fmtlib/fmt/issues/2478>`_,
`#2479 <https://github.com/fmtlib/fmt/issues/2479>`_,
`#2481 <https://github.com/fmtlib/fmt/issues/2481>`_,
`#2482 <https://github.com/fmtlib/fmt/pull/2482>`_,
`#2483 <https://github.com/fmtlib/fmt/pull/2483>`_,
`#2490 <https://github.com/fmtlib/fmt/issues/2490>`_,
`#2491 <https://github.com/fmtlib/fmt/pull/2491>`_,
`#2510 <https://github.com/fmtlib/fmt/pull/2510>`_,
`#2518 <https://github.com/fmtlib/fmt/pull/2518>`_,
`#2528 <https://github.com/fmtlib/fmt/issues/2528>`_,
`#2529 <https://github.com/fmtlib/fmt/pull/2529>`_,
`#2539 <https://github.com/fmtlib/fmt/pull/2539>`_,
`#2540 <https://github.com/fmtlib/fmt/issues/2540>`_,
`#2545 <https://github.com/fmtlib/fmt/pull/2545>`_,
`#2555 <https://github.com/fmtlib/fmt/pull/2555>`_,
`#2557 <https://github.com/fmtlib/fmt/issues/2557>`_,
`#2570 <https://github.com/fmtlib/fmt/issues/2570>`_,
`#2573 <https://github.com/fmtlib/fmt/pull/2573>`_,
`#2582 <https://github.com/fmtlib/fmt/pull/2582>`_,
`#2605 <https://github.com/fmtlib/fmt/issues/2605>`_,
`#2611 <https://github.com/fmtlib/fmt/pull/2611>`_,
`#2647 <https://github.com/fmtlib/fmt/pull/2647>`_,
`#2627 <https://github.com/fmtlib/fmt/issues/2627>`_,
`#2630 <https://github.com/fmtlib/fmt/pull/2630>`_,
`#2635 <https://github.com/fmtlib/fmt/issues/2635>`_,
`#2638 <https://github.com/fmtlib/fmt/issues/2638>`_,
`#2653 <https://github.com/fmtlib/fmt/issues/2653>`_,
`#2654 <https://github.com/fmtlib/fmt/issues/2654>`_,
`#2661 <https://github.com/fmtlib/fmt/issues/2661>`_,
`#2664 <https://github.com/fmtlib/fmt/pull/2664>`_,
`#2684 <https://github.com/fmtlib/fmt/pull/2684>`_).
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_,
`@mwinterb <https://github.com/mwinterb>`_,
`@cdacamar (Cameron DaCamara) <https://github.com/cdacamar>`_,
`@TrebledJ (Johnathan) <https://github.com/TrebledJ>`_,
`@bodomartin (brm) <https://github.com/bodomartin>`_,
`@cquammen (Cory Quammen) <https://github.com/cquammen>`_,
`@white238 (Chris White) <https://github.com/white238>`_,
`@mmarkeloff (Max) <https://github.com/mmarkeloff>`_,
`@palacaze (Pierre-Antoine Lacaze) <https://github.com/palacaze>`_,
`@jcelerier (Jean-Michaël Celerier) <https://github.com/jcelerier>`_,
`@mborn-adi (Mathias Born) <https://github.com/mborn-adi>`_,
`@BrukerJWD (Jonathan W) <https://github.com/BrukerJWD>`_,
`@spyridon97 (Spiros Tsalikis) <https://github.com/spyridon97>`_,
`@phprus (Vladislav Shchapov) <https://github.com/phprus>`_,
`@oliverlee (Oliver Lee) <https://github.com/oliverlee>`_,
`@joshessman-llnl (Josh Essman) <https://github.com/joshessman-llnl>`_,
`@akohlmey (Axel Kohlmeyer) <https://github.com/akohlmey>`_,
`@timkalu <https://github.com/timkalu>`_,
`@olupton (Olli Lupton) <https://github.com/olupton>`_,
`@Acretock <https://github.com/Acretock>`_,
`@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_,
`@andrewcorrigan (Andrew Corrigan) <https://github.com/andrewcorrigan>`_,
`@lucpelletier <https://github.com/lucpelletier>`_,
`@HazardyKnusperkeks (Björn Schäpers) <https://github.com/HazardyKnusperkeks>`_.
8.0.1 - 2021-07-02
------------------
* Fixed the version number in the inline namespace
(`#2374 <https://github.com/fmtlib/fmt/issues/2374>`_).
* Added a missing presentation type check for ``std::string``
(`#2402 <https://github.com/fmtlib/fmt/issues/2402>`_).
* Fixed a linkage error when mixing code built with clang and gcc
(`#2377 <https://github.com/fmtlib/fmt/issues/2377>`_).
* Fixed documentation issues
(`#2396 <https://github.com/fmtlib/fmt/pull/2396>`_,
`#2403 <https://github.com/fmtlib/fmt/issues/2403>`_,
`#2406 <https://github.com/fmtlib/fmt/issues/2406>`_).
Thanks `@mkurdej (Marek Kurdej) <https://github.com/mkurdej>`_.
* Removed dead code in FP formatter (
`#2398 <https://github.com/fmtlib/fmt/pull/2398>`_).
Thanks `@javierhonduco (Javier Honduvilla Coto)
<https://github.com/javierhonduco>`_.
* Fixed various warnings and compilation issues
(`#2351 <https://github.com/fmtlib/fmt/issues/2351>`_,
`#2359 <https://github.com/fmtlib/fmt/issues/2359>`_,
`#2365 <https://github.com/fmtlib/fmt/pull/2365>`_,
`#2368 <https://github.com/fmtlib/fmt/issues/2368>`_,
`#2370 <https://github.com/fmtlib/fmt/pull/2370>`_,
`#2376 <https://github.com/fmtlib/fmt/pull/2376>`_,
`#2381 <https://github.com/fmtlib/fmt/pull/2381>`_,
`#2382 <https://github.com/fmtlib/fmt/pull/2382>`_,
`#2386 <https://github.com/fmtlib/fmt/issues/2386>`_,
`#2389 <https://github.com/fmtlib/fmt/pull/2389>`_,
`#2395 <https://github.com/fmtlib/fmt/pull/2395>`_,
`#2397 <https://github.com/fmtlib/fmt/pull/2397>`_,
`#2400 <https://github.com/fmtlib/fmt/issues/2400>`_,
`#2401 <https://github.com/fmtlib/fmt/issues/2401>`_,
`#2407 <https://github.com/fmtlib/fmt/pull/2407>`_).
Thanks `@zx2c4 (Jason A. Donenfeld) <https://github.com/zx2c4>`_,
`@AidanSun05 (Aidan Sun) <https://github.com/AidanSun05>`_,
`@mattiasljungstrom (Mattias Ljungström)
<https://github.com/mattiasljungstrom>`_,
`@joemmett (Jonathan Emmett) <https://github.com/joemmett>`_,
`@erengy (Eren Okka) <https://github.com/erengy>`_,
`@patlkli (Patrick Geltinger) <https://github.com/patlkli>`_,
`@gsjaardema (Greg Sjaardema) <https://github.com/gsjaardema>`_,
`@phprus (Vladislav Shchapov) <https://github.com/phprus>`_.
8.0.0 - 2021-06-21
------------------
* Enabled compile-time format string checks by default.
For example (`godbolt <https://godbolt.org/z/sMxcohGjz>`__):
.. code:: c++
#include <fmt/core.h>
int main() {
fmt::print("{:d}", "I am not a number");
}
gives a compile-time error on compilers with C++20 ``consteval`` support
(gcc 10+, clang 11+) because ``d`` is not a valid format specifier for a
string.
To pass a runtime string wrap it in ``fmt::runtime``:
.. code:: c++
fmt::print(fmt::runtime("{:d}"), "I am not a number");
* Added compile-time formatting
(`#2019 <https://github.com/fmtlib/fmt/pull/2019>`_,
`#2044 <https://github.com/fmtlib/fmt/pull/2044>`_,
`#2056 <https://github.com/fmtlib/fmt/pull/2056>`_,
`#2072 <https://github.com/fmtlib/fmt/pull/2072>`_,
`#2075 <https://github.com/fmtlib/fmt/pull/2075>`_,
`#2078 <https://github.com/fmtlib/fmt/issues/2078>`_,
`#2129 <https://github.com/fmtlib/fmt/pull/2129>`_,
`#2326 <https://github.com/fmtlib/fmt/pull/2326>`_).
For example (`godbolt <https://godbolt.org/z/Mxx9d89jM>`__):
.. code:: c++
#include <fmt/compile.h>
consteval auto compile_time_itoa(int value) -> std::array<char, 10> {
auto result = std::array<char, 10>();
fmt::format_to(result.data(), FMT_COMPILE("{}"), value);
return result;
}
constexpr auto answer = compile_time_itoa(42);
Most of the formatting functionality is available at compile time with a
notable exception of floating-point numbers and pointers.
Thanks `@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_.
* Optimized handling of format specifiers during format string compilation.
For example, hexadecimal formatting (``"{:x}"``) is now 3-7x faster than
before when using ``format_to`` with format string compilation and a
stack-allocated buffer (`#1944 <https://github.com/fmtlib/fmt/issues/1944>`_).
Before (7.1.3)::
----------------------------------------------------------------------------
Benchmark Time CPU Iterations
----------------------------------------------------------------------------
FMTCompileOld/0 15.5 ns 15.5 ns 43302898
FMTCompileOld/42 16.6 ns 16.6 ns 43278267
FMTCompileOld/273123 18.7 ns 18.6 ns 37035861
FMTCompileOld/9223372036854775807 19.4 ns 19.4 ns 35243000
----------------------------------------------------------------------------
After (8.x)::
----------------------------------------------------------------------------
Benchmark Time CPU Iterations
----------------------------------------------------------------------------
FMTCompileNew/0 1.99 ns 1.99 ns 360523686
FMTCompileNew/42 2.33 ns 2.33 ns 279865664
FMTCompileNew/273123 3.72 ns 3.71 ns 190230315
FMTCompileNew/9223372036854775807 5.28 ns 5.26 ns 130711631
----------------------------------------------------------------------------
It is even faster than ``std::to_chars`` from libc++ compiled with clang on
macOS::
----------------------------------------------------------------------------
Benchmark Time CPU Iterations
----------------------------------------------------------------------------
ToChars/0 4.42 ns 4.41 ns 160196630
ToChars/42 5.00 ns 4.98 ns 140735201
ToChars/273123 7.26 ns 7.24 ns 95784130
ToChars/9223372036854775807 8.77 ns 8.75 ns 75872534
----------------------------------------------------------------------------
In other cases, especially involving ``std::string`` construction, the
speed up is usually lower because handling format specifiers takes a smaller
fraction of the total time.
* Added the ``_cf`` user-defined literal to represent a compiled format string.
It can be used instead of the ``FMT_COMPILE`` macro
(`#2043 <https://github.com/fmtlib/fmt/pull/2043>`_,
`#2242 <https://github.com/fmtlib/fmt/pull/2242>`_):
.. code:: c++
#include <fmt/compile.h>
using namespace fmt::literals;
auto s = fmt::format(FMT_COMPILE("{}"), 42); // 🙁 not modern
auto s = fmt::format("{}"_cf, 42); // 🙂 modern as hell
It requires compiler support for class types in non-type template parameters
(a C++20 feature) which is available in GCC 9.3+.
Thanks `@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_.
* Format string compilation now requires ``format`` functions of ``formatter``
specializations for user-defined types to be ``const``:
.. code:: c++
template <> struct fmt::formatter<my_type>: formatter<string_view> {
template <typename FormatContext>
auto format(my_type obj, FormatContext& ctx) const { // Note const here.
// ...
}
};
* Added UDL-based named argument support to format string compilation
(`#2243 <https://github.com/fmtlib/fmt/pull/2243>`_,
`#2281 <https://github.com/fmtlib/fmt/pull/2281>`_). For example:
.. code:: c++
#include <fmt/compile.h>
using namespace fmt::literals;
auto s = fmt::format(FMT_COMPILE("{answer}"), "answer"_a = 42);
Here the argument named "answer" is resolved at compile time with no
runtime overhead.
Thanks `@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_.
* Added format string compilation support to ``fmt::print``
(`#2280 <https://github.com/fmtlib/fmt/issues/2280>`_,
`#2304 <https://github.com/fmtlib/fmt/pull/2304>`_).
Thanks `@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_.
* Added initial support for compiling {fmt} as a C++20 module
(`#2235 <https://github.com/fmtlib/fmt/pull/2235>`_,
`#2240 <https://github.com/fmtlib/fmt/pull/2240>`_,
`#2260 <https://github.com/fmtlib/fmt/pull/2260>`_,
`#2282 <https://github.com/fmtlib/fmt/pull/2282>`_,
`#2283 <https://github.com/fmtlib/fmt/pull/2283>`_,
`#2288 <https://github.com/fmtlib/fmt/pull/2288>`_,
`#2298 <https://github.com/fmtlib/fmt/pull/2298>`_,
`#2306 <https://github.com/fmtlib/fmt/pull/2306>`_,
`#2307 <https://github.com/fmtlib/fmt/pull/2307>`_,
`#2309 <https://github.com/fmtlib/fmt/pull/2309>`_,
`#2318 <https://github.com/fmtlib/fmt/pull/2318>`_,
`#2324 <https://github.com/fmtlib/fmt/pull/2324>`_,
`#2332 <https://github.com/fmtlib/fmt/pull/2332>`_,
`#2340 <https://github.com/fmtlib/fmt/pull/2340>`_).
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
* Made symbols private by default reducing shared library size
(`#2301 <https://github.com/fmtlib/fmt/pull/2301>`_). For example there was
a ~15% reported reduction on one platform.
Thanks `@sergiud (Sergiu Deitsch) <https://github.com/sergiud>`_.
* Optimized includes making the result of preprocessing ``fmt/format.h``
~20% smaller with libstdc++/C++20 and slightly improving build times
(`#1998 <https://github.com/fmtlib/fmt/issues/1998>`_).
* Added support of ranges with non-const ``begin`` / ``end``
(`#1953 <https://github.com/fmtlib/fmt/pull/1953>`_).
Thanks `@kitegi (sarah) <https://github.com/kitegi>`_.
* Added support of ``std::byte`` and other formattable types to ``fmt::join``
(`#1981 <https://github.com/fmtlib/fmt/issues/1981>`_,
`#2040 <https://github.com/fmtlib/fmt/issues/2040>`_,
`#2050 <https://github.com/fmtlib/fmt/pull/2050>`_,
`#2262 <https://github.com/fmtlib/fmt/issues/2262>`_). For example:
.. code:: c++
#include <fmt/format.h>
#include <cstddef>
#include <vector>
int main() {
auto bytes = std::vector{std::byte(4), std::byte(2)};
fmt::print("{}", fmt::join(bytes, ""));
}
prints "42".
Thanks `@kamibo (Camille Bordignon) <https://github.com/kamibo>`_.
* Implemented the default format for ``std::chrono::system_clock``
(`#2319 <https://github.com/fmtlib/fmt/issues/2319>`_,
`#2345 <https://github.com/fmtlib/fmt/pull/2345>`_). For example:
.. code:: c++
#include <fmt/chrono.h>
int main() {
fmt::print("{}", std::chrono::system_clock::now());
}
prints "2021-06-18 15:22:00" (the output depends on the current date and
time). Thanks `@sunmy2019 <https://github.com/sunmy2019>`_.
* Made more chrono specifiers locale independent by default. Use the ``'L'``
specifier to get localized formatting. For example:
.. code:: c++
#include <fmt/chrono.h>
int main() {
std::locale::global(std::locale("ru_RU.UTF-8"));
auto monday = std::chrono::weekday(1);
fmt::print("{}\n", monday); // prints "Mon"
fmt::print("{:L}\n", monday); // prints "пн"
}
* Improved locale handling in chrono formatting
(`#2337 <https://github.com/fmtlib/fmt/issues/2337>`_,
`#2349 <https://github.com/fmtlib/fmt/pull/2349>`_,
`#2350 <https://github.com/fmtlib/fmt/pull/2350>`_).
Thanks `@phprus (Vladislav Shchapov) <https://github.com/phprus>`_.
* Deprecated ``fmt/locale.h`` moving the formatting functions that take a
locale to ``fmt/format.h`` (``char``) and ``fmt/xchar`` (other overloads).
This doesn't introduce a dependency on ``<locale>`` so there is virtually no
compile time effect.
* Deprecated an undocumented ``format_to`` overload that takes
``basic_memory_buffer``.
* Made parameter order in ``vformat_to`` consistent with ``format_to``
(`#2327 <https://github.com/fmtlib/fmt/issues/2327>`_).
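  A sketch of the consistent argument order (the ``vlog``/``log`` helpers are
  illustrative, not part of the library):

  .. code:: c++

     #include <fmt/format.h>

     #include <iterator>
     #include <string>

     std::string vlog(fmt::string_view fmt_str, fmt::format_args args) {
       std::string out;
       // Output first, then the format string, then the arguments,
       // matching format_to.
       fmt::vformat_to(std::back_inserter(out), fmt_str, args);
       return out;
     }

     template <typename... T>
     std::string log(fmt::string_view fmt_str, T&&... args) {
       return vlog(fmt_str, fmt::make_format_args(args...));
     }

     int main() { fmt::print("{}\n", log("{} + {}", 1, 2)); }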
* Added support for time points with arbitrary durations
(`#2208 <https://github.com/fmtlib/fmt/issues/2208>`_). For example:
.. code:: c++
#include <fmt/chrono.h>
int main() {
using tp = std::chrono::time_point<
std::chrono::system_clock, std::chrono::seconds>;
fmt::print("{:%S}", tp(std::chrono::seconds(42)));
}
prints "42".
* Formatting floating-point numbers no longer produces trailing zeros by default
for consistency with ``std::format``. For example:
.. code:: c++
#include <fmt/core.h>
int main() {
fmt::print("{0:.3}", 1.1);
}
prints "1.1". Use the ``'#'`` specifier to keep trailing zeros.
* Dropped a limit on the number of elements in a range and replaced ``{}`` with
``[]`` as range delimiters for consistency with Python's ``str.format``.
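  For example (a minimal sketch of the new default delimiters):

  .. code:: c++

     #include <fmt/ranges.h>

     #include <vector>

     int main() {
       fmt::print("{}\n", std::vector<int>{1, 2, 3});  // prints "[1, 2, 3]"
     }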
* The ``'L'`` specifier for locale-specific numeric formatting can now be
combined with presentation specifiers as in ``std::format``. For example:
.. code:: c++
#include <fmt/core.h>
#include <locale>
int main() {
std::locale::global(std::locale("fr_FR.UTF-8"));
fmt::print("{0:.2Lf}", 0.42);
}
prints "0,42". The deprecated ``'n'`` specifier has been removed.
* Made the ``0`` specifier ignored for infinity and NaN
(`#2305 <https://github.com/fmtlib/fmt/issues/2305>`_,
`#2310 <https://github.com/fmtlib/fmt/pull/2310>`_).
Thanks `@Liedtke (Matthias Liedtke) <https://github.com/Liedtke>`_.
* Made the hexfloat formatting use the right alignment by default
(`#2308 <https://github.com/fmtlib/fmt/issues/2308>`_,
`#2317 <https://github.com/fmtlib/fmt/pull/2317>`_).
Thanks `@Liedtke (Matthias Liedtke) <https://github.com/Liedtke>`_.
* Removed the deprecated numeric alignment (``'='``). Use the ``'0'`` specifier
instead.
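  For example (a minimal sketch of the replacement):

  .. code:: c++

     #include <fmt/core.h>

     int main() {
       // "{:=8.2f}" is no longer accepted; sign-aware zero padding uses '0':
       fmt::print("{:08.2f}\n", -3.14);  // prints "-0003.14"
     }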
* Removed the deprecated ``fmt/posix.h`` header that has been replaced with
``fmt/os.h``.
* Removed the deprecated ``format_to_n_context``, ``format_to_n_args`` and
``make_format_to_n_args``. They have been replaced with ``format_context``,
  ``format_args`` and ``make_format_args`` respectively.
* Moved ``wchar_t``-specific functions and types to ``fmt/xchar.h``.
You can define ``FMT_DEPRECATED_INCLUDE_XCHAR`` to automatically include
``fmt/xchar.h`` from ``fmt/format.h`` but this will be disabled in the next
major release.
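  For example (a minimal sketch of including the new header directly):

  .. code:: c++

     #include <fmt/xchar.h>

     #include <string>

     int main() {
       std::wstring s = fmt::format(L"{}", 42);
       fmt::print(L"The answer is {}.\n", s);
     }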
* Fixed handling of the ``'+'`` specifier in localized formatting
(`#2133 <https://github.com/fmtlib/fmt/issues/2133>`_).
* Added support for the ``'s'`` format specifier that gives textual
representation of ``bool``
(`#2094 <https://github.com/fmtlib/fmt/issues/2094>`_,
`#2109 <https://github.com/fmtlib/fmt/pull/2109>`_). For example:
.. code:: c++
#include <fmt/core.h>
int main() {
fmt::print("{:s}", true);
}
prints "true".
Thanks `@powercoderlol (Ivan Polyakov) <https://github.com/powercoderlol>`_.
* Made ``fmt::ptr`` work with function pointers
(`#2131 <https://github.com/fmtlib/fmt/pull/2131>`_). For example:
.. code:: c++
#include <fmt/format.h>
int main() {
fmt::print("My main: {}\n", fmt::ptr(main));
}
Thanks `@mikecrowe (Mike Crowe) <https://github.com/mikecrowe>`_.
* The undocumented support for specializing ``formatter`` for pointer types
has been removed.
* Fixed ``fmt::formatted_size`` with format string compilation
(`#2141 <https://github.com/fmtlib/fmt/pull/2141>`_,
`#2161 <https://github.com/fmtlib/fmt/pull/2161>`_).
Thanks `@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_.
* Fixed handling of empty format strings during format string compilation
(`#2042 <https://github.com/fmtlib/fmt/issues/2042>`_):
.. code:: c++
auto s = fmt::format(FMT_COMPILE(""));
Thanks `@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_.
* Fixed handling of enums in ``fmt::to_string``
(`#2036 <https://github.com/fmtlib/fmt/issues/2036>`_).
* Improved width computation
(`#2033 <https://github.com/fmtlib/fmt/issues/2033>`_,
`#2091 <https://github.com/fmtlib/fmt/issues/2091>`_). For example:
.. code:: c++
#include <fmt/core.h>
int main() {
fmt::print("{:-<10}{}\n", "你好", "世界");
fmt::print("{:-<10}{}\n", "hello", "world");
}
prints
.. image:: https://user-images.githubusercontent.com/576385/
119840373-cea3ca80-beb9-11eb-91e0-54266c48e181.png
on a modern terminal.
* The experimental fast output stream (``fmt::ostream``) is now truncated by
default for consistency with ``fopen``
(`#2018 <https://github.com/fmtlib/fmt/issues/2018>`_). For example:
.. code:: c++
#include <fmt/os.h>
int main() {
fmt::ostream out1 = fmt::output_file("guide");
out1.print("Zaphod");
out1.close();
fmt::ostream out2 = fmt::output_file("guide");
out2.print("Ford");
}
writes "Ford" to the file "guide". To preserve the old file content if any
pass ``fmt::file::WRONLY | fmt::file::CREATE`` flags to ``fmt::output_file``.
* Fixed moving of ``fmt::ostream`` that holds buffered data
(`#2197 <https://github.com/fmtlib/fmt/issues/2197>`_,
`#2198 <https://github.com/fmtlib/fmt/pull/2198>`_).
Thanks `@vtta <https://github.com/vtta>`_.
* Replaced the ``fmt::system_error`` exception with a function of the same
name that constructs ``std::system_error``
(`#2266 <https://github.com/fmtlib/fmt/issues/2266>`_).
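  A minimal sketch of the function form (the file name is illustrative):

  .. code:: c++

     #include <fmt/format.h>

     #include <cerrno>
     #include <cstdio>

     int main() {
       std::FILE* f = std::fopen("does-not-exist", "r");
       if (!f)
         throw fmt::system_error(errno, "cannot open file '{}'", "does-not-exist");
       std::fclose(f);
     }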
* Replaced the ``fmt::windows_error`` exception with a function of the same
name that constructs ``std::system_error`` with the category returned by
``fmt::system_category()``
(`#2274 <https://github.com/fmtlib/fmt/issues/2274>`_,
`#2275 <https://github.com/fmtlib/fmt/pull/2275>`_).
  The latter is similar to ``std::system_category`` but correctly handles UTF-8.
Thanks `@phprus (Vladislav Shchapov) <https://github.com/phprus>`_.
* Replaced ``fmt::error_code`` with ``std::error_code`` and made it formattable
(`#2269 <https://github.com/fmtlib/fmt/issues/2269>`_,
`#2270 <https://github.com/fmtlib/fmt/pull/2270>`_,
`#2273 <https://github.com/fmtlib/fmt/pull/2273>`_).
Thanks `@phprus (Vladislav Shchapov) <https://github.com/phprus>`_.
* Added speech synthesis support
(`#2206 <https://github.com/fmtlib/fmt/pull/2206>`_).
* Made ``format_to`` work with a memory buffer that has a custom allocator
(`#2300 <https://github.com/fmtlib/fmt/pull/2300>`_).
Thanks `@voxmea <https://github.com/voxmea>`_.
* Added ``Allocator::max_size`` support to ``basic_memory_buffer``.
(`#1960 <https://github.com/fmtlib/fmt/pull/1960>`_).
Thanks `@phprus (Vladislav Shchapov) <https://github.com/phprus>`_.
* Added wide string support to ``fmt::join``
(`#2236 <https://github.com/fmtlib/fmt/pull/2236>`_).
Thanks `@crbrz <https://github.com/crbrz>`_.
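  For example (a minimal sketch, assuming the wide ``fmt::join`` overload is
  provided by ``fmt/xchar.h``):

  .. code:: c++

     #include <fmt/xchar.h>

     #include <string>
     #include <vector>

     int main() {
       auto parts = std::vector<std::wstring>{L"ab", L"cd"};
       fmt::print(L"{}\n", fmt::join(parts, L", "));  // prints "ab, cd"
     }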
* Made iterators passed to ``formatter`` specializations via a format context
satisfy C++20 ``std::output_iterator`` requirements
(`#2156 <https://github.com/fmtlib/fmt/issues/2156>`_,
`#2158 <https://github.com/fmtlib/fmt/pull/2158>`_,
`#2195 <https://github.com/fmtlib/fmt/issues/2195>`_,
`#2204 <https://github.com/fmtlib/fmt/pull/2204>`_).
Thanks `@randomnetcat (Jason Cobb) <https://github.com/randomnetcat>`_.
* Optimized the ``printf`` implementation
(`#1982 <https://github.com/fmtlib/fmt/pull/1982>`_,
`#1984 <https://github.com/fmtlib/fmt/pull/1984>`_,
`#2016 <https://github.com/fmtlib/fmt/pull/2016>`_,
`#2164 <https://github.com/fmtlib/fmt/pull/2164>`_).
Thanks `@rimathia <https://github.com/rimathia>`_ and
`@moiwi <https://github.com/moiwi>`_.
* Improved detection of ``constexpr`` ``char_traits``
(`#2246 <https://github.com/fmtlib/fmt/pull/2246>`_,
`#2257 <https://github.com/fmtlib/fmt/pull/2257>`_).
Thanks `@phprus (Vladislav Shchapov) <https://github.com/phprus>`_.
* Fixed writing to ``stdout`` when it is redirected to ``NUL`` on Windows
(`#2080 <https://github.com/fmtlib/fmt/issues/2080>`_).
* Fixed exception propagation from iterators
(`#2097 <https://github.com/fmtlib/fmt/issues/2097>`_).
* Improved ``strftime`` error handling
(`#2238 <https://github.com/fmtlib/fmt/issues/2238>`_,
`#2244 <https://github.com/fmtlib/fmt/pull/2244>`_).
Thanks `@yumeyao <https://github.com/yumeyao>`_.
* Stopped using deprecated GCC UDL template extension.
* Added ``fmt/args.h`` to the install target
(`#2096 <https://github.com/fmtlib/fmt/issues/2096>`_).
* Error messages are now passed to assert when exceptions are disabled
(`#2145 <https://github.com/fmtlib/fmt/pull/2145>`_).
Thanks `@NobodyXu (Jiahao XU) <https://github.com/NobodyXu>`_.
* Added the ``FMT_MASTER_PROJECT`` CMake option to control build and install
targets when {fmt} is included via ``add_subdirectory``
(`#2098 <https://github.com/fmtlib/fmt/issues/2098>`_,
`#2100 <https://github.com/fmtlib/fmt/pull/2100>`_).
Thanks `@randomizedthinking <https://github.com/randomizedthinking>`_.
* Improved build configuration
(`#2026 <https://github.com/fmtlib/fmt/pull/2026>`_,
`#2122 <https://github.com/fmtlib/fmt/pull/2122>`_).
Thanks `@luncliff (Park DongHa) <https://github.com/luncliff>`_ and
`@ibaned (Dan Ibanez) <https://github.com/ibaned>`_.
* Fixed various warnings and compilation issues
(`#1947 <https://github.com/fmtlib/fmt/issues/1947>`_,
`#1959 <https://github.com/fmtlib/fmt/pull/1959>`_,
`#1963 <https://github.com/fmtlib/fmt/pull/1963>`_,
`#1965 <https://github.com/fmtlib/fmt/pull/1965>`_,
`#1966 <https://github.com/fmtlib/fmt/issues/1966>`_,
`#1974 <https://github.com/fmtlib/fmt/pull/1974>`_,
`#1975 <https://github.com/fmtlib/fmt/pull/1975>`_,
`#1990 <https://github.com/fmtlib/fmt/pull/1990>`_,
`#2000 <https://github.com/fmtlib/fmt/issues/2000>`_,
`#2001 <https://github.com/fmtlib/fmt/pull/2001>`_,
`#2002 <https://github.com/fmtlib/fmt/issues/2002>`_,
`#2004 <https://github.com/fmtlib/fmt/issues/2004>`_,
`#2006 <https://github.com/fmtlib/fmt/pull/2006>`_,
`#2009 <https://github.com/fmtlib/fmt/pull/2009>`_,
`#2010 <https://github.com/fmtlib/fmt/pull/2010>`_,
`#2038 <https://github.com/fmtlib/fmt/issues/2038>`_,
`#2039 <https://github.com/fmtlib/fmt/issues/2039>`_,
`#2047 <https://github.com/fmtlib/fmt/issues/2047>`_,
`#2053 <https://github.com/fmtlib/fmt/pull/2053>`_,
`#2059 <https://github.com/fmtlib/fmt/issues/2059>`_,
`#2065 <https://github.com/fmtlib/fmt/pull/2065>`_,
`#2067 <https://github.com/fmtlib/fmt/pull/2067>`_,
`#2068 <https://github.com/fmtlib/fmt/pull/2068>`_,
`#2073 <https://github.com/fmtlib/fmt/pull/2073>`_,
`#2103 <https://github.com/fmtlib/fmt/issues/2103>`_,
`#2105 <https://github.com/fmtlib/fmt/issues/2105>`_,
`#2106 <https://github.com/fmtlib/fmt/pull/2106>`_,
`#2107 <https://github.com/fmtlib/fmt/pull/2107>`_,
`#2116 <https://github.com/fmtlib/fmt/issues/2116>`_,
`#2117 <https://github.com/fmtlib/fmt/pull/2117>`_,
`#2118 <https://github.com/fmtlib/fmt/issues/2118>`_,
`#2119 <https://github.com/fmtlib/fmt/pull/2119>`_,
`#2127 <https://github.com/fmtlib/fmt/issues/2127>`_,
`#2128 <https://github.com/fmtlib/fmt/pull/2128>`_,
`#2140 <https://github.com/fmtlib/fmt/issues/2140>`_,
`#2142 <https://github.com/fmtlib/fmt/issues/2142>`_,
`#2143 <https://github.com/fmtlib/fmt/pull/2143>`_,
`#2144 <https://github.com/fmtlib/fmt/pull/2144>`_,
`#2147 <https://github.com/fmtlib/fmt/issues/2147>`_,
`#2148 <https://github.com/fmtlib/fmt/issues/2148>`_,
`#2149 <https://github.com/fmtlib/fmt/issues/2149>`_,
`#2152 <https://github.com/fmtlib/fmt/pull/2152>`_,
`#2160 <https://github.com/fmtlib/fmt/pull/2160>`_,
`#2170 <https://github.com/fmtlib/fmt/issues/2170>`_,
`#2175 <https://github.com/fmtlib/fmt/issues/2175>`_,
`#2176 <https://github.com/fmtlib/fmt/issues/2176>`_,
`#2177 <https://github.com/fmtlib/fmt/pull/2177>`_,
`#2178 <https://github.com/fmtlib/fmt/issues/2178>`_,
`#2179 <https://github.com/fmtlib/fmt/pull/2179>`_,
`#2180 <https://github.com/fmtlib/fmt/issues/2180>`_,
`#2181 <https://github.com/fmtlib/fmt/issues/2181>`_,
`#2183 <https://github.com/fmtlib/fmt/pull/2183>`_,
`#2184 <https://github.com/fmtlib/fmt/issues/2184>`_,
`#2185 <https://github.com/fmtlib/fmt/issues/2185>`_,
`#2186 <https://github.com/fmtlib/fmt/pull/2186>`_,
`#2187 <https://github.com/fmtlib/fmt/pull/2187>`_,
`#2190 <https://github.com/fmtlib/fmt/pull/2190>`_,
`#2192 <https://github.com/fmtlib/fmt/pull/2192>`_,
`#2194 <https://github.com/fmtlib/fmt/pull/2194>`_,
`#2205 <https://github.com/fmtlib/fmt/pull/2205>`_,
`#2210 <https://github.com/fmtlib/fmt/issues/2210>`_,
`#2211 <https://github.com/fmtlib/fmt/pull/2211>`_,
`#2215 <https://github.com/fmtlib/fmt/pull/2215>`_,
`#2216 <https://github.com/fmtlib/fmt/pull/2216>`_,
`#2218 <https://github.com/fmtlib/fmt/pull/2218>`_,
`#2220 <https://github.com/fmtlib/fmt/pull/2220>`_,
`#2228 <https://github.com/fmtlib/fmt/issues/2228>`_,
`#2229 <https://github.com/fmtlib/fmt/pull/2229>`_,
`#2230 <https://github.com/fmtlib/fmt/pull/2230>`_,
`#2233 <https://github.com/fmtlib/fmt/issues/2233>`_,
`#2239 <https://github.com/fmtlib/fmt/pull/2239>`_,
`#2248 <https://github.com/fmtlib/fmt/issues/2248>`_,
`#2252 <https://github.com/fmtlib/fmt/issues/2252>`_,
`#2253 <https://github.com/fmtlib/fmt/pull/2253>`_,
`#2255 <https://github.com/fmtlib/fmt/pull/2255>`_,
`#2261 <https://github.com/fmtlib/fmt/issues/2261>`_,
`#2278 <https://github.com/fmtlib/fmt/issues/2278>`_,
`#2284 <https://github.com/fmtlib/fmt/issues/2284>`_,
`#2287 <https://github.com/fmtlib/fmt/pull/2287>`_,
`#2289 <https://github.com/fmtlib/fmt/pull/2289>`_,
`#2290 <https://github.com/fmtlib/fmt/pull/2290>`_,
`#2293 <https://github.com/fmtlib/fmt/pull/2293>`_,
`#2295 <https://github.com/fmtlib/fmt/issues/2295>`_,
`#2296 <https://github.com/fmtlib/fmt/pull/2296>`_,
`#2297 <https://github.com/fmtlib/fmt/pull/2297>`_,
`#2311 <https://github.com/fmtlib/fmt/issues/2311>`_,
`#2313 <https://github.com/fmtlib/fmt/pull/2313>`_,
`#2315 <https://github.com/fmtlib/fmt/pull/2315>`_,
`#2320 <https://github.com/fmtlib/fmt/issues/2320>`_,
`#2321 <https://github.com/fmtlib/fmt/pull/2321>`_,
`#2323 <https://github.com/fmtlib/fmt/pull/2323>`_,
`#2328 <https://github.com/fmtlib/fmt/issues/2328>`_,
`#2329 <https://github.com/fmtlib/fmt/pull/2329>`_,
`#2333 <https://github.com/fmtlib/fmt/pull/2333>`_,
`#2338 <https://github.com/fmtlib/fmt/pull/2338>`_,
`#2341 <https://github.com/fmtlib/fmt/pull/2341>`_).
Thanks `@darklukee <https://github.com/darklukee>`_,
`@fagg (Ashton Fagg) <https://github.com/fagg>`_,
`@killerbot242 (Lieven de Cock) <https://github.com/killerbot242>`_,
`@jgopel (Jonathan Gopel) <https://github.com/jgopel>`_,
`@yeswalrus (Walter Gray) <https://github.com/yeswalrus>`_,
`@Finkman <https://github.com/Finkman>`_,
`@HazardyKnusperkeks (Björn Schäpers) <https://github.com/HazardyKnusperkeks>`_,
`@dkavolis (Daumantas Kavolis) <https://github.com/dkavolis>`_,
`@concatime (Issam Maghni) <https://github.com/concatime>`_,
`@chronoxor (Ivan Shynkarenka) <https://github.com/chronoxor>`_,
`@summivox (Yin Zhong) <https://github.com/summivox>`_,
`@yNeo <https://github.com/yNeo>`_,
`@Apache-HB (Elliot) <https://github.com/Apache-HB>`_,
`@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_,
`@toojays (John Steele Scott) <https://github.com/toojays>`_,
`@Brainy0207 <https://github.com/Brainy0207>`_,
`@vadz (VZ) <https://github.com/vadz>`_,
`@imsherlock (Ryan Sherlock) <https://github.com/imsherlock>`_,
`@phprus (Vladislav Shchapov) <https://github.com/phprus>`_,
`@white238 (Chris White) <https://github.com/white238>`_,
`@yafshar (Yaser Afshar) <https://github.com/yafshar>`_,
`@BillyDonahue (Billy Donahue) <https://github.com/BillyDonahue>`_,
`@jstaahl <https://github.com/jstaahl>`_,
`@denchat <https://github.com/denchat>`_,
`@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_,
`@ilyakurdyukov (Ilya Kurdyukov) <https://github.com/ilyakurdyukov>`_,
`@ilmai <https://github.com/ilmai>`_,
`@JessyDL (Jessy De Lannoit) <https://github.com/JessyDL>`_,
`@sergiud (Sergiu Deitsch) <https://github.com/sergiud>`_,
`@mwinterb <https://github.com/mwinterb>`_,
`@sven-herrmann <https://github.com/sven-herrmann>`_,
`@jmelas (John Melas) <https://github.com/jmelas>`_,
`@twoixter (Jose Miguel Pérez) <https://github.com/twoixter>`_,
`@crbrz <https://github.com/crbrz>`_,
`@upsj (Tobias Ribizel) <https://github.com/upsj>`_.
* Improved documentation
(`#1986 <https://github.com/fmtlib/fmt/issues/1986>`_,
`#2051 <https://github.com/fmtlib/fmt/pull/2051>`_,
`#2057 <https://github.com/fmtlib/fmt/issues/2057>`_,
`#2081 <https://github.com/fmtlib/fmt/pull/2081>`_,
`#2084 <https://github.com/fmtlib/fmt/issues/2084>`_,
`#2312 <https://github.com/fmtlib/fmt/pull/2312>`_).
Thanks `@imba-tjd (谭九鼎) <https://github.com/imba-tjd>`_,
`@0x416c69 (AlιAѕѕaѕѕιN) <https://github.com/0x416c69>`_,
`@mordante <https://github.com/mordante>`_.
* Continuous integration and test improvements
(`#1969 <https://github.com/fmtlib/fmt/issues/1969>`_,
`#1991 <https://github.com/fmtlib/fmt/pull/1991>`_,
`#2020 <https://github.com/fmtlib/fmt/pull/2020>`_,
`#2110 <https://github.com/fmtlib/fmt/pull/2110>`_,
`#2114 <https://github.com/fmtlib/fmt/pull/2114>`_,
`#2196 <https://github.com/fmtlib/fmt/issues/2196>`_,
`#2217 <https://github.com/fmtlib/fmt/pull/2217>`_,
`#2247 <https://github.com/fmtlib/fmt/pull/2247>`_,
`#2256 <https://github.com/fmtlib/fmt/pull/2256>`_,
`#2336 <https://github.com/fmtlib/fmt/pull/2336>`_,
`#2346 <https://github.com/fmtlib/fmt/pull/2346>`_).
Thanks `@jgopel (Jonathan Gopel) <https://github.com/jgopel>`_,
`@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_ and
`@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
7.1.3 - 2020-11-24
------------------
* Fixed handling of buffer boundaries in ``format_to_n``
(`#1996 <https://github.com/fmtlib/fmt/issues/1996>`_,
`#2029 <https://github.com/fmtlib/fmt/issues/2029>`_).
* Fixed linkage errors when linking with a shared library
(`#2011 <https://github.com/fmtlib/fmt/issues/2011>`_).
* Reintroduced ostream support to range formatters
(`#2014 <https://github.com/fmtlib/fmt/issues/2014>`_).
* Worked around an issue with mixing std versions in gcc
(`#2017 <https://github.com/fmtlib/fmt/issues/2017>`_).
7.1.2 - 2020-11-04
------------------
* Fixed floating point formatting with large precision
(`#1976 <https://github.com/fmtlib/fmt/issues/1976>`_).
7.1.1 - 2020-11-01
------------------
* Fixed ABI compatibility with 7.0.x
(`#1961 <https://github.com/fmtlib/fmt/issues/1961>`_).
* Added the ``FMT_ARM_ABI_COMPATIBILITY`` macro to work around ABI
incompatibility between GCC and Clang on ARM
(`#1919 <https://github.com/fmtlib/fmt/issues/1919>`_).
* Worked around a SFINAE bug in GCC 8
(`#1957 <https://github.com/fmtlib/fmt/issues/1957>`_).
* Fixed linkage errors when building with GCC's LTO
(`#1955 <https://github.com/fmtlib/fmt/issues/1955>`_).
* Fixed a compilation error when building without ``__builtin_clz`` or equivalent
(`#1968 <https://github.com/fmtlib/fmt/pull/1968>`_).
Thanks `@tohammer (Tobias Hammer) <https://github.com/tohammer>`_.
* Fixed a sign conversion warning
(`#1964 <https://github.com/fmtlib/fmt/pull/1964>`_).
Thanks `@OptoCloud <https://github.com/OptoCloud>`_.
7.1.0 - 2020-10-25
------------------
* Switched from `Grisu3
<https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf>`_
to `Dragonbox <https://github.com/jk-jeon/dragonbox>`_ for the default
floating-point formatting which gives the shortest decimal representation
with round-trip guarantee and correct rounding
(`#1882 <https://github.com/fmtlib/fmt/pull/1882>`_,
`#1887 <https://github.com/fmtlib/fmt/pull/1887>`_,
`#1894 <https://github.com/fmtlib/fmt/pull/1894>`_). This makes {fmt} up to
20-30x faster than common implementations of ``std::ostringstream`` and
``sprintf`` on `dtoa-benchmark <https://github.com/fmtlib/dtoa-benchmark>`_
and faster than double-conversion and Ryū:
.. image:: https://user-images.githubusercontent.com/576385/
95684665-11719600-0ba8-11eb-8e5b-972ff4e49428.png
It is possible to get even better performance at the cost of larger binary
size by compiling with the ``FMT_USE_FULL_CACHE_DRAGONBOX`` macro set to 1.
Thanks `@jk-jeon (Junekey Jeon) <https://github.com/jk-jeon>`_.
* Added an experimental unsynchronized file output API which, together with
`format string compilation <https://fmt.dev/latest/api.html#compile-api>`_,
can give `5-9 times speed up compared to fprintf
<https://www.zverovich.net/2020/08/04/optimal-file-buffer-size.html>`_
on common platforms (`godbolt <https://godbolt.org/z/nsTcG8>`__):
.. code:: c++
#include <fmt/os.h>
int main() {
auto f = fmt::output_file("guide");
f.print("The answer is {}.", 42);
}
* Added a formatter for ``std::chrono::time_point<system_clock>``
(`#1819 <https://github.com/fmtlib/fmt/issues/1819>`_,
`#1837 <https://github.com/fmtlib/fmt/pull/1837>`_). For example
(`godbolt <https://godbolt.org/z/c4M6fh>`__):
.. code:: c++
#include <fmt/chrono.h>
int main() {
auto now = std::chrono::system_clock::now();
fmt::print("The time is {:%H:%M:%S}.\n", now);
}
Thanks `@adamburgess (Adam Burgess) <https://github.com/adamburgess>`_.
* Added support for ranges with non-const ``begin``/``end`` to ``fmt::join``
(`#1784 <https://github.com/fmtlib/fmt/issues/1784>`_,
`#1786 <https://github.com/fmtlib/fmt/pull/1786>`_). For example
(`godbolt <https://godbolt.org/z/jP63Tv>`__):
.. code:: c++
#include <fmt/ranges.h>
#include <range/v3/view/filter.hpp>
int main() {
using std::literals::string_literals::operator""s;
auto strs = std::array{"a"s, "bb"s, "ccc"s};
auto range = strs | ranges::views::filter(
[] (const std::string &x) { return x.size() != 2; }
);
fmt::print("{}\n", fmt::join(range, ""));
}
prints "accc".
Thanks `@tonyelewis (Tony E Lewis) <https://github.com/tonyelewis>`_.
* Added a ``memory_buffer::append`` overload that takes a range
(`#1806 <https://github.com/fmtlib/fmt/pull/1806>`_).
Thanks `@BRevzin (Barry Revzin) <https://github.com/BRevzin>`_.
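  A minimal sketch of the new overload:

  .. code:: c++

     #include <fmt/format.h>

     #include <iterator>
     #include <string>

     int main() {
       fmt::memory_buffer buf;
       std::string prefix = "value = ";
       buf.append(prefix);  // append a contiguous range directly
       fmt::format_to(std::back_inserter(buf), "{}", 42);
       fmt::print("{}\n", fmt::to_string(buf));  // prints "value = 42"
     }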
* Improved handling of single code units in ``FMT_COMPILE``. For example:
.. code:: c++
#include <fmt/compile.h>
char* f(char* buf) {
return fmt::format_to(buf, FMT_COMPILE("x{}"), 42);
}
compiles to just (`godbolt <https://godbolt.org/z/5vncz3>`__):
.. code:: asm
_Z1fPc:
movb $120, (%rdi)
xorl %edx, %edx
cmpl $42, _ZN3fmt2v76detail10basic_dataIvE23zero_or_powers_of_10_32E+8(%rip)
movl $3, %eax
seta %dl
subl %edx, %eax
movzwl _ZN3fmt2v76detail10basic_dataIvE6digitsE+84(%rip), %edx
cltq
addq %rdi, %rax
movw %dx, -2(%rax)
ret
Here a single ``mov`` instruction writes ``'x'`` (``$120``) to the output
buffer.
* Added dynamic width support to format string compilation
(`#1809 <https://github.com/fmtlib/fmt/issues/1809>`_).
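  For example (a minimal sketch; the width is supplied as a runtime argument):

  .. code:: c++

     #include <fmt/compile.h>

     int main() {
       char buf[16];
       auto end = fmt::format_to(buf, FMT_COMPILE("{:>{}}"), 42, 6);
       *end = '\0';  // buf now holds "    42"
     }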
* Improved error reporting for unformattable types: now you'll get the type name
directly in the error message instead of the note:
.. code:: c++
#include <fmt/core.h>
struct how_about_no {};
int main() {
fmt::print("{}", how_about_no());
}
Error (`godbolt <https://godbolt.org/z/GoxM4e>`__):
``fmt/core.h:1438:3: error: static_assert failed due to requirement
'fmt::v7::formattable<how_about_no>()' "Cannot format an argument.
To make type T formattable provide a formatter<T> specialization:
https://fmt.dev/latest/api.html#udt"
...``
* Added the `make_args_checked <https://fmt.dev/7.1.0/api.html#argument-lists>`_
function template that allows you to write formatting functions with
compile-time format string checks and avoid binary code bloat
(`godbolt <https://godbolt.org/z/PEf9qr>`__):
.. code:: c++
void vlog(const char* file, int line, fmt::string_view format,
fmt::format_args args) {
fmt::print("{}: {}: ", file, line);
fmt::vprint(format, args);
}
template <typename S, typename... Args>
void log(const char* file, int line, const S& format, Args&&... args) {
vlog(file, line, format,
fmt::make_args_checked<Args...>(format, args...));
}
#define MY_LOG(format, ...) \
log(__FILE__, __LINE__, FMT_STRING(format), __VA_ARGS__)
MY_LOG("invalid squishiness: {}", 42);
* Replaced ``snprintf`` fallback with a faster internal IEEE 754 ``float`` and
``double`` formatter for arbitrary precision. For example
(`godbolt <https://godbolt.org/z/dPhWvj>`__):
.. code:: c++
#include <fmt/core.h>
int main() {
fmt::print("{:.500}\n", 4.9406564584124654E-324);
}
prints
``4.9406564584124654417656879286822137236505980261432476442558568250067550727020875186529983636163599237979656469544571773092665671035593979639877479601078187812630071319031140452784581716784898210368871863605699873072305000638740915356498438731247339727316961514003171538539807412623856559117102665855668676818703956031062493194527159149245532930545654440112748012970999954193198940908041656332452475714786901472678015935523861155013480352649347201937902681071074917033322268447533357208324319360923829e-324``.
* Made ``format_to_n`` and ``formatted_size`` part of the `core API
<https://fmt.dev/latest/api.html#core-api>`__
(`godbolt <https://godbolt.org/z/sPjY1K>`__):
.. code:: c++
#include <fmt/core.h>
int main() {
char buffer[10];
auto result = fmt::format_to_n(buffer, sizeof(buffer), "{}", 42);
}
* Added ``fmt::format_to_n`` overload with format string compilation
(`#1764 <https://github.com/fmtlib/fmt/issues/1764>`_,
`#1767 <https://github.com/fmtlib/fmt/pull/1767>`_,
`#1869 <https://github.com/fmtlib/fmt/pull/1869>`_). For example
(`godbolt <https://godbolt.org/z/93h86q>`__):
.. code:: c++
#include <fmt/compile.h>
int main() {
char buffer[8];
fmt::format_to_n(buffer, sizeof(buffer), FMT_COMPILE("{}"), 42);
}
Thanks `@Kurkin (Dmitry Kurkin) <https://github.com/Kurkin>`_,
`@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_.
* Added ``fmt::format_to`` overload that take ``text_style``
(`#1593 <https://github.com/fmtlib/fmt/issues/1593>`_,
`#1842 <https://github.com/fmtlib/fmt/issues/1842>`_,
`#1843 <https://github.com/fmtlib/fmt/pull/1843>`_). For example
(`godbolt <https://godbolt.org/z/91153r>`__):
.. code:: c++
#include <fmt/color.h>
int main() {
std::string out;
fmt::format_to(std::back_inserter(out),
fmt::emphasis::bold | fg(fmt::color::red),
"The answer is {}.", 42);
}
Thanks `@Naios (Denis Blank) <https://github.com/Naios>`_.
* Made the ``'#'`` specifier emit trailing zeros in addition to the decimal
point (`#1797 <https://github.com/fmtlib/fmt/issues/1797>`_). For example
(`godbolt <https://godbolt.org/z/bhdcW9>`__):
.. code:: c++
#include <fmt/core.h>
int main() {
fmt::print("{:#.2g}", 0.5);
}
prints ``0.50``.
* Changed the default floating point format to not include ``.0`` for
consistency with ``std::format`` and ``std::to_chars``
(`#1893 <https://github.com/fmtlib/fmt/issues/1893>`_,
`#1943 <https://github.com/fmtlib/fmt/issues/1943>`_). It is possible to get
the decimal point and trailing zero with the ``#`` specifier.
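  For example (a minimal sketch; the outputs in the comments are illustrative):

  .. code:: c++

     #include <fmt/core.h>

     int main() {
       fmt::print("{}\n", 42.0);    // prints "42" (no trailing ".0")
       fmt::print("{:#}\n", 42.0);  // the alternate form keeps the decimal point
     }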
* Fixed an issue with floating-point formatting that could result in addition of
a non-significant trailing zero in rare cases e.g. ``1.00e-34`` instead of
``1.0e-34`` (`#1873 <https://github.com/fmtlib/fmt/issues/1873>`_,
`#1917 <https://github.com/fmtlib/fmt/issues/1917>`_).
* Made ``fmt::to_string`` fallback on ``ostream`` insertion operator if
the ``formatter`` specialization is not provided
(`#1815 <https://github.com/fmtlib/fmt/issues/1815>`_,
`#1829 <https://github.com/fmtlib/fmt/pull/1829>`_).
Thanks `@alexezeder (Alexey Ochapov) <https://github.com/alexezeder>`_.
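  A minimal sketch (assuming ``fmt/ostream.h`` is included to enable the
  ``ostream`` fallback):

  .. code:: c++

     #include <fmt/ostream.h>

     #include <ostream>
     #include <string>

     struct point { int x, y; };

     std::ostream& operator<<(std::ostream& os, const point& p) {
       return os << '(' << p.x << ", " << p.y << ')';
     }

     int main() {
       std::string s = fmt::to_string(point{1, 2});  // "(1, 2)" via operator<<
       fmt::print("{}\n", s);
     }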
* Added support for the append mode to the experimental file API and
improved ``fcntl.h`` detection.
(`#1847 <https://github.com/fmtlib/fmt/pull/1847>`_,
`#1848 <https://github.com/fmtlib/fmt/pull/1848>`_).
Thanks `@t-wiser <https://github.com/t-wiser>`_.
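  A minimal sketch (assuming the ``fmt::file::APPEND`` flag added in the
  referenced PRs; the file name is illustrative):

  .. code:: c++

     #include <fmt/os.h>

     int main() {
       auto out = fmt::output_file(
           "log.txt", fmt::file::WRONLY | fmt::file::CREATE | fmt::file::APPEND);
       out.print("appended line\n");
     }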
* Fixed handling of types that have both an implicit conversion operator and
an overloaded ``ostream`` insertion operator
(`#1766 <https://github.com/fmtlib/fmt/issues/1766>`_).
* Fixed a slicing issue in an internal iterator type
(`#1822 <https://github.com/fmtlib/fmt/pull/1822>`_).
Thanks `@BRevzin (Barry Revzin) <https://github.com/BRevzin>`_.
* Fixed an issue in locale-specific integer formatting
(`#1927 <https://github.com/fmtlib/fmt/issues/1927>`_).
* Fixed handling of exotic code unit types
(`#1870 <https://github.com/fmtlib/fmt/issues/1870>`_,
`#1932 <https://github.com/fmtlib/fmt/issues/1932>`_).
* Improved ``FMT_ALWAYS_INLINE``
(`#1878 <https://github.com/fmtlib/fmt/pull/1878>`_).
Thanks `@jk-jeon (Junekey Jeon) <https://github.com/jk-jeon>`_.
* Removed dependency on ``windows.h``
(`#1900 <https://github.com/fmtlib/fmt/pull/1900>`_).
Thanks `@bernd5 (Bernd Baumanns) <https://github.com/bernd5>`_.
* Optimized counting of decimal digits on MSVC
(`#1890 <https://github.com/fmtlib/fmt/pull/1890>`_).
Thanks `@mwinterb <https://github.com/mwinterb>`_.
* Improved documentation
(`#1772 <https://github.com/fmtlib/fmt/issues/1772>`_,
`#1775 <https://github.com/fmtlib/fmt/pull/1775>`_,
`#1792 <https://github.com/fmtlib/fmt/pull/1792>`_,
`#1838 <https://github.com/fmtlib/fmt/pull/1838>`_,
`#1888 <https://github.com/fmtlib/fmt/pull/1888>`_,
`#1918 <https://github.com/fmtlib/fmt/pull/1918>`_,
`#1939 <https://github.com/fmtlib/fmt/pull/1939>`_).
Thanks `@leolchat (Léonard Gérard) <https://github.com/leolchat>`_,
`@pepsiman (Malcolm Parsons) <https://github.com/pepsiman>`_,
`@Klaim (Joël Lamotte) <https://github.com/Klaim>`_,
`@ravijanjam (Ravi J) <https://github.com/ravijanjam>`_,
`@francesco-st <https://github.com/francesco-st>`_,
`@udnaan (Adnan) <https://github.com/udnaan>`_.
* Added the ``FMT_REDUCE_INT_INSTANTIATIONS`` CMake option that reduces the
binary code size at the cost of some integer formatting performance. This can
be useful for extremely memory-constrained embedded systems
(`#1778 <https://github.com/fmtlib/fmt/issues/1778>`_,
`#1781 <https://github.com/fmtlib/fmt/pull/1781>`_).
Thanks `@kammce (Khalil Estell) <https://github.com/kammce>`_.
* Added the ``FMT_USE_INLINE_NAMESPACES`` macro to control usage of inline
namespaces (`#1945 <https://github.com/fmtlib/fmt/pull/1945>`_).
Thanks `@darklukee <https://github.com/darklukee>`_.
* Improved build configuration
(`#1760 <https://github.com/fmtlib/fmt/pull/1760>`_,
`#1770 <https://github.com/fmtlib/fmt/pull/1770>`_,
`#1779 <https://github.com/fmtlib/fmt/issues/1779>`_,
`#1783 <https://github.com/fmtlib/fmt/pull/1783>`_,
`#1823 <https://github.com/fmtlib/fmt/pull/1823>`_).
Thanks `@dvetutnev (Dmitriy Vetutnev) <https://github.com/dvetutnev>`_,
`@xvitaly (Vitaly Zaitsev) <https://github.com/xvitaly>`_,
`@tambry (Raul Tambre) <https://github.com/tambry>`_,
`@medithe <https://github.com/medithe>`_,
`@martinwuehrer (Martin Wührer) <https://github.com/martinwuehrer>`_.
* Fixed various warnings and compilation issues
(`#1790 <https://github.com/fmtlib/fmt/pull/1790>`_,
`#1802 <https://github.com/fmtlib/fmt/pull/1802>`_,
`#1808 <https://github.com/fmtlib/fmt/pull/1808>`_,
`#1810 <https://github.com/fmtlib/fmt/issues/1810>`_,
`#1811 <https://github.com/fmtlib/fmt/issues/1811>`_,
`#1812 <https://github.com/fmtlib/fmt/pull/1812>`_,
`#1814 <https://github.com/fmtlib/fmt/pull/1814>`_,
`#1816 <https://github.com/fmtlib/fmt/pull/1816>`_,
`#1817 <https://github.com/fmtlib/fmt/pull/1817>`_,
`#1818 <https://github.com/fmtlib/fmt/pull/1818>`_,
`#1825 <https://github.com/fmtlib/fmt/issues/1825>`_,
`#1836 <https://github.com/fmtlib/fmt/pull/1836>`_,
`#1855 <https://github.com/fmtlib/fmt/pull/1855>`_,
`#1856 <https://github.com/fmtlib/fmt/pull/1856>`_,
`#1860 <https://github.com/fmtlib/fmt/pull/1860>`_,
`#1877 <https://github.com/fmtlib/fmt/pull/1877>`_,
`#1879 <https://github.com/fmtlib/fmt/pull/1879>`_,
`#1880 <https://github.com/fmtlib/fmt/pull/1880>`_,
`#1896 <https://github.com/fmtlib/fmt/issues/1896>`_,
`#1897 <https://github.com/fmtlib/fmt/pull/1897>`_,
`#1898 <https://github.com/fmtlib/fmt/pull/1898>`_,
`#1904 <https://github.com/fmtlib/fmt/issues/1904>`_,
`#1908 <https://github.com/fmtlib/fmt/pull/1908>`_,
`#1911 <https://github.com/fmtlib/fmt/issues/1911>`_,
`#1912 <https://github.com/fmtlib/fmt/issues/1912>`_,
`#1928 <https://github.com/fmtlib/fmt/issues/1928>`_,
`#1929 <https://github.com/fmtlib/fmt/pull/1929>`_,
`#1935 <https://github.com/fmtlib/fmt/issues/1935>`_,
`#1937 <https://github.com/fmtlib/fmt/pull/1937>`_,
`#1942 <https://github.com/fmtlib/fmt/pull/1942>`_,
`#1949 <https://github.com/fmtlib/fmt/issues/1949>`_).
Thanks `@TheQwertiest <https://github.com/TheQwertiest>`_,
`@medithe <https://github.com/medithe>`_,
`@martinwuehrer (Martin Wührer) <https://github.com/martinwuehrer>`_,
`@n16h7hunt3r <https://github.com/n16h7hunt3r>`_,
`@Othereum (Seokjin Lee) <https://github.com/Othereum>`_,
`@gsjaardema (Greg Sjaardema) <https://github.com/gsjaardema>`_,
`@AlexanderLanin (Alexander Lanin) <https://github.com/AlexanderLanin>`_,
`@gcerretani (Giovanni Cerretani) <https://github.com/gcerretani>`_,
`@chronoxor (Ivan Shynkarenka) <https://github.com/chronoxor>`_,
`@noizefloor (Jan Schwers) <https://github.com/noizefloor>`_,
`@akohlmey (Axel Kohlmeyer) <https://github.com/akohlmey>`_,
`@jk-jeon (Junekey Jeon) <https://github.com/jk-jeon>`_,
`@rimathia <https://github.com/rimathia>`_,
`@rglarix (Riccardo Ghetta (larix)) <https://github.com/rglarix>`_,
`@moiwi <https://github.com/moiwi>`_,
`@heckad (Kazantcev Andrey) <https://github.com/heckad>`_,
  `@MarcDirven <https://github.com/MarcDirven>`_,
`@BartSiwek (Bart Siwek) <https://github.com/BartSiwek>`_,
`@darklukee <https://github.com/darklukee>`_.
7.0.3 - 2020-08-06
------------------
* Worked around broken ``numeric_limits`` for 128-bit integers
(`#1787 <https://github.com/fmtlib/fmt/issues/1787>`_).
* Added error reporting on missing named arguments
(`#1796 <https://github.com/fmtlib/fmt/issues/1796>`_).
* Stopped using 128-bit integers with clang-cl
(`#1800 <https://github.com/fmtlib/fmt/pull/1800>`_).
Thanks `@Kingcom <https://github.com/Kingcom>`_.
* Fixed issues in locale-specific integer formatting
(`#1782 <https://github.com/fmtlib/fmt/issues/1782>`_,
`#1801 <https://github.com/fmtlib/fmt/issues/1801>`_).
7.0.2 - 2020-07-29
------------------
* Worked around broken ``numeric_limits`` for 128-bit integers
(`#1725 <https://github.com/fmtlib/fmt/issues/1725>`_).
* Fixed compatibility with CMake 3.4
(`#1779 <https://github.com/fmtlib/fmt/issues/1779>`_).
* Fixed handling of digit separators in locale-specific formatting
(`#1782 <https://github.com/fmtlib/fmt/issues/1782>`_).
7.0.1 - 2020-07-07
------------------
* Updated the inline version namespace name.
* Worked around a gcc bug in mangling of alias templates
(`#1753 <https://github.com/fmtlib/fmt/issues/1753>`_).
* Fixed a linkage error on Windows
(`#1757 <https://github.com/fmtlib/fmt/issues/1757>`_).
Thanks `@Kurkin (Dmitry Kurkin) <https://github.com/Kurkin>`_.
* Fixed minor issues with the documentation.
7.0.0 - 2020-07-05
------------------
* Reduced the library size. For example, on macOS a stripped test binary
statically linked with {fmt} `shrank from ~368k to less than 100k
<http://www.zverovich.net/2020/05/21/reducing-library-size.html>`_.
* Added a simpler and more efficient `format string compilation API
<https://fmt.dev/7.0.0/api.html#compile-api>`_:
.. code:: c++

   #include <fmt/compile.h>

   // Converts 42 into std::string using the most efficient method and no
   // runtime format string processing.
   std::string s = fmt::format(FMT_COMPILE("{}"), 42);

The old ``fmt::compile`` API is now deprecated.
* Optimized integer formatting: ``format_to`` with format string compilation
and a stack-allocated buffer is now `faster than to_chars on both
libc++ and libstdc++
<http://www.zverovich.net/2020/06/13/fast-int-to-string-revisited.html>`_.
* Optimized handling of small format strings. For example,
.. code:: c++

   fmt::format("Result: {}: ({},{},{},{})", str1, str2, str3, str4, str5)

is now ~40% faster (`#1685 <https://github.com/fmtlib/fmt/issues/1685>`_).
* Applied extern templates to improve compile times when using the core API
and ``fmt/format.h`` (`#1452 <https://github.com/fmtlib/fmt/issues/1452>`_).
For example, on macOS with clang the compile time of a test translation unit
dropped from 2.3s to 0.3s with ``-O2`` and from 0.6s to 0.3s with the default
settings (``-O0``).
Before (``-O2``)::

  % time c++ -c test.cc -I include -std=c++17 -O2
  c++ -c test.cc -I include -std=c++17 -O2 2.22s user 0.08s system 99% cpu 2.311 total

After (``-O2``)::

  % time c++ -c test.cc -I include -std=c++17 -O2
  c++ -c test.cc -I include -std=c++17 -O2 0.26s user 0.04s system 98% cpu 0.303 total

Before (default)::

  % time c++ -c test.cc -I include -std=c++17
  c++ -c test.cc -I include -std=c++17 0.53s user 0.06s system 98% cpu 0.601 total

After (default)::

  % time c++ -c test.cc -I include -std=c++17
  c++ -c test.cc -I include -std=c++17 0.24s user 0.06s system 98% cpu 0.301 total
It is still recommended to use ``fmt/core.h`` instead of ``fmt/format.h`` but
the compile time difference is now smaller. Thanks
`@alex3d <https://github.com/alex3d>`_ for the suggestion.
* Named arguments are now stored on stack (no dynamic memory allocations) and
the compiled code is more compact and efficient. For example
.. code:: c++

   #include <fmt/core.h>

   int main() {
     fmt::print("The answer is {answer}\n", fmt::arg("answer", 42));
   }
compiles to just (`godbolt <https://godbolt.org/z/NcfEp_>`__)
.. code:: asm

   .LC0:
           .string "answer"
   .LC1:
           .string "The answer is {answer}\n"
   main:
           sub rsp, 56
           mov edi, OFFSET FLAT:.LC1
           mov esi, 23
           movabs rdx, 4611686018427387905
           lea rax, [rsp+32]
           lea rcx, [rsp+16]
           mov QWORD PTR [rsp+8], 1
           mov QWORD PTR [rsp], rax
           mov DWORD PTR [rsp+16], 42
           mov QWORD PTR [rsp+32], OFFSET FLAT:.LC0
           mov DWORD PTR [rsp+40], 0
           call fmt::v6::vprint(fmt::v6::basic_string_view<char>,
                                fmt::v6::format_args)
           xor eax, eax
           add rsp, 56
           ret
   .L.str.1:
           .asciz "answer"
* Implemented compile-time checks for dynamic width and precision
(`#1614 <https://github.com/fmtlib/fmt/issues/1614>`_):
.. code:: c++

   #include <fmt/format.h>

   int main() {
     fmt::print(FMT_STRING("{0:{1}}"), 42);
   }

now gives a compilation error because argument 1 doesn't exist::

  In file included from test.cc:1:
  include/fmt/format.h:2726:27: error: constexpr variable 'invalid_format' must be
  initialized by a constant expression
    FMT_CONSTEXPR_DECL bool invalid_format =
                            ^
  ...
  include/fmt/core.h:569:26: note: in call to
  '&checker(s, {}).context_->on_error(&"argument not found"[0])'
      if (id >= num_args_) on_error("argument not found");
                           ^
* Added sentinel support to ``fmt::join``
(`#1689 <https://github.com/fmtlib/fmt/pull/1689>`_)
.. code:: c++

   struct zstring_sentinel {};

   bool operator==(const char* p, zstring_sentinel) { return *p == '\0'; }
   bool operator!=(const char* p, zstring_sentinel) { return *p != '\0'; }

   struct zstring {
     const char* p;
     const char* begin() const { return p; }
     zstring_sentinel end() const { return {}; }
   };

   auto s = fmt::format("{}", fmt::join(zstring{"hello"}, "_"));
   // s == "h_e_l_l_o"
Thanks `@BRevzin (Barry Revzin) <https://github.com/BRevzin>`_.
* Added support for named arguments, ``clear`` and ``reserve`` to
``dynamic_format_arg_store``
(`#1655 <https://github.com/fmtlib/fmt/issues/1655>`_,
`#1663 <https://github.com/fmtlib/fmt/pull/1663>`_,
`#1674 <https://github.com/fmtlib/fmt/pull/1674>`_,
`#1677 <https://github.com/fmtlib/fmt/pull/1677>`_).
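For example (a minimal sketch; ``reserve`` can pre-allocate storage but its exact signature is not shown here):

.. code:: c++

   fmt::dynamic_format_arg_store<fmt::format_context> store;
   store.push_back(fmt::arg("answer", 42));  // named argument
   fmt::vprint("The answer is {answer}.\n", store);
   store.clear();                            // the store can now be reused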
Thanks `@vsolontsov-ll (Vladimir Solontsov)
<https://github.com/vsolontsov-ll>`_.
* Added support for the ``'c'`` format specifier to integral types for
compatibility with ``std::format``
(`#1652 <https://github.com/fmtlib/fmt/issues/1652>`_).
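For example (a minimal sketch; the output assumes an ASCII execution character set):

.. code:: c++

   auto s = fmt::format("{:c}", 65); // s == "A"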
* Replaced the ``'n'`` format specifier with ``'L'`` for compatibility with
``std::format`` (`#1624 <https://github.com/fmtlib/fmt/issues/1624>`_).
The ``'n'`` specifier can be enabled via the ``FMT_DEPRECATED_N_SPECIFIER``
macro.
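For example (a sketch; it assumes the ``en_US.UTF-8`` locale is available and that the locale overloads from ``fmt/locale.h`` are used):

.. code:: c++

   #include <fmt/locale.h>

   auto s = fmt::format(std::locale("en_US.UTF-8"), "{:L}", 1234567);
   // s == "1,234,567" with this locale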
* The ``'='`` format specifier is now disabled by default for compatibility with
``std::format``. It can be enabled via the ``FMT_DEPRECATED_NUMERIC_ALIGN``
macro.
* Removed the following deprecated APIs:
* ``FMT_STRING_ALIAS`` and ``fmt`` macros - replaced by ``FMT_STRING``
* ``fmt::basic_string_view::char_type`` - replaced by
``fmt::basic_string_view::value_type``
* ``convert_to_int``
* ``format_arg_store::types``
* ``*parse_context`` - replaced by ``*format_parse_context``
* ``FMT_DEPRECATED_INCLUDE_OS``
* ``FMT_DEPRECATED_PERCENT`` - incompatible with ``std::format``
* ``*writer`` - replaced by compiled format API
* Renamed the ``internal`` namespace to ``detail``
(`#1538 <https://github.com/fmtlib/fmt/issues/1538>`_). The former is still
provided as an alias if the ``FMT_USE_INTERNAL`` macro is defined.
* Improved compatibility of ``fmt::printf`` with the standard specs
(`#1595 <https://github.com/fmtlib/fmt/issues/1595>`_,
`#1682 <https://github.com/fmtlib/fmt/pull/1682>`_,
`#1683 <https://github.com/fmtlib/fmt/pull/1683>`_,
`#1687 <https://github.com/fmtlib/fmt/pull/1687>`_,
`#1699 <https://github.com/fmtlib/fmt/pull/1699>`_).
Thanks `@rimathia <https://github.com/rimathia>`_.
* Fixed handling of ``operator<<`` overloads that use ``copyfmt``
(`#1666 <https://github.com/fmtlib/fmt/issues/1666>`_).
* Added the ``FMT_OS`` CMake option to control inclusion of OS-specific APIs
in the fmt target. This can be useful for embedded platforms
(`#1654 <https://github.com/fmtlib/fmt/issues/1654>`_,
`#1656 <https://github.com/fmtlib/fmt/pull/1656>`_).
Thanks `@kwesolowski (Krzysztof Wesolowski)
<https://github.com/kwesolowski>`_.
* Replaced ``FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION`` with the ``FMT_FUZZ``
macro to prevent interfering with fuzzing of projects using {fmt}
(`#1650 <https://github.com/fmtlib/fmt/pull/1650>`_).
Thanks `@asraa (Asra Ali) <https://github.com/asraa>`_.
* Fixed compatibility with emscripten
(`#1636 <https://github.com/fmtlib/fmt/issues/1636>`_,
`#1637 <https://github.com/fmtlib/fmt/pull/1637>`_).
Thanks `@ArthurSonzogni (Arthur Sonzogni)
<https://github.com/ArthurSonzogni>`_.
* Improved documentation
(`#704 <https://github.com/fmtlib/fmt/issues/704>`_,
`#1643 <https://github.com/fmtlib/fmt/pull/1643>`_,
`#1660 <https://github.com/fmtlib/fmt/pull/1660>`_,
`#1681 <https://github.com/fmtlib/fmt/pull/1681>`_,
`#1691 <https://github.com/fmtlib/fmt/pull/1691>`_,
`#1706 <https://github.com/fmtlib/fmt/pull/1706>`_,
`#1714 <https://github.com/fmtlib/fmt/pull/1714>`_,
`#1721 <https://github.com/fmtlib/fmt/pull/1721>`_,
`#1739 <https://github.com/fmtlib/fmt/pull/1739>`_,
`#1740 <https://github.com/fmtlib/fmt/pull/1740>`_,
`#1741 <https://github.com/fmtlib/fmt/pull/1741>`_,
`#1751 <https://github.com/fmtlib/fmt/pull/1751>`_).
Thanks `@senior7515 (Alexander Gallego) <https://github.com/senior7515>`_,
`@lsr0 (Lindsay Roberts) <https://github.com/lsr0>`_,
`@puetzk (Kevin Puetz) <https://github.com/puetzk>`_,
`@fpelliccioni (Fernando Pelliccioni) <https://github.com/fpelliccioni>`_,
Alexey Kuzmenko, `@jelly (jelle van der Waa) <https://github.com/jelly>`_,
`@claremacrae (Clare Macrae) <https://github.com/claremacrae>`_,
`@jiapengwen (文佳鹏) <https://github.com/jiapengwen>`_,
`@gsjaardema (Greg Sjaardema) <https://github.com/gsjaardema>`_,
`@alexey-milovidov <https://github.com/alexey-milovidov>`_.
* Implemented various build configuration fixes and improvements
(`#1603 <https://github.com/fmtlib/fmt/pull/1603>`_,
`#1657 <https://github.com/fmtlib/fmt/pull/1657>`_,
`#1702 <https://github.com/fmtlib/fmt/pull/1702>`_,
`#1728 <https://github.com/fmtlib/fmt/pull/1728>`_).
Thanks `@scramsby (Scott Ramsby) <https://github.com/scramsby>`_,
`@jtojnar (Jan Tojnar) <https://github.com/jtojnar>`_,
`@orivej (Orivej Desh) <https://github.com/orivej>`_,
`@flagarde <https://github.com/flagarde>`_.
* Fixed various warnings and compilation issues
(`#1616 <https://github.com/fmtlib/fmt/pull/1616>`_,
`#1620 <https://github.com/fmtlib/fmt/issues/1620>`_,
`#1622 <https://github.com/fmtlib/fmt/issues/1622>`_,
`#1625 <https://github.com/fmtlib/fmt/issues/1625>`_,
`#1627 <https://github.com/fmtlib/fmt/pull/1627>`_,
`#1628 <https://github.com/fmtlib/fmt/issues/1628>`_,
`#1629 <https://github.com/fmtlib/fmt/pull/1629>`_,
`#1631 <https://github.com/fmtlib/fmt/issues/1631>`_,
`#1633 <https://github.com/fmtlib/fmt/pull/1633>`_,
`#1649 <https://github.com/fmtlib/fmt/pull/1649>`_,
`#1658 <https://github.com/fmtlib/fmt/issues/1658>`_,
`#1661 <https://github.com/fmtlib/fmt/pull/1661>`_,
`#1667 <https://github.com/fmtlib/fmt/pull/1667>`_,
`#1668 <https://github.com/fmtlib/fmt/issues/1668>`_,
`#1669 <https://github.com/fmtlib/fmt/pull/1669>`_,
`#1692 <https://github.com/fmtlib/fmt/issues/1692>`_,
`#1696 <https://github.com/fmtlib/fmt/pull/1696>`_,
`#1697 <https://github.com/fmtlib/fmt/pull/1697>`_,
`#1707 <https://github.com/fmtlib/fmt/issues/1707>`_,
`#1712 <https://github.com/fmtlib/fmt/pull/1712>`_,
`#1716 <https://github.com/fmtlib/fmt/pull/1716>`_,
`#1722 <https://github.com/fmtlib/fmt/pull/1722>`_,
`#1724 <https://github.com/fmtlib/fmt/issues/1724>`_,
`#1729 <https://github.com/fmtlib/fmt/pull/1729>`_,
`#1738 <https://github.com/fmtlib/fmt/pull/1738>`_,
`#1742 <https://github.com/fmtlib/fmt/issues/1742>`_,
`#1743 <https://github.com/fmtlib/fmt/issues/1743>`_,
`#1744 <https://github.com/fmtlib/fmt/pull/1744>`_,
`#1747 <https://github.com/fmtlib/fmt/issues/1747>`_,
`#1750 <https://github.com/fmtlib/fmt/pull/1750>`_).
Thanks `@gsjaardema (Greg Sjaardema) <https://github.com/gsjaardema>`_,
`@gabime (Gabi Melman) <https://github.com/gabime>`_,
`@johnor (Johan) <https://github.com/johnor>`_,
`@Kurkin (Dmitry Kurkin) <https://github.com/Kurkin>`_,
`@invexed (James Beach) <https://github.com/invexed>`_,
`@peterbell10 <https://github.com/peterbell10>`_,
`@daixtrose (Markus Werle) <https://github.com/daixtrose>`_,
`@petrutlucian94 (Lucian Petrut) <https://github.com/petrutlucian94>`_,
`@Neargye (Daniil Goncharov) <https://github.com/Neargye>`_,
`@ambitslix (Attila M. Szilagyi) <https://github.com/ambitslix>`_,
`@gabime (Gabi Melman) <https://github.com/gabime>`_,
`@erthink (Leonid Yuriev) <https://github.com/erthink>`_,
`@tohammer (Tobias Hammer) <https://github.com/tohammer>`_,
`@0x8000-0000 (Florin Iucha) <https://github.com/0x8000-0000>`_.
6.2.1 - 2020-05-09
------------------
* Fixed ostream support in ``sprintf``
(`#1631 <https://github.com/fmtlib/fmt/issues/1631>`_).
* Fixed type detection when using implicit conversion to ``string_view`` and
ostream ``operator<<`` inconsistently
(`#1662 <https://github.com/fmtlib/fmt/issues/1662>`_).
6.2.0 - 2020-04-05
------------------
* Improved error reporting when trying to format an object of a non-formattable
type:
.. code:: c++

   fmt::format("{}", S());

now gives::

  include/fmt/core.h:1015:5: error: static_assert failed due to requirement
  'formattable' "Cannot format argument. To make type T formattable provide a
  formatter<T> specialization:
  https://fmt.dev/latest/api.html#formatting-user-defined-types"
      static_assert(
      ^
  ...
  note: in instantiation of function template specialization
  'fmt::v6::format<char [3], S, char>' requested here
    fmt::format("{}", S());
                ^

if ``S`` is not formattable.
* Reduced the library size by ~10%.
* Always print decimal point if ``#`` is specified
(`#1476 <https://github.com/fmtlib/fmt/issues/1476>`_,
`#1498 <https://github.com/fmtlib/fmt/issues/1498>`_):
.. code:: c++

   fmt::print("{:#.0f}", 42.0);

now prints ``42.``
* Implemented the ``'L'`` specifier for locale-specific numeric formatting to
improve compatibility with ``std::format``. The ``'n'`` specifier is now
deprecated and will be removed in the next major release.
* Moved OS-specific APIs such as ``windows_error`` from ``fmt/format.h`` to
``fmt/os.h``. You can define ``FMT_DEPRECATED_INCLUDE_OS`` to automatically
include ``fmt/os.h`` from ``fmt/format.h`` for compatibility but this will be
disabled in the next major release.
* Added precision overflow detection in floating-point formatting.
* Implemented detection of invalid use of ``fmt::arg``.
* Used ``type_identity`` to block unnecessary template argument deduction.
Thanks Tim Song.
* Improved UTF-8 handling
(`#1109 <https://github.com/fmtlib/fmt/issues/1109>`_):
.. code:: c++

   fmt::print("┌{0:─^{2}}┐\n"
              "│{1: ^{2}}│\n"
              "└{0:─^{2}}┘\n", "", "Привет, мир!", 20);

now prints::

  ┌────────────────────┐
  │    Привет, мир!    │
  └────────────────────┘

on systems that support Unicode.
* Added experimental dynamic argument storage
(`#1170 <https://github.com/fmtlib/fmt/issues/1170>`_,
`#1584 <https://github.com/fmtlib/fmt/pull/1584>`_):
.. code:: c++

   fmt::dynamic_format_arg_store<fmt::format_context> store;
   store.push_back("answer");
   store.push_back(42);
   fmt::vprint("The {} is {}.\n", store);

prints::

  The answer is 42.
Thanks `@vsolontsov-ll (Vladimir Solontsov)
<https://github.com/vsolontsov-ll>`_.
* Made ``fmt::join`` accept ``initializer_list``
(`#1591 <https://github.com/fmtlib/fmt/pull/1591>`_).
Thanks `@Rapotkinnik (Nikolay Rapotkin) <https://github.com/Rapotkinnik>`_.
* Fixed handling of empty tuples
(`#1588 <https://github.com/fmtlib/fmt/issues/1588>`_).
* Fixed handling of output iterators in ``format_to_n``
(`#1506 <https://github.com/fmtlib/fmt/issues/1506>`_).
* Fixed formatting of ``std::chrono::duration`` types to wide output
(`#1533 <https://github.com/fmtlib/fmt/pull/1533>`_).
Thanks `@zeffy (pilao) <https://github.com/zeffy>`_.
* Added const ``begin`` and ``end`` overload to buffers
(`#1553 <https://github.com/fmtlib/fmt/pull/1553>`_).
Thanks `@dominicpoeschko <https://github.com/dominicpoeschko>`_.
* Added the ability to disable floating-point formatting via ``FMT_USE_FLOAT``,
``FMT_USE_DOUBLE`` and ``FMT_USE_LONG_DOUBLE`` macros for extremely
memory-constrained embedded systems
(`#1590 <https://github.com/fmtlib/fmt/pull/1590>`_).
Thanks `@albaguirre (Alberto Aguirre) <https://github.com/albaguirre>`_.
* Made ``FMT_STRING`` work with ``constexpr`` ``string_view``
(`#1589 <https://github.com/fmtlib/fmt/pull/1589>`_).
Thanks `@scramsby (Scott Ramsby) <https://github.com/scramsby>`_.
* Implemented a minor optimization in the format string parser
(`#1560 <https://github.com/fmtlib/fmt/pull/1560>`_).
Thanks `@IkarusDeveloper <https://github.com/IkarusDeveloper>`_.
* Improved attribute detection
(`#1469 <https://github.com/fmtlib/fmt/pull/1469>`_,
`#1475 <https://github.com/fmtlib/fmt/pull/1475>`_,
`#1576 <https://github.com/fmtlib/fmt/pull/1576>`_).
Thanks `@federico-busato (Federico) <https://github.com/federico-busato>`_,
`@chronoxor (Ivan Shynkarenka) <https://github.com/chronoxor>`_,
`@refnum <https://github.com/refnum>`_.
* Improved documentation
(`#1481 <https://github.com/fmtlib/fmt/pull/1481>`_,
`#1523 <https://github.com/fmtlib/fmt/pull/1523>`_).
Thanks `@JackBoosY (Jack·Boos·Yu) <https://github.com/JackBoosY>`_,
`@imba-tjd (谭九鼎) <https://github.com/imba-tjd>`_.
* Fixed symbol visibility on Linux when compiling with ``-fvisibility=hidden``
(`#1535 <https://github.com/fmtlib/fmt/pull/1535>`_).
Thanks `@milianw (Milian Wolff) <https://github.com/milianw>`_.
* Implemented various build configuration fixes and improvements
(`#1264 <https://github.com/fmtlib/fmt/issues/1264>`_,
`#1460 <https://github.com/fmtlib/fmt/issues/1460>`_,
`#1534 <https://github.com/fmtlib/fmt/pull/1534>`_,
`#1536 <https://github.com/fmtlib/fmt/issues/1536>`_,
`#1545 <https://github.com/fmtlib/fmt/issues/1545>`_,
`#1546 <https://github.com/fmtlib/fmt/pull/1546>`_,
`#1566 <https://github.com/fmtlib/fmt/issues/1566>`_,
`#1582 <https://github.com/fmtlib/fmt/pull/1582>`_,
`#1597 <https://github.com/fmtlib/fmt/issues/1597>`_,
`#1598 <https://github.com/fmtlib/fmt/pull/1598>`_).
Thanks `@ambitslix (Attila M. Szilagyi) <https://github.com/ambitslix>`_,
`@jwillikers (Jordan Williams) <https://github.com/jwillikers>`_,
`@stac47 (Laurent Stacul) <https://github.com/stac47>`_.
* Fixed various warnings and compilation issues
(`#1433 <https://github.com/fmtlib/fmt/pull/1433>`_,
`#1461 <https://github.com/fmtlib/fmt/issues/1461>`_,
`#1470 <https://github.com/fmtlib/fmt/pull/1470>`_,
`#1480 <https://github.com/fmtlib/fmt/pull/1480>`_,
`#1485 <https://github.com/fmtlib/fmt/pull/1485>`_,
`#1492 <https://github.com/fmtlib/fmt/pull/1492>`_,
`#1493 <https://github.com/fmtlib/fmt/issues/1493>`_,
`#1504 <https://github.com/fmtlib/fmt/issues/1504>`_,
`#1505 <https://github.com/fmtlib/fmt/pull/1505>`_,
`#1512 <https://github.com/fmtlib/fmt/pull/1512>`_,
`#1515 <https://github.com/fmtlib/fmt/issues/1515>`_,
`#1516 <https://github.com/fmtlib/fmt/pull/1516>`_,
`#1518 <https://github.com/fmtlib/fmt/pull/1518>`_,
`#1519 <https://github.com/fmtlib/fmt/pull/1519>`_,
`#1520 <https://github.com/fmtlib/fmt/pull/1520>`_,
`#1521 <https://github.com/fmtlib/fmt/pull/1521>`_,
`#1522 <https://github.com/fmtlib/fmt/pull/1522>`_,
`#1524 <https://github.com/fmtlib/fmt/issues/1524>`_,
`#1530 <https://github.com/fmtlib/fmt/pull/1530>`_,
`#1531 <https://github.com/fmtlib/fmt/issues/1531>`_,
`#1532 <https://github.com/fmtlib/fmt/pull/1532>`_,
`#1539 <https://github.com/fmtlib/fmt/issues/1539>`_,
`#1547 <https://github.com/fmtlib/fmt/issues/1547>`_,
`#1548 <https://github.com/fmtlib/fmt/issues/1548>`_,
`#1554 <https://github.com/fmtlib/fmt/pull/1554>`_,
`#1567 <https://github.com/fmtlib/fmt/issues/1567>`_,
`#1568 <https://github.com/fmtlib/fmt/pull/1568>`_,
`#1569 <https://github.com/fmtlib/fmt/pull/1569>`_,
`#1571 <https://github.com/fmtlib/fmt/pull/1571>`_,
`#1573 <https://github.com/fmtlib/fmt/pull/1573>`_,
`#1575 <https://github.com/fmtlib/fmt/pull/1575>`_,
`#1581 <https://github.com/fmtlib/fmt/pull/1581>`_,
`#1583 <https://github.com/fmtlib/fmt/issues/1583>`_,
`#1586 <https://github.com/fmtlib/fmt/issues/1586>`_,
`#1587 <https://github.com/fmtlib/fmt/issues/1587>`_,
`#1594 <https://github.com/fmtlib/fmt/issues/1594>`_,
`#1596 <https://github.com/fmtlib/fmt/pull/1596>`_,
`#1604 <https://github.com/fmtlib/fmt/issues/1604>`_,
`#1606 <https://github.com/fmtlib/fmt/pull/1606>`_,
`#1607 <https://github.com/fmtlib/fmt/issues/1607>`_,
`#1609 <https://github.com/fmtlib/fmt/issues/1609>`_).
Thanks `@marti4d (Chris Martin) <https://github.com/marti4d>`_,
`@iPherian <https://github.com/iPherian>`_,
`@parkertomatoes <https://github.com/parkertomatoes>`_,
`@gsjaardema (Greg Sjaardema) <https://github.com/gsjaardema>`_,
`@chronoxor (Ivan Shynkarenka) <https://github.com/chronoxor>`_,
`@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_,
`@torsten48 <https://github.com/torsten48>`_,
`@tohammer (Tobias Hammer) <https://github.com/tohammer>`_,
`@lefticus (Jason Turner) <https://github.com/lefticus>`_,
`@ryusakki (Haise) <https://github.com/ryusakki>`_,
`@adnsv (Alex Denisov) <https://github.com/adnsv>`_,
`@fghzxm <https://github.com/fghzxm>`_,
`@refnum <https://github.com/refnum>`_,
`@pramodk (Pramod Kumbhar) <https://github.com/pramodk>`_,
`@Spirrwell <https://github.com/Spirrwell>`_,
`@scramsby (Scott Ramsby) <https://github.com/scramsby>`_.
6.1.2 - 2019-12-11
------------------
* Fixed ABI compatibility with ``libfmt.so.6.0.0``
(`#1471 <https://github.com/fmtlib/fmt/issues/1471>`_).
* Fixed handling types convertible to ``std::string_view``
(`#1451 <https://github.com/fmtlib/fmt/pull/1451>`_).
Thanks `@denizevrenci (Deniz Evrenci) <https://github.com/denizevrenci>`_.
* Made the CUDA test opt-in, enabled via the ``FMT_CUDA_TEST`` CMake option.
* Fixed sign conversion warnings
(`#1440 <https://github.com/fmtlib/fmt/pull/1440>`_).
Thanks `@0x8000-0000 (Florin Iucha) <https://github.com/0x8000-0000>`_.
6.1.1 - 2019-12-04
------------------
* Fixed shared library build on Windows
(`#1443 <https://github.com/fmtlib/fmt/pull/1443>`_,
`#1445 <https://github.com/fmtlib/fmt/issues/1445>`_,
`#1446 <https://github.com/fmtlib/fmt/pull/1446>`_,
`#1450 <https://github.com/fmtlib/fmt/issues/1450>`_).
Thanks `@egorpugin (Egor Pugin) <https://github.com/egorpugin>`_,
`@bbolli (Beat Bolli) <https://github.com/bbolli>`_.
* Added a missing decimal point in exponent notation with trailing zeros.
* Removed deprecated ``format_arg_store::TYPES``.
6.1.0 - 2019-12-01
------------------
* {fmt} now formats IEEE 754 ``float`` and ``double`` using the shortest decimal
representation with correct rounding by default:
.. code:: c++

   #include <cmath>
   #include <fmt/core.h>

   int main() {
     fmt::print("{}", M_PI);
   }

prints ``3.141592653589793``.
* Made the fast binary to decimal floating-point formatter the default,
simplified it and improved performance. {fmt} is now 15 times faster than
libc++'s ``std::ostringstream``, 11 times faster than ``printf`` and 10%
faster than double-conversion on `dtoa-benchmark
<https://github.com/fmtlib/dtoa-benchmark>`_:
================== ========= =======
Function           Time (ns) Speedup
================== ========= =======
ostringstream      1,346.30  1.00x
ostrstream         1,195.74  1.13x
sprintf            995.08    1.35x
doubleconv         99.10     13.59x
fmt                88.34     15.24x
================== ========= =======

.. image:: https://user-images.githubusercontent.com/576385/69767160-cdaca400-112f-11ea-9fc5-347c9f83caad.png
* {fmt} no longer converts ``float`` arguments to ``double``. In particular this
improves the default (shortest) representation of floats and makes
``fmt::format`` consistent with ``std::format`` specs
(`#1336 <https://github.com/fmtlib/fmt/issues/1336>`_,
`#1353 <https://github.com/fmtlib/fmt/issues/1353>`_,
`#1360 <https://github.com/fmtlib/fmt/pull/1360>`_,
`#1361 <https://github.com/fmtlib/fmt/pull/1361>`_):
.. code:: c++

   fmt::print("{}", 0.1f);

prints ``0.1`` instead of ``0.10000000149011612``.
Thanks `@orivej (Orivej Desh) <https://github.com/orivej>`_.
* Made floating-point formatting output consistent with ``printf``/iostreams
(`#1376 <https://github.com/fmtlib/fmt/issues/1376>`_,
`#1417 <https://github.com/fmtlib/fmt/issues/1417>`_).
* Added support for 128-bit integers
(`#1287 <https://github.com/fmtlib/fmt/pull/1287>`_):
.. code:: c++

   fmt::print("{}", std::numeric_limits<__int128_t>::max());

prints ``170141183460469231731687303715884105727``.
Thanks `@denizevrenci (Deniz Evrenci) <https://github.com/denizevrenci>`_.
* The overload of ``print`` that takes ``text_style`` is now atomic, i.e. the
output from different threads doesn't interleave
(`#1351 <https://github.com/fmtlib/fmt/pull/1351>`_).
Thanks `@tankiJong (Tanki Zhang) <https://github.com/tankiJong>`_.
* Made compile time in the header-only mode ~20% faster by reducing the number
of template instantiations. ``wchar_t`` overload of ``vprint`` was moved from
``fmt/core.h`` to ``fmt/format.h``.
* Added an overload of ``fmt::join`` that works with tuples
(`#1322 <https://github.com/fmtlib/fmt/issues/1322>`_,
`#1330 <https://github.com/fmtlib/fmt/pull/1330>`_):
.. code:: c++

   #include <tuple>
   #include <fmt/ranges.h>

   int main() {
     std::tuple<char, int, float> t{'a', 1, 2.0f};
     fmt::print("{}", t);
   }

prints ``('a', 1, 2.0)``.
Thanks `@jeremyong (Jeremy Ong) <https://github.com/jeremyong>`_.
* Changed formatting of octal zero with prefix from "00" to "0":
.. code:: c++

   fmt::print("{:#o}", 0);

prints ``0``.
* The locale is now passed to ostream insertion (``<<``) operators
(`#1406 <https://github.com/fmtlib/fmt/pull/1406>`_):
.. code:: c++

   #include <fmt/locale.h>
   #include <fmt/ostream.h>

   struct S {
     double value;
   };

   std::ostream& operator<<(std::ostream& os, S s) {
     return os << s.value;
   }

   int main() {
     auto s = fmt::format(std::locale("fr_FR.UTF-8"), "{}", S{0.42});
     // s == "0,42"
   }
Thanks `@dlaugt (Daniel Laügt) <https://github.com/dlaugt>`_.
* Locale-specific number formatting now uses grouping
(`#1393 <https://github.com/fmtlib/fmt/issues/1393>`_,
`#1394 <https://github.com/fmtlib/fmt/pull/1394>`_).
Thanks `@skrdaniel <https://github.com/skrdaniel>`_.
* Fixed handling of types with deleted implicit rvalue conversion to
``const char**`` (`#1421 <https://github.com/fmtlib/fmt/issues/1421>`_):
.. code:: c++

   struct mystring {
     operator const char*() const&;
     operator const char*() &;
     operator const char*() const&& = delete;
     operator const char*() && = delete;
   };

   mystring str;
   fmt::print("{}", str); // now compiles
* Enums are now mapped to correct underlying types instead of ``int``
(`#1286 <https://github.com/fmtlib/fmt/pull/1286>`_).
Thanks `@agmt (Egor Seredin) <https://github.com/agmt>`_.
* Enum classes are no longer implicitly converted to ``int``
(`#1424 <https://github.com/fmtlib/fmt/issues/1424>`_).
* Added ``basic_format_parse_context`` for consistency with C++20
``std::format`` and deprecated ``basic_parse_context``.
* Fixed handling of UTF-8 in precision
(`#1389 <https://github.com/fmtlib/fmt/issues/1389>`_,
`#1390 <https://github.com/fmtlib/fmt/pull/1390>`_).
Thanks `@tajtiattila (Attila Tajti) <https://github.com/tajtiattila>`_.
* {fmt} can now be installed on Linux, macOS and Windows with
`Conda <https://docs.conda.io/en/latest/>`__ using its
`conda-forge <https://conda-forge.org>`__
`package <https://github.com/conda-forge/fmt-feedstock>`__
(`#1410 <https://github.com/fmtlib/fmt/pull/1410>`_)::

  conda install -c conda-forge fmt
Thanks `@tdegeus (Tom de Geus) <https://github.com/tdegeus>`_.
* Added a CUDA test (`#1285 <https://github.com/fmtlib/fmt/pull/1285>`_,
`#1317 <https://github.com/fmtlib/fmt/pull/1317>`_).
Thanks `@luncliff (Park DongHa) <https://github.com/luncliff>`_ and
`@risa2000 <https://github.com/risa2000>`_.
* Improved documentation (`#1276 <https://github.com/fmtlib/fmt/pull/1276>`_,
`#1291 <https://github.com/fmtlib/fmt/issues/1291>`_,
`#1296 <https://github.com/fmtlib/fmt/issues/1296>`_,
`#1315 <https://github.com/fmtlib/fmt/pull/1315>`_,
`#1332 <https://github.com/fmtlib/fmt/pull/1332>`_,
`#1337 <https://github.com/fmtlib/fmt/pull/1337>`_,
`#1395 <https://github.com/fmtlib/fmt/issues/1395>`_,
`#1418 <https://github.com/fmtlib/fmt/pull/1418>`_).
Thanks
`@waywardmonkeys (Bruce Mitchener) <https://github.com/waywardmonkeys>`_,
`@pauldreik (Paul Dreik) <https://github.com/pauldreik>`_,
`@jackoalan (Jack Andersen) <https://github.com/jackoalan>`_.
* Various code improvements
(`#1358 <https://github.com/fmtlib/fmt/pull/1358>`_,
`#1407 <https://github.com/fmtlib/fmt/pull/1407>`_).
Thanks `@orivej (Orivej Desh) <https://github.com/orivej>`_,
`@dpacbach (David P. Sicilia) <https://github.com/dpacbach>`_.
* Fixed compile-time format string checks for user-defined types
(`#1292 <https://github.com/fmtlib/fmt/issues/1292>`_).
* Worked around a false positive in ``unsigned-integer-overflow`` sanitizer
(`#1377 <https://github.com/fmtlib/fmt/issues/1377>`_).
* Fixed various warnings and compilation issues
(`#1273 <https://github.com/fmtlib/fmt/issues/1273>`_,
`#1278 <https://github.com/fmtlib/fmt/pull/1278>`_,
`#1280 <https://github.com/fmtlib/fmt/pull/1280>`_,
`#1281 <https://github.com/fmtlib/fmt/issues/1281>`_,
`#1288 <https://github.com/fmtlib/fmt/issues/1288>`_,
`#1290 <https://github.com/fmtlib/fmt/pull/1290>`_,
`#1301 <https://github.com/fmtlib/fmt/pull/1301>`_,
`#1305 <https://github.com/fmtlib/fmt/issues/1305>`_,
`#1306 <https://github.com/fmtlib/fmt/issues/1306>`_,
`#1309 <https://github.com/fmtlib/fmt/issues/1309>`_,
`#1312 <https://github.com/fmtlib/fmt/pull/1312>`_,
`#1313 <https://github.com/fmtlib/fmt/issues/1313>`_,
`#1316 <https://github.com/fmtlib/fmt/issues/1316>`_,
`#1319 <https://github.com/fmtlib/fmt/issues/1319>`_,
`#1320 <https://github.com/fmtlib/fmt/pull/1320>`_,
`#1326 <https://github.com/fmtlib/fmt/pull/1326>`_,
`#1328 <https://github.com/fmtlib/fmt/pull/1328>`_,
`#1344 <https://github.com/fmtlib/fmt/issues/1344>`_,
`#1345 <https://github.com/fmtlib/fmt/pull/1345>`_,
`#1347 <https://github.com/fmtlib/fmt/pull/1347>`_,
`#1349 <https://github.com/fmtlib/fmt/pull/1349>`_,
`#1354 <https://github.com/fmtlib/fmt/issues/1354>`_,
`#1362 <https://github.com/fmtlib/fmt/issues/1362>`_,
`#1366 <https://github.com/fmtlib/fmt/issues/1366>`_,
`#1364 <https://github.com/fmtlib/fmt/pull/1364>`_,
`#1370 <https://github.com/fmtlib/fmt/pull/1370>`_,
`#1371 <https://github.com/fmtlib/fmt/pull/1371>`_,
`#1385 <https://github.com/fmtlib/fmt/issues/1385>`_,
`#1388 <https://github.com/fmtlib/fmt/issues/1388>`_,
`#1397 <https://github.com/fmtlib/fmt/pull/1397>`_,
`#1414 <https://github.com/fmtlib/fmt/pull/1414>`_,
`#1416 <https://github.com/fmtlib/fmt/pull/1416>`_,
`#1422 <https://github.com/fmtlib/fmt/issues/1422>`_,
`#1427 <https://github.com/fmtlib/fmt/pull/1427>`_,
`#1431 <https://github.com/fmtlib/fmt/issues/1431>`_,
`#1433 <https://github.com/fmtlib/fmt/pull/1433>`_).
Thanks `@hhb <https://github.com/hhb>`_,
`@gsjaardema (Greg Sjaardema) <https://github.com/gsjaardema>`_,
`@gabime (Gabi Melman) <https://github.com/gabime>`_,
`@neheb (Rosen Penev) <https://github.com/neheb>`_,
`@vedranmiletic (Vedran Miletić) <https://github.com/vedranmiletic>`_,
`@dkavolis (Daumantas Kavolis) <https://github.com/dkavolis>`_,
`@mwinterb <https://github.com/mwinterb>`_,
`@orivej (Orivej Desh) <https://github.com/orivej>`_,
`@denizevrenci (Deniz Evrenci) <https://github.com/denizevrenci>`_,
`@leonklingele <https://github.com/leonklingele>`_,
`@chronoxor (Ivan Shynkarenka) <https://github.com/chronoxor>`_,
`@kent-tri <https://github.com/kent-tri>`_,
`@0x8000-0000 (Florin Iucha) <https://github.com/0x8000-0000>`_,
`@marti4d (Chris Martin) <https://github.com/marti4d>`_.
6.0.0 - 2019-08-26
------------------
* Switched to the `MIT license
<https://github.com/fmtlib/fmt/blob/5a4b24613ba16cc689977c3b5bd8274a3ba1dd1f/LICENSE.rst>`_
with an optional exception that allows distributing binary code without
attribution.
* Floating-point formatting is now locale-independent by default:
.. code:: c++

   #include <locale>
   #include <fmt/core.h>

   int main() {
     std::locale::global(std::locale("ru_RU.UTF-8"));
     fmt::print("value = {}", 4.2);
   }

prints "value = 4.2" regardless of the locale.
For locale-specific formatting use the ``n`` specifier:

.. code:: c++

   std::locale::global(std::locale("ru_RU.UTF-8"));
   fmt::print("value = {:n}", 4.2);

prints "value = 4,2".
* Added an experimental Grisu floating-point formatting algorithm
implementation (disabled by default). To enable it compile with the
``FMT_USE_GRISU`` macro defined to 1:
.. code:: c++

   #define FMT_USE_GRISU 1
   #include <fmt/format.h>

   auto s = fmt::format("{}", 4.2); // formats 4.2 using Grisu
With Grisu enabled, {fmt} is 13x faster than ``std::ostringstream`` (libc++)
and 10x faster than ``sprintf`` on `dtoa-benchmark
<https://github.com/fmtlib/dtoa-benchmark>`_ (`full results
<https://fmt.dev/unknown_mac64_clang10.0.html>`_):
.. image:: https://user-images.githubusercontent.com/576385/54883977-9fe8c000-4e28-11e9-8bde-272d122e7c52.jpg
* Separated formatting and parsing contexts for consistency with
`C++20 std::format <http://eel.is/c++draft/format>`_, removing the
undocumented ``basic_format_context::parse_context()`` function.
* Added `oss-fuzz <https://github.com/google/oss-fuzz>`_ support
(`#1199 <https://github.com/fmtlib/fmt/pull/1199>`_).
Thanks `@pauldreik (Paul Dreik) <https://github.com/pauldreik>`_.
* ``formatter`` specializations now always take precedence over ``operator<<``
(`#952 <https://github.com/fmtlib/fmt/issues/952>`_):
.. code:: c++

   #include <iostream>
   #include <fmt/ostream.h>

   struct S {};

   std::ostream& operator<<(std::ostream& os, S) {
     return os << 1;
   }

   template <>
   struct fmt::formatter<S> : fmt::formatter<int> {
     auto format(S, format_context& ctx) {
       return formatter<int>::format(2, ctx);
     }
   };

   int main() {
     std::cout << S() << "\n"; // prints 1 using operator<<
     fmt::print("{}\n", S());  // prints 2 using formatter
   }
* Introduced the experimental ``fmt::compile`` function that does format string
compilation (`#618 <https://github.com/fmtlib/fmt/issues/618>`_,
`#1169 <https://github.com/fmtlib/fmt/issues/1169>`_,
`#1171 <https://github.com/fmtlib/fmt/pull/1171>`_):
.. code:: c++

   #include <fmt/compile.h>

   auto f = fmt::compile<int>("{}");
   std::string s = fmt::format(f, 42); // can be called multiple times to
                                       // format different values
   // s == "42"
It moves the cost of parsing a format string outside of the format function
which can be beneficial when identically formatting many objects of the same
types. Thanks `@stryku (Mateusz Janek) <https://github.com/stryku>`_.
* Added experimental ``%`` format specifier that formats floating-point values
as percentages (`#1060 <https://github.com/fmtlib/fmt/pull/1060>`_,
`#1069 <https://github.com/fmtlib/fmt/pull/1069>`_,
`#1071 <https://github.com/fmtlib/fmt/pull/1071>`_):
.. code:: c++

   auto s = fmt::format("{:.1%}", 0.42); // s == "42.0%"
Thanks `@gawain-bolton (Gawain Bolton) <https://github.com/gawain-bolton>`_.
* Implemented precision for floating-point durations
(`#1004 <https://github.com/fmtlib/fmt/issues/1004>`_,
`#1012 <https://github.com/fmtlib/fmt/pull/1012>`_):
.. code:: c++

   auto s = fmt::format("{:.1}", std::chrono::duration<double>(1.234));
   // s == 1.2s
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
* Implemented ``chrono`` format specifiers ``%Q`` and ``%q`` that give the value
and the unit respectively (`#1019 <https://github.com/fmtlib/fmt/pull/1019>`_):
.. code:: c++

   auto value = fmt::format("{:%Q}", 42s); // value == "42"
   auto unit = fmt::format("{:%q}", 42s);  // unit == "s"
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
* Fixed handling of dynamic width in chrono formatter:
.. code:: c++

   auto s = fmt::format("{0:{1}%H:%M:%S}", std::chrono::seconds(12345), 12);
   //            ^ width argument index                        ^ width
   // s == "03:25:45    "
Thanks Howard Hinnant.
* Removed deprecated ``fmt/time.h``. Use ``fmt/chrono.h`` instead.
* Added ``fmt::format`` and ``fmt::vformat`` overloads that take ``text_style``
(`#993 <https://github.com/fmtlib/fmt/issues/993>`_,
`#994 <https://github.com/fmtlib/fmt/pull/994>`_):
.. code:: c++

   #include <fmt/color.h>

   std::string message = fmt::format(fmt::emphasis::bold | fg(fmt::color::red),
                                     "The answer is {}.", 42);
Thanks `@Naios (Denis Blank) <https://github.com/Naios>`_.
* Removed the deprecated color API (``print_colored``). Use the new API, namely
``print`` overloads that take ``text_style`` instead.
* Made ``std::unique_ptr`` and ``std::shared_ptr`` formattable as pointers via
``fmt::ptr`` (`#1121 <https://github.com/fmtlib/fmt/pull/1121>`_):
.. code:: c++

   std::unique_ptr<int> p = ...;
   fmt::print("{}", fmt::ptr(p)); // prints p as a pointer
Thanks `@sighingnow (Tao He) <https://github.com/sighingnow>`_.
* Made ``print`` and ``vprint`` report I/O errors
(`#1098 <https://github.com/fmtlib/fmt/issues/1098>`_,
`#1099 <https://github.com/fmtlib/fmt/pull/1099>`_).
Thanks `@BillyDonahue (Billy Donahue) <https://github.com/BillyDonahue>`_.
* Marked deprecated APIs with the ``[[deprecated]]`` attribute and removed
internal uses of deprecated APIs
(`#1022 <https://github.com/fmtlib/fmt/pull/1022>`_).
Thanks `@eliaskosunen (Elias Kosunen) <https://github.com/eliaskosunen>`_.
* Modernized the codebase using more C++11 features and removing workarounds.
Most importantly, ``buffer_context`` is now an alias template, so
use ``buffer_context<T>`` instead of ``buffer_context<T>::type``.
These features require GCC 4.8 or later.
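For example (a minimal sketch of the renamed alias):

.. code:: c++

   using context = fmt::buffer_context<char>;          // new
   // using context = fmt::buffer_context<char>::type; // old, no longer needed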
* ``formatter`` specializations now always take precedence over implicit
conversions to ``int`` and the undocumented ``convert_to_int`` trait
is now deprecated.
* Moved the undocumented ``basic_writer``, ``writer``, and ``wwriter`` types
to the ``internal`` namespace.
* Removed deprecated ``basic_format_context::begin()``. Use ``out()`` instead.
* Disallowed passing the result of ``join`` as an lvalue to prevent misuse.
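The result of ``join`` should be passed directly to a formatting function, for example (a minimal sketch; depending on the version ``fmt::join`` is declared in ``fmt/format.h`` or ``fmt/ranges.h``):

.. code:: c++

   #include <vector>
   #include <fmt/format.h>

   std::vector<int> v = {1, 2, 3};
   fmt::print("{}", fmt::join(v, ", "));  // OK: rvalue passed directly
   // auto j = fmt::join(v, ", ");
   // fmt::print("{}", j);                // no longer compiles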
* Refactored the undocumented structs that represent parsed format specifiers
to simplify the API and allow multibyte fill.
* Moved SFINAE to template parameters to reduce symbol sizes.
* Switched to ``fputws`` for writing wide strings so that it's no longer
required to call ``_setmode`` on Windows
(`#1229 <https://github.com/fmtlib/fmt/issues/1229>`_,
`#1243 <https://github.com/fmtlib/fmt/pull/1243>`_).
Thanks `@jackoalan (Jack Andersen) <https://github.com/jackoalan>`_.
* Improved literal-based API
(`#1254 <https://github.com/fmtlib/fmt/pull/1254>`_).
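For example (a sketch of the literal-based API; both literals come from ``fmt::literals``):

.. code:: c++

   #include <fmt/format.h>

   using namespace fmt::literals;
   auto s = "The answer is {answer}."_format("answer"_a = 42);
   // s == "The answer is 42."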
Thanks `@sylveon (Charles Milette) <https://github.com/sylveon>`_.
* Added support for exotic platforms without ``uintptr_t`` such as IBM i
(AS/400) which has 128-bit pointers and only 64-bit integers
(`#1059 <https://github.com/fmtlib/fmt/issues/1059>`_).
* Added `Sublime Text syntax highlighting config
<https://github.com/fmtlib/fmt/blob/master/support/C%2B%2B.sublime-syntax>`_
(`#1037 <https://github.com/fmtlib/fmt/issues/1037>`_).
Thanks `@Kronuz (Germán Méndez Bravo) <https://github.com/Kronuz>`_.
* Added the ``FMT_ENFORCE_COMPILE_STRING`` macro to enforce the use of
compile-time format strings
(`#1231 <https://github.com/fmtlib/fmt/pull/1231>`_).
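For example (a minimal sketch; defining the macro before including ``fmt/format.h`` is assumed to be sufficient):

.. code:: c++

   #define FMT_ENFORCE_COMPILE_STRING
   #include <fmt/format.h>

   fmt::print(FMT_STRING("{}"), 42); // OK: compile-time checked
   // fmt::print("{}", 42);          // rejected when the macro is defined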
Thanks `@jackoalan (Jack Andersen) <https://github.com/jackoalan>`_.
* Stopped setting ``CMAKE_BUILD_TYPE`` if {fmt} is a subproject
(`#1081 <https://github.com/fmtlib/fmt/issues/1081>`_).
* Various build improvements
(`#1039 <https://github.com/fmtlib/fmt/pull/1039>`_,
`#1078 <https://github.com/fmtlib/fmt/pull/1078>`_,
`#1091 <https://github.com/fmtlib/fmt/pull/1091>`_,
`#1103 <https://github.com/fmtlib/fmt/pull/1103>`_,
`#1177 <https://github.com/fmtlib/fmt/pull/1177>`_).
Thanks `@luncliff (Park DongHa) <https://github.com/luncliff>`_,
`@jasonszang (Jason Shuo Zang) <https://github.com/jasonszang>`_,
`@olafhering (Olaf Hering) <https://github.com/olafhering>`_,
`@Lectem <https://github.com/Lectem>`_,
`@pauldreik (Paul Dreik) <https://github.com/pauldreik>`_.
* Improved documentation
(`#1049 <https://github.com/fmtlib/fmt/issues/1049>`_,
`#1051 <https://github.com/fmtlib/fmt/pull/1051>`_,
`#1083 <https://github.com/fmtlib/fmt/pull/1083>`_,
`#1113 <https://github.com/fmtlib/fmt/pull/1113>`_,
`#1114 <https://github.com/fmtlib/fmt/pull/1114>`_,
`#1146 <https://github.com/fmtlib/fmt/issues/1146>`_,
`#1180 <https://github.com/fmtlib/fmt/issues/1180>`_,
`#1250 <https://github.com/fmtlib/fmt/pull/1250>`_,
`#1252 <https://github.com/fmtlib/fmt/pull/1252>`_,
`#1265 <https://github.com/fmtlib/fmt/pull/1265>`_).
Thanks `@mikelui (Michael Lui) <https://github.com/mikelui>`_,
`@foonathan (Jonathan Müller) <https://github.com/foonathan>`_,
`@BillyDonahue (Billy Donahue) <https://github.com/BillyDonahue>`_,
`@jwakely (Jonathan Wakely) <https://github.com/jwakely>`_,
`@kaisbe (Kais Ben Salah) <https://github.com/kaisbe>`_,
`@sdebionne (Samuel Debionne) <https://github.com/sdebionne>`_.
* Fixed ambiguous formatter specialization in ``fmt/ranges.h``
(`#1123 <https://github.com/fmtlib/fmt/issues/1123>`_).
* Fixed formatting of a non-empty ``std::filesystem::path`` which is an
infinitely deep range of its components
(`#1268 <https://github.com/fmtlib/fmt/issues/1268>`_).
* Fixed handling of general output iterators when formatting characters
(`#1056 <https://github.com/fmtlib/fmt/issues/1056>`_,
`#1058 <https://github.com/fmtlib/fmt/pull/1058>`_).
Thanks `@abolz (Alexander Bolz) <https://github.com/abolz>`_.
* Fixed handling of output iterators in ``formatter`` specialization for
ranges (`#1064 <https://github.com/fmtlib/fmt/issues/1064>`_).
* Fixed handling of exotic character types
(`#1188 <https://github.com/fmtlib/fmt/issues/1188>`_).
* Made chrono formatting work with exceptions disabled
(`#1062 <https://github.com/fmtlib/fmt/issues/1062>`_).
* Fixed DLL visibility issues
(`#1134 <https://github.com/fmtlib/fmt/pull/1134>`_,
`#1147 <https://github.com/fmtlib/fmt/pull/1147>`_).
Thanks `@denchat <https://github.com/denchat>`_.
* Disabled the use of UDL template extension on GCC 9
(`#1148 <https://github.com/fmtlib/fmt/issues/1148>`_).
* Removed misplaced ``format`` compile-time checks from ``printf``
(`#1173 <https://github.com/fmtlib/fmt/issues/1173>`_).
* Fixed issues in the experimental floating-point formatter
(`#1072 <https://github.com/fmtlib/fmt/issues/1072>`_,
`#1129 <https://github.com/fmtlib/fmt/issues/1129>`_,
`#1153 <https://github.com/fmtlib/fmt/issues/1153>`_,
`#1155 <https://github.com/fmtlib/fmt/pull/1155>`_,
`#1210 <https://github.com/fmtlib/fmt/issues/1210>`_,
`#1222 <https://github.com/fmtlib/fmt/issues/1222>`_).
Thanks `@alabuzhev (Alex Alabuzhev) <https://github.com/alabuzhev>`_.
* Fixed bugs discovered by fuzzing or during fuzzing integration
(`#1124 <https://github.com/fmtlib/fmt/issues/1124>`_,
`#1127 <https://github.com/fmtlib/fmt/issues/1127>`_,
`#1132 <https://github.com/fmtlib/fmt/issues/1132>`_,
`#1135 <https://github.com/fmtlib/fmt/pull/1135>`_,
`#1136 <https://github.com/fmtlib/fmt/issues/1136>`_,
`#1141 <https://github.com/fmtlib/fmt/issues/1141>`_,
`#1142 <https://github.com/fmtlib/fmt/issues/1142>`_,
`#1178 <https://github.com/fmtlib/fmt/issues/1178>`_,
`#1179 <https://github.com/fmtlib/fmt/issues/1179>`_,
`#1194 <https://github.com/fmtlib/fmt/issues/1194>`_).
Thanks `@pauldreik (Paul Dreik) <https://github.com/pauldreik>`_.
* Fixed building tests on FreeBSD and Hurd
(`#1043 <https://github.com/fmtlib/fmt/issues/1043>`_).
Thanks `@jackyf (Eugene V. Lyubimkin) <https://github.com/jackyf>`_.
* Fixed various warnings and compilation issues
(`#998 <https://github.com/fmtlib/fmt/pull/998>`_,
`#1006 <https://github.com/fmtlib/fmt/pull/1006>`_,
`#1008 <https://github.com/fmtlib/fmt/issues/1008>`_,
`#1011 <https://github.com/fmtlib/fmt/issues/1011>`_,
`#1025 <https://github.com/fmtlib/fmt/issues/1025>`_,
`#1027 <https://github.com/fmtlib/fmt/pull/1027>`_,
`#1028 <https://github.com/fmtlib/fmt/pull/1028>`_,
`#1029 <https://github.com/fmtlib/fmt/pull/1029>`_,
`#1030 <https://github.com/fmtlib/fmt/pull/1030>`_,
`#1031 <https://github.com/fmtlib/fmt/pull/1031>`_,
`#1054 <https://github.com/fmtlib/fmt/pull/1054>`_,
`#1063 <https://github.com/fmtlib/fmt/issues/1063>`_,
`#1068 <https://github.com/fmtlib/fmt/pull/1068>`_,
`#1074 <https://github.com/fmtlib/fmt/pull/1074>`_,
`#1075 <https://github.com/fmtlib/fmt/pull/1075>`_,
`#1079 <https://github.com/fmtlib/fmt/pull/1079>`_,
`#1086 <https://github.com/fmtlib/fmt/pull/1086>`_,
`#1088 <https://github.com/fmtlib/fmt/issues/1088>`_,
`#1089 <https://github.com/fmtlib/fmt/pull/1089>`_,
`#1094 <https://github.com/fmtlib/fmt/pull/1094>`_,
`#1101 <https://github.com/fmtlib/fmt/issues/1101>`_,
`#1102 <https://github.com/fmtlib/fmt/pull/1102>`_,
`#1105 <https://github.com/fmtlib/fmt/issues/1105>`_,
`#1107 <https://github.com/fmtlib/fmt/pull/1107>`_,
`#1115 <https://github.com/fmtlib/fmt/issues/1115>`_,
`#1117 <https://github.com/fmtlib/fmt/issues/1117>`_,
`#1118 <https://github.com/fmtlib/fmt/issues/1118>`_,
`#1120 <https://github.com/fmtlib/fmt/issues/1120>`_,
`#1123 <https://github.com/fmtlib/fmt/issues/1123>`_,
`#1139 <https://github.com/fmtlib/fmt/pull/1139>`_,
`#1140 <https://github.com/fmtlib/fmt/issues/1140>`_,
`#1143 <https://github.com/fmtlib/fmt/issues/1143>`_,
`#1144 <https://github.com/fmtlib/fmt/pull/1144>`_,
`#1150 <https://github.com/fmtlib/fmt/pull/1150>`_,
`#1151 <https://github.com/fmtlib/fmt/pull/1151>`_,
`#1152 <https://github.com/fmtlib/fmt/issues/1152>`_,
`#1154 <https://github.com/fmtlib/fmt/issues/1154>`_,
`#1156 <https://github.com/fmtlib/fmt/issues/1156>`_,
`#1159 <https://github.com/fmtlib/fmt/pull/1159>`_,
`#1175 <https://github.com/fmtlib/fmt/issues/1175>`_,
`#1181 <https://github.com/fmtlib/fmt/issues/1181>`_,
`#1186 <https://github.com/fmtlib/fmt/issues/1186>`_,
`#1187 <https://github.com/fmtlib/fmt/pull/1187>`_,
`#1191 <https://github.com/fmtlib/fmt/pull/1191>`_,
`#1197 <https://github.com/fmtlib/fmt/issues/1197>`_,
`#1200 <https://github.com/fmtlib/fmt/issues/1200>`_,
`#1203 <https://github.com/fmtlib/fmt/issues/1203>`_,
`#1205 <https://github.com/fmtlib/fmt/issues/1205>`_,
`#1206 <https://github.com/fmtlib/fmt/pull/1206>`_,
`#1213 <https://github.com/fmtlib/fmt/issues/1213>`_,
`#1214 <https://github.com/fmtlib/fmt/issues/1214>`_,
`#1217 <https://github.com/fmtlib/fmt/pull/1217>`_,
`#1228 <https://github.com/fmtlib/fmt/issues/1228>`_,
`#1230 <https://github.com/fmtlib/fmt/pull/1230>`_,
`#1232 <https://github.com/fmtlib/fmt/issues/1232>`_,
`#1235 <https://github.com/fmtlib/fmt/pull/1235>`_,
`#1236 <https://github.com/fmtlib/fmt/pull/1236>`_,
`#1240 <https://github.com/fmtlib/fmt/issues/1240>`_).
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_,
`@mwinterb <https://github.com/mwinterb>`_,
`@eliaskosunen (Elias Kosunen) <https://github.com/eliaskosunen>`_,
`@morinmorin <https://github.com/morinmorin>`_,
`@ricco19 (Brian Ricciardelli) <https://github.com/ricco19>`_,
`@waywardmonkeys (Bruce Mitchener) <https://github.com/waywardmonkeys>`_,
`@chronoxor (Ivan Shynkarenka) <https://github.com/chronoxor>`_,
`@remyabel <https://github.com/remyabel>`_,
`@pauldreik (Paul Dreik) <https://github.com/pauldreik>`_,
`@gsjaardema (Greg Sjaardema) <https://github.com/gsjaardema>`_,
`@rcane (Ronny Krüger) <https://github.com/rcane>`_,
`@mocabe <https://github.com/mocabe>`_,
`@denchat <https://github.com/denchat>`_,
`@cjdb (Christopher Di Bella) <https://github.com/cjdb>`_,
`@HazardyKnusperkeks (Björn Schäpers) <https://github.com/HazardyKnusperkeks>`_,
`@vedranmiletic (Vedran Miletić) <https://github.com/vedranmiletic>`_,
`@jackoalan (Jack Andersen) <https://github.com/jackoalan>`_,
`@DaanDeMeyer (Daan De Meyer) <https://github.com/DaanDeMeyer>`_,
`@starkmapper (Mark Stapper) <https://github.com/starkmapper>`_.
5.3.0 - 2018-12-28
------------------
* Introduced experimental chrono formatting support:
.. code:: c++

   #include <fmt/chrono.h>

   int main() {
     using namespace std::literals::chrono_literals;
     fmt::print("Default format: {} {}\n", 42s, 100ms);
     fmt::print("strftime-like format: {:%H:%M:%S}\n", 3h + 15min + 30s);
   }

prints::

  Default format: 42s 100ms
  strftime-like format: 03:15:30
* Added experimental support for emphasis (bold, italic, underline,
strikethrough), colored output to a file stream, and improved colored
formatting API
(`#961 <https://github.com/fmtlib/fmt/pull/961>`_,
`#967 <https://github.com/fmtlib/fmt/pull/967>`_,
`#973 <https://github.com/fmtlib/fmt/pull/973>`_):
.. code:: c++

   #include <fmt/color.h>

   int main() {
     print(fg(fmt::color::crimson) | fmt::emphasis::bold,
           "Hello, {}!\n", "world");
     print(fg(fmt::color::floral_white) | bg(fmt::color::slate_gray) |
           fmt::emphasis::underline, "Hello, {}!\n", "мир");
     print(fg(fmt::color::steel_blue) | fmt::emphasis::italic,
           "Hello, {}!\n", "世界");
   }

prints the following on modern terminals with RGB color support:

.. image:: https://user-images.githubusercontent.com/576385/50405788-b66e7500-076e-11e9-9592-7324d1f951d8.png
Thanks `@Rakete1111 (Nicolas) <https://github.com/Rakete1111>`_.
* Added support for 4-bit terminal colors
(`#968 <https://github.com/fmtlib/fmt/issues/968>`_,
`#974 <https://github.com/fmtlib/fmt/pull/974>`_)
.. code:: c++

   #include <fmt/color.h>

   int main() {
     print(fg(fmt::terminal_color::red), "stop\n");
   }

Note that these colors vary by terminal:

.. image:: https://user-images.githubusercontent.com/576385/50405925-dbfc7e00-0770-11e9-9b85-333fab0af9ac.png
Thanks `@Rakete1111 (Nicolas) <https://github.com/Rakete1111>`_.
* Parameterized formatting functions on the type of the format string
(`#880 <https://github.com/fmtlib/fmt/issues/880>`_,
`#881 <https://github.com/fmtlib/fmt/pull/881>`_,
`#883 <https://github.com/fmtlib/fmt/pull/883>`_,
`#885 <https://github.com/fmtlib/fmt/pull/885>`_,
`#897 <https://github.com/fmtlib/fmt/pull/897>`_,
`#920 <https://github.com/fmtlib/fmt/issues/920>`_).
Any object of type ``S`` that has an overloaded ``to_string_view(const S&)``
returning ``fmt::string_view`` can be used as a format string:
.. code:: c++

   namespace my_ns {
   inline string_view to_string_view(const my_string& s) {
     return {s.data(), s.length()};
   }
   }

   std::string message = fmt::format(my_string("The answer is {}."), 42);
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
* Made ``std::string_view`` work as a format string
(`#898 <https://github.com/fmtlib/fmt/pull/898>`_):
.. code:: c++

   auto message = fmt::format(std::string_view("The answer is {}."), 42);
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
* Added wide string support to compile-time format string checks
(`#924 <https://github.com/fmtlib/fmt/pull/924>`_):
.. code:: c++

   print(fmt(L"{:f}"), 42); // compile-time error: invalid type specifier
Thanks `@XZiar <https://github.com/XZiar>`_.
* Made colored print functions work with wide strings
(`#867 <https://github.com/fmtlib/fmt/pull/867>`_):
.. code:: c++

   #include <fmt/color.h>

   int main() {
     print(fg(fmt::color::red), L"{}\n", 42);
   }
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
* Introduced experimental Unicode support
(`#628 <https://github.com/fmtlib/fmt/issues/628>`_,
`#891 <https://github.com/fmtlib/fmt/pull/891>`_):
.. code:: c++

   using namespace fmt::literals;
   auto s = fmt::format("{:*^5}"_u, "🤡"_u); // s == "**🤡**"_u
* Improved locale support:
.. code:: c++

   #include <fmt/locale.h>

   struct numpunct : std::numpunct<char> {
    protected:
     char do_thousands_sep() const override { return '~'; }
   };

   std::locale loc;
   auto s = fmt::format(std::locale(loc, new numpunct()), "{:n}", 1234567);
   // s == "1~234~567"
* Constrained formatting functions on proper iterator types
(`#921 <https://github.com/fmtlib/fmt/pull/921>`_).
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
* Added ``make_printf_args`` and ``make_wprintf_args`` functions
(`#934 <https://github.com/fmtlib/fmt/pull/934>`_).
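For example (a sketch; the exact overload set of the printf-style API is an assumption):

.. code:: c++

   #include <fmt/printf.h>

   fmt::vprintf("%s = %d\n", fmt::make_printf_args("answer", 42));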
Thanks `@tnovotny <https://github.com/tnovotny>`_.
* Deprecated ``fmt::visit``, ``parse_context``, and ``wparse_context``.
Use ``fmt::visit_format_arg``, ``format_parse_context``, and
``wformat_parse_context`` instead.
* Removed undocumented ``basic_fixed_buffer`` which has been superseded by the
iterator-based API
(`#873 <https://github.com/fmtlib/fmt/issues/873>`_,
`#902 <https://github.com/fmtlib/fmt/pull/902>`_).
Thanks `@superfunc (hollywood programmer) <https://github.com/superfunc>`_.
* Disallowed repeated leading zeros in an argument ID:
.. code:: c++

   fmt::print("{000}", 42); // error
* Reintroduced support for gcc 4.4.
* Fixed compilation on platforms with exotic ``double``
(`#878 <https://github.com/fmtlib/fmt/issues/878>`_).
* Improved documentation
(`#164 <https://github.com/fmtlib/fmt/issues/164>`_,
`#877 <https://github.com/fmtlib/fmt/issues/877>`_,
`#901 <https://github.com/fmtlib/fmt/pull/901>`_,
`#906 <https://github.com/fmtlib/fmt/pull/906>`_,
`#979 <https://github.com/fmtlib/fmt/pull/979>`_).
Thanks `@kookjr (Mathew Cucuzella) <https://github.com/kookjr>`_,
`@DarkDimius (Dmitry Petrashko) <https://github.com/DarkDimius>`_,
`@HecticSerenity <https://github.com/HecticSerenity>`_.
* Added pkgconfig support which makes it easier to consume the library from
meson and other build systems
(`#916 <https://github.com/fmtlib/fmt/pull/916>`_).
Thanks `@colemickens (Cole Mickens) <https://github.com/colemickens>`_.
* Various build improvements
(`#909 <https://github.com/fmtlib/fmt/pull/909>`_,
`#926 <https://github.com/fmtlib/fmt/pull/926>`_,
`#937 <https://github.com/fmtlib/fmt/pull/937>`_,
`#953 <https://github.com/fmtlib/fmt/pull/953>`_,
`#959 <https://github.com/fmtlib/fmt/pull/959>`_).
Thanks `@tchaikov (Kefu Chai) <https://github.com/tchaikov>`_,
`@luncliff (Park DongHa) <https://github.com/luncliff>`_,
`@AndreasSchoenle (Andreas Schönle) <https://github.com/AndreasSchoenle>`_,
`@hotwatermorning <https://github.com/hotwatermorning>`_,
`@Zefz (JohanJansen) <https://github.com/Zefz>`_.
* Improved ``string_view`` construction performance
(`#914 <https://github.com/fmtlib/fmt/pull/914>`_).
Thanks `@gabime (Gabi Melman) <https://github.com/gabime>`_.
* Fixed non-matching char types
(`#895 <https://github.com/fmtlib/fmt/pull/895>`_).
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
* Fixed ``format_to_n`` with ``std::back_insert_iterator``
(`#913 <https://github.com/fmtlib/fmt/pull/913>`_).
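For example (a minimal sketch of the now-working combination):

.. code:: c++

   #include <iterator>
   #include <string>
   #include <fmt/format.h>

   std::string out;
   fmt::format_to_n(std::back_inserter(out), 5, "{}", "Hello, world");
   // out == "Hello"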
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
* Fixed locale-dependent formatting
(`#905 <https://github.com/fmtlib/fmt/issues/905>`_).
* Fixed various compiler warnings and errors
(`#882 <https://github.com/fmtlib/fmt/pull/882>`_,
`#886 <https://github.com/fmtlib/fmt/pull/886>`_,
`#933 <https://github.com/fmtlib/fmt/pull/933>`_,
`#941 <https://github.com/fmtlib/fmt/pull/941>`_,
`#931 <https://github.com/fmtlib/fmt/issues/931>`_,
`#943 <https://github.com/fmtlib/fmt/pull/943>`_,
`#954 <https://github.com/fmtlib/fmt/pull/954>`_,
`#956 <https://github.com/fmtlib/fmt/pull/956>`_,
`#962 <https://github.com/fmtlib/fmt/pull/962>`_,
`#965 <https://github.com/fmtlib/fmt/issues/965>`_,
`#977 <https://github.com/fmtlib/fmt/issues/977>`_,
`#983 <https://github.com/fmtlib/fmt/pull/983>`_,
`#989 <https://github.com/fmtlib/fmt/pull/989>`_).
Thanks `@Luthaf (Guillaume Fraux) <https://github.com/Luthaf>`_,
`@stevenhoving (Steven Hoving) <https://github.com/stevenhoving>`_,
`@christinaa (Kristina Brooks) <https://github.com/christinaa>`_,
`@lgritz (Larry Gritz) <https://github.com/lgritz>`_,
`@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_,
`@0x8000-0000 (Sign Bit) <https://github.com/0x8000-0000>`_,
`@liuping1997 <https://github.com/liuping1997>`_.
5.2.1 - 2018-09-21
------------------
* Fixed ``visit`` lookup issues on gcc 7 & 8
(`#870 <https://github.com/fmtlib/fmt/pull/870>`_).
Thanks `@medithe <https://github.com/medithe>`_.
* Fixed linkage errors on older gcc.
* Prevented ``fmt/range.h`` from specializing ``fmt::basic_string_view``
(`#865 <https://github.com/fmtlib/fmt/issues/865>`_,
`#868 <https://github.com/fmtlib/fmt/pull/868>`_).
Thanks `@hhggit (dual) <https://github.com/hhggit>`_.
* Improved error message when formatting unknown types
(`#872 <https://github.com/fmtlib/fmt/pull/872>`_).
Thanks `@foonathan (Jonathan Müller) <https://github.com/foonathan>`_.
* Disabled templated user-defined literals when compiled under nvcc
(`#875 <https://github.com/fmtlib/fmt/pull/875>`_).
Thanks `@CandyGumdrop (Candy Gumdrop) <https://github.com/CandyGumdrop>`_.
* Fixed ``format_to`` formatting to ``wmemory_buffer``
(`#874 <https://github.com/fmtlib/fmt/issues/874>`_).
5.2.0 - 2018-09-13
------------------
* Optimized format string parsing and argument processing which resulted in up
to 5x speed up on long format strings and significant performance boost on
various benchmarks. For example, version 5.2 is 2.22x faster than 5.1 on
decimal integer formatting with ``format_to`` (macOS, clang-902.0.39.2):
================== ======= =======
Method             Time, s Speedup
================== ======= =======
fmt::format 5.1    0.58
fmt::format 5.2    0.35    1.66x
fmt::format_to 5.1 0.51
fmt::format_to 5.2 0.23    2.22x
sprintf            0.71
std::to_string     1.01
std::stringstream  1.73
================== ======= =======
* Changed the ``fmt`` macro from opt-out to opt-in to prevent name collisions.
To enable it define the ``FMT_STRING_ALIAS`` macro to 1 before including
``fmt/format.h``:
.. code:: c++

   #define FMT_STRING_ALIAS 1
   #include <fmt/format.h>

   std::string answer = format(fmt("{}"), 42);
* Added compile-time format string checks to ``format_to`` overload that takes
``fmt::memory_buffer`` (`#783 <https://github.com/fmtlib/fmt/issues/783>`_):
.. code:: c++

   fmt::memory_buffer buf;
   // Compile-time error: invalid type specifier.
   fmt::format_to(buf, fmt("{:d}"), "foo");
* Moved experimental color support to ``fmt/color.h`` and enabled the
new API by default. The old API can be enabled by defining the
``FMT_DEPRECATED_COLORS`` macro.
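A minimal usage sketch of the new API (assuming the ``fg`` helper and the
``fmt::color`` enum provided by ``fmt/color.h``):
.. code:: c++
#include <fmt/color.h>
fmt::print(fg(fmt::color::steel_blue), "Some beautiful text");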
* Added formatting support for types explicitly convertible to
``fmt::string_view``:
.. code:: c++
struct foo {
explicit operator fmt::string_view() const { return "foo"; }
};
auto s = format("{}", foo());
In particular, this makes formatting functions work with
``folly::StringPiece``.
* Implemented preliminary support for ``char*_t`` by replacing the ``format``
function overloads with a single function template parameterized on the string
type.
* Added support for dynamic argument lists
(`#814 <https://github.com/fmtlib/fmt/issues/814>`_,
`#819 <https://github.com/fmtlib/fmt/pull/819>`_).
Thanks `@MikePopoloski (Michael Popoloski)
<https://github.com/MikePopoloski>`_.
* Reduced executable size overhead for embedded targets using newlib nano by
making locale dependency optional
(`#839 <https://github.com/fmtlib/fmt/pull/839>`_).
Thanks `@teajay-fr (Thomas Benard) <https://github.com/teajay-fr>`_.
* Keep ``noexcept`` specifier when exceptions are disabled
(`#801 <https://github.com/fmtlib/fmt/issues/801>`_,
`#810 <https://github.com/fmtlib/fmt/pull/810>`_).
Thanks `@qis (Alexej Harm) <https://github.com/qis>`_.
* Fixed formatting of user-defined types providing ``operator<<`` with
``format_to_n``
(`#806 <https://github.com/fmtlib/fmt/pull/806>`_).
Thanks `@mkurdej (Marek Kurdej) <https://github.com/mkurdej>`_.
* Fixed dynamic linkage of new symbols
(`#808 <https://github.com/fmtlib/fmt/issues/808>`_).
* Fixed global initialization issue
(`#807 <https://github.com/fmtlib/fmt/issues/807>`_):
.. code:: c++
// This works on compilers with constexpr support.
static const std::string answer = fmt::format("{}", 42);
* Fixed various compiler warnings and errors
(`#804 <https://github.com/fmtlib/fmt/pull/804>`_,
`#809 <https://github.com/fmtlib/fmt/issues/809>`_,
`#811 <https://github.com/fmtlib/fmt/pull/811>`_,
`#822 <https://github.com/fmtlib/fmt/issues/822>`_,
`#827 <https://github.com/fmtlib/fmt/pull/827>`_,
`#830 <https://github.com/fmtlib/fmt/issues/830>`_,
`#838 <https://github.com/fmtlib/fmt/pull/838>`_,
`#843 <https://github.com/fmtlib/fmt/issues/843>`_,
`#844 <https://github.com/fmtlib/fmt/pull/844>`_,
`#851 <https://github.com/fmtlib/fmt/issues/851>`_,
`#852 <https://github.com/fmtlib/fmt/pull/852>`_,
`#854 <https://github.com/fmtlib/fmt/pull/854>`_).
Thanks `@henryiii (Henry Schreiner) <https://github.com/henryiii>`_,
`@medithe <https://github.com/medithe>`_, and
`@eliasdaler (Elias Daler) <https://github.com/eliasdaler>`_.
5.1.0 - 2018-07-05
------------------
* Added experimental support for RGB color output enabled with
the ``FMT_EXTENDED_COLORS`` macro:
.. code:: c++
#define FMT_EXTENDED_COLORS
#define FMT_HEADER_ONLY // or compile fmt with FMT_EXTENDED_COLORS defined
#include <fmt/format.h>
fmt::print(fmt::color::steel_blue, "Some beautiful text");
The old API (the ``print_colored`` and ``vprint_colored`` functions and the
``color`` enum) is now deprecated.
(`#762 <https://github.com/fmtlib/fmt/issues/762>`_,
`#767 <https://github.com/fmtlib/fmt/pull/767>`_).
Thanks `@Remotion (Remo) <https://github.com/Remotion>`_.
* Added quotes to strings in ranges and tuples
(`#766 <https://github.com/fmtlib/fmt/pull/766>`_).
Thanks `@Remotion (Remo) <https://github.com/Remotion>`_.
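A minimal sketch; the exact brace and quote style shown in the comment is an
approximation of this release's range formatter:
.. code:: c++
#include <fmt/ranges.h>
#include <string>
#include <vector>
std::vector<std::string> v = {"eggs", "bacon"};
fmt::print("{}", v); // elements are now quoted, e.g. {"eggs", "bacon"}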
* Made ``format_to`` work with ``basic_memory_buffer``
(`#776 <https://github.com/fmtlib/fmt/issues/776>`_).
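A minimal sketch (assuming the ``wmemory_buffer`` alias for
``basic_memory_buffer<wchar_t>``):
.. code:: c++
fmt::wmemory_buffer buf;
fmt::format_to(buf, L"The answer is {}.", 42);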
* Added ``vformat_to_n`` and ``wchar_t`` overload of ``format_to_n``
(`#764 <https://github.com/fmtlib/fmt/issues/764>`_,
`#769 <https://github.com/fmtlib/fmt/issues/769>`_).
* Made ``is_range`` and ``is_tuple_like`` part of public (experimental) API
to allow specialization for user-defined types
(`#751 <https://github.com/fmtlib/fmt/issues/751>`_,
`#759 <https://github.com/fmtlib/fmt/pull/759>`_).
Thanks `@drrlvn (Dror Levin) <https://github.com/drrlvn>`_.
* Added more compilers to continuous integration and increased ``FMT_PEDANTIC``
warning levels
(`#736 <https://github.com/fmtlib/fmt/pull/736>`_).
Thanks `@eliaskosunen (Elias Kosunen) <https://github.com/eliaskosunen>`_.
* Fixed compilation with MSVC 2013.
* Fixed handling of user-defined types in ``format_to``
(`#793 <https://github.com/fmtlib/fmt/issues/793>`_).
* Forced linking of inline ``vformat`` functions into the library
(`#795 <https://github.com/fmtlib/fmt/issues/795>`_).
* Fixed incorrect call to on_align in ``'{:}='``
(`#750 <https://github.com/fmtlib/fmt/issues/750>`_).
* Fixed floating-point formatting to a non-back_insert_iterator with sign &
numeric alignment specified
(`#756 <https://github.com/fmtlib/fmt/issues/756>`_).
* Fixed formatting to an array with ``format_to_n``
(`#778 <https://github.com/fmtlib/fmt/issues/778>`_).
* Fixed formatting of more than 15 named arguments
(`#754 <https://github.com/fmtlib/fmt/issues/754>`_).
* Fixed handling of compile-time strings when including ``fmt/ostream.h``.
(`#768 <https://github.com/fmtlib/fmt/issues/768>`_).
* Fixed various compiler warnings and errors
(`#742 <https://github.com/fmtlib/fmt/issues/742>`_,
`#748 <https://github.com/fmtlib/fmt/issues/748>`_,
`#752 <https://github.com/fmtlib/fmt/issues/752>`_,
`#770 <https://github.com/fmtlib/fmt/issues/770>`_,
`#775 <https://github.com/fmtlib/fmt/pull/775>`_,
`#779 <https://github.com/fmtlib/fmt/issues/779>`_,
`#780 <https://github.com/fmtlib/fmt/pull/780>`_,
`#790 <https://github.com/fmtlib/fmt/pull/790>`_,
`#792 <https://github.com/fmtlib/fmt/pull/792>`_,
`#800 <https://github.com/fmtlib/fmt/pull/800>`_).
Thanks `@Remotion (Remo) <https://github.com/Remotion>`_,
`@gabime (Gabi Melman) <https://github.com/gabime>`_,
`@foonathan (Jonathan Müller) <https://github.com/foonathan>`_,
`@Dark-Passenger (Dhruv Paranjape) <https://github.com/Dark-Passenger>`_, and
`@0x8000-0000 (Sign Bit) <https://github.com/0x8000-0000>`_.
5.0.0 - 2018-05-21
------------------
* Added a requirement for partial C++11 support, most importantly variadic
templates and type traits, and dropped ``FMT_VARIADIC_*`` emulation macros.
Variadic templates are available since GCC 4.4, Clang 2.9 and MSVC 18.0 (2013).
For older compilers use {fmt} `version 4.x
<https://github.com/fmtlib/fmt/releases/tag/4.1.0>`_ which continues to be
maintained and works with C++98 compilers.
* Renamed symbols to follow standard C++ naming conventions and proposed a subset
of the library for standardization in `P0645R2 Text Formatting
<https://wg21.link/P0645>`_.
* Implemented ``constexpr`` parsing of format strings and `compile-time format
string checks
<https://fmt.dev/latest/api.html#compile-time-format-string-checks>`_. For
example
.. code:: c++
#include <fmt/format.h>
std::string s = format(fmt("{:d}"), "foo");
gives a compile-time error because ``d`` is an invalid specifier for strings
(`godbolt <https://godbolt.org/g/rnCy9Q>`__)::
...
<source>:4:19: note: in instantiation of function template specialization 'fmt::v5::format<S, char [4]>' requested here
std::string s = format(fmt("{:d}"), "foo");
^
format.h:1337:13: note: non-constexpr function 'on_error' cannot be used in a constant expression
handler.on_error("invalid type specifier");
Compile-time checks require relaxed ``constexpr`` (C++14 feature) support. If
the latter is not available, checks will be performed at runtime.
* Separated format string parsing and formatting in the extension API to enable
compile-time format string processing. For example
.. code:: c++
struct Answer {};
namespace fmt {
template <>
struct formatter<Answer> {
constexpr auto parse(parse_context& ctx) {
auto it = ctx.begin();
spec = *it;
if (spec != 'd' && spec != 's')
throw format_error("invalid specifier");
return ++it;
}
template <typename FormatContext>
auto format(Answer, FormatContext& ctx) {
return spec == 's' ?
format_to(ctx.begin(), "{}", "forty-two") :
format_to(ctx.begin(), "{}", 42);
}
char spec = 0;
};
}
std::string s = format(fmt("{:x}"), Answer());
gives a compile-time error due to invalid format specifier (`godbolt
<https://godbolt.org/g/2jQ1Dv>`__)::
...
<source>:12:45: error: expression '<throw-expression>' is not a constant expression
throw format_error("invalid specifier");
* Added `iterator support
<https://fmt.dev/latest/api.html#output-iterator-support>`_:
.. code:: c++
#include <vector>
#include <fmt/format.h>
std::vector<char> out;
fmt::format_to(std::back_inserter(out), "{}", 42);
* Added the `format_to_n
<https://fmt.dev/latest/api.html#_CPPv2N3fmt11format_to_nE8OutputItNSt6size_tE11string_viewDpRK4Args>`_
function that restricts the output to the specified number of characters
(`#298 <https://github.com/fmtlib/fmt/issues/298>`_):
.. code:: c++
char out[4];
fmt::format_to_n(out, sizeof(out), "{}", 12345);
// out == "1234" (without terminating '\0')
* Added the `formatted_size
<https://fmt.dev/latest/api.html#_CPPv2N3fmt14formatted_sizeE11string_viewDpRK4Args>`_
function for computing the output size:
.. code:: c++
#include <fmt/format.h>
auto size = fmt::formatted_size("{}", 12345); // size == 5
* Improved compile times by reducing dependencies on standard headers and
providing a lightweight `core API <https://fmt.dev/latest/api.html#core-api>`_:
.. code:: c++
#include <fmt/core.h>
fmt::print("The answer is {}.", 42);
See `Compile time and code bloat
<https://github.com/fmtlib/fmt#compile-time-and-code-bloat>`_.
* Added the `make_format_args
<https://fmt.dev/latest/api.html#_CPPv2N3fmt16make_format_argsEDpRK4Args>`_
function for capturing formatting arguments:
.. code:: c++
// Prints formatted error message.
void vreport_error(const char *format, fmt::format_args args) {
fmt::print("Error: ");
fmt::vprint(format, args);
}
template <typename... Args>
void report_error(const char *format, const Args & ... args) {
vreport_error(format, fmt::make_format_args(args...));
}
* Added the ``make_printf_args`` function for capturing ``printf`` arguments
(`#687 <https://github.com/fmtlib/fmt/issues/687>`_,
`#694 <https://github.com/fmtlib/fmt/pull/694>`_).
Thanks `@Kronuz (Germán Méndez Bravo) <https://github.com/Kronuz>`_.
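A sketch mirroring the ``make_format_args`` example above, assuming the
``printf_args`` and ``vprintf`` names from ``fmt/printf.h``:
.. code:: c++
#include <fmt/printf.h>
// Prints a formatted error message using printf syntax.
void vreport_error(const char *format, fmt::printf_args args) {
  fmt::vprintf(format, args);
}
template <typename... Args>
void report_error(const char *format, const Args & ... args) {
  vreport_error(format, fmt::make_printf_args(args...));
}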
* Added prefix ``v`` to non-variadic functions taking ``format_args`` to
distinguish them from variadic ones:
.. code:: c++
std::string vformat(string_view format_str, format_args args);
template <typename... Args>
std::string format(string_view format_str, const Args & ... args);
* Added experimental support for formatting ranges, containers and tuple-like
types in ``fmt/ranges.h`` (`#735 <https://github.com/fmtlib/fmt/pull/735>`_):
.. code:: c++
#include <fmt/ranges.h>
std::vector<int> v = {1, 2, 3};
fmt::print("{}", v); // prints {1, 2, 3}
Thanks `@Remotion (Remo) <https://github.com/Remotion>`_.
* Implemented ``wchar_t`` date and time formatting
(`#712 <https://github.com/fmtlib/fmt/pull/712>`_):
.. code:: c++
#include <fmt/time.h>
std::time_t t = std::time(nullptr);
auto s = fmt::format(L"The date is {:%Y-%m-%d}.", *std::localtime(&t));
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
* Provided more wide string overloads
(`#724 <https://github.com/fmtlib/fmt/pull/724>`_).
Thanks `@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_.
* Switched from a custom null-terminated string view class to ``string_view``
in the format API and provided ``fmt::string_view`` which implements a subset
of ``std::string_view`` API for pre-C++17 systems.
* Added support for ``std::experimental::string_view``
(`#607 <https://github.com/fmtlib/fmt/pull/607>`_):
.. code:: c++
#include <fmt/core.h>
#include <experimental/string_view>
fmt::print("{}", std::experimental::string_view("foo"));
Thanks `@virgiliofornazin (Virgilio Alexandre Fornazin)
<https://github.com/virgiliofornazin>`__.
* Allowed mixing named and automatic arguments:
.. code:: c++
fmt::format("{} {two}", 1, fmt::arg("two", 2));
* Removed the write API in favor of the `format API
<https://fmt.dev/latest/api.html#format-api>`_ with compile-time handling of
format strings.
* Disallowed formatting of multibyte strings into a wide character target
(`#606 <https://github.com/fmtlib/fmt/pull/606>`_).
* Improved documentation
(`#515 <https://github.com/fmtlib/fmt/pull/515>`_,
`#614 <https://github.com/fmtlib/fmt/issues/614>`_,
`#617 <https://github.com/fmtlib/fmt/pull/617>`_,
`#661 <https://github.com/fmtlib/fmt/pull/661>`_,
`#680 <https://github.com/fmtlib/fmt/pull/680>`_).
Thanks `@ibell (Ian Bell) <https://github.com/ibell>`_,
`@mihaitodor (Mihai Todor) <https://github.com/mihaitodor>`_, and
`@johnthagen <https://github.com/johnthagen>`_.
* Implemented more efficient handling of large number of format arguments.
* Introduced an inline namespace for symbol versioning.
* Added debug postfix ``d`` to the ``fmt`` library name
(`#636 <https://github.com/fmtlib/fmt/issues/636>`_).
* Removed unnecessary ``fmt/`` prefix in includes
(`#397 <https://github.com/fmtlib/fmt/pull/397>`_).
Thanks `@chronoxor (Ivan Shynkarenka) <https://github.com/chronoxor>`_.
* Moved ``fmt/*.h`` to ``include/fmt/*.h`` to prevent irrelevant files and
directories appearing on the include search paths when fmt is used as a
subproject and moved source files to the ``src`` directory.
* Added qmake project file ``support/fmt.pro``
(`#641 <https://github.com/fmtlib/fmt/pull/641>`_).
Thanks `@cowo78 (Giuseppe Corbelli) <https://github.com/cowo78>`_.
* Added Gradle build file ``support/build.gradle``
(`#649 <https://github.com/fmtlib/fmt/pull/649>`_).
Thanks `@luncliff (Park DongHa) <https://github.com/luncliff>`_.
* Removed ``FMT_CPPFORMAT`` CMake option.
* Fixed a name conflict with the macro ``CHAR_WIDTH`` in glibc
(`#616 <https://github.com/fmtlib/fmt/pull/616>`_).
Thanks `@aroig (Abdó Roig-Maranges) <https://github.com/aroig>`_.
* Fixed handling of nested braces in ``fmt::join``
(`#638 <https://github.com/fmtlib/fmt/issues/638>`_).
* Added ``SOURCELINK_SUFFIX`` for compatibility with Sphinx 1.5
(`#497 <https://github.com/fmtlib/fmt/pull/497>`_).
Thanks `@ginggs (Graham Inggs) <https://github.com/ginggs>`_.
* Added a missing ``inline`` in the header-only mode
(`#626 <https://github.com/fmtlib/fmt/pull/626>`_).
Thanks `@aroig (Abdó Roig-Maranges) <https://github.com/aroig>`_.
* Fixed various compiler warnings
(`#640 <https://github.com/fmtlib/fmt/pull/640>`_,
`#656 <https://github.com/fmtlib/fmt/pull/656>`_,
`#679 <https://github.com/fmtlib/fmt/pull/679>`_,
`#681 <https://github.com/fmtlib/fmt/pull/681>`_,
`#705 <https://github.com/fmtlib/fmt/pull/705>`__,
`#715 <https://github.com/fmtlib/fmt/issues/715>`_,
`#717 <https://github.com/fmtlib/fmt/pull/717>`_,
`#720 <https://github.com/fmtlib/fmt/pull/720>`_,
`#723 <https://github.com/fmtlib/fmt/pull/723>`_,
`#726 <https://github.com/fmtlib/fmt/pull/726>`_,
`#730 <https://github.com/fmtlib/fmt/pull/730>`_,
`#739 <https://github.com/fmtlib/fmt/pull/739>`_).
Thanks `@peterbell10 <https://github.com/peterbell10>`_,
`@LarsGullik <https://github.com/LarsGullik>`_,
`@foonathan (Jonathan Müller) <https://github.com/foonathan>`_,
`@eliaskosunen (Elias Kosunen) <https://github.com/eliaskosunen>`_,
`@christianparpart (Christian Parpart) <https://github.com/christianparpart>`_,
`@DanielaE (Daniela Engert) <https://github.com/DanielaE>`_,
and `@mwinterb <https://github.com/mwinterb>`_.
* Worked around an MSVC bug and fixed several warnings
(`#653 <https://github.com/fmtlib/fmt/pull/653>`_).
Thanks `@alabuzhev (Alex Alabuzhev) <https://github.com/alabuzhev>`_.
* Worked around GCC bug 67371
(`#682 <https://github.com/fmtlib/fmt/issues/682>`_).
* Fixed compilation with ``-fno-exceptions``
(`#655 <https://github.com/fmtlib/fmt/pull/655>`_).
Thanks `@chenxiaolong (Andrew Gunnerson) <https://github.com/chenxiaolong>`_.
* Made ``constexpr remove_prefix`` gcc version check tighter
(`#648 <https://github.com/fmtlib/fmt/issues/648>`_).
* Renamed internal type enum constants to prevent collision with poorly written
C libraries (`#644 <https://github.com/fmtlib/fmt/issues/644>`_).
* Added detection of ``wostream operator<<``
(`#650 <https://github.com/fmtlib/fmt/issues/650>`_).
* Fixed compilation on OpenBSD
(`#660 <https://github.com/fmtlib/fmt/pull/660>`_).
Thanks `@hubslave <https://github.com/hubslave>`_.
* Fixed compilation on FreeBSD 12
(`#732 <https://github.com/fmtlib/fmt/pull/732>`_).
Thanks `@dankm <https://github.com/dankm>`_.
* Fixed compilation when there is a mismatch between ``-std`` options between
the library and user code
(`#664 <https://github.com/fmtlib/fmt/issues/664>`_).
* Fixed compilation with GCC 7 and ``-std=c++11``
(`#734 <https://github.com/fmtlib/fmt/issues/734>`_).
* Improved generated binary code on GCC 7 and older
(`#668 <https://github.com/fmtlib/fmt/issues/668>`_).
* Fixed handling of numeric alignment with no width
(`#675 <https://github.com/fmtlib/fmt/issues/675>`_).
* Fixed handling of empty strings in UTF8/16 converters
(`#676 <https://github.com/fmtlib/fmt/pull/676>`_).
Thanks `@vgalka-sl (Vasili Galka) <https://github.com/vgalka-sl>`_.
* Fixed formatting of an empty ``string_view``
(`#689 <https://github.com/fmtlib/fmt/issues/689>`_).
* Fixed detection of ``string_view`` on libc++
(`#686 <https://github.com/fmtlib/fmt/issues/686>`_).
* Fixed DLL issues (`#696 <https://github.com/fmtlib/fmt/pull/696>`_).
Thanks `@sebkoenig <https://github.com/sebkoenig>`_.
* Fixed compile checks for mixing narrow and wide strings
(`#690 <https://github.com/fmtlib/fmt/issues/690>`_).
* Disabled unsafe implicit conversion to ``std::string``
(`#729 <https://github.com/fmtlib/fmt/issues/729>`_).
* Fixed handling of reused format specs (as in ``fmt::join``) for pointers
(`#725 <https://github.com/fmtlib/fmt/pull/725>`_).
Thanks `@mwinterb <https://github.com/mwinterb>`_.
* Fixed installation of ``fmt/ranges.h``
(`#738 <https://github.com/fmtlib/fmt/pull/738>`_).
Thanks `@sv1990 <https://github.com/sv1990>`_.
4.1.0 - 2017-12-20
------------------
* Added ``fmt::to_wstring()`` in addition to ``fmt::to_string()``
(`#559 <https://github.com/fmtlib/fmt/pull/559>`_).
Thanks `@alabuzhev (Alex Alabuzhev) <https://github.com/alabuzhev>`_.
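For example:
.. code:: c++
std::wstring s = fmt::to_wstring(42);
// s == L"42"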
* Added support for C++17 ``std::string_view``
(`#571 <https://github.com/fmtlib/fmt/pull/571>`_ and
`#578 <https://github.com/fmtlib/fmt/pull/578>`_).
Thanks `@thelostt (Mário Feroldi) <https://github.com/thelostt>`_ and
`@mwinterb <https://github.com/mwinterb>`_.
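A minimal sketch (assuming a C++17 compiler so that ``std::string_view``
support is detected):
.. code:: c++
#include "fmt/format.h"
#include <string_view>
std::string s = fmt::format("{}", std::string_view("foo"));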
* Enabled stream exceptions to catch errors
(`#581 <https://github.com/fmtlib/fmt/issues/581>`_).
Thanks `@crusader-mike <https://github.com/crusader-mike>`_.
* Allowed formatting of class hierarchies with ``fmt::format_arg()``
(`#547 <https://github.com/fmtlib/fmt/pull/547>`_).
Thanks `@rollbear (Björn Fahller) <https://github.com/rollbear>`_.
* Removed limitations on character types
(`#563 <https://github.com/fmtlib/fmt/pull/563>`_).
Thanks `@Yelnats321 (Elnar Dakeshov) <https://github.com/Yelnats321>`_.
* Conditionally enabled use of ``std::allocator_traits``
(`#583 <https://github.com/fmtlib/fmt/pull/583>`_).
Thanks `@mwinterb <https://github.com/mwinterb>`_.
* Added support for ``const`` variadic member function emulation with
``FMT_VARIADIC_CONST`` (`#591 <https://github.com/fmtlib/fmt/pull/591>`_).
Thanks `@ludekvodicka (Ludek Vodicka) <https://github.com/ludekvodicka>`_.
* Various bugfixes: bad overflow check, unsupported implicit type conversion
when determining formatting function, test segfaults
(`#551 <https://github.com/fmtlib/fmt/issues/551>`_), ill-formed macros
(`#542 <https://github.com/fmtlib/fmt/pull/542>`_) and ambiguous overloads
(`#580 <https://github.com/fmtlib/fmt/issues/580>`_).
Thanks `@xylosper (Byoung-young Lee) <https://github.com/xylosper>`_.
* Prevented warnings on MSVC (`#605 <https://github.com/fmtlib/fmt/pull/605>`_,
`#602 <https://github.com/fmtlib/fmt/pull/602>`_, and
`#545 <https://github.com/fmtlib/fmt/pull/545>`_),
clang (`#582 <https://github.com/fmtlib/fmt/pull/582>`_),
GCC (`#573 <https://github.com/fmtlib/fmt/issues/573>`_),
various conversion warnings (`#609 <https://github.com/fmtlib/fmt/pull/609>`_,
`#567 <https://github.com/fmtlib/fmt/pull/567>`_,
`#553 <https://github.com/fmtlib/fmt/pull/553>`_), and added ``override`` and
``[[noreturn]]`` (`#549 <https://github.com/fmtlib/fmt/pull/549>`_ and
`#555 <https://github.com/fmtlib/fmt/issues/555>`_).
Thanks `@alabuzhev (Alex Alabuzhev) <https://github.com/alabuzhev>`_,
`@virgiliofornazin (Virgilio Alexandre Fornazin)
<https://github.com/virgiliofornazin>`_,
`@alexanderbock (Alexander Bock) <https://github.com/alexanderbock>`_,
`@yumetodo <https://github.com/yumetodo>`_,
`@VaderY (Császár Mátyás) <https://github.com/VaderY>`_,
`@jpcima (JP Cimalando) <https://github.com/jpcima>`_,
`@thelostt (Mário Feroldi) <https://github.com/thelostt>`_, and
`@Manu343726 (Manu Sánchez) <https://github.com/Manu343726>`_.
* Improved CMake: Used ``GNUInstallDirs`` to set installation location
(`#610 <https://github.com/fmtlib/fmt/pull/610>`_) and fixed warnings
(`#536 <https://github.com/fmtlib/fmt/pull/536>`_ and
`#556 <https://github.com/fmtlib/fmt/pull/556>`_).
Thanks `@mikecrowe (Mike Crowe) <https://github.com/mikecrowe>`_,
`@evgen231 <https://github.com/evgen231>`_ and
`@henryiii (Henry Schreiner) <https://github.com/henryiii>`_.
4.0.0 - 2017-06-27
------------------
* Removed old compatibility headers ``cppformat/*.h`` and CMake options
(`#527 <https://github.com/fmtlib/fmt/pull/527>`_).
Thanks `@maddinat0r (Alex Martin) <https://github.com/maddinat0r>`_.
* Added ``string.h`` containing ``fmt::to_string()`` as alternative to
``std::to_string()`` as well as other string writer functionality
(`#326 <https://github.com/fmtlib/fmt/issues/326>`_ and
`#441 <https://github.com/fmtlib/fmt/pull/441>`_):
.. code:: c++
#include "fmt/string.h"
std::string answer = fmt::to_string(42);
Thanks to `@glebov-andrey (Andrey Glebov)
<https://github.com/glebov-andrey>`_.
* Moved ``fmt::printf()`` to new ``printf.h`` header and allowed ``%s`` as
generic specifier (`#453 <https://github.com/fmtlib/fmt/pull/453>`_),
made ``%.f`` more conformant to regular ``printf()``
(`#490 <https://github.com/fmtlib/fmt/pull/490>`_), added custom writer
support (`#476 <https://github.com/fmtlib/fmt/issues/476>`_) and implemented
missing custom argument formatting
(`#339 <https://github.com/fmtlib/fmt/pull/339>`_ and
`#340 <https://github.com/fmtlib/fmt/pull/340>`_):
.. code:: c++
#include "fmt/printf.h"
// %s format specifier can be used with any argument type.
fmt::printf("%s", 42);
Thanks `@mojoBrendan <https://github.com/mojoBrendan>`_,
`@manylegged (Arthur Danskin) <https://github.com/manylegged>`_ and
`@spacemoose (Glen Stark) <https://github.com/spacemoose>`_.
See also `#360 <https://github.com/fmtlib/fmt/issues/360>`_,
`#335 <https://github.com/fmtlib/fmt/issues/335>`_ and
`#331 <https://github.com/fmtlib/fmt/issues/331>`_.
* Added ``container.h`` containing a ``BasicContainerWriter``
to write to containers like ``std::vector``
(`#450 <https://github.com/fmtlib/fmt/pull/450>`_).
Thanks `@polyvertex (Jean-Charles Lefebvre) <https://github.com/polyvertex>`_.
* Added ``fmt::join()`` function that takes a range and formats
its elements separated by a given string
(`#466 <https://github.com/fmtlib/fmt/pull/466>`_):
.. code:: c++
#include "fmt/format.h"
std::vector<double> v = {1.2, 3.4, 5.6};
// Prints "(+01.20, +03.40, +05.60)".
fmt::print("({:+06.2f})", fmt::join(v.begin(), v.end(), ", "));
Thanks `@olivier80 <https://github.com/olivier80>`_.
* Added support for custom formatting specifications to simplify customization
of built-in formatting (`#444 <https://github.com/fmtlib/fmt/pull/444>`_).
Thanks `@polyvertex (Jean-Charles Lefebvre) <https://github.com/polyvertex>`_.
See also `#439 <https://github.com/fmtlib/fmt/issues/439>`_.
* Added ``fmt::format_system_error()`` for error code formatting
(`#323 <https://github.com/fmtlib/fmt/issues/323>`_ and
`#526 <https://github.com/fmtlib/fmt/pull/526>`_).
Thanks `@maddinat0r (Alex Martin) <https://github.com/maddinat0r>`_.
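A sketch assuming the ``Writer``-based signature of this release:
.. code:: c++
#include "fmt/format.h"
#include <cerrno>
fmt::MemoryWriter out;
// Appends "cannot open file: <system error description>" to out.
fmt::format_system_error(out, errno, "cannot open file");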
* Added thread-safe ``fmt::localtime()`` and ``fmt::gmtime()``
as replacement for the standard version to ``time.h``
(`#396 <https://github.com/fmtlib/fmt/pull/396>`_).
Thanks `@codicodi <https://github.com/codicodi>`_.
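A minimal sketch of the thread-safe replacements:
.. code:: c++
#include "fmt/time.h"
#include <ctime>
std::time_t t = std::time(nullptr);
std::tm tm = fmt::localtime(t); // returns std::tm by value instead of a pointer
fmt::print("The date is {:%Y-%m-%d}.", tm);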
* Internal improvements to ``NamedArg`` and ``ArgLists``
(`#389 <https://github.com/fmtlib/fmt/pull/389>`_ and
`#390 <https://github.com/fmtlib/fmt/pull/390>`_).
Thanks `@chronoxor <https://github.com/chronoxor>`_.
* Fixed crash due to bug in ``FormatBuf``
(`#493 <https://github.com/fmtlib/fmt/pull/493>`_).
Thanks `@effzeh <https://github.com/effzeh>`_. See also
`#480 <https://github.com/fmtlib/fmt/issues/480>`_ and
`#491 <https://github.com/fmtlib/fmt/issues/491>`_.
* Fixed handling of wide strings in ``fmt::StringWriter``.
* Improved compiler error messages
(`#357 <https://github.com/fmtlib/fmt/issues/357>`_).
* Fixed various warnings and issues with various compilers
(`#494 <https://github.com/fmtlib/fmt/pull/494>`_,
`#499 <https://github.com/fmtlib/fmt/pull/499>`_,
`#483 <https://github.com/fmtlib/fmt/pull/483>`_,
`#485 <https://github.com/fmtlib/fmt/pull/485>`_,
`#482 <https://github.com/fmtlib/fmt/pull/482>`_,
`#475 <https://github.com/fmtlib/fmt/pull/475>`_,
`#473 <https://github.com/fmtlib/fmt/pull/473>`_ and
`#414 <https://github.com/fmtlib/fmt/pull/414>`_).
Thanks `@chronoxor <https://github.com/chronoxor>`_,
`@zhaohuaxishi <https://github.com/zhaohuaxishi>`_,
`@pkestene (Pierre Kestener) <https://github.com/pkestene>`_,
`@dschmidt (Dominik Schmidt) <https://github.com/dschmidt>`_ and
`@0x414c (Alexey Gorishny) <https://github.com/0x414c>`_.
* Improved CMake: targets are now namespaced
(`#511 <https://github.com/fmtlib/fmt/pull/511>`_ and
`#513 <https://github.com/fmtlib/fmt/pull/513>`_), supported header-only
``printf.h`` (`#354 <https://github.com/fmtlib/fmt/pull/354>`_), fixed issue
with minimal supported library subset
(`#418 <https://github.com/fmtlib/fmt/issues/418>`_,
`#419 <https://github.com/fmtlib/fmt/pull/419>`_ and
`#420 <https://github.com/fmtlib/fmt/pull/420>`_).
Thanks `@bjoernthiel (Bjoern Thiel) <https://github.com/bjoernthiel>`_,
`@niosHD (Mario Werner) <https://github.com/niosHD>`_,
`@LogicalKnight (Sean LK) <https://github.com/LogicalKnight>`_ and
`@alabuzhev (Alex Alabuzhev) <https://github.com/alabuzhev>`_.
* Improved documentation. Thanks to
`@pwm1234 (Phil) <https://github.com/pwm1234>`_ for
`#393 <https://github.com/fmtlib/fmt/pull/393>`_.
3.0.2 - 2017-06-14
------------------
* Added ``FMT_VERSION`` macro
(`#411 <https://github.com/fmtlib/fmt/issues/411>`_).
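For example, assuming the usual ``major * 10000 + minor * 100 + patch``
encoding:
.. code:: c++
#include "fmt/format.h"
#if FMT_VERSION >= 30002
// code that relies on features introduced in 3.0.2 or later
#endif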
* Used ``FMT_NULL`` instead of literal ``0``
(`#409 <https://github.com/fmtlib/fmt/pull/409>`_).
Thanks `@alabuzhev (Alex Alabuzhev) <https://github.com/alabuzhev>`_.
* Added extern templates for ``format_float``
(`#413 <https://github.com/fmtlib/fmt/issues/413>`_).
* Fixed implicit conversion issue
(`#507 <https://github.com/fmtlib/fmt/issues/507>`_).
* Fixed signbit detection (`#423 <https://github.com/fmtlib/fmt/issues/423>`_).
* Fixed naming collision (`#425 <https://github.com/fmtlib/fmt/issues/425>`_).
* Fixed missing intrinsic for C++/CLI
(`#457 <https://github.com/fmtlib/fmt/pull/457>`_).
Thanks `@calumr (Calum Robinson) <https://github.com/calumr>`_.
* Fixed Android detection (`#458 <https://github.com/fmtlib/fmt/pull/458>`_).
Thanks `@Gachapen (Magnus Bjerke Vik) <https://github.com/Gachapen>`_.
* Use lean ``windows.h`` if not in header-only mode
(`#503 <https://github.com/fmtlib/fmt/pull/503>`_).
Thanks `@Quentin01 (Quentin Buathier) <https://github.com/Quentin01>`_.
* Fixed issue with CMake exporting C++11 flag
(`#455 <https://github.com/fmtlib/fmt/pull/455>`_).
Thanks `@EricWF (Eric) <https://github.com/EricWF>`_.
* Fixed issue with nvcc and MSVC compiler bug and MinGW
(`#505 <https://github.com/fmtlib/fmt/issues/505>`_).
* Fixed DLL issues (`#469 <https://github.com/fmtlib/fmt/pull/469>`_ and
`#502 <https://github.com/fmtlib/fmt/pull/502>`_).
Thanks `@richardeakin (Richard Eakin) <https://github.com/richardeakin>`_ and
`@AndreasSchoenle (Andreas Schönle) <https://github.com/AndreasSchoenle>`_.
* Fixed test compilation under FreeBSD
(`#433 <https://github.com/fmtlib/fmt/issues/433>`_).
* Fixed various warnings (`#403 <https://github.com/fmtlib/fmt/pull/403>`_,
`#410 <https://github.com/fmtlib/fmt/pull/410>`_ and
`#510 <https://github.com/fmtlib/fmt/pull/510>`_).
Thanks `@Lectem <https://github.com/Lectem>`_,
`@chenhayat (Chen Hayat) <https://github.com/chenhayat>`_ and
`@trozen <https://github.com/trozen>`_.
* Worked around a broken ``__builtin_clz`` in clang with MS codegen
(`#519 <https://github.com/fmtlib/fmt/issues/519>`_).
* Removed redundant include
(`#479 <https://github.com/fmtlib/fmt/issues/479>`_).
* Fixed documentation issues.
3.0.1 - 2016-11-01
------------------
* Fixed handling of thousands separator
(`#353 <https://github.com/fmtlib/fmt/issues/353>`_).
* Fixed handling of ``unsigned char`` strings
(`#373 <https://github.com/fmtlib/fmt/issues/373>`_).
* Corrected buffer growth when formatting time
(`#367 <https://github.com/fmtlib/fmt/issues/367>`_).
* Removed warnings under MSVC and clang
(`#318 <https://github.com/fmtlib/fmt/issues/318>`_,
`#250 <https://github.com/fmtlib/fmt/issues/250>`_, also merged
`#385 <https://github.com/fmtlib/fmt/pull/385>`_ and
`#361 <https://github.com/fmtlib/fmt/pull/361>`_).
Thanks `@jcelerier (Jean-Michaël Celerier) <https://github.com/jcelerier>`_
and `@nmoehrle (Nils Moehrle) <https://github.com/nmoehrle>`_.
* Fixed compilation issues under Android
(`#327 <https://github.com/fmtlib/fmt/pull/327>`_,
`#345 <https://github.com/fmtlib/fmt/issues/345>`_ and
`#381 <https://github.com/fmtlib/fmt/pull/381>`_),
FreeBSD (`#358 <https://github.com/fmtlib/fmt/pull/358>`_),
Cygwin (`#388 <https://github.com/fmtlib/fmt/issues/388>`_),
MinGW (`#355 <https://github.com/fmtlib/fmt/issues/355>`_) as well as other
issues (`#350 <https://github.com/fmtlib/fmt/issues/350>`_,
`#366 <https://github.com/fmtlib/fmt/issues/366>`_,
`#348 <https://github.com/fmtlib/fmt/pull/348>`_,
`#402 <https://github.com/fmtlib/fmt/pull/402>`_,
`#405 <https://github.com/fmtlib/fmt/pull/405>`_).
Thanks to `@dpantele (Dmitry) <https://github.com/dpantele>`_,
`@hghwng (Hugh Wang) <https://github.com/hghwng>`_,
`@arvedarved (Tilman Keskinöz) <https://github.com/arvedarved>`_,
`@LogicalKnight (Sean) <https://github.com/LogicalKnight>`_ and
`@JanHellwig (Jan Hellwig) <https://github.com/janhellwig>`_.
* Fixed some documentation issues and extended specification
(`#320 <https://github.com/fmtlib/fmt/issues/320>`_,
`#333 <https://github.com/fmtlib/fmt/pull/333>`_,
`#347 <https://github.com/fmtlib/fmt/issues/347>`_,
`#362 <https://github.com/fmtlib/fmt/pull/362>`_).
Thanks to `@smellman (Taro Matsuzawa aka. btm)
<https://github.com/smellman>`_.
3.0.0 - 2016-05-07
------------------
* The project has been renamed from C++ Format (cppformat) to fmt for
consistency with the used namespace and macro prefix
(`#307 <https://github.com/fmtlib/fmt/issues/307>`_).
Library headers are now located in the ``fmt`` directory:
.. code:: c++
#include "fmt/format.h"
Including ``format.h`` from the ``cppformat`` directory is deprecated
but works via a proxy header which will be removed in the next major version.
The documentation is now available at https://fmt.dev.
* Added support for `strftime <http://en.cppreference.com/w/cpp/chrono/c/strftime>`_-like
`date and time formatting <https://fmt.dev/3.0.0/api.html#date-and-time-formatting>`_
(`#283 <https://github.com/fmtlib/fmt/issues/283>`_):
.. code:: c++
#include "fmt/time.h"
std::time_t t = std::time(nullptr);
// Prints "The date is 2016-04-29." (with the current date)
fmt::print("The date is {:%Y-%m-%d}.", *std::localtime(&t));
* ``std::ostream`` support including formatting of user-defined types that provide
overloaded ``operator<<`` has been moved to ``fmt/ostream.h``:
.. code:: c++
#include "fmt/ostream.h"
class Date {
int year_, month_, day_;
public:
Date(int year, int month, int day) : year_(year), month_(month), day_(day) {}
friend std::ostream &operator<<(std::ostream &os, const Date &d) {
return os << d.year_ << '-' << d.month_ << '-' << d.day_;
}
};
std::string s = fmt::format("The date is {}", Date(2012, 12, 9));
// s == "The date is 2012-12-9"
* Added support for `custom argument formatters
<https://fmt.dev/3.0.0/api.html#argument-formatters>`_
(`#235 <https://github.com/fmtlib/fmt/issues/235>`_).
* Added support for locale-specific integer formatting with the ``n`` specifier
(`#305 <https://github.com/fmtlib/fmt/issues/305>`_):
.. code:: c++
std::setlocale(LC_ALL, "en_US.utf8");
fmt::print("cppformat: {:n}\n", 1234567); // prints 1,234,567
* Sign is now preserved when formatting an integer with an incorrect ``printf``
format specifier (`#265 <https://github.com/fmtlib/fmt/issues/265>`_):
.. code:: c++
fmt::printf("%lld", -42); // prints -42
Note that it would be an undefined behavior in ``std::printf``.
* Length modifiers such as ``ll`` are now optional in printf formatting
functions and the correct type is determined automatically
(`#255 <https://github.com/fmtlib/fmt/issues/255>`_):
.. code:: c++
fmt::printf("%d", std::numeric_limits<long long>::max());
Note that it would be an undefined behavior in ``std::printf``.
* Added initial support for custom formatters
(`#231 <https://github.com/fmtlib/fmt/issues/231>`_).
* Fixed detection of user-defined literal support on Intel C++ compiler
(`#311 <https://github.com/fmtlib/fmt/issues/311>`_,
`#312 <https://github.com/fmtlib/fmt/pull/312>`_).
Thanks to `@dean0x7d (Dean Moldovan) <https://github.com/dean0x7d>`_ and
`@speth (Ray Speth) <https://github.com/speth>`_.
* Reduced compile time
(`#243 <https://github.com/fmtlib/fmt/pull/243>`_,
`#249 <https://github.com/fmtlib/fmt/pull/249>`_,
`#317 <https://github.com/fmtlib/fmt/issues/317>`_):
.. image:: https://cloud.githubusercontent.com/assets/4831417/11614060/
b9e826d2-9c36-11e5-8666-d4131bf503ef.png
.. image:: https://cloud.githubusercontent.com/assets/4831417/11614080/
6ac903cc-9c37-11e5-8165-26df6efae364.png
Thanks to `@dean0x7d (Dean Moldovan) <https://github.com/dean0x7d>`_.
* Compile test fixes (`#313 <https://github.com/fmtlib/fmt/pull/313>`_).
Thanks to `@dean0x7d (Dean Moldovan) <https://github.com/dean0x7d>`_.
* Documentation fixes (`#239 <https://github.com/fmtlib/fmt/pull/239>`_,
`#248 <https://github.com/fmtlib/fmt/issues/248>`_,
`#252 <https://github.com/fmtlib/fmt/issues/252>`_,
`#258 <https://github.com/fmtlib/fmt/pull/258>`_,
`#260 <https://github.com/fmtlib/fmt/issues/260>`_,
`#301 <https://github.com/fmtlib/fmt/issues/301>`_,
`#309 <https://github.com/fmtlib/fmt/pull/309>`_).
Thanks to `@ReadmeCritic <https://github.com/ReadmeCritic>`_
`@Gachapen (Magnus Bjerke Vik) <https://github.com/Gachapen>`_ and
`@jwilk (Jakub Wilk) <https://github.com/jwilk>`_.
* Fixed compiler and sanitizer warnings
(`#244 <https://github.com/fmtlib/fmt/issues/244>`_,
`#256 <https://github.com/fmtlib/fmt/pull/256>`_,
`#259 <https://github.com/fmtlib/fmt/pull/259>`_,
`#263 <https://github.com/fmtlib/fmt/issues/263>`_,
`#274 <https://github.com/fmtlib/fmt/issues/274>`_,
`#277 <https://github.com/fmtlib/fmt/pull/277>`_,
`#286 <https://github.com/fmtlib/fmt/pull/286>`_,
`#291 <https://github.com/fmtlib/fmt/issues/291>`_,
`#296 <https://github.com/fmtlib/fmt/issues/296>`_,
`#308 <https://github.com/fmtlib/fmt/issues/308>`_).
Thanks to `@mwinterb <https://github.com/mwinterb>`_,
`@pweiskircher (Patrik Weiskircher) <https://github.com/pweiskircher>`_,
`@Naios <https://github.com/Naios>`_.
* Improved compatibility with Windows Store apps
(`#280 <https://github.com/fmtlib/fmt/issues/280>`_,
`#285 <https://github.com/fmtlib/fmt/pull/285>`_).
Thanks to `@mwinterb <https://github.com/mwinterb>`_.
* Added tests of compatibility with older C++ standards
(`#273 <https://github.com/fmtlib/fmt/pull/273>`_).
Thanks to `@niosHD <https://github.com/niosHD>`_.
* Fixed Android build (`#271 <https://github.com/fmtlib/fmt/pull/271>`_).
Thanks to `@newnon <https://github.com/newnon>`_.
* Changed ``ArgMap`` to be backed by a vector instead of a map.
(`#261 <https://github.com/fmtlib/fmt/issues/261>`_,
`#262 <https://github.com/fmtlib/fmt/pull/262>`_).
Thanks to `@mwinterb <https://github.com/mwinterb>`_.
* Added ``fprintf`` overload that writes to a ``std::ostream``
(`#251 <https://github.com/fmtlib/fmt/pull/251>`_).
Thanks to `@nickhutchinson (Nicholas Hutchinson) <https://github.com/nickhutchinson>`_.
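A sketch, assuming the overload is provided by ``fmt/ostream.h``:
.. code:: c++
#include "fmt/ostream.h"
#include <iostream>
fmt::fprintf(std::cerr, "Don't %s!\n", "panic");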
* Export symbols when building a Windows DLL
(`#245 <https://github.com/fmtlib/fmt/pull/245>`_).
Thanks to `@macdems (Maciek Dems) <https://github.com/macdems>`_.
* Fixed compilation on Cygwin (`#304 <https://github.com/fmtlib/fmt/issues/304>`_).
* Implemented a workaround for a bug in Apple LLVM version 4.2 of clang
(`#276 <https://github.com/fmtlib/fmt/issues/276>`_).
* Implemented a workaround for Google Test bug
`#705 <https://github.com/google/googletest/issues/705>`_ on gcc 6
(`#268 <https://github.com/fmtlib/fmt/issues/268>`_).
Thanks to `octoploid <https://github.com/octoploid>`_.
* Removed Biicode support because the latter has been discontinued.
2.1.1 - 2016-04-11
------------------
* The install location for generated CMake files is now configurable via
the ``FMT_CMAKE_DIR`` CMake variable
(`#299 <https://github.com/fmtlib/fmt/pull/299>`_).
Thanks to `@niosHD <https://github.com/niosHD>`_.
* Documentation fixes (`#252 <https://github.com/fmtlib/fmt/issues/252>`_).
2.1.0 - 2016-03-21
------------------
* Project layout and build system improvements
(`#267 <https://github.com/fmtlib/fmt/pull/267>`_):
* The code have been moved to the ``cppformat`` directory.
Including ``format.h`` from the top-level directory is deprecated
but works via a proxy header which will be removed in the next
major version.
* C++ Format CMake targets now have proper interface definitions.
* Installed version of the library now supports the header-only
configuration.
* Targets ``doc``, ``install``, and ``test`` are now disabled if C++ Format
is included as a CMake subproject. They can be enabled by setting
``FMT_DOC``, ``FMT_INSTALL``, and ``FMT_TEST`` in the parent project.
Thanks to `@niosHD <https://github.com/niosHD>`_.
2.0.1 - 2016-03-13
------------------
* Improved CMake find and package support
(`#264 <https://github.com/fmtlib/fmt/issues/264>`_).
Thanks to `@niosHD <https://github.com/niosHD>`_.
* Fix compile error with Android NDK and mingw32
(`#241 <https://github.com/fmtlib/fmt/issues/241>`_).
Thanks to `@Gachapen (Magnus Bjerke Vik) <https://github.com/Gachapen>`_.
* Documentation fixes
(`#248 <https://github.com/fmtlib/fmt/issues/248>`_,
`#260 <https://github.com/fmtlib/fmt/issues/260>`_).
2.0.0 - 2015-12-01
------------------
General
~~~~~~~
* [Breaking] Named arguments
(`#169 <https://github.com/fmtlib/fmt/pull/169>`_,
`#173 <https://github.com/fmtlib/fmt/pull/173>`_,
`#174 <https://github.com/fmtlib/fmt/pull/174>`_):
.. code:: c++
fmt::print("The answer is {answer}.", fmt::arg("answer", 42));
Thanks to `@jamboree <https://github.com/jamboree>`_.
* [Experimental] User-defined literals for format and named arguments
(`#204 <https://github.com/fmtlib/fmt/pull/204>`_,
`#206 <https://github.com/fmtlib/fmt/pull/206>`_,
`#207 <https://github.com/fmtlib/fmt/pull/207>`_):
.. code:: c++
using namespace fmt::literals;
fmt::print("The answer is {answer}.", "answer"_a=42);
Thanks to `@dean0x7d (Dean Moldovan) <https://github.com/dean0x7d>`_.
* [Breaking] Formatting of more than 16 arguments is now supported when using
variadic templates
(`#141 <https://github.com/fmtlib/fmt/issues/141>`_).
Thanks to `@Shauren <https://github.com/Shauren>`_.
* Runtime width specification
(`#168 <https://github.com/fmtlib/fmt/pull/168>`_):
.. code:: c++
fmt::format("{0:{1}}", 42, 5); // gives " 42"
Thanks to `@jamboree <https://github.com/jamboree>`_.
* [Breaking] Enums are now formatted with an overloaded ``std::ostream`` insertion
operator (``operator<<``) if available
(`#232 <https://github.com/fmtlib/fmt/issues/232>`_).
* [Breaking] Changed default ``bool`` format to textual, "true" or "false"
(`#170 <https://github.com/fmtlib/fmt/issues/170>`_):
.. code:: c++
fmt::print("{}", true); // prints "true"
To print ``bool`` as a number use numeric format specifier such as ``d``:
.. code:: c++
fmt::print("{:d}", true); // prints "1"
* ``fmt::printf`` and ``fmt::sprintf`` now support formatting of ``bool`` with the
``%s`` specifier giving textual output, "true" or "false"
(`#223 <https://github.com/fmtlib/fmt/pull/223>`_):
.. code:: c++
fmt::printf("%s", true); // prints "true"
Thanks to `@LarsGullik <https://github.com/LarsGullik>`_.
* [Breaking] ``signed char`` and ``unsigned char`` are now formatted as integers by default
(`#217 <https://github.com/fmtlib/fmt/pull/217>`_).
* [Breaking] Pointers to C strings can now be formatted with the ``p`` specifier
(`#223 <https://github.com/fmtlib/fmt/pull/223>`_):
.. code:: c++
fmt::print("{:p}", "test"); // prints pointer value
Thanks to `@LarsGullik <https://github.com/LarsGullik>`_.
* [Breaking] ``fmt::printf`` and ``fmt::sprintf`` now print null pointers as ``(nil)``
and null strings as ``(null)`` for consistency with glibc
(`#226 <https://github.com/fmtlib/fmt/pull/226>`_).
Thanks to `@LarsGullik <https://github.com/LarsGullik>`_.
* [Breaking] ``fmt::(s)printf`` now supports formatting of objects of user-defined types
that provide an overloaded ``std::ostream`` insertion operator (``operator<<``)
(`#201 <https://github.com/fmtlib/fmt/issues/201>`_):
.. code:: c++
fmt::printf("The date is %s", Date(2012, 12, 9));
* [Breaking] The ``Buffer`` template is now part of the public API and can be used
to implement custom memory buffers
(`#140 <https://github.com/fmtlib/fmt/issues/140>`_).
Thanks to `@polyvertex (Jean-Charles Lefebvre) <https://github.com/polyvertex>`_.
* [Breaking] Improved compatibility between ``BasicStringRef`` and
`std::experimental::basic_string_view
<http://en.cppreference.com/w/cpp/experimental/basic_string_view>`_
(`#100 <https://github.com/fmtlib/fmt/issues/100>`_,
`#159 <https://github.com/fmtlib/fmt/issues/159>`_,
`#183 <https://github.com/fmtlib/fmt/issues/183>`_):
- Comparison operators now compare string content, not pointers
- ``BasicStringRef::c_str`` replaced by ``BasicStringRef::data``
- ``BasicStringRef`` is no longer assumed to be null-terminated
References to null-terminated strings are now represented by a new class,
``BasicCStringRef``.
* Dependency on pthreads introduced by Google Test is now optional
(`#185 <https://github.com/fmtlib/fmt/issues/185>`_).
* New CMake options ``FMT_DOC``, ``FMT_INSTALL`` and ``FMT_TEST`` to control
generation of ``doc``, ``install`` and ``test`` targets respectively, on by default
(`#197 <https://github.com/fmtlib/fmt/issues/197>`_,
`#198 <https://github.com/fmtlib/fmt/issues/198>`_,
`#200 <https://github.com/fmtlib/fmt/issues/200>`_).
Thanks to `@maddinat0r (Alex Martin) <https://github.com/maddinat0r>`_.
* ``noexcept`` is now used when compiling with MSVC2015
(`#215 <https://github.com/fmtlib/fmt/pull/215>`_).
Thanks to `@dmkrepo (Dmitriy) <https://github.com/dmkrepo>`_.
* Added an option to disable use of ``windows.h`` when ``FMT_USE_WINDOWS_H``
is defined as 0 before including ``format.h``
(`#171 <https://github.com/fmtlib/fmt/issues/171>`_).
Thanks to `@alfps (Alf P. Steinbach) <https://github.com/alfps>`_.
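For example:
.. code:: c++
#define FMT_USE_WINDOWS_H 0
#include "format.h"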
* [Breaking] ``windows.h`` is now included with ``NOMINMAX`` unless
``FMT_WIN_MINMAX`` is defined. This is done to prevent breaking code using
``std::min`` and ``std::max`` and only affects the header-only configuration
(`#152 <https://github.com/fmtlib/fmt/issues/152>`_,
`#153 <https://github.com/fmtlib/fmt/pull/153>`_,
`#154 <https://github.com/fmtlib/fmt/pull/154>`_).
Thanks to `@DevO2012 <https://github.com/DevO2012>`_.
* Improved support for custom character types
(`#171 <https://github.com/fmtlib/fmt/issues/171>`_).
Thanks to `@alfps (Alf P. Steinbach) <https://github.com/alfps>`_.
* Added an option to disable use of IOStreams when ``FMT_USE_IOSTREAMS``
is defined as 0 before including ``format.h``
(`#205 <https://github.com/fmtlib/fmt/issues/205>`_,
`#208 <https://github.com/fmtlib/fmt/pull/208>`_).
Thanks to `@JodiTheTigger <https://github.com/JodiTheTigger>`_.
* Improved detection of ``isnan``, ``isinf`` and ``signbit``.
Optimization
~~~~~~~~~~~~
* Made formatting of user-defined types more efficient with a custom stream buffer
(`#92 <https://github.com/fmtlib/fmt/issues/92>`_,
`#230 <https://github.com/fmtlib/fmt/pull/230>`_).
Thanks to `@NotImplemented <https://github.com/NotImplemented>`_.
* Further improved performance of ``fmt::Writer`` on integer formatting
and fixed a minor regression. Now it is ~7% faster than ``karma::generate``
on Karma's benchmark
(`#186 <https://github.com/fmtlib/fmt/issues/186>`_).
* [Breaking] Reduced `compiled code size
<https://github.com/fmtlib/fmt#compile-time-and-code-bloat>`_
(`#143 <https://github.com/fmtlib/fmt/issues/143>`_,
`#149 <https://github.com/fmtlib/fmt/pull/149>`_).
Distribution
~~~~~~~~~~~~
* [Breaking] Headers are now installed in
``${CMAKE_INSTALL_PREFIX}/include/cppformat``
(`#178 <https://github.com/fmtlib/fmt/issues/178>`_).
Thanks to `@jackyf (Eugene V. Lyubimkin) <https://github.com/jackyf>`_.
* [Breaking] Changed the library name from ``format`` to ``cppformat``
for consistency with the project name and to avoid potential conflicts
(`#178 <https://github.com/fmtlib/fmt/issues/178>`_).
Thanks to `@jackyf (Eugene V. Lyubimkin) <https://github.com/jackyf>`_.
* C++ Format is now available in `Debian <https://www.debian.org/>`_ GNU/Linux
(`stretch <https://packages.debian.org/source/stretch/cppformat>`_,
`sid <https://packages.debian.org/source/sid/cppformat>`_) and
derived distributions such as
`Ubuntu <https://launchpad.net/ubuntu/+source/cppformat>`_ 15.10 and later
(`#155 <https://github.com/fmtlib/fmt/issues/155>`_)::
$ sudo apt-get install libcppformat1-dev
Thanks to `@jackyf (Eugene V. Lyubimkin) <https://github.com/jackyf>`_.
* `Packages for Fedora and RHEL <https://admin.fedoraproject.org/pkgdb/package/cppformat/>`_
are now available. Thanks to Dave Johansen.
* C++ Format can now be installed via `Homebrew <http://brew.sh/>`_ on OS X
(`#157 <https://github.com/fmtlib/fmt/issues/157>`_)::
$ brew install cppformat
Thanks to `@ortho <https://github.com/ortho>`_, Anatoliy Bulukin.
Documentation
~~~~~~~~~~~~~
* Migrated from ReadTheDocs to GitHub Pages for better responsiveness
and reliability
(`#128 <https://github.com/fmtlib/fmt/issues/128>`_).
New documentation address is http://cppformat.github.io/.
* Added `Building the documentation
<https://fmt.dev/2.0.0/usage.html#building-the-documentation>`_
section to the documentation.
* Documentation build script is now compatible with Python 3 and newer pip versions.
(`#189 <https://github.com/fmtlib/fmt/pull/189>`_,
`#209 <https://github.com/fmtlib/fmt/issues/209>`_).
Thanks to `@JodiTheTigger <https://github.com/JodiTheTigger>`_ and
`@xentec <https://github.com/xentec>`_.
* Documentation fixes and improvements
(`#36 <https://github.com/fmtlib/fmt/issues/36>`_,
`#75 <https://github.com/fmtlib/fmt/issues/75>`_,
`#125 <https://github.com/fmtlib/fmt/issues/125>`_,
`#160 <https://github.com/fmtlib/fmt/pull/160>`_,
`#161 <https://github.com/fmtlib/fmt/pull/161>`_,
`#162 <https://github.com/fmtlib/fmt/issues/162>`_,
`#165 <https://github.com/fmtlib/fmt/issues/165>`_,
`#210 <https://github.com/fmtlib/fmt/issues/210>`_).
Thanks to `@syohex (Syohei YOSHIDA) <https://github.com/syohex>`_ and
bug reporters.
* Fixed out-of-tree documentation build
(`#177 <https://github.com/fmtlib/fmt/issues/177>`_).
Thanks to `@jackyf (Eugene V. Lyubimkin) <https://github.com/jackyf>`_.
Fixes
~~~~~
* Fixed ``initializer_list`` detection
(`#136 <https://github.com/fmtlib/fmt/issues/136>`_).
Thanks to `@Gachapen (Magnus Bjerke Vik) <https://github.com/Gachapen>`_.
* [Breaking] Fixed formatting of enums with numeric format specifiers in
``fmt::(s)printf``
(`#131 <https://github.com/fmtlib/fmt/issues/131>`_,
`#139 <https://github.com/fmtlib/fmt/issues/139>`_):
.. code:: c++
enum { ANSWER = 42 };
fmt::printf("%d", ANSWER);
Thanks to `@Naios <https://github.com/Naios>`_.
* Improved compatibility with old versions of MinGW
(`#129 <https://github.com/fmtlib/fmt/issues/129>`_,
`#130 <https://github.com/fmtlib/fmt/pull/130>`_,
`#132 <https://github.com/fmtlib/fmt/issues/132>`_).
Thanks to `@cstamford (Christopher Stamford) <https://github.com/cstamford>`_.
* Fixed a compile error on MSVC with disabled exceptions
(`#144 <https://github.com/fmtlib/fmt/issues/144>`_).
* Added a workaround for broken implementation of variadic templates in MSVC2012
(`#148 <https://github.com/fmtlib/fmt/issues/148>`_).
* Placed the anonymous namespace within ``fmt`` namespace for the header-only
configuration
(`#171 <https://github.com/fmtlib/fmt/issues/171>`_).
Thanks to `@alfps (Alf P. Steinbach) <https://github.com/alfps>`_.
* Fixed issues reported by Coverity Scan
(`#187 <https://github.com/fmtlib/fmt/issues/187>`_,
`#192 <https://github.com/fmtlib/fmt/issues/192>`_).
* Implemented a workaround for a name lookup bug in MSVC2010
(`#188 <https://github.com/fmtlib/fmt/issues/188>`_).
* Fixed compiler warnings
(`#95 <https://github.com/fmtlib/fmt/issues/95>`_,
`#96 <https://github.com/fmtlib/fmt/issues/96>`_,
`#114 <https://github.com/fmtlib/fmt/pull/114>`_,
`#135 <https://github.com/fmtlib/fmt/issues/135>`_,
`#142 <https://github.com/fmtlib/fmt/issues/142>`_,
`#145 <https://github.com/fmtlib/fmt/issues/145>`_,
`#146 <https://github.com/fmtlib/fmt/issues/146>`_,
`#158 <https://github.com/fmtlib/fmt/issues/158>`_,
`#163 <https://github.com/fmtlib/fmt/issues/163>`_,
`#175 <https://github.com/fmtlib/fmt/issues/175>`_,
`#190 <https://github.com/fmtlib/fmt/issues/190>`_,
`#191 <https://github.com/fmtlib/fmt/pull/191>`_,
`#194 <https://github.com/fmtlib/fmt/issues/194>`_,
`#196 <https://github.com/fmtlib/fmt/pull/196>`_,
`#216 <https://github.com/fmtlib/fmt/issues/216>`_,
`#218 <https://github.com/fmtlib/fmt/pull/218>`_,
`#220 <https://github.com/fmtlib/fmt/pull/220>`_,
`#229 <https://github.com/fmtlib/fmt/pull/229>`_,
`#233 <https://github.com/fmtlib/fmt/issues/233>`_,
`#234 <https://github.com/fmtlib/fmt/issues/234>`_,
`#236 <https://github.com/fmtlib/fmt/pull/236>`_,
`#281 <https://github.com/fmtlib/fmt/issues/281>`_,
`#289 <https://github.com/fmtlib/fmt/issues/289>`_).
Thanks to `@seanmiddleditch (Sean Middleditch) <https://github.com/seanmiddleditch>`_,
`@dixlorenz (Dix Lorenz) <https://github.com/dixlorenz>`_,
`@CarterLi (李通洲) <https://github.com/CarterLi>`_,
`@Naios <https://github.com/Naios>`_,
`@fmatthew5876 (Matthew Fioravante) <https://github.com/fmatthew5876>`_,
`@LevskiWeng (Levski Weng) <https://github.com/LevskiWeng>`_,
`@rpopescu <https://github.com/rpopescu>`_,
`@gabime (Gabi Melman) <https://github.com/gabime>`_,
`@cubicool (Jeremy Moles) <https://github.com/cubicool>`_,
`@jkflying (Julian Kent) <https://github.com/jkflying>`_,
`@LogicalKnight (Sean L) <https://github.com/LogicalKnight>`_,
`@inguin (Ingo van Lil) <https://github.com/inguin>`_ and
`@Jopie64 (Johan) <https://github.com/Jopie64>`_.
* Fixed portability issues (mostly causing test failures) on ARM, ppc64, ppc64le,
s390x and SunOS 5.11 i386
(`#138 <https://github.com/fmtlib/fmt/issues/138>`_,
`#179 <https://github.com/fmtlib/fmt/issues/179>`_,
`#180 <https://github.com/fmtlib/fmt/issues/180>`_,
`#202 <https://github.com/fmtlib/fmt/issues/202>`_,
`#225 <https://github.com/fmtlib/fmt/issues/225>`_,
`Red Hat Bugzilla Bug 1260297 <https://bugzilla.redhat.com/show_bug.cgi?id=1260297>`_).
Thanks to `@Naios <https://github.com/Naios>`_,
`@jackyf (Eugene V. Lyubimkin) <https://github.com/jackyf>`_ and Dave Johansen.
* Fixed a name conflict with macro ``free`` defined in
``crtdbg.h`` when ``_CRTDBG_MAP_ALLOC`` is set
(`#211 <https://github.com/fmtlib/fmt/issues/211>`_).
* Fixed shared library build on OS X
(`#212 <https://github.com/fmtlib/fmt/pull/212>`_).
Thanks to `@dean0x7d (Dean Moldovan) <https://github.com/dean0x7d>`_.
* Fixed an overload conflict on MSVC when ``/Zc:wchar_t-`` option is specified
(`#214 <https://github.com/fmtlib/fmt/pull/214>`_).
Thanks to `@slavanap (Vyacheslav Napadovsky) <https://github.com/slavanap>`_.
* Improved compatibility with MSVC 2008
(`#236 <https://github.com/fmtlib/fmt/pull/236>`_).
Thanks to `@Jopie64 (Johan) <https://github.com/Jopie64>`_.
* Improved compatibility with bcc32
(`#227 <https://github.com/fmtlib/fmt/issues/227>`_).
* Fixed ``static_assert`` detection on Clang
(`#228 <https://github.com/fmtlib/fmt/pull/228>`_).
Thanks to `@dean0x7d (Dean Moldovan) <https://github.com/dean0x7d>`_.
1.1.0 - 2015-03-06
------------------
* Added ``BasicArrayWriter``, a class template that provides operations for
formatting and writing data into a fixed-size array
(`#105 <https://github.com/fmtlib/fmt/issues/105>`_ and
`#122 <https://github.com/fmtlib/fmt/issues/122>`_):
.. code:: c++
char buffer[100];
fmt::ArrayWriter w(buffer);
w.write("The answer is {}", 42);
* Added `0 A.D. <http://play0ad.com/>`_ and `PenUltima Online (POL)
<http://www.polserver.com/>`_ to the list of notable projects using C++ Format.
* C++ Format now uses MSVC intrinsics for better formatting performance
(`#115 <https://github.com/fmtlib/fmt/pull/115>`_,
`#116 <https://github.com/fmtlib/fmt/pull/116>`_,
`#118 <https://github.com/fmtlib/fmt/pull/118>`_ and
`#121 <https://github.com/fmtlib/fmt/pull/121>`_).
Previously these optimizations were only used on GCC and Clang.
Thanks to `@CarterLi <https://github.com/CarterLi>`_ and
`@objectx <https://github.com/objectx>`_.
* CMake install target (`#119 <https://github.com/fmtlib/fmt/pull/119>`_).
Thanks to `@TrentHouliston <https://github.com/TrentHouliston>`_.
You can now install C++ Format with ``make install`` command.
* Improved `Biicode <http://www.biicode.com/>`_ support
(`#98 <https://github.com/fmtlib/fmt/pull/98>`_ and
`#104 <https://github.com/fmtlib/fmt/pull/104>`_). Thanks to
`@MariadeAnton <https://github.com/MariadeAnton>`_ and
`@franramirez688 <https://github.com/franramirez688>`_.
* Improved support for building with `Android NDK
<https://developer.android.com/tools/sdk/ndk/index.html>`_
(`#107 <https://github.com/fmtlib/fmt/pull/107>`_).
Thanks to `@newnon <https://github.com/newnon>`_.
The `android-ndk-example <https://github.com/fmtlib/android-ndk-example>`_
repository provides an example of using C++ Format with Android NDK:
.. image:: https://raw.githubusercontent.com/fmtlib/android-ndk-example/
master/screenshot.png
* Improved documentation of ``SystemError`` and ``WindowsError``
(`#54 <https://github.com/fmtlib/fmt/issues/54>`_).
* Various code improvements
(`#110 <https://github.com/fmtlib/fmt/pull/110>`_,
`#111 <https://github.com/fmtlib/fmt/pull/111>`_ and
`#112 <https://github.com/fmtlib/fmt/pull/112>`_).
Thanks to `@CarterLi <https://github.com/CarterLi>`_.
* Improved compile-time errors when formatting wide into narrow strings
(`#117 <https://github.com/fmtlib/fmt/issues/117>`_).
* Fixed ``BasicWriter::write`` without formatting arguments when C++11 support
is disabled (`#109 <https://github.com/fmtlib/fmt/issues/109>`_).
* Fixed header-only build on OS X with GCC 4.9
(`#124 <https://github.com/fmtlib/fmt/issues/124>`_).
* Fixed packaging issues (`#94 <https://github.com/fmtlib/fmt/issues/94>`_).
* Added `changelog <https://github.com/fmtlib/fmt/blob/master/ChangeLog.rst>`_
(`#103 <https://github.com/fmtlib/fmt/issues/103>`_).
1.0.0 - 2015-02-05
------------------
* Add support for a header-only configuration when ``FMT_HEADER_ONLY`` is
defined before including ``format.h``:
.. code:: c++
#define FMT_HEADER_ONLY
#include "format.h"
* Compute string length in the constructor of ``BasicStringRef``
instead of the ``size`` method
(`#79 <https://github.com/fmtlib/fmt/issues/79>`_).
This eliminates size computation for string literals on reasonable optimizing
compilers.
* Fix formatting of types with overloaded ``operator <<`` for ``std::wostream``
(`#86 <https://github.com/fmtlib/fmt/issues/86>`_):
.. code:: c++
fmt::format(L"The date is {0}", Date(2012, 12, 9));
* Fix linkage of tests on Arch Linux
(`#89 <https://github.com/fmtlib/fmt/issues/89>`_).
* Allow precision specifier for non-float arguments
(`#90 <https://github.com/fmtlib/fmt/issues/90>`_):
.. code:: c++
fmt::print("{:.3}\n", "Carpet"); // prints "Car"
* Fix build on Android NDK
(`#93 <https://github.com/fmtlib/fmt/issues/93>`_).
* Improvements to documentation build procedure.
* Remove ``FMT_SHARED`` CMake variable in favor of standard `BUILD_SHARED_LIBS
<http://www.cmake.org/cmake/help/v3.0/variable/BUILD_SHARED_LIBS.html>`_.
* Fix error handling in ``fmt::fprintf``.
* Fix a number of warnings.
0.12.0 - 2014-10-25
-------------------
* [Breaking] Improved separation between formatting and buffer management.
``Writer`` is now a base class that cannot be instantiated directly.
The new ``MemoryWriter`` class implements the default buffer management
with small allocations done on stack. So ``fmt::Writer`` should be replaced
with ``fmt::MemoryWriter`` in variable declarations.
Old code:
.. code:: c++
fmt::Writer w;
New code:
.. code:: c++
fmt::MemoryWriter w;
If you pass ``fmt::Writer`` by reference, you can continue to do so:
.. code:: c++
void f(fmt::Writer &w);
This doesn't affect the formatting API.
* Support for custom memory allocators
(`#69 <https://github.com/fmtlib/fmt/issues/69>`_)
* Formatting functions now accept ``signed char`` and ``unsigned char`` strings as
arguments (`#73 <https://github.com/fmtlib/fmt/issues/73>`_):
.. code:: c++
auto s = format("GLSL version: {}", glGetString(GL_VERSION));
* Reduced code bloat. According to the new `benchmark results
<https://github.com/fmtlib/fmt#compile-time-and-code-bloat>`_,
cppformat is close to ``printf`` and an order of magnitude better than
Boost Format in terms of compiled code size.
* Improved appearance of the documentation on mobile by using the `Sphinx
Bootstrap theme <http://ryan-roemer.github.io/sphinx-bootstrap-theme/>`_:
.. |old| image:: https://cloud.githubusercontent.com/assets/576385/4792130/cd256436-5de3-11e4-9a62-c077d0c2b003.png
.. |new| image:: https://cloud.githubusercontent.com/assets/576385/4792131/cd29896c-5de3-11e4-8f59-cac952942bf0.png
+-------+-------+
| Old | New |
+-------+-------+
| |old| | |new| |
+-------+-------+
0.11.0 - 2014-08-21
-------------------
* Safe printf implementation with a POSIX extension for positional arguments:
.. code:: c++
fmt::printf("Elapsed time: %.2f seconds", 1.23);
fmt::printf("%1$s, %3$d %2$s", weekday, month, day);
* Arguments of ``char`` type can now be formatted as integers
(Issue `#55 <https://github.com/fmtlib/fmt/issues/55>`_):
.. code:: c++
fmt::format("0x{0:02X}", 'a');
* Deprecated parts of the API removed.
* The library is now built and tested on MinGW with Appveyor in addition to
existing test platforms Linux/GCC, OS X/Clang, Windows/MSVC.
0.10.0 - 2014-07-01
-------------------
**Improved API**
* All formatting methods are now implemented as variadic functions instead
of using ``operator<<`` for feeding arbitrary arguments into a temporary
formatter object. This works both with C++11 where variadic templates are
used and with older standards where variadic functions are emulated by
providing lightweight wrapper functions defined with the ``FMT_VARIADIC``
macro. You can use this macro for defining your own portable variadic
functions:
.. code:: c++
void report_error(const char *format, const fmt::ArgList &args) {
fmt::print("Error: {}");
fmt::print(format, args);
}
FMT_VARIADIC(void, report_error, const char *)
report_error("file not found: {}", path);
Apart from a more natural syntax, this also improves performance as there
is no need to construct temporary formatter objects and control arguments'
lifetimes. Because the wrapper functions are very lightweight, this doesn't
cause code bloat even in pre-C++11 mode.
* Simplified common case of formatting an ``std::string``. Now it requires a
single function call:
.. code:: c++
std::string s = format("The answer is {}.", 42);
Previously it required 2 function calls:
.. code:: c++
std::string s = str(Format("The answer is {}.") << 42);
Instead of unsafe ``c_str`` function, ``fmt::Writer`` should be used directly
to bypass creation of ``std::string``:
.. code:: c++
fmt::Writer w;
w.write("The answer is {}.", 42);
w.c_str(); // returns a C string
This doesn't do dynamic memory allocation for small strings and is less
error-prone, as the lifetime of the string is the same as for
``std::string::c_str``, which is well understood (hopefully).
* Improved consistency in naming functions that are a part of the public API.
Now all public functions are lowercase following the standard library
conventions. Previously it was a combination of lowercase and
CapitalizedWords.
Issue `#50 <https://github.com/fmtlib/fmt/issues/50>`_.
* Old functions are marked as deprecated and will be removed in the next
release.
**Other Changes**
* Experimental support for printf format specifications (work in progress):
.. code:: c++
fmt::printf("The answer is %d.", 42);
std::string s = fmt::sprintf("Look, a %s!", "string");
* Support for hexadecimal floating point format specifiers ``a`` and ``A``:
.. code:: c++
print("{:a}", -42.0); // Prints -0x1.5p+5
print("{:A}", -42.0); // Prints -0X1.5P+5
* CMake option ``FMT_SHARED`` that specifies whether to build format as a
shared library (off by default).
0.9.0 - 2014-05-13
------------------
* More efficient implementation of variadic formatting functions.
* ``Writer::Format`` now has a variadic overload:
.. code:: c++
Writer out;
out.Format("Look, I'm {}!", "variadic");
* For efficiency and consistency with other overloads, the variadic overload of
the ``Format`` function now returns ``Writer`` instead of ``std::string``.
Use the ``str`` function to convert it to ``std::string``:
.. code:: c++
std::string s = str(Format("Look, I'm {}!", "variadic"));
* Replaced formatter actions with output sinks: ``NoAction`` -> ``NullSink``,
``Write`` -> ``FileSink``, ``ColorWriter`` -> ``ANSITerminalSink``.
This improves naming consistency and shouldn't affect client code unless
these classes are used directly which should be rarely needed.
* Added ``ThrowSystemError`` function that formats a message and throws
``SystemError`` containing the formatted message and system-specific error
description. For example, the following code
.. code:: c++
FILE *f = fopen(filename, "r");
if (!f)
ThrowSystemError(errno, "Failed to open file '{}'") << filename;
will throw a ``SystemError`` exception with the description
"Failed to open file '<filename>': No such file or directory" if the file
doesn't exist.
* Support for AppVeyor continuous integration platform.
* ``Format`` now throws ``SystemError`` in case of I/O errors.
* Improve test infrastructure. Print functions are now tested by redirecting
the output to a pipe.
0.8.0 - 2014-04-14
------------------
* Initial release | {
"source": "yandex/perforator",
"title": "contrib/libs/fmt/ChangeLog.rst",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/fmt/ChangeLog.rst",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 190074
} |
Copyright (c) 2012 - present, Victor Zverovich
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--- Optional exception to the license ---
As an exception, if, as a result of your compiling your source code, portions
of this Software are embedded into a machine-executable object form of such
source code, you may redistribute such embedded portions in such object form
without including the above copyright and permission notices. | {
"source": "yandex/perforator",
"title": "contrib/libs/fmt/LICENSE.rst",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/fmt/LICENSE.rst",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1407
} |
{fmt}
=====
.. image:: https://github.com/fmtlib/fmt/workflows/linux/badge.svg
:target: https://github.com/fmtlib/fmt/actions?query=workflow%3Alinux
.. image:: https://github.com/fmtlib/fmt/workflows/macos/badge.svg
:target: https://github.com/fmtlib/fmt/actions?query=workflow%3Amacos
.. image:: https://github.com/fmtlib/fmt/workflows/windows/badge.svg
:target: https://github.com/fmtlib/fmt/actions?query=workflow%3Awindows
.. image:: https://ci.appveyor.com/api/projects/status/ehjkiefde6gucy1v?svg=true
:target: https://ci.appveyor.com/project/vitaut/fmt
.. image:: https://oss-fuzz-build-logs.storage.googleapis.com/badges/fmt.svg
:alt: fmt is continuously fuzzed at oss-fuzz
:target: https://bugs.chromium.org/p/oss-fuzz/issues/list?\
colspec=ID%20Type%20Component%20Status%20Proj%20Reported%20Owner%20\
Summary&q=proj%3Dfmt&can=1
.. image:: https://img.shields.io/badge/stackoverflow-fmt-blue.svg
:alt: Ask questions at StackOverflow with the tag fmt
:target: https://stackoverflow.com/questions/tagged/fmt
**{fmt}** is an open-source formatting library providing a fast and safe
alternative to C stdio and C++ iostreams.
If you like this project, please consider donating to the BYSOL
Foundation that helps victims of political repressions in Belarus:
https://bysol.org/en/bs/general/.
`Documentation <https://fmt.dev>`__
Q&A: ask questions on `StackOverflow with the tag fmt
<https://stackoverflow.com/questions/tagged/fmt>`_.
Try {fmt} in `Compiler Explorer <https://godbolt.org/z/Eq5763>`_.
Features
--------
* Simple `format API <https://fmt.dev/latest/api.html>`_ with positional arguments
for localization
* Implementation of `C++20 std::format
<https://en.cppreference.com/w/cpp/utility/format>`__
* `Format string syntax <https://fmt.dev/latest/syntax.html>`_ similar to Python's
`format <https://docs.python.org/3/library/stdtypes.html#str.format>`_
* Fast IEEE 754 floating-point formatter with correct rounding, shortness and
round-trip guarantees
* Safe `printf implementation
<https://fmt.dev/latest/api.html#printf-formatting>`_ including the POSIX
extension for positional arguments
* Extensibility: `support for user-defined types
<https://fmt.dev/latest/api.html#formatting-user-defined-types>`_
* High performance: faster than common standard library implementations of
``(s)printf``, iostreams, ``to_string`` and ``to_chars``, see `Speed tests`_
and `Converting a hundred million integers to strings per second
<http://www.zverovich.net/2020/06/13/fast-int-to-string-revisited.html>`_
* Small code size both in terms of source code with the minimum configuration
consisting of just three files, ``core.h``, ``format.h`` and ``format-inl.h``,
and compiled code; see `Compile time and code bloat`_
* Reliability: the library has an extensive set of `tests
<https://github.com/fmtlib/fmt/tree/master/test>`_ and is `continuously fuzzed
<https://bugs.chromium.org/p/oss-fuzz/issues/list?colspec=ID%20Type%20
Component%20Status%20Proj%20Reported%20Owner%20Summary&q=proj%3Dfmt&can=1>`_
* Safety: the library is fully type safe, errors in format strings can be
reported at compile time, automatic memory management prevents buffer overflow
errors
* Ease of use: small self-contained code base, no external dependencies,
permissive MIT `license
<https://github.com/fmtlib/fmt/blob/master/LICENSE.rst>`_
* `Portability <https://fmt.dev/latest/index.html#portability>`_ with
consistent output across platforms and support for older compilers
* Clean warning-free codebase even on high warning levels such as
``-Wall -Wextra -pedantic``
* Locale-independence by default
* Optional header-only configuration enabled with the ``FMT_HEADER_ONLY`` macro
See the `documentation <https://fmt.dev>`_ for more details.
Examples
--------
**Print to stdout** (`run <https://godbolt.org/z/Tevcjh>`_)
.. code:: c++
#include <fmt/core.h>
int main() {
fmt::print("Hello, world!\n");
}
**Format a string** (`run <https://godbolt.org/z/oK8h33>`_)
.. code:: c++
std::string s = fmt::format("The answer is {}.", 42);
// s == "The answer is 42."
**Format a string using positional arguments** (`run <https://godbolt.org/z/Yn7Txe>`_)
.. code:: c++
std::string s = fmt::format("I'd rather be {1} than {0}.", "right", "happy");
// s == "I'd rather be happy than right."
**Print chrono durations** (`run <https://godbolt.org/z/K8s4Mc>`_)
.. code:: c++
#include <fmt/chrono.h>
int main() {
using namespace std::literals::chrono_literals;
fmt::print("Default format: {} {}\n", 42s, 100ms);
fmt::print("strftime-like format: {:%H:%M:%S}\n", 3h + 15min + 30s);
}
Output::
Default format: 42s 100ms
strftime-like format: 03:15:30
**Print a container** (`run <https://godbolt.org/z/MjsY7c>`_)
.. code:: c++
#include <vector>
#include <fmt/ranges.h>
int main() {
std::vector<int> v = {1, 2, 3};
fmt::print("{}\n", v);
}
Output::
[1, 2, 3]
**Check a format string at compile time**
.. code:: c++
std::string s = fmt::format("{:d}", "I am not a number");
This gives a compile-time error in C++20 because ``d`` is an invalid format
specifier for a string.
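For comparison, here is a minimal sketch (not part of the original README) of a
call that passes the same compile-time check, because the ``d`` specifier
matches an integral argument:

.. code:: c++

   #include <fmt/core.h>
   #include <string>

   // 'd' is a valid specifier for integers, so this compiles cleanly.
   std::string ok = fmt::format("{:d}", 42);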
**Write a file from a single thread**
.. code:: c++
#include <fmt/os.h>
int main() {
auto out = fmt::output_file("guide.txt");
out.print("Don't {}", "Panic");
}
This can be `5 to 9 times faster than fprintf
<http://www.zverovich.net/2020/08/04/optimal-file-buffer-size.html>`_.
**Print with colors and text styles**
.. code:: c++
#include <fmt/color.h>
int main() {
fmt::print(fg(fmt::color::crimson) | fmt::emphasis::bold,
"Hello, {}!\n", "world");
fmt::print(fg(fmt::color::floral_white) | bg(fmt::color::slate_gray) |
fmt::emphasis::underline, "Hello, {}!\n", "мир");
fmt::print(fg(fmt::color::steel_blue) | fmt::emphasis::italic,
"Hello, {}!\n", "世界");
}
Output on a modern terminal:
.. image:: https://user-images.githubusercontent.com/576385/88485597-d312f600-cf2b-11ea-9cbe-61f535a86e28.png
Benchmarks
----------
Speed tests
~~~~~~~~~~~
================= ============= ===========
Library Method Run Time, s
================= ============= ===========
libc printf 1.04
libc++ std::ostream 3.05
{fmt} 6.1.1 fmt::print 0.75
Boost Format 1.67 boost::format 7.24
Folly Format folly::format 2.23
================= ============= ===========
{fmt} is the fastest of the benchmarked methods, ~35% faster than ``printf``.
The above results were generated by building ``tinyformat_test.cpp`` on macOS
10.14.6 with ``clang++ -O3 -DNDEBUG -DSPEED_TEST -DHAVE_FORMAT``, and taking the
best of three runs. In the test, the format string ``"%0.10f:%04d:%+g:%s:%p:%c:%%\n"``
or equivalent is filled 2,000,000 times with output sent to ``/dev/null``; for
further details refer to the `source
<https://github.com/fmtlib/format-benchmark/blob/master/src/tinyformat-test.cc>`_.
{fmt} is up to 20-30x faster than ``std::ostringstream`` and ``sprintf`` on
floating-point formatting (`dtoa-benchmark <https://github.com/fmtlib/dtoa-benchmark>`_)
and faster than `double-conversion <https://github.com/google/double-conversion>`_ and
`ryu <https://github.com/ulfjack/ryu>`_:
.. image:: https://user-images.githubusercontent.com/576385/95684665-11719600-0ba8-11eb-8e5b-972ff4e49428.png
:target: https://fmt.dev/unknown_mac64_clang12.0.html
Compile time and code bloat
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The script `bloat-test.py
<https://github.com/fmtlib/format-benchmark/blob/master/bloat-test.py>`_
from `format-benchmark <https://github.com/fmtlib/format-benchmark>`_
tests compile time and code bloat for nontrivial projects.
It generates 100 translation units and uses ``printf()`` or its alternative
five times in each to simulate a medium sized project. The resulting
executable size and compile time (Apple LLVM version 8.1.0 (clang-802.0.42),
macOS Sierra, best of three) is shown in the following tables.
**Optimized build (-O3)**
============= =============== ==================== ==================
Method Compile Time, s Executable size, KiB Stripped size, KiB
============= =============== ==================== ==================
printf 2.6 29 26
printf+string 16.4 29 26
iostreams 31.1 59 55
{fmt} 19.0 37 34
Boost Format 91.9 226 203
Folly Format 115.7 101 88
============= =============== ==================== ==================
As you can see, {fmt} has 60% less overhead in terms of resulting binary code
size compared to iostreams and comes pretty close to ``printf``. Boost Format
and Folly Format have the largest overheads.
``printf+string`` is the same as ``printf`` but with extra ``<string>``
include to measure the overhead of the latter.
**Non-optimized build**
============= =============== ==================== ==================
Method Compile Time, s Executable size, KiB Stripped size, KiB
============= =============== ==================== ==================
printf 2.2 33 30
printf+string 16.0 33 30
iostreams 28.3 56 52
{fmt} 18.2 59 50
Boost Format 54.1 365 303
Folly Format 79.9 445 430
============= =============== ==================== ==================
``libc``, ``lib(std)c++`` and ``libfmt`` are all linked as shared libraries to
compare formatting function overhead only. Boost Format is a
header-only library so it doesn't provide any linkage options.
Running the tests
~~~~~~~~~~~~~~~~~
Please refer to `Building the library`__ for the instructions on how to build
the library and run the unit tests.
__ https://fmt.dev/latest/usage.html#building-the-library
Benchmarks reside in a separate repository,
`format-benchmarks <https://github.com/fmtlib/format-benchmark>`_,
so to run the benchmarks you first need to clone this repository and
generate Makefiles with CMake::
$ git clone --recursive https://github.com/fmtlib/format-benchmark.git
$ cd format-benchmark
$ cmake .
Then you can run the speed test::
$ make speed-test
or the bloat test::
$ make bloat-test
Migrating code
--------------
`clang-tidy-fmt <https://github.com/mikecrowe/clang-tidy-fmt>`_ provides clang
tidy checks for converting occurrences of ``printf`` and ``fprintf`` to
``fmt::print``.
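As a rough illustration (a sketch, not taken from the clang-tidy-fmt
documentation), this is the kind of rewrite such a check suggests:

.. code:: c++

   #include <cstdio>
   #include <fmt/core.h>

   int main() {
     int count = 3;
     double elapsed = 0.25;
     // Before: classic printf call.
     std::printf("Processed %d items in %.2f seconds\n", count, elapsed);
     // After: the equivalent fmt::print call.
     fmt::print("Processed {} items in {:.2f} seconds\n", count, elapsed);
   }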
Projects using this library
---------------------------
* `0 A.D. <https://play0ad.com/>`_: a free, open-source, cross-platform
real-time strategy game
* `2GIS <https://2gis.ru/>`_: free business listings with a city map
* `AMPL/MP <https://github.com/ampl/mp>`_:
an open-source library for mathematical programming
* `Aseprite <https://github.com/aseprite/aseprite>`_:
animated sprite editor & pixel art tool
* `AvioBook <https://www.aviobook.aero/en>`_: a comprehensive aircraft
operations suite
* `Blizzard Battle.net <https://battle.net/>`_: an online gaming platform
* `Celestia <https://celestia.space/>`_: real-time 3D visualization of space
* `Ceph <https://ceph.com/>`_: a scalable distributed storage system
* `ccache <https://ccache.dev/>`_: a compiler cache
* `ClickHouse <https://github.com/ClickHouse/ClickHouse>`_: analytical database
management system
* `CUAUV <https://cuauv.org/>`_: Cornell University's autonomous underwater
vehicle
* `Drake <https://drake.mit.edu/>`_: a planning, control, and analysis toolbox
for nonlinear dynamical systems (MIT)
* `Envoy <https://lyft.github.io/envoy/>`_: C++ L7 proxy and communication bus
(Lyft)
* `FiveM <https://fivem.net/>`_: a modification framework for GTA V
* `fmtlog <https://github.com/MengRao/fmtlog>`_: a performant fmtlib-style
logging library with latency in nanoseconds
* `Folly <https://github.com/facebook/folly>`_: Facebook open-source library
* `Grand Mountain Adventure
<https://store.steampowered.com/app/1247360/Grand_Mountain_Adventure/>`_:
A beautiful open-world ski & snowboarding game
* `HarpyWar/pvpgn <https://github.com/pvpgn/pvpgn-server>`_:
Player vs Player Gaming Network with tweaks
* `KBEngine <https://github.com/kbengine/kbengine>`_: an open-source MMOG server
engine
* `Keypirinha <https://keypirinha.com/>`_: a semantic launcher for Windows
* `Kodi <https://kodi.tv/>`_ (formerly xbmc): home theater software
* `Knuth <https://kth.cash/>`_: high-performance Bitcoin full-node
* `Microsoft Verona <https://github.com/microsoft/verona>`_:
research programming language for concurrent ownership
* `MongoDB <https://mongodb.com/>`_: distributed document database
* `MongoDB Smasher <https://github.com/duckie/mongo_smasher>`_: a small tool to
generate randomized datasets
* `OpenSpace <https://openspaceproject.com/>`_: an open-source
astrovisualization framework
* `PenUltima Online (POL) <https://www.polserver.com/>`_:
an MMO server, compatible with most Ultima Online clients
* `PyTorch <https://github.com/pytorch/pytorch>`_: an open-source machine
learning library
* `quasardb <https://www.quasardb.net/>`_: a distributed, high-performance,
associative database
* `Quill <https://github.com/odygrd/quill>`_: asynchronous low-latency logging library
* `QKW <https://github.com/ravijanjam/qkw>`_: generalizing aliasing to simplify
navigation, and executing complex multi-line terminal command sequences
* `redis-cerberus <https://github.com/HunanTV/redis-cerberus>`_: a Redis cluster
proxy
* `redpanda <https://vectorized.io/redpanda>`_: a 10x faster Kafka® replacement
for mission critical systems written in C++
* `rpclib <http://rpclib.net/>`_: a modern C++ msgpack-RPC server and client
library
* `Salesforce Analytics Cloud
<https://www.salesforce.com/analytics-cloud/overview/>`_:
business intelligence software
* `Scylla <https://www.scylladb.com/>`_: a Cassandra-compatible NoSQL data store
that can handle 1 million transactions per second on a single server
* `Seastar <http://www.seastar-project.org/>`_: an advanced, open-source C++
framework for high-performance server applications on modern hardware
* `spdlog <https://github.com/gabime/spdlog>`_: super fast C++ logging library
* `Stellar <https://www.stellar.org/>`_: financial platform
* `Touch Surgery <https://www.touchsurgery.com/>`_: surgery simulator
* `TrinityCore <https://github.com/TrinityCore/TrinityCore>`_: open-source
MMORPG framework
* `Windows Terminal <https://github.com/microsoft/terminal>`_: the new Windows
terminal
`More... <https://github.com/search?q=fmtlib&type=Code>`_
If you are aware of other projects using this library, please let me know
by `email <mailto:[email protected]>`_ or by submitting an
`issue <https://github.com/fmtlib/fmt/issues>`_.
Motivation
----------
So why yet another formatting library?
There are plenty of methods for doing this task, from standard ones like
the printf family of functions and iostreams to the Boost Format and FastFormat
libraries. The reason for creating a new library is that every existing
solution that I found either had serious issues or didn't provide
all the features I needed.
printf
~~~~~~
The good thing about ``printf`` is that it is pretty fast and readily available,
being a part of the C standard library. The main drawback is that it
doesn't support user-defined types. ``printf`` also has safety issues although
they are somewhat mitigated with `__attribute__ ((format (printf, ...)))
<https://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html>`_ in GCC.
There is a POSIX extension that adds positional arguments required for
`i18n <https://en.wikipedia.org/wiki/Internationalization_and_localization>`_
to ``printf`` but it is not a part of C99 and may not be available on some
platforms.
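For contrast, a short sketch (not part of the original text) of how {fmt}
provides positional arguments portably, without relying on that extension:

.. code:: c++

   #include <fmt/core.h>

   int main() {
     // Positional arguments work the same way on every supported platform.
     fmt::print("{1}, {0}!\n", "world", "Hello");  // prints "Hello, world!"
   }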
iostreams
~~~~~~~~~
The main issue with iostreams is best illustrated with an example:
.. code:: c++
std::cout << std::setprecision(2) << std::fixed << 1.23456 << "\n";
which is a lot of typing compared to printf:
.. code:: c++
printf("%.2f\n", 1.23456);
Matthew Wilson, the author of FastFormat, called this "chevron hell". iostreams
don't support positional arguments by design.
The good part is that iostreams support user-defined types and are safe, although
error handling is awkward.
Boost Format
~~~~~~~~~~~~
This is a very powerful library which supports both ``printf``-like format
strings and positional arguments. Its main drawback is performance. According to
various benchmarks, it is much slower than other methods considered here. Boost
Format also has excessive build times and severe code bloat issues (see
`Benchmarks`_).
FastFormat
~~~~~~~~~~
This is an interesting library which is fast, safe and has positional arguments.
However, it has significant limitations, citing its author:
Three features that have no hope of being accommodated within the
current design are:
* Leading zeros (or any other non-space padding)
* Octal/hexadecimal encoding
* Runtime width/alignment specification
It is also quite big and has a heavy dependency, STLSoft, which might be too
restrictive for using it in some projects.
Boost Spirit.Karma
~~~~~~~~~~~~~~~~~~
This is not really a formatting library but I decided to include it here for
completeness. Like iostreams, it suffers from the problem of mixing verbatim text
with arguments. The library is pretty fast, but slower on integer formatting
than ``fmt::format_to`` with format string compilation on Karma's own benchmark,
see `Converting a hundred million integers to strings per second
<http://www.zverovich.net/2020/06/13/fast-int-to-string-revisited.html>`_.
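A minimal sketch (assuming a recent {fmt} version that provides
``fmt/compile.h``) of what format string compilation with ``fmt::format_to``
looks like:

.. code:: c++

   #include <fmt/compile.h>

   int main() {
     char buf[16];
     // The format string is parsed at compile time.
     auto end = fmt::format_to(buf, FMT_COMPILE("{}"), 42);
     *end = '\0';  // buf now holds "42"
   }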
License
-------
{fmt} is distributed under the MIT `license
<https://github.com/fmtlib/fmt/blob/master/LICENSE.rst>`_.
Documentation License
---------------------
The `Format String Syntax <https://fmt.dev/latest/syntax.html>`_
section in the documentation is based on the one from Python `string module
documentation <https://docs.python.org/3/library/string.html#module-string>`_.
For this reason the documentation is distributed under the Python Software
Foundation license available in `doc/python-license.txt
<https://raw.github.com/fmtlib/fmt/master/doc/python-license.txt>`_.
It only applies if you distribute the documentation of {fmt}.
Maintainers
-----------
The {fmt} library is maintained by Victor Zverovich (`vitaut
<https://github.com/vitaut>`_) and Jonathan Müller (`foonathan
<https://github.com/foonathan>`_) with contributions from many other people.
See `Contributors <https://github.com/fmtlib/fmt/graphs/contributors>`_ and
`Releases <https://github.com/fmtlib/fmt/releases>`_ for some of the names.
Let us know if your contribution is not listed or mentioned incorrectly and
we'll make it right. | {
"source": "yandex/perforator",
"title": "contrib/libs/fmt/README.rst",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/fmt/README.rst",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 19373
} |
# Changelog
## [1.66.0](https://github.com/googleapis/python-api-common-protos/compare/v1.65.0...v1.66.0) (2024-11-12)
### Features
* Add `MISSING_ORIGIN` and `OVERLOADED_CREDENTIALS` to `ErrorReason` enum ([d0478d8](https://github.com/googleapis/python-api-common-protos/commit/d0478d8beacb6c53aa29aa0ac14b949421de8112))
* Add field `protobuf_pythonic_types_enabled` to `ExperimentalFeatures` message ([d0478d8](https://github.com/googleapis/python-api-common-protos/commit/d0478d8beacb6c53aa29aa0ac14b949421de8112))
* Add field `selective_gapic_generation` to `CommonLanguageSettings` message ([d0478d8](https://github.com/googleapis/python-api-common-protos/commit/d0478d8beacb6c53aa29aa0ac14b949421de8112))
* Add field `time_series_resource_hierarchy_level` to `MetricDescriptorMetadata` message ([d0478d8](https://github.com/googleapis/python-api-common-protos/commit/d0478d8beacb6c53aa29aa0ac14b949421de8112))
* Add message `SelectiveGapicGeneration` ([d0478d8](https://github.com/googleapis/python-api-common-protos/commit/d0478d8beacb6c53aa29aa0ac14b949421de8112))
## [1.65.0](https://github.com/googleapis/python-api-common-protos/compare/v1.64.0...v1.65.0) (2024-08-27)
### Features
* Add field `experimental_features` to message `PythonSettings` ([#249](https://github.com/googleapis/python-api-common-protos/issues/249)) ([139490f](https://github.com/googleapis/python-api-common-protos/commit/139490fedcebf1a6674d9cf058226e6814208619))
## [1.64.0](https://github.com/googleapis/python-api-common-protos/compare/v1.63.2...v1.64.0) (2024-08-26)
### Features
* Add FieldInfo.referenced_types for generics ([2ba3577](https://github.com/googleapis/python-api-common-protos/commit/2ba35774cb6ea31513b1985e3a391c5c3435e7be))
### Bug Fixes
* Un-deprecate Endpoint.aliases field ([2ba3577](https://github.com/googleapis/python-api-common-protos/commit/2ba35774cb6ea31513b1985e3a391c5c3435e7be))
### Documentation
* Fix formatting in http.proto comments ([2ba3577](https://github.com/googleapis/python-api-common-protos/commit/2ba35774cb6ea31513b1985e3a391c5c3435e7be))
* Improve MethodSettings selector examples ([2ba3577](https://github.com/googleapis/python-api-common-protos/commit/2ba35774cb6ea31513b1985e3a391c5c3435e7be))
* Reformat comments in context proto ([2ba3577](https://github.com/googleapis/python-api-common-protos/commit/2ba35774cb6ea31513b1985e3a391c5c3435e7be))
* Update ResourceDescriptor.plural docs with AIP-122 nested collections guidance ([2ba3577](https://github.com/googleapis/python-api-common-protos/commit/2ba35774cb6ea31513b1985e3a391c5c3435e7be))
## [1.63.2](https://github.com/googleapis/python-api-common-protos/compare/v1.63.1...v1.63.2) (2024-06-19)
### Bug Fixes
* **deps:** Require protobuf>=3.20.2 ([c77c0dc](https://github.com/googleapis/python-api-common-protos/commit/c77c0dc5d29ef780d781a3c5d757736a9ed09674))
* Regenerate pb2 files for compatibility with protobuf 5.x ([c77c0dc](https://github.com/googleapis/python-api-common-protos/commit/c77c0dc5d29ef780d781a3c5d757736a9ed09674))
## [1.63.1](https://github.com/googleapis/python-api-common-protos/compare/v1.63.0...v1.63.1) (2024-05-30)
### Bug Fixes
* Increase upper limit for protobuf 5.X versions ([#212](https://github.com/googleapis/python-api-common-protos/issues/212)) ([28fc17a](https://github.com/googleapis/python-api-common-protos/commit/28fc17a9208aa98782acc6bee6c40ec12b959706))
## [1.63.0](https://github.com/googleapis/python-api-common-protos/compare/v1.62.0...v1.63.0) (2024-03-08)
### Features
* Add `api_version` field to `ServiceOptions` in `google/api/client.proto` ([6f9c4d2](https://github.com/googleapis/python-api-common-protos/commit/6f9c4d2b4b787d9ed2b447d7b99281aa3dcf97b5))
* Add `LOCATION_POLICY_VIOLATED` enum to `ErrorReason` in `google/api/error_reason.proto` ([6f9c4d2](https://github.com/googleapis/python-api-common-protos/commit/6f9c4d2b4b787d9ed2b447d7b99281aa3dcf97b5))
* Add `rest_reference_documentation_uri` field to `ServiceOptions` in `google/api/client.proto` ([6f9c4d2](https://github.com/googleapis/python-api-common-protos/commit/6f9c4d2b4b787d9ed2b447d7b99281aa3dcf97b5))
## [1.62.0](https://github.com/googleapis/python-api-common-protos/compare/v1.61.0...v1.62.0) (2023-12-01)
### Features
* Add `auto_populated_fields` field of `MethodSettings` in `google/api/client_pb2` ([#194](https://github.com/googleapis/python-api-common-protos/issues/194)) ([4b0c73a](https://github.com/googleapis/python-api-common-protos/commit/4b0c73a40f9bf5337fe451c0210f73eadd196b99))
* Add support for Python 3.12 ([#192](https://github.com/googleapis/python-api-common-protos/issues/192)) ([336cdf3](https://github.com/googleapis/python-api-common-protos/commit/336cdf351d4e87891d735837817d2cfc4e5a9fc7))
### Bug Fixes
* Migrate to native namespace packages ([#187](https://github.com/googleapis/python-api-common-protos/issues/187)) ([713e388](https://github.com/googleapis/python-api-common-protos/commit/713e3887a3293aea314060e84bdcf8a12eda3d6c))
## [1.61.0](https://github.com/googleapis/python-api-common-protos/compare/v1.60.0...v1.61.0) (2023-10-09)
### Features
* Add `google/api/field_info.proto` ([2d39f37](https://github.com/googleapis/python-api-common-protos/commit/2d39f37212fe886b3029e1043ca28789e2d66876))
* Add `IDENTIFIER` to `FieldBehavior` enum ([2d39f37](https://github.com/googleapis/python-api-common-protos/commit/2d39f37212fe886b3029e1043ca28789e2d66876))
## [1.60.0](https://github.com/googleapis/python-api-common-protos/compare/v1.59.1...v1.60.0) (2023-07-27)
### Features
* Add `google/api/policy.proto` ([b2cb5c2](https://github.com/googleapis/python-api-common-protos/commit/b2cb5c257ae8d0869d33581b116995620ddae0b2))
* Add `method_policies` to `Control` ([b2cb5c2](https://github.com/googleapis/python-api-common-protos/commit/b2cb5c257ae8d0869d33581b116995620ddae0b2))
## [1.59.1](https://github.com/googleapis/python-api-common-protos/compare/v1.59.0...v1.59.1) (2023-06-06)
### Bug Fixes
* Invalid `dev` version identifiers in `setup.py` ([#166](https://github.com/googleapis/python-api-common-protos/issues/166)) ([c38e03a](https://github.com/googleapis/python-api-common-protos/commit/c38e03aa06eedf65373c283f16e7bbbd5622f37b)), closes [#165](https://github.com/googleapis/python-api-common-protos/issues/165)
## [1.59.0](https://github.com/googleapis/python-api-common-protos/compare/v1.58.0...v1.59.0) (2023-03-20)
### Features
* Add overrides_by_request_protocol to BackendRule in google/api/backend.proto ([77376dd](https://github.com/googleapis/python-api-common-protos/commit/77376dd02af0a1c9255a50516550d2474536fa9d))
* Add proto_reference_documentation_uri to Publishing in google/api/client.proto ([77376dd](https://github.com/googleapis/python-api-common-protos/commit/77376dd02af0a1c9255a50516550d2474536fa9d))
* Add SERVICE_NOT_VISIBLE and GCP_SUSPENDED to ErrorReason in google/api/error_reason.proto ([77376dd](https://github.com/googleapis/python-api-common-protos/commit/77376dd02af0a1c9255a50516550d2474536fa9d))
### Documentation
* Use rst syntax in readme ([77376dd](https://github.com/googleapis/python-api-common-protos/commit/77376dd02af0a1c9255a50516550d2474536fa9d))
## [1.58.0](https://github.com/googleapis/python-api-common-protos/compare/v1.57.1...v1.58.0) (2023-01-06)
### Features
* Add google/rpc/context/audit_context.proto ([41f1529](https://github.com/googleapis/python-api-common-protos/commit/41f1529500e535ec83e2d72f8e97dfda5469cb72))
* Add google/rpc/http.proto ([41f1529](https://github.com/googleapis/python-api-common-protos/commit/41f1529500e535ec83e2d72f8e97dfda5469cb72))
## [1.57.1](https://github.com/googleapis/python-api-common-protos/compare/v1.57.0...v1.57.1) (2022-12-08)
### Bug Fixes
* Mark reference_docs_uri field in google/api/client.proto as deprecated ([#150](https://github.com/googleapis/python-api-common-protos/issues/150)) ([52b5018](https://github.com/googleapis/python-api-common-protos/commit/52b5018abf0902a1e582a406c993b51e0d2aa3cd))
## [1.57.0](https://github.com/googleapis/python-api-common-protos/compare/v1.56.4...v1.57.0) (2022-11-15)
### Features
* Add support for Python 3.10 ([#143](https://github.com/googleapis/python-api-common-protos/issues/143)) ([63ca888](https://github.com/googleapis/python-api-common-protos/commit/63ca888512be84508fcf95e4d5d40df036a85e18))
* Add support for Python 3.11 ([#145](https://github.com/googleapis/python-api-common-protos/issues/145)) ([b9dbb21](https://github.com/googleapis/python-api-common-protos/commit/b9dbb219ea46abd9851af1fc41ea37f9d5631c0b))
* added google.api.JwtLocation.cookie ([6af2132](https://github.com/googleapis/python-api-common-protos/commit/6af21322879cba158e0a5992c9799e68c1744fac))
* added google.api.Service.publishing and client libraries settings ([6af2132](https://github.com/googleapis/python-api-common-protos/commit/6af21322879cba158e0a5992c9799e68c1744fac))
* new fields in enum google.api.ErrorReason ([6af2132](https://github.com/googleapis/python-api-common-protos/commit/6af21322879cba158e0a5992c9799e68c1744fac))
### Bug Fixes
* deprecate google.api.BackendRule.min_deadline ([6af2132](https://github.com/googleapis/python-api-common-protos/commit/6af21322879cba158e0a5992c9799e68c1744fac))
* **deps:** Require protobuf >=3.19.5 ([#141](https://github.com/googleapis/python-api-common-protos/issues/141)) ([9ea3530](https://github.com/googleapis/python-api-common-protos/commit/9ea3530b459269e964fcc98db1c5025e05d6495f))
### Documentation
* minor updates to comments ([6af2132](https://github.com/googleapis/python-api-common-protos/commit/6af21322879cba158e0a5992c9799e68c1744fac))
## [1.56.4](https://github.com/googleapis/python-api-common-protos/compare/v1.56.3...v1.56.4) (2022-07-12)
### Bug Fixes
* require python 3.7+ ([#119](https://github.com/googleapis/python-api-common-protos/issues/119)) ([507b58d](https://github.com/googleapis/python-api-common-protos/commit/507b58dfa0516aedf57880b384e92cda97152398))
## [1.56.3](https://github.com/googleapis/python-api-common-protos/compare/v1.56.2...v1.56.3) (2022-06-21)
### Bug Fixes
* **deps:** allow protobuf < 5.0.0 ([#112](https://github.com/googleapis/python-api-common-protos/issues/112)) ([67b0231](https://github.com/googleapis/python-api-common-protos/commit/67b02313bf47d86ac84917756ff026e331665637))
### Documentation
* fix changelog header to consistent size ([#108](https://github.com/googleapis/python-api-common-protos/issues/108)) ([d315b9f](https://github.com/googleapis/python-api-common-protos/commit/d315b9f23f5dbbce27c965a2b692a8d1dcf03d60))
## [1.56.2](https://github.com/googleapis/python-api-common-protos/compare/v1.56.1...v1.56.2) (2022-05-26)
### Bug Fixes
* **deps:** require grpcio >= 1.0.0, <2.0.0dev ([4a402ce](https://github.com/googleapis/python-api-common-protos/commit/4a402ce798c8364679e69eefdaadcf61fc289308))
* **deps:** require protobuf>= 3.15.0, <4.0.0dev ([#105](https://github.com/googleapis/python-api-common-protos/issues/105)) ([4a402ce](https://github.com/googleapis/python-api-common-protos/commit/4a402ce798c8364679e69eefdaadcf61fc289308))
## [1.56.1](https://github.com/googleapis/python-api-common-protos/compare/v1.56.0...v1.56.1) (2022-05-05)
### Bug Fixes
* **deps:** require protobuf >=1.15.0 ([f04ed64](https://github.com/googleapis/python-api-common-protos/commit/f04ed64b233e1ff95370ef412ad5ecb92cb5780e))
* include tests directory ([#103](https://github.com/googleapis/python-api-common-protos/issues/103)) ([72e5df1](https://github.com/googleapis/python-api-common-protos/commit/72e5df15ce63012f7d5c7781a51687e85a2cf63c))
* regenerate pb2 files using the latest version of grpcio-tools ([f04ed64](https://github.com/googleapis/python-api-common-protos/commit/f04ed64b233e1ff95370ef412ad5ecb92cb5780e))
## [1.56.0](https://github.com/googleapis/python-api-common-protos/compare/v1.55.0...v1.56.0) (2022-03-17)
### Features
* add google/api/error_reason.proto ([62c04b8](https://github.com/googleapis/python-api-common-protos/commit/62c04b83ef9ce972760407d8e9e9e0d77bbb071c))
* add google/api/visibility.proto ([62c04b8](https://github.com/googleapis/python-api-common-protos/commit/62c04b83ef9ce972760407d8e9e9e0d77bbb071c))
* add google/type/decimal.proto ([62c04b8](https://github.com/googleapis/python-api-common-protos/commit/62c04b83ef9ce972760407d8e9e9e0d77bbb071c))
* add google/type/interval.proto ([62c04b8](https://github.com/googleapis/python-api-common-protos/commit/62c04b83ef9ce972760407d8e9e9e0d77bbb071c))
* add google/type/localized_text.proto ([62c04b8](https://github.com/googleapis/python-api-common-protos/commit/62c04b83ef9ce972760407d8e9e9e0d77bbb071c))
* add google/type/phone_number.proto ([62c04b8](https://github.com/googleapis/python-api-common-protos/commit/62c04b83ef9ce972760407d8e9e9e0d77bbb071c))
* update all protos and pb2 files ([62c04b8](https://github.com/googleapis/python-api-common-protos/commit/62c04b83ef9ce972760407d8e9e9e0d77bbb071c))
### Bug Fixes
* expose all names in longrunning _pb2's ([#90](https://github.com/googleapis/python-api-common-protos/issues/90)) ([09e9ccd](https://github.com/googleapis/python-api-common-protos/commit/09e9ccd86c21dceb3a5add66cc4bf5009cb255a9))
* re-generate pb2 files ([#87](https://github.com/googleapis/python-api-common-protos/issues/87)) ([6260547](https://github.com/googleapis/python-api-common-protos/commit/6260547506f122ca9ee833aca0669d1650304a11))
* re-generate pb2 files using grpcio-tools<1.44.0 ([#93](https://github.com/googleapis/python-api-common-protos/issues/93)) ([76bb9f6](https://github.com/googleapis/python-api-common-protos/commit/76bb9f66f9674ad4c3a7fdc8812dadfb25b170a6))
* remove deprecated fields `aliases` and `features` from google/api/endpoint.proto ([62c04b8](https://github.com/googleapis/python-api-common-protos/commit/62c04b83ef9ce972760407d8e9e9e0d77bbb071c))
## [1.55.0](https://github.com/googleapis/python-api-common-protos/compare/v1.54.0...v1.55.0) (2022-02-15)
### Features
* add location proto files. ([#84](https://github.com/googleapis/python-api-common-protos/issues/84)) ([9a33e56](https://github.com/googleapis/python-api-common-protos/commit/9a33e56ac6a07a2e717edc55a39fa7cf2f9eec15))
## [1.54.0](https://www.github.com/googleapis/python-api-common-protos/compare/v1.53.0...v1.54.0) (2021-12-07)
### Features
* add extended_operations.proto ([#77](https://www.github.com/googleapis/python-api-common-protos/issues/77)) ([bc85849](https://www.github.com/googleapis/python-api-common-protos/commit/bc85849e21494b267d87cd6dc5d0a0e23e012470))
* add google/api/routing.proto ([#75](https://www.github.com/googleapis/python-api-common-protos/issues/75)) ([1ae0bbc](https://www.github.com/googleapis/python-api-common-protos/commit/1ae0bbcc9747af4dd467e7a246c1a2a4cd5ef2ec))
## [1.53.0](https://www.github.com/googleapis/python-api-common-protos/compare/v1.52.0...v1.53.0) (2021-02-25)
### Features
* add `google.api.ResourceDescriptor.Style` ([4ce679c](https://www.github.com/googleapis/python-api-common-protos/commit/4ce679cd49771946bf781108e92e07cdf04a61eb))
* add API method signatures to longrunning operations ([8de7ae2](https://www.github.com/googleapis/python-api-common-protos/commit/8de7ae28dfe5dd4d0cb99dd3b89a8f1e614bbe6d))
* add gapic_metadata_pb2 ([#38](https://www.github.com/googleapis/python-api-common-protos/issues/38)) ([8de7ae2](https://www.github.com/googleapis/python-api-common-protos/commit/8de7ae28dfe5dd4d0cb99dd3b89a8f1e614bbe6d))
* add UNORDERED_LIST to field options ([8de7ae2](https://www.github.com/googleapis/python-api-common-protos/commit/8de7ae28dfe5dd4d0cb99dd3b89a8f1e614bbe6d))
* add WaitOperation method to longrunning operations ([8de7ae2](https://www.github.com/googleapis/python-api-common-protos/commit/8de7ae28dfe5dd4d0cb99dd3b89a8f1e614bbe6d))
* require python >=3.6 and ([#31](https://www.github.com/googleapis/python-api-common-protos/issues/31)) ([4ce679c](https://www.github.com/googleapis/python-api-common-protos/commit/4ce679cd49771946bf781108e92e07cdf04a61eb))
### Bug Fixes
* add `create_key` to FieldDescriptors ([4ce679c](https://www.github.com/googleapis/python-api-common-protos/commit/4ce679cd49771946bf781108e92e07cdf04a61eb))
* Generate gRPC files for long-running operations ([#13](https://www.github.com/googleapis/python-api-common-protos/issues/13)) ([a9ce288](https://www.github.com/googleapis/python-api-common-protos/commit/a9ce28840ddfec712da5b296f43e6c3131840db4))
### Documentation
* add link to PyPI ([#10](https://www.github.com/googleapis/python-api-common-protos/issues/10)) ([3f79402](https://www.github.com/googleapis/python-api-common-protos/commit/3f7940226b0e22aef31b82c8dc2196aa25b48a3f))
## [1.53.0dev1](https://www.github.com/googleapis/python-api-common-protos/compare/v1.52.0...v1.53.0dev1) (2021-01-27)
### Features
* add `google.api.ResourceDescriptor.Style` ([4ce679c](https://www.github.com/googleapis/python-api-common-protos/commit/4ce679cd49771946bf781108e92e07cdf04a61eb))
* require python >=3.6 and ([#31](https://www.github.com/googleapis/python-api-common-protos/issues/31)) ([4ce679c](https://www.github.com/googleapis/python-api-common-protos/commit/4ce679cd49771946bf781108e92e07cdf04a61eb))
### Bug Fixes
* add `create_key` to FieldDescriptors ([4ce679c](https://www.github.com/googleapis/python-api-common-protos/commit/4ce679cd49771946bf781108e92e07cdf04a61eb))
* Generate gRPC files for long-running operations ([#13](https://www.github.com/googleapis/python-api-common-protos/issues/13)) ([a9ce288](https://www.github.com/googleapis/python-api-common-protos/commit/a9ce28840ddfec712da5b296f43e6c3131840db4))
### Documentation
* add link to PyPI ([#10](https://www.github.com/googleapis/python-api-common-protos/issues/10)) ([3f79402](https://www.github.com/googleapis/python-api-common-protos/commit/3f7940226b0e22aef31b82c8dc2196aa25b48a3f))
## 1.52.0 (2020-06-04)
### Features
* create api-common-protos repo for python common protos ([4ef4b0d](https://www.github.com/googleapis/python-api-common-protos/commit/4ef4b0d177136bfbd19f4c00ccf2f6d7eaccb153)) | {
"source": "yandex/perforator",
"title": "contrib/libs/googleapis-common-protos/CHANGELOG.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/googleapis-common-protos/CHANGELOG.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 18163
} |
# Contributor Code of Conduct
As contributors and maintainers of this project,
and in the interest of fostering an open and welcoming community,
we pledge to respect all people who contribute through reporting issues,
posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.
We are committed to making participation in this project
a harassment-free experience for everyone,
regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information,
such as physical or electronic
addresses, without explicit permission
* Other unethical or unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct.
By adopting this Code of Conduct,
project maintainers commit themselves to fairly and consistently
applying these principles to every aspect of managing this project.
Project maintainers who do not follow or enforce the Code of Conduct
may be permanently removed from the project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported by opening an issue
or contacting one or more of the project maintainers.
This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.2.0,
available at [http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/) | {
"source": "yandex/perforator",
"title": "contrib/libs/googleapis-common-protos/CODE_OF_CONDUCT.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/googleapis-common-protos/CODE_OF_CONDUCT.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1980
} |
# How to Contribute
We'd love to accept your patches and contributions to this project. There are
just a few small guidelines you need to follow.
## Contributor License Agreement
Contributions to this project must be accompanied by a Contributor License
Agreement (CLA). You (or your employer) retain the copyright to your
contribution; this simply gives us permission to use and redistribute your
contributions as part of the project. Head over to
<https://cla.developers.google.com/> to see your current agreements on file or
to sign a new one.
You generally only need to submit a CLA once, so if you've already submitted one
(even if it was for a different project), you probably don't need to do it
again.
## Code reviews
All submissions, including submissions by project members, require review. We
use GitHub pull requests for this purpose. Consult
[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
information on using pull requests.
## Community Guidelines
This project follows
[Google's Open Source Community Guidelines](https://opensource.google/conduct/). | {
"source": "yandex/perforator",
"title": "contrib/libs/googleapis-common-protos/CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/googleapis-common-protos/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1102
} |
Google APIs common protos
-------------------------
.. image:: https://img.shields.io/pypi/v/googleapis-common-protos.svg
:target: https://pypi.org/project/googleapis-common-protos/
googleapis-common-protos contains the Python classes generated from the common
protos in the `googleapis/googleapis <https://github.com/googleapis/googleapis>`_ repository. | {
"source": "yandex/perforator",
"title": "contrib/libs/googleapis-common-protos/README.rst",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/googleapis-common-protos/README.rst",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 361
} |
# Security Policy
To report a security issue, please use [g.co/vulnz](https://g.co/vulnz).
The Google Security Team will respond within 5 working days of your report on g.co/vulnz.
We use g.co/vulnz for our intake, and do coordination and disclosure here using GitHub Security Advisory to privately discuss and fix the issue. | {
"source": "yandex/perforator",
"title": "contrib/libs/googleapis-common-protos/SECURITY.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/googleapis-common-protos/SECURITY.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 328
} |
gRPC C++ - Building from source
===========================
This document has detailed instructions on how to build gRPC C++ from source. Note that it only covers the build of gRPC itself and is meant for gRPC C++ contributors and/or power users.
Others should follow the user instructions. See the [How to use](https://github.com/grpc/grpc/tree/master/src/cpp#to-start-using-grpc-c) instructions for guidance on how to add gRPC as a dependency to a C++ application (there are several ways and system-wide installation is often not the best choice).
# Pre-requisites
## Linux
```sh
$ [sudo] apt-get install build-essential autoconf libtool pkg-config
```
If you plan to build using CMake
```sh
$ [sudo] apt-get install cmake
```
If you are a contributor and plan to build and run tests, install the following as well:
```sh
$ # clang and LLVM C++ lib is only required for sanitizer builds
$ [sudo] apt-get install clang libc++-dev
```
## MacOS
On a Mac, you will first need to
install Xcode or
[Command Line Tools for Xcode](https://developer.apple.com/download/more/)
and then run the following command from a terminal:
```sh
$ [sudo] xcode-select --install
```
To build gRPC from source, you may need to install the following
packages from [Homebrew](https://brew.sh):
```sh
$ brew install autoconf automake libtool shtool
```
If you plan to build using CMake, follow the instructions from https://cmake.org/download/
*Tip*: when building,
you *may* want to explicitly set the `LIBTOOL` and `LIBTOOLIZE`
environment variables when running `make` to ensure the version
installed by `brew` is being used:
```sh
$ LIBTOOL=glibtool LIBTOOLIZE=glibtoolize make
```
## Windows
To prepare for cmake + Microsoft Visual C++ compiler build
- Install Visual Studio 2019 or later (Visual C++ compiler will be used).
- Install [Git](https://git-scm.com/).
- Install [CMake](https://cmake.org/download/).
- Install [nasm](https://www.nasm.us/) and add it to `PATH` (`choco install nasm`) - *required by boringssl*
- (Optional) Install [Ninja](https://ninja-build.org/) (`choco install ninja`)
# Clone the repository (including submodules)
Before building, you need to clone the gRPC github repository and download submodules containing source code
for gRPC's dependencies (that's done by the `submodule` command or `--recursive` flag). Use following commands
to clone the gRPC repository at the [latest stable release tag](https://github.com/grpc/grpc/releases)
## Unix
```sh
$ git clone -b RELEASE_TAG_HERE https://github.com/grpc/grpc
$ cd grpc
$ git submodule update --init
```
## Windows
```
> git clone -b RELEASE_TAG_HERE https://github.com/grpc/grpc
> cd grpc
> git submodule update --init
```
NOTE: The `bazel` build tool uses a different model for dependencies. You only need to worry about downloading submodules if you're building
with something other than `bazel` (e.g. `cmake`).
# Build from source
In the C++ world, there's no "standard" build system that would work for all supported use cases and on all supported platforms.
Therefore, gRPC supports several major build systems, which should satisfy most users. Depending on your needs
we recommend building using `bazel` or `cmake`.
## Building with bazel (recommended)
Bazel is the primary build system for gRPC C++. If you're comfortable using bazel, we can certainly recommend it.
Using bazel will give you the best developer experience in addition to faster and cleaner builds.
You'll need `bazel` version `1.0.0` or higher to build gRPC.
See [Installing Bazel](https://docs.bazel.build/versions/master/install.html) for instructions on how to install bazel on your system.
We support building with `bazel` on Linux, MacOS and Windows.
From the grpc repository root
```
# Build gRPC C++
$ bazel build :all
```
```
# Run all the C/C++ tests
$ bazel test --config=dbg //test/...
```
NOTE: If you are a gRPC maintainer and you have access to our test cluster, you should use our [gRPC's Remote Execution environment](tools/remote_build/README.md)
to get significant improvement to the build and test speed (and a bunch of other very useful features).
## Building with CMake
### Linux/Unix, Using Make
Run from the grpc directory after cloning the repo with --recursive or updating submodules.
```
$ mkdir -p cmake/build
$ cd cmake/build
$ cmake ../..
$ make
```
If you want to build shared libraries (`.so` files), run `cmake` with `-DBUILD_SHARED_LIBS=ON`.
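For example, a shared-library variant of the same Make-based flow might look like this (a sketch; run from the `cmake/build` directory created above):

```
$ cmake ../.. -DBUILD_SHARED_LIBS=ON
$ make
```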
### Windows, Using Visual Studio 2019 or later
When using the "Visual Studio" generator,
cmake will generate a solution (`grpc.sln`) that contains a VS project for
every target defined in `CMakeLists.txt` (+ a few extra convenience projects
added automatically by cmake). After opening the solution with Visual Studio
you will be able to browse and build the code.
```
> @rem Run from grpc directory after cloning the repo with --recursive or updating submodules.
> md .build
> cd .build
> cmake .. -G "Visual Studio 16 2019"
> cmake --build . --config Release
```
Using gRPC C++ as a DLL is not recommended, but you can still enable it by running `cmake` with `-DBUILD_SHARED_LIBS=ON`.
### Windows, Using Ninja (faster build).
Please note that when using Ninja, you will still need Visual C++ (part of Visual Studio)
installed to be able to compile the C/C++ sources.
```
> @rem Run from grpc directory after cloning the repo with --recursive or updating submodules.
> cd cmake
> md build
> cd build
> call "%VS140COMNTOOLS%..\..\VC\vcvarsall.bat" x64
> cmake ..\.. -GNinja -DCMAKE_BUILD_TYPE=Release
> cmake --build .
```
Using gRPC C++ as a DLL is not recommended, but you can still enable it by running `cmake` with `-DBUILD_SHARED_LIBS=ON`.
### Windows: A note on building shared libs (DLLs)
Windows DLL builds are supported on a "best effort" basis and we don't recommend using gRPC C++ as a DLL as there are some known drawbacks around how C++ DLLs work on Windows. For example, there is no stable C++ ABI and you can't safely allocate memory in one DLL and free it in another, etc.
That said, we don't actively prohibit building DLLs on Windows (it can be enabled in cmake with `-DBUILD_SHARED_LIBS=ON`), and you are free to use the DLL builds
at your own risk.
- you've been warned that there are some important drawbacks and some things might not work at all or will be broken in interesting ways.
- we don't have extensive testing for DLL builds in place (to avoid maintenance costs, increased test duration etc.) so regressions / build breakages might occur
### Dependency management
gRPC's CMake build system has two options for handling dependencies.
CMake can build the dependencies for you, or it can search for libraries
that are already installed on your system and use them to build gRPC.
This behavior is controlled by the `gRPC_<depname>_PROVIDER` CMake variables,
e.g. `gRPC_CARES_PROVIDER`. The options that these variables take are as follows:
* module - build dependencies alongside gRPC. The source code is obtained from
gRPC's git submodules.
* package - use external copies of dependencies that are already available
on your system. These could come from your system package manager, or perhaps
you pre-installed them using CMake with the `CMAKE_INSTALL_PREFIX` option.
For example, if you set `gRPC_CARES_PROVIDER=module`, then CMake will build
c-ares before building gRPC. On the other hand, if you set
`gRPC_CARES_PROVIDER=package`, then CMake will search for a copy of c-ares
that's already installed on your system and use it to build gRPC.
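For example, a hypothetical configure step that builds most dependencies from the bundled submodules but reuses a zlib already installed on the system could look like this (any of the provider variables can be mixed the same way):
```
$ cmake ../.. -DgRPC_ZLIB_PROVIDER=package -DgRPC_CARES_PROVIDER=module
$ make
```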
### Install after build
Perform the following steps to install gRPC using CMake.
* Set `-DgRPC_INSTALL=ON`
* Build the `install` target
The install destination is controlled by the
[`CMAKE_INSTALL_PREFIX`](https://cmake.org/cmake/help/latest/variable/CMAKE_INSTALL_PREFIX.html) variable.
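For example, to stage the installation under a custom prefix (the path below is only an illustration; combine with the dependency provider settings discussed below as needed):
```
$ cmake ../.. -DgRPC_INSTALL=ON -DCMAKE_INSTALL_PREFIX=$HOME/grpc-install
$ make
$ make install
```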
If you are running CMake v3.13 or newer you can build gRPC's dependencies
in "module" mode and install them alongside gRPC in a single step.
[Example](test/distrib/cpp/run_distrib_test_cmake_module_install.sh)
If you are building gRPC < 1.27 or if you are using CMake < 3.13 you will need
to select "package" mode (rather than "module" mode) for the dependencies.
This means you will need to have external copies of these libraries available
on your system. This [example](test/distrib/cpp/run_distrib_test_cmake.sh) shows
how to install dependencies with cmake before proceeding to install gRPC itself.
```
# NOTE: all of gRPC's dependencies need to be already installed
$ cmake ../.. -DgRPC_INSTALL=ON \
-DCMAKE_BUILD_TYPE=Release \
-DgRPC_ABSL_PROVIDER=package \
-DgRPC_CARES_PROVIDER=package \
-DgRPC_PROTOBUF_PROVIDER=package \
-DgRPC_RE2_PROVIDER=package \
-DgRPC_SSL_PROVIDER=package \
-DgRPC_ZLIB_PROVIDER=package
$ make
$ make install
```
### Cross-compiling
You can use CMake to cross-compile gRPC for another architecture. In order to
do so, you will first need to build `protoc` and `grpc_cpp_plugin`
for the host architecture. These tools are used during the build of gRPC, so
we need copies of executables that can be run natively.
You will likely need to install the toolchain for the platform you are
targeting for your cross-compile. Once you have done so, you can write a
toolchain file to tell CMake where to find the compilers and system tools
that will be used for this build.
This toolchain file is specified to CMake by setting the `CMAKE_TOOLCHAIN_FILE`
variable.
```
$ cmake ../.. -DCMAKE_TOOLCHAIN_FILE=path/to/file
$ make
```
[Cross-compile example](test/distrib/cpp/run_distrib_test_cmake_aarch64_cross.sh)
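As a rough sketch, a toolchain file for an aarch64 Linux target using a GNU cross toolchain might look like the following (compiler paths and names are assumptions; adapt them to your toolchain):
```
# toolchain_aarch64.cmake (illustrative sketch only)
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER /usr/bin/aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER /usr/bin/aarch64-linux-gnu-g++)
# Search for headers and libraries only in the target environment,
# but run helper programs (such as the host protoc) from the host.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```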
### A note on SONAME and its ABI compatibility implications in the cmake build
Best efforts are made to bump the SONAME revision when ABI-breaking changes occur. While a
change in the SONAME clearly indicates an ABI incompatibility, no hard guarantees
can be made about any sort of ABI stability across the same SONAME version.
## Building with make on UNIX systems (deprecated)
NOTE: `make` used to be gRPC's default build system, but we're no longer recommending it. You should use `bazel` or `cmake` instead. The `Makefile` is only intended for internal usage and is not meant for public consumption.
From the grpc repository root
```sh
$ make
```
NOTE: if you get an error on Linux such as 'aclocal-1.15: command not found' (which can happen if you ran 'make' before installing the prerequisites), try the following:
```sh
$ git clean -f -d -x && git submodule foreach --recursive git clean -f -d -x
$ [sudo] apt-get install build-essential autoconf libtool pkg-config
$ make
```
### A note on `protoc`
By default gRPC uses [protocol buffers](https://github.com/protocolbuffers/protobuf),
you will need the `protoc` compiler to generate stub server and client code.
If you compile gRPC from source, as described above, the Makefile will automatically try to compile `protoc` from third_party if you cloned the repository recursively and it detects that you do not already have the `protoc` compiler
installed. | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/BUILDING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/BUILDING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 11115
} |
## Community Code of Conduct
gRPC follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/CODE-OF-CONDUCT.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/CODE-OF-CONDUCT.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 137
} |
# gRPC Concepts Overview
Remote Procedure Calls (RPCs) provide a useful abstraction for building
distributed applications and services. The libraries in this repository
provide a concrete implementation of the gRPC protocol, layered over HTTP/2.
These libraries enable communication between clients and servers using any
combination of the supported languages.
## Interface
Developers using gRPC start with a language-agnostic description of an RPC service (a collection
of methods). From this description, gRPC will generate client and server side interfaces
in any of the supported languages. The server implements
the service interface, which can be remotely invoked by the client interface.
By default, gRPC uses [Protocol Buffers](https://github.com/protocolbuffers/protobuf) as the
Interface Definition Language (IDL) for describing both the service interface
and the structure of the payload messages. It is possible to use other
alternatives if desired.
### Invoking & handling remote calls
Starting from an interface definition in a .proto file, gRPC provides
Protocol Compiler plugins that generate Client- and Server-side APIs.
gRPC users call into these APIs on the Client side and implement
the corresponding API on the server side.
#### Synchronous vs. asynchronous
Synchronous RPC calls, that block until a response arrives from the server, are
the closest approximation to the abstraction of a procedure call that RPC
aspires to.
On the other hand, networks are inherently asynchronous and in many scenarios,
it is desirable to have the ability to start RPCs without blocking the current
thread.
The gRPC programming surface in most languages comes in both synchronous and
asynchronous flavors.
## Streaming
gRPC supports streaming semantics, where either the client or the server (or both)
send a stream of messages on a single RPC call. The most general case is
Bidirectional Streaming where a single gRPC call establishes a stream in which both
the client and the server can send a stream of messages to each other. The streamed
messages are delivered in the order they were sent.
# Protocol
The [gRPC protocol](doc/PROTOCOL-HTTP2.md) specifies the abstract requirements for communication between
clients and servers. A concrete embedding over HTTP/2 completes the picture by
fleshing out the details of each of the required operations.
## Abstract gRPC protocol
A gRPC call comprises a bidirectional stream of messages, initiated by the client. In the client-to-server direction, this stream begins with a mandatory `Call Header`, followed by optional `Initial-Metadata`, followed by zero or more `Payload Messages`. A client signals end of its message stream by means of an underlying lower level protocol. The server-to-client direction contains an optional `Initial-Metadata`, followed by zero or more `Payload Messages` terminated with a mandatory `Status` and optional `Status-Metadata` (a.k.a. `Trailing-Metadata`).
## Implementation over HTTP/2
The abstract protocol defined above is implemented over [HTTP/2](https://http2.github.io/). gRPC bidirectional streams are mapped to HTTP/2 streams. The contents of `Call Header` and `Initial Metadata` are sent as HTTP/2 headers and subject to HPACK compression. `Payload Messages` are serialized into a byte stream of length prefixed gRPC frames which are then fragmented into HTTP/2 frames at the sender and reassembled at the receiver. `Status` and `Trailing-Metadata` are sent as HTTP/2 trailing headers (a.k.a., trailers). A client signals end of its message stream by setting `END_STREAM` flag on the last DATA frame.
For a detailed description see [doc/PROTOCOL-HTTP2.md](doc/PROTOCOL-HTTP2.md).
## Flow Control
gRPC uses the flow control mechanism in HTTP/2. This enables fine-grained control of memory used for buffering in-flight messages. | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/CONCEPTS.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/CONCEPTS.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 3842
} |
# How to contribute
We definitely welcome your patches and contributions to gRPC! Please read the gRPC
organization's [governance rules](https://github.com/grpc/grpc-community/blob/master/governance.md)
and [contribution guidelines](https://github.com/grpc/grpc-community/blob/master/CONTRIBUTING.md) before proceeding.
If you are new to github, please start by reading [Pull Request
howto](https://help.github.com/articles/about-pull-requests/)
If you are looking for features to work on, please filter the issues list with the label ["disposition/help wanted"](https://github.com/grpc/grpc/issues?q=label%3A%22disposition%2Fhelp+wanted%22).
Note that some of these feature requests may have been closed in the past after being marked as stale due to inactivity, but they are still valid feature requests.
## Legal requirements
In order to protect both you and ourselves, you will need to sign the
[Contributor License
Agreement](https://identity.linuxfoundation.org/projects/cncf).
## Cloning the repository
Before starting any development work you will need a local copy of the gRPC repository.
Please follow the instructions in [Building gRPC C++: Clone the repository](BUILDING.md#clone-the-repository-including-submodules).
## Building & Running tests
Different languages use different build systems. To hide this complexity, a portable Python script is provided that unifies the experience of building and testing gRPC in different languages and on different platforms.
To build gRPC in the language of choice (e.g. `c++`, `csharp`, `php`, `python`, `ruby`, ...)
- Prepare your development environment based on language-specific instructions in `src/YOUR-LANGUAGE` directory.
- The language-specific instructions might involve installing C/C++ prerequisites listed in
[Building gRPC C++: Prerequisites](BUILDING.md#pre-requisites). This is because gRPC implementations
in this repository are using the native gRPC "core" library internally.
- Run
```
python tools/run_tests/run_tests.py -l YOUR_LANGUAGE --build_only
```
- To also run all the unit tests after building
```
python tools/run_tests/run_tests.py -l YOUR_LANGUAGE
```
You can also run `python tools/run_tests/run_tests.py --help` to discover useful command line flags supported. For more details,
see [tools/run_tests](tools/run_tests) where you will also find guidance on how to run various other test suites (e.g. interop tests, benchmarks).
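For example, to build and then run only the C++ unit tests (using one of the language names listed above):
```
python tools/run_tests/run_tests.py -l c++
```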
## Generated project files
To ease maintenance of language- and platform-specific build systems, many project files are generated using templates and should not be edited by hand.
Run `tools/buildgen/generate_projects.sh` to regenerate. See
[templates](templates) for details.
As a rule of thumb, if you see the "sanity tests" failing you've most likely
edited generated files or you didn't regenerate the projects properly (or your
code formatting doesn't match our code style).
## Guidelines for Pull Requests
How to get your contributions merged smoothly and quickly.
- Create **small PRs** that are narrowly focused on **addressing a single
  concern**. We often receive PRs that try to fix several things at once; if
  only one of the fixes is acceptable, nothing gets merged and both the
  author's and the reviewer's time is wasted. Create separate PRs to address
  different concerns and everyone will be happy.
- For speculative changes, consider opening an issue and discussing it first.
If you are suggesting a behavioral or API change, consider starting with a
[gRFC proposal](https://github.com/grpc/proposal).
- Provide a good **PR description** as a record of **what** change is being made
and **why** it was made. Link to a GitHub issue if it exists.
- Don't fix code style and formatting unless you are already changing that line
to address an issue. PRs with irrelevant changes won't be merged. If you do
want to fix formatting or style, do that in a separate PR.
- If you are adding a new file, make sure it has the copyright message template
at the top as a comment. You can copy over the message from an existing file
and update the year.
- Unless your PR is trivial, you should expect there will be reviewer comments
that you'll need to address before merging. We expect you to be reasonably
responsive to those comments, otherwise the PR will be closed after 2-3 weeks
of inactivity.
- If you have non-trivial contributions, please consider adding an entry to [the
AUTHORS file](https://github.com/grpc/grpc/blob/master/AUTHORS) listing the
copyright holder for the contribution (yourself, if you are signing the
individual CLA, or your company, for corporate CLAs) in the same PR as your
contribution. This needs to be done only once, for each company, or
individual. Please keep this file in alphabetical order.
- Maintain **clean commit history** and use **meaningful commit messages**.
PRs with messy commit history are difficult to review and won't be merged.
Use `rebase -i upstream/master` to curate your commit history and/or to
bring in latest changes from master (but avoid rebasing in the middle of
a code review).
- Keep your PR up to date with upstream/master (if there are merge conflicts,
we can't really merge your change).
- If you are regenerating the projects using
`tools/buildgen/generate_projects.sh`, make changes to generated files a
separate commit with commit message `regenerate projects`. Mixing changes
to generated and hand-written files make your PR difficult to review.
Note that running this script requires the installation of Python packages
`pyyaml` and `mako` (typically installed using `pip`) as well as a recent
version of [`go`](https://golang.org/doc/install#install).
- **All tests need to be passing** before your change can be merged.
We recommend you **run tests locally** before creating your PR to catch
breakages early on (see [tools/run_tests](tools/run_tests). Ultimately, the
green signal will be provided by our testing infrastructure. The reviewer
will help you if there are test failures that seem not related to the change
you are making.
- Exceptions to the rules can be made if there's a compelling reason for doing
so.
## Obtaining Commit Access
We grant Commit Access to contributors based on the following criteria:
* Sustained contribution to the gRPC project.
* Deep understanding of the areas contributed to, and good consideration of various reliability, usability and performance tradeoffs.
* Contributions demonstrate that obtaining Commit Access will significantly reduce friction for the contributors or others.
In addition to submitting PRs, a Contributor with Commit Access can:
* Review PRs and merge once other checks and criteria pass.
* Triage bugs and PRs and assign appropriate labels and reviewers.
### Obtaining Commit Access without Code Contributions
The [gRPC organization](https://github.com/grpc) is comprised of multiple repositories and commit access is usually restricted to one or more of these repositories. Some repositories such as the [grpc.github.io](https://github.com/grpc/grpc.github.io/) do not have code, but the same principle of sustained, high quality contributions, with a good understanding of the fundamentals, apply. | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 7385
} |
This repository is governed by the gRPC organization's [governance rules](https://github.com/grpc/grpc-community/blob/master/governance.md). | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/GOVERNANCE.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/GOVERNANCE.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 140
} |
This page lists all active maintainers of this repository. If you were a
maintainer and would like to add your name to the Emeritus list, please send us a
PR.
See [GOVERNANCE.md](https://github.com/grpc/grpc-community/blob/master/governance.md)
for governance guidelines and how to become a maintainer.
See [CONTRIBUTING.md](https://github.com/grpc/grpc-community/blob/master/CONTRIBUTING.md)
for general contribution guidelines.
## Maintainers (in alphabetical order)
- [a11r](https://github.com/a11r), Google LLC
- [apolcyn](https://github.com/apolcyn), Google LLC
- [arjunroy](https://github.com/arjunroy), Google LLC
- [ctiller](https://github.com/ctiller), Google LLC
- [daniel-j-born](https://github.com/daniel-j-born), Google LLC
- [dapengzhang0](https://github.com/dapengzhang0), Google LLC
- [dfawley](https://github.com/dfawley), Google LLC
- [dklempner](https://github.com/dklempner), Google LLC
- [drfloob](https://github.com/drfloob), Google LLC
- [ejona86](https://github.com/ejona86), Google LLC
- [gnossen](https://github.com/gnossen), Google LLC
- [guantaol](https://github.com/guantaol), Google LLC
- [hcaseyal](https://github.com/hcaseyal), Google LLC
- [jtattermusch](https://github.com/jtattermusch), Google LLC
- [LittleCVR](https://github.com/littlecvr), Google LLC
- [markdroth](https://github.com/markdroth), Google LLC
- [matthewstevenson88](https://github.com/matthewstevenson88), Google LLC
- [medinandres](https://github.com/medinandres), Google LLC
- [murgatroid99](https://github.com/murgatroid99), Google LLC
- [nanahpang](https://github.com/nanahpang), Google LLC
- [pfreixes](https://github.com/pfreixes), Skyscanner Ltd
- [ran-su](https://github.com/ran-su), Google LLC
- [sanjaypujare](https://github.com/sanjaypujare), Google LLC
- [sergiitk](https://github.com/sergiitk), Google LLC
- [soheilhy](https://github.com/soheilhy), Google LLC
- [stanley-cheung](https://github.com/stanley-cheung), Google LLC
- [veblush](https://github.com/veblush), Google LLC
- [vishalpowar](https://github.com/vishalpowar), Google LLC
- [Vizerai](https://github.com/Vizerai), Google LLC
- [wenbozhu](https://github.com/wenbozhu), Google LLC
- [yashykt](https://github.com/yashykt), Google LLC
- [yihuazhang](https://github.com/yihuazhang), Google LLC
- [ZhenLian](https://github.com/ZhenLian), Google LLC
- [ZhouyihaiDing](https://github.com/ZhouyihaiDing), Google LLC
## Emeritus Maintainers (in alphabetical order)
- [adelez](https://github.com/adelez), Google LLC
- [AspirinSJL](https://github.com/AspirinSJL), Google LLC
- [billfeng327](https://github.com/billfeng327), Google LLC
- [bogdandrutu](https://github.com/bogdandrutu), Google LLC
- [dgquintas](https://github.com/dgquintas), Google LLC
- [ericgribkoff](https://github.com/ericgribkoff), Google LLC
- [fengli79](https://github.com/fengli79), Google LLC
- [jboeuf](https://github.com/jboeuf), Google LLC
- [jcanizales](https://github.com/jcanizales), Google LLC
- [jiangtaoli2016](https://github.com/jiangtaoli2016), Google LLC
- [jkolhe](https://github.com/jkolhe), Google LLC
- [jpalmerLinuxFoundation](https://github.com/jpalmerLinuxFoundation), Linux Foundation
- [justinburke](https://github.com/justinburke), Google LLC
- [karthikravis](https://github.com/karthikravis), Google LLC
- [kpayson64](https://github.com/kpayson64), Google LLC
- [kumaralokgithub](https://github.com/kumaralokgithub), Google LLC
- [lidizheng](https://github.com/lidizheng), Google LLC
- [lyuxuan](https://github.com/lyuxuan), Google LLC
- [matt-kwong](https://github.com/matt-kwong), Google LLC
- [mehrdada](https://github.com/mehrdada), Dropbox, Inc.
- [mhaidrygoog](https://github.com/mhaidrygoog), Google LLC
- [mit-mit](https://github.com/mit-mit), Google LLC
- [mpwarres](https://github.com/mpwarres), Google LLC
- [muxi](https://github.com/muxi), Google LLC
- [nathanielmanistaatgoogle](https://github.com/nathanielmanistaatgoogle), Google LLC
- [ncteisen](https://github.com/ncteisen), Google LLC
- [nicolasnoble](https://github.com/nicolasnoble), Google LLC
- [pmarks-net](https://github.com/pmarks-net), Google LLC
- [qixuanl1](https://github.com/qixuanl1), Google LLC
- [rmstar](https://github.com/rmstar), Google LLC
- [sheenaqotj](https://github.com/sheenaqotj), Google LLC
- [slash-lib](https://github.com/slash-lib), Google LLC
- [soltanmm](https://github.com/soltanmm), Google LLC
- [sreecha](https://github.com/sreecha), LinkedIn
- [srini100](https://github.com/srini100), Google LLC
- [summerxyt](https://github.com/summerxyt), Google LLC
- [vjpai](https://github.com/vjpai), Google LLC
- [wcevans](https://github.com/wcevans), Google LLC
- [y-zeng](https://github.com/y-zeng), Google LLC
- [yang-g](https://github.com/yang-g), Google LLC
- [zpencer](https://github.com/zpencer), Google LLC | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/MAINTAINERS.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/MAINTAINERS.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 4774
} |
# Top-level Items by language
## Bazel
* [grpc.bzl](grpc.bzl)
## Objective-C
* [gRPC.podspec](gRPC.podspec)
## PHP
* [composer.json](composer.json)
* [config.m4](config.m4)
* [package.xml](package.xml)
## Python
* [requirements.txt](requirements.txt)
* [setup.cfg](setup.cfg)
* [setup.py](setup.py)
* [PYTHON-MANIFEST.in](PYTHON-MANIFEST.in)
## Ruby
* [Gemfile](Gemfile)
* [grpc.gemspec](grpc.gemspec)
* [Rakefile](Rakefile) | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/MANIFEST.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/MANIFEST.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 429
} |
# gRPC – An RPC library and framework
gRPC is a modern, open source, high-performance remote procedure call (RPC)
framework that can run anywhere. gRPC enables client and server applications to
communicate transparently, and simplifies the building of connected systems.
<table>
<tr>
<td><b>Homepage:</b></td>
<td><a href="https://grpc.io/">grpc.io</a></td>
</tr>
<tr>
<td><b>Mailing List:</b></td>
<td><a href="https://groups.google.com/forum/#!forum/grpc-io">[email protected]</a></td>
</tr>
</table>
[](https://gitter.im/grpc/grpc?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
## To start using gRPC
To maximize usability, gRPC supports the standard method for adding dependencies
to a user's chosen language (if there is one). In most languages, the gRPC
runtime comes as a package available in a user's language package manager.
For instructions on how to use the language-specific gRPC runtime for a project,
please refer to these documents
- [C++](src/cpp): follow the instructions under the `src/cpp` directory
- [C#/.NET](https://github.com/grpc/grpc-dotnet): NuGet packages `Grpc.Net.Client`, `Grpc.AspNetCore.Server`
- [Dart](https://github.com/grpc/grpc-dart): pub package `grpc`
- [Go](https://github.com/grpc/grpc-go): `go get google.golang.org/grpc`
- [Java](https://github.com/grpc/grpc-java): Use JARs from Maven Central
Repository
- [Kotlin](https://github.com/grpc/grpc-kotlin): Use JARs from Maven Central
Repository
- [Node](https://github.com/grpc/grpc-node): `npm install @grpc/grpc-js`
- [Objective-C](src/objective-c): Add `gRPC-ProtoRPC` dependency to podspec
- [PHP](src/php): `pecl install grpc`
- [Python](src/python/grpcio): `pip install grpcio`
- [Ruby](src/ruby): `gem install grpc`
- [WebJS](https://github.com/grpc/grpc-web): follow the grpc-web instructions
Per-language quickstart guides and tutorials can be found in the
[documentation section on the grpc.io website](https://grpc.io/docs/). Code
examples are available in the [examples](examples) directory.
Precompiled bleeding-edge package builds of gRPC `master` branch's `HEAD` are
uploaded daily to [packages.grpc.io](https://packages.grpc.io).
## To start developing gRPC
Contributions are welcome!
Please read [How to contribute](CONTRIBUTING.md) which will guide you through
the entire workflow of how to build the source code, how to run the tests, and
how to contribute changes to the gRPC codebase. The "How to contribute" document
also contains info on how the contribution process works and contains best
practices for creating contributions.
## Troubleshooting
Sometimes things go wrong. Please check out the
[Troubleshooting guide](TROUBLESHOOTING.md) if you are experiencing issues with
gRPC.
## Performance
See the
[Performance dashboard](https://grafana-dot-grpc-testing.appspot.com/)
for performance numbers of master branch daily builds.
## Concepts
See [gRPC Concepts](CONCEPTS.md)
## About This Repository
This repository contains source code for gRPC libraries implemented in multiple
languages written on top of a shared C core library [src/core](src/core).
Libraries in different languages may be in various states of development. We are
seeking contributions for all of these libraries:
| Language | Source |
| ----------------------- | ---------------------------------- |
| Shared C [core library] | [src/core](src/core) |
| C++ | [src/cpp](src/cpp) |
| Ruby | [src/ruby](src/ruby) |
| Python | [src/python](src/python) |
| PHP | [src/php](src/php) |
| C# (core library based) | [src/csharp](src/csharp) |
| Objective-C | [src/objective-c](src/objective-c) |
| Language | Source repo |
| -------------------- | -------------------------------------------------- |
| Java | [grpc-java](https://github.com/grpc/grpc-java) |
| Kotlin | [grpc-kotlin](https://github.com/grpc/grpc-kotlin) |
| Go | [grpc-go](https://github.com/grpc/grpc-go) |
| NodeJS | [grpc-node](https://github.com/grpc/grpc-node) |
| WebJS | [grpc-web](https://github.com/grpc/grpc-web) |
| Dart | [grpc-dart](https://github.com/grpc/grpc-dart) |
| .NET (pure C# impl.) | [grpc-dotnet](https://github.com/grpc/grpc-dotnet) |
| Swift | [grpc-swift](https://github.com/grpc/grpc-swift) | | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 4737
} |
# Security Policy
For information on gRPC Security Policy and reporting potential security issues, please see [gRPC CVE Process](https://github.com/grpc/proposal/blob/master/P4-grpc-cve-process.md). | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/SECURITY.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/SECURITY.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 201
} |
# Troubleshooting gRPC
This guide is for troubleshooting gRPC implementations based on the C core library (sources for most of them live in the `grpc/grpc` repository).
## Enabling extra logging and tracing
Extra logging can be very useful for diagnosing problems. All gRPC implementations based on the C core library support
the `GRPC_VERBOSITY` and `GRPC_TRACE` environment variables that can be used to increase the amount of information
that gets printed to stderr.
## GRPC_VERBOSITY
`GRPC_VERBOSITY` is used to set the minimum level of log messages printed by gRPC (supported values are `DEBUG`, `INFO` and `ERROR`). If this environment variable is unset, only `ERROR` logs will be printed.
## GRPC_TRACE
`GRPC_TRACE` can be used to enable extra logging for some internal gRPC components. Enabling the right traces can be invaluable
for diagnosing what is going wrong when things aren't working as intended. Possible values for `GRPC_TRACE` are listed in [Environment Variables Overview](doc/environment_variables.md).
Multiple traces can be enabled at once (use comma as separator).
```
# Enable debug logs for an application
GRPC_VERBOSITY=debug ./helloworld_application_using_grpc
```
```
# Print information about invocations of low-level C core API.
# Note that trace logs of log level DEBUG won't be displayed.
# Also note that most tracers use log level INFO, so without setting
# GRPC_VERBOSITY accordingly, no traces will be printed.
GRPC_VERBOSITY=info GRPC_TRACE=api ./helloworld_application_using_grpc
```
```
# Print info from 3 different tracers, including tracing logs with log level DEBUG
GRPC_VERBOSITY=debug GRPC_TRACE=tcp,http,api ./helloworld_application_using_grpc
```
Known limitations: `GRPC_TRACE=tcp` is currently not implemented for Windows (you won't see any tcp traces).
Please note that the `GRPC_TRACE` environment variable has nothing to do with gRPC's "tracing" feature (i.e., tracing RPCs in a microservice environment to gain insight into how requests are processed by the deployment); it is merely used to enable printing of extra logs.
"source": "yandex/perforator",
"title": "contrib/libs/grpc/TROUBLESHOOTING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/TROUBLESHOOTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2082
} |
Building and installing a packaged release of jemalloc can be as simple as
typing the following while in the root directory of the source tree:
./configure
make
make install
If building from unpackaged developer sources, the simplest command sequence
that might work is:
./autogen.sh
make
make install
You can uninstall the installed build artifacts like this:
make uninstall
Notes:
- "autoconf" needs to be installed
- Documentation is built by the default target only when xsltproc is
available. Build will warn but not stop if the dependency is missing.
## Advanced configuration
The 'configure' script supports numerous options that allow control of which
functionality is enabled, where jemalloc is installed, etc. Optionally, pass
any of the following arguments (not a definitive list) to 'configure':
* `--help`
Print a definitive list of options.
* `--prefix=<install-root-dir>`
Set the base directory in which to install. For example:
./configure --prefix=/usr/local
will cause files to be installed into /usr/local/include, /usr/local/lib,
and /usr/local/man.
* `--with-version=(<major>.<minor>.<bugfix>-<nrev>-g<gid>|VERSION)`
The VERSION file is mandatory for successful configuration, and the
following steps are taken to assure its presence:
1) If --with-version=<major>.<minor>.<bugfix>-<nrev>-g<gid> is specified,
generate VERSION using the specified value.
2) If --with-version is not specified in either form and the source
directory is inside a git repository, try to generate VERSION via 'git
describe' invocations that pattern-match release tags.
3) If VERSION is missing, generate it with a bogus version:
0.0.0-0-g0000000000000000000000000000000000000000
Note that --with-version=VERSION bypasses (1) and (2), which simplifies
VERSION configuration when embedding a jemalloc release into another
project's git repository.
* `--with-rpath=<colon-separated-rpath>`
Embed one or more library paths, so that libjemalloc can find the libraries
it is linked to. This works only on ELF-based systems.
* `--with-mangling=<map>`
Mangle public symbols specified in <map> which is a comma-separated list of
name:mangled pairs.
For example, to use ld's --wrap option as an alternative method for
overriding libc's malloc implementation, specify something like:
--with-mangling=malloc:__wrap_malloc,free:__wrap_free[...]
Note that mangling happens prior to application of the prefix specified by
--with-jemalloc-prefix, and mangled symbols are then ignored when applying
the prefix.
* `--with-jemalloc-prefix=<prefix>`
Prefix all public APIs with <prefix>. For example, if <prefix> is
"prefix_", API changes like the following occur:
malloc() --> prefix_malloc()
malloc_conf --> prefix_malloc_conf
/etc/malloc.conf --> /etc/prefix_malloc.conf
MALLOC_CONF --> PREFIX_MALLOC_CONF
This makes it possible to use jemalloc at the same time as the system
allocator, or even to use multiple copies of jemalloc simultaneously.
By default, the prefix is "", except on OS X, where it is "je_". On OS X,
jemalloc overlays the default malloc zone, but makes no attempt to actually
replace the "malloc", "calloc", etc. symbols.
* `--without-export`
Don't export public APIs. This can be useful when building jemalloc as a
static library, or to avoid exporting public APIs when using the zone
allocator on OSX.
* `--with-private-namespace=<prefix>`
Prefix all library-private APIs with <prefix>je_. For shared libraries,
symbol visibility mechanisms prevent these symbols from being exported, but
for static libraries, naming collisions are a real possibility. By
default, <prefix> is empty, which results in a symbol prefix of je_ .
* `--with-install-suffix=<suffix>`
Append <suffix> to the base name of all installed files, such that multiple
versions of jemalloc can coexist in the same installation directory. For
example, libjemalloc.so.0 becomes libjemalloc<suffix>.so.0.
* `--with-malloc-conf=<malloc_conf>`
Embed `<malloc_conf>` as a run-time options string that is processed prior to
the malloc_conf global variable, the /etc/malloc.conf symlink, and the
MALLOC_CONF environment variable. For example, to change the default decay
time to 30 seconds:
--with-malloc-conf=decay_ms:30000
* `--enable-debug`
Enable assertions and validation code. This incurs a substantial
performance hit, but is very useful during application development.
* `--disable-stats`
Disable statistics gathering functionality. See the "opt.stats_print"
option documentation for usage details.
* `--enable-prof`
Enable heap profiling and leak detection functionality. See the "opt.prof"
option documentation for usage details. When enabled, there are several
approaches to backtracing, and the configure script chooses the first one
in the following list that appears to function correctly:
+ libunwind (requires --enable-prof-libunwind)
+ libgcc (unless --disable-prof-libgcc)
+ gcc intrinsics (unless --disable-prof-gcc)
* `--enable-prof-libunwind`
Use the libunwind library (http://www.nongnu.org/libunwind/) for stack
backtracing.
* `--disable-prof-libgcc`
Disable the use of libgcc's backtracing functionality.
* `--disable-prof-gcc`
Disable the use of gcc intrinsics for backtracing.
* `--with-static-libunwind=<libunwind.a>`
Statically link against the specified libunwind.a rather than dynamically
linking with -lunwind.
* `--disable-fill`
Disable support for junk/zero filling of memory. See the "opt.junk" and
"opt.zero" option documentation for usage details.
* `--disable-zone-allocator`
Disable zone allocator for Darwin. This means jemalloc won't be hooked as
the default allocator on OSX/iOS.
* `--enable-utrace`
Enable utrace(2)-based allocation tracing. This feature is not broadly
portable (FreeBSD has it, but Linux and OS X do not).
* `--enable-xmalloc`
Enable support for optional immediate termination due to out-of-memory
errors, as is commonly implemented by the "xmalloc" wrapper function for malloc.
See the "opt.xmalloc" option documentation for usage details.
* `--enable-lazy-lock`
Enable code that wraps pthread_create() to detect when an application
switches from single-threaded to multi-threaded mode, so that it can avoid
mutex locking/unlocking operations while in single-threaded mode. In
practice, this feature usually has little impact on performance unless
thread-specific caching is disabled.
* `--disable-cache-oblivious`
Disable cache-oblivious large allocation alignment by default, for large
allocation requests with no alignment constraints. If this feature is
disabled, all large allocations are page-aligned as an implementation
artifact, which can severely harm CPU cache utilization. However, the
cache-oblivious layout comes at the cost of one extra page per large
allocation, which in the most extreme case increases physical memory usage
for the 16 KiB size class to 20 KiB.
* `--disable-syscall`
Disable use of syscall(2) rather than {open,read,write,close}(2). This is
intended as a workaround for systems that place security limitations on
syscall(2).
* `--disable-cxx`
Disable C++ integration. This will cause new and delete operator
implementations to be omitted.
* `--with-xslroot=<path>`
Specify where to find DocBook XSL stylesheets when building the
documentation.
* `--with-lg-page=<lg-page>`
Specify the base 2 log of the allocator page size, which must in turn be at
least as large as the system page size. By default the configure script
determines the host's page size and sets the allocator page size equal to
the system page size, so this option need not be specified unless the
system page size may change between configuration and execution, e.g. when
cross compiling.
* `--with-lg-hugepage=<lg-hugepage>`
Specify the base 2 log of the system huge page size. This option is useful
when cross compiling, or when overriding the default for systems that do
not explicitly support huge pages.
* `--with-lg-quantum=<lg-quantum>`
Specify the base 2 log of the minimum allocation alignment. jemalloc needs
to know the minimum alignment that meets the following C standard
requirement (quoted from the April 12, 2011 draft of the C11 standard):
> The pointer returned if the allocation succeeds is suitably aligned so
that it may be assigned to a pointer to any type of object with a
fundamental alignment requirement and then used to access such an object
or an array of such objects in the space allocated [...]
This setting is architecture-specific, and although jemalloc includes known
safe values for the most commonly used modern architectures, there is a
wrinkle related to GNU libc (glibc) that may impact your choice of
<lg-quantum>. On most modern architectures, this mandates 16-byte
alignment (<lg-quantum>=4), but the glibc developers chose not to meet this
requirement for performance reasons. An old discussion can be found at
<https://sourceware.org/bugzilla/show_bug.cgi?id=206> . Unlike glibc,
jemalloc does follow the C standard by default (caveat: jemalloc
technically cheats for size classes smaller than the quantum), but the fact
that Linux systems already work around this allocator noncompliance means
that it is generally safe in practice to let jemalloc's minimum alignment
follow glibc's lead. If you specify `--with-lg-quantum=3` during
configuration, jemalloc will provide additional size classes that are not
16-byte-aligned (24, 40, and 56).
* `--with-lg-vaddr=<lg-vaddr>`
Specify the number of significant virtual address bits. By default, the
configure script attempts to detect virtual address size on those platforms
where it knows how, and picks a default otherwise. This option may be
useful when cross-compiling.
* `--disable-initial-exec-tls`
Disable the initial-exec TLS model for jemalloc's internal thread-local
storage (on those platforms that support explicit settings). This can allow
jemalloc to be dynamically loaded after program startup (e.g. using dlopen).
Note that in this case, there will be two malloc implementations operating
in the same process, which will almost certainly result in confusing runtime
crashes if pointers leak from one implementation to the other.
* `--disable-libdl`
Disable the usage of libdl, namely dlsym(3) which is required by the lazy
lock option. This can allow building static binaries.
The following environment variables (not a definitive list) impact configure's
behavior:
* `CFLAGS="?"`
* `CXXFLAGS="?"`
Pass these flags to the C/C++ compiler. Any flags set by the configure
script are prepended, which means explicitly set flags generally take
precedence. Take care when specifying flags such as -Werror, because
configure tests may be affected in undesirable ways.
* `EXTRA_CFLAGS="?"`
* `EXTRA_CXXFLAGS="?"`
Append these flags to CFLAGS/CXXFLAGS, without passing them to the
compiler(s) during configuration. This makes it possible to add flags such
as -Werror, while allowing the configure script to determine what other
flags are appropriate for the specified configuration.
* `CPPFLAGS="?"`
Pass these flags to the C preprocessor. Note that CFLAGS is not passed to
'cpp' when 'configure' is looking for include files, so you must use
CPPFLAGS instead if you need to help 'configure' find header files.
* `LD_LIBRARY_PATH="?"`
'ld' uses this colon-separated list to find libraries.
* `LDFLAGS="?"`
Pass these flags when linking.
* `PATH="?"`
'configure' uses this to find programs.
In some cases it may be necessary to work around configuration results that do
not match reality. For example, Linux 4.5 added support for the MADV_FREE flag
to madvise(2), which can cause problems if building on a host with MADV_FREE
support and deploying to a target without. To work around this, use a cache
file to override the relevant configuration variable defined in configure.ac,
e.g.:
echo "je_cv_madv_free=no" > config.cache && ./configure -C
## Advanced compilation
To build only parts of jemalloc, use the following targets:
build_lib_shared
build_lib_static
build_lib
build_doc_html
build_doc_man
build_doc
To install only parts of jemalloc, use the following targets:
install_bin
install_include
install_lib_shared
install_lib_static
install_lib_pc
install_lib
install_doc_html
install_doc_man
install_doc
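For example, to build and install only the shared library together with the public headers (a minimal sketch using the targets listed above):

    make build_lib_shared
    make install_lib_shared install_include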
To clean up build results to varying degrees, use the following make targets:
clean
distclean
relclean
## Advanced installation
Optionally, define make variables when invoking make, including (not
exclusively):
* `INCLUDEDIR="?"`
Use this as the installation prefix for header files.
* `LIBDIR="?"`
Use this as the installation prefix for libraries.
* `MANDIR="?"`
Use this as the installation prefix for man pages.
* `DESTDIR="?"`
Prepend DESTDIR to INCLUDEDIR, LIBDIR, DATADIR, and MANDIR. This is useful
when installing to a different path than was specified via --prefix.
* `CC="?"`
Use this to invoke the C compiler.
* `CFLAGS="?"`
Pass these flags to the compiler.
* `CPPFLAGS="?"`
Pass these flags to the C preprocessor.
* `LDFLAGS="?"`
Pass these flags when linking.
* `PATH="?"`
Use this to search for programs used during configuration and building.
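For example, to stage an installation into a packaging directory while overriding the library path (both paths below are purely illustrative):

    make install DESTDIR=/tmp/jemalloc-stage LIBDIR=/usr/local/lib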
## Development
If you intend to make non-trivial changes to jemalloc, use the 'autogen.sh'
script rather than 'configure'. This re-generates 'configure', enables
configuration dependency rules, and enables re-generation of automatically
generated source files.
The build system supports using an object directory separate from the source
tree. For example, you can create an 'obj' directory, and from within that
directory, issue configuration and build commands:
autoconf
mkdir obj
cd obj
../configure --enable-autogen
make
## Documentation
The manual page is generated in both html and roff formats. Any web browser
can be used to view the html manual. The roff manual page can be formatted
prior to installation via the following command:
nroff -man -t doc/jemalloc.3 | {
"source": "yandex/perforator",
"title": "contrib/libs/jemalloc/INSTALL.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/jemalloc/INSTALL.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 14796
} |
This document summarizes the common approaches for performance fine tuning with
jemalloc (as of 5.3.0). The default configuration of jemalloc tends to work
reasonably well in practice, and most applications should not have to tune any
options. However, in order to cover a wide range of applications and avoid
pathological cases, the default setting is sometimes kept conservative and
suboptimal, even for many common workloads. When jemalloc is properly tuned for
a specific application / workload, it is common to improve system level metrics
by a few percent, or make favorable trade-offs.
## Notable runtime options for performance tuning
Runtime options can be set via
[malloc_conf](http://jemalloc.net/jemalloc.3.html#tuning).
* [background_thread](http://jemalloc.net/jemalloc.3.html#background_thread)
Enabling jemalloc background threads generally improves the tail latency for
application threads, since unused memory purging is shifted to the dedicated
background threads. In addition, unintended purging delay caused by
application inactivity is avoided with background threads.
Suggested: `background_thread:true` when jemalloc managed threads can be
allowed.
* [metadata_thp](http://jemalloc.net/jemalloc.3.html#opt.metadata_thp)
Allowing jemalloc to utilize transparent huge pages for its internal
metadata usually reduces TLB misses significantly, especially for programs
with large memory footprint and frequent allocation / deallocation
activities. Metadata memory usage may increase due to the use of huge
pages.
Suggested for allocation intensive programs: `metadata_thp:auto` or
`metadata_thp:always`, which is expected to improve CPU utilization at a
small memory cost.
* [dirty_decay_ms](http://jemalloc.net/jemalloc.3.html#opt.dirty_decay_ms) and
[muzzy_decay_ms](http://jemalloc.net/jemalloc.3.html#opt.muzzy_decay_ms)
Decay time determines how fast jemalloc returns unused pages back to the
operating system, and therefore provides a fairly straightforward trade-off
between CPU and memory usage. Shorter decay time purges unused pages faster
to reduce memory usage (usually at the cost of more CPU cycles spent on
purging), and vice versa.
Suggested: tune the values based on the desired trade-offs.
* [narenas](http://jemalloc.net/jemalloc.3.html#opt.narenas)
By default jemalloc uses multiple arenas to reduce internal lock contention.
However, a high arena count may also increase overall memory fragmentation,
since arenas manage memory independently. When a high degree of parallelism
is not expected at the allocator level, a lower number of arenas often
improves memory usage.
Suggested: if low parallelism is expected, try lower arena count while
monitoring CPU and memory usage.
* [percpu_arena](http://jemalloc.net/jemalloc.3.html#opt.percpu_arena)
Enable dynamic thread to arena association based on running CPU. This has
the potential to improve locality, e.g. when thread to CPU affinity is
present.
Suggested: try `percpu_arena:percpu` or `percpu_arena:phycpu` if
thread migration between processors is expected to be infrequent.
Examples:
* High resource consumption application, prioritizing CPU utilization:
`background_thread:true,metadata_thp:auto` combined with relaxed decay time
(increased `dirty_decay_ms` and / or `muzzy_decay_ms`,
e.g. `dirty_decay_ms:30000,muzzy_decay_ms:30000`).
* High resource consumption application, prioritizing memory usage:
`background_thread:true,tcache_max:4096` combined with shorter decay time
(decreased `dirty_decay_ms` and / or `muzzy_decay_ms`,
e.g. `dirty_decay_ms:5000,muzzy_decay_ms:5000`), and lower arena count
(e.g. number of CPUs).
* Low resource consumption application:
`narenas:1,tcache_max:1024` combined with shorter decay time (decreased
`dirty_decay_ms` and / or `muzzy_decay_ms`,e.g.
`dirty_decay_ms:1000,muzzy_decay_ms:0`).
* Extremely conservative -- minimize memory usage at all costs, only suitable when
allocation activity is very rare:
`narenas:1,tcache:false,dirty_decay_ms:0,muzzy_decay_ms:0`
Note that it is recommended to combine the options with `abort_conf:true` which
aborts immediately on illegal options.
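These option strings are typically supplied at run time through the `MALLOC_CONF` environment variable (they can also be baked in at build time with `--with-malloc-conf`, see INSTALL.md). A minimal sketch, where `my_server` stands in for your application binary linked against jemalloc:

    MALLOC_CONF="background_thread:true,metadata_thp:auto,abort_conf:true" ./my_server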
## Beyond runtime options
In addition to the runtime options, there are a number of programmatic ways to
improve application performance with jemalloc.
* [Explicit arenas](http://jemalloc.net/jemalloc.3.html#arenas.create)
Manually created arenas can help performance in various ways, e.g. by
managing locality and contention for specific usages. For example,
applications can explicitly allocate frequently accessed objects from a
dedicated arena with
[mallocx()](http://jemalloc.net/jemalloc.3.html#MALLOCX_ARENA) to improve
locality. In addition, explicit arenas often benefit from individually
tuned options, e.g. relaxed [decay
time](http://jemalloc.net/jemalloc.3.html#arena.i.dirty_decay_ms) if
frequent reuse is expected.
* [Extent hooks](http://jemalloc.net/jemalloc.3.html#arena.i.extent_hooks)
Extent hooks allow customization for managing underlying memory. One use
case for performance purpose is to utilize huge pages -- for example,
[HHVM](https://github.com/facebook/hhvm/blob/master/hphp/util/alloc.cpp)
uses explicit arenas with customized extent hooks to manage 1GB huge pages
for frequently accessed data, which reduces TLB misses significantly.
* [Explicit thread-to-arena
binding](http://jemalloc.net/jemalloc.3.html#thread.arena)
It is common for some threads in an application to have different memory
access / allocation patterns. Threads with heavy workloads often benefit
from explicit binding, e.g. binding very active threads to dedicated arenas
may reduce contention at the allocator level. | {
"source": "yandex/perforator",
"title": "contrib/libs/jemalloc/TUNING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/jemalloc/TUNING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 5934
} |
Thank you for helping us improve libarchive.
The following guidelines will help ensure your contribution gets prompt attention.
# Bugs and other Issues
If you encounter any problems with libarchive,
[please file an issue on our issue tracker](https://github.com/libarchive/libarchive/issues).
All bug reports should include the following information. You can copy the text below directly into the issue tracker to get started:
```
Basic Information
Version of libarchive:
How you obtained it: (build from source, pre-packaged binary, etc)
Operating system and version:
What compiler and/or IDE you are using (include version):
If you are using a pre-packaged binary
Exact package name and version:
Repository you obtained it from:
Description of the problem you are seeing:
What did you do?
What did you expect to happen?
What actually happened?
What log files or error messages were produced?
How the libarchive developers can reproduce your problem:
What other software was involved?
What other files were involved?
How can we obtain any of the above?
```
Depending on the specific type of issue, other information will be helpful:
## Test Failures
If you see any test failures, please include the information above and also:
* Names of the tests that failed.
* Look for the .log files in the /tmp/libarchive_test_*date-and-time* directories. (On Mac OS, look in $TMPDIR, which is different from /tmp.)
Please paste the .log files you will find there directly into your report.
## Problems using libarchive in a program
If you are trying to write a program using libarchive, please include the information above and also:
* It will help us if we can actually run the program. This is easiest if you can provide source to a short program that illustrates your problem.
* If you have a sufficiently short program that shows the problem, you can either paste it into the report or [put it into a gist](https://gist.github.com).
## Libarchive produced incorrect output
Please tell us what program you ran, any command-line arguments you provided, and details of the input files (`ls -l` output is helpful here). If the problem involved a command-line program, please copy the full terminal text into the report, including the command line and any error messages.
Please try to make the output file available to us. Unless it is very large, you can upload it into a fresh github repository and provide a link in your issue report.
## Libarchive could not read a particular input file
Note: If you can provide a **very small** input file that reproduces the problem, we can add that to our test suite. This will ensure that the bug does not reappear in the future.
A link to the relevant file is usually sufficient.
If you cannot provide the input file or a link to the file, please let us know if there is some other way to obtain it.
## Documentation improvements
We are always interested in improving the libarchive documentation. Please tell us about any errors you find, including:
* Typos or errors in the manpages provided with libarchive source.
* Mistakes in the [libarchive Wiki](https://github.com/libarchive/libarchive/wiki)
* Problems with the PDF or Wiki files that are automatically generated from the manpages.
# Code Submissions
We welcome all code submissions. But of course, some code submissions are easier for us to respond to than others. The best code submissions:
* Address a single issue. There have been many cases where a simple fix to an obvious problem did not get handled for months because the patch that was provided also included an unrelated change affecting an especially complex area of the code.
* Follow existing libarchive code style and conventions. Libarchive generally follows [BSD KNF](https://www.freebsd.org/cgi/man.cgi?query=style&sektion=9) for formatting code.
* Do not make unnecessary changes to existing whitespace, capitalization, or spelling.
* Include detailed instructions for reproducing the problem you're fixing. We do try to verify that a submission actually fixes a real problem. If we can't reproduce the problem, it will take us longer to evaluate the fix. For this reason, we encourage you to file an issue report first with details on reproducing the problem, then refer to that issue in your pull request.
* Include a test case. The libarchive Wiki has [detailed documentation for adding new test cases](https://github.com/libarchive/libarchive/wiki/LibarchiveAddingTest).
* Are provided via Github pull requests. We welcome patches in almost any format, but github's pull request management makes it significantly easier for us to evaluate and test changes. | {
"source": "yandex/perforator",
"title": "contrib/libs/libarchive/CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/libarchive/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 4703
} |
# Welcome to libarchive!
The libarchive project develops a portable, efficient C library that
can read and write streaming archives in a variety of formats. It
also includes implementations of the common `tar`, `cpio`, and `zcat`
command-line tools that use the libarchive library.
## Questions? Issues?
* https://www.libarchive.org is the home for ongoing
libarchive development, including documentation,
and links to the libarchive mailing lists.
* To report an issue, use the issue tracker at
https://github.com/libarchive/libarchive/issues
* To submit an enhancement to libarchive, please
submit a pull request via GitHub: https://github.com/libarchive/libarchive/pulls
## Contents of the Distribution
This distribution bundle includes the following major components:
* **libarchive**: a library for reading and writing streaming archives
* **tar**: the 'bsdtar' program is a full-featured 'tar' implementation built on libarchive
* **cpio**: the 'bsdcpio' program is a different interface to essentially the same functionality
* **cat**: the 'bsdcat' program is a simple replacement tool for zcat, bzcat, xzcat, and such
* **unzip**: the 'bsdunzip' program is a simple replacement tool for Info-ZIP's unzip
* **examples**: Some small example programs that you may find useful.
* **examples/minitar**: a compact sample demonstrating use of libarchive.
* **contrib**: Various items sent to me by third parties; please contact the authors with any questions.
The top-level directory contains the following information files:
* **NEWS** - highlights of recent changes
* **COPYING** - what you can do with this
* **INSTALL** - installation instructions
* **README** - this file
* **CMakeLists.txt** - input for "cmake" build tool, see INSTALL
* **configure** - configuration script, see INSTALL for details. If your copy of the source lacks a `configure` script, you can try to construct it by running the script in `build/autogen.sh` (or use `cmake`).
The following files in the top-level directory are used by the 'configure' script:
* `Makefile.am`, `aclocal.m4`, `configure.ac` - used to build this distribution, only needed by maintainers
* `Makefile.in`, `config.h.in` - templates used by configure script
## Documentation
In addition to the informational articles and documentation
in the online [libarchive Wiki](https://github.com/libarchive/libarchive/wiki),
the distribution also includes a number of manual pages:
* bsdtar.1 explains the use of the bsdtar program
* bsdcpio.1 explains the use of the bsdcpio program
* bsdcat.1 explains the use of the bsdcat program
* libarchive.3 gives an overview of the library as a whole
* archive_read.3, archive_write.3, archive_write_disk.3, and
archive_read_disk.3 provide detailed calling sequences for the read
and write APIs
* archive_entry.3 details the "struct archive_entry" utility class
* archive_internals.3 provides some insight into libarchive's
internal structure and operation.
* libarchive-formats.5 documents the file formats supported by the library
* cpio.5, mtree.5, and tar.5 provide detailed information about these
popular archive formats, including hard-to-find details about
modern cpio and tar variants.
The manual pages above are provided in the 'doc' directory in
a number of different formats.
You should also read the copious comments in `archive.h` and the
source code for the sample programs for more details. Please let us
know about any errors or omissions you find.
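To make the read API described above concrete, here is a minimal, self-contained sketch (not one of the bundled examples); it lists the entries of an archive while letting libarchive auto-detect both the compression filter and the format. The file name `example.tar.gz` and the 10240-byte block-size hint are arbitrary placeholders.
```c
#include <archive.h>
#include <archive_entry.h>
#include <stdio.h>

/* Minimal read loop: print entry names, auto-detecting filter and format. */
int main(void)
{
    struct archive *a = archive_read_new();
    struct archive_entry *entry;

    archive_read_support_filter_all(a);   /* gzip, bzip2, xz, zstd, ... */
    archive_read_support_format_all(a);   /* tar, cpio, zip, 7z, ... */

    /* "example.tar.gz" is a placeholder; 10240 is the block-size hint. */
    if (archive_read_open_filename(a, "example.tar.gz", 10240) != ARCHIVE_OK) {
        fprintf(stderr, "%s\n", archive_error_string(a));
        return 1;
    }
    while (archive_read_next_header(a, &entry) == ARCHIVE_OK) {
        printf("%s\n", archive_entry_pathname(entry));
        archive_read_data_skip(a);        /* headers only, skip entry bodies */
    }
    archive_read_free(a);
    return 0;
}
```
Compile it with something like `cc list.c -larchive`.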
## Supported Formats
Currently, the library automatically detects and reads the following formats:
* Old V7 tar archives
* POSIX ustar
* GNU tar format (including GNU long filenames, long link names, and sparse files)
* Solaris 9 extended tar format (including ACLs)
* POSIX pax interchange format
* POSIX octet-oriented cpio
* SVR4 ASCII cpio
* Binary cpio (big-endian or little-endian)
* PWB binary cpio
* ISO9660 CD-ROM images (with optional Rockridge or Joliet extensions)
* ZIP archives (with uncompressed or "deflate" compressed entries, including support for encrypted Zip archives)
* ZIPX archives (with support for bzip2, ppmd8, lzma and xz compressed entries)
* GNU and BSD 'ar' archives
* 'mtree' format
* 7-Zip archives (including archives that use zstandard compression)
* Microsoft CAB format
* LHA and LZH archives
* RAR and RAR 5.0 archives (with some limitations due to RAR's proprietary status)
* XAR archives
The library also detects and handles any of the following before evaluating the archive:
* uuencoded files
* files with RPM wrapper
* gzip compression
* bzip2 compression
* compress/LZW compression
* lzma, lzip, and xz compression
* lz4 compression
* lzop compression
* zstandard compression
The library can create archives in any of the following formats:
* POSIX ustar
* POSIX pax interchange format
* "restricted" pax format, which will create ustar archives except for
entries that require pax extensions (for long filenames, ACLs, etc).
* Old GNU tar format
* Old V7 tar format
* POSIX octet-oriented cpio
* SVR4 "newc" cpio
* Binary cpio (little-endian)
* PWB binary cpio
* shar archives
* ZIP archives (with uncompressed or "deflate" compressed entries)
* GNU and BSD 'ar' archives
* 'mtree' format
* ISO9660 format
* 7-Zip archives
* XAR archives
When creating archives, the result can be filtered with any of the following:
* uuencode
* gzip compression
* bzip2 compression
* compress/LZW compression
* lzma, lzip, and xz compression
* lz4 compression
* lzop compression
* zstandard compression
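Putting the write-side formats and filters above together, a small illustrative sketch follows; the output name `out.tar.gz`, the entry name, and its contents are made up for the example, and error checking is abbreviated.
```c
#include <archive.h>
#include <archive_entry.h>
#include <string.h>

/* Sketch: write one in-memory buffer as "hello.txt" into a gzip-compressed,
 * restricted-pax archive named out.tar.gz. */
int main(void)
{
    const char data[] = "hello, libarchive\n";
    struct archive *a = archive_write_new();
    struct archive_entry *e = archive_entry_new();

    archive_write_add_filter_gzip(a);               /* compression filter */
    archive_write_set_format_pax_restricted(a);     /* ustar, pax only when needed */
    archive_write_open_filename(a, "out.tar.gz");

    archive_entry_set_pathname(e, "hello.txt");
    archive_entry_set_size(e, strlen(data));
    archive_entry_set_filetype(e, AE_IFREG);
    archive_entry_set_perm(e, 0644);

    archive_write_header(a, e);
    archive_write_data(a, data, strlen(data));

    archive_entry_free(e);
    archive_write_close(a);
    archive_write_free(a);
    return 0;
}
```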
## Notes about the Library Design
The following notes address many of the most common
questions we are asked about libarchive:
* This is a heavily stream-oriented system. That means that
it is optimized to read or write the archive in a single
pass from beginning to end. For example, this allows
libarchive to process archives too large to store on disk
by processing them on-the-fly as they are read from or
written to a network or tape drive. This also makes
libarchive useful for tools that need to produce
archives on-the-fly (such as webservers that provide
archived contents of a user's account).
* In-place modification and random access to the contents
of an archive are not directly supported. For some formats,
this is not an issue: For example, tar.gz archives are not
designed for random access. In some other cases, libarchive
can re-open an archive and scan it from the beginning quickly
enough to provide the needed abilities even without true
random access. Of course, some applications do require true
random access; those applications should consider alternatives
to libarchive.
* The library is designed to be extended with new compression and
archive formats. The only requirement is that the format be
readable or writable as a stream and that each archive entry be
independent. There are articles on the libarchive Wiki explaining
how to extend libarchive.
* On read, compression and format are always detected automatically.
* The same API is used for all formats; it should be very
easy for software using libarchive to transparently handle
any of libarchive's archiving formats.
* Libarchive's automatic support for decompression can be used
without archiving by explicitly selecting the "raw" and "empty"
formats.
* I've attempted to minimize static link pollution. If you don't
explicitly invoke a particular feature (such as support for a
particular compression or format), it won't get pulled in to
statically-linked programs. In particular, if you don't explicitly
enable a particular compression or decompression support, you won't
need to link against the corresponding compression or decompression
libraries. This also reduces the size of statically-linked
binaries in environments where that matters.
* The library is generally _thread safe_ depending on the platform:
it does not define any global variables of its own. However, some
platforms do not provide fully thread-safe versions of key C library
functions. On those platforms, libarchive will use the non-thread-safe
functions. Patches to improve this are of great interest to us.
* The function `archive_write_disk_header()` is _not_ thread safe on
POSIX machines and could lead to a security issue resulting in
world-writable directories, so it must be mutexed by the calling code.
This is because it calls `umask(oldumask = umask(0))`, which sets the
umask for the whole process to 0 for a short time frame.
If another thread calls the same function in parallel, it might be
interrupted and leave the process running with umask=0 for the
remainder of its execution.
Implicitly created directories would then end up with 777
permissions and no sticky bit.
* In particular, libarchive's modules to read or write a directory
tree do use `chdir()` to optimize the directory traversals. This
can cause problems for programs that expect to do disk access from
multiple threads. Of course, those modules are completely
optional and you can use the rest of libarchive without them.
* The library is _not_ thread aware, however. It does no locking
or thread management of any kind. If you create a libarchive
object and need to access it from multiple threads, you will
need to provide your own locking.
* On read, the library accepts whatever blocks you hand it.
Your read callback is free to pass the library a byte at a time
or mmap the entire archive and give it to the library at once.
On write, the library always produces correctly-blocked output.
* The object-style approach allows you to have multiple archive streams
open at once. bsdtar uses this in its "@archive" extension.
* The archive itself is read/written using callback functions.
You can read an archive directly from an in-memory buffer or
write it to a socket, if you wish. There are some utility
functions to provide easy-to-use "open file," etc, capabilities.
* The read/write APIs are designed to allow individual entries
to be read or written to any data source: You can create
a block of data in memory and add it to a tar archive without
first writing a temporary file. You can also read an entry from
an archive and write the data directly to a socket. If you want
to read/write entries to disk, there are convenience functions to
make this especially easy.
* Note: The "pax interchange format" is a POSIX standard extended tar
format that should be used when the older _ustar_ format is not
appropriate. It has many advantages over other tar formats
(including the legacy GNU tar format) and is widely supported by
current tar implementations. | {
"source": "yandex/perforator",
"title": "contrib/libs/libarchive/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/libarchive/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 10849
} |
# Security Policy
If you have discovered a security vulnerability in this project, please report it
privately. **Do not disclose it as a public issue.** This gives us time to work with you
to fix the issue before public exposure, reducing the chance that the exploit will be
used before a patch is released.
You may submit the report in the following ways:
- send an email to [email protected]; and/or
- send us a [private vulnerability report](https://github.com/libarchive/libarchive/security/advisories/new)
Please provide the following information in your report:
- A description of the vulnerability and its impact
- How to reproduce the issue
This project is maintained by volunteers on a reasonable-effort basis. As such, we ask
that you give us 90 days to work on a fix before public exposure. | {
"source": "yandex/perforator",
"title": "contrib/libs/libarchive/SECURITY.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/libarchive/SECURITY.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 812
} |
<picture>
<source media="(prefers-color-scheme: dark)" srcset="assets/libbpf-logo-sideways-darkbg.png" width="40%">
<img src="assets/libbpf-logo-sideways.png" width="40%">
</picture>
libbpf
[](https://github.com/libbpf/libbpf/actions/workflows/test.yml)
[](https://scan.coverity.com/projects/libbpf)
[](https://github.com/libbpf/libbpf/actions?query=workflow%3ACodeQL+branch%3Amaster)
[](https://oss-fuzz-build-logs.storage.googleapis.com/index.html#libbpf)
[](https://libbpf.readthedocs.io/en/latest/)
======
**This is the official home of the libbpf library.**
*Please use this Github repository for building and packaging libbpf
and when using it in your projects through Git submodule.*
Libbpf *authoritative source code* is developed as part of [bpf-next Linux source
tree](https://kernel.googlesource.com/pub/scm/linux/kernel/git/bpf/bpf-next) under
`tools/lib/bpf` subdirectory and is periodically synced to Github. As such, all the
libbpf changes should be sent to [BPF mailing list](http://vger.kernel.org/vger-lists.html#bpf),
please don't open PRs here unless you are changing Github-specific parts of libbpf
(e.g., Github-specific Makefile).
Libbpf and general BPF usage questions
======================================
Libbpf documentation can be found [here](https://libbpf.readthedocs.io/en/latest/api.html).
It's an ongoing effort and has ways to go, but please take a look and consider contributing as well.
Please check out [libbpf-bootstrap](https://github.com/libbpf/libbpf-bootstrap)
and [the companion blog post](https://nakryiko.com/posts/libbpf-bootstrap/) for
the examples of building BPF applications with libbpf.
[libbpf-tools](https://github.com/iovisor/bcc/tree/master/libbpf-tools) are also
a good source of the real-world libbpf-based tracing tools.
See also ["BPF CO-RE reference guide"](https://nakryiko.com/posts/bpf-core-reference-guide/)
for the coverage of practical aspects of building BPF CO-RE applications and
["BPF CO-RE"](https://nakryiko.com/posts/bpf-portability-and-co-re/) for
general introduction into BPF portability issues and BPF CO-RE origins.
All general BPF questions, including kernel functionality, libbpf APIs and
their application, should be sent to [email protected] mailing list. You can
subscribe to it [here](http://vger.kernel.org/vger-lists.html#bpf) and search
its archive [here](https://lore.kernel.org/bpf/). Please search the archive
before asking new questions. It very well might be that this was already
addressed or answered before.
[email protected] is monitored by many more people and they will happily try
to help you with whatever issue you have. This repository's PRs and issues
should be opened only for dealing with issues pertaining to specific way this
libbpf mirror repo is set up and organized.
Building libbpf
===============
libelf is an internal dependency of libbpf and thus it is required to link
against and must be installed on the system for applications to work.
pkg-config is used by default to find libelf, and the program called can be
overridden with `PKG_CONFIG`.
If using `pkg-config` at build time is not desired, it can be disabled by
setting `NO_PKG_CONFIG=1` when calling make.
To build both static libbpf.a and shared libbpf.so:
```bash
$ cd src
$ make
```
To build only the static libbpf.a library in the directory
build/ and install it together with libbpf headers in a staging directory
root/:
```bash
$ cd src
$ mkdir build root
$ BUILD_STATIC_ONLY=y OBJDIR=build DESTDIR=root make install
```
To build both static libbpf.a and shared libbpf.so against a custom libelf
dependency installed in /build/root/ and install them together with libbpf
headers in a build directory /build/root/:
```bash
$ cd src
$ PKG_CONFIG_PATH=/build/root/lib64/pkgconfig DESTDIR=/build/root make install
```
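As a quick sanity check of the installed library, a user-space loader might look like the following sketch; the object file `minimal.bpf.o` and the program name `handle_exec` are placeholders for whatever your BPF application provides, and error handling is abbreviated.
```c
#include <stdio.h>
#include <bpf/libbpf.h>

/* Sketch of a user-space loader linked against the library built above.
 * "minimal.bpf.o" and "handle_exec" are placeholder names. */
int main(void)
{
    struct bpf_object *obj;
    struct bpf_program *prog;
    struct bpf_link *link;

    obj = bpf_object__open_file("minimal.bpf.o", NULL);
    if (!obj || bpf_object__load(obj)) {
        fprintf(stderr, "failed to open/load BPF object\n");
        return 1;
    }
    prog = bpf_object__find_program_by_name(obj, "handle_exec");
    link = prog ? bpf_program__attach(prog) : NULL;  /* auto-attach by section type */
    if (!link) {
        fprintf(stderr, "failed to attach program\n");
        bpf_object__close(obj);
        return 1;
    }
    /* ... program is now running; pause, poll maps, etc. ... */
    bpf_link__destroy(link);
    bpf_object__close(obj);
    return 0;
}
```
Link it with something like `cc loader.c -lbpf -lelf -lz`, adding `-I`/`-L` paths if you installed into a staging directory as shown above.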
BPF CO-RE (Compile Once – Run Everywhere)
=========================================
Libbpf supports building BPF CO-RE-enabled applications, which, in contrast to
[BCC](https://github.com/iovisor/bcc/), do not require the Clang/LLVM runtime
to be deployed to target servers and do not rely on kernel-devel headers
being available.
They do rely on the kernel being built with [BTF type
information](https://www.kernel.org/doc/html/latest/bpf/btf.html), though.
Some major Linux distributions come with kernel BTF already built in:
- Fedora 31+
- RHEL 8.2+
- OpenSUSE Tumbleweed (in the next release, as of 2020-06-04)
- Arch Linux (from kernel 5.7.1.arch1-1)
- Manjaro (from kernel 5.4 if compiled after 2021-06-18)
- Ubuntu 20.10
- Debian 11 (amd64/arm64)
If your kernel doesn't come with BTF built-in, you'll need to build a custom
kernel. You'll need:
- `pahole` 1.16+ tool (part of `dwarves` package), which performs DWARF to
BTF conversion;
- kernel built with `CONFIG_DEBUG_INFO_BTF=y` option;
- you can check if your kernel has BTF built-in by looking for
`/sys/kernel/btf/vmlinux` file:
```shell
$ ls -la /sys/kernel/btf/vmlinux
-r--r--r--. 1 root root 3541561 Jun 2 18:16 /sys/kernel/btf/vmlinux
```
To develop and build BPF programs, you'll need Clang/LLVM 10+. The following
distributions have Clang/LLVM 10+ packaged by default:
- Fedora 32+
- Ubuntu 20.04+
- Arch Linux
- Ubuntu 20.10 (LLVM 11)
- Debian 11 (LLVM 11)
- Alpine 3.13+
Otherwise, please make sure to update it on your system.
The following resources are useful to understand what BPF CO-RE is and how to
use it:
- [BPF CO-RE reference guide](https://nakryiko.com/posts/bpf-core-reference-guide/)
- [BPF Portability and CO-RE](https://nakryiko.com/posts/bpf-portability-and-co-re/)
- [HOWTO: BCC to libbpf conversion](https://nakryiko.com/posts/bcc-to-libbpf-howto-guide/)
- [libbpf-tools in BCC repo](https://github.com/iovisor/bcc/tree/master/libbpf-tools)
contain lots of real-world tools converted from BCC to BPF CO-RE. Consider
converting some more to both contribute to the BPF community and gain some
more experience with it.
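To make the preceding concrete, a minimal CO-RE-style program (kernel side) might look like the sketch below; it is modeled on the bootstrap examples, assumes a generated `vmlinux.h`, and the probe target `do_unlinkat` and the file name are purely illustrative.
```c
// handle_unlink.bpf.c -- illustrative sketch, compiled with clang -target bpf
#include "vmlinux.h"   /* e.g. generated via: bpftool btf dump file /sys/kernel/btf/vmlinux format c */
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

char LICENSE[] SEC("license") = "GPL";

SEC("kprobe/do_unlinkat")
int handle_unlink(struct pt_regs *ctx)
{
    struct task_struct *task = (struct task_struct *)bpf_get_current_task();

    /* BPF_CORE_READ relocates field offsets at load time (CO-RE),
     * so the same object file works across kernel versions. */
    pid_t parent_tgid = BPF_CORE_READ(task, real_parent, tgid);
    bpf_printk("unlinkat called, parent tgid=%d", parent_tgid);
    return 0;
}
```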
Distributions
=============
Distributions packaging libbpf from this mirror:
- [Fedora](https://src.fedoraproject.org/rpms/libbpf)
- [Gentoo](https://packages.gentoo.org/packages/dev-libs/libbpf)
- [Debian](https://packages.debian.org/source/sid/libbpf)
- [Arch](https://archlinux.org/packages/core/x86_64/libbpf/)
- [Ubuntu](https://packages.ubuntu.com/source/impish/libbpf)
- [Alpine](https://pkgs.alpinelinux.org/packages?name=libbpf)
Benefits of packaging from the mirror over packaging from kernel sources:
- Consistent versioning across distributions.
- No ties to any specific kernel, transparent handling of older kernels.
Libbpf is designed to be kernel-agnostic and work across multitude of
kernel versions. It has built-in mechanisms to gracefully handle older
kernels, that are missing some of the features, by working around or
gracefully degrading functionality. Thus libbpf is not tied to a specific
kernel version and can/should be packaged and versioned independently.
- Continuous integration testing via
[GitHub Actions](https://github.com/libbpf/libbpf/actions).
- Static code analysis via [LGTM](https://lgtm.com/projects/g/libbpf/libbpf)
and [Coverity](https://scan.coverity.com/projects/libbpf).
Package dependencies of libbpf, package names may vary across distros:
- zlib
- libelf
[](https://repology.org/project/libbpf/versions)
bpf-next to Github sync
=======================
All the gory details of syncing can be found in `scripts/sync-kernel.sh`
script. See [SYNC.md](SYNC.md) for instruction.
Some header files in this repo (`include/linux/*.h`) are reduced versions of
their counterpart files at
[bpf-next](https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/)'s
`tools/include/linux/*.h` to make compilation successful.
License
=======
This work is dual-licensed under BSD 2-clause license and GNU LGPL v2.1 license.
You can choose between one of them if you use this work.
`SPDX-License-Identifier: BSD-2-Clause OR LGPL-2.1` | {
"source": "yandex/perforator",
"title": "contrib/libs/libbpf/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/libbpf/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 8508
} |
<picture>
<source media="(prefers-color-scheme: dark)" srcset="assets/libbpf-logo-sideways-darkbg.png" width="40%">
<img src="assets/libbpf-logo-sideways.png" width="40%">
</picture>
Libbpf sync
===========
Libbpf *authoritative source code* is developed as part of [bpf-next Linux source
tree](https://kernel.googlesource.com/pub/scm/linux/kernel/git/bpf/bpf-next) under
`tools/lib/bpf` subdirectory and is periodically synced to Github.
Most of the mundane mechanical things like bpf and bpf-next tree merge, Git
history transformation, cherry-picking relevant commits, re-generating
auto-generated headers, etc. are taken care by
[sync-kernel.sh script](https://github.com/libbpf/libbpf/blob/master/scripts/sync-kernel.sh).
But occasionally a human needs to do a few extra things to make everything work
nicely.
This document goes over the process of syncing libbpf sources from Linux repo
to this Github repository. Feel free to contribute fixes and additions if you
run into new problems not outlined here.
Setup expectations
------------------
The sync script has particular expectations of the upstream Linux repo setup. It
expects that the current HEAD of that repo points to bpf-next's master branch and
that there is a separate local branch pointing to bpf tree's master branch.
This is important, as the script will automatically merge their histories for
the purpose of libbpf sync.
Below, we assume that the Linux repo is located at `~/linux`, its current head is
at the latest `bpf-next/master`, and libbpf's Github repo is located at
`~/libbpf`, checked out to latest commit on `master` branch. It doesn't matter
from where to run `sync-kernel.sh` script, but we'll be running it from inside
`~/libbpf`.
```
$ cd ~/linux && git remote -v | grep -E '^(bpf|bpf-next)'
bpf https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git (fetch)
bpf     ssh://[email protected]/pub/scm/linux/kernel/git/bpf/bpf.git (push)
bpf-next        https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git (fetch)
bpf-next        ssh://[email protected]/pub/scm/linux/kernel/git/bpf/bpf-next.git (push)
$ git branch -vv | grep -E '^? (master|bpf-master)'
* bpf-master 2d311f480b52 [bpf/master] riscv, bpf: Fix patch_text implicit declaration
master c8ee37bde402 [bpf-next/master] libbpf: Fix bpf_xdp_query() in old kernels
$ git checkout bpf-master && git pull && git checkout master && git pull
...
$ git log --oneline -n1
c8ee37bde402 (HEAD -> master, bpf-next/master) libbpf: Fix bpf_xdp_query() in old kernels
$ cd ~/libbpf && git checkout master && git pull
Your branch is up to date with 'libbpf/master'.
Already up to date.
```
Running setup script
--------------------
First step is to always run `sync-kernel.sh` script. It expects three arguments:
```
$ scripts/sync-kernel.sh <libbpf-repo> <kernel-repo> <bpf-branch>
```
Note that we'll store the script's entire output in `/tmp/libbpf-sync.txt` and
put it into the PR summary later on. **Please store the script's output and include it
in the PR summary for others to check for anything unexpected or suspicious.**
```
$ scripts/sync-kernel.sh ~/libbpf ~/linux bpf-master | tee /tmp/libbpf-sync.txt
Dumping existing libbpf commit signatures...
WORKDIR: /home/andriin/libbpf
LINUX REPO: /home/andriin/linux
LIBBPF REPO: /home/andriin/libbpf
...
```
Most of the time this will go very uneventfully. One expected case where the sync
script might require user intervention is if the `bpf` tree has some libbpf fixes,
which is nowadays not a very frequent occurrence. But if that happens, the script
will show you a diff between the expected state as of the latest bpf-next and the
synced Github repo state, and will ask if these changes look good. Please use your
best judgement to verify that the differences are indeed from expected `bpf` tree
fixes. E.g., it might look like below:
```
Comparing list of files...
Comparing file contents...
--- /home/andriin/linux/include/uapi/linux/netdev.h 2023-02-27 16:54:42.270583372 -0800
+++ /home/andriin/libbpf/include/uapi/linux/netdev.h 2023-02-27 16:54:34.615530796 -0800
@@ -19,7 +19,7 @@
* @NETDEV_XDP_ACT_XSK_ZEROCOPY: This feature informs if netdev supports AF_XDP
* in zero copy mode.
* @NETDEV_XDP_ACT_HW_OFFLOAD: This feature informs if netdev supports XDP hw
- * oflloading.
+ * offloading.
* @NETDEV_XDP_ACT_RX_SG: This feature informs if netdev implements non-linear
* XDP buffer support in the driver napi callback.
* @NETDEV_XDP_ACT_NDO_XMIT_SG: This feature informs if netdev implements
/home/andriin/linux/include/uapi/linux/netdev.h and /home/andriin/libbpf/include/uapi/linux/netdev.h are different!
Unfortunately, there are some inconsistencies, please double check.
Does everything look good? [y/N]:
```
If it looks sensible and expected, type `y` and script will proceed.
If sync is successful, your `~/linux` repo will be left in original state on
the original HEAD commit. `~/libbpf` repo will now be on a new branch, named
`libbpf-sync-<timestamp>` (e.g., `libbpf-sync-2023-02-28T00-53-40.072Z`).
Push this branch into your fork of `libbpf/libbpf` Github repo and create a PR:
```
$ git push --set-upstream origin libbpf-sync-2023-02-28T00-53-40.072Z
Enumerating objects: 130, done.
Counting objects: 100% (115/115), done.
Delta compression using up to 80 threads
Compressing objects: 100% (28/28), done.
Writing objects: 100% (32/32), 5.57 KiB | 1.86 MiB/s, done.
Total 32 (delta 21), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (21/21), completed with 9 local objects.
remote:
remote: Create a pull request for 'libbpf-sync-2023-02-28T00-53-40.072Z' on GitHub by visiting:
remote: https://github.com/anakryiko/libbpf/pull/new/libbpf-sync-2023-02-28T00-53-40.072Z
remote:
To github.com:anakryiko/libbpf.git
* [new branch] libbpf-sync-2023-02-28T00-53-40.072Z -> libbpf-sync-2023-02-28T00-53-40.072Z
Branch 'libbpf-sync-2023-02-28T00-53-40.072Z' set up to track remote branch 'libbpf-sync-2023-02-28T00-53-40.072Z' from 'origin'.
```
**Please, adjust PR name to have a properly looking timestamp. Libbpf
maintainers will be very thankful for that!**
By default Github will turn above branch name into PR with subject "Libbpf sync
2023 02 28 t00 53 40.072 z". Please fix this into a proper timestamp, e.g.:
"Libbpf sync 2023-02-28T00:53:40.072Z". Thank you!
**Please don't forget to paste contents of /tmp/libbpf-sync.txt into PR
summary!**
Once PR is created, libbpf CI will run a bunch of tests to check that
everything is good. In simple cases that would be all you'd need to do. In more
complicated cases some extra adjustments might be necessary.
**Please, keep naming and style consistent.** Prefix CI-related fixes with `ci: `
prefix. If you had to modify sync script, prefix it with `sync: `. Also make
sure that each such commit has `Signed-off-by: Your Full Name <[email protected]>`,
just like you'd do that for Linux upstream patch. Libbpf closely follows kernel
conventions and styling, so please help maintaining that.
Including new sources
---------------------
If entirely new source files (typically `*.c`) were added to the library in the
kernel repository, it may be necessary to add these to the build system
manually (you may notice linker errors otherwise), because the script cannot
handle such changes automatically. To that end, edit `src/Makefile` as
necessary. Commit
[c2495832ced4](https://github.com/libbpf/libbpf/commit/c2495832ced4239bcd376b9954db38a6addd89ca)
is an example of how to go about doing that.
Similarly, if new public API header files were added, the `Makefile` will need
to be adjusted as well.
Updating allow/deny lists
-------------------------
Libbpf CI intentionally runs a subset of the latest BPF selftests on old kernels
(4.9 and 5.5, currently). From time to time, some tests that previously ran
successfully on old kernels stop passing, typically due to
reliance on some freshly added kernel feature. It might look something like this in [CI logs](https://github.com/libbpf/libbpf/actions/runs/4206303272/jobs/7299609578#step:4:2733):
```
All error logs:
serial_test_xdp_info:FAIL:get_xdp_none errno=2
#283 xdp_info:FAIL
Summary: 49/166 PASSED, 5 SKIPPED, 1 FAILED
```
In such case we can either work with upstream to fix test to be compatible with
old kernels, or we'll have to add a test into a denylist (or remove it from
allowlist, like was [done](https://github.com/libbpf/libbpf/commit/ea284299025bf85b85b4923191de6463cd43ccd6)
for the case above).
```
$ find . -name '*LIST*'
./ci/vmtest/configs/ALLOWLIST-4.9.0
./ci/vmtest/configs/DENYLIST-5.5.0
./ci/vmtest/configs/DENYLIST-latest.s390x
./ci/vmtest/configs/DENYLIST-latest
./ci/vmtest/configs/ALLOWLIST-5.5.0
```
Please determine which tests need to be added/removed from which list. And then
add that as a separate commit. **Please keep using the same branch name, so
that the same PR can be updated.** There is no need to open new PRs for each
such fix.
Regenerating vmlinux.h header
-----------------------------
To compile latest BPF selftests against old kernels, we check in pre-generated
[vmlinux.h](https://github.com/libbpf/libbpf/blob/master/.github/actions/build-selftests/vmlinux.h)
header file, located at `.github/actions/build-selftests/vmlinux.h`, which
contains type definitions from the latest upstream kernel. When, after a libbpf sync,
upstream BPF selftests require new kernel types, we need to regenerate
`vmlinux.h` and check it in as well.
This will look something like this in [CI logs](https://github.com/libbpf/libbpf/actions/runs/4198939244/jobs/7283214243#step:4:1903):
```
In file included from progs/test_spin_lock_fail.c:5:
/home/runner/work/libbpf/libbpf/.kernel/tools/testing/selftests/bpf/bpf_experimental.h:73:53: error: declaration of 'struct bpf_rb_root' will not be visible outside of this function [-Werror,-Wvisibility]
extern struct bpf_rb_node *bpf_rbtree_remove(struct bpf_rb_root *root,
^
/home/runner/work/libbpf/libbpf/.kernel/tools/testing/selftests/bpf/bpf_experimental.h:81:35: error: declaration of 'struct bpf_rb_root' will not be visible outside of this function [-Werror,-Wvisibility]
extern void bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
^
/home/runner/work/libbpf/libbpf/.kernel/tools/testing/selftests/bpf/bpf_experimental.h:90:52: error: declaration of 'struct bpf_rb_root' will not be visible outside of this function [-Werror,-Wvisibility]
extern struct bpf_rb_node *bpf_rbtree_first(struct bpf_rb_root *root) __ksym;
^
3 errors generated.
make: *** [Makefile:572: /home/runner/work/libbpf/libbpf/.kernel/tools/testing/selftests/bpf/test_spin_lock_fail.bpf.o] Error 1
make: *** Waiting for unfinished jobs....
Error: Process completed with exit code 2.
```
You'll need to build latest upstream kernel from `bpf-next` tree, using BPF
selftest configs. Concat arch-agnostic and arch-specific configs, build kernel,
then use bpftool to dump `vmlinux.h`:
```
$ cd ~/linux
$ cat tools/testing/selftests/bpf/config \
tools/testing/selftests/bpf/config.x86_64 > .config
$ make -j$(nproc) olddefconfig all
...
$ bpftool btf dump file ~/linux/vmlinux format c > ~/libbpf/.github/actions/build-selftests/vmlinux.h
$ cd ~/libbpf && git add . && git commit -s
```
Check in generated `vmlinux.h`, don't forget to use `ci: ` commit prefix, add
it on top of sync commits. Push to Github and let libbpf CI do the checking for
you. See [this commit](https://github.com/libbpf/libbpf/commit/34212c94a64df8eeb1dd5d064630a65e1dfd4c20)
for reference.
Troubleshooting
---------------
If something goes wrong and the sync script exits early or is terminated early by
the user, you might end up with the `~/linux` repo on a temporary sync-related branch.
Don't worry, though: the sync script never destroys repo state, it follows
a "copy-on-write" philosophy and creates new branches where necessary. So it's
very easy to restore the previous state if anything goes wrong and
start fresh:
```
$ git branch | grep -E 'libbpf-.*Z'
libbpf-baseline-2023-02-28T00-43-35.146Z
libbpf-bpf-baseline-2023-02-28T00-43-35.146Z
libbpf-bpf-tip-2023-02-28T00-43-35.146Z
libbpf-squash-base-2023-02-28T00-43-35.146Z
* libbpf-squash-tip-2023-02-28T00-43-35.146Z
$ git cherry-pick --abort
$ git checkout master && git branch | grep -E 'libbpf-.*Z' | xargs git br -D
Switched to branch 'master'
Your branch is up to date with 'bpf-next/master'.
Deleted branch libbpf-baseline-2023-02-28T00-43-35.146Z (was 951bce29c898).
Deleted branch libbpf-bpf-baseline-2023-02-28T00-43-35.146Z (was 3a70e0d4c9d7).
Deleted branch libbpf-bpf-tip-2023-02-28T00-43-35.146Z (was 2d311f480b52).
Deleted branch libbpf-squash-base-2023-02-28T00-43-35.146Z (was 957f109ef883).
Deleted branch libbpf-squash-tip-2023-02-28T00-43-35.146Z (was be66130d2339).
Deleted branch libbpf-tip-2023-02-28T00-43-35.146Z (was 2d311f480b52).
```
You might sometimes need to do the same for your `~/libbpf` repo, depending on
the stage at which the sync script was terminated. | {
"source": "yandex/perforator",
"title": "contrib/libs/libbpf/SYNC.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/libbpf/SYNC.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 13231
} |
This library implements a compatibility layer between various libc implementations.
The rationale for the library implementation is described in https://st.yandex-team.ru/IGNIETFERRO-1439.
The code is taken from multiple sources, thus both LICENSE() and VERSION() tags are not very representative.
During development one can make use of the following mapping of `OS_SDK` into glibc version.
| Ubuntu | glibc |
| ------ | ----- |
| 24.04 | 2.39 |
| 22.04 | 2.35 |
| 20.04 | 2.30 |
| 18.04 | 2.27 |
| 16.04 | 2.23 |
| 14.04 | 2.18 |
| 12.04 | 2.15 |
| 10.04 | 2.11 |
Use the following commands to update the table above:
1. `ya make util -DOS_SDK=ubuntu-xx -G | grep OS_SDK_ROOT | head -n 1`
2. `cd ~/.ya/tools/v4/$RESOURCE_ID`
3. `readelf -V $(find . -name 'libc.so.6')`
4. Take the latest version from `.gnu.version_d` section prior to `GLIBC_PRIVATE` | {
"source": "yandex/perforator",
"title": "contrib/libs/libc_compat/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/libc_compat/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 861
} |
LZ4 - Library Files
================================
The `/lib` directory contains many files, but depending on the project's objectives,
not all of them are required.
Limited systems may want to reduce the number of source files to include
as a way to reduce binary size and dependencies.
Capabilities are added at the "level" granularity, detailed below.
#### Level 1 : Minimal LZ4 build
The minimum required is **`lz4.c`** and **`lz4.h`**,
which provides the fast compression and decompression algorithms.
They generate and decode data using the [LZ4 block format].
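As an illustration (not part of the distribution), a round-trip using only this minimal pair of files might look like the following sketch; the input string and buffer sizes are arbitrary.
```c
#include <stdio.h>
#include <string.h>
#include "lz4.h"

/* Round-trip sketch using only lz4.c / lz4.h (Level 1). */
int main(void)
{
    const char src[] = "lz4 lz4 lz4 lz4 lz4 lz4 lz4 lz4 lz4 lz4";
    const int src_size = (int)(strlen(src) + 1);

    char compressed[128];        /* comfortably above LZ4_compressBound(src_size) here */
    char restored[sizeof(src)];

    int c_size = LZ4_compress_default(src, compressed, src_size, (int)sizeof(compressed));
    if (c_size <= 0) return 1;

    int d_size = LZ4_decompress_safe(compressed, restored, c_size, (int)sizeof(restored));
    if (d_size != src_size || memcmp(src, restored, (size_t)src_size) != 0) return 1;

    printf("ok: %d -> %d -> %d bytes\n", src_size, c_size, d_size);
    return 0;
}
```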
#### Level 2 : High Compression variant
For more compression ratio at the cost of compression speed,
the High Compression variant called **lz4hc** is available.
Add files **`lz4hc.c`** and **`lz4hc.h`**.
This variant also compresses data using the [LZ4 block format],
and depends on regular `lib/lz4.*` source files.
#### Level 3 : Frame support, for interoperability
In order to produce compressed data compatible with `lz4` command line utility,
it's necessary to use the [official interoperable frame format].
This format is generated and decoded automatically by the **lz4frame** library.
Its public API is described in `lib/lz4frame.h`.
In order to work properly, lz4frame needs all other modules present in `/lib`,
including lz4 and lz4hc, and also **xxhash**.
So it's necessary to also include `xxhash.c` and `xxhash.h`.
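For illustration, a one-shot frame compression sketch using this API could look as follows (buffer handling is kept deliberately simple, and the input string is arbitrary):
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "lz4frame.h"

/* Sketch: produce a frame that the `lz4` CLI can decompress (Level 3). */
int main(void)
{
    const char src[] = "interoperable frame example, compress me please";
    size_t src_size = strlen(src);

    size_t bound = LZ4F_compressFrameBound(src_size, NULL);  /* NULL = default prefs */
    void *dst = malloc(bound);
    if (!dst) return 1;

    size_t written = LZ4F_compressFrame(dst, bound, src, src_size, NULL);
    if (LZ4F_isError(written)) {
        fprintf(stderr, "frame compression failed: %s\n", LZ4F_getErrorName(written));
        free(dst);
        return 1;
    }
    printf("frame size: %zu bytes\n", written);
    /* `dst` now holds a complete LZ4 frame, e.g. writable to a .lz4 file. */
    free(dst);
    return 0;
}
```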
#### Level 4 : File compression operations
As a helper around file operations,
the library has been recently extended with `lz4file.c` and `lz4file.h`
(still considered experimental at the time of this writing).
These helpers allow opening, reading, writing, and closing files
using transparent LZ4 compression / decompression.
As a consequence, using `lz4file` adds a dependency on `<stdio.h>`.
`lz4file` relies on `lz4frame` in order to produce compressed data
conformant to the [LZ4 Frame format] specification.
Consequently, to enable this capability,
it's necessary to include all `*.c` and `*.h` files from `lib/` directory.
#### Advanced / Experimental API
Definitions which are not guaranteed to remain stable in future versions,
are protected behind macros, such as `LZ4_STATIC_LINKING_ONLY`.
As the name suggests, these definitions should only be invoked
in the context of static linking ***only***.
Otherwise, dependent application may fail on API or ABI break in the future.
The associated symbols are also not exposed by the dynamic library by default.
Should they be nonetheless needed, it's possible to force their publication
by using build macros `LZ4_PUBLISH_STATIC_FUNCTIONS`
and `LZ4F_PUBLISH_STATIC_FUNCTIONS`.
#### Build macros
The following build macro can be selected to adjust source code behavior at compilation time :
- `LZ4_FAST_DEC_LOOP` : this triggers a speed optimized decompression loop, more powerful on modern cpus.
This loop works great on `x86`, `x64` and `aarch64` cpus, and is automatically enabled for them.
It's also possible to enable or disable it manually, by passing `LZ4_FAST_DEC_LOOP=1` or `0` to the preprocessor.
For example, with `gcc` : `-DLZ4_FAST_DEC_LOOP=1`,
and with `make` : `CPPFLAGS+=-DLZ4_FAST_DEC_LOOP=1 make lz4`.
- `LZ4_DISTANCE_MAX` : control the maximum offset that the compressor will allow.
Set to 65535 by default, which is the maximum value supported by lz4 format.
Reducing maximum distance will reduce opportunities for LZ4 to find matches,
hence will produce a worse compression ratio.
Setting a smaller max distance could allow compatibility with specific decoders with limited memory budget.
This build macro only influences the compressed output of the compressor.
- `LZ4_DISABLE_DEPRECATE_WARNINGS` : invoking a deprecated function will make the compiler generate a warning.
This is meant to invite users to update their source code.
Should this be a problem, it's generally possible to make the compiler ignore these warnings,
for example with `-Wno-deprecated-declarations` on `gcc`,
or `_CRT_SECURE_NO_WARNINGS` for Visual Studio.
This build macro offers another project-specific method
by defining `LZ4_DISABLE_DEPRECATE_WARNINGS` before including the LZ4 header files.
- `LZ4_FORCE_SW_BITCOUNT` : by default, the compression algorithm tries to determine lengths
by using bitcount instructions, generally implemented as fast single instructions in many cpus.
In case the target cpu doesn't support it, the compiler intrinsic doesn't work, or it features bad performance,
it's possible to use an optimized software path instead.
This is achieved by setting this build macro.
In most cases, it's not expected to be necessary,
but it can be legitimately considered for less common platforms.
- `LZ4_ALIGN_TEST` : alignment test ensures that the memory area
passed as argument to become a compression state is suitably aligned.
This test can be disabled if it proves flaky, by setting this value to 0.
- `LZ4_USER_MEMORY_FUNCTIONS` : replace calls to `<stdlib.h>`'s `malloc()`, `calloc()` and `free()`
by user-defined functions, which must be named `LZ4_malloc()`, `LZ4_calloc()` and `LZ4_free()`.
User functions must be available at link time (a minimal sketch follows this list).
- `LZ4_STATIC_LINKING_ONLY_DISABLE_MEMORY_ALLOCATION` :
Remove support of dynamic memory allocation.
For more details, see description of this macro in `lib/lz4.c`.
- `LZ4_STATIC_LINKING_ONLY_ENDIANNESS_INDEPENDENT_OUTPUT` : experimental feature aimed at producing the same
compressed output on platforms of different endianness (i.e. little-endian and big-endian).
Output on little-endian platforms shall remain unchanged, while big-endian platforms will start producing
the same output as little-endian ones. This isn't expected to impact backward- and forward-compatibility
in any way.
- `LZ4_FREESTANDING` : by setting this build macro to 1,
LZ4/HC removes dependencies on the C standard library,
including allocation functions and `memmove()`, `memcpy()`, and `memset()`.
This build macro is designed to help use LZ4/HC in restricted environments
(embedded, bootloader, etc).
For more details, see description of this macro in `lib/lz4.h`.
- `LZ4_HEAPMODE` : Select how stateless compression functions like `LZ4_compress_default()`
allocate memory for their hash table,
in memory stack (0:default, fastest), or in memory heap (1:requires malloc()).
- `LZ4HC_HEAPMODE` : Select how stateless HC compression functions like `LZ4_compress_HC()`
allocate memory for their workspace:
in stack (0), or in heap (1:default).
Since workspace is rather large, stack can be inconvenient, hence heap mode is recommended.
- `LZ4F_HEAPMODE` : selects how `LZ4F_compressFrame()` allocates the compression state,
either on stack (default, value 0) or using heap memory (value 1).
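As a sketch of the `LZ4_USER_MEMORY_FUNCTIONS` hook mentioned above, the user-provided functions could simply forward to the C runtime; the exact prototypes should be checked against `lz4.c` for the version in use.
```c
/* Compiled together with lz4.c, e.g.:
 *   cc -DLZ4_USER_MEMORY_FUNCTIONS -c lz4.c my_alloc.c
 * Here the custom hooks simply forward to the C runtime, but they could
 * just as well use an arena or an instrumentation layer. */
#include <stdlib.h>

void *LZ4_malloc(size_t s)           { return malloc(s); }
void *LZ4_calloc(size_t n, size_t s) { return calloc(n, s); }
void  LZ4_free(void *p)              { free(p); }
```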
#### Makefile variables
The following `Makefile` variables can be selected to alter the profile of produced binaries :
- `BUILD_SHARED` : generate `liblz4` dynamic library (enabled by default)
- `BUILD_STATIC` : generate `liblz4` static library (enabled by default)
#### Amalgamation
lz4 source code can be amalgamated into a single file.
One can combine all source code into `lz4_all.c` by using following command:
```
cat lz4.c lz4hc.c lz4frame.c > lz4_all.c
```
(`cat` file order is important) then compile `lz4_all.c`.
All `*.h` files present in `/lib` remain necessary to compile `lz4_all.c`.
#### Windows : using MinGW+MSYS to create DLL
DLL can be created using MinGW+MSYS with the `make liblz4` command.
This command creates `dll\liblz4.dll` and the import library `dll\liblz4.lib`.
To override the `dlltool` command when cross-compiling on Linux, just set the `DLLTOOL` variable. Example of cross compilation on Linux with mingw-w64 64 bits:
```
make BUILD_STATIC=no CC=x86_64-w64-mingw32-gcc DLLTOOL=x86_64-w64-mingw32-dlltool OS=Windows_NT
```
The import library is only required with Visual C++.
The header files `lz4.h`, `lz4hc.h`, `lz4frame.h` and the dynamic library
`dll\liblz4.dll` are required to compile a project using gcc/MinGW.
The dynamic library has to be added to linking options.
It means that if a project that uses LZ4 consists of a single `test-dll.c`
file it should be linked with `dll\liblz4.dll`. For example:
```
$(CC) $(CFLAGS) -Iinclude/ test-dll.c -o test-dll dll\liblz4.dll
```
The compiled executable will require LZ4 DLL which is available at `dll\liblz4.dll`.
#### Miscellaneous
Other files present in the directory are not source code. They are :
- `LICENSE` : contains the BSD license text
- `Makefile` : `make` script to compile and install lz4 library (static and dynamic)
- `liblz4.pc.in` : for `pkg-config` (used in `make install`)
- `README.md` : this file
[official interoperable frame format]: ../doc/lz4_Frame_format.md
[LZ4 Frame format]: ../doc/lz4_Frame_format.md
[LZ4 block format]: ../doc/lz4_Block_format.md
#### License
All source material within __lib__ directory are BSD 2-Clause licensed.
See [LICENSE](LICENSE) for details.
The license is also reminded at the top of each source file. | {
"source": "yandex/perforator",
"title": "contrib/libs/lz4/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/lz4/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 9062
} |
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.9 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet) and [Xamarin](https://github.com/xamarin).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/security.md/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/security.md/msrc/create-report).
If you prefer to submit without logging in, send email to [[email protected]](mailto:[email protected]). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/security.md/msrc/pgp).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/security.md/msrc/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/security.md/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK --> | {
"source": "yandex/perforator",
"title": "contrib/libs/mimalloc/SECURITY.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/mimalloc/SECURITY.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2655
} |
<img align="left" width="100" height="100" src="doc/mimalloc-logo.png"/>
[<img align="right" src="https://dev.azure.com/Daan0324/mimalloc/_apis/build/status/microsoft.mimalloc?branchName=dev"/>](https://dev.azure.com/Daan0324/mimalloc/_build?definitionId=1&_a=summary)
# mimalloc
mimalloc (pronounced "me-malloc")
is a general purpose allocator with excellent [performance](#performance) characteristics.
Initially developed by Daan Leijen for the runtime systems of the
[Koka](https://koka-lang.github.io) and [Lean](https://github.com/leanprover/lean) languages.
Latest release tag: `v2.1.7` (2024-05-21).
Latest v1 tag: `v1.8.7` (2024-05-21).
mimalloc is a drop-in replacement for `malloc` and can be used in other programs
without code changes, for example, on dynamically linked ELF-based systems (Linux, BSD, etc.) you can use it as:
```
> LD_PRELOAD=/usr/lib/libmimalloc.so myprogram
```
It also includes a robust way to override the default allocator in [Windows](#override_on_windows). Notable aspects of the design include:
- __small and consistent__: the library is about 8k LOC using simple and
consistent data structures. This makes it very suitable
to integrate and adapt in other projects. For runtime systems it
provides hooks for a monotonic _heartbeat_ and deferred freeing (for
bounded worst-case times with reference counting).
Partly due to its simplicity, mimalloc has been ported to many systems (Windows, macOS,
Linux, WASM, various BSD's, Haiku, MUSL, etc) and has excellent support for dynamic overriding.
At the same time, it is an industrial strength allocator that runs (very) large scale
distributed services on thousands of machines with excellent worst case latencies.
- __free list sharding__: instead of one big free list (per size class) we have
many smaller lists per "mimalloc page" which reduces fragmentation and
increases locality --
things that are allocated close in time get allocated close in memory.
(A mimalloc page contains blocks of one size class and is usually 64KiB on a 64-bit system).
- __free list multi-sharding__: the big idea! Not only do we shard the free list
per mimalloc page, but for each page we have multiple free lists. In particular, there
is one list for thread-local `free` operations, and another one for concurrent `free`
operations. Free-ing from another thread can now be a single CAS without needing
sophisticated coordination between threads. Since there will be
thousands of separate free lists, contention is naturally distributed over the heap,
and the chance of contending on a single location will be low -- this is quite
similar to randomized algorithms like skip lists where adding
a random oracle removes the need for a more complex algorithm.
- __eager page purging__: when a "page" becomes empty (with increased chance
due to free list sharding) the memory is marked to the OS as unused (reset or decommitted)
reducing (real) memory pressure and fragmentation, especially in long running
programs.
- __secure__: _mimalloc_ can be built in secure mode, adding guard pages,
randomized allocation, encrypted free lists, etc. to protect against various
heap vulnerabilities. The performance penalty is usually around 10% on average
over our benchmarks.
- __first-class heaps__: efficiently create and use multiple heaps to allocate across different regions.
A heap can be destroyed at once instead of deallocating each object separately.
- __bounded__: it does not suffer from _blowup_ \[1\], has bounded worst-case allocation
times (_wcat_) (upto OS primitives), bounded space overhead (~0.2% meta-data, with low
internal fragmentation), and has no internal points of contention using only atomic operations.
- __fast__: In our benchmarks (see [below](#performance)),
_mimalloc_ outperforms other leading allocators (_jemalloc_, _tcmalloc_, _Hoard_, etc),
and often uses less memory. A nice property is that it does consistently well over a wide range
of benchmarks. There is also good huge OS page support for larger server programs.
The [documentation](https://microsoft.github.io/mimalloc) gives a full overview of the API.
You can read more on the design of _mimalloc_ in the [technical report](https://www.microsoft.com/en-us/research/publication/mimalloc-free-list-sharding-in-action) which also has detailed benchmark results.
Enjoy!
### Branches
* `master`: latest stable release (based on `dev-slice`).
* `dev`: development branch for mimalloc v1. Use this branch for submitting PR's.
* `dev-slice`: development branch for mimalloc v2. This branch is downstream of `dev` (and is essentially equal to `dev` except for
`src/segment.c`)
### Releases
Note: the `v2.x` version has a different algorithm for managing internal mimalloc pages (as slices) that tends to reduce
memory usage and fragmentation compared to mimalloc `v1.x` (especially for large workloads). It should otherwise have similar performance
(see [below](#performance)); please report if you observe any significant performance regression.
* 2024-05-21, `v1.8.7`, `v2.1.7`: Fix build issues on less common platforms. Started upstreaming patches
from the CPython [integration](https://github.com/python/cpython/issues/113141#issuecomment-2119255217). Upstream `vcpkg` patches.
* 2024-05-13, `v1.8.6`, `v2.1.6`: Fix build errors on various (older) platforms. Refactored aligned allocation.
* 2024-04-22, `v1.8.4`, `v2.1.4`: Fixes various bugs and build issues. Add `MI_LIBC_MUSL` cmake flag for musl builds.
Free-ing code is refactored into a separate module (`free.c`). Mimalloc page info is simplified with the block size
directly available (and new `block_size_shift` to improve aligned block free-ing).
New approach to collection of abandoned segments: when
a thread terminates, the segments it owns are abandoned (containing still live objects) and these can be
reclaimed by other threads. We no longer use a list of abandoned segments; this is now done using bitmaps in arenas,
which is more concurrent (and more aggressive).
an abandoned page (which can be disabled using `mi_option_abandoned_reclaim_on_free`). The option `mi_option_max_segment_reclaim`
gives a maximum percentage of abandoned segments that can be reclaimed per try (=10%).
* 2023-04-24, `v1.8.2`, `v2.1.2`: Fixes build issues on freeBSD, musl, and C17 (UE 5.1.1). Reduce code size/complexity
by removing regions and segment-cache's and only use arenas with improved memory purging -- this may improve memory
usage as well for larger services. Renamed options for consistency. Improved Valgrind and ASAN checking.
* 2023-04-03, `v1.8.1`, `v2.1.1`: Fixes build issues on some platforms.
* 2023-03-29, `v1.8.0`, `v2.1.0`: Improved support dynamic overriding on Windows 11. Improved tracing precision
with [asan](#asan) and [Valgrind](#valgrind), and added Windows event tracing [ETW](#ETW) (contributed by Xinglong He). Created an OS
abstraction layer to make it easier to port and separate platform dependent code (in `src/prim`). Fixed C++ STL compilation on older Microsoft C++ compilers, and various small bug fixes.
* 2022-12-23, `v1.7.9`, `v2.0.9`: Supports building with [asan](#asan) and improved [Valgrind](#valgrind) support.
Support arbitrary large alignments (in particular for `std::pmr` pools).
Added C++ STL allocators attached to a specific heap (thanks @vmarkovtsev).
Heap walks now visit all object (including huge objects). Support Windows nano server containers (by Johannes Schindelin,@dscho).
Various small bug fixes.
* 2022-11-03, `v1.7.7`, `v2.0.7`: Initial support for [Valgrind](#valgrind) for leak testing and heap block overflow
detection. Initial
support for attaching heaps to a specific memory area (only in v2). Fix `realloc` behavior for zero size blocks, remove restriction to integral multiple of the alignment in `alloc_align`, improved aligned allocation performance, reduced contention with many threads on few processors (thank you @dposluns!), vs2022 support, support `pkg-config`.
* 2022-04-14, `v1.7.6`, `v2.0.6`: fix fallback path for aligned OS allocation on Windows, improve Windows aligned allocation
even when compiling with older SDK's, fix dynamic overriding on macOS Monterey, fix MSVC C++ dynamic overriding, fix
warnings under Clang 14, improve performance if many OS threads are created and destroyed, fix statistics for large object
allocations, using MIMALLOC_VERBOSE=1 has no maximum on the number of error messages, various small fixes.
* 2022-02-14, `v1.7.5`, `v2.0.5` (alpha): fix malloc override on
Windows 11, fix compilation with musl, potentially reduced
committed memory, add `bin/minject` for Windows,
improved wasm support, faster aligned allocation,
various small fixes.
* [Older release notes](#older-release-notes)
Special thanks to:
* [David Carlier](https://devnexen.blogspot.com/) (@devnexen) for his many contributions, and making
mimalloc work better on many less common operating systems, like Haiku, Dragonfly, etc.
* Mary Feofanova (@mary3000), Evgeniy Moiseenko, and Manuel Pöter (@mpoeter) for making mimalloc TSAN checkable, and finding
memory model bugs using the [genMC] model checker.
* Weipeng Liu (@pongba), Zhuowei Li, Junhua Wang, and Jakub Szymanski, for their early support of mimalloc and deployment
at large scale services, leading to many improvements in the mimalloc algorithms for large workloads.
* Jason Gibson (@jasongibson) for exhaustive testing on large scale workloads and server environments, and finding complex bugs
in (early versions of) `mimalloc`.
* Manuel Pöter (@mpoeter) and Sam Gross (@colesbury) for finding an ABA concurrency issue in abandoned segment reclamation. Sam also created the [no GIL](https://github.com/colesbury/nogil) Python fork which
uses mimalloc internally.
[genMC]: https://plv.mpi-sws.org/genmc/
### Usage
mimalloc is used in various large scale low-latency services and programs, for example:
<a href="https://www.bing.com"><img height="50" align="left" src="https://upload.wikimedia.org/wikipedia/commons/e/e9/Bing_logo.svg"></a>
<a href="https://azure.microsoft.com/"><img height="50" align="left" src="https://upload.wikimedia.org/wikipedia/commons/a/a8/Microsoft_Azure_Logo.svg"></a>
<a href="https://deathstrandingpc.505games.com"><img height="100" src="doc/ds-logo.png"></a>
<a href="https://docs.unrealengine.com/4.26/en-US/WhatsNew/Builds/ReleaseNotes/4_25/"><img height="100" src="doc/unreal-logo.svg"></a>
<a href="https://cab.spbu.ru/software/spades/"><img height="100" src="doc/spades-logo.png"></a>
# Building
## Windows
Open `ide/vs2022/mimalloc.sln` in Visual Studio 2022 and build.
The `mimalloc` project builds a static library (in `out/msvc-x64`), while the
`mimalloc-override` project builds a DLL for overriding malloc
in the entire program.
## macOS, Linux, BSD, etc.
We use [`cmake`](https://cmake.org)<sup>1</sup> as the build system:
```
> mkdir -p out/release
> cd out/release
> cmake ../..
> make
```
This builds the library as a shared (dynamic)
library (`.so` or `.dylib`), a static library (`.a`), and
as a single object file (`.o`).
`> sudo make install` (install the library and header files in `/usr/local/lib` and `/usr/local/include`)
You can build the debug version which does many internal checks and
maintains detailed statistics as:
```
> mkdir -p out/debug
> cd out/debug
> cmake -DCMAKE_BUILD_TYPE=Debug ../..
> make
```
This will name the shared library as `libmimalloc-debug.so`.
Finally, you can build a _secure_ version that uses guard pages, encrypted
free lists, etc., as:
```
> mkdir -p out/secure
> cd out/secure
> cmake -DMI_SECURE=ON ../..
> make
```
This will name the shared library as `libmimalloc-secure.so`.
Use `ccmake`<sup>2</sup> instead of `cmake`
to see and customize all the available build options.
Notes:
1. Install CMake: `sudo apt-get install cmake`
2. Install CCMake: `sudo apt-get install cmake-curses-gui`
## Single source
You can also directly build the single `src/static.c` file as part of your project without
needing `cmake` at all. Make sure to also add the mimalloc `include` directory to the include path.
# Using the library
The preferred usage is including `<mimalloc.h>`, linking with
the shared- or static library, and using the `mi_malloc` API exclusively for allocation. For example,
```
> gcc -o myprogram -lmimalloc myfile.c
```
mimalloc uses only safe OS calls (`mmap` and `VirtualAlloc`) and can co-exist
with other allocators linked to the same program.
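
For illustration, here is a minimal, hedged sketch of a program that uses the `mi_` prefixed API directly (`mi_malloc`, `mi_zalloc`, `mi_realloc`, and `mi_free` are part of the public `mimalloc.h` API; the surrounding program is purely an example):

```c++
#include <mimalloc.h>
#include <cstdio>
#include <cstring>

int main() {
  // allocate 64 bytes from the current thread's mimalloc heap
  char* buf = (char*)mi_malloc(64);
  std::strcpy(buf, "hello from mimalloc");
  std::printf("%s\n", buf);

  // grow the buffer; behaves like realloc but stays within mimalloc
  buf = (char*)mi_realloc(buf, 128);

  // zero-initialized allocation
  int* counters = (int*)mi_zalloc(16 * sizeof(int));

  // memory from the mi_ API must be released with mi_free
  mi_free(counters);
  mi_free(buf);
  return 0;
}
```

It can be compiled and linked against mimalloc exactly as in the `gcc` command above.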
If you use `cmake`, you can simply use:
```
find_package(mimalloc 1.4 REQUIRED)
```
in your `CMakeLists.txt` to find a locally installed mimalloc. Then use either:
```
target_link_libraries(myapp PUBLIC mimalloc)
```
to link with the shared (dynamic) library, or:
```
target_link_libraries(myapp PUBLIC mimalloc-static)
```
to link with the static library. See `test/CMakeLists.txt` for an example.
For best performance in C++ programs, it is also recommended to override the
global `new` and `delete` operators. For convenience, mimalloc provides
[`mimalloc-new-delete.h`](https://github.com/microsoft/mimalloc/blob/master/include/mimalloc-new-delete.h) which does this for you -- just include it in a single(!) source file in your project.
In C++, mimalloc also provides the `mi_stl_allocator` struct which implements the `std::allocator`
interface.
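
As a hedged sketch of both of these C++ integration points (`mimalloc-new-delete.h` and `mi_stl_allocator` come from mimalloc itself; the program is illustrative only):

```c++
// main.cpp -- include mimalloc-new-delete.h in exactly one source file
#include <mimalloc-new-delete.h>   // routes the global new/delete operators to mimalloc
#include <mimalloc.h>
#include <string>
#include <vector>

int main() {
  // the global operator new below now allocates through mimalloc
  auto* s = new std::string("allocated via mimalloc");

  // containers can also opt in explicitly with mi_stl_allocator
  std::vector<int, mi_stl_allocator<int>> values;
  values.push_back(42);

  delete s;
  return 0;
}
```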
You can pass environment variables to print verbose messages (`MIMALLOC_VERBOSE=1`)
and statistics (`MIMALLOC_SHOW_STATS=1`) (in the debug version):
```
> env MIMALLOC_SHOW_STATS=1 ./cfrac 175451865205073170563711388363
175451865205073170563711388363 = 374456281610909315237213 * 468551
heap stats: peak total freed unit
normal 2: 16.4 kb 17.5 mb 17.5 mb 16 b ok
normal 3: 16.3 kb 15.2 mb 15.2 mb 24 b ok
normal 4: 64 b 4.6 kb 4.6 kb 32 b ok
normal 5: 80 b 118.4 kb 118.4 kb 40 b ok
normal 6: 48 b 48 b 48 b 48 b ok
normal 17: 960 b 960 b 960 b 320 b ok
heap stats: peak total freed unit
normal: 33.9 kb 32.8 mb 32.8 mb 1 b ok
huge: 0 b 0 b 0 b 1 b ok
total: 33.9 kb 32.8 mb 32.8 mb 1 b ok
malloc requested: 32.8 mb
committed: 58.2 kb 58.2 kb 58.2 kb 1 b ok
reserved: 2.0 mb 2.0 mb 2.0 mb 1 b ok
reset: 0 b 0 b 0 b 1 b ok
segments: 1 1 1
-abandoned: 0
pages: 6 6 6
-abandoned: 0
mmaps: 3
mmap fast: 0
mmap slow: 1
threads: 0
elapsed: 2.022s
process: user: 1.781s, system: 0.016s, faults: 756, reclaims: 0, rss: 2.7 mb
```
The above model of using the `mi_` prefixed API is not always possible,
though, in existing programs that already use the standard malloc interface;
another option is to override the standard malloc interface
completely and redirect all calls to the _mimalloc_ library instead.
## Environment Options
You can set further options either programmatically (using [`mi_option_set`](https://microsoft.github.io/mimalloc/group__options.html)), or via environment variables:
- `MIMALLOC_SHOW_STATS=1`: show statistics when the program terminates.
- `MIMALLOC_VERBOSE=1`: show verbose messages.
- `MIMALLOC_SHOW_ERRORS=1`: show error and warning messages.
Advanced options:
- `MIMALLOC_ARENA_EAGER_COMMIT=2`: turns on eager commit for the large arenas (usually 1GiB) from which mimalloc
allocates segments and pages. Set this to 2 (default) to
only enable this on overcommit systems (e.g. Linux). Set this to 1 to enable explicitly on other systems
as well (like Windows or macOS) which may improve performance (as the whole arena is committed at once).
Note that eager commit only increases the commit but not the actual peak resident set
(rss) so it is generally ok to enable this.
- `MIMALLOC_PURGE_DELAY=N`: the delay in `N` milli-seconds (by default `10`) after which mimalloc will purge
OS pages that are not in use. This signals to the OS that the underlying physical memory can be reused which
can reduce memory fragmentation especially in long running (server) programs. Setting `N` to `0` purges immediately when
a page becomes unused which can improve memory usage but also decreases performance. Setting `N` to a higher
value like `100` can improve performance (sometimes by a lot) at the cost of potentially using more memory at times.
Setting it to `-1` disables purging completely.
- `MIMALLOC_PURGE_DECOMMITS=1`: By default "purging" memory means unused memory is decommitted (`MEM_DECOMMIT` on Windows,
  `MADV_DONTNEED` (which decreases rss immediately) on `mmap` systems). Set this to 0 to instead "reset" unused
memory on a purge (`MEM_RESET` on Windows, generally `MADV_FREE` (which does not decrease rss immediately) on `mmap` systems).
Mimalloc generally does not "free" OS memory but only "purges" OS memory, in other words, it tries to keep virtual
address ranges and decommits within those ranges (to make the underlying physical memory available to other processes).
Further options for large workloads and services:
- `MIMALLOC_USE_NUMA_NODES=N`: pretend there are at most `N` NUMA nodes. If not set, the actual NUMA nodes are detected
at runtime. Setting `N` to 1 may avoid problems in some virtual environments. Also, setting it to a lower number than
the actual NUMA nodes is fine and will only cause threads to potentially allocate more memory across actual NUMA
nodes (but this can happen in any case as NUMA local allocation is always a best effort but not guaranteed).
- `MIMALLOC_ALLOW_LARGE_OS_PAGES=1`: use large OS pages (2 or 4MiB) when available; for some workloads this can significantly
improve performance. When this option is disabled, it also disables transparent huge pages (THP) for the process
(on Linux and Android). Use `MIMALLOC_VERBOSE` to check if the large OS pages are enabled -- usually one needs
to explicitly give permissions for large OS pages (as on [Windows][windows-huge] and [Linux][linux-huge]). However, sometimes
the OS is very slow to reserve contiguous physical memory for large OS pages so use with care on systems that
can have fragmented memory (for that reason, we generally recommend to use `MIMALLOC_RESERVE_HUGE_OS_PAGES` instead whenever possible).
- `MIMALLOC_RESERVE_HUGE_OS_PAGES=N`: where `N` is the number of 1GiB _huge_ OS pages. This reserves the huge pages at
startup and sometimes this can give a large (latency) performance improvement on big workloads.
Usually it is better to not use `MIMALLOC_ALLOW_LARGE_OS_PAGES=1` in combination with this setting. Just like large
OS pages, use with care as reserving
contiguous physical memory can take a long time when memory is fragmented (but reserving the huge pages is done at
startup only once).
  Note that we usually need to explicitly give permission for huge OS pages (as on [Windows][windows-huge] and [Linux][linux-huge]).
With huge OS pages, it may be beneficial to set the setting
`MIMALLOC_EAGER_COMMIT_DELAY=N` (`N` is 1 by default) to delay the initial `N` segments (of 4MiB)
of a thread to not allocate in the huge OS pages; this prevents threads that are short lived
and allocate just a little to take up space in the huge OS page area (which cannot be purged as huge OS pages are pinned
to physical memory).
The huge pages are usually allocated evenly among NUMA nodes.
  We can use `MIMALLOC_RESERVE_HUGE_OS_PAGES_AT=N` where `N` is the NUMA node (starting at 0) to allocate all
  the huge pages at a specific NUMA node instead.
Use caution when using `fork` in combination with either large or huge OS pages: on a fork, the OS uses copy-on-write
for all pages in the original process including the huge OS pages. When any memory is now written in that area, the
OS will copy the entire 1GiB huge page (or 2MiB large page) which can cause the memory usage to grow in large increments.
[linux-huge]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/tuning_and_optimizing_red_hat_enterprise_linux_for_oracle_9i_and_10g_databases/sect-oracle_9i_and_10g_tuning_guide-large_memory_optimization_big_pages_and_huge_pages-configuring_huge_pages_in_red_hat_enterprise_linux_4_or_5
[windows-huge]: https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/enable-the-lock-pages-in-memory-option-windows?view=sql-server-2017
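
As a hedged illustration of setting these options programmatically at startup (via `mi_option_set`/`mi_option_enable`, as mentioned above; the exact `mi_option_...` enum names are assumed to match your mimalloc version, and options are best set before the first allocation):

```c++
#include <mimalloc.h>

int main() {
  // equivalent to MIMALLOC_SHOW_STATS=1 and MIMALLOC_VERBOSE=1
  mi_option_enable(mi_option_show_stats);
  mi_option_enable(mi_option_verbose);

  // equivalent to MIMALLOC_PURGE_DELAY=100: purge unused OS pages after 100ms
  mi_option_set(mi_option_purge_delay, 100);

  // equivalent to MIMALLOC_RESERVE_HUGE_OS_PAGES=2: reserve two 1GiB huge pages at startup
  mi_option_set(mi_option_reserve_huge_os_pages, 2);

  // ... the rest of the program allocates as usual ...
  return 0;
}
```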
## Secure Mode
_mimalloc_ can be built in secure mode by using the `-DMI_SECURE=ON` flag in `cmake`. This build enables various mitigations
to make mimalloc more robust against exploits. In particular:
- All internal mimalloc pages are surrounded by guard pages and the heap metadata is behind a guard page as well (so a buffer overflow
exploit cannot reach into the metadata).
- All free list pointers are
[encoded](https://github.com/microsoft/mimalloc/blob/783e3377f79ee82af43a0793910a9f2d01ac7863/include/mimalloc-internal.h#L396)
with per-page keys which is used both to prevent overwrites with a known pointer, as well as to detect heap corruption.
- Double free's are detected (and ignored).
- The free lists are initialized in a random order and allocation randomly chooses between extension and reuse within a page to
  mitigate against attacks that rely on a predictable allocation order. Similarly, the larger heap blocks allocated by mimalloc
from the OS are also address randomized.
As always, evaluate with care as part of an overall security strategy as all of the above are mitigations but not guarantees.
## Debug Mode
When _mimalloc_ is built using debug mode, various checks are done at runtime to catch development errors.
- Statistics are maintained in detail for each object size. They can be shown using `MIMALLOC_SHOW_STATS=1` at runtime.
- All objects have padding at the end to detect (byte precise) heap block overflows.
- Double free's, and freeing invalid heap pointers are detected.
- Corrupted free-lists and some forms of use-after-free are detected.
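
As a hedged illustration, a deliberately broken program like the following (similar in spirit to `test/test-wrong.c`; the bugs are intentional and the code is illustrative only) should trigger these checks when linked against the debug build:

```c++
#include <mimalloc.h>
#include <cstring>

int main() {
  char* p = (char*)mi_malloc(16);

  // heap block overflow: writes past the 16 allocated bytes;
  // the debug padding lets mimalloc report this byte-precisely
  std::memset(p, 'x', 32);

  mi_free(p);
  // double free: detected and reported by the debug (and secure) build
  mi_free(p);
  return 0;
}
```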
# Overriding Standard Malloc
Overriding the standard `malloc` (and `new`) can be done either _dynamically_ or _statically_.
## Dynamic override
This is the recommended way to override the standard malloc interface.
### Dynamic Override on Linux, BSD
On these ELF-based systems we preload the mimalloc shared
library so all calls to the standard `malloc` interface are
resolved to the _mimalloc_ library.
```
> env LD_PRELOAD=/usr/lib/libmimalloc.so myprogram
```
You can set extra environment variables to check that mimalloc is running,
like:
```
> env MIMALLOC_VERBOSE=1 LD_PRELOAD=/usr/lib/libmimalloc.so myprogram
```
or run with the debug version to get detailed statistics:
```
> env MIMALLOC_SHOW_STATS=1 LD_PRELOAD=/usr/lib/libmimalloc-debug.so myprogram
```
### Dynamic Override on MacOS
On macOS we can also preload the mimalloc shared
library so all calls to the standard `malloc` interface are
resolved to the _mimalloc_ library.
```
> env DYLD_INSERT_LIBRARIES=/usr/lib/libmimalloc.dylib myprogram
```
Note that certain security restrictions may apply when doing this from
the [shell](https://stackoverflow.com/questions/43941322/dyld-insert-libraries-ignored-when-calling-application-through-bash).
### Dynamic Override on Windows
<span id="override_on_windows">Dynamically overriding malloc with mimalloc on Windows</span>
is robust and has the particular advantage to be able to redirect all malloc/free calls that go through
the (dynamic) C runtime allocator, including those from other DLL's or libraries.
As it intercepts all allocation calls on a low level, it can be used reliably
on large programs that include other 3rd party components.
There are four requirements to make the overriding work robustly:
1. Use the C-runtime library as a DLL (using the `/MD` or `/MDd` switch).
2. Link your program explicitly with `mimalloc-override.dll` library.
To ensure the `mimalloc-override.dll` is loaded at run-time it is easiest to insert some
call to the mimalloc API in the `main` function, like `mi_version()`
(or use the `/INCLUDE:mi_version` switch on the linker). See the `mimalloc-override-test` project
for an example on how to use this.
3. The [`mimalloc-redirect.dll`](bin) (or `mimalloc-redirect32.dll`) must be put
in the same folder as the main `mimalloc-override.dll` at runtime (as it is a dependency of that DLL).
The redirection DLL ensures that all calls to the C runtime malloc API get redirected to
mimalloc functions (which reside in `mimalloc-override.dll`).
4. Ensure the `mimalloc-override.dll` comes as early as possible in the import
list of the final executable (so it can intercept all potential allocations).
For best performance on Windows with C++, it
is also recommended to override the `new`/`delete` operations (by including
[`mimalloc-new-delete.h`](include/mimalloc-new-delete.h)
in a single(!) source file in your project).
The environment variable `MIMALLOC_DISABLE_REDIRECT=1` can be used to disable dynamic
overriding at run-time. Use `MIMALLOC_VERBOSE=1` to check if mimalloc was successfully redirected.
We cannot always re-link an executable with `mimalloc-override.dll`, and similarly, we cannot always
ensure that the DLL comes first in the import table of the final executable.
In many cases though we can patch existing executables without any recompilation
if they are linked with the dynamic C runtime (`ucrtbase.dll`) -- just put the `mimalloc-override.dll`
into the import table (and put `mimalloc-redirect.dll` in the same folder).
Such patching can be done for example with [CFF Explorer](https://ntcore.com/?page_id=388) or
the [`minject`](bin) program.
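
Requirement 2 above suggests inserting a call to the mimalloc API, such as `mi_version()`, so the final executable keeps its dependency on `mimalloc-override.dll`. A minimal, hedged sketch (the `mi_version()` call is real mimalloc API; the rest is illustrative):

```c++
#include <mimalloc.h>
#include <cstdio>
#include <cstdlib>

int main() {
  // referencing a mimalloc symbol keeps the dependency on mimalloc-override.dll,
  // so the redirection DLL can take over the CRT malloc/free at load time
  std::printf("mimalloc version: %d\n", mi_version());

  // from here on, plain malloc/free (and new/delete) go through mimalloc
  void* p = std::malloc(100);
  std::free(p);
  return 0;
}
```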
## Static override
On Unix-like systems, you can also statically link with _mimalloc_ to override the standard
malloc interface. The recommended way is to link the final program with the
_mimalloc_ single object file (`mimalloc.o`). We use
an object file instead of a library file as linkers give preference to
that over archives to resolve symbols. To ensure that the standard
malloc interface resolves to the _mimalloc_ library, link it as the first
object file. For example:
```
> gcc -o myprogram mimalloc.o myfile1.c ...
```
Another way to override statically that works on all platforms, is to
link statically to mimalloc (as shown in the introduction) and include a
header file in each source file that re-defines `malloc` etc. to `mi_malloc`.
This is provided by [`mimalloc-override.h`](https://github.com/microsoft/mimalloc/blob/master/include/mimalloc-override.h). This only works reliably though if all sources are
under your control or otherwise mixing of pointers from different heaps may occur!
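
For instance, a source file in such a project might look like this hedged sketch (the header is included after the standard headers so its macro redefinitions only apply to your own code):

```c++
// example.cpp -- every source file in the project includes the override header
#include <cstdio>
#include <cstring>
#include <mimalloc-override.h>   // re-defines malloc, free, etc. to mi_malloc, mi_free, ...

int main() {
  // expands to mi_malloc(32) because of the override header
  char* p = (char*)malloc(32);
  std::strcpy(p, "statically overridden");
  std::printf("%s\n", p);
  free(p);   // expands to mi_free(p)
  return 0;
}
```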
# Tools
Generally, we recommend using the standard allocator with memory tracking tools, but mimalloc
can also be built to support the [address sanitizer][asan] or the excellent [Valgrind] tool.
Moreover, it can be built to support Windows event tracing ([ETW]).
This has a small performance overhead but does allow detecting memory leaks and byte-precise
buffer overflows directly on final executables. See also the `test/test-wrong.c` file to test with various tools.
## Valgrind
To build with [valgrind] support, use the `MI_TRACK_VALGRIND=ON` cmake option:
```
> cmake ../.. -DMI_TRACK_VALGRIND=ON
```
This can also be combined with secure mode or debug mode.
You can then run your programs directly under valgrind:
```
> valgrind <myprogram>
```
If you rely on overriding `malloc`/`free` by mimalloc (instead of using the `mi_malloc`/`mi_free` API directly),
you also need to tell `valgrind` to not intercept those calls itself, and use:
```
> MIMALLOC_SHOW_STATS=1 valgrind --soname-synonyms=somalloc=*mimalloc* -- <myprogram>
```
By setting the `MIMALLOC_SHOW_STATS` environment variable you can check that mimalloc is indeed
used and not the standard allocator. Even though the [Valgrind option][valgrind-soname]
is called `--soname-synonyms`, this also
works when overriding with a static library or object file. Unfortunately, it is not possible to
dynamically override mimalloc using `LD_PRELOAD` together with `valgrind`.
See also the `test/test-wrong.c` file to test with `valgrind`.
Valgrind support is in its initial development -- please report any issues.
[Valgrind]: https://valgrind.org/
[valgrind-soname]: https://valgrind.org/docs/manual/manual-core.html#opt.soname-synonyms
## ASAN
To build with the address sanitizer, use the `-DMI_TRACK_ASAN=ON` cmake option:
```
> cmake ../.. -DMI_TRACK_ASAN=ON
```
This can also be combined with secure mode or debug mode.
You can then run your programs as:
```
> ASAN_OPTIONS=verbosity=1 <myprogram>
```
When you link a program with an address sanitizer build of mimalloc, you should
generally compile that program too with the address sanitizer enabled.
For example, assuming you build mimalloc in `out/debug`:
```
clang -g -o test-wrong -Iinclude test/test-wrong.c out/debug/libmimalloc-asan-debug.a -lpthread -fsanitize=address -fsanitize-recover=address
```
Since the address sanitizer redirects the standard allocation functions, on some platforms (macOS for example)
it is required to compile mimalloc with `-DMI_OVERRIDE=OFF`.
Address sanitizer support is in its initial development -- please report any issues.
[asan]: https://github.com/google/sanitizers/wiki/AddressSanitizer
## ETW
Event tracing for Windows ([ETW]) provides a high performance way to capture all allocations though
mimalloc and analyze them later. To build with ETW support, use the `-DMI_TRACK_ETW=ON` cmake option.
You can then capture an allocation trace using the Windows performance recorder (WPR), using the
`src/prim/windows/etw-mimalloc.wprp` profile. In an admin prompt, you can use:
```
> wpr -start src\prim\windows\etw-mimalloc.wprp -filemode
> <my_mimalloc_program>
> wpr -stop <my_mimalloc_program>.etl
```
and then open `<my_mimalloc_program>.etl` in the Windows Performance Analyzer (WPA), or
use a tool like [TraceControl] that is specialized for analyzing mimalloc traces.
[ETW]: https://learn.microsoft.com/en-us/windows-hardware/test/wpt/event-tracing-for-windows
[TraceControl]: https://github.com/xinglonghe/TraceControl
# Performance
Last update: 2021-01-30
We tested _mimalloc_ against many other top allocators over a wide
range of benchmarks, ranging from various real world programs to
synthetic benchmarks that see how the allocator behaves under more
extreme circumstances. In our benchmark suite, _mimalloc_ outperforms other leading
allocators (_jemalloc_, _tcmalloc_, _Hoard_, etc), and has a similar memory footprint. A nice property is that it
does consistently well over the wide range of benchmarks.
General memory allocators are interesting as there exists no algorithm that is
optimal -- for a given allocator one can usually construct a workload
where it does not do so well. The goal is thus to find an allocation
strategy that performs well over a wide range of benchmarks without
suffering from (too much) underperformance in less common situations.
As always, interpret these results with care since some benchmarks test synthetic
or uncommon situations that may never apply to your workloads. For example, most
allocators do not do well on `xmalloc-testN` but that includes even the best
industrial allocators like _jemalloc_ and _tcmalloc_ that are used in some of
the world's largest systems (like Chrome or FreeBSD).
Also, the benchmarks here do not measure the behaviour on very large and long-running server workloads,
or worst-case latencies of allocation. Much work has gone into `mimalloc` to work well on such
workloads (for example, to reduce virtual memory fragmentation on long-running services)
but such optimizations are not always reflected in the current benchmark suite.
We show here only an overview -- for
more specific details and further benchmarks we refer to the
[technical report](https://www.microsoft.com/en-us/research/publication/mimalloc-free-list-sharding-in-action).
The benchmark suite is automated and available separately
as [mimalloc-bench](https://github.com/daanx/mimalloc-bench).
## Benchmark Results on a 16-core AMD 5950x (Zen3)
Testing on the 16-core AMD 5950x processor at 3.4GHz (4.9GHz boost),
with 32GiB memory at 3600MHz, running Ubuntu 20.04 with glibc 2.31 and GCC 9.3.0.
We measure three versions of _mimalloc_: the main version `mi` (tag:v1.7.0),
the new v2.0 beta version as `xmi` (tag:v2.0.0), and the main version in secure mode as `smi` (tag:v1.7.0).
The other allocators are
Google's [_tcmalloc_](https://github.com/gperftools/gperftools) (`tc`, tag:gperftools-2.8.1) used in Chrome,
Facebook's [_jemalloc_](https://github.com/jemalloc/jemalloc) (`je`, tag:5.2.1) by Jason Evans used in Firefox and FreeBSD,
the Intel thread building blocks [allocator](https://github.com/intel/tbb) (`tbb`, tag:v2020.3),
[rpmalloc](https://github.com/mjansson/rpmalloc) (`rp`,tag:1.4.1) by Mattias Jansson,
the original scalable [_Hoard_](https://github.com/emeryberger/Hoard) (git:d880f72) allocator by Emery Berger \[1],
the memory compacting [_Mesh_](https://github.com/plasma-umass/Mesh) (git:67ff31a) allocator by
Bobby Powers _et al_ \[8],
and finally the default system allocator (`glibc`, 2.31) (based on _PtMalloc2_).
<img width="90%" src="doc/bench-2021/bench-amd5950x-2021-01-30-a.svg"/>
<img width="90%" src="doc/bench-2021/bench-amd5950x-2021-01-30-b.svg"/>
Any benchmarks ending in `N` run on all 32 logical cores in parallel.
Results are averaged over 10 runs and reported relative
to mimalloc (where 1.2 means it took 1.2× as long to run).
The legend also contains the _overall relative score_ between the
allocators where 100 points is the maximum if an allocator is fastest on
all benchmarks.
The single threaded _cfrac_ benchmark by Dave Barrett is an implementation of
continued fraction factorization which uses many small short-lived allocations.
All allocators do well on such common usage, where _mimalloc_ is just a tad
faster than _tcmalloc_ and
_jemalloc_.
The _leanN_ program is interesting as a large realistic and
concurrent workload of the [Lean](https://github.com/leanprover/lean)
theorem prover compiling its own standard library, and there is a 13%
speedup over _tcmalloc_. This is
quite significant: if Lean spends 20% of its time in the
allocator that means that _mimalloc_ is 1.6× faster than _tcmalloc_
here. (This is surprising as that is not measured in a pure
allocation benchmark like _alloc-test_. We conjecture that we see this
outsized improvement here because _mimalloc_ has better locality in
the allocation which improves performance for the *other* computations
in a program as well).
The single threaded _redis_ benchmark again shows that most allocators do well on such workloads.
The _larsonN_ server benchmark by Larson and Krishnan \[2] allocates and frees between threads. They observed this
behavior (which they call _bleeding_) in actual server applications, and the benchmark simulates this.
Here, _mimalloc_ is quite a bit faster than _tcmalloc_ and _jemalloc_ probably due to the object migration between different threads.
The _mstressN_ workload performs many allocations and re-allocations,
and migrates objects between threads (as in _larsonN_). However, it also
creates and destroys the _N_ worker threads a few times keeping some objects
alive beyond the life time of the allocating thread. We observed this
behavior in many larger server applications.
The [_rptestN_](https://github.com/mjansson/rpmalloc-benchmark) benchmark
by Mattias Jansson is an allocator test originally designed
for _rpmalloc_, and tries to simulate realistic allocation patterns over
multiple threads. Here the differences between allocators become more apparent.
The second benchmark set tests specific aspects of the allocators and
shows even more extreme differences between them.
The _alloc-test_, by
[OLogN Technologies AG](http://ithare.com/testing-memory-allocators-ptmalloc2-tcmalloc-hoard-jemalloc-while-trying-to-simulate-real-world-loads/), is a very allocation intensive benchmark doing millions of
allocations in various size classes. The test is scaled such that when an
allocator performs almost identically on _alloc-test1_ as _alloc-testN_ it
means that it scales linearly.
The _sh6bench_ and _sh8bench_ benchmarks are
developed by [MicroQuill](http://www.microquill.com/) as part of SmartHeap.
In _sh6bench_ _mimalloc_ does much
better than the others (more than 2.5× faster than _jemalloc_).
We cannot explain this well but believe it is
caused in part by the "reverse" free-ing pattern in _sh6bench_.
The _sh8bench_ is a variation with object migration
between threads; whereas _tcmalloc_ did well on _sh6bench_, the addition of object migration causes it to be 10× slower than before.
The _xmalloc-testN_ benchmark by Lever and Boreham \[5] and Christian Eder, simulates an asymmetric workload where
some threads only allocate, and others only free -- they observed this pattern in
larger server applications. Here we see that
the _mimalloc_ technique of having non-contended sharded thread free
lists pays off as it outperforms others by a very large margin. Only _rpmalloc_, _tbb_, and _glibc_ also scale well on this benchmark.
The _cache-scratch_ benchmark by Emery Berger \[1] was introduced with
the Hoard allocator to test for _passive-false_ sharing of cache lines.
With a single thread they all
perform the same, but when running with multiple threads the potential allocator
induced false sharing of the cache lines can cause large run-time differences.
Crundal \[6] describes in detail why the false cache line sharing occurs in the _tcmalloc_ design, and also discusses how this
can be avoided with some small implementation changes.
Only the _tbb_, _rpmalloc_ and _mesh_ allocators also avoid the
cache line sharing completely, while _Hoard_ and _glibc_ seem to mitigate
the effects. Kukanov and Voss \[7] describe in detail
how the design of _tbb_ avoids the false cache line sharing.
## On a 36-core Intel Xeon
For completeness, here are the results on a big Amazon
[c5.18xlarge](https://aws.amazon.com/ec2/instance-types/#Compute_Optimized) instance
consisting of a 2×18-core Intel Xeon (Cascade Lake) at 3.4GHz (boost 3.5GHz)
with 144GiB ECC memory, running Ubuntu 20.04 with glibc 2.31, GCC 9.3.0, and
Clang 10.0.0. This time, the mimalloc allocators (mi, xmi, and smi) were
compiled with the Clang compiler instead of GCC.
The results are similar to the AMD results but it is interesting to
see the differences in the _larsonN_, _mstressN_, and _xmalloc-testN_ benchmarks.
<img width="90%" src="doc/bench-2021/bench-c5-18xlarge-2021-01-30-a.svg"/>
<img width="90%" src="doc/bench-2021/bench-c5-18xlarge-2021-01-30-b.svg"/>
## Peak Working Set
The following figure shows the peak working set (rss) of the allocators
on the benchmarks (on the c5.18xlarge instance).
<img width="90%" src="doc/bench-2021/bench-c5-18xlarge-2021-01-30-rss-a.svg"/>
<img width="90%" src="doc/bench-2021/bench-c5-18xlarge-2021-01-30-rss-b.svg"/>
Note that the _xmalloc-testN_ memory usage should be disregarded as it
allocates more the faster the program runs. Similarly, memory usage of
_larsonN_, _mstressN_, _rptestN_ and _sh8bench_ can vary depending on scheduling and
speed. Nevertheless, we hope to improve the memory usage on _mstressN_
and _rptestN_ (just as _cfrac_, _larsonN_ and _sh8bench_ have a small working set which skews the results).
<!--
# Previous Benchmarks
Todo: should we create a separate page for this?
## Benchmark Results on 36-core Intel: 2020-01-20
Testing on a big Amazon EC2 compute instance
([c5.18xlarge](https://aws.amazon.com/ec2/instance-types/#Compute_Optimized))
consisting of a 72 processor Intel Xeon at 3GHz
with 144GiB ECC memory, running Ubuntu 18.04.1 with glibc 2.27 and GCC 7.4.0.
The measured allocators are _mimalloc_ (xmi, tag:v1.4.0, page reset enabled)
and its secure build as _smi_,
Google's [_tcmalloc_](https://github.com/gperftools/gperftools) (tc, tag:gperftools-2.7) used in Chrome,
Facebook's [_jemalloc_](https://github.com/jemalloc/jemalloc) (je, tag:5.2.1) by Jason Evans used in Firefox and FreeBSD,
the Intel thread building blocks [allocator](https://github.com/intel/tbb) (tbb, tag:2020),
[rpmalloc](https://github.com/mjansson/rpmalloc) (rp,tag:1.4.0) by Mattias Jansson,
the original scalable [_Hoard_](https://github.com/emeryberger/Hoard) (tag:3.13) allocator by Emery Berger \[1],
the memory compacting [_Mesh_](https://github.com/plasma-umass/Mesh) (git:51222e7) allocator by
Bobby Powers _et al_ \[8],
and finally the default system allocator (glibc, 2.27) (based on _PtMalloc2_).
<img width="90%" src="doc/bench-2020/bench-c5-18xlarge-2020-01-20-a.svg"/>
<img width="90%" src="doc/bench-2020/bench-c5-18xlarge-2020-01-20-b.svg"/>
The following figure shows the peak working set (rss) of the allocators
on the benchmarks (on the c5.18xlarge instance).
<img width="90%" src="doc/bench-2020/bench-c5-18xlarge-2020-01-20-rss-a.svg"/>
<img width="90%" src="doc/bench-2020/bench-c5-18xlarge-2020-01-20-rss-b.svg"/>
## On 24-core AMD Epyc, 2020-01-16
For completeness, here are the results on a
[r5a.12xlarge](https://aws.amazon.com/ec2/instance-types/#Memory_Optimized) instance
having a 48 processor AMD Epyc 7000 at 2.5GHz with 384GiB of memory.
The results are similar to the Intel results but it is interesting to
see the differences in the _larsonN_, _mstressN_, and _xmalloc-testN_ benchmarks.
<img width="90%" src="doc/bench-2020/bench-r5a-12xlarge-2020-01-16-a.svg"/>
<img width="90%" src="doc/bench-2020/bench-r5a-12xlarge-2020-01-16-b.svg"/>
-->
# References
- \[1] Emery D. Berger, Kathryn S. McKinley, Robert D. Blumofe, and Paul R. Wilson.
_Hoard: A Scalable Memory Allocator for Multithreaded Applications_
the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-IX). Cambridge, MA, November 2000.
[pdf](http://www.cs.utexas.edu/users/mckinley/papers/asplos-2000.pdf)
- \[2] P. Larson and M. Krishnan. _Memory allocation for long-running server applications_.
In ISMM, Vancouver, B.C., Canada, 1998. [pdf](http://citeseer.ist.psu.edu/viewdoc/download?doi=10.1.1.45.1947&rep=rep1&type=pdf)
- \[3] D. Grunwald, B. Zorn, and R. Henderson.
_Improving the cache locality of memory allocation_. In R. Cartwright, editor,
Proceedings of the Conference on Programming Language Design and Implementation, pages 177–186, New York, NY, USA, June 1993. [pdf](http://citeseer.ist.psu.edu/viewdoc/download?doi=10.1.1.43.6621&rep=rep1&type=pdf)
- \[4] J. Barnes and P. Hut. _A hierarchical O(n*log(n)) force-calculation algorithm_. Nature, 324:446-449, 1986.
- \[5] C. Lever, and D. Boreham. _Malloc() Performance in a Multithreaded Linux Environment._
In USENIX Annual Technical Conference, Freenix Session. San Diego, CA. Jun. 2000.
Available at <https://github.com/kuszmaul/SuperMalloc/tree/master/tests>
- \[6] Timothy Crundal. _Reducing Active-False Sharing in TCMalloc_. 2016. CS16S1 project at the Australian National University. [pdf](http://courses.cecs.anu.edu.au/courses/CSPROJECTS/16S1/Reports/Timothy_Crundal_Report.pdf)
- \[7] Alexey Kukanov, and Michael J Voss.
_The Foundations for Scalable Multi-Core Software in Intel Threading Building Blocks._
Intel Technology Journal 11 (4). 2007
- \[8] Bobby Powers, David Tench, Emery D. Berger, and Andrew McGregor.
_Mesh: Compacting Memory Management for C/C++_
  In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI'19), June 2019, pages 333–346.
<!--
- \[9] Paul Liétar, Theodore Butler, Sylvan Clebsch, Sophia Drossopoulou, Juliana Franco, Matthew J Parkinson,
Alex Shamis, Christoph M Wintersteiger, and David Chisnall.
_Snmalloc: A Message Passing Allocator._
In Proceedings of the 2019 ACM SIGPLAN International Symposium on Memory Management, 122–135. ACM. 2019.
-->
# Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
# Older Release Notes
* 2021-11-14, `v1.7.3`, `v2.0.3` (beta): improved WASM support, improved macOS support and performance (including
M1), improved performance for v2 for large objects, Python integration improvements, more standard
installation directories, various small fixes.
* 2021-06-17, `v1.7.2`, `v2.0.2` (beta): support M1, better installation layout on Linux, fix
  thread_id on Android, prefer the 2-6TiB area for aligned allocation to work better on pre-Windows 8, various small fixes.
* 2021-04-06, `v1.7.1`, `v2.0.1` (beta): fix bug in arena allocation for huge pages, improved aslr on large allocations, initial M1 support (still experimental).
* 2021-01-31, `v2.0.0`: beta release 2.0: new slice algorithm for managing internal mimalloc pages.
* 2021-01-31, `v1.7.0`: stable release 1.7: support explicit user provided memory regions, more precise statistics,
improve macOS overriding, initial support for Apple M1, improved DragonFly support, faster memcpy on Windows, various small fixes.
* 2020-09-24, `v1.6.7`: stable release 1.6: using standard C atomics, passing tsan testing, improved
handling of failing to commit on Windows, add [`mi_process_info`](https://github.com/microsoft/mimalloc/blob/master/include/mimalloc.h#L156) api call.
* 2020-08-06, `v1.6.4`: stable release 1.6: improved error recovery in low-memory situations,
support for IllumOS and Haiku, NUMA support for Vista/XP, improved NUMA detection for AMD Ryzen, ubsan support.
* 2020-05-05, `v1.6.3`: stable release 1.6: improved behavior in out-of-memory situations, improved malloc zones on macOS,
build PIC static libraries by default, add option to abort on out-of-memory, line buffered statistics.
* 2020-04-20, `v1.6.2`: stable release 1.6: fix compilation on Android, MingW, Raspberry, and Conda,
stability fix for Windows 7, fix multiple mimalloc instances in one executable, fix `strnlen` overload,
fix aligned debug padding.
* 2020-02-17, `v1.6.1`: stable release 1.6: minor updates (build with clang-cl, fix alignment issue for small objects).
* 2020-02-09, `v1.6.0`: stable release 1.6: fixed potential memory leak, improved overriding
and thread local support on FreeBSD, NetBSD, DragonFly, and macOSX. New byte-precise
heap block overflow detection in debug mode (besides the double-free detection and free-list
corruption detection). Add `nodiscard` attribute to most allocation functions.
Enable `MIMALLOC_PAGE_RESET` by default. New reclamation strategy for abandoned heap pages
for better memory footprint.
* 2020-02-09, `v1.5.0`: stable release 1.5: improved free performance, small bug fixes.
* 2020-01-22, `v1.4.0`: stable release 1.4: improved performance for delayed OS page reset,
more eager concurrent free, addition of STL allocator, fixed potential memory leak.
* 2020-01-15, `v1.3.0`: stable release 1.3: bug fixes, improved randomness and [stronger
free list encoding](https://github.com/microsoft/mimalloc/blob/783e3377f79ee82af43a0793910a9f2d01ac7863/include/mimalloc-internal.h#L396) in secure mode.
* 2019-12-22, `v1.2.2`: stable release 1.2: minor updates.
* 2019-11-22, `v1.2.0`: stable release 1.2: bug fixes, improved secure mode (free list corruption checks, double free mitigation). Improved dynamic overriding on Windows.
* 2019-10-07, `v1.1.0`: stable release 1.1.
* 2019-09-01, `v1.0.8`: pre-release 8: more robust windows dynamic overriding, initial huge page support.
* 2019-08-10, `v1.0.6`: pre-release 6: various performance improvements. | {
"source": "yandex/perforator",
"title": "contrib/libs/mimalloc/readme.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/mimalloc/readme.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 48533
} |
# PCRE2 - Perl-Compatible Regular Expressions
The PCRE2 library is a set of C functions that implement regular expression
pattern matching using the same syntax and semantics as Perl 5. PCRE2 has its
own native API, as well as a set of wrapper functions that correspond to the
POSIX regular expression API. The PCRE2 library is free, even for building
proprietary software. It comes in three forms, for processing 8-bit, 16-bit,
or 32-bit code units, in either literal or UTF encoding.
PCRE2 was first released in 2015 to replace the API in the original PCRE
library, which is now obsolete and no longer maintained. As well as a more
flexible API, the code of PCRE2 has been much improved since the fork.
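
The native 8-bit API follows a compile-then-match pattern. A minimal, hedged sketch (function names follow the documented PCRE2 API; error handling is abbreviated, and the program typically links with `-lpcre2-8`):

```c++
#define PCRE2_CODE_UNIT_WIDTH 8      /* select the 8-bit library before including pcre2.h */
#include <pcre2.h>
#include <cstdio>

int main() {
  int errcode = 0;
  PCRE2_SIZE erroffset = 0;

  /* compile the pattern into its internal form */
  pcre2_code* re = pcre2_compile((PCRE2_SPTR)"ca+t", PCRE2_ZERO_TERMINATED, 0,
                                 &errcode, &erroffset, nullptr);
  if (re == nullptr) return 1;

  /* match data sized from the compiled pattern's capture count */
  pcre2_match_data* md = pcre2_match_data_create_from_pattern(re, nullptr);
  int rc = pcre2_match(re, (PCRE2_SPTR)"the caaat sat", PCRE2_ZERO_TERMINATED,
                       0, 0, md, nullptr);
  std::printf("%s\n", rc > 0 ? "matched" : "no match");

  pcre2_match_data_free(md);
  pcre2_code_free(re);
  return 0;
}
```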
## Download
As well as downloading from the
[GitHub site](https://github.com/PCRE2Project/pcre2), you can download PCRE2
or the older, unmaintained PCRE1 library from an
[*unofficial* mirror](https://sourceforge.net/projects/pcre/files/) at SourceForge.
You can check out the PCRE2 source code via Git or Subversion:
```
git clone https://github.com/PCRE2Project/pcre2.git
svn co https://github.com/PCRE2Project/pcre2.git
```
## Contributed Ports
If you just need the command-line PCRE2 tools on Windows, precompiled binary
versions are available at this
[Rexegg page](http://www.rexegg.com/pcregrep-pcretest.html).
A PCRE2 port for z/OS, a mainframe operating system which uses EBCDIC as its
default character encoding, can be found at
[http://www.cbttape.org](http://www.cbttape.org/) (File 939).
## Documentation
You can read the PCRE2 documentation
[here](https://PCRE2Project.github.io/pcre2/doc/html/index.html).
Comparisons to Perl's regular expression semantics can be found in the
community authored Wikipedia entry for PCRE.
There is a curated summary of changes for each PCRE release, copies of
documentation from older releases, and other useful information from the third
party authored
[RexEgg PCRE Documentation and Change Log page](http://www.rexegg.com/pcre-documentation.html).
## Contact
To report a problem with the PCRE2 library, or to make a feature request, please
use the PCRE2 GitHub issues tracker. There is a mailing list for discussion of
PCRE2 issues and development at [email protected], which is where any
announcements will be made. You can browse the
[list archives](https://groups.google.com/g/pcre2-dev). | {
"source": "yandex/perforator",
"title": "contrib/libs/pcre2/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/pcre2/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2383
} |
# [PFR](https://apolukhin.github.io/pfr_non_boost/)
This is a C++14 library for very basic reflection that gives you access to structure elements by index and provides other `std::tuple` like methods for user defined types without any macro or boilerplate code.
[Boost.PFR](https://boost.org/libs/pfr) is a part of the [Boost C++ Libraries](https://github.com/boostorg). However, Boost.PFR is a header only library that does not depend on Boost. You can just copy the content of the "include" folder from the github into your project, and the library will work fine.
For a version of the library without `boost::` namespace see [PFR](https://github.com/apolukhin/pfr_non_boost).
### Test results
Branches | Build | Tests coverage | More info
----------------|-------------- | -------------- |-----------
Develop: | [](https://github.com/boostorg/pfr/actions/workflows/ci.yml) [](https://ci.appveyor.com/project/apolukhin/pfr/branch/develop) | [](https://coveralls.io/github/apolukhin/magic_get?branch=develop) | [details...](https://www.boost.org/development/tests/develop/developer/pfr.html)
Master: | [](https://github.com/boostorg/pfr/actions/workflows/ci.yml) [](https://ci.appveyor.com/project/apolukhin/pfr/branch/master) | [](https://coveralls.io/github/apolukhin/magic_get?branch=master) | [details...](https://www.boost.org/development/tests/master/developer/pfr.html)
[Latest developer documentation](https://www.boost.org/doc/libs/develop/doc/html/boost_pfr.html)
### Motivating Example #0
```c++
#include <iostream>
#include <fstream>
#include <string>
#include "pfr.hpp"
struct some_person {
std::string name;
unsigned birth_year;
};
int main(int argc, const char* argv[]) {
some_person val{"Edgar Allan Poe", 1809};
std::cout << pfr::get<0>(val) // No macro!
<< " was born in " << pfr::get<1>(val); // Works with any aggregate initializables!
if (argc > 1) {
std::ofstream ofs(argv[1]);
ofs << pfr::io(val); // File now contains: {"Edgar Allan Poe", 1809}
}
}
```
Outputs:
```
Edgar Allan Poe was born in 1809
```
[Run the above sample](https://godbolt.org/z/PfYsWKb7v)
### Motivating Example #1
```c++
#include <iostream>
#include "pfr.hpp"
struct my_struct { // no ostream operator defined!
int i;
char c;
double d;
};
int main() {
my_struct s{100, 'H', 3.141593};
std::cout << "my_struct has " << pfr::tuple_size<my_struct>::value
<< " fields: " << pfr::io(s) << "\n";
}
```
Outputs:
```
my_struct has 3 fields: {100, H, 3.14159}
```
### Motivating Example #2
```c++
#include <iostream>
#include "pfr.hpp"
struct my_struct { // no ostream operator defined!
std::string s;
int i;
};
int main() {
my_struct s{{"Das ist fantastisch!"}, 100};
std::cout << "my_struct has " << pfr::tuple_size<my_struct>::value
<< " fields: " << pfr::io(s) << "\n";
}
```
Outputs:
```
my_struct has 2 fields: {"Das ist fantastisch!", 100}
```
### Motivating Example #3
```c++
#include <iostream>
#include <string>
#include <boost/config/warning_disable.hpp>
#include <boost/spirit/home/x3.hpp>
#include <boost/fusion/include/adapt_boost_pfr.hpp>
#include "pfr/io.hpp"
namespace x3 = boost::spirit::x3;
struct ast_employee { // No BOOST_FUSION_ADAPT_STRUCT defined
int age;
std::string forename;
std::string surname;
double salary;
};
auto const quoted_string = x3::lexeme['"' >> +(x3::ascii::char_ - '"') >> '"'];
x3::rule<class employee, ast_employee> const employee = "employee";
auto const employee_def =
x3::lit("employee")
>> '{'
>> x3::int_ >> ','
>> quoted_string >> ','
>> quoted_string >> ','
>> x3::double_
>> '}'
;
BOOST_SPIRIT_DEFINE(employee);
int main() {
std::string str = R"(employee{34, "Chip", "Douglas", 2500.00})";
ast_employee emp;
x3::phrase_parse(str.begin(),
str.end(),
employee,
x3::ascii::space,
emp);
std::cout << pfr::io(emp) << std::endl;
}
```
Outputs:
```
(34 Chip Douglas 2500)
```
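
Beyond indexed access and `pfr::io`, the library also provides a `pfr::for_each_field` visitor. A small, hedged sketch (the `config` struct and its fields are purely illustrative):

```c++
#include <iostream>
#include <string>
#include "pfr.hpp"

struct config {          // plain aggregate, no adaptation macros needed
    std::string host;
    int port;
    bool use_tls;
};

int main() {
    config c{"example.org", 443, true};
    // visit every field in declaration order
    pfr::for_each_field(c, [](const auto& field) {
        std::cout << field << '\n';
    });
}
```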
### Requirements and Limitations
[See docs](https://www.boost.org/doc/libs/develop/doc/html/boost_pfr.html).
### License
Distributed under the [Boost Software License, Version 1.0](https://boost.org/LICENSE_1_0.txt). | {
"source": "yandex/perforator",
"title": "contrib/libs/pfr/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/pfr/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 4935
} |
This project is governed by
[Protobuf's Code of Conduct](https://github.com/protocolbuffers/.github/blob/main/profile/CODE_OF_CONDUCT.md). | {
"source": "yandex/perforator",
"title": "contrib/libs/protobuf/CODE_OF_CONDUCT.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/protobuf/CODE_OF_CONDUCT.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 138
} |
# Contributing to Protocol Buffers
We welcome some types of contributions to protocol buffers. This doc describes the
process to contribute patches to protobuf and the general guidelines we
expect contributors to follow.
## What We Accept
* Bug fixes with unit tests demonstrating the problem are very welcome.
We also appreciate bug reports, even when they don't come with a patch.
Bug fixes without tests are usually not accepted.
* New APIs and features with adequate test coverage and documentation
may be accepted if they do not compromise backwards
compatibility. However there's a fairly high bar of usefulness a new public
method must clear before it will be accepted. Features that are fine in
isolation are often rejected because they don't have enough impact to justify the
conceptual burden and ongoing maintenance cost. It's best to file an issue
and get agreement from maintainers on the value of a new feature before
working on a PR.
* Performance optimizations may be accepted if they have convincing benchmarks that demonstrate
an improvement and they do not significantly increase complexity.
* Changes to existing APIs are almost never accepted. Stability and
backwards compatibility are paramount. In the unlikely event a breaking change
is required, it must usually be implemented in google3 first.
* Changes to the wire and text formats are never accepted. Any breaking change
to these formats would have to be implemented as a completely new format.
We cannot begin generating protos that cannot be parsed by existing code.
## Before You Start
We accept patches in the form of github pull requests. If you are new to
github, please read [How to create github pull requests](https://help.github.com/articles/about-pull-requests/)
first.
### Contributor License Agreements
Contributions to this project must be accompanied by a Contributor License
Agreement. You (or your employer) retain the copyright to your contribution,
this simply gives us permission to use and redistribute your contributions
as part of the project.
* If you are an individual writing original source code and you're sure you
own the intellectual property, then you'll need to sign an [individual CLA](https://cla.developers.google.com/about/google-individual?csw=1).
* If you work for a company that wants to allow you to contribute your work,
then you'll need to sign a [corporate CLA](https://cla.developers.google.com/about/google-corporate?csw=1).
### Coding Style
This project follows [Google’s Coding Style Guides](https://github.com/google/styleguide).
Before sending out your pull request, please familiarize yourself with the
corresponding style guides and make sure the proposed code change is style
conforming.
## Contributing Process
Most pull requests should go to the main branch and the change will be
included in the next major/minor version release (e.g., 3.6.0 release). If you
need to include a bug fix in a patch release (e.g., 3.5.2), make sure it’s
already merged to main, and then create a pull request cherry-picking the
commits from main branch to the release branch (e.g., branch 3.5.x).
For each pull request, a protobuf team member will be assigned to review the
pull request. For minor cleanups, the pull request may be merged right away
after an initial review. For larger changes, you will likely receive multiple
rounds of comments and it may take some time to complete. We will try to keep
our response time within 7-days but if you don’t get any response in a few
days, feel free to comment on the threads to get our attention. We also expect
you to respond to our comments within a reasonable amount of time. If we don’t
hear from you for 2 weeks or longer, we may close the pull request. You can
still send the pull request again once you have time to work on it.
Once a pull request is merged, we will take care of the rest and get it into
the final release.
## Pull Request Guidelines
* If you are a Googler, it is preferable to first create an internal CL and
have it reviewed and submitted. The code propagation process will deliver the
change to GitHub.
* Create small PRs that are narrowly focused on addressing a single concern.
We often receive PRs that are trying to fix several things at a time, but if
only one fix is considered acceptable, nothing gets merged and both author's
& reviewer's time is wasted. Create more PRs to address different concerns and
everyone will be happy.
* For speculative changes, consider opening an issue and discussing it first.
If you are suggesting a behavioral or API change, make sure you get explicit
support from a protobuf team member before sending us the pull request.
* Provide a good PR description as a record of what change is being made and
why it was made. Link to a GitHub issue if it exists.
* Don't fix code style and formatting unless you are already changing that
line to address an issue. PRs with irrelevant changes won't be merged. If
you do want to fix formatting or style, do that in a separate PR.
* Unless your PR is trivial, you should expect there will be reviewer comments
that you'll need to address before merging. We expect you to be reasonably
responsive to those comments, otherwise the PR will be closed after 2-3 weeks
of inactivity.
* Maintain clean commit history and use meaningful commit messages. PRs with
messy commit history are difficult to review and won't be merged. Use rebase
-i upstream/main to curate your commit history and/or to bring in latest
changes from main (but avoid rebasing in the middle of a code review).
* Keep your PR up to date with upstream/main (if there are merge conflicts,
we can't really merge your change).
* All tests need to be passing before your change can be merged. We recommend
you run tests locally before creating your PR to catch breakages early on.
Ultimately, the green signal will be provided by our testing infrastructure.
The reviewer will help you if there are test failures that seem not related
to the change you are making.
## Reviewer Guidelines
* Make sure that all tests are passing before approval.
* Apply the "release notes: yes" label if the pull request's description should
be included in the next release (e.g., any new feature / bug fix).
Apply the "release notes: no" label if the pull request's description should
  not be included in the next release (e.g., refactoring changes that do not
  change behavior, integration from Google internal, updating tests, etc.).
* Apply the appropriate language label (e.g., C++, Java, Python, etc.) to the
pull request. This will make it easier to identify which languages the pull
request affects, allowing us to better identify appropriate reviewer, create
a better release note, and make it easier to identify issues in the future. | {
"source": "yandex/perforator",
"title": "contrib/libs/protobuf/CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/protobuf/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 6868
} |
Protocol Buffers - Google's data interchange format
===================================================
Copyright 2008 Google Inc.
[Protocol Buffers documentation](https://developers.google.com/protocol-buffers/)
Overview
--------
Protocol Buffers (a.k.a., protobuf) are Google's language-neutral,
platform-neutral, extensible mechanism for serializing structured data. You
can find [protobuf's documentation on the Google Developers site](https://developers.google.com/protocol-buffers/).
This README file contains protobuf installation instructions. To install
protobuf, you need to install the protocol compiler (used to compile .proto
files) and the protobuf runtime for your chosen programming language.
Protocol Compiler Installation
------------------------------
The protocol compiler is written in C++. If you are using C++, please follow
the [C++ Installation Instructions](src/README.md) to install protoc along
with the C++ runtime.
For non-C++ users, the simplest way to install the protocol compiler is to
download a pre-built binary from our [GitHub release page](https://github.com/protocolbuffers/protobuf/releases).
In the downloads section of each release, you can find pre-built binaries in
zip packages: `protoc-$VERSION-$PLATFORM.zip`. It contains the protoc binary
as well as a set of standard `.proto` files distributed along with protobuf.
If you are looking for an old version that is not available in the release
page, check out the [Maven repository](https://repo1.maven.org/maven2/com/google/protobuf/protoc/).
These pre-built binaries are only provided for released versions. If you want
to use the github main version at HEAD, or you need to modify protobuf code,
or you are using C++, it's recommended to build your own protoc binary from
source.
If you would like to build protoc binary from source, see the [C++ Installation Instructions](src/README.md).
Protobuf Runtime Installation
-----------------------------
Protobuf supports several different programming languages. For each programming
language, you can find instructions in the corresponding source directory about
how to install protobuf runtime for that specific language:
| Language | Source |
|--------------------------------------|-------------------------------------------------------------|
| C++ (include C++ runtime and protoc) | [src](src) |
| Java | [java](java) |
| Python | [python](python) |
| Objective-C | [objectivec](objectivec) |
| C# | [csharp](csharp) |
| Ruby | [ruby](ruby) |
| Go | [protocolbuffers/protobuf-go](https://github.com/protocolbuffers/protobuf-go)|
| PHP | [php](php) |
| Dart | [dart-lang/protobuf](https://github.com/dart-lang/protobuf) |
| Javascript | [protocolbuffers/protobuf-javascript](https://github.com/protocolbuffers/protobuf-javascript)|
Quick Start
-----------
The best way to learn how to use protobuf is to follow the [tutorials in our
developer guide](https://developers.google.com/protocol-buffers/docs/tutorials).
If you want to learn from code examples, take a look at the examples in the
[examples](examples) directory.
Documentation
-------------
The complete documentation is available via the [Protocol Buffers documentation](https://developers.google.com/protocol-buffers/).
Developer Community
-------------------
To be alerted to upcoming changes in Protocol Buffers and connect with protobuf developers and users,
[join the Google Group](https://groups.google.com/g/protobuf). | {
"source": "yandex/perforator",
"title": "contrib/libs/protobuf/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/protobuf/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 4130
} |
To report security concerns or vulnerabilities within protobuf, please use
Google's official channel for reporting these.
https://www.google.com/appserve/security-bugs/m2/new | {
"source": "yandex/perforator",
"title": "contrib/libs/protobuf/SECURITY.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/protobuf/SECURITY.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 175
} |
# Contributing to Protocol Buffers
We welcome some types of contributions to protocol buffers. This doc describes the
process to contribute patches to protobuf and the general guidelines we
expect contributors to follow.
## What We Accept
* Bug fixes with unit tests demonstrating the problem are very welcome.
We also appreciate bug reports, even when they don't come with a patch.
Bug fixes without tests are usually not accepted.
* New APIs and features with adequate test coverage and documentation
may be accepted if they do not compromise backwards
compatibility. However there's a fairly high bar of usefulness a new public
method must clear before it will be accepted. Features that are fine in
isolation are often rejected because they don't have enough impact to justify the
conceptual burden and ongoing maintenance cost. It's best to file an issue
and get agreement from maintainers on the value of a new feature before
working on a PR.
* Performance optimizations may be accepted if they have convincing benchmarks that demonstrate
an improvement and they do not significantly increase complexity.
* Changes to existing APIs are almost never accepted. Stability and
backwards compatibility are paramount. In the unlikely event a breaking change
is required, it must usually be implemented in google3 first.
* Changes to the wire and text formats are never accepted. Any breaking change
to these formats would have to be implemented as a completely new format.
We cannot begin generating protos that cannot be parsed by existing code.
## Before You Start
We accept patches in the form of github pull requests. If you are new to
github, please read [How to create github pull requests](https://help.github.com/articles/about-pull-requests/)
first.
### Contributor License Agreements
Contributions to this project must be accompanied by a Contributor License
Agreement. You (or your employer) retain the copyright to your contribution,
this simply gives us permission to use and redistribute your contributions
as part of the project.
* If you are an individual writing original source code and you're sure you
own the intellectual property, then you'll need to sign an [individual CLA](https://cla.developers.google.com/about/google-individual?csw=1).
* If you work for a company that wants to allow you to contribute your work,
then you'll need to sign a [corporate CLA](https://cla.developers.google.com/about/google-corporate?csw=1).
### Coding Style
This project follows [Google’s Coding Style Guides](https://github.com/google/styleguide).
Before sending out your pull request, please familiarize yourself with the
corresponding style guides and make sure the proposed code change is style
conforming.
## Contributing Process
Most pull requests should go to the master branch and the change will be
included in the next major/minor version release (e.g., 3.6.0 release). If you
need to include a bug fix in a patch release (e.g., 3.5.2), make sure it’s
already merged to master, and then create a pull request cherry-picking the
commits from master branch to the release branch (e.g., branch 3.5.x).
For each pull request, a protobuf team member will be assigned to review the
pull request. For minor cleanups, the pull request may be merged right away
after an initial review. For larger changes, you will likely receive multiple
rounds of comments and it may take some time to complete. We will try to keep
our response time within 7-days but if you don’t get any response in a few
days, feel free to comment on the threads to get our attention. We also expect
you to respond to our comments within a reasonable amount of time. If we don’t
hear from you for 2 weeks or longer, we may close the pull request. You can
still send the pull request again once you have time to work on it.
Once a pull request is merged, we will take care of the rest and get it into
the final release.
## Pull Request Guidelines
* If you are a Googler, it is preferable to first create an internal CL and
have it reviewed and submitted. The code propagation process will deliver the
change to GitHub.
* Create small PRs that are narrowly focused on addressing a single concern.
We often receive PRs that are trying to fix several things at a time, but if
only one fix is considered acceptable, nothing gets merged and both author's
& reviewer's time is wasted. Create more PRs to address different concerns and
everyone will be happy.
* For speculative changes, consider opening an issue and discussing it first.
If you are suggesting a behavioral or API change, make sure you get explicit
support from a protobuf team member before sending us the pull request.
* Provide a good PR description as a record of what change is being made and
why it was made. Link to a GitHub issue if it exists.
* Don't fix code style and formatting unless you are already changing that
line to address an issue. PRs with irrelevant changes won't be merged. If
you do want to fix formatting or style, do that in a separate PR.
* Unless your PR is trivial, you should expect there will be reviewer comments
that you'll need to address before merging. We expect you to be reasonably
responsive to those comments, otherwise the PR will be closed after 2-3 weeks
of inactivity.
* Maintain clean commit history and use meaningful commit messages. PRs with
messy commit history are difficult to review and won't be merged. Use rebase
-i upstream/master to curate your commit history and/or to bring in latest
changes from master (but avoid rebasing in the middle of a code review).
* Keep your PR up to date with upstream/master (if there are merge conflicts,
we can't really merge your change).
* All tests need to be passing before your change can be merged. We recommend
you run tests locally before creating your PR to catch breakages early on.
Ultimately, the green signal will be provided by our testing infrastructure.
The reviewer will help you if there are test failures that seem not related
to the change you are making.
## Reviewer Guidelines
* Make sure that all tests are passing before approval.
* Apply the "release notes: yes" label if the pull request's description should
be included in the next release (e.g., any new feature / bug fix).
Apply the "release notes: no" label if the pull request's description should
not be included in the next release (e.g., refactoring changes that does not
change behavior, integration from Google internal, updating tests, etc.).
* Apply the appropriate language label (e.g., C++, Java, Python, etc.) to the
pull request. This will make it easier to identify which languages the pull
request affects, allowing us to better identify appropriate reviewer, create
a better release note, and make it easier to identify issues in the future. | {
"source": "yandex/perforator",
"title": "contrib/libs/protobuf_old/CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/protobuf_old/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 6880
} |
Protocol Buffers - Google's data interchange format
===================================================
Copyright 2008 Google Inc.
https://developers.google.com/protocol-buffers/
Overview
--------
Protocol Buffers (a.k.a., protobuf) are Google's language-neutral,
platform-neutral, extensible mechanism for serializing structured data. You
can find [protobuf's documentation on the Google Developers site](https://developers.google.com/protocol-buffers/).
This README file contains protobuf installation instructions. To install
protobuf, you need to install the protocol compiler (used to compile .proto
files) and the protobuf runtime for your chosen programming language.
Protocol Compiler Installation
------------------------------
The protocol compiler is written in C++. If you are using C++, please follow
the [C++ Installation Instructions](src/README.md) to install protoc along
with the C++ runtime.
For non-C++ users, the simplest way to install the protocol compiler is to
download a pre-built binary from our release page:
[https://github.com/protocolbuffers/protobuf/releases](https://github.com/protocolbuffers/protobuf/releases)
In the downloads section of each release, you can find pre-built binaries in
zip packages: protoc-$VERSION-$PLATFORM.zip. It contains the protoc binary
as well as a set of standard .proto files distributed along with protobuf.
If you are looking for an old version that is not available in the release
page, check out the maven repo here:
[https://repo1.maven.org/maven2/com/google/protobuf/protoc/](https://repo1.maven.org/maven2/com/google/protobuf/protoc/)
These pre-built binaries are only provided for released versions. If you want
to use the github master version at HEAD, or you need to modify protobuf code,
or you are using C++, it's recommended to build your own protoc binary from
source.
If you would like to build protoc binary from source, see the [C++ Installation
Instructions](src/README.md).
Protobuf Runtime Installation
-----------------------------
Protobuf supports several different programming languages. For each programming
language, you can find instructions in the corresponding source directory about
how to install protobuf runtime for that specific language:
| Language | Source |
|--------------------------------------|-------------------------------------------------------------|
| C++ (include C++ runtime and protoc) | [src](src) |
| Java | [java](java) |
| Python | [python](python) |
| Objective-C | [objectivec](objectivec) |
| C# | [csharp](csharp) |
| JavaScript | [js](js) |
| Ruby | [ruby](ruby) |
| Go | [protocolbuffers/protobuf-go](https://github.com/protocolbuffers/protobuf-go)|
| PHP | [php](php) |
| Dart | [dart-lang/protobuf](https://github.com/dart-lang/protobuf) |
Quick Start
-----------
The best way to learn how to use protobuf is to follow the tutorials in our
developer guide:
https://developers.google.com/protocol-buffers/docs/tutorials
If you want to learn from code examples, take a look at the examples in the
[examples](examples) directory.
Documentation
-------------
The complete documentation for Protocol Buffers is available via the
web at:
https://developers.google.com/protocol-buffers/ | {
"source": "yandex/perforator",
"title": "contrib/libs/protobuf_old/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/protobuf_old/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 3957
} |
To report security concerns or vulnerabilities within protobuf, please use
Google's official channel for reporting these.
https://www.google.com/appserve/security-bugs/m2/new | {
"source": "yandex/perforator",
"title": "contrib/libs/protobuf_old/SECURITY.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/protobuf_old/SECURITY.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 175
} |
This include-only library enables simultaneous bindings into Python2 and Python3 with single build.
It provides the following:
- Let dependencies to headers from both Pythons be seen at once during ya make dependency computation. This makes depenency graph more stable.
- Steers build to proper Python headers depending on mode in which binding is built.
- Adds proper Python library to link.
Headers are automatically generated from Python2 and Python3 headers using gen_includes.py script | {
"source": "yandex/perforator",
"title": "contrib/libs/python/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/python/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 492
} |
# Change Log
All notable changes to this project will be documented in this file.
This project adheres to [Semantic Versioning](http://semver.org/).
## [Unreleased]
## 1.1.0 - 2016-08-25
### Added
* Add GenericDocument ctor overload to specify JSON type (#369)
* Add FAQ (#372, #373, #374, #376)
* Add forward declaration header `fwd.h`
* Add @PlatformIO Library Registry manifest file (#400)
* Implement assignment operator for BigInteger (#404)
* Add comments support (#443)
* Adding coapp definition (#460)
* documenttest.cpp: EXPECT_THROW when checking empty allocator (#470)
* GenericDocument: add implicit conversion to ParseResult (#480)
* Use <wchar.h> with C++ linkage on Windows ARM (#485)
* Detect little endian for Microsoft ARM targets
* Check Nan/Inf when writing a double (#510)
* Add JSON Schema Implementation (#522)
* Add iostream wrapper (#530)
* Add Jsonx example for converting JSON into JSONx (a XML format) (#531)
* Add optional unresolvedTokenIndex parameter to Pointer::Get() (#532)
* Add encoding validation option for Writer/PrettyWriter (#534)
* Add Writer::SetMaxDecimalPlaces() (#536)
* Support {0, } and {0, m} in Regex (#539)
* Add Value::Get/SetFloat(), Value::IsLossLessFloat/Double() (#540)
* Add stream position check to reader unit tests (#541)
* Add Templated accessors and range-based for (#542)
* Add (Pretty)Writer::RawValue() (#543)
* Add Document::Parse(std::string), Document::Parse(const char*, size_t length) and related APIs. (#553)
* Add move constructor for GenericSchemaDocument (#554)
* Add VS2010 and VS2015 to AppVeyor CI (#555)
* Add parse-by-parts example (#556, #562)
* Support parse number as string (#564, #589)
* Add kFormatSingleLineArray for PrettyWriter (#577)
* Added optional support for trailing commas (#584)
* Added filterkey and filterkeydom examples (#615)
* Added npm docs (#639)
* Allow options for writing and parsing NaN/Infinity (#641)
* Add std::string overload to PrettyWriter::Key() when RAPIDJSON_HAS_STDSTRING is defined (#698)
### Fixed
* Fix gcc/clang/vc warnings (#350, #394, #397, #444, #447, #473, #515, #582, #589, #595, #667)
* Fix documentation (#482, #511, #550, #557, #614, #635, #660)
* Fix emscripten alignment issue (#535)
* Fix missing allocator to uses of AddMember in document (#365)
* CMake will no longer complain that the minimum CMake version is not specified (#501)
* Make it usable with old VC8 (VS2005) (#383)
* Prohibit C++11 move from Document to Value (#391)
* Try to fix incorrect 64-bit alignment (#419)
* Check return of fwrite to avoid warn_unused_result build failures (#421)
* Fix UB in GenericDocument::ParseStream (#426)
* Keep Document value unchanged on parse error (#439)
* Add missing return statement (#450)
* Fix Document::Parse(const Ch*) for transcoding (#478)
* encodings.h: fix typo in preprocessor condition (#495)
* Custom Microsoft headers are necessary only for Visual Studio 2012 and lower (#559)
* Fix memory leak for invalid regex (26e69ffde95ba4773ab06db6457b78f308716f4b)
* Fix a bug in schema minimum/maximum keywords for 64-bit integer (e7149d665941068ccf8c565e77495521331cf390)
* Fix a crash bug in regex (#605)
* Fix schema "required" keyword cannot handle duplicated keys (#609)
* Fix cmake CMP0054 warning (#612)
* Added missing include guards in istreamwrapper.h and ostreamwrapper.h (#634)
* Fix undefined behaviour (#646)
* Fix buffer overrun using PutN (#673)
* Fix rapidjson::value::Get<std::string>() may returns wrong data (#681)
* Add Flush() for all value types (#689)
* Handle malloc() fail in PoolAllocator (#691)
* Fix builds on x32 platform. #703
### Changed
* Clarify problematic JSON license (#392)
* Move Travis to container based infrastructure (#504, #558)
* Make whitespace array more compact (#513)
* Optimize Writer::WriteString() with SIMD (#544)
* x86-64 48-bit pointer optimization for GenericValue (#546)
* Define RAPIDJSON_HAS_CXX11_RVALUE_REFS directly in clang (#617)
* Make GenericSchemaDocument constructor explicit (#674)
* Optimize FindMember when use std::string (#690)
## [1.0.2] - 2015-05-14
### Added
* Add Value::XXXMember(...) overloads for std::string (#335)
### Fixed
* Include rapidjson.h for all internal/error headers.
* Parsing some numbers incorrectly in full-precision mode (`kFullPrecisionParseFlag`) (#342)
* Fix some numbers parsed incorrectly (#336)
* Fix alignment of 64bit platforms (#328)
* Fix MemoryPoolAllocator::Clear() to clear user-buffer (0691502573f1afd3341073dd24b12c3db20fbde4)
### Changed
* CMakeLists for include as a thirdparty in projects (#334, #337)
* Change Document::ParseStream() to use stack allocator for Reader (ffbe38614732af8e0b3abdc8b50071f386a4a685)
## [1.0.1] - 2015-04-25
### Added
* Changelog following [Keep a CHANGELOG](https://github.com/olivierlacan/keep-a-changelog) suggestions.
### Fixed
* Parsing of some numbers (e.g. "1e-00011111111111") causing assertion (#314).
* Visual C++ 32-bit compilation error in `diyfp.h` (#317).
## [1.0.0] - 2015-04-22
### Added
* 100% [Coverall](https://coveralls.io/r/miloyip/rapidjson?branch=master) coverage.
* Version macros (#311)
### Fixed
* A bug in trimming long number sequence (4824f12efbf01af72b8cb6fc96fae7b097b73015).
* Double quote in unicode escape (#288).
* Negative zero roundtrip (double only) (#289).
* Standardize behavior of `memcpy()` and `malloc()` (0c5c1538dcfc7f160e5a4aa208ddf092c787be5a, #305, 0e8bbe5e3ef375e7f052f556878be0bd79e9062d).
### Removed
* Remove an invalid `Document::ParseInsitu()` API (e7f1c6dd08b522cfcf9aed58a333bd9a0c0ccbeb).
## 1.0-beta - 2015-04-8
### Added
* RFC 7159 (#101)
* Optional Iterative Parser (#76)
* Deep-copy values (#20)
* Error code and message (#27)
* ASCII Encoding (#70)
* `kParseStopWhenDoneFlag` (#83)
* `kParseFullPrecisionFlag` (881c91d696f06b7f302af6d04ec14dd08db66ceb)
* Add `Key()` to handler concept (#134)
* C++11 compatibility and support (#128)
* Optimized number-to-string and vice versa conversions (#137, #80)
* Short-String Optimization (#131)
* Local stream optimization by traits (#32)
* Travis & Appveyor Continuous Integration, with Valgrind verification (#24, #242)
* Redo all documentation (English, Simplified Chinese)
### Changed
* Copyright ownership transfered to THL A29 Limited (a Tencent company).
* Migrating from Premake to CMAKE (#192)
* Resolve all warning reports
### Removed
* Remove other JSON libraries for performance comparison (#180)
## 0.11 - 2012-11-16
## 0.1 - 2011-11-18
[Unreleased]: https://github.com/miloyip/rapidjson/compare/v1.1.0...HEAD
[1.1.0]: https://github.com/miloyip/rapidjson/compare/v1.0.2...v1.1.0
[1.0.2]: https://github.com/miloyip/rapidjson/compare/v1.0.1...v1.0.2
[1.0.1]: https://github.com/miloyip/rapidjson/compare/v1.0.0...v1.0.1
[1.0.0]: https://github.com/miloyip/rapidjson/compare/v1.0-beta...v1.0.0 | {
"source": "yandex/perforator",
"title": "contrib/libs/rapidjson/CHANGELOG.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/rapidjson/CHANGELOG.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 6816
} |
[Latest release](https://github.com/miloyip/rapidjson/releases/latest)
[MIT license](https://opensource.org/licenses/MIT)
## A fast JSON parser/generator for C++ with both SAX/DOM style API
Tencent is pleased to support the open source community by making RapidJSON available.
Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip. All rights reserved.
* [RapidJSON GitHub](https://github.com/miloyip/rapidjson/)
* RapidJSON Documentation
* [English](http://rapidjson.org/)
* [简体中文](http://rapidjson.org/zh-cn/)
* [GitBook](https://www.gitbook.com/book/miloyip/rapidjson/) with downloadable PDF/EPUB/MOBI, without API reference.
## Build status
| [Linux][lin-link] | [Windows][win-link] | [Coveralls][cov-link] |
| :---------------: | :-----------------: | :-------------------: |
| ![lin-badge] | ![win-badge] | ![cov-badge] |
[lin-badge]: https://travis-ci.org/miloyip/rapidjson.png?branch=master "Travis build status"
[lin-link]: https://travis-ci.org/miloyip/rapidjson "Travis build status"
[win-badge]: https://ci.appveyor.com/api/projects/status/u658dcuwxo14a8m9/branch/master "AppVeyor build status"
[win-link]: https://ci.appveyor.com/project/miloyip/rapidjson/branch/master "AppVeyor build status"
[cov-badge]: https://coveralls.io/repos/miloyip/rapidjson/badge.png?branch=master
[cov-link]: https://coveralls.io/r/miloyip/rapidjson?branch=master
## Introduction
RapidJSON is a JSON parser and generator for C++. It was inspired by [RapidXml](http://rapidxml.sourceforge.net/).
* RapidJSON is **small** but **complete**. It supports both SAX and DOM style API. The SAX parser is only a half thousand lines of code.
* RapidJSON is **fast**. Its performance can be comparable to `strlen()`. It also optionally supports SSE2/SSE4.2 for acceleration.
* RapidJSON is **self-contained** and **header-only**. It does not depend on external libraries such as BOOST. It even does not depend on STL.
* RapidJSON is **memory-friendly**. Each JSON value occupies exactly 16 bytes for most 32/64-bit machines (excluding text string). By default it uses a fast memory allocator, and the parser allocates memory compactly during parsing.
* RapidJSON is **Unicode-friendly**. It supports UTF-8, UTF-16, UTF-32 (LE & BE), and their detection, validation and transcoding internally. For example, you can read a UTF-8 file and let RapidJSON transcode the JSON strings into UTF-16 in the DOM. It also supports surrogates and "\u0000" (null character).
More features can be read [here](doc/features.md).
JSON (JavaScript Object Notation) is a light-weight data exchange format. RapidJSON should be in full compliance with RFC7159/ECMA-404, with optional support for relaxed syntax. More information about JSON can be obtained at
* [Introducing JSON](http://json.org/)
* [RFC7159: The JavaScript Object Notation (JSON) Data Interchange Format](http://www.ietf.org/rfc/rfc7159.txt)
* [Standard ECMA-404: The JSON Data Interchange Format](http://www.ecma-international.org/publications/standards/Ecma-404.htm)
## Highlights in v1.1 (2016-8-25)
* Added [JSON Pointer](doc/pointer.md)
* Added [JSON Schema](doc/schema.md)
* Added [relaxed JSON syntax](doc/dom.md) (comment, trailing comma, NaN/Infinity)
* Iterating array/object with [C++11 Range-based for loop](doc/tutorial.md)
* Reduce memory overhead of each `Value` from 24 bytes to 16 bytes in x86-64 architecture.
For other changes please refer to [change log](CHANGELOG.md).
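Of these additions, JSON Pointer is the quickest to try out. The following is a minimal sketch (not taken from the official docs) using `rapidjson/pointer.h`; the document contents are made up for illustration.

```c++
// A minimal JSON Pointer sketch; Pointer::Get/Set are declared in rapidjson/pointer.h.
#include <iostream>
#include "rapidjson/document.h"
#include "rapidjson/pointer.h"
using namespace rapidjson;

int main() {
  Document d;
  d.Parse("{\"project\":\"rapidjson\",\"stars\":10}");

  // Read a value by pointer; Get() returns nullptr if the path does not exist.
  if (const Value* stars = Pointer("/stars").Get(d))
    std::cout << "stars = " << stars->GetInt() << std::endl;

  // Create (or overwrite) a value by pointer, allocating from the document.
  Pointer("/maintainer/name").Set(d, "Milo Yip");
  return 0;
}
```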
## Compatibility
RapidJSON is cross-platform. Some platform/compiler combinations which have been tested are shown as follows.
* Visual C++ 2008/2010/2013 on Windows (32/64-bit)
* GNU C++ 3.8.x on Cygwin
* Clang 3.4 on Mac OS X (32/64-bit) and iOS
* Clang 3.4 on Android NDK
Users can build and run the unit tests on their platform/compiler.
## Installation
RapidJSON is a header-only C++ library. Just copy the `include/rapidjson` folder to system or project's include path.
RapidJSON uses following software as its dependencies:
* [CMake](https://cmake.org/) as a general build tool
* (optional)[Doxygen](http://www.doxygen.org) to build documentation
* (optional)[googletest](https://github.com/google/googletest) for unit and performance testing
To generate user documentation and run tests please proceed with the steps below:
1. Execute `git submodule update --init` to get the files of thirdparty submodules (google test).
2. Create directory called `build` in rapidjson source directory.
3. Change to `build` directory and run `cmake ..` command to configure your build. Windows users can do the same with cmake-gui application.
4. On Windows, build the solution found in the build directory. On Linux, run `make` from the build directory.
On a successful build you will find compiled test and example binaries in the `bin`
directory. The generated documentation will be available in `doc/html`
directory of the build tree. To run tests after finished build please run `make
test` or `ctest` from your build tree. You can get detailed output using `ctest
-V` command.
It is possible to install library system-wide by running `make install` command
from the build tree with administrative privileges. This will install all files
according to system preferences. Once RapidJSON is installed, it is possible
to use it from other CMake projects by adding `find_package(RapidJSON)` line to
your CMakeLists.txt.
## Usage at a glance
This simple example parses a JSON string into a document (DOM), makes a simple modification to the DOM, and finally stringifies the DOM back into a JSON string.
~~~~~~~~~~cpp
// rapidjson/example/simpledom/simpledom.cpp
#include "rapidjson/document.h"
#include "rapidjson/writer.h"
#include "rapidjson/stringbuffer.h"
#include <iostream>

using namespace rapidjson;

int main() {
    // 1. Parse a JSON string into DOM.
    const char* json = "{\"project\":\"rapidjson\",\"stars\":10}";
    Document d;
    d.Parse(json);

    // 2. Modify it by DOM.
    Value& s = d["stars"];
    s.SetInt(s.GetInt() + 1);

    // 3. Stringify the DOM
    StringBuffer buffer;
    Writer<StringBuffer> writer(buffer);
    d.Accept(writer);

    // Output {"project":"rapidjson","stars":11}
    std::cout << buffer.GetString() << std::endl;
    return 0;
}
~~~~~~~~~~
Note that this example did not handle potential errors.
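To handle such errors, a minimal sketch (using the same DOM API) checks `HasParseError()` and reports the error code and offset; `GetParseError_En()` comes from `rapidjson/error/en.h`.

```c++
// A minimal error-handling sketch for DOM parsing.
#include <iostream>
#include "rapidjson/document.h"
#include "rapidjson/error/en.h"
using namespace rapidjson;

int main() {
  Document d;
  d.Parse("{\"project\":\"rapidjson\",");  // deliberately truncated JSON
  if (d.HasParseError()) {
    std::cerr << "Parse error at offset " << d.GetErrorOffset() << ": "
              << GetParseError_En(d.GetParseError()) << std::endl;
    return 1;
  }
  return 0;
}
```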
The following diagram shows the process.

More [examples](https://github.com/miloyip/rapidjson/tree/master/example) are available:
* DOM API
* [tutorial](https://github.com/miloyip/rapidjson/blob/master/example/tutorial/tutorial.cpp): Basic usage of DOM API.
* SAX API
* [simplereader](https://github.com/miloyip/rapidjson/blob/master/example/simplereader/simplereader.cpp): Dumps all SAX events while parsing a JSON by `Reader`.
* [condense](https://github.com/miloyip/rapidjson/blob/master/example/condense/condense.cpp): A command line tool to rewrite a JSON, with all whitespaces removed.
* [pretty](https://github.com/miloyip/rapidjson/blob/master/example/pretty/pretty.cpp): A command line tool to rewrite a JSON with indents and newlines by `PrettyWriter`.
* [capitalize](https://github.com/miloyip/rapidjson/blob/master/example/capitalize/capitalize.cpp): A command line tool to capitalize strings in JSON.
* [messagereader](https://github.com/miloyip/rapidjson/blob/master/example/messagereader/messagereader.cpp): Parse a JSON message with SAX API.
* [serialize](https://github.com/miloyip/rapidjson/blob/master/example/serialize/serialize.cpp): Serialize a C++ object into JSON with SAX API.
* [jsonx](https://github.com/miloyip/rapidjson/blob/master/example/jsonx/jsonx.cpp): Implements a `JsonxWriter` which stringify SAX events into [JSONx](https://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.1.0/com.ibm.dp.doc/json_jsonx.html) (a kind of XML) format. The example is a command line tool which converts input JSON into JSONx format.
* Schema
* [schemavalidator](https://github.com/miloyip/rapidjson/blob/master/example/schemavalidator/schemavalidator.cpp) : A command line tool to validate a JSON with a JSON schema.
* Advanced
* [prettyauto](https://github.com/miloyip/rapidjson/blob/master/example/prettyauto/prettyauto.cpp): A modified version of [pretty](https://github.com/miloyip/rapidjson/blob/master/example/pretty/pretty.cpp) to automatically handle JSON with any UTF encodings.
* [parsebyparts](https://github.com/miloyip/rapidjson/blob/master/example/parsebyparts/parsebyparts.cpp): Implements an `AsyncDocumentParser` which can parse JSON in parts, using C++11 thread.
* [filterkey](https://github.com/miloyip/rapidjson/blob/master/example/filterkey/filterkey.cpp): A command line tool to remove all values with user-specified key.
* [filterkeydom](https://github.com/miloyip/rapidjson/blob/master/example/filterkeydom/filterkeydom.cpp): Same tool as above, but it demonstrates how to use a generator to populate a `Document`. | {
"source": "yandex/perforator",
"title": "contrib/libs/rapidjson/readme.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/rapidjson/readme.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 8904
} |
RE2 uses Gerrit instead of GitHub pull requests.
See the [Contribute](https://github.com/google/re2/wiki/Contribute) wiki page. | {
"source": "yandex/perforator",
"title": "contrib/libs/re2/CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/re2/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 127
} |
To report a security issue, please use https://g.co/vulnz. We use
https://g.co/vulnz for our intake, and do coordination and disclosure here on
GitHub (including using GitHub Security Advisory). The Google Security Team will
respond within 5 working days of your report on https://g.co/vulnz. | {
"source": "yandex/perforator",
"title": "contrib/libs/re2/SECURITY.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/re2/SECURITY.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 292
} |
# How to Contribute
We'd love to accept your patches and contributions to this project. There are
just a few small guidelines you need to follow.
## Contributor License Agreement
Contributions to this project must be accompanied by a Contributor License
Agreement. You (or your employer) retain the copyright to your contribution;
this simply gives us permission to use and redistribute your contributions as
part of the project. Head over to <https://cla.developers.google.com/> to see
your current agreements on file or to sign a new one.
You generally only need to submit a CLA once, so if you've already submitted one
(even if it was for a different project), you probably don't need to do it
again.
## Code Reviews
All submissions, including submissions by project members, require review. We
use GitHub pull requests for this purpose. Consult
[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
information on using pull requests.
See [the README](README.md#contributing-to-the-snappy-project) for areas
where we are likely to accept external contributions.
## Community Guidelines
This project follows [Google's Open Source Community
Guidelines](https://opensource.google/conduct/). | {
"source": "yandex/perforator",
"title": "contrib/libs/snappy/CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/snappy/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1224
} |
Snappy, a fast compressor/decompressor.
[](https://github.com/google/snappy/actions/workflows/build.yml)
Introduction
============
Snappy is a compression/decompression library. It does not aim for maximum
compression, or compatibility with any other compression library; instead,
it aims for very high speeds and reasonable compression. For instance,
compared to the fastest mode of zlib, Snappy is an order of magnitude faster
for most inputs, but the resulting compressed files are anywhere from 20% to
100% bigger. (For more information, see "Performance", below.)
Snappy has the following properties:
* Fast: Compression speeds at 250 MB/sec and beyond, with no assembler code.
See "Performance" below.
* Stable: Over the last few years, Snappy has compressed and decompressed
petabytes of data in Google's production environment. The Snappy bitstream
format is stable and will not change between versions.
* Robust: The Snappy decompressor is designed not to crash in the face of
corrupted or malicious input.
* Free and open source software: Snappy is licensed under a BSD-type license.
For more information, see the included COPYING file.
Snappy has previously been called "Zippy" in some Google presentations
and the like.
Performance
===========
Snappy is intended to be fast. On a single core of a Core i7 processor
in 64-bit mode, it compresses at about 250 MB/sec or more and decompresses at
about 500 MB/sec or more. (These numbers are for the slowest inputs in our
benchmark suite; others are much faster.) In our tests, Snappy usually
is faster than algorithms in the same class (e.g. LZO, LZF, QuickLZ,
etc.) while achieving comparable compression ratios.
Typical compression ratios (based on the benchmark suite) are about 1.5-1.7x
for plain text, about 2-4x for HTML, and of course 1.0x for JPEGs, PNGs and
other already-compressed data. Similar numbers for zlib in its fastest mode
are 2.6-2.8x, 3-7x and 1.0x, respectively. More sophisticated algorithms are
capable of achieving yet higher compression rates, although usually at the
expense of speed. Of course, compression ratio will vary significantly with
the input.
Although Snappy should be fairly portable, it is primarily optimized
for 64-bit x86-compatible processors, and may run slower in other environments.
In particular:
- Snappy uses 64-bit operations in several places to process more data at
once than would otherwise be possible.
- Snappy assumes unaligned 32 and 64-bit loads and stores are cheap.
On some platforms, these must be emulated with single-byte loads
and stores, which is much slower.
- Snappy assumes little-endian throughout, and needs to byte-swap data in
several places if running on a big-endian platform.
Experience has shown that even heavily tuned code can be improved.
Performance optimizations, whether for 64-bit x86 or other platforms,
are of course most welcome; see "Contact", below.
Building
========
You need the CMake version specified in [CMakeLists.txt](./CMakeLists.txt)
or later to build:
```bash
git submodule update --init
mkdir build
cd build && cmake ../ && make
```
Usage
=====
Note that Snappy, both the implementation and the main interface,
is written in C++. However, several third-party bindings to other languages
are available; see the [home page](docs/README.md) for more information.
Also, if you want to use Snappy from C code, you can use the included C
bindings in snappy-c.h.
To use Snappy from your own C++ program, include the file "snappy.h" from
your calling file, and link against the compiled library.
There are many ways to call Snappy, but the simplest possible is
```c++
snappy::Compress(input.data(), input.size(), &output);
```
and similarly
```c++
snappy::Uncompress(input.data(), input.size(), &output);
```
where "input" and "output" are both instances of std::string.
There are other interfaces that are more flexible in various ways, including
support for custom (non-array) input sources. See the header file for more
information.
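For a self-contained picture, here is a small round-trip sketch using the simple interface above. It assumes libsnappy is installed and linked (e.g. `-lsnappy`); the sample data is made up.

```c++
// Compress then uncompress a buffer and verify the round trip.
#include <cassert>
#include <iostream>
#include <string>
#include "snappy.h"

int main() {
  const std::string input(1000, 'x');  // highly compressible sample data

  std::string compressed;
  snappy::Compress(input.data(), input.size(), &compressed);

  std::string restored;
  const bool ok = snappy::Uncompress(compressed.data(), compressed.size(), &restored);
  assert(ok && restored == input);

  std::cout << input.size() << " -> " << compressed.size() << " bytes" << std::endl;
  return 0;
}
```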
Tests and benchmarks
====================
When you compile Snappy, the following binaries are compiled in addition to the
library itself. You do not need them to use the compressor from your own
library, but they are useful for Snappy development.
* `snappy_benchmark` contains microbenchmarks used to tune compression and
decompression performance.
* `snappy_unittests` contains unit tests, verifying correctness on your machine
in various scenarios.
* `snappy_test_tool` can benchmark Snappy against a few other compression
libraries (zlib, LZO, LZF, and QuickLZ), if they were detected at configure
time. To benchmark using a given file, give the compression algorithm you want
to test Snappy against (e.g. --zlib) and then a list of one or more file names
on the command line.
If you want to change or optimize Snappy, please run the tests and benchmarks to
verify you have not broken anything.
The testdata/ directory contains the files used by the microbenchmarks, which
should provide a reasonably balanced starting point for benchmarking. (Note that
baddata[1-3].snappy are not intended as benchmarks; they are used to verify
correctness in the presence of corrupted data in the unit test.)
Contributing to the Snappy Project
==================================
In addition to the aims listed at the top of the [README](README.md) Snappy
explicitly supports the following:
1. C++11
2. Clang (gcc and MSVC are best-effort).
3. Low level optimizations (e.g. assembly or equivalent intrinsics) for:
1. [x86](https://en.wikipedia.org/wiki/X86)
2. [x86-64](https://en.wikipedia.org/wiki/X86-64)
3. ARMv7 (32-bit)
4. ARMv8 (AArch64)
4. Supports only the Snappy compression scheme as described in
[format_description.txt](format_description.txt).
5. CMake for building
Changes adding features or dependencies outside of the core area of focus listed
above might not be accepted. If in doubt post a message to the
[Snappy discussion mailing list](https://groups.google.com/g/snappy-compression).
We are unlikely to accept contributions to the build configuration files, such
as `CMakeLists.txt`. We are focused on maintaining a build configuration that
allows us to test that the project works in a few supported configurations
inside Google. We are not currently interested in supporting other requirements,
such as different operating systems, compilers, or build systems.
Contact
=======
Snappy is distributed through GitHub. For the latest version and other
information, see https://github.com/google/snappy. | {
"source": "yandex/perforator",
"title": "contrib/libs/snappy/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/snappy/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 6681
} |
The author disclaims copyright to this source code. In place of
a legal notice, here is a blessing:
* May you do good and not evil.
* May you find forgiveness for yourself and forgive others.
* May you share freely, never taking more than you give. | {
"source": "yandex/perforator",
"title": "contrib/libs/sqlite3/LICENSE.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/sqlite3/LICENSE.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 262
} |
<h1 align="center">SQLite Source Repository</h1>
This repository contains the complete source code for the
[SQLite database engine](https://sqlite.org/). Some test scripts
are also included. However, many other test scripts
and most of the documentation are managed separately.
## Version Control
SQLite sources are managed using the
[Fossil](https://www.fossil-scm.org/), a distributed version control system
that was specifically designed and written to support SQLite development.
The [Fossil repository](https://sqlite.org/src/timeline) contains the urtext.
If you are reading this on GitHub or some other Git repository or service,
then you are looking at a mirror. The names of check-ins and
other artifacts in a Git mirror are different from the official
names for those objects. The official names for check-ins are
found in a footer on the check-in comment for authorized mirrors.
The official check-in name can also be seen in the `manifest.uuid` file
in the root of the tree. Always use the official name, not the
Git-name, when communicating about an SQLite check-in.
If you pulled your SQLite source code from a secondary source and want to
verify its integrity, there are hints on how to do that in the
[Verifying Code Authenticity](#vauth) section below.
## Obtaining The Code
If you do not want to use Fossil, you can download tarballs or ZIP
archives or [SQLite archives](https://sqlite.org/cli.html#sqlar) as follows:
* Latest trunk check-in as
[Tarball](https://www.sqlite.org/src/tarball/sqlite.tar.gz),
[ZIP-archive](https://www.sqlite.org/src/zip/sqlite.zip), or
[SQLite-archive](https://www.sqlite.org/src/sqlar/sqlite.sqlar).
* Latest release as
[Tarball](https://www.sqlite.org/src/tarball/sqlite.tar.gz?r=release),
[ZIP-archive](https://www.sqlite.org/src/zip/sqlite.zip?r=release), or
[SQLite-archive](https://www.sqlite.org/src/sqlar/sqlite.sqlar?r=release).
* For other check-ins, substitute an appropriate branch name or
tag or hash prefix in place of "release" in the URLs of the previous
bullet. Or browse the [timeline](https://www.sqlite.org/src/timeline)
to locate the check-in desired, click on its information page link,
then click on the "Tarball" or "ZIP Archive" links on the information
page.
If you do want to use Fossil to check out the source tree,
first install Fossil version 2.0 or later.
(Source tarballs and precompiled binaries available
[here](https://www.fossil-scm.org/fossil/uv/download.html). Fossil is
a stand-alone program. To install, simply download or build the single
executable file and put that file someplace on your $PATH.)
Then run commands like this:
```bash
mkdir -p ~/sqlite ~/Fossils
cd ~/sqlite
fossil clone https://www.sqlite.org/src ~/Fossils/sqlite.fossil
fossil open ~/Fossils/sqlite.fossil
```
After setting up a repository using the steps above, you can always
update to the latest version using:
```bash
fossil update trunk    ;# latest trunk check-in
fossil update release  ;# latest official release
```
Or type "fossil ui" to get a web-based user interface.
## Compiling for Unix-like systems
First create a directory in which to place
the build products. It is recommended, but not required, that the
build directory be separate from the source directory. Cd into the
build directory and then from the build directory run the configure
script found at the root of the source tree. Then run "make".
For example:
```bash
tar xzf sqlite.tar.gz    ;# Unpack the source tree into "sqlite"
mkdir bld                ;# Build will occur in a sibling directory
cd bld                   ;# Change to the build directory
../sqlite/configure      ;# Run the configure script
make                     ;# Run the makefile.
make sqlite3.c           ;# Build the "amalgamation" source file
make test                ;# Run some tests (requires Tcl)
```
See the makefile for additional targets.
The configure script uses autoconf 2.61 and libtool. If the configure
script does not work out for you, there is a generic makefile named
"Makefile.linux-gcc" in the top directory of the source tree that you
can copy and edit to suit your needs. Comments on the generic makefile
show what changes are needed.
## Using MSVC for Windows systems
On Windows, all applicable build products can be compiled with MSVC.
First open the command prompt window associated with the desired compiler
version (e.g. "Developer Command Prompt for VS2013"). Next, use NMAKE
with the provided "Makefile.msc" to build one of the supported targets.
For example, from the parent directory of the source subtree named "sqlite":
```
mkdir bld
cd bld
nmake /f ..\sqlite\Makefile.msc TOP=..\sqlite
nmake /f ..\sqlite\Makefile.msc sqlite3.c TOP=..\sqlite
nmake /f ..\sqlite\Makefile.msc sqlite3.dll TOP=..\sqlite
nmake /f ..\sqlite\Makefile.msc sqlite3.exe TOP=..\sqlite
nmake /f ..\sqlite\Makefile.msc test TOP=..\sqlite
```
There are several build options that can be set via the NMAKE command
line. For example, to build for WinRT, simply add "FOR_WINRT=1" argument
to the "sqlite3.dll" command line above. When debugging into the SQLite
code, adding the "DEBUG=1" argument to one of the above command lines is
recommended.
SQLite does not require [Tcl](http://www.tcl.tk/) to run, but a Tcl installation
is required by the makefiles (including those for MSVC). SQLite contains
a lot of generated code and Tcl is used to do much of that code generation.
## Source Code Tour
Most of the core source files are in the **src/** subdirectory. The
**src/** folder also contains files used to build the "testfixture" test
harness. The names of the source files used by "testfixture" all begin
with "test".
The **src/** also contains the "shell.c" file
which is the main program for the "sqlite3.exe"
[command-line shell](https://sqlite.org/cli.html) and
the "tclsqlite.c" file which implements the
[Tcl bindings](https://sqlite.org/tclsqlite.html) for SQLite.
(Historical note: SQLite began as a Tcl
extension and only later escaped to the wild as an independent library.)
Test scripts and programs are found in the **test/** subdirectory.
Additional test code is found in other source repositories.
See [How SQLite Is Tested](http://www.sqlite.org/testing.html) for
additional information.
The **ext/** subdirectory contains code for extensions. The
Full-text search engine is in **ext/fts3**. The R-Tree engine is in
**ext/rtree**. The **ext/misc** subdirectory contains a number of
smaller, single-file extensions, such as a REGEXP operator.
The **tool/** subdirectory contains various scripts and programs used
for building generated source code files or for testing or for generating
accessory programs such as "sqlite3_analyzer(.exe)".
### Generated Source Code Files
Several of the C-language source files used by SQLite are generated from
other sources rather than being typed in manually by a programmer. This
section will summarize those automatically-generated files. To create all
of the automatically-generated files, simply run "make target_source".
The "target_source" make target will create a subdirectory "tsrc/" and
fill it with all the source files needed to build SQLite, both
manually-edited files and automatically-generated files.
The SQLite interface is defined by the **sqlite3.h** header file, which is
generated from src/sqlite.h.in, ./manifest.uuid, and ./VERSION. The
[Tcl script](http://www.tcl.tk) at tool/mksqlite3h.tcl does the conversion.
The manifest.uuid file contains the SHA3 hash of the particular check-in
and is used to generate the SQLITE\_SOURCE\_ID macro. The VERSION file
contains the current SQLite version number. The sqlite3.h header is really
just a copy of src/sqlite.h.in with the source-id and version number inserted
at just the right spots. Note that comment text in the sqlite3.h file is
used to generate much of the SQLite API documentation. The Tcl scripts
used to generate that documentation are in a separate source repository.
The SQL language parser is **parse.c** which is generated from a grammar in
the src/parse.y file. The conversion of "parse.y" into "parse.c" is done
by the [lemon](./doc/lemon.html) LALR(1) parser generator. The source code
for lemon is at tool/lemon.c. Lemon uses the tool/lempar.c file as a
template for generating its parser.
Lemon also generates the **parse.h** header file, at the same time it
generates parse.c.
The **opcodes.h** header file contains macros that define the numbers
corresponding to opcodes in the "VDBE" virtual machine. The opcodes.h
file is generated by scanning the src/vdbe.c source file. The
Tcl script at ./mkopcodeh.tcl does this scan and generates opcodes.h.
A second Tcl script, ./mkopcodec.tcl, then scans opcodes.h to generate
the **opcodes.c** source file, which contains a reverse mapping from
opcode-number to opcode-name that is used for EXPLAIN output.
The **keywordhash.h** header file contains the definition of a hash table
that maps SQL language keywords (ex: "CREATE", "SELECT", "INDEX", etc.) into
the numeric codes used by the parse.c parser. The keywordhash.h file is
generated by a C-language program at tool/mkkeywordhash.c.
The **pragma.h** header file contains various definitions used to parse
and implement the PRAGMA statements. The header is generated by a
script **tool/mkpragmatab.tcl**. If you want to add a new PRAGMA, edit
the **tool/mkpragmatab.tcl** file to insert the information needed by the
parser for your new PRAGMA, then run the script to regenerate the
**pragma.h** header file.
### The Amalgamation
All of the individual C source code and header files (both manually-edited
and automatically-generated) can be combined into a single big source file
**sqlite3.c** called "the amalgamation". The amalgamation is the recommended
way of using SQLite in a larger application. Combining all individual
source code files into a single big source code file allows the C compiler
to perform more cross-procedure analysis and generate better code. SQLite
runs about 5% faster when compiled from the amalgamation versus when compiled
from individual source files.
The amalgamation is generated from the tool/mksqlite3c.tcl Tcl script.
First, all of the individual source files must be gathered into the tsrc/
subdirectory (using the equivalent of "make target_source") then the
tool/mksqlite3c.tcl script is run to copy them all together in just the
right order while resolving internal "#include" references.
The amalgamation source file is more than 200K lines long. Some symbolic
debuggers (most notably MSVC) are unable to deal with files longer than 64K
lines. To work around this, a separate Tcl script, tool/split-sqlite3c.tcl,
can be run on the amalgamation to break it up into a single small C file
called **sqlite3-all.c** that does #include on about seven other files
named **sqlite3-1.c**, **sqlite3-2.c**, ..., **sqlite3-7.c**. In this way,
all of the source code is contained within a single translation unit so
that the compiler can do extra cross-procedure optimization, but no
individual source file exceeds 32K lines in length.
## How It All Fits Together
SQLite is modular in design.
See the [architectural description](http://www.sqlite.org/arch.html)
for details. Other documents that are useful in
helping to understand how SQLite works include the
[file format](http://www.sqlite.org/fileformat2.html) description,
the [virtual machine](http://www.sqlite.org/opcode.html) that runs
prepared statements, the description of
[how transactions work](http://www.sqlite.org/atomiccommit.html), and
the [overview of the query planner](http://www.sqlite.org/optoverview.html).
Years of effort have gone into optimizing SQLite, both
for small size and high performance. And optimizations tend to result in
complex code. So there is a lot of complexity in the current SQLite
implementation. It will not be the easiest library in the world to hack.
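Before digging into the key files below, it can help to see the public interface (defined by sqlite.h.in) in action. The following is only a minimal sketch, assuming you link against the amalgamation or an installed libsqlite3 (e.g. `-lsqlite3`); the table and SQL are made up for illustration.

```c++
// Open an in-memory database, run a couple of statements, and close it.
#include <cstdio>
#include "sqlite3.h"

int main() {
  sqlite3* db = nullptr;
  if (sqlite3_open(":memory:", &db) != SQLITE_OK) {
    std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
    return 1;
  }

  char* errmsg = nullptr;
  int rc = sqlite3_exec(db,
      "CREATE TABLE t(x INTEGER); INSERT INTO t VALUES(42);",
      nullptr, nullptr, &errmsg);
  if (rc != SQLITE_OK) {
    std::fprintf(stderr, "exec failed: %s\n", errmsg);
    sqlite3_free(errmsg);
  }

  sqlite3_close(db);
  return 0;
}
```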
Key files:
* **sqlite.h.in** - This file defines the public interface to the SQLite
library. Readers will need to be familiar with this interface before
trying to understand how the library works internally.
* **sqliteInt.h** - this header file defines many of the data objects
used internally by SQLite. In addition to "sqliteInt.h", some
subsystems have their own header files.
* **parse.y** - This file describes the LALR(1) grammar that SQLite uses
to parse SQL statements, and the actions that are taken at each step
in the parsing process.
* **vdbe.c** - This file implements the virtual machine that runs
prepared statements. There are various helper files whose names
begin with "vdbe". The VDBE has access to the vdbeInt.h header file
which defines internal data objects. The rest of SQLite interacts
with the VDBE through an interface defined by vdbe.h.
* **where.c** - This file (together with its helper files named
by "where*.c") analyzes the WHERE clause and generates
virtual machine code to run queries efficiently. This file is
sometimes called the "query optimizer". It has its own private
header file, whereInt.h, that defines data objects used internally.
* **btree.c** - This file contains the implementation of the B-Tree
storage engine used by SQLite. The interface to the rest of the system
is defined by "btree.h". The "btreeInt.h" header defines objects
used internally by btree.c and not published to the rest of the system.
* **pager.c** - This file contains the "pager" implementation, the
module that implements transactions. The "pager.h" header file
defines the interface between pager.c and the rest of the system.
* **os_unix.c** and **os_win.c** - These two files implement the interface
between SQLite and the underlying operating system using the run-time
pluggable VFS interface.
* **shell.c.in** - This file is not part of the core SQLite library. This
is the file that, when linked against sqlite3.a, generates the
"sqlite3.exe" command-line shell. The "shell.c.in" file is transformed
into "shell.c" as part of the build process.
* **tclsqlite.c** - This file implements the Tcl bindings for SQLite. It
is not part of the core SQLite library. But as most of the tests in this
repository are written in Tcl, the Tcl language bindings are important.
* **test*.c** - Files in the src/ folder that begin with "test" go into
building the "testfixture.exe" program. The testfixture.exe program is
an enhanced Tcl shell. The testfixture.exe program runs scripts in the
test/ folder to validate the core SQLite code. The testfixture program
(and some other test programs too) is built and run when you type
"make test".
* **ext/misc/json1.c** - This file implements the various JSON functions
that are built into SQLite.
There are many other source files. Each has a succinct header comment that
describes its purpose and role within the larger system.
<a name="vauth"></a>
## Verifying Code Authenticity
The `manifest` file at the root directory of the source tree
contains either a SHA3-256 hash (for newer files) or a SHA1 hash (for
older files) for every source file in the repository.
The name of the version of the entire source tree is just the
SHA3-256 hash of the `manifest` file itself, possibly with the
last line of that file omitted if the last line begins with
"`# Remove this line`".
The `manifest.uuid` file should contain the SHA3-256 hash of the
`manifest` file. If all of the above hash comparisons are correct, then
you can be confident that your source tree is authentic and unadulterated.
The format of the `manifest` file should be mostly self-explanatory, but
if you want details, they are available
[here](https://fossil-scm.org/fossil/doc/trunk/www/fileformat.wiki#manifest).
## Contacts
The main SQLite website is [http://www.sqlite.org/](http://www.sqlite.org/)
with geographically distributed backups at
[http://www2.sqlite.org/](http://www2.sqlite.org) and
[http://www3.sqlite.org/](http://www3.sqlite.org). | {
"source": "yandex/perforator",
"title": "contrib/libs/sqlite3/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/sqlite3/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 16325
} |
xxHash - Extremely fast hash algorithm
======================================
xxHash is an Extremely fast Hash algorithm, processing at RAM speed limits.
Code is highly portable, and produces hashes identical across all platforms (little / big endian).
The library includes the following algorithms :
- XXH32 : generates 32-bit hashes, using 32-bit arithmetic
- XXH64 : generates 64-bit hashes, using 64-bit arithmetic
- XXH3 (since `v0.8.0`): generates 64 or 128-bit hashes, using vectorized arithmetic.
The 128-bit variant is called XXH128.
All variants successfully complete the [SMHasher](https://code.google.com/p/smhasher/wiki/SMHasher) test suite
which evaluates the quality of hash functions (collision, dispersion and randomness).
Additional tests, which evaluate more thoroughly speed and collision properties of 64-bit hashes, [are also provided](https://github.com/Cyan4973/xxHash/tree/dev/tests).
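For orientation, the one-shot functions for the variants listed above look like this. It is a minimal sketch, assuming you link against libxxhash (or compile xxhash.c into your build); the sample data is made up.

```c++
// Hash a short buffer with XXH32, XXH64 and XXH3 (all declared in xxhash.h).
#include <cstdio>
#include <cstring>
#include "xxhash.h"

int main() {
  const char* data = "hello, xxHash";
  const size_t len = std::strlen(data);

  const XXH32_hash_t h32 = XXH32(data, len, /*seed=*/0);
  const XXH64_hash_t h64 = XXH64(data, len, /*seed=*/0);
  const XXH64_hash_t h3  = XXH3_64bits(data, len);  // since v0.8.0

  std::printf("XXH32=%08x XXH64=%016llx XXH3=%016llx\n",
              (unsigned)h32, (unsigned long long)h64, (unsigned long long)h3);
  return 0;
}
```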
|Branch |Status |
|------------|---------|
|release | [](https://github.com/Cyan4973/xxHash/actions?query=branch%3Arelease+) |
|dev | [](https://github.com/Cyan4973/xxHash/actions?query=branch%3Adev+) |
Benchmarks
-------------------------
The benchmarked reference system uses an Intel i7-9700K cpu, and runs Ubuntu x64 20.04.
The [open source benchmark program] is compiled with `clang` v10.0 using `-O3` flag.
| Hash Name | Width | Bandwidth (GB/s) | Small Data Velocity | Quality | Comment |
| --------- | ----- | ---------------- | ----- | --- | --- |
| __XXH3__ (SSE2) | 64 | 31.5 GB/s | 133.1 | 10
| __XXH128__ (SSE2) | 128 | 29.6 GB/s | 118.1 | 10
| _RAM sequential read_ | N/A | 28.0 GB/s | N/A | N/A | _for reference_
| City64 | 64 | 22.0 GB/s | 76.6 | 10
| T1ha2 | 64 | 22.0 GB/s | 99.0 | 9 | Slightly worse [collisions]
| City128 | 128 | 21.7 GB/s | 57.7 | 10
| __XXH64__ | 64 | 19.4 GB/s | 71.0 | 10
| SpookyHash | 64 | 19.3 GB/s | 53.2 | 10
| Mum | 64 | 18.0 GB/s | 67.0 | 9 | Slightly worse [collisions]
| __XXH32__ | 32 | 9.7 GB/s | 71.9 | 10
| City32 | 32 | 9.1 GB/s | 66.0 | 10
| Murmur3 | 32 | 3.9 GB/s | 56.1 | 10
| SipHash | 64 | 3.0 GB/s | 43.2 | 10
| FNV64 | 64 | 1.2 GB/s | 62.7 | 5 | Poor avalanche properties
| Blake2 | 256 | 1.1 GB/s | 5.1 | 10 | Cryptographic
| SHA1 | 160 | 0.8 GB/s | 5.6 | 10 | Cryptographic but broken
| MD5 | 128 | 0.6 GB/s | 7.8 | 10 | Cryptographic but broken
[open source benchmark program]: https://github.com/Cyan4973/xxHash/tree/release/tests/bench
[collisions]: https://github.com/Cyan4973/xxHash/wiki/Collision-ratio-comparison#collision-study
note 1: Small data velocity is a _rough_ evaluation of algorithm's efficiency on small data. For more detailed analysis, please refer to next paragraph.
note 2: some algorithms feature _faster than RAM_ speed. In which case, they can only reach their full speed potential when input is already in CPU cache (L3 or better). Otherwise, they max out on RAM speed limit.
### Small data
Performance on large data is only one part of the picture.
Hashing is also very useful in constructions like hash tables and bloom filters.
In these use cases, it's frequent to hash a lot of small data (starting at a few bytes).
Algorithm's performance can be very different for such scenarios, since parts of the algorithm,
such as initialization or finalization, become fixed cost.
The impact of branch mis-prediction also becomes much more present.
XXH3 has been designed for excellent performance on both long and small inputs,
which can be observed in the following graph:

For a more detailed analysis, please visit the wiki :
https://github.com/Cyan4973/xxHash/wiki/Performance-comparison#benchmarks-concentrating-on-small-data-
Quality
-------------------------
Speed is not the only property that matters.
Produced hash values must respect excellent dispersion and randomness properties,
so that any sub-section of it can be used to maximally spread out a table or index,
as well as reduce the amount of collisions to the minimal theoretical level, following the [birthday paradox].
`xxHash` has been tested with Austin Appleby's excellent SMHasher test suite,
and passes all tests, ensuring reasonable quality levels.
It also passes extended tests from [newer forks of SMHasher], featuring additional scenarios and conditions.
Finally, xxHash provides its own [massive collision tester](https://github.com/Cyan4973/xxHash/tree/dev/tests/collisions),
able to generate and compare billions of hashes to test the limits of 64-bit hash algorithms.
On this front too, xxHash features good results, in line with the [birthday paradox].
A more detailed analysis is documented [in the wiki](https://github.com/Cyan4973/xxHash/wiki/Collision-ratio-comparison).
[birthday paradox]: https://en.wikipedia.org/wiki/Birthday_problem
[newer forks of SMHasher]: https://github.com/rurban/smhasher
### Build modifiers
The following macros can be set at compilation time to modify `libxxhash`'s behavior. They are generally disabled by default.
- `XXH_INLINE_ALL`: Make all functions `inline`, with the implementation directly included within `xxhash.h`.
Inlining functions is beneficial for speed, notably for small keys.
It's _extremely effective_ when the key's length is expressed as _a compile-time constant_,
with performance improvements observed in the +200% range.
See [this article](https://fastcompression.blogspot.com/2018/03/xxhash-for-small-keys-impressive-power.html) for details, and the minimal sketch following this list.
- `XXH_PRIVATE_API`: same outcome as `XXH_INLINE_ALL`. Still available for legacy support.
The name underlines that `XXH_*` symbol names will not be exported.
- `XXH_STATIC_LINKING_ONLY`: gives access to internal state declaration, required for static allocation.
Incompatible with dynamic linking, due to risks of ABI changes.
- `XXH_NAMESPACE`: Prefixes all symbols with the value of `XXH_NAMESPACE`.
The value of this macro must use only characters that are valid in C symbol names.
Useful to avoid symbol naming collisions,
in case of multiple inclusions of xxHash's source code.
Client applications still use the regular function names,
as symbols are automatically translated through `xxhash.h`.
- `XXH_FORCE_ALIGN_CHECK`: Use a faster direct read path when input is aligned.
This option can result in a dramatic performance improvement on architectures unable to load memory from unaligned addresses,
when the input to hash happens to be aligned on 32 or 64-bit boundaries.
It is (slightly) detrimental on platforms with good unaligned memory access performance (same instruction for both aligned and unaligned accesses).
This option is automatically disabled on `x86`, `x64` and `aarch64`, and enabled on all other platforms.
- `XXH_FORCE_MEMORY_ACCESS`: The default method `0` uses a portable `memcpy()` notation.
Method `1` uses a gcc-specific `packed` attribute, which can provide better performance for some targets.
Method `2` forces unaligned reads, which is not standard compliant, but might sometimes be the only way to extract better read performance.
Method `3` uses a byteshift operation, which is best for old compilers which don't inline `memcpy()` or big-endian systems without a byteswap instruction.
- `XXH_CPU_LITTLE_ENDIAN`: By default, endianness is determined by a runtime test resolved at compile time.
If, for some reason, the compiler cannot simplify the runtime test, it can cost performance.
It's possible to skip auto-detection and simply state that the architecture is little-endian by setting this macro to 1.
Setting it to 0 states big-endian.
- `XXH_ENABLE_AUTOVECTORIZE`: Auto-vectorization may be triggered for XXH32 and XXH64, depending on cpu vector capabilities and compiler version.
Note: auto-vectorization tends to be triggered more easily with recent versions of `clang`.
For XXH32, SSE4.1 or equivalent (NEON) is enough, while XXH64 requires AVX512.
Unfortunately, auto-vectorization is generally detrimental to XXH performance.
For this reason, the xxhash source code tries to prevent auto-vectorization by default.
That being said, systems evolve, and this conclusion is not set in stone.
For example, it has been reported that recent Zen4 cpus are more likely to improve performance with vectorization.
Therefore, should you prefer or want to test vectorized code, you can enable this flag:
it will remove the no-vectorization protection code, thus making it more likely for XXH32 and XXH64 to be auto-vectorized.
- `XXH32_ENDJMP`: Replace the multi-branch finalization stage of XXH32 with a single jump.
This is generally undesirable for performance, especially when hashing inputs of random sizes.
But depending on exact architecture and compiler, a jump might provide slightly better performance on small inputs. Disabled by default.
- `XXH_IMPORT`: MSVC specific: should only be defined for dynamic linking, as it prevents linkage errors.
- `XXH_NO_STDLIB`: Disable invocation of `<stdlib.h>` functions, notably `malloc()` and `free()`.
`libxxhash`'s `XXH*_createState()` will always fail and return `NULL`.
But one-shot hashing (like `XXH32()`) or streaming using statically allocated states
still work as expected.
This build flag is useful for embedded environments without dynamic allocation.
- `XXH_DEBUGLEVEL` : When set to any value >= 1, enables `assert()` statements.
This (slightly) slows down execution, but may help finding bugs during debugging sessions.
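
As a minimal, self-contained sketch of the `XXH_INLINE_ALL` modifier described above (the message and seed are arbitrary values chosen for the example):

```C
/* Minimal sketch: define XXH_INLINE_ALL before including xxhash.h,
 * so the whole implementation is inlined into this translation unit.
 * The message and seed below are arbitrary example values. */
#define XXH_INLINE_ALL
#include "xxhash.h"

#include <stdio.h>

int main(void)
{
    const char msg[] = "xxhash inline example";
    /* sizeof(msg)-1 is a compile-time constant: the case where inlining helps most */
    XXH64_hash_t const h = XXH64(msg, sizeof(msg) - 1, 0 /* seed */);
    printf("XXH64 = %016llx\n", (unsigned long long)h);
    return 0;
}
```

Compiling this single file (for example with `cc -O3 example.c`) requires no link against `libxxhash`, since the implementation travels with the header when `XXH_INLINE_ALL` is defined.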
#### Binary size control
- `XXH_NO_XXH3` : removes symbols related to `XXH3` (both 64 & 128 bits) from generated binary.
`XXH3` is by far the largest contributor to `libxxhash` size,
so it's useful to reduce binary size for applications which do not employ `XXH3`.
- `XXH_NO_LONG_LONG`: removes compilation of algorithms relying on 64-bit `long long` types
which include `XXH3` and `XXH64`.
Only `XXH32` will be compiled.
Useful for targets (architectures and compilers) without 64-bit support.
- `XXH_NO_STREAM`: Disables the streaming API, limiting the library to single shot variants only.
- `XXH_NO_INLINE_HINTS`: By default, xxHash uses `__attribute__((always_inline))` and `__forceinline` to improve performance at the cost of code size.
Defining this macro to 1 will mark all internal functions as `static`, allowing the compiler to decide whether to inline a function or not.
This is very useful when optimizing for smallest binary size,
and is automatically defined when compiling with `-O0`, `-Os`, `-Oz`, or `-fno-inline` on GCC and Clang.
It may also be required to successfully compile using `-Og`, depending on compiler version.
- `XXH_SIZE_OPT`: `0`: default, optimize for speed
`1`: default for `-Os` and `-Oz`: disables some speed hacks for size optimization
`2`: makes code as small as possible, performance may cry
#### Build modifiers specific for XXH3
- `XXH_VECTOR` : manually select a vector instruction set (default: auto-selected at compilation time). Available instruction sets are `XXH_SCALAR`, `XXH_SSE2`, `XXH_AVX2`, `XXH_AVX512`, `XXH_NEON` and `XXH_VSX`. Compiler may require additional flags to ensure proper support (for example, `gcc` on x86_64 requires `-mavx2` for `AVX2`, or `-mavx512f` for `AVX512`).
- `XXH_PREFETCH_DIST` : select prefetching distance. For close-to-metal adaptation to specific hardware platforms. XXH3 only.
- `XXH_NO_PREFETCH` : disable prefetching. Some platforms or situations may perform better without prefetching. XXH3 only.
#### Makefile variables
When compiling the Command Line Interface `xxhsum` using `make`, the following environment variables can also be set :
- `DISPATCH=1` : use `xxh_x86dispatch.c`, select at runtime between `scalar`, `sse2`, `avx2` or `avx512` instruction set. This option is only valid for `x86`/`x64` systems. It is enabled by default when target `x86`/`x64` is detected. It can be forcefully turned off using `DISPATCH=0`.
- `LIBXXH_DISPATCH=1` : same idea, implementing a runtime vector extension detector, but within `libxxhash`. This parameter is disabled by default. When enabled (only valid for `x86`/`x64` systems), new symbols published in `xxh_x86dispatch.h` become accessible. At the time of this writing, it's required to include `xxh_x86dispatch.h` in order to access the symbols with runtime vector extension detection.
- `XXH_1ST_SPEED_TARGET` : select an initial speed target, expressed in MB/s, for the first speed test in benchmark mode. Benchmark will adjust the target at subsequent iterations, but the first test is made "blindly" by targeting this speed. Currently conservatively set to 10 MB/s, to support very slow (emulated) platforms.
- `NODE_JS=1` : When compiling `xxhsum` for Node.js with Emscripten, this links the `NODERAWFS` library for unrestricted filesystem access and patches `isatty` to make the command line utility correctly detect the terminal. This does make the binary specific to Node.js.
### Building xxHash - Using vcpkg
You can download and install xxHash using the [vcpkg](https://github.com/Microsoft/vcpkg) dependency manager:
```
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install xxhash
```
The xxHash port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please [create an issue or pull request](https://github.com/Microsoft/vcpkg) on the vcpkg repository.
### Example
The simplest example calls the xxhash 64-bit variant as a one-shot function,
generating a hash value from a single buffer, invoked from a C/C++ program:
```C
#include "xxhash.h"
(...)
XXH64_hash_t hash = XXH64(buffer, size, seed);
}
```
The streaming variant is more involved, but makes it possible to provide data incrementally:
```C
#include "stdlib.h" /* abort() */
#include "xxhash.h"
XXH64_hash_t calcul_hash_streaming(FileHandler fh)
{
/* create a hash state */
XXH64_state_t* const state = XXH64_createState();
if (state==NULL) abort();
size_t const bufferSize = SOME_SIZE;
void* const buffer = malloc(bufferSize);
if (buffer==NULL) abort();
/* Initialize state with selected seed */
XXH64_hash_t const seed = 0; /* or any other value */
if (XXH64_reset(state, seed) == XXH_ERROR) abort();
/* Feed the state with input data, any size, any number of times */
(...)
while ( /* some data left */ ) {
size_t const length = get_more_data(buffer, bufferSize, fh);
if (XXH64_update(state, buffer, length) == XXH_ERROR) abort();
(...)
}
(...)
/* Produce the final hash value */
XXH64_hash_t const hash = XXH64_digest(state);
/* State could be re-used; but in this example, it is simply freed */
free(buffer);
XXH64_freeState(state);
return hash;
}
```
### License
The library files `xxhash.c` and `xxhash.h` are BSD licensed.
The utility `xxhsum` is GPL licensed.
### Other programming languages
Beyond the C reference version,
xxHash is also available from many different programming languages,
thanks to great contributors.
They are [listed here](http://www.xxhash.com/#other-languages).
### Packaging status
Many distributions bundle a package manager
which allows easy xxhash installation as both a `libxxhash` library
and `xxhsum` command line interface.
[](https://repology.org/project/xxhash/versions)
### Special Thanks
- Takayuki Matsuoka, aka @t-mat, for creating `xxhsum -c` and great support during early xxh releases
- Mathias Westerdahl, aka @JCash, for introducing the first version of `XXH64`
- Devin Hussey, aka @easyaspi314, for incredible low-level optimizations on `XXH3` and `XXH128` | {
"source": "yandex/perforator",
"title": "contrib/libs/xxhash/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/xxhash/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 17297
} |
# Security Policy
## Supported Versions
Security updates are applied only to the latest release.
## Reporting a Vulnerability
If you have discovered a security vulnerability in this project, please report it privately. **Do not disclose it as a public issue.** This gives us time to work with you to fix the issue before public exposure, reducing the chance that the exploit will be used before a patch is released.
Please disclose it at [security advisory](https://github.com/Cyan4973/xxHash/security/advisories/new).
This project is maintained by a team of volunteers on a reasonable-effort basis. As such, please give us at least 90 days to work on a fix before public exposure. | {
"source": "yandex/perforator",
"title": "contrib/libs/xxhash/SECURITY.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/xxhash/SECURITY.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 687
} |
# Code of Conduct
Facebook has adopted a Code of Conduct that we expect project participants to adhere to.
Please read the [full text](https://code.fb.com/codeofconduct/)
so that you can understand what actions will and will not be tolerated. | {
"source": "yandex/perforator",
"title": "contrib/libs/zstd/CODE_OF_CONDUCT.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/zstd/CODE_OF_CONDUCT.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 243
} |
# Contributing to Zstandard
We want to make contributing to this project as easy and transparent as
possible.
## Our Development Process
New versions are being developed in the "dev" branch,
or in their own feature branch.
When they are deemed ready for a release, they are merged into "release".
As a consequence, all contributions must stage first through "dev"
or their own feature branch.
## Pull Requests
We actively welcome your pull requests.
1. Fork the repo and create your branch from `dev`.
2. If you've added code that should be tested, add tests.
3. If you've changed APIs, update the documentation.
4. Ensure the test suite passes.
5. Make sure your code lints.
6. If you haven't already, complete the Contributor License Agreement ("CLA").
## Contributor License Agreement ("CLA")
In order to accept your pull request, we need you to submit a CLA. You only need
to do this once to work on any of Facebook's open source projects.
Complete your CLA here: <https://code.facebook.com/cla>
## Workflow
Zstd uses a branch-based workflow for making changes to the codebase. Typically, zstd
will use a new branch per sizable topic. For smaller changes, it is okay to lump multiple
related changes into a branch.
Our contribution process works in three main stages:
1. Local development
* Update:
* Checkout your fork of zstd if you have not already
```
git clone https://github.com/<username>/zstd
cd zstd
```
* Update your local dev branch
```
git pull https://github.com/facebook/zstd dev
git push origin dev
```
* Topic and development:
* Make a new branch on your fork about the topic you're developing for
```
# branch names should be concise but sufficiently informative
git checkout -b <branch-name>
git push origin <branch-name>
```
* Make commits and push
```
# make some changes
git add -u && git commit -m <message>
git push origin <branch-name>
```
* Note: run local tests to ensure that your changes didn't break existing functionality
* Quick check
```
make check
```
* Longer check
```
make test
```
2. Code Review and CI tests
* Ensure CI tests pass:
* Before sharing anything to the community, create a pull request in your own fork against the dev branch
and make sure that all GitHub Actions CI tests pass. See the Continuous Integration section below for more information.
* Ensure that static analysis passes on your development machine. See the Static Analysis section
below to see how to do this.
* Create a pull request:
* When you are ready to share your changes with the community, create a pull request from your branch
to facebook:dev. You can do this very easily by clicking 'Create Pull Request' on your fork's home
page.
* From there, select the branch where you made changes as your source branch and facebook:dev
as the destination.
* Examine the diff presented between the two branches to make sure there is nothing unexpected.
* Write a good pull request description:
* While there is no strict template that our contributors follow, we would like them to
sufficiently summarize and motivate the changes they are proposing. We recommend all pull requests,
at least indirectly, address the following points.
* Is this pull request important and why?
* Is it addressing an issue? If so, what issue? (provide links for convenience please)
* Is this a new feature? If so, why is it useful and/or necessary?
* Are there background references and documents that reviewers should be aware of to properly assess this change?
* Note: make sure to point out any design and architectural decisions that you made and the rationale behind them.
* Note: if you have been working with a specific user and would like them to review your work, make sure you mention them using (@<username>)
* Submit the pull request and iterate with feedback.
3. Merge and Release
* Getting approval:
* You will have to iterate on your changes with feedback from other collaborators to reach a point
where your pull request can be safely merged.
* To avoid too many comments on style and convention, make sure that you have a
look at our style section below before creating a pull request.
* Eventually, someone from the zstd team will approve your pull request and not long after merge it into
the dev branch.
* Housekeeping:
* Most PRs are linked with one or more Github issues. If this is the case for your PR, make sure
the corresponding issue is mentioned. If your change 'fixes' or completely addresses the
issue at hand, then please indicate this by requesting that an issue be closed by commenting.
* Just because your changes have been merged does not mean the topic or larger issue is complete. Remember
that the change must make it to an official zstd release for it to be meaningful. We recommend
that contributors track the activity on their pull request and corresponding issue(s) page(s) until
their change makes it to the next release of zstd. Users will often discover bugs in your code or
suggest ways to refine and improve your initial changes even after the pull request is merged.
## Static Analysis
Static analysis is a process for examining the correctness or validity of a program without actually
executing it. It usually helps us find many simple bugs. Zstd uses clang's `scan-build` tool for
static analysis. You can install it by following the instructions for your OS on https://clang-analyzer.llvm.org/scan-build.
Once installed, you can ensure that our static analysis tests pass on your local development machine
by running:
```
make staticAnalyze
```
In general, you can use `scan-build` to static analyze any build script. For example, to static analyze
just `contrib/largeNbDicts` and nothing else, you can run:
```
scan-build make -C contrib/largeNbDicts largeNbDicts
```
### Pitfalls of static analysis
`scan-build` is part of our regular CI suite. Other static analyzers are not.
It can be useful to look at additional static analyzers once in a while (and we do), but it's not a good idea to multiply the nb of analyzers run continuously at each commit and PR. The reasons are :
- Static analyzers are full of false positive. The signal to noise ratio is actually pretty low.
- A good CI policy is "zero-warning tolerance". That means that all issues must be solved, including false positives. This quickly becomes a tedious workload.
- Multiple static analyzers will feature multiple kind of false positives, sometimes applying to the same code but in different ways leading to :
+ tortuous code, trying to please multiple constraints, hurting readability and therefore maintenance. Sometimes, such complexity introduce other more subtle bugs, that are just out of scope of the analyzers.
+ sometimes, these constraints are mutually exclusive : if one try to solve one, the other static analyzer will complain, they can't be both happy at the same time.
- As if that was not enough, the list of false positives change with each version. It's hard enough to follow one static analyzer, but multiple ones with their own update agenda, this quickly becomes a massive velocity reducer.
This is different from running a static analyzer once in a while, looking at the output, and __cherry picking__ a few warnings that seem helpful, either because they detected a genuine risk of bug, or because it helps expressing the code in a way which is more readable or more difficult to misuse. These kinds of reports can be useful, and are accepted.
## Continuous Integration
CI tests run every time a pull request (PR) is created or updated. The exact tests
that get run will depend on the destination branch you specify. Some tests take
longer to run than others. Currently, our CI is set up to run a short
series of tests when creating a PR to the dev branch and a longer series of tests
when creating a PR to the release branch. You can look in the configuration files
of the respective CI platform for more information on what gets run when.
Most people will just want to create a PR with the destination set to their local dev
branch of zstd. You can then find the status of the tests on the PR's page. You can also
re-run tests and cancel running tests from the PR page or from the respective CI's dashboard.
Almost all of zstd's CI runs on GitHub Actions (configured at `.github/workflows`), which will automatically run on PRs to your
own fork. A small number of tests run on other services (e.g. Travis CI, Circle CI, Appveyor).
These require work to set up on your local fork, and (at least for Travis CI) cost money.
Therefore, if the PR on your local fork passes GitHub Actions, feel free to submit a PR
against the main repo.
### Third-party CI
A small number of tests cannot run on GitHub Actions, or have yet to be migrated.
For these, we use a variety of third-party services (listed below). It is not necessary to set
these up on your fork in order to contribute to zstd; however, we do link to instructions for those
who want earlier signal.
| Service | Purpose | Setup Links | Config Path |
|-----------|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------|
| Travis CI | Used for testing on non-x86 architectures such as PowerPC | https://docs.travis-ci.com/user/tutorial/#to-get-started-with-travis-ci-using-github <br> https://github.com/marketplace/travis-ci | `.travis.yml` |
| AppVeyor | Used for some Windows testing (e.g. cygwin, mingw) | https://www.appveyor.com/blog/2018/10/02/github-apps-integration/ <br> https://github.com/marketplace/appveyor | `appveyor.yml` |
| Cirrus CI | Used for testing on FreeBSD | https://github.com/marketplace/cirrus-ci/ | `.cirrus.yml` |
| Circle CI | Historically was used to provide faster signal,<br/> but we may be able to migrate these to Github Actions | https://circleci.com/docs/2.0/getting-started/#setting-up-circleci <br> https://youtu.be/Js3hMUsSZ2c <br> https://circleci.com/docs/2.0/enable-checks/ | `.circleci/config.yml` |
Note: the instructions linked above mostly cover how to set up a repository with CI from scratch.
The general idea should be the same for setting up CI on your fork of zstd, but you may have to
follow slightly different steps. In particular, please ignore any instructions related to setting up
config files (since zstd already has configs for each of these services).
## Performance
Performance is extremely important for zstd and we only merge pull requests whose performance
landscape and corresponding trade-offs have been adequately analyzed, reproduced, and presented.
This high bar for performance means that every PR which has the potential to
impact performance takes a very long time for us to properly review. That being said, we
always welcome contributions to improve performance (or worsen performance for the trade-off of
something else). Please keep the following in mind before submitting a performance related PR:
1. Zstd isn't as old as gzip but it has been around for some time now, and its evolution is
very well documented via past Github issues and pull requests. It may be the case that your
particular performance optimization has already been considered in the past. Please take some
time to search through old issues and pull requests using keywords specific to your
would-be PR. Of course, just because a topic has already been discussed (and perhaps rejected
on some grounds) in the past, doesn't mean it isn't worth bringing up again. But even in that case,
it will be helpful for you to have context from that topic's history before contributing.
2. The distinction between noise and actual performance gains can unfortunately be very subtle
especially when microbenchmarking extremely small wins or losses. The only remedy to getting
something subtle merged is extensive benchmarking. You will be doing us a great favor if you
take the time to run extensive, long-duration, and potentially cross-(os, platform, process, etc)
benchmarks on your end before submitting a PR. Of course, you will not be able to benchmark
your changes on every single processor and OS out there (and neither will we), but do the best
you can. :) We've added some things to think about when benchmarking below in the Benchmarking
Performance section which might be helpful for you.
3. Optimizing performance for a certain OS, processor vendor, compiler, or network system is a perfectly
legitimate thing to do as long as it does not harm the overall performance health of Zstd.
This is a hard balance to strike but please keep in mind other aspects of Zstd when
submitting changes that are clang-specific, windows-specific, etc.
## Benchmarking Performance
Performance microbenchmarking is a tricky subject but also essential for Zstd. We value empirical
testing over theoretical speculation. This guide is not perfect, but for most scenarios, it
is a good place to start.
### Stability
Unfortunately, the most important aspect in being able to benchmark reliably is to have a stable
benchmarking machine. A virtual machine, a machine with shared resources, or your laptop
will typically not be stable enough to obtain reliable benchmark results. If you can get your
hands on a desktop, this is usually a better scenario.
Of course, benchmarking can be done on non-hyper-stable machines as well. You will just have to
do a little more work to ensure that you are in fact measuring the changes you've made and not
noise. Here are some things you can do to make your benchmarks more stable:
1. The most simple thing you can do to drastically improve the stability of your benchmark is
to run it multiple times and then aggregate the results of those runs. As a general rule of
thumb, the smaller the change you are trying to measure, the more samples of benchmark runs
you will have to aggregate over to get reliable results. Here are some additional things to keep in
mind when running multiple trials:
* How you aggregate your samples are important. You might be tempted to use the mean of your
results. While this is certainly going to be a more stable number than a raw single sample
benchmark number, you might have more luck by taking the median. The mean is not robust to
outliers whereas the median is. Better still, you could simply take the fastest speed your
benchmark achieved on each run since that is likely the fastest your process will be
capable of running your code. In our experience, this (aggregating by just taking the sample
with the fastest running time) has been the most stable approach; a small standalone sketch of this appears after this list.
* The more samples you have, the more stable your benchmarks should be. You can verify
your improved stability by looking at the size of your confidence intervals as you
increase your sample count. These should get smaller and smaller. Eventually hopefully
smaller than the performance win you are expecting.
* Most processors will take some time to get `hot` when running anything. The observations
you collect during that time period will be very different from the true performance number. Having
a very large number of sample will help alleviate this problem slightly but you can also
address it directly by simply not including the first `n` iterations of your benchmark in
your aggregations. You can determine `n` by simply looking at the results from each iteration
and then hand picking a good threshold after which the variance in results seems to stabilize.
2. You cannot really get reliable benchmarks if your host machine is simultaneously running
another cpu/memory-intensive application in the background. If you are running benchmarks on your
personal laptop for instance, you should close all applications (including your code editor and
browser) before running your benchmarks. You might also have invisible background applications
running. You can see what these are by looking at either Activity Monitor on Mac or Task Manager
on Windows. You will get more stable benchmark results if you end those processes as well.
* If you have multiple cores, you can even run your benchmark on a reserved core to prevent
pollution from other OS and user processes. There are a number of ways to do this depending
on your OS:
* On Linux boxes, you can use https://github.com/lpechacek/cpuset.
* On Windows, you can "Set Processor Affinity" using https://www.thewindowsclub.com/processor-affinity-windows
* On Mac, you can try to use their dedicated affinity API https://developer.apple.com/library/archive/releasenotes/Performance/RN-AffinityAPI/#//apple_ref/doc/uid/TP40006635-CH1-DontLinkElementID_2
3. To benchmark, you will likely end up writing a separate c/c++ program that will link libzstd.
Dynamically linking your library will introduce some added variation (not a large amount but
definitely some). Statically linking libzstd will be more stable. Static libraries should
be enabled by default when building zstd.
4. Use a profiler with a good high resolution timer. See the section below on profiling for
details on this.
5. Disable frequency scaling, turbo boost and address space randomization (this will vary by OS)
6. Try to avoid storage. On some systems you can use tmpfs. Putting the program, inputs and outputs on
tmpfs avoids touching a real storage system, which can have a pretty big variability.
Also check our LLVM's guide on benchmarking here: https://llvm.org/docs/Benchmarking.html
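
As a deliberately simplified illustration of the aggregation and warm-up advice in point 1 above, the standalone C sketch below times a placeholder workload many times, skips the first few iterations, and reports only the fastest sample. Everything in it (the workload, the iteration counts) is a stand-in for your actual benchmark:

```C
#include <stdio.h>
#include <time.h>

/* Placeholder workload; substitute the code path you actually want to measure. */
static unsigned long long dummy_workload(void)
{
    unsigned long long acc = 0;
    unsigned long long i;
    for (i = 0; i < 10 * 1000 * 1000ULL; i++) acc += i * 2654435761ULL;
    return acc;
}

int main(void)
{
    const int nbRuns = 50;    /* more samples => more stable result */
    const int warmup = 3;     /* discard first iterations while the cpu gets "hot" */
    double best = -1.0;
    volatile unsigned long long sink = 0;
    int i;

    for (i = 0; i < nbRuns; i++) {
        struct timespec t0, t1;
        double elapsed;
        clock_gettime(CLOCK_MONOTONIC, &t0);   /* POSIX high-resolution timer */
        sink += dummy_workload();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        elapsed = (double)(t1.tv_sec - t0.tv_sec)
                + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
        if (i >= warmup && (best < 0.0 || elapsed < best)) best = elapsed;
    }
    printf("fastest sample: %.6f s (sink=%llu)\n", best, (unsigned long long)sink);
    return 0;
}
```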
### Zstd benchmark
The fastest signal you can get regarding your performance changes is via the in-build zstd cli
bench option. You can run Zstd as you typically would for your scenario using some set of options
and then additionally also specify the `-b#` option. Doing this will run our benchmarking pipeline
for that options you have just provided. If you want to look at the internals of how this
benchmarking script works, you can check out programs/benchzstd.c
For example: say you have made a change that you believe improves the speed of zstd level 1. The
very first thing you should use to assess whether you actually achieved any sort of improvement
is `zstd -b`. You might try to do something like this. Note: you can use the `-i` option to
specify a running time for your benchmark in seconds (default is 3 seconds).
Usually, the longer the running time, the more stable your results will be.
```
$ git checkout <commit-before-your-change>
$ make && cp zstd zstd-old
$ git checkout <commit-after-your-change>
$ make && cp zstd zstd-new
$ zstd-old -i5 -b1 <your-test-data>
1<your-test-data> : 8990 -> 3992 (2.252), 302.6 MB/s , 626.4 MB/s
$ zstd-new -i5 -b1 <your-test-data>
1<your-test-data> : 8990 -> 3992 (2.252), 302.8 MB/s , 628.4 MB/s
```
Unless your performance win is large enough to be visible despite the intrinsic noise
on your computer, benchzstd alone will likely not be enough to validate the impact of your
changes. For example, the results of the example above indicate that effectively nothing
changed but there could be a small <3% improvement that the noise on the host machine
obscured. So unless you see a large performance win (10-15% consistently) using just
this method of evaluation will not be sufficient.
### Profiling
There are a number of great profilers out there. We're going to briefly mention how you can
profile your code using `instruments` on mac, `perf` on linux and `visual studio profiler`
on Windows.
Say you have an idea for a change that you think will provide some good performance gains
for level 1 compression on Zstd. Typically this means, you have identified a section of
code that you think can be made to run faster.
The first thing you will want to do is make sure that the piece of code is actually taking up
a notable amount of time to run. It is usually not worth optimizing something which accounts for less than
0.0001% of the total running time. Luckily, there are tools to help with this.
Profilers will let you see how much time your code spends inside a particular function.
If your target code snippet is only part of a function, it might be worth trying to
isolate that snippet by moving it to its own function (this is usually not necessary but
might be).
Most profilers (including the profilers discussed below) will generate a call graph of
functions for you. Your goal will be to find your function of interest in this call graph
and then inspect the time spent inside of it. You might also want to look at the annotated
assembly which most profilers will provide you with.
#### Instruments
We will once again consider the scenario where you think you've identified a piece of code
whose performance can be improved upon. Follow these steps to profile your code using
Instruments.
1. Open Instruments
2. Select `Time Profiler` from the list of standard templates
3. Close all other applications except for your instruments window and your terminal
4. Run your benchmarking script from your terminal window
* You will want a benchmark that runs for at least a few seconds (5 seconds will
usually be long enough). This way the profiler will have something to work with
and you will have ample time to attach your profiler to this process:)
* I will just use benchzstd as my benchmarking script for this example:
```
$ zstd -b1 -i5 <my-data> # this will run for 5 seconds
```
5. Once you run your benchmarking script, switch back over to instruments and attach your
process to the time profiler. You can do this by:
* Clicking on the `All Processes` drop down in the top left of the toolbar.
* Selecting your process from the dropdown. In my case, it is just going to be labeled
`zstd`
* Hitting the bright red record circle button on the top left of the toolbar
6. Your profiler will now start collecting metrics from your benchmarking script. Once
you think you have collected enough samples (usually this is the case after 3 seconds of
recording), stop your profiler.
7. Make sure that in toolbar of the bottom window, `profile` is selected.
8. You should be able to see your call graph.
* If you don't see the call graph or an incomplete call graph, make sure you have compiled
zstd and your benchmarking script using debug flags. On mac and linux, this just means
you will have to supply the `-g` flag along with your build script. You might also
have to provide the `-fno-omit-frame-pointer` flag
9. Dig down the graph to find your function call and then inspect it by double clicking
the list item. You will be able to see the annotated source code and the assembly side by
side.
#### Perf
This wiki has a pretty detailed tutorial on getting started working with perf so we'll
leave you to check that out if you're getting started:
https://perf.wiki.kernel.org/index.php/Tutorial
Some general notes on perf:
* Use `perf stat -r # <bench-program>` to quickly get some relevant timing and
counter statistics. Perf uses a high resolution timer and this is likely one
of the first things your team will run when assessing your PR.
* Perf has a long list of hardware counters that can be viewed with `perf list`.
When measuring optimizations, something worth trying is to make sure the hardware
counters you expect to be impacted by your change are in fact being so. For example,
if you expect the L1 cache misses to decrease with your change, you can look at the
counter `L1-dcache-load-misses`
* Perf hardware counters will not work on a virtual machine.
#### Visual Studio
TODO
## Issues
We use GitHub issues to track public bugs. Please ensure your description is
clear and has sufficient instructions to be able to reproduce the issue.
Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe
disclosure of security bugs. In those cases, please go through the process
outlined on that page and do not file a public issue.
## Coding Style
It's a pretty long topic, which is difficult to summarize in a single paragraph.
As a rule of thumb, try to imitate the coding style of
similar lines of code around your contribution.
The following is a non-exhaustive list of rules employed in zstd code base:
### C90
This code base is following strict C90 standard,
with 2 extensions : 64-bit `long long` types, and variadic macros.
This rule is applied strictly to code within `lib/` and `programs/`.
Sub-project in `contrib/` are allowed to use other conventions.
### C++ direct compatibility : symbol mangling
All public symbol declarations must be wrapped in `extern "C" { ... }`,
so that this project can be compiled as C++98 code,
and linked into C++ applications.
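
A sketch of that wrapping pattern, using one real public prototype for illustration:

```C
/* Sketch of the extern "C" wrapping described above,
 * so the same header works from both C and C++ translation units. */
#include <stddef.h>

#if defined (__cplusplus)
extern "C" {
#endif

/* public prototypes with C linkage, e.g. : */
size_t ZSTD_compressBound(size_t srcSize);

#if defined (__cplusplus)
}
#endif
```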
### Minimal Frugal
This design requirement is fundamental to preserve the portability of the code base.
#### Dependencies
- Reduce dependencies to the minimum possible level.
Any dependency should be considered “bad” by default,
and only tolerated because it provides a service in a better way than can be achieved locally.
The only external dependencies this repository tolerates are
standard C libraries, and in rare cases, system level headers.
- Within `lib/`, this policy is even more drastic.
The only external dependencies allowed are `<assert.h>`, `<stdlib.h>`, `<string.h>`,
and even then, not directly.
In particular, no function shall ever allocate on heap directly,
and must use instead `ZSTD_malloc()` and equivalent.
Other accepted non-symbol headers are `<stddef.h>` and `<limits.h>`.
- Within the project, there is a strict hierarchy of dependencies that must be respected.
`programs/` is allowed to depend on `lib/`, but only its public API.
Within `lib/`, `lib/common` doesn't depend on any other directory.
`lib/compress` and `lib/decompress` shall not depend on each other.
`lib/dictBuilder` can depend on `lib/common` and `lib/compress`, but not `lib/decompress`.
#### Resources
- Functions in `lib/` must use very little stack space,
several dozens of bytes max.
Everything larger must use the heap allocator,
or require a scratch buffer to be emplaced manually.
### Naming
* All public symbols are prefixed with `ZSTD_`
+ private symbols, with a scope limited to their own unit, are free of this restriction.
However, since `libzstd` source code can be amalgamated,
each symbol name must attempt to be (and remain) unique.
Avoid too generic names that could become ground for future collisions.
This generally implies usage of some form of prefix.
* For symbols (functions and variables), naming convention is `PREFIX_camelCase`.
+ In some advanced cases, one can also find :
- `PREFIX_prefix2_camelCase`
- `PREFIX_camelCase_extendedQualifier`
* Multi-words names generally consist of an action followed by object:
- for example : `ZSTD_createCCtx()`
* Prefer positive actions
- `goBackward` rather than `notGoForward`
* Type names (`struct`, etc.) follow similar convention,
except that they are allowed and even invited to start by an Uppercase letter.
Example : `ZSTD_CCtx`, `ZSTD_CDict`
* Macro names are all Capital letters.
The same composition rules (`PREFIX_NAME_QUALIFIER`) apply.
* File names are all lowercase letters.
The convention is `snake_case`.
File names **must** be unique across the entire code base,
even when they stand in clearly separated directories.
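
As a purely hypothetical illustration (none of these symbols exist in zstd), declarations following these rules could look like:

```C
#include <stddef.h>

/* Hypothetical symbols, shown only to illustrate the naming rules above. */
typedef struct ZSTD_FrameIndex_s ZSTD_FrameIndex;    /* type name may start with an Uppercase letter */
ZSTD_FrameIndex* ZSTD_createFrameIndex(void);         /* action followed by object */
size_t ZSTD_FrameIndex_addEntry(ZSTD_FrameIndex* fi,  /* PREFIX_prefix2_camelCase */
                                unsigned long long offset);
#define ZSTD_FRAMEINDEX_MAX_ENTRIES 1024               /* macro: all Capital letters */
```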
### Qualifiers
* This code base is `const` friendly, if not `const` fanatical.
Any variable that can be `const` (aka. read-only) **must** be `const`.
Any pointer which content will not be modified must be `const`.
This property is then controlled at compiler level.
`const` variables are an important signal to readers that this variable isn't modified.
Conversely, non-const variables are a signal to readers to watch out for modifications later on in the function.
* If a function must be inlined, mention it explicitly,
using project's own portable macros, such as `FORCE_INLINE_ATTR`,
defined in `lib/common/compiler.h`.
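
A small illustrative helper (not part of zstd) showing that `const` discipline, which also sticks to C90-style declarations as required above:

```C
#include <stddef.h>

/* Illustrative helper only: the input is read-only, so both the pointer target
 * and the derived byte pointer are const; the loop counter is not. */
static size_t countZeroBytes(const void* src, size_t srcSize)
{
    const unsigned char* const bytes = (const unsigned char*)src;
    size_t nbZeroes = 0;
    size_t n;
    for (n = 0; n < srcSize; n++) {
        if (bytes[n] == 0) nbZeroes++;
    }
    return nbZeroes;
}
```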
### Debugging
* **Assertions** are welcome, and should be used very liberally,
to control any condition the code expects for its correct execution.
These assertion checks will be run in debug builds, and disabled in production.
* For traces, this project provides its own debug macros,
in particular `DEBUGLOG(level, ...)`, defined in `lib/common/debug.h`.
### Code documentation
* Avoid code documentation that merely repeats what the code is already stating.
Whenever applicable, prefer employing the code as the primary way to convey explanations.
Example 1 : `int nbTokens = n;` instead of `int i = n; /* i is a nb of tokens */`.
Example 2 : `assert(size > 0);` instead of `/* here, size should be positive */`.
* At declaration level, the documentation explains how to use the function or variable
and, when applicable, why it's needed or the scenarios where it can be useful.
* At implementation level, the documentation explains the general outline of the algorithm employed,
and when applicable why this specific choice was preferred.
### General layout
* 4 spaces for indentation rather than tabs
* Code documentation shall directly precede function declaration or implementation
* Function implementations and its code documentation should be preceded and followed by an empty line
## License
By contributing to Zstandard, you agree that your contributions will be licensed
under both the [LICENSE](LICENSE) file and the [COPYING](COPYING) file in the root directory of this source tree. | {
"source": "yandex/perforator",
"title": "contrib/libs/zstd/CONTRIBUTING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/zstd/CONTRIBUTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 30770
} |
<p align="center"><img src="https://raw.githubusercontent.com/facebook/zstd/dev/doc/images/zstd_logo86.png" alt="Zstandard"></p>
__Zstandard__, or `zstd` as short version, is a fast lossless compression algorithm,
targeting real-time compression scenarios at zlib-level and better compression ratios.
It's backed by a very fast entropy stage, provided by [Huff0 and FSE library](https://github.com/Cyan4973/FiniteStateEntropy).
Zstandard's format is stable and documented in [RFC8878](https://datatracker.ietf.org/doc/html/rfc8878). Multiple independent implementations are already available.
This repository represents the reference implementation, provided as an open-source dual [BSD](LICENSE) OR [GPLv2](COPYING) licensed **C** library,
and a command line utility producing and decoding `.zst`, `.gz`, `.xz` and `.lz4` files.
Should your project require another programming language,
a list of known ports and bindings is provided on [Zstandard homepage](https://facebook.github.io/zstd/#other-languages).
**Development branch status:**
[![Build Status][travisDevBadge]][travisLink]
[![Build status][CircleDevBadge]][CircleLink]
[![Build status][CirrusDevBadge]][CirrusLink]
[![Fuzzing Status][OSSFuzzBadge]][OSSFuzzLink]
[travisDevBadge]: https://api.travis-ci.com/facebook/zstd.svg?branch=dev "Continuous Integration test suite"
[travisLink]: https://travis-ci.com/facebook/zstd
[CircleDevBadge]: https://circleci.com/gh/facebook/zstd/tree/dev.svg?style=shield "Short test suite"
[CircleLink]: https://circleci.com/gh/facebook/zstd
[CirrusDevBadge]: https://api.cirrus-ci.com/github/facebook/zstd.svg?branch=dev
[CirrusLink]: https://cirrus-ci.com/github/facebook/zstd
[OSSFuzzBadge]: https://oss-fuzz-build-logs.storage.googleapis.com/badges/zstd.svg
[OSSFuzzLink]: https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:zstd
## Benchmarks
For reference, several fast compression algorithms were tested and compared
on a desktop featuring a Core i7-9700K CPU @ 4.9GHz
and running Ubuntu 20.04 (`Linux ubu20 5.15.0-101-generic`),
using [lzbench], an open-source in-memory benchmark by @inikep
compiled with [gcc] 9.4.0,
on the [Silesia compression corpus].
[lzbench]: https://github.com/inikep/lzbench
[Silesia compression corpus]: https://sun.aei.polsl.pl//~sdeor/index.php?page=silesia
[gcc]: https://gcc.gnu.org/
| Compressor name | Ratio | Compression| Decompress.|
| --------------- | ------| -----------| ---------- |
| **zstd 1.5.6 -1** | 2.887 | 510 MB/s | 1580 MB/s |
| [zlib] 1.2.11 -1 | 2.743 | 95 MB/s | 400 MB/s |
| brotli 1.0.9 -0 | 2.702 | 395 MB/s | 430 MB/s |
| **zstd 1.5.6 --fast=1** | 2.437 | 545 MB/s | 1890 MB/s |
| **zstd 1.5.6 --fast=3** | 2.239 | 650 MB/s | 2000 MB/s |
| quicklz 1.5.0 -1 | 2.238 | 525 MB/s | 750 MB/s |
| lzo1x 2.10 -1 | 2.106 | 650 MB/s | 825 MB/s |
| [lz4] 1.9.4 | 2.101 | 700 MB/s | 4000 MB/s |
| lzf 3.6 -1 | 2.077 | 420 MB/s | 830 MB/s |
| snappy 1.1.9 | 2.073 | 530 MB/s | 1660 MB/s |
[zlib]: https://www.zlib.net/
[lz4]: https://lz4.github.io/lz4/
The negative compression levels, specified with `--fast=#`,
offer faster compression and decompression speed
at the cost of compression ratio.
Zstd can also offer stronger compression ratios at the cost of compression speed.
Speed vs Compression trade-off is configurable by small increments.
Decompression speed is preserved and remains roughly the same at all settings,
a property shared by most LZ compression algorithms, such as [zlib] or lzma.
The following tests were run
on a server running Linux Debian (`Linux version 4.14.0-3-amd64`)
with a Core i7-6700K CPU @ 4.0GHz,
using [lzbench], an open-source in-memory benchmark by @inikep
compiled with [gcc] 7.3.0,
on the [Silesia compression corpus].
Compression Speed vs Ratio | Decompression Speed
---------------------------|--------------------
 | 
A few other algorithms can produce higher compression ratios at slower speeds, falling outside of the graph.
For a larger picture including slow modes, [click on this link](doc/images/DCspeed5.png).
## The case for Small Data compression
Previous charts provide results applicable to typical file and stream scenarios (several MB). Small data comes with different perspectives.
The smaller the amount of data to compress, the more difficult it is to compress. This problem is common to all compression algorithms, and the reason is that compression algorithms learn from past data how to compress future data. But at the beginning of a new data set, there is no "past" to build upon.
To solve this situation, Zstd offers a __training mode__, which can be used to tune the algorithm for a selected type of data.
Training Zstandard is achieved by providing it with a few samples (one file per sample). The result of this training is stored in a file called "dictionary", which must be loaded before compression and decompression.
Using this dictionary, the compression ratio achievable on small data improves dramatically.
The following example uses the `github-users` [sample set](https://github.com/facebook/zstd/releases/tag/v1.1.3), created from [github public API](https://developer.github.com/v3/users/#get-all-users).
It consists of roughly 10K records weighing about 1KB each.
Compression Ratio | Compression Speed | Decompression Speed
------------------|-------------------|--------------------
 |  | 
These compression gains are achieved while simultaneously providing _faster_ compression and decompression speeds.
Training works if there is some correlation in a family of small data samples. The more data-specific a dictionary is, the more efficient it is (there is no _universal dictionary_).
Hence, deploying one dictionary per type of data will provide the greatest benefits.
Dictionary gains are mostly effective in the first few KB. Then, the compression algorithm will gradually use previously decoded content to better compress the rest of the file.
### Dictionary compression How To:
1. Create the dictionary
`zstd --train FullPathToTrainingSet/* -o dictionaryName`
2. Compress with dictionary
`zstd -D dictionaryName FILE`
3. Decompress with dictionary
`zstd -D dictionaryName --decompress FILE.zst`
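
The same dictionary can also be used programmatically through `libzstd`. Below is a rough sketch, with error handling reduced to `exit()`, which assumes the dictionary file produced by `zstd --train` has already been loaded into memory as `dictBuf`/`dictSize`:

```C
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zstd.h>

/* Rough sketch: compress then decompress one buffer with a pre-trained dictionary.
 * dictBuf/dictSize are assumed to hold the content of the file created by `zstd --train`. */
static void roundTripWithDict(const void* src, size_t srcSize,
                              const void* dictBuf, size_t dictSize)
{
    size_t const dstCapacity = ZSTD_compressBound(srcSize);
    void* const dst = malloc(dstCapacity);
    void* const rt  = malloc(srcSize);
    ZSTD_CCtx* const cctx = ZSTD_createCCtx();
    ZSTD_DCtx* const dctx = ZSTD_createDCtx();
    if (!dst || !rt || !cctx || !dctx) exit(1);

    {   size_t const cSize = ZSTD_compress_usingDict(cctx, dst, dstCapacity,
                                                      src, srcSize,
                                                      dictBuf, dictSize, 3 /* level */);
        if (ZSTD_isError(cSize)) exit(1);

        {   size_t const rSize = ZSTD_decompress_usingDict(dctx, rt, srcSize,
                                                            dst, cSize,
                                                            dictBuf, dictSize);
            if (ZSTD_isError(rSize) || rSize != srcSize || memcmp(src, rt, srcSize)) exit(1);
            printf("round-trip ok: %zu -> %zu bytes\n", srcSize, cSize);
        }
    }
    ZSTD_freeCCtx(cctx); ZSTD_freeDCtx(dctx);
    free(dst); free(rt);
}
```

For repeated use of the same dictionary, `ZSTD_createCDict()` / `ZSTD_createDDict()` avoid re-digesting the dictionary on every call.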
## Build instructions
`make` is the officially maintained build system of this project.
All other build systems are "compatible" and 3rd-party maintained,
they may feature small differences in advanced options.
When your system allows it, prefer using `make` to build `zstd` and `libzstd`.
### Makefile
If your system is compatible with standard `make` (or `gmake`),
invoking `make` in root directory will generate `zstd` cli in root directory.
It will also create `libzstd` into `lib/`.
Other available options include:
- `make install` : create and install zstd cli, library and man pages
- `make check` : create and run `zstd`, test its behavior on local platform
The `Makefile` follows the [GNU Standard Makefile conventions](https://www.gnu.org/prep/standards/html_node/Makefile-Conventions.html),
allowing staged install, standard flags, directory variables and command variables.
For advanced use cases, specialized compilation flags which control binary generation
are documented in [`lib/README.md`](lib/README.md#modular-build) for the `libzstd` library
and in [`programs/README.md`](programs/README.md#compilation-variables) for the `zstd` CLI.
### cmake
A `cmake` project generator is provided within `build/cmake`.
It can generate Makefiles or other build scripts
to create `zstd` binary, and `libzstd` dynamic and static libraries.
By default, `CMAKE_BUILD_TYPE` is set to `Release`.
#### Support for Fat (Universal2) Output
`zstd` can be built and installed with support for both Apple Silicon (M1/M2) as well as Intel by using CMake's Universal2 support.
To perform a Fat/Universal2 build and install use the following commands:
```bash
cmake -B build-cmake-debug -S build/cmake -G Ninja -DCMAKE_OSX_ARCHITECTURES="x86_64;x86_64h;arm64"
cd build-cmake-debug
ninja
sudo ninja install
```
### Meson
A Meson project is provided within [`build/meson`](build/meson). Follow
build instructions in that directory.
You can also take a look at [`.travis.yml`](.travis.yml) file for an
example about how Meson is used to build this project.
Note that default build type is **release**.
### VCPKG
You can build and install zstd using the [vcpkg](https://github.com/Microsoft/vcpkg/) dependency manager:
```
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install zstd
```
The zstd port in vcpkg is kept up to date by Microsoft team members and community contributors.
If the version is out of date, please [create an issue or pull request](https://github.com/Microsoft/vcpkg) on the vcpkg repository.
### Conan
You can install pre-built binaries for zstd or build it from source using [Conan](https://conan.io/). Use the following command:
```bash
conan install --requires="zstd/[*]" --build=missing
```
The zstd Conan recipe is kept up to date by Conan maintainers and community contributors.
If the version is out of date, please [create an issue or pull request](https://github.com/conan-io/conan-center-index) on the ConanCenterIndex repository.
### Visual Studio (Windows)
Going into `build` directory, you will find additional possibilities:
- Projects for Visual Studio 2005, 2008 and 2010.
+ VS2010 project is compatible with VS2012, VS2013, VS2015 and VS2017.
- Automated build scripts for Visual compiler by [@KrzysFR](https://github.com/KrzysFR), in `build/VS_scripts`,
which will build `zstd` cli and `libzstd` library without any need to open Visual Studio solution.
### Buck
You can build the zstd binary via buck by executing: `buck build programs:zstd` from the root of the repo.
The output binary will be in `buck-out/gen/programs/`.
### Bazel
You can easily integrate zstd into your Bazel project by using the module hosted on the [Bazel Central Repository](https://registry.bazel.build/modules/zstd).
## Testing
You can run quick local smoke tests by running `make check`.
If you can't use `make`, execute the `playTest.sh` script from the `src/tests` directory.
Two env variables `$ZSTD_BIN` and `$DATAGEN_BIN` are needed for the test script to locate the `zstd` and `datagen` binaries.
For information on CI testing, please refer to `TESTING.md`.
## Status
Zstandard is currently deployed within Facebook and many other large cloud infrastructures.
It is run continuously to compress large amounts of data in multiple formats and use cases.
Zstandard is considered safe for production environments.
## License
Zstandard is dual-licensed under [BSD](LICENSE) OR [GPLv2](COPYING).
## Contributing
The `dev` branch is the one where all contributions are merged before reaching `release`.
If you plan to propose a patch, please commit into the `dev` branch, or its own feature branch.
Direct commit to `release` are not permitted.
For more information, please read [CONTRIBUTING](CONTRIBUTING.md). | {
"source": "yandex/perforator",
"title": "contrib/libs/zstd/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/zstd/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 11486
} |
# Reporting and Fixing Security Issues
Please do not open GitHub issues or pull requests - this makes the problem immediately visible to everyone, including malicious actors. Security issues in this open source project can be safely reported via the Meta Bug Bounty program:
https://www.facebook.com/whitehat
Meta's security team will triage your report and determine whether or not is it eligible for a bounty under our program.
# Receiving Vulnerability Notifications
In the case that a significant security vulnerability is reported to us or discovered by us---without being publicly known---we will, at our discretion, notify high-profile, high-exposure users of Zstandard ahead of our public disclosure of the issue and associated fix.
If you believe your project would benefit from inclusion in this list, please reach out to one of the maintainers.
<!-- Note to maintainers: this list is kept [here](https://fburl.com/wiki/cgc1l62x). --> | {
"source": "yandex/perforator",
"title": "contrib/libs/zstd/SECURITY.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/zstd/SECURITY.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 951
} |
Testing
=======
Zstandard CI testing is split up into three sections:
short, medium, and long tests.
Short Tests
-----------
Short tests run on CircleCI for new commits on every branch and pull request.
They consist of the following tests:
- Compilation on all supported targets (x86, x86_64, ARM, AArch64, PowerPC, and PowerPC64)
- Compilation on various versions of gcc, clang, and g++
- `tests/playTests.sh` on x86_64, without the tests on long data (CLI tests)
- Small tests (`tests/legacy.c`, `tests/longmatch.c`) on x86_64
Medium Tests
------------
Medium tests run on every commit and pull request to `dev` branch, on TravisCI.
They consist of the following tests:
- The following tests run with UBsan and Asan on x86_64 and x86, as well as with
Msan on x86_64
- `tests/playTests.sh --test-large-data`
- Fuzzer tests: `tests/fuzzer.c`, `tests/zstreamtest.c`, and `tests/decodecorpus.c`
- `tests/zstreamtest.c` under Tsan (streaming mode, including multithreaded mode)
- Valgrind Test (`make -C tests test-valgrind`) (testing CLI and fuzzer under `valgrind`)
- Fuzzer tests (see above) on ARM, AArch64, PowerPC, and PowerPC64
Long Tests
----------
Long tests run on all commits to `release` branch,
and once a day on the current version of `dev` branch,
on TravisCI.
They consist of the following tests:
- Entire test suite (including fuzzers and some other specialized tests) on:
- x86_64 and x86 with UBsan and Asan
- x86_64 with Msan
- ARM, AArch64, PowerPC, and PowerPC64
- Streaming mode fuzzer with Tsan (for the `zstdmt` testing)
- ZlibWrapper tests, including under valgrind
- Versions test (ensuring `zstd` can decode files from all previous versions)
- `pzstd` with asan and tsan, as well as in 32-bits mode
- Testing `zstd` with legacy mode off
- Entire test suite and make install on macOS | {
"source": "yandex/perforator",
"title": "contrib/libs/zstd/TESTING.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/zstd/TESTING.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1823
} |
ASTTokens
=========
.. image:: https://img.shields.io/pypi/v/asttokens.svg
:target: https://pypi.python.org/pypi/asttokens/
.. image:: https://img.shields.io/pypi/pyversions/asttokens.svg
:target: https://pypi.python.org/pypi/asttokens/
.. image:: https://github.com/gristlabs/asttokens/actions/workflows/build-and-test.yml/badge.svg
:target: https://github.com/gristlabs/asttokens/actions/workflows/build-and-test.yml
.. image:: https://readthedocs.org/projects/asttokens/badge/?version=latest
:target: http://asttokens.readthedocs.io/en/latest/index.html
.. image:: https://coveralls.io/repos/github/gristlabs/asttokens/badge.svg
:target: https://coveralls.io/github/gristlabs/asttokens
.. Start of user-guide
The ``asttokens`` module annotates Python abstract syntax trees (ASTs) with the positions of tokens
and text in the source code that generated them.
It makes it possible for tools that work with logical AST nodes to find the particular text that
resulted in those nodes, for example for automated refactoring or highlighting.
Installation
------------
asttokens is available on PyPI: https://pypi.python.org/pypi/asttokens/::
pip install asttokens
The code is on GitHub: https://github.com/gristlabs/asttokens.
The API Reference is here: http://asttokens.readthedocs.io/en/latest/api-index.html.
Usage
-----
ASTTokens can annotate both trees built by `ast <https://docs.python.org/2/library/ast.html>`_,
AND those built by `astroid <https://github.com/PyCQA/astroid>`_.
Here's an example:
.. code-block:: python
import asttokens, ast
source = "Robot('blue').walk(steps=10*n)"
atok = asttokens.ASTTokens(source, parse=True)
Once the tree has been marked, nodes get ``.first_token``, ``.last_token`` attributes, and
the ``ASTTokens`` object offers helpful methods:
.. code-block:: python
attr_node = next(n for n in ast.walk(atok.tree) if isinstance(n, ast.Attribute))
print(atok.get_text(attr_node))
start, end = attr_node.last_token.startpos, attr_node.last_token.endpos
print(atok.text[:start] + 'RUN' + atok.text[end:])
Which produces this output:
.. code-block:: text
Robot('blue').walk
Robot('blue').RUN(steps=10*n)
The ``ASTTokens`` object also offers methods to walk and search the list of tokens that make up
the code (or a particular AST node), which is more useful and powerful than dealing with the text
directly.
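For instance, here is a small, self-contained sketch that builds on the example above (it assumes the documented ``get_text_range`` and ``get_tokens`` helpers):

.. code-block:: python

    import ast
    import asttokens

    source = "Robot('blue').walk(steps=10*n)"
    atok = asttokens.ASTTokens(source, parse=True)
    attr_node = next(n for n in ast.walk(atok.tree) if isinstance(n, ast.Attribute))

    # character offsets of the node's text within the source
    start, end = atok.get_text_range(attr_node)
    print(source[start:end])

    # tokens that make up the node
    for tok in atok.get_tokens(attr_node):
        print(tok.type, tok.string)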
Contribute
----------
To contribute:
1. Fork this repository, and clone your fork.
2. Install the package with test dependencies (ideally in a virtualenv) with::
pip install -e '.[test]'
3. Run tests in your current interpreter with the command ``pytest`` or ``python -m pytest``.
4. Run tests across all supported interpreters with the ``tox`` command. You will need to have the interpreters installed separately. We recommend ``pyenv`` for that. Use ``tox -p auto`` to run the tests in parallel.
5. By default certain tests which take a very long time to run are skipped, but they are run in CI.
These are marked using the ``pytest`` marker ``slow`` and can be run on their own with ``pytest -m slow`` or as part of the full suite with ``pytest -m ''``. | {
"source": "yandex/perforator",
"title": "contrib/python/asttokens/README.rst",
"url": "https://github.com/yandex/perforator/blob/main/contrib/python/asttokens/README.rst",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 3184
} |
# executing
[](https://github.com/alexmojaki/executing/actions) [](https://coveralls.io/github/alexmojaki/executing?branch=master) [](https://pypi.python.org/pypi/executing)
This mini-package lets you get information about what a frame is currently doing, particularly the AST node being executed.
* [Usage](#usage)
* [Getting the AST node](#getting-the-ast-node)
* [Getting the source code of the node](#getting-the-source-code-of-the-node)
* [Getting the `__qualname__` of the current function](#getting-the-__qualname__-of-the-current-function)
* [The Source class](#the-source-class)
* [Installation](#installation)
* [How does it work?](#how-does-it-work)
* [Is it reliable?](#is-it-reliable)
* [Which nodes can it identify?](#which-nodes-can-it-identify)
* [Projects that use this](#projects-that-use-this)
## Usage
### Getting the AST node
```python
import executing
node = executing.Source.executing(frame).node
```
Then `node` will be an AST node (from the `ast` standard library module) or None if the node couldn't be identified (which may happen often and should always be checked).
`node` will always be the same instance for multiple calls with frames at the same point of execution.
If you have a traceback object, pass it directly to `Source.executing()` rather than the `tb_frame` attribute to get the correct node.
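To make the two points above concrete, here is a minimal, hedged sketch (`might_fail` is just a placeholder for your own code):

```python
import sys
import executing

def might_fail():
    return {}["missing"]  # placeholder that raises

try:
    might_fail()
except Exception:
    tb = sys.exc_info()[2]
    ex = executing.Source.executing(tb)  # pass the traceback itself, not tb.tb_frame
    if ex.node is not None:              # identification can fail, so always check
        print(type(ex.node).__name__)
```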
### Getting the source code of the node
For this you will need to separately install the [`asttokens`](https://github.com/gristlabs/asttokens) library, then obtain an `ASTTokens` object:
```python
executing.Source.executing(frame).source.asttokens()
```
or:
```python
executing.Source.for_frame(frame).asttokens()
```
or use one of the convenience methods:
```python
executing.Source.executing(frame).text()
executing.Source.executing(frame).text_range()
```
### Getting the `__qualname__` of the current function
```python
executing.Source.executing(frame).code_qualname()
```
or:
```python
executing.Source.for_frame(frame).code_qualname(frame.f_code)
```
### The `Source` class
Everything goes through the `Source` class. Only one instance of the class is created for each filename. Subclassing it to add more attributes on creation or methods is recommended. The classmethods such as `executing` will respect this. See the source code and docstrings for more detail.
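As a rough sketch of that subclassing pattern (the `node_summary` helper is hypothetical, not part of the library):

```python
import sys
import executing

class ExtendedSource(executing.Source):
    # One Source instance is created per filename, so anything defined here
    # is shared by every frame executing code from that file.
    def node_summary(self, node):
        return type(node).__name__ if node is not None else "<unknown>"

frame = sys._getframe()                 # any frame object you already have
ex = ExtendedSource.executing(frame)    # the classmethods respect the subclass
print(ex.source.node_summary(ex.node))
```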
## Installation
pip install executing
If you don't like that, you can just copy the file `executing.py`; there are no dependencies (but of course you won't get updates).
## How does it work?
Suppose the frame is executing this line:
```python
self.foo(bar.x)
```
and in particular it's currently obtaining the attribute `self.foo`. Looking at the bytecode, specifically `frame.f_code.co_code[frame.f_lasti]`, we can tell that it's loading an attribute, but it's not obvious which one. We can narrow down the statement being executed using `frame.f_lineno` and find the two `ast.Attribute` nodes representing `self.foo` and `bar.x`. How do we find out which one it is, without recreating the entire compiler in Python?
The trick is to modify the AST slightly for each candidate expression and observe the changes in the bytecode instructions. We change the AST to this:
```python
(self.foo ** 'longuniqueconstant')(bar.x)
```
and compile it, and the bytecode will be almost the same but there will be two new instructions:
LOAD_CONST 'longuniqueconstant'
BINARY_POWER
and just before that will be a `LOAD_ATTR` instruction corresponding to `self.foo`. Seeing that it's in the same position as the original instruction lets us know we've found our match.
## Is it reliable?
Yes - if it identifies a node, you can trust that it's identified the correct one. The tests are very thorough - in addition to unit tests which check various situations directly, there are property tests against a large number of files (see the filenames printed in [this build](https://travis-ci.org/alexmojaki/executing/jobs/557970457)) with real code. Specifically, for each file, the tests:
1. Identify as many nodes as possible from all the bytecode instructions in the file, and assert that they are all distinct
2. Find all the nodes that should be identifiable, and assert that they were indeed identified somewhere
In other words, it shows that there is a one-to-one mapping between the nodes and the instructions that can be handled. This leaves very little room for a bug to creep in.
Furthermore, `executing` checks that the instructions compiled from the modified AST exactly match the original code save for a few small known exceptions. This accounts for all the quirks and optimisations in the interpreter.
## Which nodes can it identify?
Currently it works in almost all cases for the following `ast` nodes:
- `Call`, e.g. `self.foo(bar)`
- `Attribute`, e.g. `point.x`
- `Subscript`, e.g. `lst[1]`
- `BinOp`, e.g. `x + y` (doesn't include `and` and `or`)
- `UnaryOp`, e.g. `-n` (includes `not` but only works sometimes)
- `Compare` e.g. `a < b` (not for chains such as `0 < p < 1`)
The plan is to extend to more operations in the future.
## Projects that use this
### My Projects
- **[`stack_data`](https://github.com/alexmojaki/stack_data)**: Extracts data from stack frames and tracebacks, particularly to display more useful tracebacks than the default. Also uses another related library of mine: **[`pure_eval`](https://github.com/alexmojaki/pure_eval)**.
- **[`futurecoder`](https://futurecoder.io/)**: Highlights the executing node in tracebacks using `executing` via `stack_data`, and provides debugging with `snoop`.
- **[`snoop`](https://github.com/alexmojaki/snoop)**: A feature-rich and convenient debugging library. Uses `executing` to show the operation which caused an exception and to allow the `pp` function to display the source of its arguments.
- **[`heartrate`](https://github.com/alexmojaki/heartrate)**: A simple real time visualisation of the execution of a Python program. Uses `executing` to highlight currently executing operations, particularly in each frame of the stack trace.
- **[`sorcery`](https://github.com/alexmojaki/sorcery)**: Dark magic delights in Python. Uses `executing` to let special callables called spells know where they're being called from.
### Projects I've contributed to
- **[`IPython`](https://github.com/ipython/ipython/pull/12150)**: Highlights the executing node in tracebacks using `executing` via [`stack_data`](https://github.com/alexmojaki/stack_data).
- **[`icecream`](https://github.com/gruns/icecream)**: 🍦 Sweet and creamy print debugging. Uses `executing` to identify where `ic` is called and print its arguments.
- **[`friendly_traceback`](https://github.com/friendly-traceback/friendly-traceback)**: Uses `stack_data` and `executing` to pinpoint the cause of errors and provide helpful explanations.
- **[`python-devtools`](https://github.com/samuelcolvin/python-devtools)**: Uses `executing` for print debugging similar to `icecream`.
- **[`sentry_sdk`](https://github.com/getsentry/sentry-python)**: Add the integration `sentry_sdk.integrations.executingExecutingIntegration()` to show the function `__qualname__` in each frame in sentry events.
- **[`varname`](https://github.com/pwwang/python-varname)**: Dark magics about variable names in python. Uses `executing` to find where its various magical functions like `varname` and `nameof` are called from. | {
"source": "yandex/perforator",
"title": "contrib/python/executing/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/python/executing/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 7705
} |
iniconfig: brain-dead simple parsing of ini files
=======================================================
iniconfig is a small and simple INI-file parser module
having a unique set of features:
* maintains order of sections and entries
* supports multi-line values with or without line-continuations
* supports "#" comments everywhere
* raises errors with proper line-numbers
* no bells and whistles like automatic substitutions
* iniconfig raises an Error if two sections have the same name.
If you encounter issues or have feature wishes please report them to:
https://github.com/RonnyPfannschmidt/iniconfig/issues
Basic Example
===================================
If you have an ini file like this:
.. code-block:: ini
# content of example.ini
[section1] # comment
name1=value1 # comment
name1b=value1,value2 # comment
[section2]
name2=
    line1
    line2
then you can do:
.. code-block:: pycon
>>> import iniconfig
>>> ini = iniconfig.IniConfig("example.ini")
>>> ini['section1']['name1'] # raises KeyError if not exists
'value1'
>>> ini.get('section1', 'name1b', [], lambda x: x.split(","))
['value1', 'value2']
>>> ini.get('section1', 'notexist', [], lambda x: x.split(","))
[]
>>> [x.name for x in list(ini)]
['section1', 'section2']
>>> list(list(ini)[0].items())
[('name1', 'value1'), ('name1b', 'value1,value2')]
>>> 'section1' in ini
True
>>> 'inexistendsection' in ini
False | {
"source": "yandex/perforator",
"title": "contrib/python/iniconfig/README.rst",
"url": "https://github.com/yandex/perforator/blob/main/contrib/python/iniconfig/README.rst",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1501
} |
.. image:: https://img.shields.io/pypi/v/jaraco.collections.svg
:target: https://pypi.org/project/jaraco.collections
.. image:: https://img.shields.io/pypi/pyversions/jaraco.collections.svg
.. image:: https://github.com/jaraco/jaraco.collections/actions/workflows/main.yml/badge.svg
:target: https://github.com/jaraco/jaraco.collections/actions?query=workflow%3A%22tests%22
:alt: tests
.. image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json
:target: https://github.com/astral-sh/ruff
:alt: Ruff
.. image:: https://readthedocs.org/projects/jaracocollections/badge/?version=latest
:target: https://jaracocollections.readthedocs.io/en/latest/?badge=latest
.. image:: https://img.shields.io/badge/skeleton-2024-informational
:target: https://blog.jaraco.com/skeleton
.. image:: https://tidelift.com/badges/package/pypi/jaraco.collections
:target: https://tidelift.com/subscription/pkg/pypi-jaraco.collections?utm_source=pypi-jaraco.collections&utm_medium=readme
Models and classes to supplement the stdlib 'collections' module.
See the docs, linked above, for descriptions and usage examples.
Highlights include:
- RangeMap: A mapping that accepts a range of values for keys (see the sketch after this list).
- Projection: A subset over an existing mapping.
- KeyTransformingDict: Generalized mapping with keys transformed by a function.
- FoldedCaseKeyedDict: A dict whose string keys are case-insensitive.
- BijectiveMap: A map where keys map to values and values back to their keys.
- ItemsAsAttributes: A mapping mix-in exposing items as attributes.
- IdentityOverrideMap: A map whose keys map by default to themselves unless overridden.
- FrozenDict: A hashable, immutable map.
- Enumeration: An object whose keys are enumerated.
- Everything: A container that contains all things.
- Least, Greatest: Objects that are always less than or greater than any other.
- pop_all: Return all items from the mutable sequence and remove them from that sequence.
- DictStack: A stack of dicts, great for sharing scopes.
- WeightedLookup: A specialized RangeMap for selecting an item by weights.
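A rough illustration of two of these, based only on the one-line descriptions above (see the linked docs for the exact semantics):

.. code-block:: python

    from jaraco.collections import RangeMap, FoldedCaseKeyedDict

    # RangeMap: keys act as upper bounds of ranges (inclusive by default)
    grades = RangeMap({60: 'F', 70: 'D', 80: 'C', 90: 'B', 100: 'A'})
    grade = grades[85]                    # falls in the 80..90 range -> 'B'

    # FoldedCaseKeyedDict: string keys compare case-insensitively
    headers = FoldedCaseKeyedDict({'Content-Type': 'text/plain'})
    value = headers['content-type']       # retrieves the 'Content-Type' entry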
For Enterprise
==============
Available as part of the Tidelift Subscription.
This project and the maintainers of thousands of other packages are working with Tidelift to deliver one enterprise subscription that covers all of the open source you use.
`Learn more <https://tidelift.com/subscription/pkg/pypi-jaraco.collections?utm_source=pypi-jaraco.collections&utm_medium=referral&utm_campaign=github>`_. | {
"source": "yandex/perforator",
"title": "contrib/python/jaraco.collections/README.rst",
"url": "https://github.com/yandex/perforator/blob/main/contrib/python/jaraco.collections/README.rst",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2567
} |
.. image:: https://img.shields.io/pypi/v/jaraco.context.svg
:target: https://pypi.org/project/jaraco.context
.. image:: https://img.shields.io/pypi/pyversions/jaraco.context.svg
.. image:: https://github.com/jaraco/jaraco.context/actions/workflows/main.yml/badge.svg
:target: https://github.com/jaraco/jaraco.context/actions?query=workflow%3A%22tests%22
:alt: tests
.. image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json
:target: https://github.com/astral-sh/ruff
:alt: Ruff
.. image:: https://readthedocs.org/projects/jaracocontext/badge/?version=latest
:target: https://jaracocontext.readthedocs.io/en/latest/?badge=latest
.. image:: https://img.shields.io/badge/skeleton-2024-informational
:target: https://blog.jaraco.com/skeleton
.. image:: https://tidelift.com/badges/package/pypi/jaraco.context
:target: https://tidelift.com/subscription/pkg/pypi-jaraco.context?utm_source=pypi-jaraco.context&utm_medium=readme
Highlights
==========
See the docs linked from the badge above for the full details, but here are some features that may be of interest.
- ``ExceptionTrap`` provides a general-purpose wrapper for trapping exceptions and then acting on the outcome. Includes ``passes`` and ``raises`` decorators to replace the result of a wrapped function by a boolean indicating the outcome of the exception trap. See `this keyring commit <https://github.com/jaraco/keyring/commit/a85a7cbc6c909f8121660ed1f7b487f99a1c2bf7>`_ for an example of it in production.
- ``suppress`` simply enables ``contextlib.suppress`` as a decorator (see the sketch after this list).
- ``on_interrupt`` is a decorator used by CLI entry points to affect the handling of a ``KeyboardInterrupt``. Inspired by `Lucretiel/autocommand#18 <https://github.com/Lucretiel/autocommand/issues/18>`_.
- ``pushd`` is similar to pytest's ``monkeypatch.chdir`` or path's `default context <https://path.readthedocs.io/en/latest/api.html>`_, changes the current working directory for the duration of the context.
- ``tarball`` will download a tarball, extract it, change directory, yield, then clean up after. Convenient when working with web assets.
- ``null`` is there for those times when one code branch needs a context and the other doesn't; this null context provides symmetry across those branches.
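A brief, hedged sketch of the first two items above (see the project docs for the authoritative API):

.. code-block:: python

    from jaraco.context import ExceptionTrap, suppress

    # ExceptionTrap: record whether the block raised, then act on the outcome
    with ExceptionTrap(ValueError) as trap:
        int('not a number')
    handled = bool(trap)                  # True: a ValueError was trapped

    # suppress: contextlib.suppress, usable as a decorator
    @suppress(FileNotFoundError)
    def read_optional(path):
        with open(path) as f:
            return f.read()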
For Enterprise
==============
Available as part of the Tidelift Subscription.
This project and the maintainers of thousands of other packages are working with Tidelift to deliver one enterprise subscription that covers all of the open source you use.
`Learn more <https://tidelift.com/subscription/pkg/pypi-jaraco.context?utm_source=pypi-jaraco.context&utm_medium=referral&utm_campaign=github>`_. | {
"source": "yandex/perforator",
"title": "contrib/python/jaraco.context/README.rst",
"url": "https://github.com/yandex/perforator/blob/main/contrib/python/jaraco.context/README.rst",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2744
} |
.. image:: https://img.shields.io/pypi/v/jaraco.text.svg
:target: https://pypi.org/project/jaraco.text
.. image:: https://img.shields.io/pypi/pyversions/jaraco.text.svg
.. image:: https://github.com/jaraco/jaraco.text/actions/workflows/main.yml/badge.svg
:target: https://github.com/jaraco/jaraco.text/actions?query=workflow%3A%22tests%22
:alt: tests
.. image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json
:target: https://github.com/astral-sh/ruff
:alt: Ruff
.. image:: https://readthedocs.org/projects/jaracotext/badge/?version=latest
:target: https://jaracotext.readthedocs.io/en/latest/?badge=latest
.. image:: https://img.shields.io/badge/skeleton-2024-informational
:target: https://blog.jaraco.com/skeleton
.. image:: https://tidelift.com/badges/package/pypi/jaraco.text
:target: https://tidelift.com/subscription/pkg/pypi-jaraco.text?utm_source=pypi-jaraco.text&utm_medium=readme
This package provides handy routines for dealing with text, such as
wrapping, substitution, trimming, stripping, prefix and suffix removal,
line continuation, indentation, comment processing, identifier processing,
values parsing, case insensitive comparison, and more. See the docs
(linked in the badge above) for the detailed documentation and examples.
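For example, the case-insensitive comparison mentioned above is provided by ``FoldedCase``; a minimal, hedged sketch:

.. code-block:: python

    from jaraco.text import FoldedCase

    # FoldedCase: a str subclass that compares (and hashes) case-insensitively
    name = FoldedCase('Perforator')
    assert name == 'PERFORATOR'
    assert 'forA' in name                 # containment is case-insensitive too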
Layouts
=======
One of the features of this package is the layouts module, which
provides a simple example of translating keystrokes from one keyboard
layout to another::
echo qwerty | python -m jaraco.text.to-dvorak
',.pyf
echo "',.pyf" | python -m jaraco.text.to-qwerty
qwerty
Newline Reporting
=================
Need to know what newlines appear in a file?
::
$ python -m jaraco.text.show-newlines README.rst
newline is '\n'
For Enterprise
==============
Available as part of the Tidelift Subscription.
This project and the maintainers of thousands of other packages are working with Tidelift to deliver one enterprise subscription that covers all of the open source you use.
`Learn more <https://tidelift.com/subscription/pkg/pypi-jaraco.text?utm_source=pypi-jaraco.text&utm_medium=referral&utm_campaign=github>`_. | {
"source": "yandex/perforator",
"title": "contrib/python/jaraco.text/README.rst",
"url": "https://github.com/yandex/perforator/blob/main/contrib/python/jaraco.text/README.rst",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2203
} |
# Matplotlib Inline Back-end for IPython and Jupyter
This package provides support for matplotlib to display figures directly inline in the Jupyter notebook and related clients, as shown below.
## Installation
With conda:
```bash
conda install -c conda-forge matplotlib-inline
```
With pip:
```bash
pip install matplotlib-inline
```
## Usage
Note that in current versions of JupyterLab and Jupyter Notebook, the explicit use of the `%matplotlib inline` directive is not needed anymore, though other third-party clients may still require it.
This will produce a figure immediately below:
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 3*np.pi, 500)
plt.plot(x, np.sin(x**2))
plt.title('A simple chirp');
```
## License
Licensed under the terms of the BSD 3-Clause License, by the IPython Development Team (see `LICENSE` file). | {
"source": "yandex/perforator",
"title": "contrib/python/matplotlib-inline/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/python/matplotlib-inline/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 891
} |
mypy-protobuf: Generate mypy stub files from protobuf specs
[](https://github.com/nipunn1313/mypy-protobuf/actions?query=branch%3Amain)
[](https://pypi.org/project/mypy-protobuf/)
[](https://github.com/nipunn1313/mypy-protobuf/blob/main/LICENSE)
===========================================================
2.10 is the last version of mypy-protobuf which supports targeting python 2.7.
Built originally with love at [Dropbox](https://github.com/dropbox)
See [Changelog](CHANGELOG.md) for recent changes.
## Requirements to run mypy-protobuf
Earlier releases might work, but aren't tested
- [protoc >= 23.4](https://github.com/protocolbuffers/protobuf/releases)
- [python-protobuf >= 4.25.3](https://pypi.org/project/protobuf/) - matching protoc release
- [python >= 3.8](https://www.python.org/downloads/source/) - for running mypy-protobuf plugin.
## Requirements to run typecheckers on stubs generated by mypy-protobuf
Earlier releases might work, but aren't tested
- [mypy >= v1.4.1](https://pypi.org/project/mypy) or [pyright >= 1.1.206](https://github.com/microsoft/pyright)
- [python-protobuf >= 4.25.3](https://pypi.org/project/protobuf/) - matching protoc release
- [types-protobuf >= 4.24](https://pypi.org/project/types-protobuf/) - for stubs from the google.protobuf library
### To run typecheckers on code generated with grpc plugin - you'll additionally need
Earlier releases might work, but aren't tested
- [grpcio>=1.56.2](https://pypi.org/project/grpcio/)
- [grpcio-tools>=1.56.2](https://pypi.org/project/grpcio-tools/)
- [grpc-stubs>=1.53.0.2](https://pypi.org/project/grpc-stubs/)
Other configurations may work, but are not continuously tested currently.
We would be open to expanding this list - file an issue on the issue tracker.
## Installation
The plugin can be installed with
```
pip3 install mypy-protobuf
```
To install unreleased
```
REV=main # or whichever unreleased git rev you'd like
pip3 install git+https://github.com/nipunn1313/mypy-protobuf.git@$REV
# For older (1.x) versions of mypy protobuf - you may need
pip3 install git+https://github.com/nipunn1313/mypy-protobuf.git@$REV#subdirectory=python
```
In order to run mypy on the generated code, you'll need to install
```
pip3 install mypy>=0.910 types-protobuf>=0.1.14
```
# Usage
On POSIX systems, `protoc-gen-mypy` is installed to Python's executable bin directory. Assuming that
directory is on your `$PATH`, you can run
```
protoc --python_out=output/location --mypy_out=output/location
```
Alternately, you can explicitly provide the path:
```
protoc --plugin=protoc-gen-mypy=path/to/protoc-gen-mypy --python_out=output/location --mypy_out=output/location
```
Check the version number with
```
> protoc-gen-mypy --version
```
## Implementation
The implementation of the plugin is in `mypy_protobuf/main.py`, which installs to
an executable protoc-gen-mypy. On windows it installs to `protoc-gen-mypy.exe`
## Features
See [Changelog](CHANGELOG.md) for full listing
### Bring comments from .proto files to docstrings in .pyi files
Comments in the .proto files on messages, fields, enums, enum variants, extensions, services, and methods
will appear as docstrings in .pyi files. Useful in IDEs for showing completions with comments.
### Types enum int values more strongly
Enum int values produce stubs which wrap the int values in NewType
```proto
enum MyEnum {
HELLO = 0;
WORLD = 1;
}
```
Will yield an [enum type wrapper](https://github.com/python/typeshed/blob/16ae4c61201cd8b96b8b22cdfb2ab9e89ba5bcf2/stubs/protobuf/google/protobuf/internal/enum_type_wrapper.pyi) whose methods are typed to use `MyEnum.ValueType` (a `NewType` over `int`) rather than plain `int`.
This allows mypy to catch bugs where the wrong enum value is being used.
Calling code may be typed as follows.
In python >= 3.7
```python
# May need [PEP 563](https://www.python.org/dev/peps/pep-0563/) to postpone evaluation of annotations
# from __future__ import annotations # Not needed with python>=3.11 or protobuf>=3.20.0
def f(x: MyEnum.ValueType):
    print(x)
f(MyEnum.Value("HELLO"))
```
With protobuf <= 3.20.0, the type of `x` must be quoted in usages of `cast`, because `ValueType`
does not yet exist in the generated Python code.
With protobuf >= 3.20.0, [upstream protobuf](https://github.com/protocolbuffers/protobuf/pull/8182) includes `ValueType` in the generated code, so the quotes are no longer needed.
```python
cast('MyEnum.ValueType', x)
```
Similarly, for type aliases with protobuf < 3.20.0, you must either quote the type or hide it behind `TYPE_CHECKING`
```python
from typing import Tuple, TYPE_CHECKING
HELLO = Tuple['MyEnum.ValueType', 'MyEnum.ValueType']
if TYPE_CHECKING:
    HELLO = Tuple[MyEnum.ValueType, MyEnum.ValueType]
```
#### Enum int impl details
mypy-protobuf autogenerates an instance of the EnumTypeWrapper as follows.
```python
class _MyEnum:
    ValueType = typing.NewType('ValueType', builtins.int)
    V: typing_extensions.TypeAlias = ValueType

class _MyEnumEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[_MyEnum.ValueType], builtins.type):
    DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor
    HELLO: _MyEnum.ValueType  # 0
    WORLD: _MyEnum.ValueType  # 1

class MyEnum(_MyEnum, metaclass=_MyEnumEnumTypeWrapper):
    pass

HELLO: MyEnum.ValueType  # 0
WORLD: MyEnum.ValueType  # 1
```
`_MyEnumEnumTypeWrapper` extends the EnumTypeWrapper to take/return MyEnum.ValueType rather than int
`MyEnum` is an instance of the `EnumTypeWrapper`.
- The use of `_MyEnum` and of the metaclass is an implementation detail that makes `MyEnum.ValueType` a valid type without a circular dependency
- `V` is supported as an alias of `ValueType` for backward compatibility
### Supports generating type wrappers for fields and maps
M.proto
```proto
message M {
uint32 user_id = 1 [(mypy_protobuf.options).casttype="mymod.UserId"];
map<uint32, string> email_by_uid = 2 [
(mypy_protobuf.options).keytype="path/to/mymod.UserId",
(mypy_protobuf.options).valuetype="path/to/mymod.Email"
];
}
```
mymod.py
```python
UserId = NewType("UserId", int)
Email = NewType("Email", Text)
```
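Hypothetical calling code (the generated module name `M_pb2` and the import path of `mymod` are illustrative, not actual generated output):

```python
from path.to import mymod
import M_pb2  # module generated by protoc for M.proto

m = M_pb2.M(user_id=mymod.UserId(42))
uid: mymod.UserId = m.user_id  # type-checks as UserId, not plain int
m.email_by_uid[mymod.UserId(42)] = mymod.Email("a@example.com")
```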
### `py_generic_services`
If `py_generic_services` is set in your proto file, then mypy-protobuf will
generate service stubs. If you want GRPC stubs instead - use the GRPC instructions.
### `readable_stubs`
If `readable_stubs` is set, mypy-protobuf will generate easier-to-read stubs. The downside
to this approach is that it's possible to generate stubs which do not pass mypy, particularly
in the case of name collisions. mypy-protobuf defaults to generating stubs with fully qualified
imports and mangled global-level identifiers to defend against name collisions between global
identifiers and field names.
If you're ok with this risk, try it out!
```
protoc --python_out=output/location --mypy_out=readable_stubs:output/location
```
### `relax_strict_optional_primitives`
If you are using proto3, then primitives cannot be represented as NULL on the wire; they can
only be encoded as their zero value. By default, mypy-protobuf types message constructors with
non-nullable primitives (e.g. `int` instead of `Optional[int]`). python-protobuf itself will
internally convert None to the zero value. If you intentionally want to pass `None` for
primitives and rely on that conversion, set this flag! We recommend avoiding this, as it can
lead to developer error: confusing NULL and 0 as distinct on the wire.
However, it may be helpful when migrating existing proto2 code, where the distinction is meaningful.
```
protoc --python_out=output/location --mypy_out=relax_strict_optional_primitives:output/location
```
### Output suppression
To suppress output, you can run
```
protoc --python_out=output/location --mypy_out=quiet:output/location
```
### GRPC
This plugin provides stub generation for grpcio-generated code.
```
protoc \
--python_out=output/location \
--mypy_out=output/location \
--grpc_out=output/location \
--mypy_grpc_out=output/location
```
Note that the generated gRPC stubs only work together with the generated Python code, and both must be written to the same output location.
If you need stubs for gRPC's own internals, we suggest using this package: https://github.com/shabbyrobe/grpc-stubs
### Targeting python2 support
mypy-protobuf's drops support for targeting python2 with version 3.0. If you still need python2 support -
```
python3 -m pip install mypy_protobuf==2.10
protoc --python_out=output/location --mypy_out=output/location
mypy --target-version=2.7 {files}
```
## Contributing
Contributions to the implementation are welcome. Please run tests using `./run_test.sh`.
Ensure code is formatted using black.
```
pip3 install black
black .
```
## Contributors
- [@nipunn1313](https://github.com/nipunn1313)
- [@dzbarsky](https://github.com/dzbarsky)
- [@gvanrossum](https://github.com/gvanrossum)
- [@peterlvilim](https://github.com/peterlvilim)
- [@msullivan](https://github.com/msullivan)
- [@bradenaw](https://github.com/bradenaw)
- [@ilevkivskyi](https://github.com/ilevkivskyi)
- [@Ketouem](https://github.com/Ketouem)
- [@nmiculinic](https://github.com/nmiculinic)
- [@onto](https://github.com/onto)
- [@jcppkkk](https://github.com/jcppkkk)
- [@drather19](https://github.com/drather19)
- [@smessmer](https://github.com/smessmer)
- [@pcorpet](https://github.com/pcorpet)
- [@zozoens31](https://github.com/zozoens31)
- [@abhishekrb19](https://github.com/abhishekrb19)
- [@jaens](https://github.com/jaens)
- [@arussellsaw](https://github.com/arussellsaw)
- [@shabbyrobe](https://github.com/shabbyrobe)
- [@reorx](https://github.com/reorx)
- [@zifter](https://github.com/zifter)
- [@juzna](https://github.com/juzna)
- [@mikolajz](https://github.com/mikolajz)
- [@chadrik](https://github.com/chadrik)
- [@EPronovost](https://github.com/EPronovost)
- [@chrislawlor](https://github.com/chrislawlor)
- [@henribru](https://github.com/henribru)
- [@Evgenus](https://github.com/Evgenus)
- [@MHDante](https://github.com/MHDante)
- [@nelfin](https://github.com/nelfin)
- [@alkasm](https://github.com/alkasm)
- [@tarmath](https://github.com/tarmath)
- [@jaredkhan](https://github.com/jaredkhan)
- [@sodul](https://github.com/sodul)
- [@miaachan](https://github.com/miaachan)
- [@Alphadelta14](https://github.com/Alphadelta14)
- [@fergyfresh](https://github.com/fergyfresh)
- [@AlexWaygood](https://github.com/AlexWaygood)
## Licence etc.
1. License: Apache 2.0.
2. Copyright attribution: Copyright (c) 2022 Nipunn Koorapati | {
"source": "yandex/perforator",
"title": "contrib/python/mypy-protobuf/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/python/mypy-protobuf/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 10560
} |
The problem
===========
.. image:: https://badge.fury.io/py/platformdirs.svg
:target: https://badge.fury.io/py/platformdirs
.. image:: https://img.shields.io/pypi/pyversions/platformdirs.svg
:target: https://pypi.python.org/pypi/platformdirs/
.. image:: https://github.com/tox-dev/platformdirs/actions/workflows/check.yaml/badge.svg
:target: https://github.com/platformdirs/platformdirs/actions
.. image:: https://static.pepy.tech/badge/platformdirs/month
:target: https://pepy.tech/project/platformdirs
When writing a desktop application, finding the right location to store user data
and configuration varies per platform. Even for single-platform apps, there
may be plenty of nuances in figuring out the right location.
For example, if running on macOS, you should use::
~/Library/Application Support/<AppName>
If on Windows (at least English Win) that should be::
C:\Documents and Settings\<User>\Application Data\Local Settings\<AppAuthor>\<AppName>
or possibly::
C:\Documents and Settings\<User>\Application Data\<AppAuthor>\<AppName>
for `roaming profiles <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-vista/cc766489(v=ws.10)>`_ but that is another story.
On Linux (and other Unices), according to the `XDG Basedir Spec`_, it should be::
~/.local/share/<AppName>
.. _XDG Basedir Spec: https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
``platformdirs`` to the rescue
==============================
This kind of thing is what the ``platformdirs`` package is for.
``platformdirs`` will help you choose an appropriate:
- user data dir (``user_data_dir``)
- user config dir (``user_config_dir``)
- user cache dir (``user_cache_dir``)
- site data dir (``site_data_dir``)
- site config dir (``site_config_dir``)
- user log dir (``user_log_dir``)
- user documents dir (``user_documents_dir``)
- user downloads dir (``user_downloads_dir``)
- user pictures dir (``user_pictures_dir``)
- user videos dir (``user_videos_dir``)
- user music dir (``user_music_dir``)
- user desktop dir (``user_desktop_dir``)
- user runtime dir (``user_runtime_dir``)
And also:
- Is slightly opinionated on the directory names used. Look for "OPINION" in
documentation and code for when an opinion is being applied.
Example output
==============
On macOS:
.. code-block:: pycon
>>> from platformdirs import *
>>> appname = "SuperApp"
>>> appauthor = "Acme"
>>> user_data_dir(appname, appauthor)
'/Users/trentm/Library/Application Support/SuperApp'
>>> site_data_dir(appname, appauthor)
'/Library/Application Support/SuperApp'
>>> user_cache_dir(appname, appauthor)
'/Users/trentm/Library/Caches/SuperApp'
>>> user_log_dir(appname, appauthor)
'/Users/trentm/Library/Logs/SuperApp'
>>> user_documents_dir()
'/Users/trentm/Documents'
>>> user_downloads_dir()
'/Users/trentm/Downloads'
>>> user_pictures_dir()
'/Users/trentm/Pictures'
>>> user_videos_dir()
'/Users/trentm/Movies'
>>> user_music_dir()
'/Users/trentm/Music'
>>> user_desktop_dir()
'/Users/trentm/Desktop'
>>> user_runtime_dir(appname, appauthor)
'/Users/trentm/Library/Caches/TemporaryItems/SuperApp'
On Windows:
.. code-block:: pycon
>>> from platformdirs import *
>>> appname = "SuperApp"
>>> appauthor = "Acme"
>>> user_data_dir(appname, appauthor)
'C:\\Users\\trentm\\AppData\\Local\\Acme\\SuperApp'
>>> user_data_dir(appname, appauthor, roaming=True)
'C:\\Users\\trentm\\AppData\\Roaming\\Acme\\SuperApp'
>>> user_cache_dir(appname, appauthor)
'C:\\Users\\trentm\\AppData\\Local\\Acme\\SuperApp\\Cache'
>>> user_log_dir(appname, appauthor)
'C:\\Users\\trentm\\AppData\\Local\\Acme\\SuperApp\\Logs'
>>> user_documents_dir()
'C:\\Users\\trentm\\Documents'
>>> user_downloads_dir()
'C:\\Users\\trentm\\Downloads'
>>> user_pictures_dir()
'C:\\Users\\trentm\\Pictures'
>>> user_videos_dir()
'C:\\Users\\trentm\\Videos'
>>> user_music_dir()
'C:\\Users\\trentm\\Music'
>>> user_desktop_dir()
'C:\\Users\\trentm\\Desktop'
>>> user_runtime_dir(appname, appauthor)
'C:\\Users\\trentm\\AppData\\Local\\Temp\\Acme\\SuperApp'
On Linux:
.. code-block:: pycon
>>> from platformdirs import *
>>> appname = "SuperApp"
>>> appauthor = "Acme"
>>> user_data_dir(appname, appauthor)
'/home/trentm/.local/share/SuperApp'
>>> site_data_dir(appname, appauthor)
'/usr/local/share/SuperApp'
>>> site_data_dir(appname, appauthor, multipath=True)
'/usr/local/share/SuperApp:/usr/share/SuperApp'
>>> user_cache_dir(appname, appauthor)
'/home/trentm/.cache/SuperApp'
>>> user_log_dir(appname, appauthor)
'/home/trentm/.local/state/SuperApp/log'
>>> user_config_dir(appname)
'/home/trentm/.config/SuperApp'
>>> user_documents_dir()
'/home/trentm/Documents'
>>> user_downloads_dir()
'/home/trentm/Downloads'
>>> user_pictures_dir()
'/home/trentm/Pictures'
>>> user_videos_dir()
'/home/trentm/Videos'
>>> user_music_dir()
'/home/trentm/Music'
>>> user_desktop_dir()
'/home/trentm/Desktop'
>>> user_runtime_dir(appname, appauthor)
'/run/user/{os.getuid()}/SuperApp'
>>> site_config_dir(appname)
'/etc/xdg/SuperApp'
>>> os.environ["XDG_CONFIG_DIRS"] = "/etc:/usr/local/etc"
>>> site_config_dir(appname, multipath=True)
'/etc/SuperApp:/usr/local/etc/SuperApp'
On Android::
>>> from platformdirs import *
>>> appname = "SuperApp"
>>> appauthor = "Acme"
>>> user_data_dir(appname, appauthor)
'/data/data/com.myApp/files/SuperApp'
>>> user_cache_dir(appname, appauthor)
'/data/data/com.myApp/cache/SuperApp'
>>> user_log_dir(appname, appauthor)
'/data/data/com.myApp/cache/SuperApp/log'
>>> user_config_dir(appname)
'/data/data/com.myApp/shared_prefs/SuperApp'
>>> user_documents_dir()
'/storage/emulated/0/Documents'
>>> user_downloads_dir()
'/storage/emulated/0/Downloads'
>>> user_pictures_dir()
'/storage/emulated/0/Pictures'
>>> user_videos_dir()
'/storage/emulated/0/DCIM/Camera'
>>> user_music_dir()
'/storage/emulated/0/Music'
>>> user_desktop_dir()
'/storage/emulated/0/Desktop'
>>> user_runtime_dir(appname, appauthor)
'/data/data/com.myApp/cache/SuperApp/tmp'
Note: Some Android apps, like Termux and Pydroid, are used as shells. These
apps are used by the end user to emulate a Linux environment. The presence of
the ``SHELL`` environment variable is used by Platformdirs to differentiate
between general Android apps and Android apps used as shells. Shell Android
apps also support the ``XDG_*`` environment variables.
``PlatformDirs`` for convenience
================================
.. code-block:: pycon
>>> from platformdirs import PlatformDirs
>>> dirs = PlatformDirs("SuperApp", "Acme")
>>> dirs.user_data_dir
'/Users/trentm/Library/Application Support/SuperApp'
>>> dirs.site_data_dir
'/Library/Application Support/SuperApp'
>>> dirs.user_cache_dir
'/Users/trentm/Library/Caches/SuperApp'
>>> dirs.user_log_dir
'/Users/trentm/Library/Logs/SuperApp'
>>> dirs.user_documents_dir
'/Users/trentm/Documents'
>>> dirs.user_downloads_dir
'/Users/trentm/Downloads'
>>> dirs.user_pictures_dir
'/Users/trentm/Pictures'
>>> dirs.user_videos_dir
'/Users/trentm/Movies'
>>> dirs.user_music_dir
'/Users/trentm/Music'
>>> dirs.user_desktop_dir
'/Users/trentm/Desktop'
>>> dirs.user_runtime_dir
'/Users/trentm/Library/Caches/TemporaryItems/SuperApp'
Per-version isolation
=====================
If you have multiple versions of your app in use that you want to be
able to run side-by-side, then you may want version-isolation for these
dirs::
>>> from platformdirs import PlatformDirs
>>> dirs = PlatformDirs("SuperApp", "Acme", version="1.0")
>>> dirs.user_data_dir
'/Users/trentm/Library/Application Support/SuperApp/1.0'
>>> dirs.site_data_dir
'/Library/Application Support/SuperApp/1.0'
>>> dirs.user_cache_dir
'/Users/trentm/Library/Caches/SuperApp/1.0'
>>> dirs.user_log_dir
'/Users/trentm/Library/Logs/SuperApp/1.0'
>>> dirs.user_documents_dir
'/Users/trentm/Documents'
>>> dirs.user_downloads_dir
'/Users/trentm/Downloads'
>>> dirs.user_pictures_dir
'/Users/trentm/Pictures'
>>> dirs.user_videos_dir
'/Users/trentm/Movies'
>>> dirs.user_music_dir
'/Users/trentm/Music'
>>> dirs.user_desktop_dir
'/Users/trentm/Desktop'
>>> dirs.user_runtime_dir
'/Users/trentm/Library/Caches/TemporaryItems/SuperApp/1.0'
Be wary of using this for configuration files though; you'll need to handle
migrating configuration files manually.
Why this Fork?
==============
This repository is a friendly fork of the wonderful work started by
`ActiveState <https://github.com/ActiveState/appdirs>`_ who created
``appdirs``, this package's ancestor.
Maintaining an open source project is no easy task, particularly
from within an organization, and the Python community is indebted
to ``appdirs`` (and to Trent Mick and Jeff Rouse in particular) for
creating an incredibly useful simple module, as evidenced by the wide
number of users it has attracted over the years.
Nonetheless, given the number of long-standing open issues
and pull requests, and no clear path towards `ensuring
that maintenance of the package would continue or grow
<https://github.com/ActiveState/appdirs/issues/79>`_, this fork was
created.
Contributions are most welcome. | {
"source": "yandex/perforator",
"title": "contrib/python/platformdirs/README.rst",
"url": "https://github.com/yandex/perforator/blob/main/contrib/python/platformdirs/README.rst",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 9672
} |