# SmolLM evaluation scripts

We're using the [LightEval](https://github.com/huggingface/lighteval/) library to benchmark our models. Check out the [quick tour](https://github.com/huggingface/lighteval/wiki/Quicktour) to configure it for your own hardware and tasks.

## Setup

Use conda/venv with `python>=3.10`. Adjust the pytorch installation according to your environment:

```bash
pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu121
```

For reproducibility, we recommend fixed versions of the libraries:

```bash
pip install -r requirements.txt
```

## Running the evaluations

### SmolLM2 base models

```bash
lighteval accelerate \
  --model_args "pretrained=HuggingFaceTB/SmolLM2-1.7B,revision=main,dtype=bfloat16,vllm,gpu_memory_utilisation=0.8,max_model_length=2048" \
  --custom_tasks "tasks.py" --tasks "smollm2_base.txt" --output_dir "./evals" --save_details
```

### SmolLM2 instruction-tuned models (note the `--use_chat_template` flag)

```bash
lighteval accelerate \
  --model_args "pretrained=HuggingFaceTB/SmolLM2-1.7B-Instruct,revision=main,dtype=bfloat16,vllm,gpu_memory_utilisation=0.8,max_model_length=2048" \
  --custom_tasks "tasks.py" --tasks "smollm2_instruct.txt" --use_chat_template --output_dir "./evals" --save_details
```

### FineMath dataset ablations

See the collection for the model names: https://huggingface.co/collections/HuggingFaceTB/finemath-6763fb8f71b6439b653482c2

```bash
lighteval accelerate \
  --model_args "pretrained=HuggingFaceTB/finemath-ablation-4plus-160B,revision=main,dtype=bfloat16,vllm,gpu_memory_utilisation=0.7,max_model_length=4096" \
  --custom_tasks "tasks.py" --tasks "custom|math|4|1,custom|gsm8k|5|1,custom|arc:challenge|0|1,custom|mmlu_pro|0|1,custom|hellaswag|0|1" --output_dir "./evals" --save_details
```
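Runs launched with `--save_details` also dump per-sample details as parquet files under the output directory. A quick way to inspect them; the exact directory layout depends on your LightEval version, so the glob below is only illustrative:

```python
# Inspect the per-sample details saved by --save_details.
# Assumption: details end up as parquet files somewhere under ./evals.
import glob
import pandas as pd

for path in glob.glob("evals/**/*.parquet", recursive=True):
    df = pd.read_parquet(path)
    print(path, df.shape)  # one row per evaluated sample
```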
{ "source": "huggingface/smollm", "title": "text/evaluation/README.md", "url": "https://github.com/huggingface/smollm/blob/main/text/evaluation/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 1795 }
# Fine-tuning

## SmolLM2 Instruct

We built the SmolLM2 Instruct family by fine-tuning the base 1.7B model on [SmolTalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) and the base 360M and 135M models on [Smol-smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smol-smoltalk) using `TRL` and the alignment handbook, and then doing DPO on [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback).

You can find the scripts and instructions for doing this here: https://github.com/huggingface/alignment-handbook/tree/main/recipes/smollm2#instructions-to-train-smollm2-17b-instruct

## Custom script

Here, we provide a simple script for fine-tuning SmolLM2. In this case, we fine-tune the base 1.7B model on Python data.

### Setup

Install `pytorch` ([see documentation](https://pytorch.org/)), and then install the requirements:

```bash
pip install -r requirements.txt
```

Before you run any of the scripts, make sure you are logged in to `wandb` and the HuggingFace Hub to push the checkpoints, and that you have `accelerate` configured:

```bash
wandb login
huggingface-cli login
accelerate config
```

Now you can clone the repository and change into the corresponding directory:

```bash
git clone https://github.com/huggingface/smollm
cd smollm/finetune
```

### Training

To fine-tune efficiently at a low cost, we use the [PEFT](https://github.com/huggingface/peft) library for Low-Rank Adaptation (LoRA) training. We also use the `SFTTrainer` from [TRL](https://github.com/huggingface/trl). For this example, we fine-tune SmolLM2-1.7B on the `Python` subset of [the-stack-smol](https://huggingface.co/datasets/bigcode/the-stack-smol). This is just for illustration purposes. To launch the training:

```bash
accelerate launch train.py \
    --model_id "HuggingFaceTB/SmolLM2-1.7B" \
    --dataset_name "bigcode/the-stack-smol" \
    --subset "data/python" \
    --dataset_text_field "content" \
    --split "train" \
    --max_seq_length 2048 \
    --max_steps 5000 \
    --micro_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --learning_rate 3e-4 \
    --warmup_steps 100 \
    --num_proc "$(nproc)"
```

If you want to fine-tune on other text datasets, change the `dataset_text_field` argument to the name of the column containing the code/text you want to train on. A minimal sketch of this setup is shown below.
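For reference, here is a minimal sketch of what a LoRA SFT script like `train.py` boils down to. This is not the repository's script: the argument names follow older TRL versions (`dataset_text_field`/`max_seq_length` passed directly to the trainer) and may need adjusting for your installed version.

```python
# Minimal LoRA SFT sketch with PEFT + TRL (illustrative, not the repo's train.py).
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("bigcode/the-stack-smol", data_dir="data/python", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-1.7B",
    train_dataset=dataset,
    peft_config=LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM"),
    dataset_text_field="content",   # the column holding the training text
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="./sft-output",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=3e-4,
        warmup_steps=100,
        max_steps=5000,
    ),
)
trainer.train()
```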
{ "source": "huggingface/smollm", "title": "text/finetuning/README.md", "url": "https://github.com/huggingface/smollm/blob/main/text/finetuning/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 2357 }
# Pretraining

We use the [nanotron](https://github.com/huggingface/nanotron/) library for training the SmolLM and SmolLM2 base models. The scripts for training SmolLM v1 can be found in the `smollm1` folder, and those for training SmolLM2 in the `smollm2` folder; we will add the details of the data mixture soon.

SmolLM2 uses a similar architecture to SmolLM but with an improved data mixture and significantly longer training (11 trillion tokens for the 1.7B, 4 trillion for the 360M and 2 trillion for the 135M).

## Setup

Please refer to [nanotron](https://github.com/huggingface/nanotron/) for detailed instructions on setting up your training environment and launching jobs. After setting up the environment and tokenizing the training datasets with [datatrove](https://github.com/huggingface/datatrove) (instructions available [here](https://github.com/huggingface/nanotron/blob/main/docs/nanoset.md#nanosets)), you can modify the configurations to match your number of nodes and local paths.

Below is an example of launching SmolLM1 135M training on 1 node (change the DP value to 8 in the config and adjust the batch size accordingly):

```bash
git clone https://github.com/huggingface/nanotron
cd nanotron
# follow installation
CUDA_DEVICE_MAX_CONNECTIONS=1 torchrun --nproc_per_node=8 run_train.py --config-file smollm1/config_smollm1_135M.yaml
```

If you are working on a slurm cluster, you can modify `launch.slurm` and launch the training with:

```bash
sbatch launch.slurm
```

> [!NOTE]
> Don't forget to create the logs directory before launching the job (e.g. `mkdir -p logs`, matching the log path configured in your slurm script).

## Continual pre-training

The nanotron checkpoints for SmolLM2 models are available at: https://huggingface.co/HuggingFaceTB/SmolLM2-nanotron-ckpt

You can find an example of continual pre-training in the [continual-pretraining](./continual-pretraining) folder.
{ "source": "huggingface/smollm", "title": "text/pretraining/README.md", "url": "https://github.com/huggingface/smollm/blob/main/text/pretraining/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 1861 }
# smol-tools

A collection of lightweight AI-powered tools built with llama.cpp and small language models. These tools are designed to run locally on your machine without requiring expensive GPU resources. They can also run offline, without any internet connection.

## Features

### SmolSummarizer
- Quick text summarization using SmolLM2-1.7B Instruct
- Maintains key points while providing concise summaries
- Able to reply to follow-up questions

### SmolRewriter
- Rewrites text to be more professional and approachable
- Maintains the original message's intent and key points
- Perfect for email and message drafting

### SmolAgent
- An AI agent that can perform various tasks through tool integration
- Built-in tools include:
  - Weather lookup
  - Random number generation
  - Current time
  - Web browser control
- Extensible tool system for adding new capabilities

## Installation

1. Clone the repository:
```bash
git clone https://github.com/huggingface/smollm.git
cd smollm/smol_tools
```

2. Install dependencies:
```bash
uv venv --python 3.11
source .venv/bin/activate
uv pip install -r requirements.txt
```

On macOS, if you don't have tkinter installed, you can install it with:
```bash
brew install python-tk@3.11
```

On Linux, you can install it with:
```bash
sudo apt-get install python3-tk
```

On Windows, check the option to also install the tkinter library when installing Python.

## Usage

### GUI Demo

Run the Tkinter-based demo application:
```bash
python demo_tkinter.py
```

The demo provides a user-friendly interface with the following shortcuts:
- `F1`: Open SmolDraft interface
- `F2`: Summarize selected text
- `F5`: Open SmolChat interface
- `F10`: Open SmolAgent interface

### Programmatic Usage

```python
from smol_tools.summarizer import SmolSummarizer
from smol_tools.rewriter import SmolRewriter
from smol_tools.agent import SmolToolAgent

# Initialize tools
summarizer = SmolSummarizer()
rewriter = SmolRewriter()
agent = SmolToolAgent()

# Generate a summary
for summary in summarizer.process("Your text here"):
    print(summary)

# Rewrite text
for improved in rewriter.process("Your text here"):
    print(improved)

# Use the agent
for response in agent.process("What's the weather in London?"):
    print(response)
```

## Models

The tools use the following models:
- SmolSummarizer: SmolLM2-1.7B Instruct

All models are quantized to 16-bit floating point (F16) for efficient inference. Training was done in BF16, but in our tests that format gives slower inference on Mac M-series chips.

## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
{ "source": "huggingface/smollm", "title": "tools/smol_tools/README.md", "url": "https://github.com/huggingface/smollm/blob/main/tools/smol_tools/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 2763 }
# SmolLM local inference

You can use SmolLM2 models locally with frameworks like Transformers.js, llama.cpp, MLX and MLC. Here you can find the code for running SmolLM locally with each of these libraries. You can also find the conversions of SmolLM & SmolLM2 in these collections: [SmolLM1](https://huggingface.co/collections/HuggingFaceTB/local-smollms-66c0f3b2a15b4eed7fb198d0) and [SmolLM2](https://huggingface.co/collections/HuggingFaceTB/smollm2-6723884218bcda64b34d7db9).

Please first install each library by following its documentation (a minimal llama-cpp-python example is shown at the end of this README):
- [Transformers.js](https://github.com/huggingface/transformers.js)
- [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
- [MLX](https://github.com/ml-explore/mlx)
- [MLC](https://github.com/mlc-ai/web-llm)

## Demos

Below are some demos we built for running SmolLM models on-device.

### In-browser chat assistants
- [WebGPU chat demo](https://huggingface.co/spaces/HuggingFaceTB/SmolLM2-1.7B-Instruct-WebGPU) of SmolLM2 1.7B Instruct powered by Transformers.js and ONNX Runtime Web.
- [Instant SmolLM](https://huggingface.co/spaces/HuggingFaceTB/instant-smollm) powered by MLC for real-time generation with SmolLM-360M-Instruct.

The models are also available on [Ollama](https://ollama.com/library/smollm2) and [PocketPal-AI](https://github.com/a-ghorbani/pocketpal-ai).

### Other use cases

#### Text extraction
- [GitHub Issue Generator running locally w/ SmolLM2 & WebGPU](https://huggingface.co/spaces/reach-vb/github-issue-generator-webgpu) showcases how to use SmolLM2 1.7B for structured text extraction, converting complaints into structured GitHub issues. The demo leverages MLC WebLLM and [XGrammar](https://github.com/mlc-ai/xgrammar) for structured generation. You can define a JSON schema, input free text and get structured data in your browser.

#### Function calling
- [Bunny B1](https://github.com/dottxt-ai/demos/tree/main/its-a-smol-world) maps natural language requests to local application calls using function calling and structured generation by [outlines](https://github.com/dottxt-ai/outlines).
- You can also leverage function calling (without structured generation) by following the instructions in the [model card](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct#function-calling) or using SmolAgent from [smol-tools](../smol_tools/).

#### Rewriting and Summarization
- Check the rewriting and summarization tools in [smol-tools](../smol_tools/), which use llama.cpp.
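As a concrete starting point, here is a hedged llama-cpp-python example. The `repo_id` and `filename` below are assumptions; substitute any GGUF conversion from the SmolLM2 collection linked above:

```python
# Chat with a SmolLM2 GGUF build via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF",  # assumed repo name
    filename="*q4_k_m.gguf",                             # pick the quantization you prefer
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about small models."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```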
{ "source": "huggingface/smollm", "title": "tools/smollm_local_inference/README.md", "url": "https://github.com/huggingface/smollm/blob/main/tools/smollm_local_inference/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 2465 }
# SmolVLM local inference

## Usage

SmolVLM can be used for inference on multimodal (image + text) tasks where the input comprises text queries along with one or more images. Text and images can be interleaved arbitrarily, enabling tasks like image captioning, visual question answering, and storytelling based on visual content. The model does not support image generation.

To fine-tune SmolVLM on a specific task, you can follow this [fine-tuning tutorial](../../vision/finetuning/Smol_VLM_FT.ipynb).

## Inference with transformers

You can use transformers to load, run inference with, and fine-tune SmolVLM.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Load images
image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
image2 = load_image("https://huggingface.co/spaces/merve/chameleon-7b/resolve/main/bee.jpg")

# Initialize processor and model
processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Instruct")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceTB/SmolVLM-Instruct",
    torch_dtype=torch.bfloat16,
    _attn_implementation="flash_attention_2" if DEVICE == "cuda" else "eager",
).to(DEVICE)

# Create input messages
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "Can you describe the two images?"}
        ]
    },
]

# Prepare inputs
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
inputs = inputs.to(DEVICE)

# Generate outputs
generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(
    generated_ids,
    skip_special_tokens=True,
)

print(generated_texts[0])
"""
Assistant: The first image shows a green statue of the Statue of Liberty standing on a stone pedestal in front of a body of water.
The statue is holding a torch in its right hand and a tablet in its left hand. The water is calm and there are no boats or other objects visible.
The sky is clear and there are no clouds. The second image shows a bee on a pink flower.
The bee is black and yellow and is collecting pollen from the flower. The flower is surrounded by green leaves.
"""
```

## Inference with mlx-vlm

You can also get fast generations for SmolVLM locally with mlx-vlm:

```bash
pip install -U mlx-vlm
python -m mlx_vlm.chat_ui --model mlx-community/SmolVLM-Instruct-8bit
```

## Video inference

Given SmolVLM's long context and the possibility of tweaking the internal frame resizing of the model, we explored its suitability as an accessible option for basic video analysis tasks, particularly when computational resources are limited.

In our evaluation of SmolVLM's video understanding capabilities, we implemented a straightforward video processing pipeline in `SmolVLM_video_inference.py`, extracting up to 50 evenly sampled frames from each video while avoiding internal frame resizing. This simple approach yielded surprisingly competitive results on the CinePile benchmark, with a score of 27.14%, a performance that positions the model between InternVL2 (2B) and Video-LLaVA (7B).
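The even frame sampling mentioned above is straightforward to reproduce. Below is a minimal sketch with OpenCV; the actual pipeline lives in `SmolVLM_video_inference.py`, and this is only an approximation of it:

```python
# Evenly sample up to 50 RGB frames from a video with OpenCV.
import cv2
import numpy as np

def sample_frames(video_path: str, max_frames: int = 50):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, total - 1, min(max_frames, total)).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # BGR -> RGB for PIL/processors
    cap.release()
    return frames
```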
{ "source": "huggingface/smollm", "title": "tools/smolvlm_local_inference/README.md", "url": "https://github.com/huggingface/smollm/blob/main/tools/smolvlm_local_inference/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 3412 }
# Data

The scripts inside `datasets_processing_scripts` are the ones we used to create all the datasets used for training SmolVLM.
{ "source": "huggingface/smollm", "title": "vision/data/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/data/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 131 }
# Evaluation

We implemented the evaluations for SmolVLM in [VLMEvalKit](https://github.com/open-compass/VLMEvalKit). They can be run by following the instructions in their repository.

We also have our own internal evaluation scripts, which can be found in the `experiments/evaluation` folder. The code supporting them is in the `m4` folder.
{ "source": "huggingface/smollm", "title": "vision/evaluation/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/evaluation/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 347 }
# Finetuning

Here you can find a notebook to finetune SmolVLM on Visual Question Answering on a consumer GPU with QLoRA.
{ "source": "huggingface/smollm", "title": "vision/finetuning/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/finetuning/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 122 }
# Decontamination

TODO: add code. Placeholder here: https://github.com/huggingface/cosmopedia/tree/main/decontamination
{ "source": "huggingface/smollm", "title": "text/data/decontamination/README.md", "url": "https://github.com/huggingface/smollm/blob/main/text/data/decontamination/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 120 }
# 📚 FineWeb-Edu pipeline

<center>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/wwRnEQydH9qdRtFofIE-A.png" alt="FineWeb-Edu: The finest collection of educational content the web has to offer">
</center>

Here you can find the pipeline for training [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu/)'s [classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) and running the annotation on FineWeb.

### 1. Finetune a model for educational value regression

* edit `train_edu_bert.slurm`
```bash
--base_model_name="Snowflake/snowflake-arctic-embed-m" \ # BERT-like base model
--dataset_name="HuggingFaceFW/fineweb-edu-llama3-annotations" \ # Llama3-annotated educational value dataset
--target_column="score"
```

* run the training script on a SLURM cluster:
```bash
sbatch train_edu_bert.slurm
```

### 2. Annotate a dataset with the educational scores predicted by the model

```bash
sbatch run_edu_bert.slurm
```
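For a quick spot check before launching the SLURM annotation job, the released classifier can also be scored locally. A minimal sketch, assuming the usage pattern from the classifier's model card:

```python
# Score a single text with the educational-value regression classifier.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "HuggingFaceFW/fineweb-edu-classifier"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Photosynthesis is the process by which plants...", return_tensors="pt", truncation=True)
score = model(**inputs).logits.squeeze(-1).item()  # regression head -> float score (roughly 0-5)
print(f"educational score: {score:.2f}")
```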
{ "source": "huggingface/smollm", "title": "text/data/finemath/README.md", "url": "https://github.com/huggingface/smollm/blob/main/text/data/finemath/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 1008 }
# 📐 FineMath pipeline

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/0GAdY8wZx6bGtUzqX4Lvi.png)

Here you can find information on the curation of [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) and the code for training its math reasoning [classifier](https://huggingface.co/HuggingFaceTB/finemath-classifier).

## Dataset curation

Recent language models like DeepSeekMath and MathStral have demonstrated strong mathematical capabilities, trained on specialized datasets that aren't publicly available. We developed a pipeline to identify and extract high-quality mathematical content from CommonCrawl, with several iterations of refinement to improve quality.

### Phase 1: Initial content extraction and classification

We began by re-extracting pages from CommonCrawl WARCs using URLs from the FineWeb dataset, collecting both the latest and largest versions of each page to capture the evolution of pages across the years. Unlike FineWeb, which uses Trafilatura, we employed Resiliparse for text extraction, as it better preserves forum discussions and QA answers that often contain crucial reasoning steps and solutions.

For initial quality assessment, we used [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) to generate annotations on a 3-point scale:
1. Contains general mathematical content
2. Shows logical reasoning in a mathematical context
3. Contains clear step-by-step solutions at an appropriate level

A `multilingual-e5-small`-based classifier finetuned on these annotations was used to score the initial corpus. However, this first version performed below the OpenWebMath baseline, leading to several important refinements.

### Phase 2: Recalling more candidate pages

Analysis revealed that FineWeb's C4 filter removes pages containing '{' characters, inadvertently filtering out content with LaTeX notation. To address this and expand coverage, we:
1. Identified promising website domains by selecting those where at least 10% of pages received a classifier score ≥ 2
2. Added URLs from the OpenWebMath and InfiMM-WebMath datasets
3. Recovered URLs of pages filtered by FineWeb's '{' rule from its rejection logs
4. Re-extracted all content from scratch using the [OpenWebMath pipeline](https://github.com/keirp/OpenWebMath), which properly handles mathematical notation across various HTML markup formats and standardizes it to LaTeX

### Phase 3: Refined quality assessment

The expanded corpus underwent a more fine-grained quality evaluation. Once again, we used Llama-3.1-70B-Instruct, this time to score a sample of newly extracted pages on a 5-point scale (full prompt available [here](assets/prompt.txt)).

We finetuned a new [classifier](https://huggingface.co/HuggingFaceTB/finemath-classifier) on these annotations and scored the entire corpus. After keeping only pages with a score of 3 or higher, and deduplicating the samples using simple single-band MinHash-LSH, we obtained FineMath-3+ with 34B tokens. The same classifier was applied to InfiMM-WebMath's text content, which focuses more on reasoning than on advanced mathematics. Both datasets were additionally filtered using FineWeb's language classification pipeline to remove non-English content.

### Decontamination

Following Qwen2.5-Math's approach, we removed samples with 13-gram overlaps against test sets from GSM8k, MATH, MMLU and ARC (an illustrative sketch is given at the end of this README). Decontamination logs are available at [HuggingFaceTB/finemath_contamination_report](https://huggingface.co/datasets/HuggingFaceTB/finemath_contamination_report).

## Training the classifier

Todo: share step 2 annotations and finetuning code.
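For illustration, the 13-gram overlap check boils down to the following sketch. This is not the production code; `test_set_texts` is a stand-in for the benchmark test splits:

```python
# Flag training samples sharing any 13-gram with a benchmark test set.
def ngrams(tokens, n=13):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

test_ngrams = set()
for doc in test_set_texts:  # assumed: iterable of GSM8k/MATH/MMLU/ARC test strings
    test_ngrams |= ngrams(doc.split())

def is_contaminated(sample_text: str) -> bool:
    return bool(ngrams(sample_text.split()) & test_ngrams)
```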
{ "source": "huggingface/smollm", "title": "text/data/fineweb-edu/README.md", "url": "https://github.com/huggingface/smollm/blob/main/text/data/fineweb-edu/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 3669 }
# SmolTalk: distilabel pipelines

We released [SmolTalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk), the SFT dataset used for building the SmolLM2 instruct models. It was created with [distilabel](https://github.com/argilla-io/distilabel), and you can find the synthetic data pipelines here.

<div align="center">
    <img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/JLTEbnsBQ_qY032mxFzgC.png" width="800"/>
    <p><em>Comparison of models finetuned on SmolTalk and Orca AgentInstruct 1M. For more details, refer to the <a href="https://huggingface.co/datasets/HuggingFaceTB/smoltalk" target="_blank">dataset card</a>.</em></p>
</div>

> [!NOTE]
> This section is still a WIP. We will upload the rest of the pipelines soon. Thanks for your patience!
{ "source": "huggingface/smollm", "title": "text/data/smoltalk/README.md", "url": "https://github.com/huggingface/smollm/blob/main/text/data/smoltalk/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 788 }
# Continual Pretraining

We use the [nanotron](https://github.com/huggingface/nanotron/) library for continual pretraining.

## Setup

Please refer to [nanotron](https://github.com/huggingface/nanotron/) for detailed instructions on setting up your training environment and launching jobs, and to [smollm/pre-training](https://github.com/huggingface/smollm/tree/main/pre-training) for an example with the pre-training scripts.

## Usage

The nanotron checkpoints for SmolLM2 models are available at: https://huggingface.co/HuggingFaceTB/SmolLM2-nanotron-ckpt.

## Example: Finemath

For finemath, we did continual pretraining of llama3-3B with different data mixtures. Here we detail the steps to do the same.

### Nanotron

For this example, you need to switch to this [PR](https://github.com/huggingface/nanotron/pull/255):

```
gh pr checkout 255
```

### Data

The first step is to tokenize the datasets. To do this, we use the [datatrove](https://github.com/huggingface/datatrove) library. We tokenized the following datasets with the llama3 tokenizer:
- [HuggingFaceTB/smollm-corpus/fineweb-edu-dedup](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus/tree/main/fineweb-edu-dedup)
- [HuggingFaceTB/finemath/finemath-3plus](https://huggingface.co/datasets/HuggingFaceTB/finemath/tree/main/finemath-3plus)
- [HuggingFaceTB/finemath/finemath-4plus](https://huggingface.co/datasets/HuggingFaceTB/finemath/tree/main/finemath-4plus)
- [HuggingFaceTB/finemath/infiwebmath-3plus](https://huggingface.co/datasets/HuggingFaceTB/finemath/tree/main/infiwebmath-3plus)
- [HuggingFaceTB/finemath/infiwebmath-4plus](https://huggingface.co/datasets/HuggingFaceTB/finemath/tree/main/infiwebmath-4plus)
- [Infi-MM/InfiMM-WebMath-40B](https://huggingface.co/datasets/Infi-MM/InfiMM-WebMath-40B)
- [open-web-math/open-web-math](https://huggingface.co/datasets/open-web-math/open-web-math)

You can find an example of how to tokenize the datasets in the `finemath/finemath-tokenize.py` script. If you encounter issues during tokenization, you can apply the following patches:
- For `Infi-MM/InfiMM-WebMath-40B`: `finemath/tokenization_InfiMM-WebMath-4OB.patch`
- For the others: `finemath/tokenization_finemath.patch`

To apply a patch, install datatrove from source and run `git apply <path_to_patch>.patch` in the datatrove directory.

### Training

Once the datasets are tokenized, you can launch the training with a script similar to the one in [smollm/pre-training](https://github.com/huggingface/smollm/tree/main/pre-training). When resuming a training from a checkpoint, you can choose whether to restore the learning rate scheduler and optimizer state by setting the following parameters in the yaml file:
- `load_lr_scheduler: false`
- `load_optimizer: false`

### Evaluation

For evaluation, you can follow the instructions in [smollm/evaluation](https://github.com/huggingface/smollm/tree/main/evaluation#finemath-dataset-ablations).
{ "source": "huggingface/smollm", "title": "text/pretraining/continual-pretraining/README.md", "url": "https://github.com/huggingface/smollm/blob/main/text/pretraining/continual-pretraining/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 2930 }
TODO: tune the values of `batch_size` and `mini_batch_size`

# Clip

## Zero-shot

```bash
conda activate m4-eval
commit_hash=`git rev-parse HEAD`
python m4/evaluation/launch.py \
    --commit_hash $commit_hash \
    --batch_size 16 \
    --mini_batch_size 1024 \
    --do_tasks Cifar10ClipZeroShoterEnsembleAcc
```

## Linear probe

```bash
conda activate m4-eval
commit_hash=`git rev-parse HEAD`
python m4/evaluation/launch.py \
    --commit_hash $commit_hash \
    --batch_size 16 \
    --mini_batch_size 1024 \
    --do_tasks Cifar10ClipLinearProberAcc
```

# VGPT2

## Zero-shot

```bash
conda activate m4-eval
commit_hash=`git rev-parse HEAD`
python m4/evaluation/launch.py \
    --commit_hash $commit_hash \
    --batch_size 64 \
    --mini_batch_size 1024 \
    --tokenizer_name $ALL_CCFRSCRATCH/experiments/local_experiment_dir/tr_04/opt_step-3766/tokenizer/ \
    --model_name $ALL_CCFRSCRATCH/experiments/local_experiment_dir/tr_04/opt_step-3766/unwrapped_model/ \
    --do_tasks Cifar10Vgpt2ZeroShoterAcc
```

## Few-shot

```bash
conda activate m4-eval
commit_hash=`git rev-parse HEAD`
python m4/evaluation/launch.py \
    --commit_hash $commit_hash \
    --batch_size 64 \
    --mini_batch_size 1024 \
    --tokenizer_name $ALL_CCFRSCRATCH/experiments/local_experiment_dir/tr_04/opt_step-3766/tokenizer/ \
    --model_name $ALL_CCFRSCRATCH/experiments/local_experiment_dir/tr_04/opt_step-3766/unwrapped_model/ \
    --do_tasks Cifar10Vgpt2FewShoterAccWithKLAndEntropy \
    --num_shots 5 \
    --shot_selection_mode rices
```

# Multi-GPU Evaluation

To run multi-GPU evaluation, simply launch the above command using the `accelerate` CLI. Example below:

```bash
accelerate launch --num_processes 2 --multi_gpu ./m4/evaluation/launch.py --batch_size 128 --mini_batch_size 4 --model_name /some/unwrapped_model --tokenizer_name /some/tokenizer --do_tasks Cifar10SampleVgpt2ZeroShoterAccWithKLAndEntropy --save_to_jsonl some.jsonl
```
{ "source": "huggingface/smollm", "title": "vision/m4/evaluation/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/evaluation/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 1956 }
SLURM-driven cronjobs to manage various tasks around checkpoints asynchronously:

1. a slurm cronjob to convert new checkpoints to hf
2. a slurm cronjob to launch multiple evals when it finds a new hf checkpoint
3. a slurm cronjob to launch s3 sync to clear disk space (checkpoints and other files)
4. a slurm cronjob to delete checkpoints that have been eval'ed and synced already (to clear disk space)

All are made to work with potentially overlapping slurm jobs and time-based recovery from aborted jobs - this requires tuning the estimated run-time of each job for fastest recovery.

The jobs are self-replicating - they re-schedule themselves before doing the actual work. Each job defines its repetition frequency inside its slurm file. A good frequency is about the same as the frequency of saving checkpoints.

To launch them all:

```
sbatch experiments/pretraining/vloom/slurm_scripts_templates/hfc_with_launcher/cleanup-checkpoints.slurm
sbatch experiments/pretraining/vloom/slurm_scripts_templates/hfc_with_launcher/convert-checkpoints.slurm
sbatch experiments/pretraining/vloom/slurm_scripts_templates/hfc_with_launcher/s3-upload-checkpoints.slurm
sbatch experiments/pretraining/vloom/slurm_scripts_templates/hfc_with_launcher/schedule-evals.slurm
```

To run these manually instead, do:

```
m4/scripts/cleanup-checkpoints.py /fsx/m4/experiments/local_experiment_dir/tr-XXX/
m4/scripts/convert-checkpoints.py /fsx/m4/experiments/local_experiment_dir/tr-XXX/
m4/scripts/s3-upload-checkpoints.py /fsx/m4/experiments/local_experiment_dir/tr-XXX/
m4/scripts/schedule-evals.py /fsx/m4/experiments/local_experiment_dir/tr-XXX/
```

The jobs can recover from aborted jobs. They rely on pre-configured heuristics for the longest time each job could run. If a new job detects that the previous job hasn't finished within that pre-configured time, it assumes it failed and starts it again. Since we have jobs running on different nodes, we can't rely on PIDs; instead we use special files on the shared file system and check their staleness via `mtime` to tell how long ago a job started.

If you don't want to wait for the safety period to elapse and want to force re-processing, almost all scripts come with a `-f` option, which ignores the heuristics that make it safe to have overlapping jobs and forces a re-run. Only `cleanup-checkpoints.slurm` doesn't have it, since we should never force deletion without a solid heuristics check.
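The mtime heuristic itself is simple. A sketch of the staleness check (names are illustrative; the real logic lives in `m4/scripts/*.py`):

```python
# Decide whether the previous (possibly aborted) job should be considered dead.
import time
from pathlib import Path

def previous_job_is_stale(marker: Path, max_runtime_s: float) -> bool:
    """True if the marker file was last touched more than max_runtime_s ago."""
    if not marker.exists():
        return True  # no job ever ran (or it cleaned up) -> safe to start
    return time.time() - marker.stat().st_mtime > max_runtime_s
```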
{ "source": "huggingface/smollm", "title": "vision/m4/scripts/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/scripts/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 2486 }
# Data handling logic - documentation

For training, data are stored in sharded web tars (i.e. webdataset format), and each of these shards lives on s3. For training, we download and decode the shards into in-memory samples, pack these samples to form training examples (i.e. sequences), and yield these sequences of data through the data loader to the model training loop. Each of these three pieces of logic is handled in a different file:

+ `dataset_utils.py` -> *decode the shards into in-memory samples*
+ `packing.py` -> *pack these samples to form training examples (i.e. sequences)*
+ `dataset.py` -> *yield these sequences of data through the data loader to the model training loop*

In this md file, we give useful details about each of these components, in addition to the docstrings in each of these files.

## Decode the shards into in-memory samples

Web tar shards are downloaded with the help of the script `m4/experiments/pretraining/vloom/common/webdataset_get_file.sh`. It handles downloading the current shard to a temp folder under a temp name, yielding it as a readable data stream, and deleting the temp shard once we are done with it.

`dataset_utils.py` mostly handles reading and decoding this stream into samples. The highest-level entry point is the function `get_webdataset`. It defines the series of steps (splitting across nodes and data workers, decoding into samples, and shuffling). The `shuffle*` arguments are a series of arguments that control the pseudo-shuffling of the data.

The main change between different types of dataset is the decoding part. For image/text pairs, it requires loading an image and a text field; for web documents, it requires loading an arbitrary (but ordered) sequence of images and texts, etc. Each decoding function defines the necessary utilities to decode the web tar shards that have been previously saved and uploaded to s3, and to sanity check them.

The main drawback of webdataset that we have never solved is determinism: every time we resume a training, we have no guarantee that the samples yielded have not already been seen. Essentially, we don't have control over the data order.

Note that all the functions defined in `dataset_utils.py` are easily debuggable in the vscode debugger.

## Pack these samples to form training examples (i.e. sequences)

Depending on the type of data, a specific sample-packing method is used:

+ `split_pack_and_pad_iqa_finetuning` -> question/answer/image triplets, specific to vqa fine-tuning
+ `split_pack_and_pad_ocr` -> ocr documents that require specific pdf decoding
+ `split_pack_and_pad_pairs` -> image/caption pairs
+ `split_pack_and_pad_sft` -> chatbot-formatted SFT
+ `split_pack_and_pad_webdocs` -> multimodal documents

### PMD (or any other image/text pairs dataset)

This is the `split_pack_and_pad_pairs` method. PMD contains ~70M image-text pairs originally introduced in the FLAVA paper as a combination of publicly available datasets. To use PMD we follow these steps:

- Each image is represented as an `<image>` token and added to the text. We add `<fake_token_around_image>` before and after the sequence of `<image>` tokens.
- After each image-text pair, an end-of-document token is added.
- We continue adding the text containing `<image>...<image>` + the caption until we cross the `max_seq_len` specified by the parameters. If we cross it, we add the current pair to the next sample and pad the current sample up to `max_seq_len`. This ensures that there is no image with incomplete text.
### CM4 (or any other multimodal documents dataset)

This is the `split_pack_and_pad_webdocs` method. In Idefics2, a sequence of two images would be represented by `<fake_token_around_image><image><image>...<image><fake_token_around_image><image><image>...<fake_token_around_image>`.

**Sampling sub-sequences** (adapted from the code comments)

Following Flamingo (i.e. Idefics1), we sample a random sub-sequence of a specific length from a document and then take a maximum of `max_num_images` images that belong to that sub-sequence. The start index of the sub-sequence to sample is computed by skewing the sampling towards sub-sequences that contain images. The main idea is to give a bonus to tokens that closely precede an image token, so that these tokens have a higher chance of being sampled. Bonuses are computed for each image, which means a given token can receive bonuses from multiple images if it closely precedes multiple images. We sum all the bonuses and L1-normalize along the seq_len axis to get a probability distribution. Each token starts with a regular bonus of 1, which corresponds to the uniform distribution over the sequence when no bonuses are added. A toy sketch of this computation is given at the end of this section.

*For the sake of simplicity, we describe the algorithm in the case where images take only ONE visual token (N in practice), in addition to the `<fake_token_around_image>` before and after.*

The remaining question is which preceding tokens we distribute bonuses to. We first observe that for the sampled sub-sequence to be considered valid (i.e. to contain an image), the start index can only be in [image_idx - max_seq_len + 1, image_idx]. For the sake of the explanation, let's split the [image_idx - max_seq_len + 1, image_idx] interval into 3 parts: left, middle and right (in increasing order).

If we give bonuses to the tokens just before the image (right part), then we are favoring p_next=0, because only the tokens after the image have an image to attend to. In practice, images will tend to be at the beginning of the sampled sub-sequence.

If we give bonuses very far before the image (left part), then we are favoring p_next=1, because only the tokens before the image have an image to attend to. In practice, images will tend to be at the end of the sampled sub-sequence.

To avoid favoring either p_next=0 or p_next=1, we can give bonuses to the tokens in the middle part. In practice, images will then tend to be in the middle of the sampled sub-sequence.

Ultimately, we don't want to skew the distribution fed to the model in any of these ways (i.e. whether images are at the beginning, middle or end of the sampled sub-sequence), and want all these cases represented equally in the data. So the simplest choice is to distribute a bonus to all of the max_seq_len tokens preceding the image.

### SFT datasets

This is the `split_pack_and_pad_sft` method. It is relatively similar to `split_pack_and_pad_pairs`; the main addition is handling samples with no images.

### Image/question/answer triplets datasets

This is the `split_pack_and_pad_iqa_finetuning` method. It is relatively similar to `split_pack_and_pad_pairs`; the main addition is handling samples with two separate question/answer fields, which is relevant in the context of fine-tuning (in particular vqa fine-tuning).

### OCR datasets

This is the `split_pack_and_pad_ocr` method. It is relatively similar to `split_pack_and_pad_pairs`; the main addition is handling the specific file decoding for ocr datasets.
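Below is the toy sketch of the skewed start-index sampling referenced above, assuming one visual token per image as in the simplified description (the production code handles N visual tokens plus the surrounding fake tokens):

```python
# Build the skewed start-index distribution for sub-sequence sampling.
import numpy as np

def start_index_distribution(image_positions, doc_len, max_seq_len):
    weights = np.ones(doc_len)  # base weight of 1 -> uniform when there are no images
    for img_idx in image_positions:
        lo = max(0, img_idx - max_seq_len + 1)
        weights[lo:img_idx + 1] += 1.0  # bonus for all max_seq_len tokens preceding the image
    return weights / weights.sum()  # L1 normalization -> probability distribution

probs = start_index_distribution(image_positions=[40, 200], doc_len=512, max_seq_len=64)
start = int(np.random.choice(len(probs), p=probs))
```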
## Attention masks

In Idefics2, the attention masks are fully auto-regressive, meaning that tokens are attended to from left to right in an auto-regressive fashion. We tried having full attention (vs. left-to-right attention) on the image sequences, with no significant performance improvement (at the cost of a more complicated attention mask). This attention mask is referred to as `attention_mask`, not to be confused with the `image_attention_mask`, which handles padding for the navit-style, resolution- and aspect-ratio-preserving vision encoder.

## Yield these sequences of data through the data loader to the model training loop

Except for a few legacy tests, we do not pre-process the datasets but do that on the fly (notably packing). For processing on the fly, we need an iterable dataset, as our packing strategies are generally applied at the batch level. Alternatively, we could do it in the collate function of the dataloader, as it usually gets a batch, but then we risk again facing [this PyTorch issue](https://github.com/pytorch/pytorch/issues/13246), as the dataset will return some text strings by default.

For the iterable dataset, some of the following things are tricky. We also describe our current solution for each situation.

- We want each process's dataloader to load a **different subset of the data**; otherwise there can be overlap and the processes will tend to load the same data. This can be achieved in two ways: (i) a different ordering of the dataset for each process, or (ii) a different subset of the data for each process. [DistributedSampler](https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler) from PyTorch uses (ii), so we also target that.
- **Shuffling** is tricky. Ideally, we need it to be reproducible but also different for each process and each epoch. We rely on the rank as well as the current epoch to make it reproducible per rank and changing with each epoch. Once we have the indices we want to sample, we can shuffle them in a deterministic way. *Having reproducibility and determinism was possible in our FIRST implementation, which relied on HF datasets, but after we switched to webdataset it was no longer possible. We switched from HF datasets to webdataset because reading data was too much of a bottleneck and was significantly impacting throughput. If time permits, it would be useful to revisit that choice now that HF datasets supports webdataset natively.*
- For a uniform distribution, we also want each worker inside the dataloader to load a different subset of the data. We pass the local `worker_id` to the dataset to make this reproducible. For uniform fetching, we can just take indices at a gap of `num_workers` from what was returned in the previous step.

To summarize, the indices are first divided based on the rank of the process and then further split based on the current dataloader's worker id (that's handled by `wds.SimpleShardList` and `wds.split_by_node` in `dataset_utils.py`); a condensed sketch is given at the end of this document. Once we have the list of indices we want to sample, we can iterate over them, appending to a batch until we reach the batch size we want, while carrying any overflow over to the next batch. This ensures there is no extensive wastage. We also drop the last uneven batch to prevent any barriers with DDP. Note that in this case the batch size passed to the mapping (packing/padding) function can be different from the actual batch size yielded by the function.
This allows us to better utilize the mapping functions: more data in padding and packing leads to less wastage and allows bigger operations to be batched when possible. For brevity, an alternative to the previous approach is to take a full-batch-length sample from the indices that we require, pack it, but then only yield a batch of length batch size. This leads to some wastage.

Once we implemented the above functionality, accelerate started to be a bottleneck in how it handled iterable datasets. Basically, in accelerate, if you choose not to let accelerate dispatch batches for you, it [wraps the dataset](https://github.com/huggingface/accelerate/blob/469b61e0bfdb2dc3baa4df52e4e82fb6a8e48cfd/src/accelerate/data_loader.py#L216) in `IterableDatasetShard`, which wastes a lot of batches but probably won't cause any uneven batches. If you choose to let it [dispatch batches for you](https://github.com/huggingface/accelerate/blob/469b61e0bfdb2dc3baa4df52e4e82fb6a8e48cfd/src/accelerate/data_loader.py#L381), then only the dataloader in the main process is used, which is also wasteful (maybe this can be circumvented with a higher number of workers, but there will surely be many zombies).

In `IterableDatasetShard`, (i) the same dataloader is loaded on each of the processes, (ii) batches are collected until reaching the global batch size, (iii) the slice of this global batch corresponding to the index of the current process is taken, and (iv) the rest of the samples are dumped. This is wasteful because all of that data was processed unnecessarily just to pick the right slice from the global batch and dump the rest.

Currently, since we handle sharding ourselves, we want neither of these, but we end up with uneven batches because different documents can lead to different numbers of batches; this, however, doesn't cause any wastage. One way to circumvent it is to do a gather and check whether any worker has been exhausted, but this would lose a minor number of batches. We successfully implemented this strategy in our current system as a custom dataloader, and it is working well for us.

Updates: the above logic is implemented across `DataLoaderForIterableWrapperDataset` (the highest-level dataloader that the training loop iterates over), `CustomChainDataset` (handling the mixing of multiple datasets with the mixture proportions defined in the config) and `IterableWrapperDataset` (an iterable over ONE dataset type).
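A condensed sketch of the two-level sharding summarized above (split by process rank, then by dataloader worker); in the real pipeline this is what `wds.SimpleShardList` and the `wds.split_by_node`/`split_by_worker` steps take care of:

```python
# Two-level sharding: each (rank, worker) pair sees a disjoint subset of shards.
# Assumes torch.distributed has been initialized.
import torch.distributed as dist
from torch.utils.data import get_worker_info

def shards_for_this_worker(all_shards):
    rank, world = dist.get_rank(), dist.get_world_size()
    per_rank = all_shards[rank::world]          # split across processes
    info = get_worker_info()
    if info is None:                            # num_workers == 0
        return per_rank
    return per_rank[info.id::info.num_workers]  # split across dataloader workers
```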
{ "source": "huggingface/smollm", "title": "vision/m4/training/DATA_DOCUMENTATION.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/training/DATA_DOCUMENTATION.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 12968 }
# MagPie Ultra v1.0

This [`distilabel`](https://github.com/argilla-io/distilabel) pipeline was used to generate the [magpie-ultra-v1.0](https://huggingface.co/datasets/argilla/magpie-ultra-v1.0) dataset. The dataset follows the [MagPie](https://magpie-align.github.io) pipeline recipe to generate a multi-turn conversation dataset using [meta-llama/Llama-3.1-405B-Instruct-FP8](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct-FP8).

## Setup

You will need to install `distilabel` with a few extra dependencies to be able to execute the pipeline:

```bash
pip install distilabel[ray,vllm,sentence-transformers,faiss-cpu,hf-transformers]
```
{ "source": "huggingface/smollm", "title": "text/data/smoltalk/magpie_ultra_v1/README.md", "url": "https://github.com/huggingface/smollm/blob/main/text/data/smoltalk/magpie_ultra_v1/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 642 }
This folder traces the exploration of additional cleaning that could be applied to the CM4 dataset. As a result of this exploration phase, 2 potential improvements were identified:

1. Remove HTML nodes (and their descendants) whose tag class attribute value contains either "footer" or "site-info". From the exploration, these would correspond to "web" parts of the page rather than content.
2. Split the HTML at the level of the "continue reading" occurrence, which is often characterized by the class attribute value of the tag containing "more-link".

**Before fully implementing it**, we tested the suitability of 2. by creating a filtered version of CM4 that excluded all documents that would have had a "continue reading" occurrence (`04_get_banned_url.slurm` and `05_filter_cm4.slurm`).

The explore folder contains Streamlit spaces that have been used to find new possible cleaning rules; a sketch of rule 1 is shown below.
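Rule 1 is easy to prototype. An illustrative sketch with BeautifulSoup (the actual exploration code lives in the explore folder):

```python
# Remove HTML nodes (and their descendants) whose class contains "footer" or "site-info".
from bs4 import BeautifulSoup

def strip_footer_nodes(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    matches = soup.find_all(class_=lambda c: c and ("footer" in c or "site-info" in c))
    for node in matches:
        node.decompose()  # drops the node together with all its descendants
    return str(soup)
```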
{ "source": "huggingface/smollm", "title": "vision/data/datasets_processing_scripts/clean_m4_prelimenary_experiments/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/data/datasets_processing_scripts/clean_m4_prelimenary_experiments/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 904 }
These scripts have been used to create the validation support, validation query, test support and test query splits from certain evaluation datasets, following the instructions given by DeepMind in their Flamingo paper:

> Dataset splits for the DEV benchmarks. Concretely, estimating few-shot learning performance of a model consists of adapting it on a set of support samples and evaluating it on a set of query samples. As a result, any evaluation set should be composed of two disjoint subsets containing respectively the support and the query samples. For the DEV benchmarks that are used both to validate design decisions and hyperparameters, as well as to report final performance, we therefore use four subsets:
> - validation support: contains support samples for validation;
> - validation query: contains query samples for validation;
> - test support: contains support samples for final performance estimation;
> - test query: contains query samples for final performance estimation.
>
> In practice, for the test query subset, we use the subset that prior works report results on, for apples-to-apples comparison. While the validation set would be a natural choice for the validation query subset, we note that this is not possible for all benchmarks, since some benchmarks do not have an official validation set (e.g. OKVQA) and for others, the validation is commonly used to report final performance in place of the test set (e.g. ImageNet or COCO). For simplicity, we use a subset of the original training set as the validation query subset. Finally, we also use additional disjoint subsets of the training set as respectively the validation support subset and the test support subset. We now describe in more detail how we form the latter three subsets.
>
> For captioning tasks, open-ended evaluation is efficient so we evaluate on a large number of samples. Specifically, for COCO, we use the same number of samples as used in the Karpathy splits for evaluation sets (5000). For VATEX, because the training set is of limited size, we only evaluate over 1024 samples, reserving the rest for support sets. For question-answering tasks, we evaluate over 1024 samples; chosen to make both open- and close-ended evaluation reasonably fast. For image classification tasks, we evaluate over 10 images per class: 10,000 samples for ImageNet, and 7000 samples for Kinetics700. As for the support sets, for both validation and final performance estimation, we use 2048 samples across all tasks, except for classification tasks where we scale this to 32 samples per class, to better estimate expected performance for each class.
{ "source": "huggingface/smollm", "title": "vision/data/datasets_processing_scripts/create_evaluation_datasets/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/data/datasets_processing_scripts/create_evaluation_datasets/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 2640 }
This folder contains all the slurm, bash and python scripts used to build enwiki-v1 and enwiki-v2. The numbering of the files indicates the order in which they were run. Beware: these scripts have sometimes been used on different machines to process a portion of the shards, and the changes needed to parallelize the work are not contained in the scripts in the `slurm_and_bash_scripts` folder.
{ "source": "huggingface/smollm", "title": "vision/data/datasets_processing_scripts/enwiki/REAME.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/data/datasets_processing_scripts/enwiki/REAME.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 392 }
# Run evaluation

- create the log folder before running the slurm script

<!--
# Push the metrics to Wandb

Currently, this step is done manually. The purpose of this section is to describe the process used. Here are the steps to follow:

1. Run a slurm script that will evaluate all the checkpoints saved for a training on a single task (e.g. see the slurm script [`tr_18`](experiments/evaluation/vloom/tr_18/tr_18.slurm)). Be careful to put `%x_%A_%a` in the title of the log files,
2. Note the common job id `JOB_ARRAY_COMMON_ID` for the whole job array, which corresponds to `%A` in the log file name,
3. Go to the folder containing the produced log files and run `grep "'Evaluate the checkpoint: \|<TASK_NAME>Vgpt2ZeroShoter<METRIC> <JOB_NAME>_<JOB_ARRAY_COMMON_ID>*` - where `<TASK_NAME>`, `<METRIC>`, `<JOB_NAME>`, and `<JOB_ARRAY_COMMON_ID>` should be replaced accordingly - then copy the result
4. Use the [push_results_to_wandb.py](/home/lucile_huggingface_co/repos/m4/experiments/evaluation/vloom/utils/push_results_to_wandb.py) script to push the results to Wandb by changing the values of the variables `run_name` and `content`.
-->

# Evaluation to be submitted to an outside server

Many test-set evaluations do not have public ground truth and therefore require sending a results file to an external server specific to each task. These submissions are very often limited in number (per day, per month and per year) and require the creation of an account and a team.

Where possible, it's better to start by submitting the file obtained on the `server_check` split, in order to check that the format of the result file is correct and that the figure calculated with our tools corresponds to the figure calculated by the server.

To retrieve the file to be submitted, for the moment you need to perform a few manual steps:

1. extract the results subpart from the jsonl results file, and
2. post-process the results file.
A template to perform those steps is provided below:

```python
from pathlib import Path
import json

result_file = Path("/fsx/m4/experiments/local_experiment_dir/evals/results/tr_190_01_64n_check_server_evaluations.jsonl")

with open(result_file, 'r', encoding="ISO-8859-1") as file:
    json_data = file.read()

json_entities = json_data.strip().split('\n')
parsed_json_objects = []
for entity in json_entities:
    parsed_json = json.loads(entity)
    parsed_json_objects.append(parsed_json)

parsed_json_object = ...  # Code to select the result item we want to extract

task_name = parsed_json_object["task"]
num_shots = parsed_json_object["in_context_params"]["num_shots"]
scores = eval(parsed_json_object["score"])
for metric, score in scores.items():
    if "server_results" in metric:
        prompt_id = parsed_json_object["prompt_template_id"]
        max_new_tokens = parsed_json_object["text_generation_params"]["max_new_tokens"]
        checkpoint_id = parsed_json_object["model_name_or_path"].split("/")[-2].split("-")[-1]
        server_results = scores["server_results"]
        # Custom code to format server results, for example for VQAv2:
        # server_results = [{"question_id": int(server_result["question_id"]), "answer": server_result["answer"]} for server_result in server_results]
        output_file_path = result_file.parent / f"{task_name}_test_server_results" / f"CHANGEME_{checkpoint_id}_num_shots_{num_shots}_promptid_{prompt_id}_max_new_toks_{max_new_tokens}_{task_name}_result.json"
        output_file_path.parent.mkdir(parents=True, exist_ok=True)
        print(f"output_file_path: {output_file_path}")
        with open(output_file_path, 'w') as output_file:
            output_file.write(json.dumps(server_results))
```

To date, the tasks that require a submission to an outside server are:

- VQAv2, format:
```
results = [result]
result = {
    "question_id": int,
    "answer": str
}
```
- VizWiz, format:
```
results = [result]
result = {
    "image": string,  # e.g., 'VizWiz_test_00020000.jpg'
    "answer": string
}
```
- TextCaps, format:
```
results = [result]
result = {
    "image_id": string,
    "caption": string
}
```
- NoCaps, format:
```
results = [result]
result = {
    "image_id": int,
    "caption": string
}
```
{ "source": "huggingface/smollm", "title": "vision/experiments/evaluation/vloom/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/experiments/evaluation/vloom/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 4267 }
# Webdataset

WebDataset is a Python library that provides a convenient way to work with large datasets stored in a remote location, such as an Amazon S3 bucket or a Google Cloud Storage bucket. The library allows streaming data from these remote locations on the fly during training. We need this type of solution to work on HFC because we don't have the space to store our data on disk.

## Setup

To stream data from s3 you need to:
- install s5cmd with `conda install -c conda-forge s5cmd`
- configure aws with your credentials
- put a specific command in place of the local paths that were used before, such as `pipe:bash ${WORKING_DIR}/experiments/pretraining/vloom/common/webdataset_get_file.sh ${tar_file_path_on_s3}`
- set `use_webdataset` to true in the configuration

## Philosophy of webdataset

The idea of webdataset is to define a sequence of operations, each of which is applied to the iterable resulting from the previous operation. Here is an outline of the sequence of operations carried out:

- Input: a list or an iterable of commands or local paths. There are two identified use cases for m4. Either you want to open tar files stored locally, in which case the list should simply contain the local paths to the shard files. Or you want to open tar files stored on s3, in which case you need to pass a command - `pipe:bash <PATH_TO_M4_CLONE>/experiments/pretraining/vloom/common/webdataset_get_file.sh <S3_URI>` - filled with the uri of the tar file.
- Sharding by node: each gpu in the training receives a subset of the iterable of input files. If this list has no duplicates, each gpu will have unique shards.
- Sharding by worker: each dataloader on each gpu can have several workers. If this is the case, the list of files assigned to a gpu is again divided between the workers.
- Conversion of the tar files into samples: this step involves re-grouping the tar members that make up a single example. Webdataset is currently not designed for web documents that are made up of several texts and images, which is why we have a customised method, `group_by_keys_interleaved`. This method also ensures that each example is complete, as piping can result in tar files being cut off in the middle.
- Collation of the samples into instances: this step changes the format of the samples to get closer to the format expected by our pipeline.
- Decoding of the images and the texts: so far, the elements in the instances are still bytes. This step converts them to their final python objects (PIL image and string).
- Batching of the examples: to finish the pipeline, we batch `map_batch_size` examples together.

## Shuffling

To shuffle the examples, we can either:
1. Pseudo-shuffle on the fly
2. Shuffle upstream

### Pseudo shuffle on the fly

When the tar files are not shuffled, or when several epochs are carried out on the same tar files, this is the method available to us. The idea of on-the-fly pseudo-shuffling is to add buffers between targeted pipeline operations, from which examples are drawn at random. We have configuration variables to define the length of each buffer, but unfortunately the larger the buffer, the more RAM it consumes. In addition, webdataset offers a warm-up phase during which you can start drawing from the buffer without having to wait until it is completely full.
By default, there is no shuffling on the fly; the following variables need to be adjusted for each dataset:
```python
shuffle_initial_urls_list: bool
shuffle_before_split_by_node_buffer_size: Optional[int]
shuffle_before_split_by_worker_buffer_size: Optional[int]
shuffle_after_tarfile_to_samples_buffer_size: Optional[int]
shuffle_after_batching_buffer_size: Optional[int]
```

### Shuffle upstream

The idea here is to store the samples in the tar files in a random order, so that there is no pseudo-shuffling to do on the fly. In that case, the configuration parameters that need to be set to None/False are:
```python
shuffle_initial_urls_list: bool = False
shuffle_before_split_by_node_buffer_size: Optional[int] = None
shuffle_before_split_by_worker_buffer_size: Optional[int] = None
shuffle_after_tarfile_to_samples_buffer_size: Optional[int] = None
shuffle_after_batching_buffer_size: Optional[int] = None
```

## Resume training

Currently, we don't have a feature to resume a training from where it left off in the previous run in terms of data (see the "Potential improvements" section).

## Hyper-parameters tuning

On-the-fly streaming of examples from S3 adds hyper-parameters that have to be tuned for almost every scale of experiment. The parameters that have the greatest influence on each other are: `max_num_images`, `max_seq_len`, `map_batch_size`, `max_num_samples_per_document`, `shuffle_before_split_by_node_buffer_size`, `shuffle_before_split_by_worker_buffer_size`, `shuffle_after_tarfile_to_samples_buffer_size`, `shuffle_after_batching_buffer_size`, `batch_size_per_gpu`, `num_workers` and the time of the forward + backward + opt step.

## Potential improvements

### S3 pipe script

Currently, the S3 piping works by downloading the shard to the node's NVMe drive and then piping the file. This solution appears to be sub-optimal because there is an extra write and read on the NVMe. However, without this trick, we don't get the full tar files in the pipe: we never reach their end. The current hypothesis is that the internal retry system of tools such as `s5cmd` or `aws s3` does not work with the pipe.

### Disallow list

To have good control over the non-repetition of data in a training that is split into several jobs, a disallow-list system should be implemented and used in conjunction with upstream shuffling. It's not a simple feature, especially if you want a perfect implementation. Nevertheless, if you accept losing the end of a few shards, the solution shown in PR #1307 should provide a good basis.

## Create tar files

All the files used to create the tar files are inside `datasets_processing_scripts/01_tar_datasets_with_jpeg`. For future processing, particular attention should be paid to the order of the files in the tar.

## Debug tips

Currently, a bug in the webdataset pipeline will not cause the training to crash. The error is simply logged and the code moves on. For the future, or for debugging, the following change should be considered: `handler=log_and_continue` -> `handler=wds.reraise_exception`.

# Checkpoint extraction

At the end of the training, the normal model weights file isn't in the checkpoint and requires a manual extraction, which is done offline; the script then has the luxury of using the whole node's CPU RAM, e.g.:

```
cd /fsx/m4/experiments/local_experiment_dir/tr_171-save-load/opt_step-50/accelerator_state/
./zero_to_fp32.py . output.bin
```

The `zero_to_fp32.py` script is already copied into the checkpoint upon checkpoint saving.
We aren't gathering the full model on every save because it's slow and there might not be enough memory to do so. Therefore, we use `stage3_gather_16bit_weights_on_model_save: False` so that each GPU only saves its own shards.

# Monitoring a training

What does it mean to monitor a training? The most important things to look at:
- Most importantly, the loss should go down (on average) consistently. If it diverges, intervention is required (see next section)
- Looking at the colab metrics (with parameters/gradients/activations) is a useful indicator. Weird behaviors (as in explosions) usually precede a divergence of the loss.
- Is the training still in the slurm queue? If not, intervention is required (see next section)
- Node failures (they will most likely make the training crash). You can do a `grep 'srun: error: Node failure'` on the text logs. Reporting them on #science-cluster-support is a good idea.

# How to intervene when a training diverges

In case of rewinding, I recommend starting another plot on WB by setting the `wandb_run_id` inside `resume_run_infos.json` to the empty string. This will create a new run on WB, so you can compare the two runs, and hopefully the loss for the new run will not diverge.

Try, in order of complexity:
- Rewind and restart (that is essentially reshuffling the data)
- Rewind, decrease the LR and restart
- Rewind, reset the optimizer, set lr=0, restart, train with lr=0 for a bit, then restart again with the restored lr

## How to reset the wandb run on rollback

Empty the `wandb_run_id` string in `resume_run_infos.json`.

## How do I rewind to a previous training?

Change the path of the resuming checkpoint in `$SAVE_DIR/latest_opt_step_dir`. Additionally, if you already started to evaluate the previous checkpoints that you are now discarding, you might want to back up your previous results and roll back your evaluation results JSON file to the same opt step. Here's an example script to do it:

```python
from pathlib import Path
import json
import shutil

# ------------ Fill in these variables ------------
roll_back_number = XX  # e.g. 4
roll_back_to_opt_step = XX  # e.g. 16000
run_name = "XXX"  # e.g. "tr_190_01_64n"
# -------------------------------------------------

main_eval_path = Path(f"/fsx/m4/experiments/local_experiment_dir/evals/results/{run_name}_evaluations.jsonl")
archive_eval_path = main_eval_path.parent / f"{run_name}_evaluations_archive_exp_{roll_back_number}.jsonl"

# First we start by backing up the current evals file
shutil.move(main_eval_path, archive_eval_path)

# Then we select the evals we want to keep
# 1. Load the evals from the archive
parsed_json_objects = []
try:
    with open(archive_eval_path, 'r', encoding="ISO-8859-1") as file:
        json_data = file.read()

    # Split the JSON data into separate entities
    json_entities = json_data.strip().split('\n')

    for entity in json_entities:
        try:
            parsed_json = json.loads(entity)
            parsed_json_objects.append(parsed_json)
        except json.JSONDecodeError as e:
            print("Failed to parse JSON entity:", e)
    print("JSON entities parsing succeeded.")
except IOError as e:
    print("Error reading the file:", e)

# 2. Select the evals we want to keep
cleaned_parsed_json_objects = []
for parsed_json in parsed_json_objects:
    curr_opt_step = int(parsed_json["model_name_or_path"].split("/")[-2].split("-")[-1])
    if curr_opt_step <= roll_back_to_opt_step:
        cleaned_parsed_json_objects.append(parsed_json)
print(len(cleaned_parsed_json_objects), len(parsed_json_objects))

with open(main_eval_path, 'w') as output_file:
    for r in cleaned_parsed_json_objects:
        output_file.write(json.dumps(r) + "\n")
print("Parsed JSON data saved to", main_eval_path)
```

## How do I decrease/change the LR mid-training?

Tweak the states (there are two fields for the LR) in `opt_step-xxx/accelerator_state/custom_checkpoint_0.pkl`. E.g. to reduce the lr by 25% do:

```
python -c "import sys, torch; sd=torch.load(sys.argv[1]); \
print(sd['base_lrs']); sd['base_lrs'] = [x*0.75 for x in sd['base_lrs']]; print(sd['base_lrs']); \
print(sd['_last_lr']); sd['_last_lr'] = [x*0.75 for x in sd['_last_lr']]; print(sd['_last_lr']); \
torch.save(sd, sys.argv[1])" opt_step-14500/accelerator_state/custom_checkpoint_0.pkl
```

## How do I reset the optimizer?

Set `load_optimizer_states` to False.

# I detected a bug in the code, and it will require some time to fix

If you think that fixing a bug might take more than 1 day, and fixing it doesn't require the full capacity (in terms of nodes) that we are training on, then put the job array on hold (`scontrol hold <job_id>`) and let people know on #science-cluster-planning that they should use the idle GPUs for some other runs. Right now the BigCode team almost always has some smaller jobs to run on the side, so they can squeeze in some jobs and halt them when we are ready to relaunch.
{ "source": "huggingface/smollm", "title": "vision/experiments/pretraining/vloom/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/experiments/pretraining/vloom/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 11925 }
# Generation

Process:
- find one or more opt-step checkpoints to make generations with
- create a folder in code/m4/experiments/generations
- add a config.yaml and a [gen_folder_name]_generate.slurm file
- fill the config file according to the desired hyperparameters: prompt/num_beams/ngram_repeats etc. (a hypothetical sketch follows this list)
- run sbatch [m4_repo_name]/experiments/generation/[gen_folder_name]/[gen_folder_name]_generate.slurm
- check wandb and make sure your column shows up. If it doesn't, click on "columns" at the bottom right of the generation table and slide the missing generation to the "Displayed columns" side
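For orientation only, a hypothetical `config.yaml` might look like the sketch below. Every field name here is an assumption for illustration, not the actual schema; copy an existing generation config for the real one:

```yaml
# Hypothetical sketch - field names are assumptions, not the actual schema
opt_step_dir: /path/to/experiment/opt_step-5000
prompt: "A picture of"
num_beams: 4
no_repeat_ngram_size: 3
max_new_tokens: 64
```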
{ "source": "huggingface/smollm", "title": "vision/m4/evaluation/generation/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/evaluation/generation/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 597 }
We need to save some datasets locally with `copy_remote_sample_datasets.py` because the caching function does not work for some datasets; see https://github.com/huggingface/datasets/issues/4760 and https://github.com/huggingface/datasets/issues/3547.
{ "source": "huggingface/smollm", "title": "vision/m4/evaluation/scripts/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/evaluation/scripts/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 250 }
# Data Collection

## Goal of `data_collection`

This folder aims to:
- Simplify HTML DOM trees;
- Convert the simplified DOM trees to another structure adapted for extraction;
- Perform an extraction (either of image-text pairs, or of web documents);
- Perform a filtering on the extraction (either on image-text pairs, or on web documents);
- Visualize the results.

## Organization of `data_collection`

The sub-folder `processors` contains the files defining the functions for the big operations:
- The simplification of DOM trees in `dom_tree_simplificator.py`;
- The conversion of DOM trees to a more adapted structure in `pre_extraction_simplificator.py`;
- The extraction of web documents in `web_document_extractor.py`;
- The filtering of web documents in `web_document_filtering.py`;
- The extraction of pairs in `pair_extractor.py`;
- The filtering of pairs in `pair_filtering.py`.

These files require other functions or external parameters to work, which are defined in the sub-folder `utils`. The calls to these operations from `processors` to obtain outputs (in `outputs`, which is now a bit outdated, or rather in `large_files` at the root of the repo) are done in the sub-folder `callers`.

The sub-folder `visualization` contains the streamlit apps to visualize these outputs:
- In `global_visualization.py`, one can view how the simplification of DOM trees affects the HTML codes, the trees, and the rendered webpages. We also visualize web documents and extract image-text pairs with additional information.
- In `web_document_visualization.py`, one can visualize web documents and see the impact of filtering on them.
- In `pair_visualization.py`, one can obtain statistics on the extracted image-text pairs, and see the impact of filtering on these statistics.
- `plot_clib_distrib.py` is used to obtain the distributions of CLIP scores of reference datasets, to compare with our distribution.
- We used `pair_stat_dashboard.py` to obtain a lot of statistics on the pairs at the beginning. This file might not be maintained anymore.

The sub-folder `debug` is used for debugging and can be ignored.

## Explanation of the data collection pipeline

### Starting point for the dataset

We start with the dataset [`c4-en-html-with-metadata`](https://huggingface.co/datasets/bs-modeling-metadata/c4-en-html-with-metadata), which contains 45M English HTML documents whose URLs correspond to C4 examples gathered by the modeling-metadata group. Each example of the dataset contains a lot of metadata, but we are currently only interested in the columns `html` and `url`. The full dataset is downloaded on Jean Zay at the path `/gpfsscratch/rech/cnw/commun/local_datasets/c4-en-html-with-metadata-arrow/` (5.9T), and the full dataset containing only the columns `html` and `url` is at the path `/gpfsscratch/rech/cnw/urd43gx/c4_en_html` (3.5T).

### From an HTML string to a tree structure

We use [Selectolax](https://github.com/rushter/selectolax) to efficiently parse the HTML strings and create trees. Each node in the tree corresponds to a tag or a text, and we can access its attributes.
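A minimal sketch of this parsing step, using Selectolax's standard API:

```python
from selectolax.parser import HTMLParser

tree = HTMLParser("<div class='intro'>this is a test <b>bold</b></div>")

# Traverse the tree depth-first; text nodes carry the special tag "-text"
for node in tree.root.traverse(include_text=True):
    if node.tag == "-text":
        print(node.tag, repr(node.text(deep=False)))
    else:
        print(node.tag, node.attributes)
```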
### Simplifying the DOM trees

With the `processor` `DOMTreeSimplificator`, we apply several simplifications to obtain simplified DOM trees and remove unnecessary information:
- We remove comments in the HTML strings;
- We replace the tags `<br>`, `<br/>` and `<br />` with a line break;
- We unwrap a list of tags (see below), meaning that we remove the tags but keep the content inside;
- We strip every tag not in a list (see below), meaning that we completely remove both the tags and what's inside;
- In a complementary way to the previous step, we additionally remove some tags that contain unwanted attribute values, for example a `<div>` tag with an attribute `class` whose value is `date`;
- We remove empty nodes;
- We un-nest nodes, meaning that if a parent node only has one child and no associated text, the child becomes the parent.

Tags that we unwrap: `a`, `abbr`, `acronym`, `b`, `bdi`, `bdo`, `big`, `cite`, `code`, `data`, `dfn`, `em`, `font`, `i`, `ins`, `kbd`, `mark`, `q`, `s`, `samp`, `shadow`, `small`, `span`, `strike`, `strong`, `sub`, `sup`, `time`, `tt`, `u`, `var`, `wbr`.

After unwrapping the tags, tags **not** in this list are removed:
- Tags defining a structure: `address`, `article`, `aside`, `blink`, `blockquote`, `body`, `br`, `caption`, `center`, `dd`, `dl`, `dt`, `div`, `figcaption`, `h`, `h1`, `h2`, `h3`, `h4`, `h5`, `h6`, `hgroup`, `html`, `legend`, `main`, `marquee`, `ol`, `p`, `section`, `summary`, `title`, `ul`;
- Tags defining a media: `audio`, `embed`, `figure`, `iframe`, `img`, `object`, `picture`, `video`;
- Tags that could contain an interesting attribute: `source`.

Tags that we could consider if we really wanted a high recall (but we cannot do anything with most of them): `bgsound`, `button`, `canvas`, `col`, `colgroup`, `datalist`, `details`, `dialog`, `dir`, `fieldset`, `form`, `frame`, `frameset`, `head`, `header`, `input`, `li`, `label`, `map`, `nav`, `optgroup`, `option`, `pre`, `select`, `slot`, `svg`, `table`, `tbody`, `td`, `template`, `tfoot`, `th`, `thead`, `tr`, `track`, `xmp`.

Among the stripped tags, there are some that we might reconsider later: `table` (and its associated tags), `form`, `li`, `head`, `header`, `nav`. We chose to remove these tags either because they are hard to transform into text (how do we render a `table` as clear linear text?), or because, even though they can contain useful information, in most cases they hold noise related to website navigation (`li` for example).

### Having a good structure

With the `processor` `PreExtractionSimplification`, we traverse the simplified DOM tree and append the nodes to a list to have a better structure.

If the node is a media node, we extract the interesting attributes. We check the validity of the source URL; if it is not valid, we discard the media node.

If the node is a text node, we format it with the following strategy:
- In the HTML string, replace every run of 2+ `\n` with a single `\n`;
- In the HTML string, replace every run of 2+ spaces with a single space;
- In the HTML string, replace `<br>` tags (and their various forms) with `#BR_TAG#`;
- Within a text node, replace every `\n` or `\t` with a space;
- Within a text node, replace every run of 2+ spaces with a single space;
- Within a text node, split on `#BR_TAG#`, strip each element, and merge on `\n`. If the very first and/or last characters of the text are spaces, make sure to keep them.
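A minimal sketch of this per-node formatting strategy (assuming the `<br>` replacement already happened on the HTML string, per the first three steps):

```python
import re

BR_TOKEN = "#BR_TAG#"

def format_text_node(text: str) -> str:
    # Within a text node: newlines/tabs -> spaces, then collapse space runs
    text = re.sub(r"[\n\t]", " ", text)
    text = re.sub(r" {2,}", " ", text)
    # Remember leading/trailing spaces so they survive the per-part strip below
    leading = " " if text.startswith(" ") else ""
    trailing = " " if text.endswith(" ") else ""
    # Split on the <br> placeholder, strip each part, and merge on "\n"
    parts = [part.strip() for part in text.split(BR_TOKEN)]
    return leading + "\n".join(parts) + trailing
```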
Then, we have the possibility to merge two consecutive text nodes (and to repeat this operation) with the following strategy:
- Append the separation at the end of the first text (if there is any, it will be one space) to a set, and remove it from this text;
- Append the separation at the beginning of the second text (if there is any, it will be one space) to the set, and remove it from this text;
- Consider all the tags that differ between the path of node 1 and the path of node 2. Append to the set the separations induced by each of these tags;
- The biggest separation in the set wins, in this order: `\n\n`, `\n`, space, nothing. Merge the two texts with this separation;
- When a text node cannot be merged anymore (the previous and following nodes are not text nodes), strip the text.

### Intuition behind Selectolax trees and merging text nodes

Selectolax builds trees by creating a new node for each tag present in the HTML document, and a new node for each non-empty text. For example, consider the HTML document:

```
html_str = """
<html>
<body>
<div>
	this is a test
	<h1>Heading</h1>
	<p>
		p-inner
	</p>
	p-trailing
</div>

</body>
</html>
"""
```

When traversing the tree (depth-first) and printing the paths of the nodes (which include their tag as the last component of the path), we obtain:
- `.html`
- `.html.head`
- `.html.body`
- `.html.body.-text` (the text is "\n", because there is a line break between `body` and `div`)
- `.html.body.div`
- `.html.body.div.-text` (the text is "\n\tthis is a test\n\t")
- `.html.body.div.h1`
- `.html.body.div.h1.-text` (the text is "Heading")
- `.html.body.div.-text` (the text is "\n\t")
- `.html.body.div.p`
- `.html.body.div.p.-text` (the text is "\n\t\tp-inner\n\t")
- `.html.body.div.-text` (the text is "\n\tp-trailing\n")
- `.html.body.-text` (the text is "\n\n")

Now we want to merge these text nodes. We first merge the first text node, "\n" at `.html.body.-text`, with the second text node, "\n\tthis is a test\n\t" at `.html.body.div.-text`. To do that, we follow the strategy by first formatting the two texts, which results in “ “ and “ this is a test “. We notice that in the first text “ “, we have a space separation at the beginning of the text (it is a particular case since it is also at the end here, but considering it at the end would not change the final result). So we'll need to keep this space at the beginning of the merged text node, and we can remove it from text 1, which becomes the empty text “”. We notice that there isn't any separation at the end of text 1, but there is a separation at the beginning of text 2, which is a space. So we add this space to our set of separations, and we remove it from text 2, which becomes “this is a test “. Now, we simply have to check the differences between the paths of the two text nodes. We only have `div` that differs, and `div` induces a line break `\n`, so we add “\n” to our set of separations. Our set of separations includes “ “ and “\n”. The strongest is “\n”, so this will be our separation between the two text nodes: “ “ (the leading space that we should not forget about) + “” (text 1) + “\n” (separation) + “this is a test “ (text 2) = “ this is a test “ (note: since text 1 is empty here, the separation is effectively absorbed). This merged text node takes the path of the second text node, which is `.html.body.div.-text`.

We can now merge this new merged text node “ this is a test “ at `.html.body.div.-text` with “Heading” at `.html.body.div.h1.-text`.
This results, after the operation, in “ this is a test\n\nHeading” at `.html.body.div.h1.-text`. And so on until merging everything, which results in “ this is a test\n\nHeading\n\np-inner\n\np-trailing\n\n”. Since we cannot merge this node anymore, we can strip it to obtain “this is a test\n\nHeading\n\np-inner\n\np-trailing”. This is what is rendered when testing online in an [HTML editor](https://htmledit.squarefree.com/).

### Web documents extraction

At this stage, web documents are simply the structure detailed previously, where each node is either a text or an image (or nothing if we couldn't download the image). We use the `processor` `CommonCrawlWebDocumentExtractor` to extract web documents. Before doing this, make sure to [set up a DNS resolver](https://github.com/rom1504/img2dataset#setting-up-a-bind9-resolver).

Performance for 10K documents, on a Mac M1 Pro (all steps are done with multiprocessing on 10 cores), **without** resizing images:
- Step 1 - Extracting and processing the HTML files: 15 secs
- Step 2 - Getting the URLs of all images: < 1 sec
- Step 3 - Downloading the images with `img2dataset`: 4 min
- Step 4 - Creating the dataset containing all images (**2 GB**): 7 secs
- Step 5 - Replacing the URLs in the dataset obtained after Step 1 with image bytes (image retrieval): 2 secs

On 1M documents on a GCP machine (60 cores):
- Processing the HTML documents, simplifying them, and creating a dataset with the desired structure: 7min
- Downloading the images: 1h24min
- Creating the dataset containing all images: 30min
- Retrieving the images and adding them to the initial dataset: 16min

1964081 out of 4034154 images were successfully downloaded (48.7%). The dataset containing all images weighs 185 GB. The final dataset with texts and images weighs 243 GB.

On 10M documents on a GCP machine (60 cores):
- Processing the HTML documents, simplifying them, and creating a dataset with the desired structure: 1h03min
- Downloading the images: 11h15min
- Creating the dataset containing all images: 6h29min
- Retrieving the images and adding them to the initial dataset: 2h25min
- Saving the dataset: 1h36min
- Sharding the dataset: 7h29min

### Web document filtering

The filtering of web documents is done at different levels. First, we modify the document with a filtering at node level. Then, we decide if we keep the document with a filtering at document level.

**Node level:**

For each image, filters on:
- The format;
- The size (original and rendered widths and heights, side ratio).

For each paragraph in a text, filters on:
- The number of words;
- The special character ratio;
- The stop word ratio.

**Doc level:**

Filters on:
- The number of images;
- The number of words;
- The character repetition ratio;
- The word repetition ratio;
- The special character ratio;
- The stop word ratio;
- The flagged word ratio;
- The language identification prediction score;
- The perplexity score.

### Image-text pairs extraction

With the `processor` `TextMediaPairsExtractor`, we extract image-text pairs first by looking at the images in the list of nodes from the previously described structure. We only keep images that we are able to download. Then, to form pairs with an image, we consider the alt-text (if present), the formatted filename (essentially applying some regexes to prettify the original name of the file), and the extracted text if the node just after the image is a text node. We then split this text on `\n\n` and consider the first element (essentially the first paragraph).
We also have the possibility to extract images not present in the simplified DOM tree, but in this case the extracted text is never present. ### Image-text pairs filtering With the `processor` `PairFiltering`, we essentially filter image-text pairs based on: - The format of the images; - The size of the images (original and displayed width, original and displayed height, side ratio); - The number of words in the texts; - The special character ratio in the texts; - The repetition ratio in the texts; - The CLIP scores.
{ "source": "huggingface/smollm", "title": "vision/m4/sourcing/data_collection/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/sourcing/data_collection/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 14214 }
# Intersect a list of URLs with the URLs archived in a snapshot of Common Crawl

In this section, I want to leave a trace of the steps I followed in order to determine how many URLs from OSCAR's English subset are present in the CC-MAIN-2021-25 snapshot of Common Crawl.

## 1. Get the list of URLs from OSCAR's English subset

```python
from datasets import load_dataset

saving_path = "/gpfswork/rech/cnw/urd43gx/urls_oscar_english/urls_oscar_english.parquet"  # CHANGEME

dataset = load_dataset("oscar-corpus/OSCAR-2109", language="en", split="train", use_auth_token=True)
print("Dataset successfully loaded")

def get_urls_from_meta_column(meta_col):
    urls = [meta_item["headers"]["warc-target-uri"] for meta_item in meta_col]
    return {"url": urls}

dataset = dataset.map(
    get_urls_from_meta_column,
    batched=True,
    batch_size=1000,
    remove_columns=dataset.column_names,
    num_proc=25,
    input_columns=["meta"],
)
dataset.to_parquet(saving_path)
```

Note: for the following steps, we need the list as a table in Parquet format.

## 2. Transfer the Parquet table to an S3 bucket

I copied it here: `s3://m4-cc-index/urls_oscar_english/urls_oscar_english.parquet`

## 3. Create on [AWS Athena](https://aws.amazon.com/athena/) a database and a table storing the Common Crawl index

Follow the steps described here: https://commoncrawl.org/2018/03/index-to-warc-files-and-urls-in-columnar-format/

If the table `ccindex` already exists, don't forget to update it with the latest crawl snapshots by running `MSCK REPAIR TABLE ccindex`.

## 4. Create a new database and table with the OSCAR English-subset URLs

In the Athena UI, run:

```sql
CREATE DATABASE m4
```

```sql
CREATE EXTERNAL TABLE IF NOT EXISTS m4.urls_oscar_english (
  `url` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1'
)
LOCATION 's3://m4-cc-index/urls_oscar_english'
TBLPROPERTIES ('has_encrypted_data'='false');
```

```sql
SELECT * FROM "m4"."urls_oscar_english" limit 10;
```

|url |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------|
|https://cupe4914.com/cupe-4914-warns-that-peel-cas-needs-to-stop-putting-vital-supports-at-risk-and-get-serious-about-negotiating-a-fair-collective-agreement/|
|https://cure-ed-info.com/politics/shameful-publicity-stunt-labour-mp-hits-out-at-rishi-sunak-sit-down-with-gordon-ramsay/ |
|https://cure-ed-info.com/world-news/tanzanian-president-blames-lab-after-goat-papaya-test-positive-for-coronavirus/ |
|https://custom-essay-writer.net/2020/10/21/literature-review-hq_7h/ |
|https://customclassicsmancaves.com/index.php?route=product/product&product_id=746 |
|https://customdistributors.com/recipe/blt-baked-potatoes/ |
|https://customtwit.com/30-laws-of-flow-wordpress-site-with-charlene-day/ |
|https://customwritingshelp.com/2021/02/23/question-1-of-20-one-purpose-of-closing-entries-is-to-give-zero/ |
|https://customwritingwebsite.com/2020/06/30/write-my-college-essay_kl/ |
|https://cuttingedgechicago.com/terms-and-conditions |

## 5. Join the OSCAR English-subset URL table with the Common Crawl index table
```sql
CREATE TABLE "m4"."result_urls_oscar_english"
WITH (
  format = 'parquet',
  external_location = 's3://m4-cc-index/result_urls_oscar_english/'
) AS
select a.*, b.*
from (
  select url
  from m4.urls_oscar_english
) as a
left join (
  select
    url as url_2,
    url_host_name,
    content_mime_type,
    content_mime_detected,
    warc_filename,
    warc_record_offset,
    warc_record_length,
    warc_segment,
    crawl,
    fetch_status,
    content_languages
  from ccindex.ccindex
  where crawl = 'CC-MAIN-2021-25'
) as b
on a.url = b.url_2
```

## 6. Get the number of OSCAR English-subset URLs in the CC-MAIN-2021-25 snapshot

```sql
select count(*) from m4."result_urls_oscar_english" where url_2 is not NULL;
```

108551545

Without duplicated URLs:

```sql
select count(DISTINCT url) from m4."result_urls_oscar_english" where url_2 is not NULL;
```

106503003
{ "source": "huggingface/smollm", "title": "vision/m4/sourcing/get_html_files/common_crawl.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/sourcing/get_html_files/common_crawl.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 4988 }
# Data Processing Pipelines

Related to issue [#12](https://github.com/huggingface/m4/issues/12).

We have two v0 data processing pipelines:
- (a) split (for sharding) + parallel/slurm arrays of whatever processing scripts (Python or Rust, for instance)
- (b) Apache Beam (for creating processing pipelines) + Dataflow (for horizontal scaling)

## App

The ngram search is mostly an example. To launch the app:

```bash
streamlit run app.py --server.port 6006
```
{ "source": "huggingface/smollm", "title": "vision/m4/sourcing/processing/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/sourcing/processing/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 456 }
# Web document filtering documentation

The filtering is done at node and document levels. At node level, we consider paragraphs or images. This filtering is done to clean the document before the filtering at document level, which decides whether we keep the document or not.

Some filters are defined at both node and document levels. If the thresholds were the same for these two levels, it wouldn't be useful to call these filters at document level again, since documents would automatically pass the filtering, given that the problematic nodes were already removed at node level. However, the thresholds shouldn't be the same at node and document levels. In a text at node level, you can have short sentences, while at document level, you see the bigger picture. You can therefore be much stricter on the threshold at document level than at node level while keeping a high recall.

https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/utils/filtering_utils.py#L37

## Filtering at node level

This operation is done in a `.map`: we have web documents as inputs, and modified web documents as outputs.

### Filtering at node level for texts

We start by **modifying** the texts by doing:
- Non-printing character removal (just removing weird characters that are not rendered when displaying a text, but are still visible to a computer);
- White space standardization (there are many different white space characters, and we need to standardize them to be able to split on white space characters to obtain the words in a text, useful later for the filtering).

https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L61
https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L65

Then, for each text, we split on `\n\n` to obtain paragraphs, and for each paragraph, we apply a filtering on:
- The number of words;
- The character repetition ratio;
- The word repetition ratio;
- The special character ratio;
- The stop word ratio;
- The flagged word ratio;
- The ratio of punctuation characters vs the number of words;
- The common word ratio;
- The language identification prediction score;
- The perplexity score.

See details below for the calculations of these quantities.

We remove paragraphs that do not pass the filtering, and join the remaining paragraphs on `\n\n`.

### Filtering at node level for images

For each image, we filter on:
- The format;
- The size (original and rendered widths and heights, aspect ratio).

https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L13
https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L20

## Filtering at document level

This operation is done in a `.filter`. We have the modified web documents as inputs (the outputs of the filtering at node level), and booleans indicating whether we keep the documents or not as outputs.
We filter web documents on:
- The number of images;
- The number of words;
- The character repetition ratio;
- The word repetition ratio;
- The special character ratio;
- The stop word ratio;
- The flagged word ratio;
- The ratio of punctuation characters vs the number of words;
- The common word ratio;
- The language identification prediction score;
- The perplexity score.

See details below for the calculations of these quantities.

## Keeping a high diversity

Some web documents, even if they are rare, can be extremely long or contain an inordinate number of images. Imagine we have a dataset of 1000 documents: 999 of these documents contain 10 words and 1 image each, and the remaining document contains 10000 words and 1000 images. Then we have the feeling that we have diversity in our dataset, since we took 1000 random documents from the internet, but half of the content comes from one single document, which is likely to be on the same topic (if it is not spam, which is highly possible). To remove these outliers (which exist, and take up a disproportionate share of the dataset), we remove documents with too many words or images.

https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L51
https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L137

## Character repetition ratio calculation

For a given $n$, we count the occurrences of each *character* $n$-gram present in the text. We denote by $r$ the number of character $n$-grams with at least 2 occurrences. We define the character repetition ratio as the ratio of the sum of the $\min(k, r)$ largest occurrences ($k$ is defined just below) to the sum of all occurrences, and we discard texts with too high a ratio.

If $k=1$, short sentences are much more likely to have a high character repetition ratio, since the most frequent character $n$-gram represents a larger proportion of the sentence. If $k$ is the number of occurrences greater than or equal to $2$, very long texts, but not necessarily including repetitions, tend to have a high character repetition ratio, since these texts inherently have a wide diversity of character $n$-grams. Taking $k=\lfloor \sqrt{N} \rfloor$, with $N$ the number of different character $n$-grams found in the text, counterbalances this effect well in practice.

*Example:* Take the sentence `ok_ok_good_ok` and $n=3$. Character $n$-grams, with their frequencies, are given in the following table.

| `ok_` | `_ok` | `k_o` | `k_g` | `_go` | `goo` | `ood` | `od_` | `d_o` |
| - | - | - | - | - | - | - | - | - |
| 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |

Since we have 9 different character $n$-grams, $N=9$ and $k = \lfloor \sqrt{N} \rfloor = 3$. We have two character $n$-grams with at least two occurrences, so $r=2$. Then, $\min(k, r)=2$. The sum of the $\min(k, r)$ largest occurrences is $2+2=4$ and the sum of all occurrences is $11$. Thus, the character repetition ratio for this sentence is $\frac{4}{11}$.

https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L167
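A minimal sketch of this computation (the assertion reproduces the worked example above):

```python
from collections import Counter
import math

def character_repetition_ratio(text: str, n: int = 3) -> float:
    ngrams = [text[i:i + n] for i in range(len(text) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    k = math.floor(math.sqrt(len(counts)))         # k = floor(sqrt(N))
    r = sum(1 for c in counts.values() if c >= 2)  # n-grams occurring at least twice
    largest = sorted(counts.values(), reverse=True)[:min(k, r)]
    return sum(largest) / sum(counts.values())

assert abs(character_repetition_ratio("ok_ok_good_ok", n=3) - 4 / 11) < 1e-9
```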
## Word repetition ratio calculation

As a complement to the previous filter, we remove texts that contain commonly repeated, similar long sentences. More specifically, we create a filter for the repetitions by looking this time at the occurrences of the *word* $n$-grams, for a chosen parameter $n$. We define the word repetition ratio as the ratio of the sum of the occurrences greater than or equal to 2 to the sum of all occurrences, and we discard texts with too high a ratio. Contrary to the filter on the character repetition ratios, I did not find a bias of this method giving systematically higher or lower scores to longer or shorter texts. This filter is more robust at finding texts with long exact duplicated sentences in them, while the previous one is used to find short- to medium-sized repetitions.

*Example:* Take the sentence `My name is Hugo. What is your name? My name is Paul.` and $n=2$. Word $n$-grams, with their frequencies, are given in the following table.

| `My name` | `name is` | `is Hugo` | `Hugo What` | `What is` | `is your` | `your name` | `name My` | `is Paul` |
| - | - | - | - | - | - | - | - | - |
| 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |

We have two word $n$-grams with at least two occurrences, for a total of $2+2=4$ occurrences. The sum of all occurrences is $11$, so the word repetition ratio for this sentence is $\frac{4}{11}$.

https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L197
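The analogous sketch for the word repetition ratio, again reproducing the worked example:

```python
from collections import Counter

def word_repetition_ratio(words: list[str], n: int = 2) -> float:
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c >= 2)  # occurrences >= 2
    return repeated / sum(counts.values())

words = "My name is Hugo What is your name My name is Paul".split()
assert abs(word_repetition_ratio(words, n=2) - 4 / 11) < 1e-9
```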
## Special character ratio calculation

The list of special characters was defined by taking an existing list of special characters, then finding, in many web texts, non-ASCII characters that were not present in this list, counting their occurrences, and finally adding the most frequent ones to the original list. Emojis are also added to the list. We simply discard texts with a special character ratio above a certain threshold.

https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L217

## Stop word ratio calculation

Having a low stop word (or closed-class word) ratio in a text is one of the best indicators of non-human-generated content. The list of stop words was built by taking pre-existing lists, for example from Universal Dependencies. We discard texts with too low a closed-class word ratio.

https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L236

## Flagged word ratio calculation

To build a list of flagged words, we used a one-step bootstrapping method:
- We started with a concatenation of pre-existing (even if not perfect) lists of flagged words found on the internet;
- Then, we computed the flagged word ratios of many documents from the internet, using this list;
- We used these scores to build a database containing the documents with the highest flagged word ratios;
- Then, we manually inspected these documents to discover new words to add to the list;
- Finally, the list was filtered with the precise instructions below.

*Instructions for building the lists of flagged words:* Keep only the words associated with porn and systematically used in a sexual context. Remove words that can be used in medical, scientific, colloquial (without referring systematically to porn), or everyday contexts. Remove all insults. Remove all words referring to race or sexual orientation.

We are then able to compute the flagged word ratio of a text and discard it if it is too high.

https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L255

## Punctuation ratio calculation

With a regular expression, we split a text string into a list of words and punctuation marks (using a predefined list of punctuation characters used in English). We then compute the ratio of the number of punctuation marks to the number of words. We discard texts with too low a punctuation ratio, as they are usually indicative of poor-quality text.

## Common word ratio calculation

We analyzed a large amount of Common Crawl text through the OSCAR dataset, extracting and counting words and removing those that occur only once. We compute the common word ratio of a text to identify machine-generated content, removing texts with a low ratio.

https://github.com/huggingface/m4/blob/57bda9f70eec539401046b5127ecdff5ae6b4e71/m4/sourcing/data_collection/processors/web_document_filtering.py#L317

## Language identification prediction score calculation

FastText is used to perform language identification and to get confidence scores for a text. If a score is below a specific threshold, we discard the text.

https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L279

## Perplexity score calculation

We trained SentencePiece unigram tokenizers on Wikipedia article openings, followed by KenLM 5-gram models trained on the tokenized text. We discard texts with too high perplexity scores.

https://github.com/huggingface/m4/blob/4e95b234c1206355848faf0e77492717e3e70154/m4/sourcing/data_collection/processors/web_document_filtering.py#L363
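A minimal sketch of this perplexity scoring step, assuming the standard `sentencepiece` and `kenlm` Python APIs; the model file names are placeholders:

```python
import kenlm
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="wikipedia_unigram.model")  # placeholder path
lm = kenlm.Model("wikipedia_5gram.arpa")                               # placeholder path

def perplexity(text: str) -> float:
    # Tokenize with SentencePiece, then score the space-joined tokens with KenLM
    tokenized = " ".join(sp.encode(text, out_type=str))
    return lm.perplexity(tokenized)

# Discard the document if its perplexity exceeds the chosen threshold
print(perplexity("An example paragraph to score."))
```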
{ "source": "huggingface/smollm", "title": "vision/m4/sourcing/data_collection/docs/filtering_doc.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/sourcing/data_collection/docs/filtering_doc.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 11781 }
# Image deduplication

# Methods

The main information of an image is contained in its low frequencies, the high frequencies providing detail. The main methods to perform image deduplication quickly, with decent quality, are based on this observation.

## Average hashing (AHash)

- Convert the image to grayscale.
- Reduce the size of the image. To obtain a 64-bit hash, shrink the image to 8x8. This removes the high frequencies.
- Compute the average value of the 64 pixels.
- For each pixel, replace its value with 1 if it is larger than the average, or 0 otherwise.
- Unroll the image to obtain the hash.

A Hamming distance is used to compare two hashes. It is fast, but not robust to minor modifications of the image. Lots of false positives.

## Perceptual hashing (PHash)

- Convert the image to grayscale.
- Reduce the size of the image, to 32x32 for example. This step is done to simplify the DCT computation, not because it is needed to reduce the high frequencies.
- Compute the 32x32 DCT, and keep the top-left 8x8, which represents the low frequencies.
- Compute the average value of the top-left 8x8, excluding the DC coefficient.
- For each of the 64 pixels, replace its value with 1 if it is larger than the average, or 0 otherwise.
- Unroll the image to obtain the hash.

It is slower than AHash, but more robust to minor modifications of the image. Fewer false positives than AHash in practice.

## Difference hashing (DHash)

- Convert the image to grayscale.
- Reduce the size to 9x8, essentially to remove the high frequencies.
- For each pixel, compare its value to the one on its right. Replace it with 1 if it is larger, or 0 otherwise, ending with an 8x8 image.
- Unroll the image to obtain the hash.
- Optional: repeat the steps comparing pixels along the columns instead of the rows, and concatenate the new hash with the previous one to obtain a 128-bit hash, which in practice reduces the number of false positives.

It is as fast as AHash, with fewer false positives, but it is less accurate than PHash.

## Wavelet hashing (WHash)

Same as PHash, but uses the DWT instead of the DCT. It is much faster than PHash, slightly slower than AHash and DHash, but produces far more false positives.

# Libraries

## [`imagededup`](https://github.com/idealo/imagededup)

It deduplicates with the algorithms: CNN, Perceptual hashing, Difference hashing, Wavelet hashing, Average hashing.

## [`imagehash`](https://github.com/JohannesBuchner/imagehash)

It supports: Perceptual hashing, Difference hashing, Wavelet hashing, Average hashing, HSV color hashing (colorhash), Crop-resistant hashing.

## [`image-match`](https://github.com/ProvenanceLabs/image-match)

It implements the Image signature algorithm.

## [`imgdupes`](https://github.com/knjcode/imgdupes)

It supports Perceptual hashing (using only the 8x8 DCT low-frequency values, including the first term), Difference hashing, Wavelet hashing, Average hashing, and Perceptual hashing org (using only the 8x8 DCT low-frequency values, excluding the first term, since the DC coefficient can be significantly different from the other values and would throw off the average). It uses `imagehash`, except for Perceptual hashing org.

## [`simhash-py`](https://github.com/seomoz/simhash-py)

It implements the SimHash algorithm, as well as a solution to the Hamming distance problem.

## [`faiss`](https://github.com/facebookresearch/faiss)

Efficient similarity search and clustering of dense vectors.

## [`hnswlib`](https://github.com/nmslib/hnswlib)

Fast approximate nearest-neighbor search.
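For reference, a minimal near-duplicate check with the `imagehash` library (the file paths and the distance threshold are placeholders):

```python
from PIL import Image
import imagehash

hash_a = imagehash.phash(Image.open("a.jpg"))
hash_b = imagehash.phash(Image.open("b.jpg"))

distance = hash_a - hash_b  # Hamming distance between the two 64-bit hashes
if distance <= 8:           # threshold is an assumption
    print("likely near-duplicates")
```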
# Research papers

## Interesting papers

- [Duplicate Discovery on 2 Billion Internet Images (2013)](https://people.csail.mit.edu/celiu/pdfs/CVPR13-bigvis-dupdiscovery.pdf): This paper presents a method to assign a hash to an image by considering different scales, splitting the image into blocks, computing the average pixel value of each block, gathering everything, performing a PCA (trained on a portion of the total set of images), and quantizing the result by setting the PCA coefficients to 0 or 1 depending on whether they are below or above the average PCA coefficient values.<br> Moreover, this paper performs an $\epsilon$-clustering (complexity $\mathcal{O}(n^2)$!) to find clusters by comparing the distances between the PCA signatures (before quantization), and then another loop to improve the results by merging clusters whose representatives have PCA signatures (after quantization) with a low Hamming distance.<br> I am really surprised that the $\mathcal{O}(n^2)$ complexity worked for them on 2B images, even if they considered 24-bit hashes.<br> They also found that 1/4 of the images were duplicates (icons, ads, ...).
- [D2LV: A Data-Driven and Local-Verification Approach for Image Copy Detection (2021)](https://arxiv.org/pdf/2111.07090.pdf): The authors are the winners of the Image Similarity Challenge proposed by Facebook. They used neural networks with an unsupervised pre-training, followed by a training with both a triplet and a cross-entropy loss. They used a combination of global and local features (as is often the case in other approaches).
- [3rd Place: A Global and Local Dual Retrieval Solution to Facebook AI Image Similarity Challenge (2021)](https://arxiv.org/pdf/2112.02373.pdf): This paper presents a method for checking if two images are similar. It uses a combination of global and local features. The global features are obtained with a Transformer pre-trained on ImageNet and trained with a triplet loss, and the local features come from the SIFT algorithm.
- [Detecting Near-Duplicates for Web Crawling (2007)](https://www2007.org/papers/paper215.pdf): This paper verifies that the SimHash algorithm (for text) creates hashes such that near-duplicates have close Hamming distances. Also, and this is what matters most for us, it gives an efficient solution to the Hamming distance problem, to determine the closest simhashes (i.e. not in quadratic time) via random projections or permutations + interpolation search.
- [An image signature for any kind of image (2002)](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.2585&rep=rep1&type=pdf): This presents the Image signature algorithm. We start by cropping the image, essentially to remove the constant areas around it. Then, we divide the image into a grid of patches. For each patch, we compute the average gray level and compare it to those of the neighboring patches. We replace the patch with 8 values, each among -2, -1, 0, 1 and 2, depending on the difference between the level of the patch and its 8 neighbors. We then concatenate all the outputs to obtain an array of 648 values. The difference between two signatures is computed with the Euclidean distance.<br> This algorithm is interesting and seems fast. I like the idea of working with patches. However, the vectors are typically longer (size 648, 10x the usual size of 64).
- [A robust image hashing based on discrete wavelet transform (2017)](https://sci-hub.hkvisa.net/10.1109/icsipa.2017.8120651): This paper introduces a new hashing method, based on image normalization, the DWT and the SVD. Some ideas could be taken from it to improve the simple PHash algorithm.

## Less interesting papers

- [Fast and accurate near-duplicate image elimination for visual sensor networks (2016)](https://journals.sagepub.com/doi/pdf/10.1177/1550147717694172): Interesting paper, but I find it really similar to [Duplicate Discovery on 2 Billion Internet Images (2013)](https://people.csail.mit.edu/celiu/pdfs/CVPR13-bigvis-dupdiscovery.pdf), in the sense that they are also doing a two-step method with global and then local features. The clustering and nearest-neighbor search are also similar. They changed the hash function, and added a PageRank algorithm to find the most relevant image to keep from a cluster once it is formed, but I don't think it really matters.<br> They provide good metrics for the evaluation.
- [Benchmarking unsupervised near-duplicate image detection (2019)](https://arxiv.org/pdf/1907.02821.pdf): This paper makes a benchmark of existing methods for image deduplication. It is interesting for understanding how to perform an evaluation.
- [Large Scale Image Deduplication (2011)](http://vision.stanford.edu/teaching/cs231a_autumn1213_internal/project/final/writeup/nondistributable/Wen_Paper.pdf): It is based on a PCA to compute the image hash. The PCA is done on a sufficiently large image collection, but I am not sure performing a PCA is better than PHash.
- [Secure image deduplication through image compression (2015)](https://sci-hub.hkvisa.net/10.1016/j.jisa.2015.11.003): It is based on a wavelet-based image compression algorithm called SPIHT. It creates the signature of an image by identifying the significant regions of the image. Interesting idea, but I am not sure how this can be better than PHash.
- [Image Deduplication Based on Hashing and Clustering in Cloud Storage (2021)](https://koreascience.kr/article/JAKO202120941694290.pdf): It presents a hashing function based on the DCT (which I don't think is better than PHash) and does the clustering with K-means, but I don't like this clustering strategy, as it is really challenging to find $k$ for images.
- [CE-Dedup: Cost-Effective Convolutional Neural Nets Training based on Image Deduplication (2021)](https://arxiv.org/pdf/2109.00899.pdf): This paper uses techniques like PHash, DHash, AHash or WHash to first deduplicate images in a training set, and then trains a neural network on it, showing that performance can be really close to training on the full dataset while reducing the size of the dataset by a large amount. However, it does not use a neural network to do the deduplication.
- [Efficient Cropping-Resistant Robust Image Hashing (2014)](https://sci-hub.hkvisa.net/10.1109/ares.2014.85): This presents the Crop-resistant hashing algorithm.<br> It is an old-school method, and I'm not convinced that being more robust against cropping doesn't hurt the overall performance.
- [A lightweight virtual machine image deduplication backup approach in cloud environment (2014)](https://sci-hub.hkvisa.net/10.1109/compsac.2014.73): It is based on the K-means algorithm, but I don't like the approach since we don't know how to choose $k$ and we have the constraint of fitting in RAM.
- [Clustering-based acceleration for virtual machine image deduplication in the cloud environment (2016)](http://lcs.ios.ac.cn/~zhangzy/paper/JSS2016Xu.pdf): It is essentially the same paper (also same authors) as *A lightweight virtual machine image deduplication backup approach in cloud environment*.
- [A duplicate image deduplication approach via Haar wavelet technology (2012)](https://sci-hub.hkvisa.net/10.1109/ccis.2012.6664249): It is based on the wavelet decomposition instead of the DCT, which seems to perform worse.
- [A High-precision Duplicate Image Deduplication Approach (2013)](http://www.jcomputers.us/vol8/jcp0811-06.pdf): It is also based on the wavelet decomposition instead of the DCT, which seems to perform worse.

# Blog posts

https://content-blockchain.org/research/testing-different-image-hash-functions/
https://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html
https://www.hackerfactor.com/blog/index.php?/archives/529-Kind-of-Like-That.html
https://fullstackml.com/wavelet-image-hash-in-python-3504fdd282b5
https://towardsdatascience.com/understanding-locality-sensitive-hashing-49f6d1f6134
https://santhoshhari.github.io/Locality-Sensitive-Hashing/
https://en.wikipedia.org/wiki/Locality-sensitive_hashing
https://mesuvash.github.io/blog/2019/Hashing-for-similarity/
https://keras.io/examples/vision/near_dup_search/
https://drivendata.co/blog/image-similarity-winners
https://www.linkedin.com/pulse/detection-duplicate-images-using-deep-learning-aditya-sharma/
https://medium.com/mlearning-ai/a-scalable-solution-to-detect-duplicate-images-97d431c2726d
https://keras.io/examples/vision/siamese_network/
{ "source": "huggingface/smollm", "title": "vision/m4/sourcing/data_collection/docs/image_deduplication_doc.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/sourcing/data_collection/docs/image_deduplication_doc.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 11941 }
# CLIP distributions - descriptive statistics

- SBU Captions

```python
DescribeResult(nobs=10000, minmax=(0.11153904348611832, 0.44991129636764526), mean=0.2874957061290741, variance=0.0016425453395696478, skewness=-0.22512623318313724, kurtosis=0.1512977180455395)
```

- Red Caps

```python
DescribeResult(nobs=10000, minmax=(0.08980361372232437, 0.4210364818572998), mean=0.3082767878524959, variance=0.001230211924011678, skewness=-0.5157219676083339, kurtosis=0.6965278169334876)
```

- LAION 400M

```python
DescribeResult(nobs=10000, minmax=(0.16056129336357117, 0.4760231077671051), mean=0.333618477447331, variance=0.0008586748609226699, skewness=0.7131919650316029, kurtosis=1.668628208211425)
```
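These `DescribeResult` objects are the output format of `scipy.stats.describe`; presumably each one was computed over the 10,000 sampled CLIP similarity scores of a dataset, e.g.:

```python
import numpy as np
from scipy.stats import describe

clip_scores = np.random.uniform(0.1, 0.45, size=10_000)  # placeholder for the real scores
print(describe(clip_scores))
```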
{ "source": "huggingface/smollm", "title": "vision/m4/sourcing/data_collection/outputs/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/sourcing/data_collection/outputs/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 700 }
# Creating image PMD - points of entry

Some subsets are handled on JZ, and some others are handled on the thomas-m4-pmd GCP VM.

To launch the ones on JZ (Conceptual Captions and Google WIT), from this folder, launch the slurm job:
```bash
sbatch jz_image_pmd.slurm --mail-type=ALL [email protected] # The last two arguments are optional
```
The job will expect you to have the `m4` repo under `$WORK/code`, and will save data under `$cnw_ALL_CCFRSCRATCH/general_pmd/image`.

You can then upload things to the bucket to save them:
```bash
gsutil -m rsync -r $cnw_ALL_CCFRSCRATCH/general_pmd/image/ gs://science-m4/general_pmd/image/
```

To launch the ones on `thomas-m4-pmd`, from this folder, run the following commands:
```bash
mkdir -p $HOME/general_pmd/image
# Optionally, activate your favorite conda env
python pmd.py
gsutil -m rsync -r $HOME/general_pmd/image/ gs://science-m4/general_pmd/image/
```

Once the creation is done, you can sanity-check how many images are missing using the script `check_none_ims.py`.

A lot of the subsets require manually downloading files and putting them in the right folder. Note that some files are automatically downloaded from `facebook/pmd` (https://huggingface.co/datasets/facebook/pmd); please make sure you have gone through the authorization wall so that the download can happen automatically.

|Subset|File location|Where to put and what to do|
|--|--|--|
|LocalizedNarrativesFlickr30K|http://shannon.cs.illinois.edu/DenotationGraph/data/index.html|Download "Flickr 30k images" and decompress the tar.gz into `~/.cache/m4/flickr30k`|

## Tarring the `downloaded_images` folders in `~/.cache/m4/`

```bash
find . -type d -maxdepth 1 -mindepth 1 -exec basename \{} ./ \; | parallel --verbose -j 16 --progress "tar -zcf {1}.tar.gz {1}"
```

## Helper scripts

If you want to know how many images were downloaded in a subfolder:
```bash
find {dataset_name} -type f -regextype egrep -regex "{dataset_name}/downloaded_images/[0-9a-f]{3}/[0-9a-f]{3}/[0-9a-f]{64}" | wc -l
```

If you want to remove all .lock files from a subfolder:
```bash
find {dataset_name} -type f -regextype egrep -regex "{dataset_name}/downloaded_images/[0-9a-f]{3}/[0-9a-f]{3}/[0-9a-f]{64}\.lock" | xargs -I {} rm {}
```

If you want to remove all tmp files from a subfolder:
```bash
find {dataset_name} -type f -regextype egrep -regex "{dataset_name}/downloaded_images/temp-.*" | xargs -I {} rm {}
```

If you want to tar and split a subfolder (typically before pushing to a bucket):
```bash
tar -cvf - {dataset_name} | split -b 1G -d -a 7 --additional-suffix=.tar - "{dataset_name}_part-"
```

Note: on macOS, `split` has to be replaced with `gsplit` (please install it via `brew install coreutils`)
{ "source": "huggingface/smollm", "title": "vision/m4/sourcing/pmd/scripts/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/sourcing/pmd/scripts/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 2738 }
## Locally

Run the `run_document_ngrams_extraction.sh` script.

## On JZ

Add to your `~/.bashrc` the following line (custom installation of `jq` and `parallel`):

```bash
export PATH=$PATH:/gpfswork/rech/six/commun/lib/jq-1.5/bin/:/gpfswork/rech/six/commun/lib/parallel/bin/
```

Then, run the slurm script (`sbatch pipe.slurm`).
{ "source": "huggingface/smollm", "title": "vision/m4/sourcing/processing/extracting_ngrams/README.md", "url": "https://github.com/huggingface/smollm/blob/main/vision/m4/sourcing/processing/extracting_ngrams/README.md", "date": "2024-11-04T13:01:54", "stars": 1945, "description": "Everything about the SmolLM2 and SmolVLM family of models ", "file_size": 337 }
# Code of Conduct ## Our Pledge In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. ## Our Standards Examples of behavior that contributes to creating a positive environment include: * Using welcoming and inclusive language * Being respectful of differing viewpoints and experiences * Gracefully accepting constructive criticism * Focusing on what is best for the community * Showing empathy towards other community members Examples of unacceptable behavior by participants include: * The use of sexualized language or imagery and unwelcome sexual attention or advances * Trolling, insulting/derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or electronic address, without explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Our Responsibilities Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. ## Scope This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. This Code of Conduct also applies outside the project spaces when there is a reasonable belief that an individual's behavior may have a negative impact on the project or its community. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at <[email protected]>. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html [homepage]: https://www.contributor-covenant.org For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq
{ "source": "facebookresearch/large_concept_model", "title": "CODE_OF_CONDUCT.md", "url": "https://github.com/facebookresearch/large_concept_model/blob/main/CODE_OF_CONDUCT.md", "date": "2024-12-12T21:59:57", "stars": 1938, "description": "Large Concept Models: Language modeling in a sentence representation space", "file_size": 3536 }
# Contributing to large_concept_model We want to make contributing to this project as easy and transparent as possible. ## Pull Requests We actively welcome your pull requests. 1. Fork the repo and create your branch from `main`. 2. If you've added code that should be tested, add tests. 3. If you've changed APIs, update the documentation. 4. Ensure the test suite passes. 5. Make sure your code lints. 6. If you haven't already, complete the Contributor License Agreement ("CLA"). ## Contributor License Agreement ("CLA") In order to accept your pull request, we need you to submit a CLA. You only need to do this once to work on any of Facebook's open source projects. Complete your CLA here: <https://code.facebook.com/cla> ## Issues We use GitHub issues to track public bugs. Please ensure your description is clear and has sufficient instructions to be able to reproduce the issue. Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe disclosure of security bugs. In those cases, please go through the process outlined on that page and do not file a public issue. ## License By contributing to large_concept_model, you agree that your contributions will be licensed under the LICENSE file in the root directory of this source tree.
{ "source": "facebookresearch/large_concept_model", "title": "CONTRIBUTING.md", "url": "https://github.com/facebookresearch/large_concept_model/blob/main/CONTRIBUTING.md", "date": "2024-12-12T21:59:57", "stars": 1938, "description": "Large Concept Models: Language modeling in a sentence representation space", "file_size": 1272 }
# Large Concept Models
## Language Modeling in a Sentence Representation Space

[[Blog]](https://ai.meta.com/blog/meta-fair-updates-agents-robustness-safety-architecture/)
[[Paper]](https://ai.meta.com/research/publications/large-concept-models-language-modeling-in-a-sentence-representation-space/)

This repository provides the official implementations and experiments for [Large Concept Models](https://ai.meta.com/research/publications/large-concept-models-language-modeling-in-a-sentence-representation-space/) (**LCM**).

<p align="center">
<img src="space.svg" width="50%">
</p>

The LCM operates on an explicit higher-level semantic representation, which we name a "concept". Concepts are language- and modality-agnostic and represent a higher-level idea. In this work, a concept corresponds to a sentence, and we use the [SONAR](https://github.com/facebookresearch/SONAR) embedding space, which supports up to 200 languages in text and 57 languages in speech. See the list of supported languages [here](https://github.com/facebookresearch/SONAR?tab=readme-ov-file#supported-languages-and-download-links).

## Approach

<p align="center">
<img src="lcm.svg" width="70%">
</p>

The LCM is a sequence-to-sequence model in the concept space, trained to perform auto-regressive sentence prediction. We explore multiple approaches:
- MSE regression (`base_lcm` in this code).
- Variants of diffusion-based generation (we include `two_tower_diffusion_lcm` in this release).
- Models operating in a quantized SONAR space (coming soon).

These explorations are performed using 1.6B parameter models and training data on the order of 1.3T tokens. We include in this repository recipes to reproduce the training and finetuning of the 1.6B MSE LCM and the Two-tower diffusion LCM. See instructions [below](#usage).

## Installing

### Using UV

The LCM repository relies on fairseq2. If you have `uv` installed on your system, you can install a virtual environment with all the necessary packages by running the following commands:

```bash
uv sync --extra cpu --extra eval --extra data
```

You can also use `uv run` to run the demo commands with the correct environment.

Note that we only provide requirements for `cpu` dependencies; if you want GPU support, you will have to choose the variants of torch and fairseq2 that work for your system. For example, for torch 2.5.1 with CUDA 12.1, you would do something like:

```
uv pip install torch==2.5.1 --extra-index-url https://download.pytorch.org/whl/cu121 --upgrade
uv pip install fairseq2==v0.3.0rc1 --pre --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/rc/pt2.5.1/cu121 --upgrade
```

Check [fairseq2 variants](https://github.com/facebookresearch/fairseq2?tab=readme-ov-file#variants) for possible variants. Note that LCM currently relies on the release candidate fairseq2 0.3.0rc1.

### Using pip

To install with pip, the commands are very similar, but you will have to manage your own environment and make sure to install fairseq2 manually first. For instance, for a `cpu` install:

```bash
pip install --upgrade pip
pip install fairseq2==v0.3.0rc1 --pre --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/rc/pt2.5.1/cpu
pip install -e ".[data,eval]"
```

If [fairseq2](https://github.com/facebookresearch/fairseq2) does not provide a build for your machine, check the readme of that project to build it locally.

## Usage

> [!NOTE]
> If using `uv`, prefix all commands with `uv run` to use the environment created by default in `.venv`, e.g.,
> `uv run torchrun --standalone`.
> Alternatively, you can activate the environment once and for all with `source .venv/bin/activate`.

### Preparing data

The LCM can be trained and evaluated using textual data split into sentences and embedded with [SONAR](https://github.com/facebookresearch/SONAR/). We provide a sample processing pipeline that can be used to prepare such training data; you can run it with:

```
uv run --extra data scripts/prepare_wikipedia.py /output/dir/for/the/data
```

This pipeline shows how to get a dataset from huggingface and process it with SONAR and [SaT](https://arxiv.org/abs/2406.16678). Check out the file for more details on processing your own data.

While the script provides an example pulling data from huggingface, we also provide [APIs](https://github.com/facebookresearch/stopes/tree/main/stopes/utils/sharding) to process jsonl, parquet and CSV.

### Datacards

The trainer described below relies on datacards configuring the datasets. These datacards are yaml files with pointers to the dataset files (locally or on s3) and information on their schema. We provide some sample datacards in [`lcm/datacards/datacards.yaml`](https://github.com/facebookresearch/large_concept_model/blob/main/lcm/datacards/datacards.yaml). Once you have processed some data, you can update the datacards with your paths.

#### Fitting a normalizer

To fit a new embedding space normalizer on a given weighted mixture of datasets, one can use the following command:

```bash
python scripts/fit_embedding_normalizer.py --ds dataset1:4 dataset2:1 dataset3:10 --save_path "path/to/new/normalizer.pt" --max_nb_samples 1000000
```

Here, `dataset1`, `dataset2`, `dataset3` are the names of datasets declared in the datacards as shown above, and `(4, 1, 10)` their respective relative weights. The resulting normalizer can then be declared as a model as shown in `lcm/cards/sonar_normalizer.yaml` and referenced in all model training configs.

### Pre-training models

#### Base MSE LCM

To train an MSE LCM, we will use one of the following commands:

**Option 1.** Training with SLURM using [submitit](https://github.com/facebookincubator/submitit) via [stopes](https://github.com/facebookresearch/stopes/tree/main)'s launcher:

```sh
python -m lcm.train \
    +pretrain=mse \
    ++trainer.output_dir="checkpoints/mse_lcm" \
    ++trainer.experiment_name=training_mse_lcm
```

With this command, we will submit a slurm job named `training_mse_lcm` with the recipe's requirements, in this case:

```yaml
requirements:
  nodes: 4
  tasks_per_node: 8
  gpus_per_node: 8
  cpus_per_task: 32
  mem_gb: 0
  timeout_min: 10000
```

You can override the job's requirements, like the timeout limit and the launcher's slurm partition, with:

```sh
python -m lcm.train \
    +pretrain=mse \
    ++trainer.output_dir="checkpoints/mse_lcm" \
    ++trainer.experiment_name=training_mse_lcm \
    ++trainer.requirements.timeout_min=100 \
    ++trainer.requirements.cpus_per_task=8 \
    ++launcher.partition=$partition_name
```

**Option 2.** Training locally with `torchrun` (e.g. using only 2 GPUs) with a smaller batch size (overriding `++trainer.data_loading_config.max_tokens=1000`):

```sh
CUDA_VISIBLE_DEVICES=0,1 torchrun --standalone --nnodes=1 --nproc-per-node=2 \
    -m lcm.train launcher=standalone \
    +pretrain=mse \
    ++trainer.data_loading_config.max_tokens=1000 \
    ++trainer.output_dir="checkpoints/mse_lcm" \
    +trainer.use_submitit=false
```

> [!IMPORTANT]
> Since we're changing the number of GPUs required by the recipe, this will not reproduce the experimental setup of the paper.
The checkpoints directory `checkpoints/mse_lcm` will be structured as: ``` . ├── checkpoints │   ├── step_2000 │   ├── ... │   └── step_250000 ├── config_logs ├── executor_logs ├── model_card.yaml ├── tb # tensorboard logs └── wandb # W&B logs ``` Note that W&B logging is skipped unless `wandb` is available. You can install `wandb` with `uv pip install wandb`. W&B arguments can be changed by overriding Hydra config values in the recipe: ```sh ++trainer.wandb_project=$project_name ++trainer.wandb_run_name=$run_name ``` #### Two-tower diffusion LCM Similar to the base MSE LCM we can submit a training job following the recipe in [./recipes/train/pretrain/two_tower.yaml](./recipes/train/pretrain/two_tower.yaml) via: ```sh python -m lcm.train \ +pretrain=two_tower \ ++trainer.output_dir="checkpoints/two_tower_lcm" \ ++trainer.experiment_name=training_two_tower_lcm \ ``` > [!TIP] > To understand the different ingredients of training recipes, check [this README](./recipes/train/README.md). ### Finetuning models To finetune the previously pre-trained two-tower diffusion LCM on supervised data, follow these steps: **Step 1.** Register the pre-trained checkpoint as a fairseq2 asset. You can finetune the final checkpoint with the card `checkpoints/two_tower_lcm/model_card.yaml` or any checkpoint after a specific number of training steps, e.g., `checkpoints/two_tower_lcm/checkpoints/step_2000/model_card.yaml`. To register the selected checkpoint, copy the automatically created yaml file to `./lcm/cards/mycards.yaml` and rename the model to replace the default `on_the_fly_lcm`. `./lcm/cards/mycards.yaml` will look like: ```yaml __source__: inproc checkpoint: file://path_to/large_concept_model/checkpoints/two_tower_lcm/checkpoints/step_2000/model.pt model_arch: two_tower_diffusion_lcm_1_6B model_family: two_tower_diffusion_lcm name: my_pretrained_two_tower ``` For more on how to manage fairseq2 assets, see [documentation](https://facebookresearch.github.io/fairseq2/nightly/basics/assets.html). **Step 2.** Launch a finetuning job pointing to the model to finetune, in this instance `my_pretrained_two_tower`: ```sh CUDA_VISIBLE_DEVICES=0,1 torchrun --standalone --nnodes=1 --nproc-per-node=2 \ -m lcm.train launcher=standalone \ +finetune=two_tower \ ++trainer.output_dir="checkpoints/finetune_two_tower_lcm" \ ++trainer.data_loading_config.max_tokens=1000 \ +trainer.use_submitit=false \ ++trainer.model_config_or_name=my_pretrained_two_tower ``` or ```sh python -m lcm.train \ +finetune=two_tower \ ++trainer.output_dir="checkpoints/finetune_two_tower_lcm" \ ++trainer.experiment_name=finetune_two_tower_lcm \ ++trainer.model_config_or_name=my_pretrained_two_tower ``` Similarly, to finetune an MSE LCM, follow the same instructions for registering a pre-trained checkpoint and submit a finetuning job with the appropriate recipe ([./recipes/train/finetune/mse.yaml](./recipes/train/finetune/mse.yaml)) via: ```sh python -m lcm.train \ +finetune=mse \ ++trainer.output_dir="checkpoints/finetune_mse_lcm" \ ++trainer.experiment_name=finetune_mse_lcm \ ++trainer.model_config_or_name=my_pretrained_mse_lcm ``` ### Evaluating models > [!NOTE] > For advanced evaluation (benchmarking different tasks, comparing results with LLMs, etc.) , check [the evaluation documentation](./examples/evaluation/README.md). 
**Step 0.** Download the NLTK data required for evaluating ROUGE:

```py
python -m nltk.downloader punkt_tab
```

**Step 1.** Generate and score outputs of a model either by pointing to its `model_card` yaml file or after registering it as a fairseq2 asset (the same way we registered `my_pretrained_two_tower`):

```sh
model_card=./checkpoints/finetune_two_tower_lcm/checkpoints/step_1000/model_card.yaml
OUTPUT_DIR=evaluation_outputs/two_tower

torchrun --standalone --nnodes=1 --nproc-per-node=1 -m lcm.evaluation \
  --predictor two_tower_diffusion_lcm \
  --show_progress true \
  --data_loading.max_samples 100 \
  --model_card ${model_card} \
  --launcher standalone \
  --dataset.source_suffix_text '[MODEL]:' \
  --tasks finetuning_data_lcm.validation \
  --task_args '{"max_gen_len": 10, "eos_config": {"text": "End of text."}}' \
  --data_loading.batch_size 4 --generator_batch_size 4 \
  --dump_dir ${OUTPUT_DIR} \
  --inference_timesteps 40 \
  --initial_noise_scale 0.6 \
  --guidance_scale 3 \
  --guidance_rescale 0.7
```

where in the example we are evaluating 100 samples only (`--data_loading.max_samples 100`) and limiting the model output length to 10 sentences (`--task_args '{"max_gen_len": 10}'`).

Outputs dumped in `./evaluation_outputs/two_tower` will be structured as:
```
.
├── metadata.jsonl
├── metrics.eval.jsonl
├── raw_results
├── results
└── tb
```
where `metrics.eval.jsonl` contains corpus-level scores.

To evaluate an MSE LCM, we use the associated predictor (`base_lcm`) and evaluate with:
```sh
model_card=./checkpoints/finetune_mse_lcm/checkpoints/step_1000/model_card.yaml
OUTPUT_DIR=evaluation_outputs/mse_lcm

torchrun --standalone --nnodes=1 --nproc-per-node=1 -m lcm.evaluation \
  --predictor base_lcm --sample_latent_variable False \
  --show_progress true \
  --data_loading.max_samples 100 \
  --model_card ${model_card} \
  --launcher standalone \
  --dataset.source_suffix_text '[MODEL]:' \
  --tasks finetuning_data_lcm.validation \
  --task_args '{"max_gen_len": 10, "eos_config": {"text": "End of text."}}' \
  --data_loading.batch_size 4 --generator_batch_size 4 \
  --dump_dir ${OUTPUT_DIR}
```

Note that in this example, we only show how to evaluate the LCM on the same finetuning dataset (validation split). To evaluate on a downstream task and compare results with LLMs, refer to the [Evaluation documentation](./examples/evaluation/README.md).

## Contributing

See the [CONTRIBUTING](CONTRIBUTING.md) file for how to help out.

## Citation

If you use this codebase, please cite:
```
@article{lcm2024,
  author = {{LCM team}, Lo\"{i}c Barrault, Paul-Ambroise Duquenne, Maha Elbayad, Artyom Kozhevnikov, Belen Alastruey, Pierre Andrews, Mariano Coria, Guillaume Couairon, Marta R. Costa-juss\`{a}, David Dale, Hady Elsahar, Kevin Heffernan, Jo\~{a}o Maria Janeiro, Tuan Tran, Christophe Ropers, Eduardo Sánchez, Robin San Roman, Alexandre Mourachko, Safiyyah Saleem, Holger Schwenk},
  title = {{Large Concept Models}: Language Modeling in a Sentence Representation Space},
  publisher = {arXiv},
  year = {2024},
  url = {https://arxiv.org/abs/2412.08821},
}
```

## License

This code is released under the MIT license (see [LICENSE](./LICENSE)).
{ "source": "facebookresearch/large_concept_model", "title": "README.md", "url": "https://github.com/facebookresearch/large_concept_model/blob/main/README.md", "date": "2024-12-12T21:59:57", "stars": 1938, "description": "Large Concept Models: Language modeling in a sentence representation space", "file_size": 13781 }
## Why?

Why do we need to implement this feature? What is the use case?

## How?

Document the technical decisions you made. If some parts are WIP, please make that explicit here.

## Test plan

How did you test your changes? Include the full command line to help other people reproduce it if needed.
{ "source": "facebookresearch/large_concept_model", "title": ".github/pull_request_template.md", "url": "https://github.com/facebookresearch/large_concept_model/blob/main/.github/pull_request_template.md", "date": "2024-12-12T21:59:57", "stars": 1938, "description": "Large Concept Models: Language modeling in a sentence representation space", "file_size": 294 }
# Evaluation

After you have trained an LCM, the checkpoint will be saved in a folder under the name `model.pt`, together with the model card under the name `model_card.yaml`.

We also provide a library to evaluate LCMs and LLMs. Using this library brings many benefits: you can reproduce the experiments done in the paper, you can inspect the results in a unified way, and you can scale up the experiments for very large datasets on a SLURM cluster.

This document shows how to evaluate the model on different downstream tasks using the LCM eval library.

## Step 1: Prepare the data

Since an LCM expects input data at the sentence level, we need to preprocess the evaluation datasets accordingly. This includes parsing the raw content and splitting texts into sentences, then embedding them into vectors using a Sonar encoder.

The example below shows how we prepare the data for CNN Dailymail. We load the dataset from Huggingface using the [`datasets` API](https://huggingface.co/docs/datasets/en/index). The sentence splitting is done using [wtpsplit](https://github.com/segment-any-text/wtpsplit). Make sure to specify `--extra data` when installing the project to include these libraries. All processing logic is implemented in the file `prepare_evaluation_data.py`, as described below.

### Step 1.1: Process the split

Next, we download and parse the content (source text and summaries), saving the different splits in JSON format:

```shell
uv run --extra data prepare_evaluation_data.py prepare_data \
  --dataset_name=cnn_dailymail \
  --output_dir=jsonl_dataset \
  --source_text_column=article \
  --target_text_column=highlights \
  --version=3.0.0 \
  --prompt_prefix="Summarize the following news to a concise list of highlights.\n[Text Start]:\n" \
  --prompt_suffix="\n[Text End]"
```

Explanation: In the above script, `cnn_dailymail` and `3.0.0` are the name and configuration of the dataset as available in HuggingFace `datasets`, and `article` and `highlights` are the source and summary columns. The arguments `prompt_prefix` and `prompt_suffix` are optional; if specified, they will be prepended and appended to each source text to form the complete prompt. These arguments are useful if you want to embed the prompts into the dataset and have them processed all at once together with the text. Alternatively, you can specify them at a later phase, when you evaluate the model (in which case the model will process the prompts on the fly).

> **_NOTE:_** When `prompt_prefix` or `prompt_suffix` are specified, the dataset schema will change, i.e. the columns are renamed to "prompt" for the input and "answer" for the output. This is to indicate that we are handling the "processed" dataset and not the original one.

The output will be stored in separate files `[split].jsonl` under the directory `output_dir`.

### Step 1.2: Sentence splitting and embedding

To perform sentence splitting and Sonar embedding for each split, run the following command:

```shell
uv run --extra data prepare_evaluation_data.py embed \
  --input_path=jsonl_dataset/cnn_dailymail/test.jsonl \
  --source_text_column=prompt \
  --target_text_column=answer \
  --output_dir=parquet_dataset/cnn_dailymail \
  --lang=eng_Latn \
  --mode=local \
  --log_dir=/tmp/logs/embed_cnndm
```

Depending on your machine, this might take some time. Alternatively, you can run it on your SLURM cluster with the arguments `--mode=slurm --shards=NO_OF_PARALLEL_JOBS`. This requires changing your SLURM config accordingly. We use [submitit](https://github.com/facebookincubator/submitit) to configure the job launcher.
Here is the relevant excerpt from the script:

```python
launcher = Launcher(
    cache=None,
    config_dump_dir=Path(log_dir) / "conf",
    log_folder=Path(log_dir) / "logs",
    cluster=mode,
    update_parameters={"partition": "your_slurm_partition"},
)

_ = await launcher.schedule(inst_stopes_module)
```

## Step 2: Choose the predictor for evaluation

To run the evaluation, we first need to map the model to a `Predictor`, which is an object that streamlines a number of steps: loading the model, reading the prompts, performing the inference, decoding the outputs according to a given user setting, and finally formatting the text into a user-friendly format.

Currently, the list of supported model families and their predictors is below. All predictors are found in "lcm/evaluation/predictors" and are registered in `lcm.evaluation.predictors._PREDICTOR_CONFIG_MAP`.

| Predictor | Model family | Model identifier |
|-------------------------|------------------------|------------------------------------------------------------|
| huggingface | AutoModel transformers | `model_name`, `revision`, `model_class`, `tokenizer_class` |
| llama3 | Llama 3.x | `model_name` |
| gemma | Gemma | `model_name` |
| base_lcm | Base LCM | `model_card` |
| two_tower_diffusion_lcm | Two-tower diffusion LCM| `model_card` |

Next, we specify how the decoder generates texts via the generation options. For LLMs, the options are parameters found in [transformers.GenerationConfig](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationConfig), and we expose the most popular ones in the predictors: `repetition_penalty`, `encoder_repetition_penalty`, `encoder_no_repeat_ngram_size`, `no_repeat_ngram_size`.

For LCMs, the options are found in [LCMGeneratorOptions](https://github.com/facebookresearch/large_concept_model/blob/main/lcm/inference/lcm/generator.py#L31) (for the Base LCM) or [DiffusionLCMGeneratorOptions](https://github.com/facebookresearch/large_concept_model/blob/main/lcm/inference/two_tower_diffusion_lcm/generator.py#L31) (for the Two-tower diffusion LCM). These options only specify how to generate output embeddings using the diffusion process. We also want to specify the Sonar decoder options, which dictate how the embeddings are decoded into texts, using the parameters in [SonarDecoderConfig](https://github.com/facebookresearch/large_concept_model/blob/main/lcm/datasets/configs.py#L69).

## Step 3: Choose a downstream task and run the evaluation

To run a downstream task, specify the task name and configuration, as well as its parameters. We provide example tasks that were used in the paper:

### LLM evaluation tasks:

| Task name | Task configuration | Explanation |
|-------------------------|----------------------------------|-------------------------------------------------------------------------------------------------------------|
| cnn_dailymail | cnn_dailymail_{form}llm.{split} | {form} can be empty or "inverse_" for summary expansion, {split} can be "test", "validation" or "train" |
| xsum | xsum_{form}llm.{split} | {form} can be empty or "inverse_" for summary expansion, {split} can be "test", "validation" or "train" |
| xlsum_llm | xlsum_llm.{lang}.{split} | {lang} refers to one value in the [language list](../../lcm/evaluation/tasks/xlsum.py), {split} can be "test", "validation" or "train" |

The evaluation library provides a handy CLI via the `lcm.evaluation` entry point.
Example command for evaluating the Meta Llama 3.1 8B Instruct model:

```shell
uv run torchrun --standalone --nnodes=1 --nproc-per-node=1 -m lcm.evaluation \
  --predictor llama3 \
  --model_name meta-llama/Llama-3.1-8B-Instruct \
  --generator_batch_size 16 \
  --tasks cnn_dailymail_llm.test \
  --task_args '{"max_gen_len": 200}' \
  --dataset_dir jsonl_dataset/cnn_dailymail \
  --data_loading.batch_size 16 \
  --dataset.source_text_column prompt \
  --dataset.target_text_column answer \
  --dump_dir output_results
```

In the example above, we load the model "meta-llama/Llama-3.1-8B-Instruct" as [specified](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) in HuggingFace, evaluate it on the CNN Dailymail dataset that we processed using the `prepare_evaluation_data.py` script as in Step 1.1, and store the results in the folder specified via `dump_dir`. The argument `dataset_dir` refers to the value of the argument `output_dir` in Step 1.1.

In some cases, the model requires an authentication token to evaluate. You can obtain one in HuggingFace (see [User Access Tokens](https://huggingface.co/docs/hub/en/security-tokens)), then add the parameter `--use_auth_token [YOUR TOKEN]` to the CLI command.

In the above example, we need to provide the `source_text_column` and `target_text_column` parameters, because in Step 1 we injected the prompts directly into the dataset and renamed the columns accordingly (to differentiate from "original" datasets). You can also skip this part and customize the prompt for each evaluation run. To do this, instead of specifying the `prompt_prefix` and `prompt_suffix` when preparing the data (as shown in the example in Section 1.1), we specify `dataset.source_prefix_text` and `dataset.source_suffix_text` during the evaluation run:

```shell
uv run torchrun --standalone --nnodes=1 --nproc-per-node=1 -m lcm.evaluation \
  --predictor llama3 \
  --model_name meta-llama/Llama-3.1-8B-Instruct \
  --generator_batch_size 16 \
  --tasks cnn_dailymail_llm.test \
  --task_args '{"max_gen_len": 200}' \
  --dataset_dir jsonl_dataset/cnn_dailymail \
  --data_loading.batch_size 16 \
  --dataset.source_prefix_text "Summarize the following news to a concise list of highlights.\n[Text Start]:\n" \
  --dataset.source_suffix_text "\n[Text End]" \
  --dump_dir output_results
```

> **_NOTE:_** The parameters `source_text_column` and `target_text_column` are now missing, and the new parameters `source_prefix_text` and `source_suffix_text` appear instead, because we do not modify the column schema. Therefore, the original text columns ("article", "highlights") are kept and not specified in the CLI.

It is also possible to provide the prompt from a YAML file. This is handy when you have to engineer the prompts carefully and have very long, detailed text. We provide one example prompt in the file [instruction.yaml](./instruction.yaml). The example command is:

```shell
uv run torchrun --standalone --nnodes=1 --nproc-per-node=1 -m lcm.evaluation \
  --predictor llama3 \
  --model_name meta-llama/Llama-3.1-8B-Instruct \
  --generator_batch_size 16 \
  --tasks cnn_dailymail_llm.test \
  --task_args '{"max_gen_len": 200}' \
  --dataset_dir jsonl_dataset/cnn_dailymail \
  --data_loading.batch_size 16 \
  --prompt_file instruction.yaml \
  --dump_dir output_results
```

### LCM evaluation tasks:

In contrast to LLMs, LCMs expect datasets to be preprocessed in Parquet format, with inputs being (Sonar) sentence embeddings.
To evaluate an LCM on a downstream task, point to the directory containing the parquet files, as specified in Step 1, and run (example for the Two-tower diffusion LCM):

```shell
uv run torchrun --standalone --nnodes=1 --nproc-per-node=1 -m lcm.evaluation \
  --predictor two_tower_diffusion_lcm \
  --model_card path/to/the/model_card.yaml \
  --generator_batch_size 16 \
  --tasks lcm_generation \
  --task_args '{"max_gen_len": 200}' \
  --dataset.parquet_path parquet_dataset/cnn_dailymail \
  --dataset.source_column prompt_sentences_sonar_emb \
  --dataset.source_text_column prompt_sentences \
  --dataset.target_column answer_sentences_sonar_emb \
  --dataset.target_text_column answer_sentences \
  --data_loading.batch_size 16 \
  --dump_dir output_results
```

Similar to LLM evaluation, it is possible to specify the prompt prefix and suffix ad hoc. This text will be sentence-split and embedded using the standard Sonar encoder.

## Common CLI arguments <a id="param_list"></a>

| Argument | Description |
|----------|----------|
| `predictor` | The wrapper of the model to be evaluated. See Step 2 for more details. |
| `data_loading.max_samples` | Evaluate on at most _k_ examples in the test data. Useful for debugging. |
| `data_loading.batch_size` | Load and evaluate data in batches. By default `batch_size=10`. |
| `dataset_dir` | The directory containing the different JSONL files processed in Step 1. Only used in LLM evaluation. |
| `dataset.parquet_path` | The parquet path containing the different Parquet files processed in Step 1. Only used in LCM evaluation. |
| `dataset.source_column` | The column in the data that refers to the input embedding. Not applicable when evaluating LLMs. |
| `dataset.source_text_column` | The column in the data that refers to the input text. |
| `dataset.target_column` | The column in the data that refers to the ground-truth embedding. Not applicable when evaluating LLMs. |
| `dataset.target_text_column` | The column in the data that refers to the ground-truth text. |
| `dataset.source_text_prefix` | The text prepended to each input text to form the prompt for the model. |
| `dataset.source_text_suffix` | The text appended after each input text to form the prompt for the model. |
| `task_args` | The JSON-formatted string that represents the task arguments. See the [task param list](#task_param_list) below. |
| `dump_dir` | The directory containing the output of the eval run. If successful, there should be a file `metrics.eval.jsonl` that contains the metric results, a directory `results` that captures the verbose command line used with the detailed output scores, and a directory `raw_results` that shows the model output for each individual sample, together with the per-sample metric results. |
| `task` | Task configuration. See Step 3 for examples. |
| `launcher` | Whether the CLI should be run locally or on a SLURM cluster. Accepted values are `local`, `submitit` (SLURM) or `standalone` (debug mode). |
| `job_args` | Parameters used when launching eval in SLURM. See [below](#slurm-eval) for more details. |

*Table: List of common arguments in the Evaluation CLI.*

_Note_: In the above examples, free arguments such as `generator_batch_size`, `temperature`, etc. are generator options. They depend on the specific predictor, as explained in Step 2. Giving a wrong option will trigger an error in the CLI.

Outputs dumped in the directory specified by `dump_dir` will be structured as:

```
.
├── metadata.jsonl
├── metrics.eval.jsonl
├── raw_results
├── results
└── tb
```

where `metrics.eval.jsonl` contains corpus-level scores.

### Task arguments <a id="task_param_list"></a>

In both LLM and LCM evaluation, we can configure how inputs and outputs are processed:

- `max_prompt_len`: The model context size, i.e. the maximum number of tokens (in LLM) or sentences (in LCM) that the model can accept.
- `max_gen_len`: The maximum number of tokens (in LLM) or sentences (in LCM) the model should generate. Note that some model generators have their own stopping criteria, so the actual generated text can be much shorter than this value.
- `min_gen_len`: The minimum number of tokens (in LLM) or sentences (in LCM) the model should generate.
- `max_gen_len_ratio`: The maximum number of tokens (in LLM) or sentences (in LCM) the model should generate _relative_ to the input length. For example, if the source document is 5K long and `max_gen_len_ratio=0.2`, we are asking the model to generate a 1K-long output (again, due to the generators' inner behaviour, the output can be much shorter).

## Evaluate big datasets <a id="slurm-eval"></a>

The above command is sufficient for most cases, where you load the model onto one GPU and evaluate the whole dataset locally, i.e. the dataset and everything else is loaded into memory. For bigger datasets, or for models that do not run easily on one GPU or are too slow to evaluate, we can submit the evaluation job to a SLURM cluster by choosing `launcher=submitit`:

```shell
slurm_partition=YOUR_SLURM_PARTITION
shards=NUMBER_OF_SLURM_NODES
timeout_min=JOB_TIMEOUT_IN_MINUTES

uv run -m lcm.evaluation \
  --predictor two_tower_diffusion_lcm \
  --model_card path/to/the/model_card.yaml \
  --generator_batch_size 16 \
  --tasks lcm_generation \
  --task_args '{"max_gen_len": 200}' \
  --dataset.parquet_path parquet_dataset/cnn_dailymail \
  --data_loading.batch_size 16 \
  --dump_dir output_results \
  --launcher submitit \
  --job_args '{"launcher.cache": "null", "launcher.partition": "'${slurm_partition}'", "launcher.qos": "'${qos}'", "nshards": '${shards}', "requirements": {"gpus_per_node": 1, "timeout_min": '${timeout_min}'}}'
```

The parameters in `job_args` are submitit parameters. Please refer to https://github.com/facebookincubator/submitit for more comprehensive documentation and a full parameter list.
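The corpus-level scores in `metrics.eval.jsonl` can then be inspected with a few lines of Python (a sketch, assuming standard JSONL with one record per line; the exact fields depend on the task and metrics):

```python
import json

# Hypothetical path: the dump_dir used in the commands above.
with open("output_results/metrics.eval.jsonl") as f:
    for line in f:
        print(json.dumps(json.loads(line), indent=2))
```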
{ "source": "facebookresearch/large_concept_model", "title": "examples/evaluation/README.md", "url": "https://github.com/facebookresearch/large_concept_model/blob/main/examples/evaluation/README.md", "date": "2024-12-12T21:59:57", "stars": 1938, "description": "Large Concept Models: Language modeling in a sentence representation space", "file_size": 17083 }
# Main ingredients of training recipes

### Training and validation data

```yaml
training_data:
  - name: "<corpus_name>=<split>:<weight>"
    source_prefix_text: "Beginning of source."  # concept added at the beginning of source
    source_suffix_text: "End of source."  # concept added at the end of source
    target_prefix_text: "Beginning of target."  # concept added at the beginning of target (supervised data only)
    target_suffix_text: "End of target."  # concept added at the end of target (supervised data only)
  - name: "<corpus2_name>=<split>:<weight2>"
```

### Data loading config

```yaml
data_loading_config:
  max_tokens: 7168  # Exclusive with batch_size
  batch_size: none  # Exclusive with max_tokens
  len_to_wrap_long_seq: 128  # Sequences longer than this will be wrapped.
  packing: true  # if True, documents in the batch will be packed.
```

The batch content can be defined in two ways:
- With `max_tokens` set, the effective batch size is approximately `max_tokens / len_to_wrap_long_seq` (e.g., 7168 / 128 ≈ 56 wrapped sequences per batch).
- With `batch_size` set, the effective token budget is approximately `batch_size × len_to_wrap_long_seq`.

Note that `len_to_wrap_long_seq` has to be smaller than the model's `max_seq_len` defined in the architecture (e.g. [`two_tower_diffusion_lcm_1_6B`](../../lcm/models/two_tower_diffusion_lcm/archs.py#L36)).

To filter out long samples without wrapping, you can add `filters` to each dataset config to filter based on the length of the document's list of sentences (`text_sentences`):

```yaml
  - name: "<corpus_name>=<split>:<weight>"
    source_prefix_text: "Beginning of source."
    filters: 'pa.compute.less(pa.compute.list_value_length(pa.dataset.field("text_sentences")), 128)'
```

### Checkpointing config

```yaml
checkpoint_every_n_steps: 2_000  # save a training checkpoint every N steps
keep_last_n_checkpoints: 2  # delete all but the last N non-consolidated checkpoints
save_model_every_n_steps: 10_000  # consolidate the model every N steps (valid if using FSDP)
preserve_consolidated_models: True  # preserve the consolidated checkpoints
```
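As an illustration of the `max_tokens`/`batch_size` exclusivity described in the data loading section above, here is a sketch of the same config driven by an explicit batch size instead of a token budget (the values are illustrative):

```yaml
data_loading_config:
  max_tokens: none          # unset: batch_size now drives the batch content
  batch_size: 56            # ~7168 tokens / 128 tokens per wrapped sequence
  len_to_wrap_long_seq: 128
  packing: true
```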
{ "source": "facebookresearch/large_concept_model", "title": "recipes/train/README.md", "url": "https://github.com/facebookresearch/large_concept_model/blob/main/recipes/train/README.md", "date": "2024-12-12T21:59:57", "stars": 1938, "description": "Large Concept Models: Language modeling in a sentence representation space", "file_size": 2009 }
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2022 Hugging Face SAS. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
{ "source": "huggingface/chat-macOS", "title": "LICENSE.md", "url": "https://github.com/huggingface/chat-macOS/blob/main/LICENSE.md", "date": "2024-09-24T02:50:17", "stars": 1928, "description": "Making the community's best AI chat models available to everyone.", "file_size": 11313 }
<p align="center" style="margin-bottom: 0;">
    <img src="assets/banner.png" alt="HuggingChat macOS Banner">
</p>
<h1 align="center" style="margin-top: 0;">HuggingChat macOS</h1>

![Static Badge](https://img.shields.io/badge/License-Apache-orange) [![swift-version](https://img.shields.io/badge/Swift-6.0-brightgreen.svg)](https://github.com/apple/swift) [![platform](https://img.shields.io/badge/Platform-macOS_14.0-blue.svg)](https://github.com/apple/swift)

### About

HuggingChat macOS is a native chat interface designed specifically for macOS users, leveraging the power of open-source language models. It brings the capabilities of advanced AI conversation right to your desktop, offering a seamless and intuitive experience.

### Demo

https://github.com/user-attachments/assets/dacc87b2-2242-4ef5-84d5-9f9aae50c453

### Installation

1. Go to the [Releases](https://github.com/huggingface/chat-macOS/releases) section of this repository.
2. Download the latest `HuggingChat-macOS.zip` file.
3. Unzip the downloaded file.
4. Drag the `HuggingChat.app` to your Applications folder.

#### Homebrew

HuggingChat is also available via Homebrew. Simply run:
```bash
brew install --cask huggingchat
```

That's it! You can now launch HuggingChat from your Applications folder or using the dedicated keyboard shortcut: `⌘ + Shift + Return`.

#### VSCode Integration

In order to use HuggingChat in VSCode, you'll need to install the [HuggingChat Extension](https://github.com/cyrilzakka/huggingchat-helper). After downloading it, add it to VSCode by navigating to the Extensions tab and selecting "Install from VSIX". Choose the downloaded file and restart VSCode. HuggingChat can now use context from your code editor to provide more accurate responses.

### Development Setup

#### Prerequisites

- Xcode 16.0 or later
- macOS 14.0 or later

#### Building the Project

1. Clone the repository:
```bash
git clone https://github.com/huggingface/chat-macOS.git
cd chat-macOS
```
2. Open `HuggingChat-macOS.xcodeproj` in Xcode
3. Select your development team in the project settings if you plan to run on a physical device
4. Build and run the project (⌘ + R)

### Making Contributions

#### 1. Choose or Create an Issue

- Check existing [issues](https://github.com/huggingface/chat-macOS/issues) for something you'd like to work on
- Create a new issue if you have a bug fix or feature proposal
- Comment on the issue to let maintainers know you're working on it

#### 2. Fork and Branch

1. Fork the repository to your GitHub account
2. Create a new branch for your work:
```bash
git checkout -b feature/your-feature-name
# or
git checkout -b fix/your-bug-fix
```

#### 3. Code Style Guidelines

- Follow Apple's [Swift API Design Guidelines](https://swift.org/documentation/api-design-guidelines/)
- Use SwiftLint rules defined in the project
- Maintain consistent spacing and formatting
- Write meaningful commit messages
- Add comments for complex logic

### Feedback

We value your input! If you have any suggestions, encounter issues, or want to share your experience, please feel free to reach out:

1. **GitHub Issues**: For bug reports or feature requests, please create an issue in this repository.
   - Provide a clear title and description of your feedback
   - Include steps to reproduce the issue (for bugs) or a detailed explanation (for feature requests)
   - Include the app version number and macOS version
   - Submit the issue

Your feedback helps improve HuggingChat macOS for everyone. Thank you for your support!
{ "source": "huggingface/chat-macOS", "title": "README.md", "url": "https://github.com/huggingface/chat-macOS/blob/main/README.md", "date": "2024-09-24T02:50:17", "stars": 1928, "description": "Making the community's best AI chat models available to everyone.", "file_size": 3562 }
Swift Build
=======

Swift Build is a high-level build system based on [llbuild](https://github.com/swiftlang/swift-llbuild) with great support for building Swift. It is used by Xcode to build Xcode projects and Swift packages, and by Swift Playground. It can also be used as the Swift Package Manager build system in preview form when passing `--build-system swiftbuild`.

Usage
-----

### With SwiftPM

When building SwiftPM from sources which include Swift Build integration, passing `--build-system swiftbuild` will enable the new build system. This functionality is not currently available in nightly toolchains.

### With Xcode

Changes to swift-build can also be tested in Xcode using the `launch-xcode` command plugin provided by the package. Run `swift package --disable-sandbox launch-xcode` from your checkout of swift-build to launch a copy of the currently `xcode-select`ed Xcode.app configured to use your modified copy of the build system service. This workflow is currently supported when using Xcode 16.2.

### With xcodebuild

Changes to swift-build can also be tested in xcodebuild using the `run-xcodebuild` command plugin provided by the package. Run `swift package --disable-sandbox run-xcodebuild` from your checkout of swift-build to run xcodebuild from the currently `xcode-select`ed Xcode.app configured to use your modified copy of the build system service. Arguments after `--` will be forwarded to xcodebuild unmodified. This workflow is currently supported when using Xcode 16.2.

Documentation
-------------

[SwiftBuild.docc](SwiftBuild.docc) contains additional technical documentation.

To view the documentation in a browser, run the following command at the root of the project:
```bash
docc preview SwiftBuild.docc
```

On macOS, use:
```bash
xcrun docc preview SwiftBuild.docc
```

Testing
-------------

Before submitting a pull request, please make sure you have tested your changes. You can run the full test suite by running `swift test` from the root of the repository. The test suite is organized into a number of different test targets, with each corresponding to a specific component. For example, `SWBTaskConstructionTests` contains tests for the `SWBTaskConstruction` module which plan builds and then inspect the resulting build graph. Many tests in Swift Build operate on test project model objects which emulate those constructed by a higher level client and validate behavior at different layers. You can learn more about how these tests are written and organized in [Project Tests](SwiftBuild.docc/Development/test-development-project-tests.md).

Contributing to Swift Build
------------

Contributions to Swift Build are welcomed and encouraged! Please see the [Contributing to Swift guide](https://swift.org/contributing/).

Before submitting a pull request, please make sure that your changes follow the Swift project [guidelines for contributing code](https://swift.org/contributing/#contributing-code).

Bug reports should be filed in [the issue tracker](https://github.com/swiftlang/swift-build/issues) of the `swift-build` repository on GitHub.

To be a truly great community, [Swift.org](https://swift.org/) needs to welcome developers from all walks of life, with different backgrounds, and with a wide range of experience. A diverse and friendly community will have more great ideas, more unique perspectives, and produce more great code. We will work diligently to make the Swift community welcoming to everyone.
To give clarity of what is expected of our members, Swift has adopted the code of conduct defined by the Contributor Covenant. This document is used across many open source communities, and we think it articulates our values well. For more, see the [Code of Conduct](https://swift.org/code-of-conduct/).

License
-------

See https://swift.org/LICENSE.txt for license information.
{ "source": "swiftlang/swift-build", "title": "README.md", "url": "https://github.com/swiftlang/swift-build/blob/main/README.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 3855 }
_[Put a one line description of your change into the PR title, please be specific]_

_[Explain the context, and why you're making that change. What is the problem you're trying to solve?]_

_[Tests can be run by commenting `@swift-ci test` on the pull request, for more information see [this](https://github.com/swiftlang/swift-build/blob/main/README.md)]_
{ "source": "swiftlang/swift-build", "title": ".github/PULL_REQUEST_TEMPLATE.md", "url": "https://github.com/swiftlang/swift-build/blob/main/.github/PULL_REQUEST_TEMPLATE.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 356 }
# Swift Build

@Metadata {
    @TechnologyRoot
}

Swift Build is a high-level build system based on [llbuild](https://github.com/swiftlang/swift-llbuild) with great support for building Swift. It is used by Xcode to build Xcode projects and Swift packages. It can also be used as the Swift Package Manager build system in preview form when passing `--build-system swiftbuild`.

## Overview

Swift Build is structured in layers of framework targets, which is described in <doc:build-system-architecture>.

## Topics

### Architecture

- <doc:build-system-architecture>
- <doc:dynamic-tasks>
- <doc:indexing-support>
- <doc:swift-driver>

### Development

This section contains notes on developing Swift Build.

- <doc:build-debugging>
- <doc:test-development>
- <doc:test-development-project-tests>

### Core

This section describes selected subsystems of the core infrastructure of Swift Build.

- <doc:target-specialization>
- <doc:macro-evaluation>
- <doc:project-interchange-format>
- <doc:xcspecs>

### Task Construction

- <doc:discovered-dependencies>
- <doc:mutable-outputs>
- <doc:mergeable-libraries>
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/swift-build.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/swift-build.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 1108 }
This target is used to pull in some additional C headers underneath SWBLibc since SwiftPM can't build C and Swift sources in the same target.
{ "source": "swiftlang/swift-build", "title": "Sources/SWBCLibc/README.md", "url": "https://github.com/swiftlang/swift-build/blob/main/Sources/SWBCLibc/README.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 141 }
# Build System Architecture

This document surveys the architecture of the Swift Build build system, from the lowest layers to the highest.

## General Structure

Swift Build runs as a service process separate from the SwiftPM or Xcode process it's associated with. Clients start a SWBBuildService process on demand, and there is a single SWBBuildService process per client process. Clients and SWBBuildService communicate via serialized messages across a pipe.

Swift Build is structured in layers of framework targets. For the most part this is a vertical stack of frameworks, although in some places there are multiple peer frameworks. Swift Build makes extensive use of protocols to achieve this layering, and to control access to data.

Swift Build is built on top of the [llbuild](https://github.com/apple/swift-llbuild) project, an open-source build system. llbuild is included as a library in SWBBuildService's address space, as opposed to running as a separate process. SWBBuildService and llbuild communicate bidirectionally, with SWBBuildService instructing llbuild to perform work, and llbuild calling back to SWBBuildService for additional information it needs.

- - -

## Low-Level Build System

**Framework:** SWBLLBuild

The low-level build system is little more than a shim layer on top of llbuild and its own `BuildSystem` subcomponent, providing Swift bindings on top of the C API in a form suitable for use by Swift Build. This layer would ideally be replaced by official llbuild Swift bindings supplied directly from that project.

## Utility Frameworks

**Frameworks:** SWBUtil, SWBCSupport

SWBUtil contains utility code used across all other layers of Swift Build frameworks. These include extensions to standard libraries, implementations of standard-library-like classes (e.g., *OrderedSet*), covers for Foundation code (e.g., *PropertyList*), covers for POSIX code (e.g., *FSProxy*, *Process*), and other utility code. Code in SWBUtil is arguably not specific to Swift Build and might be of interest to any project.

SWBCSupport contains a small amount of C or Objective-C code bridged into Swift.

**Testing:** SWBUtil tests are often true "unit" tests, exercising small and well-defined behaviors. Tests of SWBUtil classes should be reasonably complete along with the implementation of the classes.

There are presently no tests for SWBCSupport code. Ideally all code will eventually be ported to Swift inside the proper framework with proper tests written. If we determine that some of this code should remain non-Swift code, we should still consider whether it should be moved elsewhere, and we should still write tests to exercise it.

## Core Framework

**Framework:** SWBCore

SWBCore contains classes which are used at multiple higher layers of Swift Build, but unlike SWBUtil are specific to Swift Build. It includes several key subsystems:

### Macros

This is the engine used for build setting evaluation. Build settings are declared as having a type (usually string or string-list), string expressions are parsed so they can be evaluated when requested, and then those expressions are evaluated using a `MacroEvaluationScope` for a particular context.

### Settings

The `Settings` class contains build settings and other information bound for a particular target (or, occasionally, other configurations) to be able to build that target. A `Settings` object has a complicated creation procedure (encapsulated in the class) in order to bind all of the relevant information.
This is a central class for other parts of Swift Build to get information about what is being built.

### Project Model

This is the Swift Build representation of a package graph in SwiftPM or a workspace in Xcode, including all model objects and all data that they contain. This representation very closely matches the higher-level representations, although some transforms or derived information are present in cases where that is considered beneficial.

The term "PIF" (Project Interchange Format) is used both to describe the serialized representation used to transfer the model from clients to Swift Build, and as a shorthand for the Swift Build-side representation of the model.

### Specifications

Specifications ("specs") are a data-driven representation of certain objects used by the build system. Specs are most often used for build tools (compilers, linkers, etc.), but are used for a few other concepts such as file types.

### Platforms and SDKs

A platform corresponds to a particular device target, such as macOS, iOS, iOS Simulator, etc. A platform will have one or more SDKs, which contain the headers and libraries to build against. Platforms and SDKs often contain information to direct Swift Build how to build for that target (from the platform's `Info.plist` and the SDK's `SDKSettings.plist`), and Swift Build loads that information to make use of it.

### Task Planning Support Classes

One of Swift Build's central concerns is analyzing the inputs (workspace, build parameters) and generating a dependency graph (a.k.a. build description) from those inputs. This is called "task planning" or "task construction".

Several classes used to represent files to build (nodes), commands to run (tasks), and information from clients (the build request, provisioning info, etc.) are defined at the core level, as they are used at multiple higher levels.

**Testing:** SWBCore tests are often true "unit" tests, exercising well-defined behaviors. Many classes in SWBCore will have reasonably complete tests written alongside them, although there are exceptions, such as:

* Some simple structs and extensions such as in `SigningSupport` may not be tested in isolation but instead tested by implication at a higher layer.

## Task Construction

**Framework:** SWBTaskConstruction

This framework sits on top of SWBCore, and defines all of the high-level "business logic" for constructing the concrete tasks to execute for a build from a project model, including the full command lines to use.

Task construction (a.k.a. task planning) involves taking a set of inputs (typically a `WorkspaceContext` and a `BuildRequest`) and generating a collection of tasks and nodes representing the dependency graph of work to be done. This output is encapsulated in a `BuildPlan` which is used by the higher-level build system framework to create a build description and generate the llbuild manifest file.

The `ProductPlanner` will create a `ProductPlan` for each target to build, and a `TaskProducer` for each chunk of work needed to plan that target. A task producer roughly corresponds to a build phase in the product model, although there are some additional task producers which do not have concrete build phases, and the role of build phases is being deemphasized over time. The product planner launches all the task producers in parallel to create their tasks, and then collects the results via a serial aggregation queue to create the build plan (see the sketch below).
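The fan-out/fan-in pattern just described can be sketched as follows; `TaskProducer` and `PlannedTask` here are simplified stand-ins for the real SWBTaskConstruction types, not their actual interfaces.

```swift
import Dispatch

// Illustrative sketch only: simplified stand-ins for SWBTaskConstruction types.
struct PlannedTask { let ruleInfo: [String] }

struct TaskProducer {
    let name: String
    func generateTasks() -> [PlannedTask] {
        // In the real build system this creates the tasks for one chunk
        // of a target (roughly, one build phase).
        return [PlannedTask(ruleInfo: [name])]
    }
}

func planTasks(producers: [TaskProducer]) -> [PlannedTask] {
    var allTasks: [PlannedTask] = []
    let aggregationQueue = DispatchQueue(label: "plan.aggregation") // serial
    let group = DispatchGroup()

    // Fan out: run every producer concurrently.
    for producer in producers {
        DispatchQueue.global().async(group: group) {
            let tasks = producer.generateTasks()
            // Fan in: collect results on the serial queue.
            aggregationQueue.async(group: group) {
                allTasks.append(contentsOf: tasks)
            }
        }
    }
    group.wait()
    return allTasks
}
```

The serial aggregation queue is what lets the producers run concurrently without ever contending on the shared result collection.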
Much of the interesting work in task construction is in the individual task producer subclasses, and those in turn often invoke individual tool specifications in SWBCore to create the individual tasks that process input files into output files.

Most of the dependency graph must be created up front, but there are some exceptions (e.g., Clang and Swift compilation) where a static node in the dependency graph may request dynamic work as an input. Some tool specifications, such as that for the Core Data compiler, also run preflight operations during task construction to learn what output files will be generated for an input file, so that the inputs and outputs of a task can be declared during task construction.

*To be documented:* `PlannedTask`, `ExecutableTask`, `TaskPayload`

**Testing:** Task construction testing has supporting infrastructure in the form of the `TaskConstructionTester` class, which is used to perform task construction from a set of inputs and then vend the results to a checker block. The checker block typically checks for the presence of expected targets and tasks (and sometimes the absence of ones which should not exist), and examines the rule info, command line, environment, and inputs and outputs of those tasks for the expected values. There are also supporting classes in `TestWorkspaces.swift` which are used to describe a test project or workspace in-memory (i.e., without needing to create a test workspace on disk).

While there are a few large task construction tests which cover the most common task construction logic, most newer tests are scenario-based, covering new functionality or fixed bugs. Enhancement of the supporting task construction test infrastructure is encouraged when additional functionality will make writing tests easier.

## Task Execution

**Framework:** SWBTaskExecution; in principle this should be a peer to SWBTaskConstruction, but the `BuildDescription` class currently imports that framework.

This framework sits on top of SWBCore, and defines the code which *executes* during an actual build. We keep this strongly separated from SWBTaskConstruction because in general we expect task construction to be cached, and we don't use the actual objects constructed there during a build. Separating the modules makes it more clear what code is allowed in each place.

The task execution framework governs the running of tasks during a build, and it consists of two major subsystems:

### Build Description & Build Manifest

The term "build description" is often used to mean both the build description itself (the `BuildDescription` class) and the build manifest, which is the file Swift Build writes for llbuild to use to drive its build logic. The manifest is a JSON file consisting of a list of commands and some auxiliary information; it is essentially a serialized representation of the dependency graph. For any given build, there is a single description-manifest pair.

The build description is additional information which Swift Build needs to provide to llbuild during the build, for example task actions and build settings. There is some overlap of information between the manifest and the build description, but ultimately we hope to eliminate this redundancy by allowing Swift Build to query llbuild for information in the manifest.

When Swift Build is handed a build request to build a workspace, the `BuildDescriptionManager` first checks whether it already has a cached (in-memory or on-disk) build description for those inputs, and if it does, it will use it.
Consequently, it should never be assumed that task planning will occur for every build; it may be bypassed entirely. But if no cached build description is found, then a build plan will be generated from the inputs, and a new build description and manifest will be created from the build plan.

### Task Actions

Task actions are in-process commands which run during a build. These are, literally, tasks which run inside Swift Build's address space rather than as subprocesses running on-disk tools. Some task actions are in-process for historical reasons, while others require information only available inside Swift Build (for example, to request dynamic inputs or implement custom behaviors). llbuild will call back to Swift Build when a task specifies a task action.

*To be documented:* `Task`, `TaskAction`

**Testing:** Tests for task actions are scenario-based, typically using a pseudo file system, to check that running a task action behaves as expected. There are some simple build description tests, but the more interesting tests are up in SWBBuildSystem.

## Build System Framework

**Framework:** SWBBuildSystem; this is the parent to SWBTaskConstruction and SWBTaskExecution

This framework sits on top of SWBCore, SWBTaskConstruction, and SWBTaskExecution. It coordinates the construction and planning of build operations, defines the concrete tasks used to execute them, manages their execution, and handles dispatching status back to clients.

The `BuildManager` class manages the build operations in the Swift Build process, while the `BuildOperation` class represents a single build operation.

**Testing:** As in SWBTaskConstruction tests, the SWBBuildSystem tests include a helper class `BuildOperationTester` which is used to construct a build description from build inputs, and then to run that build and examine the results. Most of the tests which run builds involve performing real builds rather than simulated builds, as the simulation support is still in development.

## Build Service

**Framework:** SWBBuildService

This framework sits atop the rest of the service-level ones, and it implements the actual service logic. This is the layer that communicates with the client via serialized messages. This layer also manages the individual sessions, one for each workspace open in the client.

The `ClientExchangeDelegate.swift` file contains communication channels where SWBBuildService will call back to clients for information it needs during task construction or task execution. There is a slightly different mechanism for retrieving signing & provisioning inputs, which should probably be migrated to the `ClientExchangeDelegate` model.

**Framework:** SWBServiceCore

This framework implements general-purpose logic for a service. It is intended to be decoupled from the concrete service Swift Build implements; i.e., it could be reused to build another similar service.

**Bundle:** SWBBuildServiceBundle

This is the actual SWBBuildService service. It is a bundle which contains an executable which implements the service over a pipe. This is just a thin shim on the SWBBuildService framework.

**Testing:** There is no testing at this layer yet.

## Protocol

**Framework:** SWBProtocol

This framework contains the shared protocol definitions for messages exchanged between the client framework (SwiftBuild.framework) and the build service (SWBBuildService).

## Public API

**Framework:** SwiftBuild

This is the client-facing framework which provides access to swift-build, for integration into clients.
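The exact message set lives in SWBProtocol; as a purely hypothetical illustration of the request/response shape (none of these type or case names exist in the real framework, and the actual wire format is an implementation detail rather than JSON), an exchange could look like:

```swift
import Foundation

// Hypothetical sketch only: SWBProtocol's real message types and wire
// format differ. This just illustrates a serialized client/service exchange.
enum ClientRequest: Codable {
    case createSession(name: String)
    case transferPIF(Data)
}

enum ServiceResponse: Codable {
    case sessionCreated(handle: String)
    case error(String)
}

// Messages are encoded on one side of the pipe...
let request = ClientRequest.createSession(name: "MyWorkspace")
let wire = try JSONEncoder().encode(request)

// ...and decoded on the other.
let decoded = try JSONDecoder().decode(ClientRequest.self, from: wire)
print(decoded)
```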
## Command Line Tools

- **swbuild:** This is the command line tool for interacting with the public API and service, and for testing.
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/Architecture/build-system-architecture.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/Architecture/build-system-architecture.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 14252 }
# Dynamic Tasks

## Architecture Overview

Swift Build uses llbuild to invoke a planned build by creating a build manifest (see `BuildDescription` and related files). This manifest is created during task construction and is immutable.

llbuild's execution mechanism, on the other hand, doesn't require an immutable manifest. It needs tasks to provide their inputs (files or other tasks), a callback to indicate if a previously produced build value is still valid, and another callback to produce that build value. To create the tasks, llbuild uses the manifest with its command/inputs/outputs declarations. A file that is an output of a task holds a reference to that task as its producer; other tasks consuming that file declare it as an input.

Dynamic tasks in Swift Build use this API for tasks that are unknown at task construction time and get discovered during task execution. Since downstream tasks need to be blocked until the yet-unknown tasks have finished, there needs to be an upfront-planned task that provides gates around dynamic tasks. Example: Swift compilation frontend invocations are dynamic tasks, but they need to block linking, which is planned upfront. To make this work, there's an upfront-planned task that requests those dynamic tasks but also specifies their expected outputs as its own outputs. That blocks downstream tasks until the requested dynamic tasks have all finished.

## Concrete Implementation

Tasks in llbuild implement the abstract class `Command` - most that are used in Swift Build use `ExternalCommand`. `ExternalCommand` encodes the logic to define input and output nodes, requests inputs, and checks for outputs on incremental builds. It also creates parent directories for expected outputs. Subclasses override `executeExternalCommand` to perform the actual work.

Tools that Swift Build invokes externally via their command line interface use `ShellCommand`, which spawns a process using the provided command line options. Dynamic tasks go through `CAPIExternalCommand`, which wraps the API to the Swift interface.

Swift Build implements those via `InProcessCommand`, which represents task actions (in-process work) and dynamic tasks that get created by their defined `DynamicTaskSpec` (`DynamicTaskSpecRegistry.implementations`).

### Execution Flow

Let's use a simplified example of tasks to compile two Swift files with a following link step. With the Swift driver implementation utilizing dynamic tasks, there's one upfront-planned task to request dynamic tasks which blocks the downstream linker task.

The linker produces the product of interest, so llbuild reads the manifest and creates a command for it. The command (`ShellCommand`) specifies a command line and inputs/outputs. The inputs are .o files, so when they get requested, llbuild will request each producer of those files. In this simplified example, the only producer is the upfront-planned task which has both Swift files as inputs and the .o files as outputs. But it doesn't create them itself: it requests dynamic tasks. First, the driver planning task, which runs in process, initializes the driver and lets it create a planned build. It then uses this planned build to request other dynamic tasks for the frontend invocations. Those create the actual .o files that the gate task expects, so it finishes and unblocks llbuild in executing the linker task.
![](dynamic-tasks-execution-flow.png)

### Setup

Here's an overview of the classes in llbuild (green) and Swift Build (blue) that are required to request and execute dynamic tasks:

![](dynamic-tasks-setup.png)

The callbacks in TaskAction run at defined points in a task's lifecycle and allow for different kinds of work:

#### taskSetup

This might get called multiple times on the same `TaskAction` instance and should reset all state. It allows the task action to request tasks (`dynamicExecutionDelegate.requestDynamicTask`) or file inputs (`dynamicExecutionDelegate.requestInputNode`). Every request takes an identifier that needs to be unique across all inputs (files and tasks), represented by the `nodeID`/`taskID` of the interface.

Example: Swift driver planning is requested for a Swift driver job scheduling action in taskSetup - it needs to be requested, no matter what.

#### taskDependencyReady

This gets called for every input (potentially multiple times). The provided `dependencyID` matches the previously provided `nodeID`/`taskID`. This callback also allows the task action to request more work.

Example: Once driver planning finishes, this callback is used to consume the result of planning and schedule the driver jobs themselves.

#### performTaskAction

In this state it's not possible to request any more inputs. It is guaranteed that all requested inputs successfully provided a build value in the `taskDependencyReady` callback. Keep in mind that the value does not necessarily represent a successful state - it might be failed, missing, or invalid. `performTaskAction` is meant to validate local state, like previously generated errors (failed inputs), and to do the actual work of the task action.

Example: Swift driver planning is a dynamic task that calls the driver in `performTaskAction`; SwiftDriverJobTaskAction executes the compiler frontend invocation via the provided llbuild spawn interface.
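A minimal sketch of a task action driving these three callbacks; the protocol shape below is a simplified stand-in (only the callback names and the two request methods come from the description above - the real SWBTaskExecution signatures differ).

```swift
// Sketch only: simplified stand-ins for the real SWBTaskExecution interfaces.
protocol DynamicTaskExecutionDelegate {
    func requestInputNode(path: String, nodeID: UInt)
    func requestDynamicTask(identifier: String, taskID: UInt)
}

final class ExampleTaskAction {
    private var planningFailed = false

    // May run multiple times on the same instance, so reset all state first.
    func taskSetup(_ delegate: DynamicTaskExecutionDelegate) {
        planningFailed = false
        // Unconditionally request the "planning" dynamic task (taskID 0).
        delegate.requestDynamicTask(identifier: "example-planning", taskID: 0)
    }

    // Called once per requested input; IDs match those passed during setup.
    func taskDependencyReady(dependencyID: UInt, succeeded: Bool,
                             _ delegate: DynamicTaskExecutionDelegate) {
        guard dependencyID == 0 else { return }
        if succeeded {
            // Use the planning result to request follow-up work.
            delegate.requestDynamicTask(identifier: "example-job", taskID: 1)
        } else {
            planningFailed = true
        }
    }

    // No more inputs can be requested here; validate state and do the work.
    func performTaskAction() -> Bool {
        return !planningFailed
    }
}
```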
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/Architecture/dynamic-tasks.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/Architecture/dynamic-tasks.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 5228 }
# Indexing Support

Swift Build, Xcode, and the low-level compilation tools work together to provide semantic functionality in the Xcode editor, like code completion, jump-to-definition, global rename, etc.

## Overview

There are two forms of indexing in Xcode, both of which cooperatively utilize the (raw) Index Data Store in Derived Data:

- Index while Building
- Background Indexing

### Index while Building

Index-while-Building is controlled via the `INDEX_ENABLE_DATA_STORE` build setting and is enabled by default for debug builds; the index store directory path is controlled by the `INDEX_DATA_STORE_DIR` setting.

When enabled, the index data store is populated by the compilers during the build process. The `-index-store-path` flag (both for clang and swiftc) indicates the directory root in which raw index data should be placed. The individual data files written by the compilers are not considered part of the build intermediates or outputs and therefore are not tracked by the dependency system.

> Note: The format and directory structure of the Index Data Store is an implementation detail of the compiler and indexing system. However, it's worth mentioning that the filenames of some of the files within the index data store are based on a hash of the absolute file path of the related translation unit. The compilers accept an `-index-unit-output-path` flag which can be used to base this hash on a relocatable path (which may or may not exist in the underlying filesystem). This is important for distributed systems where part of the raw index data may be generated on a remote server.

### Background Indexing

Background indexing uses the same raw data store, but uses dedicated build system APIs (see `generateIndexingInfo` in `SWBBuildServiceSession`) to extract the compiler command line invocations that the build system would normally perform. It then re-invokes those command lines outside the context of the build system when appropriate, in order to achieve higher performance compared to performing a full build each time the raw index data store needs to be updated for a given translation unit.
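As a sketch of how these pieces fit together, the snippet below assembles the two index-related flags described above into a compiler command line; the paths are hypothetical example values, not real defaults.

```swift
// Sketch: assembling the index-related flags described above into a compiler
// command line. The paths are hypothetical example values.
var compileArgs = ["swiftc", "-c", "Sources/App/Main.swift"]

// Where the raw index data store is written (driven by INDEX_DATA_STORE_DIR):
compileArgs += ["-index-store-path", "/DerivedData/App/Index/DataStore"]

// Optionally base the unit-file hash on a relocatable output path, useful
// when part of the index data is produced on a remote builder:
compileArgs += ["-index-unit-output-path", "/build-root/Main.o"]

print(compileArgs.joined(separator: " "))
```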
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/Architecture/indexing-support.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/Architecture/indexing-support.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 2140 }
# Swift Driver

This document explains specifics about the Swift driver planning invocation and execution. Check out <doc:dynamic-tasks> for background on how dynamic tasks in Swift Build work.

## Difference from clang

Swift has no header files, so it needs to generate modules that downstream targets can depend on for their compilation. Due to the absence of header files, the compiler also needs to parse multiple files (in fact, all files of the same target, plus Swift modules from upstream targets) to reason about declarations within the source. Swift uses different compilation modes to split up that work. Whole module optimization *(which is also an optimization level; we focus on the compilation mode here)* parses and compiles all files within the context of one long-running process that also produces all .o files of that target. Single file mode *(not available in the build settings editor anymore)* spawns one process per file - each process needs to parse all files, but compilation is focused on that one file. And batch mode, the default - now called incremental - is in between: it spawns n processes for m files, forming batches that compile a given number of source files per process. The creation of those processes and the orchestration between them is owned by the Swift driver.

## Legacy Swift Driver

The legacy Swift driver integration in Swift Build calls `swiftc` as a command line utility, waits for it to finish, and parses a JSON-based stdout protocol to show sub-processes in the build log:

![](swift-driver-legacy.png)

This has fundamental problems for the build system:

* `swiftc` spawns processes to parallelize work and utilize the CPU. Swift Build tries the same, but without a complex communication protocol they compete against each other, bringing the system to a halt in the worst case.
  * Example:
    * Swift Build spawns cpu.count processes in parallel (if possible, given the dependency graph)
    * Swift driver spawns cpu.count frontend processes in parallel (for compilation those are always independent)
    * This results in n*n processes, so the problem grows with the number of cores
* Swift Build always needs to wait for `swiftc` to finish, although the necessary products to unblock downstream tasks (the Swift module) could be created eagerly
* Process spawning is a complex operation with many parameters. Swift Build uses llbuild for that, which spawns every build system task the same way; the Swift driver needs to replicate this or do it differently

## New architecture

![](swift-driver-new1.png)

With the new integration of the Swift driver, Swift Build calls into it using a library interface (libSwiftDriver.dylib, part of the Xcode toolchain). Instead of one task to call the Swift driver (`swiftc` before), there are two tasks. SwiftDriver Compilation does the actual compilation of the files, creating .o files to unblock the linker. SwiftDriver Compilation Requirements, on the other hand, only produces the Swift module. It can and should run in parallel to compilation to eagerly unblock downstream tasks.

Here's an overview of the dynamic tasks that are scheduled in the different buckets (Emit-Module and Compile use the same task action, SwiftDriverJobTaskAction):

![](swift-driver-new2.png)

## Incremental Builds

Usually llbuild decides if a task needs to re-run based on (a) an input having changed or (b) the output no longer being valid.
An invalid output has multiple possible causes: it could be a missing/deleted/never-created file, but it could also be that a task defines its previous output as always invalid - meaning the task always needs to re-run. The Swift driver does something similar: it keeps incremental state and collects timestamps. On subsequent builds it reads in the incremental state and schedules only the driver jobs that need to re-run.

For the integration of the Swift driver in Swift Build, the decision was made to give the Swift driver sovereignty over incremental state for Swift. So Swift Build will always ask the driver which tasks to run.
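As a toy illustration of that division of labor, the sketch below models a driver-owned "which jobs must re-run?" decision based on recorded timestamps; all types are hypothetical simplifications, not the libSwiftDriver API.

```swift
import Foundation

// Toy sketch: the build system defers "what needs to re-run?" for Swift to a
// driver-owned oracle. All types are hypothetical simplifications.
struct DriverJob { let name: String; let inputs: [String] }

struct IncrementalState {
    // Timestamps recorded for each input on the previous build.
    var lastSeenModificationDates: [String: Date] = [:]
}

func jobsToRun(allJobs: [DriverJob],
               state: IncrementalState,
               currentDate: (String) -> Date) -> [DriverJob] {
    allJobs.filter { job in
        job.inputs.contains { input in
            guard let last = state.lastSeenModificationDates[input] else {
                return true // never seen before: must run
            }
            return currentDate(input) > last // input changed since last build
        }
    }
}
```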
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/Architecture/swift-driver.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/Architecture/swift-driver.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 4033 }
# Macro Evaluation

Swift Build supports macro evaluation using `make`-like `$(X)` syntax. String values that contain macro references are called *macro expressions*, and can be evaluated to obtain literal values within *macro evaluation scopes*.

## Macro Types

There are two fundamental kinds of macro expression: *string* and *string list*. A third type, *boolean*, is a subtype of *string* whose evaluated literal value is interpreted in the manner of NSString's `boolValue` method.

## Macro Declarations

A *macro declaration* maps a name to a type (string, string list, or boolean) in a macro namespace. A macro name cannot be empty, and there can be only one declaration of a macro with a particular name in a macro namespace. It follows that each macro name can be associated with at most one type in a macro namespace.

A macro declaration can also specify an unknown type, which is used for macros that are assigned but never explicitly declared. They are treated as either strings, string lists, or booleans depending on usage.

## Macro Namespaces

A *macro namespace* defines a domain in which macros can be declared. Each macro namespace maintains a mapping from macro name to the corresponding macro declaration. All occurrences of a macro with a given name have the same type within a namespace (but may of course have different types in different namespaces).

Namespaces are also responsible for parsing strings and arrays of strings into macro expressions of the appropriate type. The parsing semantics depend on the type (string or list) of the macro. Macro expression parsing may yield parse errors -- an expression of the appropriate type is always returned, but it also carries an optional error description.

## Macro Definition Tables

A *macro definition table* associates macro declarations with lists of parsed macro expressions. Each of the associated macro expressions can be associated with an optional *condition* that indicates when the expression should be used.

## Macro Evaluation Scopes

A *macro evaluation scope* represents a stack of macro definition tables in association with a set of condition parameter values, allowing unambiguous evaluation of macros to literals.

## Macro Conditions

A *macro condition* allows a macro definition to be used only some of the time. In particular, a condition specifies a pattern that is matched against the value of that condition within the evaluation scope in which the condition is being tested.

## Macro Condition Parameters

A *macro condition parameter* is a predefined parameter of conditionality in terms of which a macro condition can be specified. Swift Build currently defines five macro condition parameters: *config*, *sdk*, *variant*, *arch*, and *compiler*.
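To make the terminology concrete, here is a toy model of a scope resolving `$(X)` references against a stack of definition tables. It is purely illustrative (no conditions, no type checking, no cycle detection, no nested-reference parsing) and assumes nothing about the real SWBCore types beyond the description above.

```swift
import Foundation

// Toy model of the concepts above; not the real SWBCore implementation.
typealias MacroDefinitionTable = [String: String]

struct MacroEvaluationScope {
    // Later tables in the stack take precedence over earlier ones.
    let tableStack: [MacroDefinitionTable]

    func lookup(_ name: String) -> String? {
        for table in tableStack.reversed() {
            if let expression = table[name] { return expression }
        }
        return nil
    }

    /// Evaluate a string expression, substituting `$(NAME)` references.
    /// Definitions may themselves contain references, so substitute
    /// recursively (this toy performs no cycle detection).
    func evaluate(_ expression: String) -> String {
        var result = expression
        while let start = result.range(of: "$("),
              let end = result.range(of: ")", range: start.upperBound..<result.endIndex) {
            let name = String(result[start.upperBound..<end.lowerBound])
            let replacement = lookup(name).map(evaluate) ?? ""
            result.replaceSubrange(start.lowerBound..<end.upperBound, with: replacement)
        }
        return result
    }
}

let scope = MacroEvaluationScope(tableStack: [
    ["PRODUCT_NAME": "App"],                      // e.g., defaults
    ["EXECUTABLE_NAME": "$(PRODUCT_NAME)-tool"],  // e.g., target overrides
])
print(scope.evaluate("$(EXECUTABLE_NAME)"))       // "App-tool"
```

The key property mirrored here is that later tables shadow earlier ones, which is how more specific settings override defaults during evaluation.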
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/Core/macro-evaluation.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/Core/macro-evaluation.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 2774 }
# Project Interchange Format

The Project Interchange Format (PIF) is a structured representation of the project model created by clients to send to Swift Build. A single PIF can represent a workspace and all of the projects inside it.

## Overview

### What's Included in a PIF

The PIF is a representation of the SwiftPM/Xcode project model describing the static objects which contribute to building products from the package/project, independent of "how" the user has chosen to build those products in any particular build. This information can be cached by Swift Build between builds (even between builds which use different schemes or run destinations), and can be incrementally updated by clients when something changes.

The PIF format and exchange mechanisms are designed under the assumption that the PIF changes rarely relative to other requests, because active edits to the project model are rare.

### What's Not Included

The PIF is only a representation of the project model, and does not contain all relevant information for a particular build. Two classes of information are excluded.

First, information which persists throughout a run of the client, such as:

* **Environment variables**
* **Specifications:** Including tool specs, product and package type specs, etc.
* **Platforms and SDK definitions**
* **Toolchain definitions**

This information is loaded by Swift Build directly via the *Core* object or explicitly sent through the service interface (in the case of environment variables).

Second, information which describes a particular build and which may change from build to build, and is sent as part of individual build requests:

* **Scheme information:** Which targets are to be built, whether Parallelize Targets and Implicit Dependencies are turned on.
* **Run Destination information:** Including the target device and the active architecture.
* **Build parameters:** Including the build action, per-target overrides of build settings, the workspace arena, and more.

### Incremental Updates

The PIF format is designed to support incremental update of certain objects (like projects and targets). This mechanism is designed so that the service can maintain a *persistent* cache of objects, and efficiently negotiate with the client the smallest set of objects which need to be transferred in response to model changes.

The incremental update mechanism works by assigning unique *signatures* to each of the objects in the PIF which can be independently replaced. A single PIF can contain definitions for multiple objects, identified by signature. Within any individual object, references to other objects are encoded indirectly via the same signature. When a client wishes to transfer a PIF to Swift Build, it incrementally negotiates the exact set of objects which are required (i.e., missing from swift-build) using the [PIF exchange protocol](#exchange-protocol). Once all the required objects have been transferred, Swift Build has a complete set of objects and can load the workspace.

<a name="global-identifiers"></a>
### Global Identifiers

Identifiers for objects in the PIF are needed to support looking up objects across top-level boundaries (such as project references). Objects that require this have an identifier (GUID) which is:

* *Unique* within the entire PIF.
* *Stable*, such that clients can generate the same GUID consistently for an object even if the object changes, unless that change makes it fundamentally a different object.
<a name="exchange-protocol"></a>
## Exchange Protocol

When a client wishes to transfer a PIF to Swift Build, it uses the PIF exchange protocol described here:

1. The client initiates a PIF transfer and sends a PIF containing the top-level object it wishes to transfer, typically the workspace.
2. Swift Build scans the set of objects it has received via the PIF transfer, and replies with a list of references which it does not have and requires transfer of.
3. The client transfers a PIF containing the additional requested objects.
4. Steps 2 and 3 repeat until Swift Build has all the objects it requires, at which point it acknowledges receipt of the PIF and constructs the appropriate object.

## Format Description

The low-level encoding of the PIF is as [JSON](http://json.org). PIFs are transferred using the [PIF exchange protocol](#exchange-protocol), and are encoded using the format described here.

Each PIF consists of a sequence of top-level objects (*PIFObject*). These define the objects which can be incrementally replaced by the PIF exchange protocol. Each entry in the PIF is a dictionary of the form:

* *signature*: A string signature uniquely representing the PIF object. This signature **must** completely define the PIF object; i.e., any other PIF entry with the same signature and type must define an identical object.
* *type*: The type of the top-level object. This is one of *workspace*, *project*, or *target*.
* *contents*: The contents of the object.

### Object Description

The remainder of this document defines the exact encoding for each object which can be represented in the PIF.

Objects which require it will have the following data in addition to their content:

* **GUID**: The object's [unique identifier](#global-identifiers) within the PIF.

For brevity, this item is not included in the structures described below. Where the *value* of items below refers to a **GUID**, that indicates that the GUID of the referenced object has been captured and will be resolved by Swift Build to a concrete object elsewhere in the PIF.

The PIF structure has a few notable differences from the structure of the project model itself:

* Xcode models file references and product references as fairly similar objects with overlapping sets of data. The PIF models these differently. For example, neither product references nor proxies will have their own path and source tree information.
* Product references are not captured in the list of references of the project, but instead are captured as the product of the target which generates them. Thus the fact that the reference is produced by the target is modeled by it being contained inside the target's PIF, and not by being a discrete reference with a pointer to its producing target. This more concisely captures the hierarchical relationship of the objects.
* There are no project references or reference proxies directly represented in the PIF, because the whole workspace is being captured, so they can be fully resolved to the GUIDs of the objects they really represent.

#### Object References

Any place where an object can refer to a top-level PIF object uses the following encoding: if the item consists of a single string value, then that value is the signature for the object which is expected to be present at the top-level of the PIF (or previously loaded and available independently to the loader, in which case the PIF is incomplete).
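A minimal Swift sketch of the top-level envelope described above; the type names are illustrative stand-ins rather than the actual SWBCore declarations, the signature string is a made-up example, and `contents` is left out since its shape depends on the object type.

```swift
import Foundation

// Illustrative sketch of the top-level PIF envelope; not the real SWBCore types.
struct PIFObjectEnvelope: Decodable {
    enum ObjectType: String, Decodable {
        case workspace, project, target
    }

    /// Uniquely and completely identifies this object's content: two entries
    /// with the same signature and type must define identical objects.
    let signature: String
    let type: ObjectType
    // `contents` is omitted here; its shape depends on `type`, and a real
    // loader would decode it into the matching model object.
}

// A PIF is a sequence of such envelopes (the signature below is invented):
let pifData = #"[{"signature": "WORKSPACE@example", "type": "workspace"}]"#
let objects = try JSONDecoder().decode([PIFObjectEnvelope].self,
                                       from: Data(pifData.utf8))
print(objects.count)
```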
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/Core/project-interchange-format.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/Core/project-interchange-format.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 6962 }
# Build Target Specialization

Target specialization is the feature which allows Swift Build to build multiple instances of a target within a single build. This is used as part of Xcode's Swift package support to provide "build to order" features for Swift packages, as well as multi-platform targets.

## Overview

Within Swift Build, we maintain a distinction between the `Target` model object and the `ConfiguredTarget`, which is a specific instance of a target plus build parameters, as used within a particular build.

While Xcode traditionally would only ever allow one target to exist in one instance in a build, Swift Build can allow the target to participate multiple times. Conceptually, the `Target` can be thought of as the template for how something is built, and the `ConfiguredTarget` is a specific instance of that template based on the run destination and other build parameters.

We use this general capability in order to provide the "build to order" feature used by Swift packages, as well as multi-platform targets.

## Mechanism

Target specialization occurs as part of resolving an incoming build request into a complete `TargetBuildGraph`. The mechanism by which this happens is that `PackageProductTarget` is considered to be a "build to order" target type. When a target depends on a `PackageProductTarget`, rather than using the target's own settings to determine how it is built, it propagates _some of_ the settings from the target which depends on it. This propagation continues along the full dependency chain, and thus it impacts the `StandardTarget` instances used by the product type.

## Platform-Based Limitation

Target specialization currently *only* allows specializing based on the platform. That is, we will produce at most one instance of a target per platform.

While there are valid scenarios in which one could want a target to be built multiple times for a single platform (one could imagine an XPC service that has a newer deployment target, and could benefit by compiling its dependency for a newer version), this limitation:

1. Allows us to piggyback on the existing platform-specific build directories, rather than having to invent a new mechanism to segregate different specializations.
2. Is conceptually simple to understand, which was desirable for the first attempts to take advantage of our `Target` vs `ConfiguredTarget` architecture.
3. Allows us to avoid the complicated question of exactly when we should be trying to create a separate specialization based on the wide variety of settings (or even based on user-authored intent).

## Specialization Consolidation

Although conceptually targets are specialized based on the parameters of their dependents, this mechanism alone would result in more specialized targets than we currently support (or would want to support). For example, a package product depended upon by a target with a 10.11 deployment target and a target with a 10.12 deployment target should typically only build the dependency for 10.11, rather than attempt to specialize twice.

We implement this by first doing a prepass over the dependency graph to gather all of the configured targets before performing any specialization. Once complete, we run the real dependency graph computation, which then searches the configured targets before it ever creates a specialization. We can extend this over time to support a more complex aggregation of the specializations.

Finally, once this process is complete, we perform a post-processing pass to check that all of the specializations are valid.
For example, in the current implementation it is a serious bug to ever have ended up with multiple specializations for the same platform.
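A rough sketch of the per-platform consolidation invariant, with simplified stand-in types (the real `ConfiguredTarget` carries full build parameters, not just a platform name):

```swift
// Rough sketch of per-platform consolidation; types are simplified stand-ins.
struct Target { let guid: String }
struct ConfiguredTarget { let target: Target; let platform: String }

struct SpecializationTable {
    // At most one ConfiguredTarget per (target, platform) pair.
    private var entries: [String: ConfiguredTarget] = [:]

    /// Returns the existing specialization when one already exists for this
    /// platform, so dependents with compatible parameters are consolidated.
    mutating func configuredTarget(for target: Target,
                                   platform: String) -> ConfiguredTarget {
        let key = "\(target.guid)|\(platform)"
        if let existing = entries[key] { return existing }
        let created = ConfiguredTarget(target: target, platform: platform)
        entries[key] = created
        return created
    }
}
```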
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/Core/target-specialization.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/Core/target-specialization.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 3710 }
# Specifications

Swift Build uses external data files known as `xcspec` files to store the specification data used when constructing build tasks.

> Note: This document is not finished and does not fully reflect how support for some specs was implemented in Swift Build.

For example, there are specifications which describe each of the tools that are run. The `xcspec` files are automatically discovered and loaded by Swift Build during initialization.

## Basic Format

`xcspec` files are stored in property lists containing the specification data. Each property list file may contain one or more specs; multiple specs are encoded as elements in an array. Each individual spec is encoded as a dictionary in the property list.

Each property list has several required keys which specify the basic information of the spec, and the remainder of the keys in the dictionary are determined by the exact type of spec. See [Spec Types](#spec-types) for the list of supported types.

The common keys supported by all specs are:

* *"Identifier"* (**required**) (**not inherited**)

  The identifier of the spec. This must be unique within any particular domain, and should generally follow reverse domain name syntax (a la Java).

* *"Type"* (**required**) (**not inherited**)

  The type of the spec. See [Spec Types](#spec-types).

* *"Domain"* (**optional**) (**not inherited**)

  The domain the spec should be registered within. If not provided, the spec will be registered in the domain appropriate to its location; for example, specs in the shared locations will be registered in the default domain, and specs in platforms will be registered in the platform's domain. See [Spec Types](#spec-types).

* *"BasedOn"* (**optional**) (**not inherited**)

  If provided, a specifier for the spec which this one is "based on" (i.e., the one it inherits from). That spec will be loaded first and its spec properties will be inherited. The exact details and meaning of inheritance depend on the spec type. The base spec **must** be of the same `"Type"` (although the `"Class"` need not be). The specifier should be of the form `( domain ':' ) identifier` (e.g., `macosx:com.apple.tool.foo`), where the domain defaults to the domain of the current spec if not specified.

* *"Class"* (**optional**) (**inherited**)

  If provided, the name of a specific class which should be instantiated to implement the spec behavior, as opposed to using the generic implementation appropriate for the type. This key *may be* provided in the base specification.

<a name="spec-discovery"></a>
## Spec Discovery

Specs are automatically discovered and loaded from plugins. These plugins are provided by the Swift Build project itself.

<a name="spec-types"></a>
## Spec Types

The following types of specs are supported:

* **["Architecture"](#specs-architecture)**

  These specs define actual architecture names or virtual names for groups of architectures.

* **"BuildPhase"**

  > Note: FIXME: Document, or eliminate. This is used by Swift but might not be necessary anymore.

* **["FileType"](#specs-filetype)**

  These specs define information about the file types used by Swift Build.

* **["PackageType"](#specs-packagetype)**

  These specs define information used to implement the different kinds of product packages (i.e., executable, static library, framework, app extension, etc.).

* **["ProductType"](#specs-producttype)**

  These specs define information about specific product types that can be produced by the build system.
* **"Platform"**

  Each platform also contributes a spec defining properties of the platform.

  > Note: FIXME: We don't currently have these, and may not need them. Eliminate if never used.

* **[*Property Domain Specs*](#specs-propertydomain)**

  Property domain specs are not a concrete type that can be instantiated, but are a common base class shared by the *BuildSystem* specs and the *Tool* specs. This base class factors out the common code for defining information about groups of build settings.

* **"BuildSettings"**

  > Note: FIXME: Document. This is used by Swift, but may no longer be necessary.

* **["BuildSystem"](#specs-buildsystem)**

  These specs define basic information about the build system, for example information about all the build settings. Internally, there are separate "build system" definitions for external targets, aggregate targets, and "native" targets. These correspond to distinct *BuildSystem* specs.

* **["Tool"](#specs-tool)**

  Each tool that can be executed by the build system has a defining spec containing information on the tool, how to invoke it, the settings it supports, and so on. The tool type is a superclass of the *Compiler* and *Linker* specs, but can also be concretely instantiated for tools that match neither of those types (for example, `strip` or `dsymutil`).

* **["*Generic* Tool"](#specs-generictool)**

  Generic tool specs do not have a backing class but are (usually) entirely specified by the data in their spec plist.

* **["Compiler"](#specs-compiler)**

  These specs are used for "compiler-like" tools, i.e., tools which operate on an input and transform it into an output of another type.

* **["Linker"](#specs-linker)**

  These specs are used for `ld` and linker-like tools (like `libtool`).

<a name="spec-objecttypes"></a>
## Spec Object Types

The following kinds of objects may be encoded in the spec files.

|Type|Description|
|----|-----------|
|Bool|A boolean value, which should be encoded as either the string `"YES"` or `"NO"` (several other spellings are supported for backwards compatibility, but these are the preferred spellings).|
|String|An individual string, encoded as part of the property list format.|
|StringList|A list of strings, encoded as part of the property list format.|
|BuildSettingsDict|A dictionary defining a group of build settings. This should be encoded as a dictionary mapping setting names (and optionally, conditions) to values. Also note that there is no ordering in these objects, which can be a problem when using conditional build settings.|
|CommandLineStringList|A string or list of strings, encoded as part of the property list format. If the string form is used, then it will be separated into arguments following the shell quoting rules.|

> Note: define `BuildSettingsDict` a bit more.

<a name="specs-architecture"></a>
### Architecture Specs

> Note: FIXME: Document Architecture specs.

|Name|Type|Attributes|Description|
|----|----|----------|-----------|
|CompatibilityArchitectures|`CommandLineStringList`|**Optional**|The list of architectures which are compatible with this one. A `VALID_ARCHS` including this arch will implicitly include all of the ones listed here.|

<a name="specs-filetype"></a>
### FileType Specs

> Note: FIXME: Document FileType specs.

<a name="specs-packagetype"></a>
### PackageType Specs

> Note: FIXME: Completely document PackageType specs.
|Name|Type|Attributes|Description|
|----|----|----------|-----------|
|DefaultBuildSettings|`BuildSettingsDict`|**Required**|The default build settings included for all instances of this package type.|

<a name="specs-producttype"></a>
### ProductType Specs

> Note: FIXME: Completely document ProductType specs.

| Name | Type | Attributes | Description |
|------|------|------------|-------------|
| DefaultBuildProperties | `BuildSettingsDict` | **Required** | The default build settings included for all instances of this product type. |

<a name="specs-propertydomain"></a>
### Property Domain Specs

Property domain specs are an abstract class used by all specs which define groups of supported build options (for example, the build system specs and the compiler specs). The property domain primarily consists of definitions of each of these options, including information on how the option should drive command line arguments for tool specs which make use of the automatic command line generation infrastructure.

| Name | Type | Attributes | Description |
|------|------|------------|-------------|
| Properties | `StringList` | **Optional** | The list of properties associated with this spec, in the order they should be processed. For legacy compatibility these values may be specified under `"Options"`. The list should be an array of items which will be parsed following the information in [Build Option Definitions](#specs-buildoptiondefs). |
| DeletedProperties | `StringList` | **Optional** | The names of build options to remove from the base spec's build option list, when creating the flattened list of options supported by this spec. |

<a name="specs-buildoptiondefs"></a>
### Build Option Definitions

Each build option definition is a property dictionary defining information on the option. The following keys are supported:

| Name | Type | Attributes | Description |
|------|------|------------|-------------|
| Name | String | **Required** | The name of the option and the build setting that controls it. |
| Type | String | **Optional** **Default="String"** | The string identifying the type of the option. |
| DefaultValue | String | **Optional** | The default value for the build setting, if any. This must always be a string, but it will be parsed as a macro expression appropriate for the type of the option (string or string list). |
| Values | *Custom* | **Required (Enumeration Options)** **Optional (Boolean Options)** **Unsupported (Other)** | This is an array of [Build Option Value Definitions](#specs-buildoptionvaluedefs). This entry is **required** for *Enumeration* option types, **optional** for *boolean* types, and **unsupported** for all other option types. This entry is used to describe the set of possible values for the option, and possibly additional information about how to handle instances of that value (for example, what command line options should be used for values of that type). See [Build Option Value Definitions](#specs-buildoptionvaluedefs) for more information on the supported features. There is no technical reason we cannot support this feature for non-enumeration scalar types. Supporting that is tracked by <rdar://problem/22444795>. |
| CommandLineArgs | *Custom* | **Optional** | This is a string, string list, or dictionary describing how this option should translate to command line arguments. For the string and string list forms, they will be parsed as macro expression strings and subject to macro evaluation, then added to the command line. The macro expression may make use of `$(value)` to refer to the dynamic value of the option. For the dictionary form, the dictionary entries map possible dynamic values to the string or string list forms to substitute -- similar to the handling of `"Values"`, but only supporting definition of the command line template. Those individual forms are subject to the same macro evaluation behavior as the string or string list forms of `"CommandLineArgs"`. The value entry of `"<<otherwise>>"` is a special sentinel value that defines the behavior to be used for any dynamic value not explicitly mentioned (that is, a default behavior). If a value appears in `"CommandLineArgs"` and `"Values"`, it **may not** define a command line template form in the `"Values"` entry. For boolean types, only the `"YES"` or `"NO"` values may be defined. |
| AdditionalLinkerArgs | *Custom* | **Optional** | If present, this should be a dictionary mapping possible values to a string or string list supplying additional linker command line arguments to add if the option is enabled. |
| CommandLineFlag | String | **Optional** | If present, defines the flag to use when translating this option to command line arguments. This key **may not** be combined with `"CommandLineArgs"` or `"CommandLinePrefixFlag"`. For boolean options, the given flag will be added to the auto-generated command line if the dynamic value for the option is true. For non-boolean option types, the given flag will be added, followed by the dynamic value of the option. For list option types, there will be one instance of the flag and the item per item in the dynamic value list. In both cases, the empty string is treated as a special case and signifies that only the dynamic value should appear in the arguments (that is, no empty string is added to the arguments). |
| CommandLinePrefixFlag | String | **Optional** **Unsupported (Boolean Options)** | If present, defines the flag to use when translating this option to command line arguments. This key **may not** be combined with `"CommandLineArgs"` or `"CommandLineFlag"`. The given flag will be added as a single argument joined with the dynamic value of the option. For list option types, there will be one instance of the flag and the item per item in the dynamic value list. |
| Condition | String | **Optional** | If present, a "macro condition expression" which defines when the option should be considered active. This can be used to evaluate a more complicated set of macros to determine when the command line option should be present. |
| FileTypes | StringList | **Optional** | If present, a list of file type identifiers that define when this option should be active. |
| AppearsAfter | String | **Optional** | The name of a build option which this option should immediately succeed. This is useful for controlling the order of generated arguments and the order of options in the UI. |

Supported values for `Type`:

| Type | Description |
|------|-------------|
| Boolean | A boolean build setting. |
| Enumeration | An enumeration build setting, encoded as a string. |
| Path | A build setting which represents a path string. |
| PathList | A build setting which represents a list of path strings. |
| String | A string build setting. |
| StringList | A build setting which is a list of strings. |
| MacroString | A string build setting, which is parsed as a macro expression and subject to macro evaluation when used. |
|

> Note: For legacy compatibility, the type can be spelled in several other variants, but these should be considered the canonical forms. FIXME: Several other types are supported, but these are the primary ones.

> Note: FIXME: For `CommandLineArgs`, document string splitting semantics (are these shell-escaped strings?).

<a name="specs-buildoptionvaluedefs"></a>
### Build Option Value Definitions

Each build option value definition is a property dictionary defining information about a particular possible value for a *Boolean* or *Enumeration* build option.

The following keys are supported:

| Name | Type | Attributes | Description |
|------|------|------------|-------------|
| Value | String | **Required** | The string identifying the value. For boolean types, only the values `"YES"` and `"NO"` are allowed. |
| DisplayName | String | **Optional** | The human-readable string used to describe this value. |
| CommandLine | String | **Optional** | If present, defines a "shell"-escaped string to use when translating this option to command line arguments (if the containing option's dynamic value matches this one). The string will be broken into separate individual arguments following the shell rules, and will be subject to macro expression evaluation. The macro expression may make use of `$(value)` to refer to the dynamic value of the option. |
| CommandLineArgs | StringList | **Optional** | If present, defines a list of "shell"-escaped strings to use when translating this option to command line arguments (if the containing option's dynamic value matches this one). Each item in the string list will be broken into separate individual arguments following the shell rules, and will be subject to macro expression evaluation. The macro expression may make use of `$(value)` to refer to the dynamic value of the option. |
| CommandLineFlag | String | **Optional** | If present, defines a single command line flag to pass on the command line when this dynamic value is present for the containing option. |

<a name="specs-buildsystem"></a>
### BuildSystem Specs

> Note: FIXME: Document BuildSystem specs.

<a name="specs-tool"></a>
### Tool Specs

Command-line tool specs support the following keys, in addition to those supported by their base class ([Property Domain Specs](#specs-propertydomain)).

| Name | Type | Attributes | Description |
|------|------|------------|-------------|
| SourceFileOption | String | **Optional** | The option to pass to indicate what kind of source files are being processed. This will commonly default to `-c` for typical compilation tools, but alternate uses of tools (for example, the clang static analyzer) will use a different option. |

> Note: FIXME: There are other keys currently used by tool specs which are not yet documented, including: `ExecDescription`, `InputFileTypes`, `FileTypes`, `SynthesizeBuildRule`, and `InputFileGroupings`.

<a name="specs-generictool"></a>
### *"Generic"* Tool Specs

*"Generic"* command line tool specs (those which *do not* use a custom subclass) support additional keys which are used to drive the generic machinery.

| Name | Type | Attributes | Description |
|------|------|------------|-------------|
| CommandLine | `CommandLineStringList` | **Required** | This property defines the template which is used to construct command lines for the tool. The template should be a "command line" string list of arguments or placeholders to create the command line from. 
Each individual argument may either be a macro expression string or a placeholder expression of the form `[NAME]`. The following placeholders are supported: `[exec-path]`, `[input]`, `[inputs]`, `[options]`, `[output]`, and `[special-args]`. This property must be provided by the spec or its base spec. |
| RuleName | `CommandLineStringList` | **Required** | This property defines the template which is used to construct the "rule info" (short description) for the tool. The template should be a "command line" string list of arguments or placeholders to create the rule info. Each individual argument may either be a macro expression string or a placeholder expression of the form `[NAME]`. The following placeholders are supported: `[input]` and `[output]`. |
| ExecPath | `MacroString` | **Optional** | This property defines the name or path of the executable invoked by the command. |
| EnvironmentVariables | *Custom* | **Optional** | If defined, this property should be a dictionary mapping environment variable names (as strings) to environment variable values (as strings). The values will be subject to macro evaluation. These environment variables will be added to the default set when invoking the command. |
| IncludeInUnionedToolDefaults | `Boolean` | **Optional** **Default=true** | Specifies whether the tool's settings should be included in the unioned tool defaults which are added to all settings tables. |
| Outputs | `StringList` | **Optional** | If present, defines the names of the output files created by the tool. These values will be subject to macro evaluation. They can make use of the special macro `$(OutputPath)` to refer to the automatic output path used in phases such as the resources phase, or they can define the output purely in terms of the input macros. |
| WantsBuildSettingsInEnvironment | `Boolean` | **Optional** **Default=false** **Deprecated** | Specifies whether all of the available build settings should be pushed into the environment for use by the tool. **DO NOT** use this without talking to a member of the build system team; there are usually better ways to accomplish the same task. |
| GeneratedInfoPlistContentFilePath | `MacroString` | **Optional** | If used, specifies a macro string expression which should expand to the path of a file produced as an additional output of the command. The file is expected to be a property list containing additional content to be included in the `Info.plist` for the product being built. |

Placeholders for the `CommandLine` property:

| Placeholder | Description |
|-------------|-------------|
| `[exec-path]` | Expands to the dynamically computed path of the tool. |
| `[input]` | Expands to the path of the first input. May not be used by tools which accept multiple inputs. |
| `[options]` | Expands to the automatically generated list of command line options derived from the tool's `Properties` spec data. |
| `[special-args]` | Expands to a tool-specific list of arguments (this is an extension point for subclasses which wish to reuse the generic tool machinery). |
| `[output]` | Expands to the path of the first output. |

Placeholders for the `RuleName` property:

| Placeholder | Description |
|-------------|-------------|
| `[input]` | Expands to the path of the first input. |
| `[output]` | Expands to the path of the first output. |

<a name="specs-compiler"></a>
### Compiler Specs

Compiler specs support the following keys, in addition to those supported by their base class ([Tool Specs](#specs-tool)). 
| Name | Type | Attributes | Description | |------|------|------------|-------------| | Architectures | `StringList` | **Optional** | A specifier for the architectures the compiler supports. If omitted, the compiler is expected to support any architecture. If present, the value must be either a string list containing the exact names of real architectures which are supported, or it can be the sentinel value `$(VALID_ARCHS)` indicating that the compiler supports all current valid architectures. | > Note: FIXME: Why is that necessary? Wouldn't such a compiler just declare itself as supporting any architecture? <a name="specs-linker"></a> ### Linker Specs > Note: FIXME: Document Linker specs.
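As an end-to-end illustration of the machinery described above, a minimal hypothetical generic tool spec might look like the following. This is only a sketch in the plist syntax used by spec files; the identifier, tool name, executable, and option are all invented for illustration:

```
{   Identifier = com.example.tool.lineprocessor;
    Type = Tool;
    BasedOn = com.apple.tool.generic;
    Name = "Line Processor";
    // The command line template, using the placeholders documented above.
    CommandLine = "lineproc [options] [input] -o [output]";
    RuleName = "ProcessLines [input]";
    ExecPath = "$(LINEPROC_EXEC)";
    Outputs = ( "$(OutputPath)" );
    // A build option definition, as described under "Build Option Definitions".
    Options = (
        {   Name = "LINEPROC_STRIP_COMMENTS";
            Type = Boolean;
            DefaultValue = NO;
            CommandLineArgs = { YES = ( "--strip-comments" ); NO = (); };
        },
    );
}
```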
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/Core/xcspecs.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/Core/xcspecs.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 21666 }
# Build Debugging

Swift Build has a few facilities for debugging build problems.

## User Defaults

Currently, you can enable various debugging facilities by setting user defaults.

### Debug activity logs

```
defaults write org.swift.swift-build EnableDebugActivityLogs -bool YES
```

This will cause Swift Build to emit more detailed note diagnostics to the build log, which are deemed too verbose for normal usage.

### Incremental build debugging

```
defaults write org.swift.swift-build EnableBuildBacktraceRecording -bool YES
```

This will cause Swift Build to print build backtraces for each task to the build log. Build backtraces, read top to bottom, describe how a task was invalidated and the chain of events which triggered that invalidation. For example:

```
Build Backtrace:
an input of 'Create static library libllbuildBuildSystem.a (arm64)' changed
the producer of file '/Users/user/Library/Developer/Xcode/DerivedData/swift-build-cexiveldpggehwhajfsbrdvdsiuf/Build/Intermediates.noindex/InstallIntermediates/macosx/Intermediates.noindex/llbuild.build/Debug/llbuildBuildSystem.build/Objects-normal/arm64/BuildSystemFrontend.o' ran
an input of 'Compile BuildSystemFrontend.cpp (arm64)' changed
file '/Users/user/Development/llbuild/include/llbuild/BuildSystem/BuildSystem.h' changed
```

### Additional tracing for post-mortem analysis

```
defaults write org.swift.swift-build EnableBuildDebugging -bool YES
```

This will cause Swift Build to enable `llbuild`'s tracing feature and also to create a copy of any existing database, adjacent to the normal database location (currently `$(OBJROOT)/swift-buildData`). The database can be inspected using `llbuild`'s UI to see the contents of the database at the time the build was started. The trace file provides a *very* low-level description of what the build engine did during the build. The intention is that eventually llbuild will gain facilities to ingest the pre- and post-database files and the trace file and provide additional information about what happened.

## Debugging *llbuild*

If you ever need to inspect the contents of the llbuild database (for example, to see discovered dependencies), the *llbuild* project has a basic web UI which can be used to explore the database.

To do this, check out `llbuild/products/ui/README.md` and follow the instructions to install and start the web viewer, then give it the path (in the web UI) to the database you want to see.

## Multi-process Debugging

The service nature of Swift Build can make debugging painful when a problem only manifests while Swift Build is running as an inferior service of Xcode.

The recommended way to deal with this is by writing isolated tests that reproduce the problem directly in *SWBBuildService* (without going through IPC). This avoids the need for doing multi-process debugging in the first place, while also increasing our test coverage (and you already have a test case for the bug you want to fix).

If that fails, it is possible to manually attach to the service. Once launched, you can attach to it manually using Xcode or lldb.

If that fails, you can fall back to launching Swift Build in a mode where the service will run entirely in process with the client-side framework (on background queues). This is done by setting `XCBUILD_LAUNCH_IN_PROCESS=1` in the environment.

Xcode also supports using a custom copy of the Swift Build service by overriding `XCBBUILDSERVICE_PATH` in the launched Xcode environment. 
For example, this can be used to run with a development copy of the service using an installed set of Xcode tools: env XCBBUILDSERVICE_PATH=/path/to/SWBBuildService.bundle/Contents/MacOS/SWBBuildService \ xcodebuild --workspace .../path/to/foo.xcworkspace --scheme All
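The same override can be used when launching Xcode itself. For example, to launch Xcode against a development copy of the service (the bundle path is illustrative):

    env XCBBUILDSERVICE_PATH=/path/to/SWBBuildService.bundle/Contents/MacOS/SWBBuildService \
        /Applications/Xcode.app/Contents/MacOS/Xcode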
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/Development/build-debugging.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/Development/build-debugging.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 3757 }
# Project Tests

This document describes how to write "project tests", a set of specific types of integration tests used to test build system functionality.

## Overview

Like most software projects, Swift Build has numerous kinds of tests, including unit tests, integration tests, and performance tests. By far the most important kind of tests in the build system domain, however, are project tests. There are two main approaches to writing project tests in Swift Build, based on the different layers in the build system's overall architecture.

### Task Construction Tests

Task construction tests validate the contents of the task graph generated as part of a build's initial planning stage, including the input and output dependency edges of tasks, the command line invocation used to perform the task, and other metadata. In a task construction test, the underlying task commands are NOT run, similar in concept to dry-run builds.

### Build Operation Tests

Build operation tests encompass both the task planning stage covered by task construction tests, as well as the subsequent execution of the underlying commands associated with each task in the graph. Build operation tests are chiefly focused on validating the outputs (files) produced by a build operation rather than the structure of the task graph, though they often do some combination of the two.

### Core Qualification Tests

The intent is that these are the highest level of tests (in terms of abstraction layer). They go directly through the public API and spawn a build service process, unlike build operation tests which only test the build operation execution, or task construction tests which only test construction of the task graph but don't execute tools. These tests are focused on end-to-end experiences where exercising the communication between the client framework and the build service process adds additional coverage.

## Setup

Both task construction and build operation tests usually involve setting up a test project or workspace, which is an in-memory representation of a workspace, project, or Swift package. It's also customary to wrap the test in `withTemporaryDirectory` in order to provide a location where any files written during the test execution can be placed, and which will automatically be cleaned up once the test ends.

A minimal test case might look something like this:

```swift
func testProject() throws {
    try withTemporaryDirectory { tmpDir in
        let project = TestProject(
            "aProject",
            sourceRoot: tmpDir,
            groupTree: TestGroup(
                "SomeFiles",
                children: [
                    TestFile("Assets.xcassets"),
                    TestFile("Class.swift"),
                ]),
            buildConfigurations: [
                TestBuildConfiguration(
                    "Debug",
                    buildSettings: [
                        "PRODUCT_NAME": "$(TARGET_NAME)",
                        "PRODUCT_BUNDLE_IDENTIFIER": "com.apple.project",
                        "SWIFT_VERSION": "5.0",
                    ]),
            ],
            targets: [
                TestStandardTarget(
                    "SomeLibrary",
                    type: .framework,
                    buildConfigurations: [
                        TestBuildConfiguration("Debug"),
                    ],
                    buildPhases: [
                        TestSourcesBuildPhase([
                            "Class.swift"
                        ]),
                        TestResourcesBuildPhase([
                            TestBuildFile("Assets.xcassets"),
                        ]),
                    ]
                ),
            ])
    }
}
```

These test methods are normally placed in a test class derived from `XCTestCase` and conforming to `CoreBasedTests`.

## Evaluation

The next phase involves building the project, which requires setting up a test harness which operates on the project or workspace set up in the previous section, and allows operations to be performed on it.
For task construction tests, create a `TaskConstructionTester`:

```swift
let tester = try TaskConstructionTester(core, testProject)
```

For build operation tests, create a `BuildOperationTester`:

```swift
let tester = try BuildOperationTester(core, testProject)
```

_Both test harnesses' initializers require a `Core` object. The `CoreBasedTests` protocol provides a `getCore()` instance method which can be used to retrieve a Core instance within a test method._

The `tester` object has several methods to perform some kind of build and subsequently provide a results object to perform analysis on it. The most widely used is `checkBuild` (and `checkIndexBuild` for index builds), which accepts a build request and is common to both `TaskConstructionTester` and `BuildOperationTester`. `checkBuildDescription` and `checkNullBuild` are exclusive to `BuildOperationTester`.

For example:

```swift
await tester.checkBuild(BuildParameters(configuration: "Debug")) { results in
    // ...
}
```

All of the "check" methods can be invoked multiple times on the same `tester` object, which can be useful for testing incremental build behavior.

## Analysis

Once a build has been performed via a call to one of the "check" methods, the `results` parameter in the results analysis closure provides an opportunity to inspect the task graph and (for build operation tests) the output files produced in the file system.

There are numerous "check" methods on the `results` object, many of which are provided via protocols common to both task construction and build operation tests. The two most common families of checking methods relate to checking for build tasks and checking for diagnostic messages. They operate on a consumer model: clients are expected to call `checkTask` multiple times with various parameters to "consume" all of the tasks in a graph, finally calling `checkNoTask` to ensure that no unchecked tasks remain. The same is true for diagnostics.

#### checkNoDiagnostics

For tests validating a successful build, it's good practice to call `checkNoDiagnostics` at the beginning of the analysis closure to ensure that the build completed successfully without any warnings or errors. For tests where a build is expected to fail, call `checkError` and `checkWarning` as needed to consume the expected diagnostic messages.

```swift
await tester.checkBuild(BuildParameters(configuration: "Debug")) { results in
    results.checkError(.equal("The build encountered an error."))
    results.checkNoDiagnostics()
}
```

#### checkTask

In task construction tests, the `checkTask` and related methods check if a task merely exists in the graph. In build operation tests, these methods check if the task exists in the graph _and actually executed_. This distinction can be important for build operation tests validating incremental build behavior, which want to ensure that certain tasks did or did not run in an incremental build, based on the changes made to persistent state between builds.

The `checkTask` method accepts a `TaskCondition` object which allows searching for a task based on varying match criteria, such as its rule info array (or subparts of it), command line arguments, or associated target. It is an error if the match criteria passed to `checkTask` returns more than a single task. For this reason, `checkTask` often needs to be given a more specific (or multiple) match criteria. To search for multiple tasks at once, use the `checkTasks` overload, which instead returns an array of zero or more tasks.
If a task matching the given criteria is found, the trailing closure passed to `checkTask` will be called, and APIs to inspect various aspects of the task's state will be provided via the `task` object. The most useful attributes of a task to validate are usually its command line invocation and input and output dependency edges.

```swift
await tester.checkBuild(BuildParameters(configuration: "Debug")) { results in
    results.checkTask(.matchRuleType("Ld")) { task in
        task.checkCommandLine(["ld", "-o", "/tmp/file.dylib", "/tmp/input1.o", "/tmp/input2.o"])
        task.checkInputs([.path("/tmp/input1.o"), .path("/tmp/input2.o")])
        task.checkOutputs([.path("/tmp/file.dylib")])
    }
}
```

#### consumeTasksMatchingRuleTypes

Most tests only want to check the behavior of a handful of specific build tasks or build tasks of a given type. `consumeTasksMatchingRuleTypes` is a convenience method to consume or "skip" tasks of certain rule types that a given test is not interested in observing. By default, it skips invisible `Gate` tasks, empty directory creation tasks, and a handful of others, and can be passed a custom list of task types to skip.

```swift
await tester.checkBuild(BuildParameters(configuration: "Debug")) { results in
    results.consumeTasksMatchingRuleTypes() // ignore the default set of tasks
    results.consumeTasksMatchingRuleTypes(["Ld", "Lipo"]) // ignore tasks related to linking
}
```

#### checkTaskFollows, checkTaskDoesNotFollow

`checkTaskFollows` and `checkTaskDoesNotFollow` provide the ability to test whether two tasks have a direct or transitive dependency on one another (or not). Overloads are provided to check specific task instances, or a task instance in combination with a match criteria to identify some other matching task in the graph.

```swift
await tester.checkBuild(BuildParameters(configuration: "Debug")) { results in
    results.checkTaskFollows(task1, antecedent: task2)
    results.checkTaskFollows(task1, .matchRuleType("Ld"))
}
```
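Putting the pieces together, a sketch of a complete build operation test might look like the following. The `makeTestProject` helper is hypothetical and stands in for the `TestProject` definition shown in the Setup section, and exact signatures (such as whether `getCore()` and `withTemporaryDirectory` are async) may vary:

```swift
final class ExampleBuildOperationTests: XCTestCase, CoreBasedTests {
    func testFrameworkBuilds() async throws {
        try await withTemporaryDirectory { tmpDir in
            // Stands in for the full TestProject definition from the Setup section.
            let testProject = makeTestProject(sourceRoot: tmpDir)

            let core = try await getCore() // via CoreBasedTests; exact signature may vary
            let tester = try BuildOperationTester(core, testProject)

            await tester.checkBuild(BuildParameters(configuration: "Debug")) { results in
                // Skip gate tasks and other bookkeeping this test does not care about.
                results.consumeTasksMatchingRuleTypes()

                // A successful build should complete without warnings or errors.
                results.checkNoDiagnostics()

                // The framework binary should have been linked.
                results.checkTask(.matchRuleType("Ld")) { task in
                    // Inspect the task's command line, inputs, and outputs here.
                }
            }
        }
    }
}
```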
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/Development/test-development-project-tests.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/Development/test-development-project-tests.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 9632 }
# Test Development This document contains various advice related to unit and integration test development in Swift Build. ## Working With PIF Files from Xcode Sometimes it is useful to have a [PIF](doc:project-interchange-format) to test Swift Build independently of Xcode. To get one, you can run: xcrun xcodebuild -dumpPIF path/to/output.pif -project path/to/project.xcodeproj ## Saving Temporary Directories Many of our tests create temporary directories on the file system and automatically destroy them when complete. If you would like to not have these destroyed (so you can inspect them), you can run with `SAVE_TEMPS=1` set in the environment. ## Writing Project Tests Project tests are a kind of integration test used to test build system functionality, and one of the most important classes of test in Swift Build. <doc:test-development-project-tests>
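For example, to run a single suite of these tests while preserving their temporary directories for inspection (the test filter is illustrative, and assumes a SwiftPM-style invocation):

    SAVE_TEMPS=1 swift test --filter BuildOperationTests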
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/Development/test-development.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/Development/test-development.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 874 }
# Discovered Dependencies

Swift Build can discover dependencies from tools it runs during the build.

## Overview

The way this works is that a task must be constructed to emit a dependency file in one of two formats:

- **Makefile format**, currently used only by `clang` and `swiftc`. This is literally a make-based file of dependency info, and is also used by makefiles which are using those tools.
- **Dependency info format**, which is an Xcode-defined format described below.

Then, during task creation, the backing class for the xcspec which created the task must supply a `DependencyDataStyle` enum with the path to the file containing the dependency information.

Then llbuild will process the file in the appropriate format and use the dependencies as part of its dependency database.

## Dependency info format

A file in dependency info format is a list of `<opcode/cstring>` pairs. Each opcode is one byte followed by a variable-length, null-byte-terminated UTF-8 string.

The defined opcodes are:

| Code | Name | Description |
| ---- | ---- | ----------- |
| 0x10 | Input dependency | The string is the absolute path to a file used directly or indirectly as an input to the tool. |
| 0x11 | Missing input dependency | The string is the absolute path to a file the tool tried to use as an input, but which could not be found. This is useful to observe that a location in a search path was examined but no match was found, so that if a file is placed there (and no other changes were made), a later rebuild will notice that it is now present. |
| 0x40 | Output dependency | An output file generated by the tool. |

There are presently some limitations in Swift Build in processing these codes. Specifically:

- **Missing input dependency** is presently unimplemented in Swift Build/llbuild. This was due to concerns that the dependency file would become large and expensive to process and handle. We have no plans to implement this code.
- **Output dependency** is not yet implemented in Swift Build/llbuild.
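To make the encoding concrete, here is a minimal sketch of a reader for this format. This is illustrative rather than an API in Swift Build, it covers only the opcodes documented above, and real files may contain additional opcodes (such as a leading version record):

```swift
import Foundation

// The opcodes documented above.
enum DependencyInfoOpcode: UInt8 {
    case input = 0x10
    case missingInput = 0x11
    case output = 0x40
}

/// Parses a dependency info file into (opcode, path) pairs.
/// Entries with opcodes not listed above are still returned, with their raw byte value.
func parseDependencyInfo(_ data: Data) -> [(opcode: UInt8, path: String)] {
    var entries: [(UInt8, String)] = []
    var index = data.startIndex
    while index < data.endIndex {
        let opcode = data[index]
        index = data.index(after: index)
        // Each opcode is followed by a null-terminated UTF-8 string.
        guard let terminator = data[index...].firstIndex(of: 0) else { break }
        let path = String(decoding: data[index..<terminator], as: UTF8.self)
        entries.append((opcode, path))
        index = data.index(after: terminator)
    }
    return entries
}
```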
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/TaskConstruction/discovered-dependencies.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/TaskConstruction/discovered-dependencies.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 2026 }
# Mergeable Libraries

Building dynamic libraries which can be merged into another binary to improve runtime performance.

## Overview

Mergeable libraries is a feature of the Apple linker (introduced in Xcode 15.0) which enables a dynamic library to be created as "mergeable", containing additional metadata so that it can be merged into another binary (similar to linking against a static library with `-all_load`), or still be treated as a dynamic library.

This feature was written to support an application-space equivalent to the `dyld` shared cache in the OS, with the goal of providing a launch time improvement and/or a smaller memory footprint. (At time of writing the actual benefits are still being examined.)

## Automatically merged binaries

The main workflow to use mergeable libraries is to create an **automatically merged binary** target by enabling the build setting `AUTOMATICALLY_MERGE_DEPENDENCIES` on a framework or app target. (The framework form has sometimes been called an "umbrella framework", but this is unrelated to the existing umbrella framework concept already present in Apple platforms (Cocoa.framework, etc.)).

Enabling this setting will behave differently in debug and release builds. (A "debug build" here is defined as one where either the `clang` or the `swiftc` optimization level is "unoptimized" (`-O0` and `-Onone`, respectively), and a release build is anything else. This is represented by a new build setting `IS_UNOPTIMIZED_BUILD`.)

In release builds:

* Immediate framework and dylib target dependencies of the merged binary target will be built as mergeable. (This trait does _not_ transitively carry over to their dependencies.)
* Those mergeable libraries will be merged into the merged binary.
* When those mergeable target products are embedded in either the merged binary product, or into a product containing the merged binary product (e.g. an app containing a merged binary framework), then their binaries will not be included in the embedded copy.

In debug builds:

* Immediate target dependencies of the merged binary target will be built normally, not as mergeable.
* The merged binary target will link dylibs produced by those dependencies to be reexported.
* When those target dependency products are embedded in either the merged binary product, or into a product containing the merged binary product, then their binaries will not be included in the embedded copy. (I.e., same as the release case.)
* Those target dependency products will also be copied into a special location in the merged binary product, but containing only their binary, and the merged binary product will have an additional `@rpath` into that location.

The goal of the debug workflow is to imitate the release workflow, but without the additional cost of creating the mergeable metadata and performing the merge of all of the libraries. This could someday change if the overhead of that work is determined to be small enough.

This imitative behavior is intended to prevent developers from accidentally adding a dependency in their project to one of the mergeable libraries, and then wondering why that dependency causes the app to crash in release builds because the mergeable library is missing (since its content was merged into the merged binary).

## Implementation

Enabling `AUTOMATICALLY_MERGE_DEPENDENCIES` on a target does two things:

1. It enables `MERGEABLE_LIBRARY` on any immediate target dependencies of the target.
2. It enables `MERGE_LINKED_LIBRARIES` on the merged binary target. 
Enabling `MERGEABLE_LIBRARY` on the dependencies is performed using the `superimposedParameters` property of `SpecializationParameters` in `DependencyResolution`. `TargetDependencyResolver.computeGraph()` applies these `superimposedParameters` to all qualifying instances of the target in the dependency closure, eliminating duplicates as a result of this imposition. `MERGE_LINKED_LIBRARIES` changes how those libraries are linked and embedded, as discussed below. A target which has `MERGEABLE_LIBRARY` enabled will also have `MAKE_MERGEABLE` enabled if this is a release build and the target has a `MACH_O_TYPE` of `mh_dylib`. It has no intrinsic effect on the target in a debug build, but is used by other targets to identify that the library should be treated as a mergeable library, whether or not is was built as mergeable. A target which has `MAKE_MERGEABLE` enabled will be linked to add the mergeable metadata, by passing `-make_mergeable` to the linker. A target which has `MERGE_LINKED_LIBRARIES` enabled will do several things. First, it will link its dependencies differently: * Release build workflow: It will merge the products of any of its immediate dependencies which have `MAKE_MERGEABLE` enabled using the linker flags `-merge_framework`, `-merge-l` and similar. * Debug build workflow: Any of its immediate dependencies which have `MERGEABLE_LIBRARY` enabled but _not_ `MAKE_MERGEABLE` will be linked with the linker flags `-reexport_framework`, `-reexport-l`, and similar. * Targets without `MERGEABLE_LIBRARY` enabled will be linked normally (appropriate for static libraries, or dynamic libraries not to be built as mergeable). Second, when a mergeable library is embedded into either the merged binary product itself (e.g., an app or a framework), or into another product which is itself embedding the merged binary product (e.g., an app embedding a merged binary framework), the embedded product will not include the binary. This is done with `PBXCp`'s `-exclude_subpath` option. Finally, in the debug workflow, a mergeable library which does not have `MAKE_MERGEABLE` enabled will additionally be copied into a special directory in the merged binary product, but that copy will contain only the binary (and necessary code signing metadata). This is so the merged binary product can find these mergeable libraries - since they're not actually being merged into it - but nothing else can accidentally do so. When a mergeable library is re-signed after being copied (and having either its binary removed, or having everything but its binary removed), `codesign` will be passed the `--generate-pre-encrypt-hashes` option, to force it to have a signature format compatible with recent iOS releases. (This behavior is something of a hack, and a more dedicated option to make codeless bundles work may be added to `codesign` in the future.) ## Manual creation & use The "automatic" creation of a merged binary might not be what all projects want. If a project wants to manually select which dependencies are merged in, enable `MERGE_LINKED_LIBRARIES` on the merged binary target, and `MERGEABLE_LIBRARY` for only those dependencies which should be merged in. This selective control can also be useful to exercise the mergeable workflow in debug builds, by setting `MAKE_MERGEABLE` in addition to `MERGEABLE_LIBRARY` on targets in their debug configurations. ## Known issues Vendored mergeable libraries are not yet supported, as either XCFrameworks or as standalone vendored frameworks. 
<rdar://105298026> There is not a way to build a target as mergeable while having a target which depends on it (and which has `MERGE_LINKED_LIBRARIES` enabled) _not_ merge it in, but instead treat it as an ordinary dylib.
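For reference, the manual workflow described under "Manual creation & use" can be sketched as xcconfig fragments. Which targets receive which settings is illustrative; the setting names are the ones documented above:

```
// On the merged binary target (e.g. the app or "umbrella" framework):
MERGE_LINKED_LIBRARIES = YES

// On each dependency that should be merged in:
MERGEABLE_LIBRARY = YES

// Optionally, on those dependencies' debug configurations, to exercise
// the actual merge in debug builds as well:
MAKE_MERGEABLE = YES
```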
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/TaskConstruction/mergeable-libraries.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/TaskConstruction/mergeable-libraries.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 7257 }
# Mutable Outputs One of the behaviors which makes Swift Build more complicated than other command line build systems is that we need to support a number of commands which mutate their input files. Some examples of these commands: 1. `strip` will remove symbols from a linked binary. 2. `codesign` may rewrite the binary inside of an input directory, or may add additional files to the directory (_CodeSignature subdirectories). 3. `touch` may be used to update a timestamp. 4. `chmod`/`chown` may be used to change ownership or permissions on a file. These changes are difficult for typical build systems to model well, since they store the actual output content in the filesystem, so when a command mutates the filesystem it inherently discards the previous data. That is problematic if the build system was expecting to be able to look at that data to infer what the current state is (for incremental build purposes). The ultimate goal would be to move the storage of the output data out of the filesystem so that this situation is no longer a special case, but that is a complex long-term project which Swift Build/llbuild do not yet have support for. Instead, we support mutable outputs in the following manner: 1. During task construction, we require any command which mutates its input to report the exact same node as an input and an output. We also require such commands to provide an additional virtual output node which can be used for forcing the ordering of the command (see below). 2. During build description construction, we analyze and rewrite the tasks as follows: * We currently infer "mutating" commands via analyzing the inputs and outputs. FIXME: Should we make this explicit? * We collect the complete set of mutating commands and mutated files. * Once we have found the mutating commands, and thus the set of nodes which are mutated, we find the original producer of the file. * We rewrite the graph to use "command triggers" between the producer of a node and the downstream mutators (in order, see below). Since the downstream commands cannot rely on checking the filesystem state of the input node in order to properly determine when to rebuild, they use a mechanism whereby they will be rerun whenever the upstream command runs. ### Mutator Virtual Output Nodes Mutating commands are required to produce a virtual output node representing the command because their outputs are going to be rewritten to *not* include the actual mutated output (since otherwise this would look like multiple definitions of an output node to llbuild). When that happens, there must be some other output node present in order for the build phase ordering implementation to force the proper ordering of such commands. Without this, some commands might end up with no outputs whatsoever and never be caused to run. We could in theory synthesize this node automatically during build description construction, but that would be much more complicated (since it would also need to rewrite the build phase ordering edges). ### Order of Mutators We need to determine the order to run each of the mutating commands. We do this by requiring that task construction will have arranged for some other strongly ordered relation between the commands (this is typically a gate producer for build phase honoring purposes), and then we infer the serial order by (effectively) doing a topological sort of the mutating commands. 
In practice, what we expect to happen is that all mutating commands are strongly ordered by the gate tasks introduced by the build phases, since almost all of the mutation is introduced by product postprocessing commands.

### Additional Virtual Outputs

There are several cases where build phases do not suffice to allow the ordering described above to be enforced. In those cases, we make this work by ensuring that there is an artificial edge using virtual nodes between the tasks. For example, the linker command generates an extra virtual output node we provide as an input to the dSYM generation task, since they would otherwise be unordered (ignoring the actual mutated file, that is).

We also sometimes need to add additional virtual output nodes to tasks which would otherwise not be orderable, for example the MkDir tasks.

### Unsolved Problems

We don't yet have a working strategy for directory outputs: we have no support for them, and no way to communicate to llbuild what the directory outputs of a command are.

Downstream edges of the output have to have some other order imposed between the ultimate mutator and the consumer. Fortunately, we can fix this at the build description level by adding a synthetic edge, though in practice this probably doesn't need to be implemented.
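For concreteness, the task construction convention described at the top of this document, in which a mutating command reports the same node as both an input and an output plus an additional virtual output node for ordering, can be sketched as follows (the paths and rule name are illustrative):

```
Task: CodeSign /build/Debug/Foo.framework
  inputs:  /build/Debug/Foo.framework              // the node being mutated
  outputs: /build/Debug/Foo.framework              // same node: marks the command as mutating
           <CodeSign /build/Debug/Foo.framework>   // virtual output node used for ordering
```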
{ "source": "swiftlang/swift-build", "title": "SwiftBuild.docc/TaskConstruction/mutable-outputs.md", "url": "https://github.com/swiftlang/swift-build/blob/main/SwiftBuild.docc/TaskConstruction/mutable-outputs.md", "date": "2025-01-28T18:53:28", "stars": 1914, "description": "A high-level build system based on llbuild, used by Xcode, Swift Playground, and the Swift Package Manager", "file_size": 4809 }
# Gemini Search A Perplexity-style search engine powered by Google's Gemini 2.0 Flash model with grounding through Google Search. Get AI-powered answers to your questions with real-time web sources and citations. Created by [@ammaar](https://x.com/ammaar) ![Kapture 2025-01-04 at 14 35 14](https://github.com/user-attachments/assets/2302898e-03ae-40a6-a16c-301d6b91c5af) ## Features - 🔍 Real-time web search integration - 🤖 Powered by Google's latest Gemini 2.0 Flash model - 📚 Source citations and references for answers - 💬 Follow-up questions in the same chat session - 🎨 Clean, modern UI inspired by Perplexity - ⚡ Fast response times ## Tech Stack - Frontend: React + Vite + TypeScript + Tailwind CSS - Backend: Express.js + TypeScript - AI: Google Gemini 2.0 Flash API - Search: Google Search API integration ## Setup ### Prerequisites - Node.js (v18 or higher recommended) - npm or yarn - A Google API key with access to Gemini API ### Installation 1. Clone the repository: ```bash git clone https://github.com/ammaarreshi/Gemini-Search.git cd Gemini-Search ``` 2. Install dependencies: ```bash npm install ``` 3. Create a `.env` file in the root directory: ``` GOOGLE_API_KEY=your_api_key_here ``` 4. Start the development server: ```bash npm run dev ``` 5. Open your browser and navigate to: ``` http://localhost:3000 ``` ## Environment Variables - `GOOGLE_API_KEY`: Your Google API key with access to Gemini API - `NODE_ENV`: Set to "development" by default, use "production" for production builds ## Development - `npm run dev`: Start the development server - `npm run build`: Build for production - `npm run start`: Run the production server - `npm run check`: Run TypeScript type checking ## Security Notes - Never commit your `.env` file or expose your API keys - The `.gitignore` file is configured to exclude sensitive files - If you fork this repository, make sure to use your own API keys ## License MIT License - feel free to use this code for your own projects! ## Acknowledgments - Inspired by [Perplexity](https://www.perplexity.ai/) - Built with [Google's Gemini API](https://ai.google.dev/) - UI components from [shadcn/ui](https://ui.shadcn.com/)
{ "source": "ammaarreshi/Gemini-Search", "title": "README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 2250 }
1.3.8 / 2022-02-02 ================== * deps: mime-types@~2.1.34 - deps: mime-db@~1.51.0 * deps: [email protected] 1.3.7 / 2019-04-29 ================== * deps: [email protected] - Fix sorting charset, encoding, and language with extra parameters 1.3.6 / 2019-04-28 ================== * deps: mime-types@~2.1.24 - deps: mime-db@~1.40.0 1.3.5 / 2018-02-28 ================== * deps: mime-types@~2.1.18 - deps: mime-db@~1.33.0 1.3.4 / 2017-08-22 ================== * deps: mime-types@~2.1.16 - deps: mime-db@~1.29.0 1.3.3 / 2016-05-02 ================== * deps: mime-types@~2.1.11 - deps: mime-db@~1.23.0 * deps: [email protected] - perf: improve `Accept` parsing speed - perf: improve `Accept-Charset` parsing speed - perf: improve `Accept-Encoding` parsing speed - perf: improve `Accept-Language` parsing speed 1.3.2 / 2016-03-08 ================== * deps: mime-types@~2.1.10 - Fix extension of `application/dash+xml` - Update primary extension for `audio/mp4` - deps: mime-db@~1.22.0 1.3.1 / 2016-01-19 ================== * deps: mime-types@~2.1.9 - deps: mime-db@~1.21.0 1.3.0 / 2015-09-29 ================== * deps: mime-types@~2.1.7 - deps: mime-db@~1.19.0 * deps: [email protected] - Fix including type extensions in parameters in `Accept` parsing - Fix parsing `Accept` parameters with quoted equals - Fix parsing `Accept` parameters with quoted semicolons - Lazy-load modules from main entry point - perf: delay type concatenation until needed - perf: enable strict mode - perf: hoist regular expressions - perf: remove closures getting spec properties - perf: remove a closure from media type parsing - perf: remove property delete from media type parsing 1.2.13 / 2015-09-06 =================== * deps: mime-types@~2.1.6 - deps: mime-db@~1.18.0 1.2.12 / 2015-07-30 =================== * deps: mime-types@~2.1.4 - deps: mime-db@~1.16.0 1.2.11 / 2015-07-16 =================== * deps: mime-types@~2.1.3 - deps: mime-db@~1.15.0 1.2.10 / 2015-07-01 =================== * deps: mime-types@~2.1.2 - deps: mime-db@~1.14.0 1.2.9 / 2015-06-08 ================== * deps: mime-types@~2.1.1 - perf: fix deopt during mapping 1.2.8 / 2015-06-07 ================== * deps: mime-types@~2.1.0 - deps: mime-db@~1.13.0 * perf: avoid argument reassignment & argument slice * perf: avoid negotiator recursive construction * perf: enable strict mode * perf: remove unnecessary bitwise operator 1.2.7 / 2015-05-10 ================== * deps: [email protected] - Fix media type parameter matching to be case-insensitive 1.2.6 / 2015-05-07 ================== * deps: mime-types@~2.0.11 - deps: mime-db@~1.9.1 * deps: [email protected] - Fix comparing media types with quoted values - Fix splitting media types with quoted commas 1.2.5 / 2015-03-13 ================== * deps: mime-types@~2.0.10 - deps: mime-db@~1.8.0 1.2.4 / 2015-02-14 ================== * Support Node.js 0.6 * deps: mime-types@~2.0.9 - deps: mime-db@~1.7.0 * deps: [email protected] - Fix preference sorting to be stable for long acceptable lists 1.2.3 / 2015-01-31 ================== * deps: mime-types@~2.0.8 - deps: mime-db@~1.6.0 1.2.2 / 2014-12-30 ================== * deps: mime-types@~2.0.7 - deps: mime-db@~1.5.0 1.2.1 / 2014-12-30 ================== * deps: mime-types@~2.0.5 - deps: mime-db@~1.3.1 1.2.0 / 2014-12-19 ================== * deps: [email protected] - Fix list return order when large accepted list - Fix missing identity encoding when q=0 exists - Remove dynamic building of Negotiator class 1.1.4 / 2014-12-10 ================== * deps: mime-types@~2.0.4 - deps: mime-db@~1.3.0 
1.1.3 / 2014-11-09 ================== * deps: mime-types@~2.0.3 - deps: mime-db@~1.2.0 1.1.2 / 2014-10-14 ================== * deps: [email protected] - Fix error when media type has invalid parameter 1.1.1 / 2014-09-28 ================== * deps: mime-types@~2.0.2 - deps: mime-db@~1.1.0 * deps: [email protected] - Fix all negotiations to be case-insensitive - Stable sort preferences of same quality according to client order 1.1.0 / 2014-09-02 ================== * update `mime-types` 1.0.7 / 2014-07-04 ================== * Fix wrong type returned from `type` when match after unknown extension 1.0.6 / 2014-06-24 ================== * deps: [email protected] 1.0.5 / 2014-06-20 ================== * fix crash when unknown extension given 1.0.4 / 2014-06-19 ================== * use `mime-types` 1.0.3 / 2014-06-11 ================== * deps: [email protected] - Order by specificity when quality is the same 1.0.2 / 2014-05-29 ================== * Fix interpretation when header not in request * deps: pin [email protected] 1.0.1 / 2014-01-18 ================== * Identity encoding isn't always acceptable * deps: negotiator@~0.4.0 1.0.0 / 2013-12-27 ================== * Genesis
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/accepts/HISTORY.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/accepts/HISTORY.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 5095 }
# accepts

[![NPM Version][npm-version-image]][npm-url]
[![NPM Downloads][npm-downloads-image]][npm-url]
[![Node.js Version][node-version-image]][node-version-url]
[![Build Status][github-actions-ci-image]][github-actions-ci-url]
[![Test Coverage][coveralls-image]][coveralls-url]

Higher level content negotiation based on [negotiator](https://www.npmjs.com/package/negotiator). Extracted from [koa](https://www.npmjs.com/package/koa) for general use.

In addition to negotiator, it allows:

- Allows types as an array or arguments list, i.e. `(['text/html', 'application/json'])` as well as `('text/html', 'application/json')`.
- Allows type shorthands such as `json`.
- Returns `false` when no types match
- Treats non-existent headers as `*`

## Installation

This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally):

```sh
$ npm install accepts
```

## API

```js
var accepts = require('accepts')
```

### accepts(req)

Create a new `Accepts` object for the given `req`.

#### .charset(charsets)

Return the first accepted charset. If nothing in `charsets` is accepted, then `false` is returned.

#### .charsets()

Return the charsets that the request accepts, in the order of the client's preference (most preferred first).

#### .encoding(encodings)

Return the first accepted encoding. If nothing in `encodings` is accepted, then `false` is returned.

#### .encodings()

Return the encodings that the request accepts, in the order of the client's preference (most preferred first).

#### .language(languages)

Return the first accepted language. If nothing in `languages` is accepted, then `false` is returned.

#### .languages()

Return the languages that the request accepts, in the order of the client's preference (most preferred first).

#### .type(types)

Return the first accepted type (and it is returned as the same text as what appears in the `types` array). If nothing in `types` is accepted, then `false` is returned.

The `types` array can contain full MIME types or file extensions. Any value that is not a full MIME type is passed to `require('mime-types').lookup`.

#### .types()

Return the types that the request accepts, in the order of the client's preference (most preferred first).

## Examples

### Simple type negotiation

This simple example shows how to use `accepts` to return a differently typed response body based on what the client wants to accept. The server lists its preferences in order and will get back the best match between the client and server. 
```js var accepts = require('accepts') var http = require('http') function app (req, res) { var accept = accepts(req) // the order of this list is significant; should be server preferred order switch (accept.type(['json', 'html'])) { case 'json': res.setHeader('Content-Type', 'application/json') res.write('{"hello":"world!"}') break case 'html': res.setHeader('Content-Type', 'text/html') res.write('<b>hello, world!</b>') break default: // the fallback is text/plain, so no need to specify it above res.setHeader('Content-Type', 'text/plain') res.write('hello, world!') break } res.end() } http.createServer(app).listen(3000) ``` You can test this out with the cURL program: ```sh curl -I -H'Accept: text/html' http://localhost:3000/ ``` ## License [MIT](LICENSE) [coveralls-image]: https://badgen.net/coveralls/c/github/jshttp/accepts/master [coveralls-url]: https://coveralls.io/r/jshttp/accepts?branch=master [github-actions-ci-image]: https://badgen.net/github/checks/jshttp/accepts/master?label=ci [github-actions-ci-url]: https://github.com/jshttp/accepts/actions/workflows/ci.yml [node-version-image]: https://badgen.net/npm/node/accepts [node-version-url]: https://nodejs.org/en/download [npm-downloads-image]: https://badgen.net/npm/dm/accepts [npm-url]: https://npmjs.org/package/accepts [npm-version-image]: https://badgen.net/npm/v/accepts
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/accepts/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/accepts/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 4122 }
# ansi-regex > Regular expression for matching [ANSI escape codes](https://en.wikipedia.org/wiki/ANSI_escape_code) ## Install ```sh npm install ansi-regex ``` ## Usage ```js import ansiRegex from 'ansi-regex'; ansiRegex().test('\u001B[4mcake\u001B[0m'); //=> true ansiRegex().test('cake'); //=> false '\u001B[4mcake\u001B[0m'.match(ansiRegex()); //=> ['\u001B[4m', '\u001B[0m'] '\u001B[4mcake\u001B[0m'.match(ansiRegex({onlyFirst: true})); //=> ['\u001B[4m'] '\u001B]8;;https://github.com\u0007click\u001B]8;;\u0007'.match(ansiRegex()); //=> ['\u001B]8;;https://github.com\u0007', '\u001B]8;;\u0007'] ``` ## API ### ansiRegex(options?) Returns a regex for matching ANSI escape codes. #### options Type: `object` ##### onlyFirst Type: `boolean`\ Default: `false` *(Matches any ANSI escape codes in a string)* Match only the first ANSI escape. ## FAQ ### Why do you test for codes not in the ECMA 48 standard? Some of the codes we run as a test are codes that we acquired finding various lists of non-standard or manufacturer specific codes. We test for both standard and non-standard codes, as most of them follow the same or similar format and can be safely matched in strings without the risk of removing actual string content. There are a few non-standard control codes that do not follow the traditional format (i.e. they end in numbers) thus forcing us to exclude them from the test because we cannot reliably match them. On the historical side, those ECMA standards were established in the early 90's whereas the VT100, for example, was designed in the mid/late 70's. At that point in time, control codes were still pretty ungoverned and engineers used them for a multitude of things, namely to activate hardware ports that may have been proprietary. Somewhere else you see a similar 'anarchy' of codes is in the x86 architecture for processors; there are a ton of "interrupts" that can mean different things on certain brands of processors, most of which have been phased out. ## Maintainers - [Sindre Sorhus](https://github.com/sindresorhus) - [Josh Junon](https://github.com/qix-)
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/ansi-regex/readme.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/ansi-regex/readme.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 2112 }
# ansi-styles > [ANSI escape codes](https://en.wikipedia.org/wiki/ANSI_escape_code#Colors_and_Styles) for styling strings in the terminal You probably want the higher-level [chalk](https://github.com/chalk/chalk) module for styling your strings. ![](screenshot.png) ## Install ```sh npm install ansi-styles ``` ## Usage ```js import styles from 'ansi-styles'; console.log(`${styles.green.open}Hello world!${styles.green.close}`); // Color conversion between 256/truecolor // NOTE: When converting from truecolor to 256 colors, the original color // may be degraded to fit the new color palette. This means terminals // that do not support 16 million colors will best-match the // original color. console.log(`${styles.color.ansi(styles.rgbToAnsi(199, 20, 250))}Hello World${styles.color.close}`) console.log(`${styles.color.ansi256(styles.rgbToAnsi256(199, 20, 250))}Hello World${styles.color.close}`) console.log(`${styles.color.ansi16m(...styles.hexToRgb('#abcdef'))}Hello World${styles.color.close}`) ``` ## API ### `open` and `close` Each style has an `open` and `close` property. ### `modifierNames`, `foregroundColorNames`, `backgroundColorNames`, and `colorNames` All supported style strings are exposed as an array of strings for convenience. `colorNames` is the combination of `foregroundColorNames` and `backgroundColorNames`. This can be useful if you need to validate input: ```js import {modifierNames, foregroundColorNames} from 'ansi-styles'; console.log(modifierNames.includes('bold')); //=> true console.log(foregroundColorNames.includes('pink')); //=> false ``` ## Styles ### Modifiers - `reset` - `bold` - `dim` - `italic` *(Not widely supported)* - `underline` - `overline` *Supported on VTE-based terminals, the GNOME terminal, mintty, and Git Bash.* - `inverse` - `hidden` - `strikethrough` *(Not widely supported)* ### Colors - `black` - `red` - `green` - `yellow` - `blue` - `magenta` - `cyan` - `white` - `blackBright` (alias: `gray`, `grey`) - `redBright` - `greenBright` - `yellowBright` - `blueBright` - `magentaBright` - `cyanBright` - `whiteBright` ### Background colors - `bgBlack` - `bgRed` - `bgGreen` - `bgYellow` - `bgBlue` - `bgMagenta` - `bgCyan` - `bgWhite` - `bgBlackBright` (alias: `bgGray`, `bgGrey`) - `bgRedBright` - `bgGreenBright` - `bgYellowBright` - `bgBlueBright` - `bgMagentaBright` - `bgCyanBright` - `bgWhiteBright` ## Advanced usage By default, you get a map of styles, but the styles are also available as groups. They are non-enumerable so they don't show up unless you access them explicitly. This makes it easier to expose only a subset in a higher-level module. - `styles.modifier` - `styles.color` - `styles.bgColor` ###### Example ```js import styles from 'ansi-styles'; console.log(styles.color.green.open); ``` Raw escape codes (i.e. without the CSI escape prefix `\u001B[` and render mode postfix `m`) are available under `styles.codes`, which returns a `Map` with the open codes as keys and close codes as values. ###### Example ```js import styles from 'ansi-styles'; console.log(styles.codes.get(36)); //=> 39 ``` ## 16 / 256 / 16 million (TrueColor) support `ansi-styles` allows converting between various color formats and ANSI escapes, with support for 16, 256 and [16 million colors](https://gist.github.com/XVilka/8346728). 
The following color spaces are supported: - `rgb` - `hex` - `ansi256` - `ansi` To use these, call the associated conversion function with the intended output, for example: ```js import styles from 'ansi-styles'; styles.color.ansi(styles.rgbToAnsi(100, 200, 15)); // RGB to 16 color ansi foreground code styles.bgColor.ansi(styles.hexToAnsi('#C0FFEE')); // HEX to 16 color ansi foreground code styles.color.ansi256(styles.rgbToAnsi256(100, 200, 15)); // RGB to 256 color ansi foreground code styles.bgColor.ansi256(styles.hexToAnsi256('#C0FFEE')); // HEX to 256 color ansi foreground code styles.color.ansi16m(100, 200, 15); // RGB to 16 million color foreground code styles.bgColor.ansi16m(...styles.hexToRgb('#C0FFEE')); // Hex (RGB) to 16 million color foreground code ``` ## Related - [ansi-escapes](https://github.com/sindresorhus/ansi-escapes) - ANSI escape codes for manipulating the terminal ## Maintainers - [Sindre Sorhus](https://github.com/sindresorhus) - [Josh Junon](https://github.com/qix-) ## For enterprise Available as part of the Tidelift Subscription. The maintainers of `ansi-styles` and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. [Learn more.](https://tidelift.com/subscription/pkg/npm-ansi-styles?utm_source=npm-ansi-styles&utm_medium=referral&utm_campaign=enterprise&utm_term=repo)
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/ansi-styles/readme.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/ansi-styles/readme.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 4907 }
## Any Promise

[![Build Status](https://secure.travis-ci.org/kevinbeaty/any-promise.svg)](http://travis-ci.org/kevinbeaty/any-promise)

Let your library support any ES 2015 (ES6) compatible `Promise` and leave the choice to application authors. The application can *optionally* register its preferred `Promise` implementation and it will be exported when requiring `any-promise` from library code.

If no preference is registered, defaults to the global `Promise` for newer Node.js versions. The browser version defaults to the window `Promise`, so polyfill or register as necessary.

### Usage with global Promise:

Assuming the global `Promise` is the desired implementation:

```bash
# Install any libraries depending on any-promise
$ npm install mz
```

The installed libraries will use the global Promise by default.

```js
// in library
var Promise = require('any-promise')  // the global Promise

function promiseReturningFunction(){
  return new Promise(function(resolve, reject){...})
}
```

### Usage with registration:

Assuming `bluebird` is the desired Promise implementation:

```bash
# Install preferred promise library
$ npm install bluebird
# Install any-promise to allow registration
$ npm install any-promise
# Install any libraries you would like to use depending on any-promise
$ npm install mz
```

Register your preference in the application entry point before any other `require` of packages that load `any-promise`:

```javascript
// top of application index.js or other entry point
require('any-promise/register/bluebird')

// -or- Equivalent to above, but allows customization of Promise library
require('any-promise/register')('bluebird', {Promise: require('bluebird')})
```

Now that the implementation is registered, you can use any package depending on `any-promise`:

```javascript
var fsp = require('mz/fs') // mz/fs will use registered bluebird promises
var Promise = require('any-promise')  // the registered bluebird promise
```

It is safe to call `register` multiple times, but it must always be with the same implementation.

Again, registration is *optional*. It should only be called by the application user if overriding the global `Promise` implementation is desired.

### Optional Application Registration

As an application author, you can *optionally* register a preferred `Promise` implementation on application startup, before any call to `require('any-promise')` (by you or required packages). Only one implementation can be registered, and this registration would typically occur at the top of the application entry point.

#### Registration shortcuts

If you are using a known `Promise` implementation, you can register your preference with a shortcut:

```js
require('any-promise/register/bluebird')
// -or-
import 'any-promise/register/q';
```

Shortcut registration is the preferred registration method as it works in the browser and Node.js. It is also convenient for using with `import` and many test runners that offer a `--require` flag:

```
$ ava --require=any-promise/register/bluebird test.js
```

Current known implementations include `bluebird`, `q`, `when`, `rsvp`, `es6-promise`, `promise`, `native-promise-only`, `pinkie`, `vow` and `lie`. If you are not using a known implementation, you can use another registration method described below.

#### Basic Registration

As an alternative to registration shortcuts, you can call the `register` function with the preferred `Promise` implementation.
The benefit of this approach is that a `Promise` library can be required by name without being a known implementation. This approach does NOT work in the browser. To use `any-promise` in the browser, use either registration shortcuts or specify the `Promise` constructor using advanced registration (see below).

```javascript
require('any-promise/register')('when')
// -or- require('any-promise/register')('any other ES6 compatible library (known or otherwise)')
```

This registration method will try to detect the `Promise` constructor from requiring the specified implementation. If you would like to specify your own constructor, see advanced registration.

#### Advanced Registration

To use the browser version, you should either install a polyfill or explicitly register the `Promise` constructor:

```javascript
require('any-promise/register')('bluebird', {Promise: require('bluebird')})
```

This could also be used for registering a custom `Promise` implementation or subclass.

Your preference will be registered globally, allowing a single registration even if multiple versions of `any-promise` are installed in the NPM dependency tree or are using multiple bundled JavaScript files in the browser. You can bypass this global registration in options:

```javascript
require('../register')('es6-promise', {Promise: require('es6-promise').Promise, global: false})
```

### Library Usage

To use any `Promise` constructor, simply require it:

```javascript
var Promise = require('any-promise');

return Promise
  .all([xf, f, init, coll])
  .then(fn);

return new Promise(function(resolve, reject){
  try {
    resolve(item);
  } catch(e){
    reject(e);
  }
});
```

Except as noted below, libraries using `any-promise` should only use [documented](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) functions, as there is no guarantee which implementation will be chosen by the application author. Libraries should never call `register`; only the application user should call it, if desired.

#### Advanced Library Usage

If your library needs to branch code based on the registered implementation, you can retrieve it using `var impl = require('any-promise/implementation')`, where `impl` will be the package name (`"bluebird"`, `"when"`, etc.) if registered, `"global.Promise"` if using the global version on Node.js, or `"window.Promise"` if using the browser version. You should always include a default case, as there is no guarantee what package may be registered (see the sketch after the Related section below).

### Support for old Node.js versions

Node.js versions prior to `v0.12` may have contained buggy versions of the global `Promise`. For this reason, the global `Promise` is not loaded automatically for these old versions. If using `any-promise` in Node.js versions `<= v0.12`, the user should register a desired implementation.

If an implementation is not registered, `any-promise` will attempt to discover an installed `Promise` implementation. If no implementation can be found, an error will be thrown on `require('any-promise')`. While the auto-discovery usually avoids errors, it is non-deterministic. It is recommended that the user always register a preferred implementation for older Node.js versions.

This auto-discovery is only available for Node.js versions prior to `v0.12`. Any newer versions will always default to the global `Promise` implementation.

### Related

- [any-observable](https://github.com/sindresorhus/any-observable) - `any-promise` for Observables.
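As an illustration of the Advanced Library Usage section above, here is a minimal sketch of branching on the registered implementation; the cases shown are examples only, not an exhaustive list of what may be registered:

```javascript
var impl = require('any-promise/implementation')

switch (impl) {
  case 'bluebird':
    // safe to rely on bluebird-specific helpers here
    break
  case 'global.Promise':
  case 'window.Promise':
    // native Promise: stick to standard, documented methods
    break
  default:
    // always include a default case: any package may be registered
    break
}
```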
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/any-promise/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/any-promise/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 7064 }
anymatch [![Build Status](https://travis-ci.org/micromatch/anymatch.svg?branch=master)](https://travis-ci.org/micromatch/anymatch) [![Coverage Status](https://img.shields.io/coveralls/micromatch/anymatch.svg?branch=master)](https://coveralls.io/r/micromatch/anymatch?branch=master)
======

JavaScript module to match a string against a regular expression, glob, string, or function that takes the string as an argument and returns a truthy or falsy value. The matcher can also be an array of any or all of these. Useful for allowing a very flexible user-defined config to define things like file paths.

__Note: This module has Bash-parity, please be aware that Windows-style backslashes are not supported as separators. See https://github.com/micromatch/micromatch#backslashes for more information.__

Usage
-----

```sh
npm install anymatch
```

#### anymatch(matchers, testString, [returnIndex], [options])

* __matchers__: (_Array|String|RegExp|Function_) String to be directly matched, string with glob patterns, regular expression test, function that takes the testString as an argument and returns a truthy value if it should be matched, or an array of any number and mix of these types.
* __testString__: (_String|Array_) The string to test against the matchers. If passed as an array, the first element of the array will be used as the `testString` for non-function matchers, while the entire array will be applied as the arguments for function matchers.
* __options__: (_Object_ [optional]) Any of the [picomatch](https://github.com/micromatch/picomatch#options) options.
* __returnIndex__: (_Boolean_ [optional]) If true, return the array index of the first matcher that the testString matched, or -1 if no match, instead of a boolean result.

```js
const anymatch = require('anymatch');

const matchers = [
	'path/to/file.js',
	'path/anyjs/**/*.js',
	/foo.js$/,
	string => string.includes('bar') && string.length > 10
];

anymatch(matchers, 'path/to/file.js'); // true
anymatch(matchers, 'path/anyjs/baz.js'); // true
anymatch(matchers, 'path/to/foo.js'); // true
anymatch(matchers, 'path/to/bar.js'); // true
anymatch(matchers, 'bar.js'); // false

// returnIndex = true
anymatch(matchers, 'foo.js', {returnIndex: true}); // 2
anymatch(matchers, 'path/anyjs/foo.js', {returnIndex: true}); // 1

// using globs to match directories and their children
anymatch('node_modules', 'node_modules'); // true
anymatch('node_modules', 'node_modules/somelib/index.js'); // false
anymatch('node_modules/**', 'node_modules/somelib/index.js'); // true
anymatch('node_modules/**', '/absolute/path/to/node_modules/somelib/index.js'); // false
anymatch('**/node_modules/**', '/absolute/path/to/node_modules/somelib/index.js'); // true

const matcher = anymatch(matchers);

['foo.js', 'bar.js'].filter(matcher); // [ 'foo.js' ]
```

#### anymatch(matchers)

You can also pass in only your matcher(s) to get a curried function that has already been bound to the provided matching criteria. This can be used as an `Array#filter` callback.

```js
var matcher = anymatch(matchers);

matcher('path/to/file.js'); // true
matcher('path/anyjs/baz.js', true); // 1

['foo.js', 'bar.js'].filter(matcher); // ['foo.js']
```

Changelog
----------
[See release notes page on GitHub](https://github.com/micromatch/anymatch/releases)

- **v3.0:** Removed `startIndex` and `endIndex` arguments. Node 8.x-only.
- **v2.0:** [micromatch](https://github.com/jonschlinkert/micromatch) moves away from minimatch-parity and in line with Bash.
This includes handling backslashes differently (see https://github.com/micromatch/micromatch#backslashes for more information). - **v1.2:** anymatch uses [micromatch](https://github.com/jonschlinkert/micromatch) for glob pattern matching. Issues with glob pattern matching should be reported directly to the [micromatch issue tracker](https://github.com/jonschlinkert/micromatch/issues). License ------- [ISC](https://raw.github.com/micromatch/anymatch/master/LICENSE)
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/anymatch/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/anymatch/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 4020 }
The MIT License (MIT) Copyright (c) 2021 Vercel, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/arg/LICENSE.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/arg/LICENSE.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 1078 }
# Arg

`arg` is an unopinionated, no-frills CLI argument parser.

## Installation

```bash
npm install arg
```

## Usage

`arg()` takes either 1 or 2 arguments:

1. Command line specification object (see below)
2. Parse options (_Optional_, defaults to `{permissive: false, argv: process.argv.slice(2), stopAtPositional: false}`)

It returns an object with any values present on the command-line (missing options are thus missing from the resulting object). Arg performs no validation/requirement checking - we leave that up to the application.

All parameters that aren't consumed by options (commonly referred to as "extra" parameters) are added to `result._`, which is _always_ an array (even if no extra parameters are passed, in which case an empty array is returned).

```javascript
const arg = require('arg');

// `options` is an optional parameter; the values shown are the defaults
const args = arg(spec, {
	permissive: false,
	argv: process.argv.slice(2)
});
```

For example:

```console
$ node ./hello.js --verbose -vvv --port=1234 -n 'My name' foo bar --tag qux --tag=qix -- --foobar
```

```javascript
// hello.js
const arg = require('arg');

const args = arg({
	// Types
	'--help': Boolean,
	'--version': Boolean,
	'--verbose': arg.COUNT, // Counts the number of times --verbose is passed
	'--port': Number, // --port <number> or --port=<number>
	'--name': String, // --name <string> or --name=<string>
	'--tag': [String], // --tag <string> or --tag=<string>

	// Aliases
	'-v': '--verbose',
	'-n': '--name', // -n <string>; result is stored in --name
	'--label': '--name' // --label <string> or --label=<string>;
	// result is stored in --name
});

console.log(args);
/*
{
	_: ["foo", "bar", "--foobar"],
	'--port': 1234,
	'--verbose': 4,
	'--name': "My name",
	'--tag': ["qux", "qix"]
}
*/
```

The value for each key => value pair is either a type (function or [function]) or a string (indicating an alias).

- In the case of a function, the string value of the argument's value is passed to it, and the return value is used as the ultimate value.

- In the case of an array, the only element _must_ be a type function. Array types indicate that the argument may be passed multiple times, and as such the resulting value in the returned object is an array with all of the values that were passed using the specified flag.

- In the case of a string, an alias is established. If a flag is passed that matches the _key_, then the _value_ is substituted in its place.

Type functions are passed three arguments:

1. The parameter value (always a string)
2. The parameter name (e.g. `--label`)
3. The previous value for the destination (useful for reduce-like operations or for supporting `-v` multiple times, etc.)

This means the built-in `String`, `Number`, and `Boolean` type constructors "just work" as type functions.

Note that `Boolean` and `[Boolean]` have special treatment - an option argument is _not_ consumed or passed, but instead `true` is returned. These options are called "flags".

For custom handlers that wish to behave as flags, you may pass the function through `arg.flag()`:

```javascript
const arg = require('arg');

const argv = ['--foo', 'bar', '-ff', 'baz', '--foo', '--foo', 'qux', '-fff', 'qix'];

function myHandler(value, argName, previousValue) {
	/* `value` is always `true` */
	return 'na ' + (previousValue || 'batman!');
}

const args = arg(
	{
		'--foo': arg.flag(myHandler),
		'-f': '--foo'
	},
	{
		argv
	}
);

console.log(args);
/*
{
	_: ['bar', 'baz', 'qux', 'qix'],
	'--foo': 'na na na na na na na na batman!'
}
*/
```

As well, `arg` supplies a helper argument handler called `arg.COUNT`, which is equivalent to a `[Boolean]` argument's `.length` property - effectively counting the number of times the boolean flag, denoted by the key, is passed on the command line. For example, this is how you could implement `ssh`'s multiple levels of verbosity (`-vvvv` being the most verbose).

```javascript
const arg = require('arg');

const argv = ['-AAAA', '-BBBB'];

const args = arg(
	{
		'-A': arg.COUNT,
		'-B': [Boolean]
	},
	{
		argv
	}
);

console.log(args);
/*
{
	_: [],
	'-A': 4,
	'-B': [true, true, true, true]
}
*/
```

### Options

If a second parameter is specified and is an object, it specifies parsing options to modify the behavior of `arg()`.

#### `argv`

If you have already sliced or generated a number of raw arguments to be parsed (as opposed to letting `arg` slice them from `process.argv`) you may specify them in the `argv` option.

For example:

```javascript
const args = arg(
	{
		'--foo': String
	},
	{
		argv: ['hello', '--foo', 'world']
	}
);
```

results in:

```javascript
const args = {
	_: ['hello'],
	'--foo': 'world'
};
```

#### `permissive`

When `permissive` is set to `true`, `arg` will push any unknown arguments onto the "extra" argument array (`result._`) instead of throwing an error about an unknown flag.

For example:

```javascript
const arg = require('arg');

const argv = [
	'--foo',
	'hello',
	'--qux',
	'qix',
	'--bar',
	'12345',
	'hello again'
];

const args = arg(
	{
		'--foo': String,
		'--bar': Number
	},
	{
		argv,
		permissive: true
	}
);
```

results in:

```javascript
const args = {
	_: ['--qux', 'qix', 'hello again'],
	'--foo': 'hello',
	'--bar': 12345
};
```

#### `stopAtPositional`

When `stopAtPositional` is set to `true`, `arg` will halt parsing at the first positional argument.

For example:

```javascript
const arg = require('arg');

const argv = ['--foo', 'hello', '--bar'];

const args = arg(
	{
		'--foo': Boolean,
		'--bar': Boolean
	},
	{
		argv,
		stopAtPositional: true
	}
);
```

results in:

```javascript
const args = {
	_: ['hello', '--bar'],
	'--foo': true
};
```

### Errors

Some errors that `arg` throws provide a `.code` property in order to aid in recovering from user error, or to differentiate between user error and developer error (bug).

##### ARG_UNKNOWN_OPTION

If an unknown option (not defined in the spec object) is passed, an error with code `ARG_UNKNOWN_OPTION` will be thrown:

```js
// cli.js
try {
	require('arg')({ '--hi': String });
} catch (err) {
	if (err.code === 'ARG_UNKNOWN_OPTION') {
		console.log(err.message);
	} else {
		throw err;
	}
}
```

```shell
node cli.js --extraneous true
Unknown or unexpected option: --extraneous
```

# FAQ

A few questions and answers that have been asked before:

### How do I require an argument with `arg`?

Do the assertion yourself, such as:

```javascript
const args = arg({ '--name': String });

if (!args['--name']) throw new Error('missing required argument: --name');
```

# License

Released under the [MIT License](LICENSE.md).
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/arg/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/arg/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 6641 }
# aria-hidden

[![NPM](https://nodei.co/npm/aria-hidden.png?downloads=true&stars=true)](https://nodei.co/npm/aria-hidden/)

Hides everything from ARIA except the provided node(s).

Helps to isolate modal dialogs and focused tasks - the rest of the content will not be accessible to assistive technology.

Now with [HTML inert](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/inert) support

# API

Just call `hideOthers` with the DOM node you want to keep, and it will _hide_ everything else. `targetNode` could be placed anywhere - its siblings would be hidden, but not the node itself or its parents.

> "hidden" in terms of `aria-hidden`

```js
import { hideOthers } from 'aria-hidden';

const undo = hideOthers(exceptThisDOMnode);
// everything else is "aria-hidden"

// undo changes
undo();
```

You may also limit the effect's spread by providing a top-level node as a second parameter:

```js
// keep only `anotherNode` visible in #app
// the rest of the document will be untouched
hideOthers(anotherNode, document.getElementById('app'));
```

> `parentNode` defaults to document.body

# Inert

While `aria-hidden` played an important role in the past and will keep playing one in the future - the main use case has always been isolating content and making elements "transparent" not only to ARIA, but to user interaction as well. This is why you might consider using `inertOthers`:

```tsx
import { hideOthers, inertOthers, supportsInert } from 'aria-hidden';

// focusing on an element means "hide the others"; ideally, disable interactions as well
const focusOnElement = (node) => (supportsInert() ? inertOthers(node) : hideOthers(node));
```

the same function as above is already constructed and exported as

```tsx
import { suppressOthers } from 'aria-hidden';

suppressOthers([keepThisNode, andThis]);
```

⚠️ Note - inert **will disable any interactions** with _suppressed_ elements ⚠️

### Suppressing interactivity without inert

One can use `marker`, the third argument to the function, to mark hidden elements. Later one can create a style matching the given marker to apply `pointer-events: none`:

```css
[hidden-node] {
  pointer-events: none;
}
```

```tsx
hideOthers(notThisOne, undefined /*parent = document*/, 'hidden-node');
```

Generally speaking the same can be achieved by addressing `[aria-hidden]` nodes, but not all `aria-hidden` nodes are expected to be non-interactive. Hence, it's better to separate concerns.

# Inspiration

Based on [smooth-ui](https://github.com/smooth-code/smooth-ui) modal dialogs.

# See also

- [inert](https://github.com/WICG/inert) - The HTML attribute/property to mark parts of the DOM tree as "inert".
- [react-focus-lock](https://github.com/theKashey/react-focus-lock) to lock Focus inside modal.
- [react-scroll-lock](https://github.com/theKashey/react-scroll-lock) to disable page scroll while modal is opened.

# Size

Code is 30 lines long

# Licence

MIT
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/aria-hidden/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/aria-hidden/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 2859 }
# Array Flatten [![NPM version][npm-image]][npm-url] [![NPM downloads][downloads-image]][downloads-url] [![Build status][travis-image]][travis-url] [![Test coverage][coveralls-image]][coveralls-url] > Flatten an array of nested arrays into a single flat array. Accepts an optional depth. ## Installation ``` npm install array-flatten --save ``` ## Usage ```javascript var flatten = require('array-flatten') flatten([1, [2, [3, [4, [5], 6], 7], 8], 9]) //=> [1, 2, 3, 4, 5, 6, 7, 8, 9] flatten([1, [2, [3, [4, [5], 6], 7], 8], 9], 2) //=> [1, 2, 3, [4, [5], 6], 7, 8, 9] (function () { flatten(arguments) //=> [1, 2, 3] })(1, [2, 3]) ``` ## License MIT [npm-image]: https://img.shields.io/npm/v/array-flatten.svg?style=flat [npm-url]: https://npmjs.org/package/array-flatten [downloads-image]: https://img.shields.io/npm/dm/array-flatten.svg?style=flat [downloads-url]: https://npmjs.org/package/array-flatten [travis-image]: https://img.shields.io/travis/blakeembrey/array-flatten.svg?style=flat [travis-url]: https://travis-ci.org/blakeembrey/array-flatten [coveralls-image]: https://img.shields.io/coveralls/blakeembrey/array-flatten.svg?style=flat [coveralls-url]: https://coveralls.io/r/blakeembrey/array-flatten?branch=master
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/array-flatten/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/array-flatten/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 1244 }
# Autoprefixer [![Cult Of Martians][cult-img]][cult]

<img align="right" width="94" height="71" src="https://postcss.github.io/autoprefixer/logo.svg" title="Autoprefixer logo by Anton Lovchikov">

[PostCSS] plugin to parse CSS and add vendor prefixes to CSS rules using values from [Can I Use]. It is recommended by Google and used by Twitter and Alibaba.

Write your CSS rules without vendor prefixes (in fact, forget about them entirely):

```css
::placeholder {
  color: gray;
}

.image {
  background-image: url(image@1x.png);
}

@media (min-resolution: 2dppx) {
  .image {
    background-image: url(image@2x.png);
  }
}
```

Autoprefixer will use the data based on current browser popularity and property support to apply prefixes for you. You can try the [interactive demo] of Autoprefixer.

```css
::-moz-placeholder {
  color: gray;
}

::placeholder {
  color: gray;
}

.image {
  background-image: url(image@1x.png);
}

@media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 2dppx) {
  .image {
    background-image: url(image@2x.png);
  }
}
```

Twitter account for news and releases: [@autoprefixer].

<a href="https://evilmartians.com/?utm_source=autoprefixer">
  <img src="https://evilmartians.com/badges/sponsored-by-evil-martians.svg" alt="Sponsored by Evil Martians" width="236" height="54">
</a>

[interactive demo]: https://autoprefixer.github.io/
[@autoprefixer]: https://twitter.com/autoprefixer
[Can I Use]: https://caniuse.com/
[cult-img]: https://cultofmartians.com/assets/badges/badge.svg
[PostCSS]: https://github.com/postcss/postcss
[cult]: https://cultofmartians.com/tasks/autoprefixer-grid.html

## Docs

Read full docs **[here](https://github.com/postcss/autoprefixer#readme)**.
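For a quick sense of how the plugin is wired into PostCSS, here is a minimal sketch; consult the full docs linked in the Docs section above for real build setups, and note the input string is just a placeholder:

```js
const postcss = require('postcss')
const autoprefixer = require('autoprefixer')

const input = '::placeholder { color: gray; }'

// run the plugin through PostCSS and print the prefixed CSS
postcss([autoprefixer])
  .process(input, { from: undefined })
  .then(result => {
    console.log(result.css)
  })
```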
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/autoprefixer/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/autoprefixer/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 1764 }
(MIT) Copyright (c) 2013 Julian Gruber &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/balanced-match/LICENSE.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/balanced-match/LICENSE.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 1095 }
# balanced-match

Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well!

[![build status](https://secure.travis-ci.org/juliangruber/balanced-match.svg)](http://travis-ci.org/juliangruber/balanced-match)
[![downloads](https://img.shields.io/npm/dm/balanced-match.svg)](https://www.npmjs.org/package/balanced-match)

[![testling badge](https://ci.testling.com/juliangruber/balanced-match.png)](https://ci.testling.com/juliangruber/balanced-match)

## Example

Get the first matching pair of braces:

```js
var balanced = require('balanced-match');

console.log(balanced('{', '}', 'pre{in{nested}}post'));
console.log(balanced('{', '}', 'pre{first}between{second}post'));
console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post'));
```

The matches are:

```bash
$ node example.js
{ start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' }
{ start: 3, end: 9, pre: 'pre', body: 'first', post: 'between{second}post' }
{ start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' }
```

## API

### var m = balanced(a, b, str)

For the first non-nested matching pair of `a` and `b` in `str`, return an object with these keys:

* **start** the index of the first match of `a`
* **end** the index of the matching `b`
* **pre** the preamble, `a` and `b` not included
* **body** the match, `a` and `b` not included
* **post** the postscript, `a` and `b` not included

If there's no match, `undefined` will be returned.

If `str` contains more `a`s than `b`s (i.e. there are unmatched pairs), the first match that was closed will be used. For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`.

### var r = balanced.range(a, b, str)

For the first non-nested matching pair of `a` and `b` in `str`, return an array with indexes: `[ <a index>, <b index> ]`.

If there's no match, `undefined` will be returned.

If `str` contains more `a`s than `b`s (i.e. there are unmatched pairs), the first match that was closed will be used. For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`.

## Installation

With [npm](https://npmjs.org) do:

```bash
npm install balanced-match
```

## Security contact information

To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure.

## License

(MIT)

Copyright (c) 2013 Julian Gruber &lt;[email protected]&gt;

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/balanced-match/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/balanced-match/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 3501 }
# binary-extensions > List of binary file extensions The list is just a [JSON file](binary-extensions.json) and can be used anywhere. ## Install ```sh npm install binary-extensions ``` ## Usage ```js const binaryExtensions = require('binary-extensions'); console.log(binaryExtensions); //=> ['3ds', '3g2', …] ``` ## Related - [is-binary-path](https://github.com/sindresorhus/is-binary-path) - Check if a filepath is a binary file - [text-extensions](https://github.com/sindresorhus/text-extensions) - List of text file extensions
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/binary-extensions/readme.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/binary-extensions/readme.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 538 }
1.20.3 / 2024-09-10 =================== * deps: [email protected] * add `depth` option to customize the depth level in the parser * IMPORTANT: The default `depth` level for parsing URL-encoded data is now `32` (previously was `Infinity`) 1.20.2 / 2023-02-21 =================== * Fix strict json error message on Node.js 19+ * deps: content-type@~1.0.5 - perf: skip value escaping when unnecessary * deps: [email protected] 1.20.1 / 2022-10-06 =================== * deps: [email protected] * perf: remove unnecessary object clone 1.20.0 / 2022-04-02 =================== * Fix error message for json parse whitespace in `strict` * Fix internal error when inflated body exceeds limit * Prevent loss of async hooks context * Prevent hanging when request already read * deps: [email protected] - Replace internal `eval` usage with `Function` constructor - Use instance methods on `process` to check for listeners * deps: [email protected] - deps: [email protected] - deps: [email protected] * deps: [email protected] * deps: [email protected] * deps: [email protected] - deps: [email protected] 1.19.2 / 2022-02-15 =================== * deps: [email protected] * deps: [email protected] * Fix handling of `__proto__` keys * deps: [email protected] - deps: [email protected] 1.19.1 / 2021-12-10 =================== * deps: [email protected] * deps: [email protected] - deps: [email protected] - deps: [email protected] - deps: [email protected] * deps: [email protected] * deps: [email protected] - deps: [email protected] - deps: [email protected] * deps: [email protected] * deps: type-is@~1.6.18 1.19.0 / 2019-04-25 =================== * deps: [email protected] - Add petabyte (`pb`) support * deps: [email protected] - Set constructor name when possible - deps: [email protected] - deps: statuses@'>= 1.5.0 < 2' * deps: [email protected] - Added encoding MIK * deps: [email protected] - Fix parsing array brackets after index * deps: [email protected] - deps: [email protected] - deps: [email protected] - deps: [email protected] * deps: type-is@~1.6.17 - deps: mime-types@~2.1.24 - perf: prevent internal `throw` on invalid type 1.18.3 / 2018-05-14 =================== * Fix stack trace for strict json parse error * deps: depd@~1.1.2 - perf: remove argument reassignment * deps: http-errors@~1.6.3 - deps: depd@~1.1.2 - deps: [email protected] - deps: statuses@'>= 1.3.1 < 2' * deps: [email protected] - Fix loading encoding with year appended - Fix deprecation warnings on Node.js 10+ * deps: [email protected] * deps: [email protected] - deps: [email protected] - deps: [email protected] * deps: type-is@~1.6.16 - deps: mime-types@~2.1.18 1.18.2 / 2017-09-22 =================== * deps: [email protected] * perf: remove argument reassignment 1.18.1 / 2017-09-12 =================== * deps: content-type@~1.0.4 - perf: remove argument reassignment - perf: skip parameter parsing when no parameters * deps: [email protected] - Fix ISO-8859-1 regression - Update Windows-1255 * deps: [email protected] - Fix parsing & compacting very deep objects * deps: [email protected] - deps: [email protected] 1.18.0 / 2017-09-08 =================== * Fix JSON strict violation error to match native parse error * Include the `body` property on verify errors * Include the `type` property on all generated errors * Use `http-errors` to set status code on errors * deps: [email protected] * deps: [email protected] * deps: depd@~1.1.1 - Remove unnecessary `Buffer` loading * deps: http-errors@~1.6.2 - deps: [email protected] * deps: [email protected] - Add support 
for React Native - Add a warning if not loaded as utf-8 - Fix CESU-8 decoding in Node.js 8 - Improve speed of ISO-8859-1 encoding * deps: [email protected] * deps: [email protected] - Use `http-errors` for standard emitted errors - deps: [email protected] - deps: [email protected] - perf: skip buffer decoding on overage chunk * perf: prevent internal `throw` when missing charset 1.17.2 / 2017-05-17 =================== * deps: [email protected] - Fix `DEBUG_MAX_ARRAY_LENGTH` - deps: [email protected] * deps: type-is@~1.6.15 - deps: mime-types@~2.1.15 1.17.1 / 2017-03-06 =================== * deps: [email protected] - Fix regression parsing keys starting with `[` 1.17.0 / 2017-03-01 =================== * deps: http-errors@~1.6.1 - Make `message` property enumerable for `HttpError`s - deps: [email protected] * deps: [email protected] - Fix compacting nested arrays 1.16.1 / 2017-02-10 =================== * deps: [email protected] - Fix deprecation messages in WebStorm and other editors - Undeprecate `DEBUG_FD` set to `1` or `2` 1.16.0 / 2017-01-17 =================== * deps: [email protected] - Allow colors in workers - Deprecated `DEBUG_FD` environment variable - Fix error when running under React Native - Use same color for same namespace - deps: [email protected] * deps: http-errors@~1.5.1 - deps: [email protected] - deps: [email protected] - deps: statuses@'>= 1.3.1 < 2' * deps: [email protected] - Added encoding MS-31J - Added encoding MS-932 - Added encoding MS-936 - Added encoding MS-949 - Added encoding MS-950 - Fix GBK/GB18030 handling of Euro character * deps: [email protected] - Fix array parsing from skipping empty values * deps: raw-body@~2.2.0 - deps: [email protected] * deps: type-is@~1.6.14 - deps: mime-types@~2.1.13 1.15.2 / 2016-06-19 =================== * deps: [email protected] * deps: content-type@~1.0.2 - perf: enable strict mode * deps: http-errors@~1.5.0 - Use `setprototypeof` module to replace `__proto__` setting - deps: statuses@'>= 1.3.0 < 2' - perf: enable strict mode * deps: [email protected] * deps: raw-body@~2.1.7 - deps: [email protected] - perf: remove double-cleanup on happy path * deps: type-is@~1.6.13 - deps: mime-types@~2.1.11 1.15.1 / 2016-05-05 =================== * deps: [email protected] - Drop partial bytes on all parsed units - Fix parsing byte string that looks like hex * deps: raw-body@~2.1.6 - deps: [email protected] * deps: type-is@~1.6.12 - deps: mime-types@~2.1.10 1.15.0 / 2016-02-10 =================== * deps: http-errors@~1.4.0 - Add `HttpError` export, for `err instanceof createError.HttpError` - deps: [email protected] - deps: statuses@'>= 1.2.1 < 2' * deps: [email protected] * deps: type-is@~1.6.11 - deps: mime-types@~2.1.9 1.14.2 / 2015-12-16 =================== * deps: [email protected] * deps: [email protected] * deps: [email protected] * deps: raw-body@~2.1.5 - deps: [email protected] - deps: [email protected] * deps: type-is@~1.6.10 - deps: mime-types@~2.1.8 1.14.1 / 2015-09-27 =================== * Fix issue where invalid charset results in 400 when `verify` used * deps: [email protected] - Fix CESU-8 decoding in Node.js 4.x * deps: raw-body@~2.1.4 - Fix masking critical errors from `iconv-lite` - deps: [email protected] * deps: type-is@~1.6.9 - deps: mime-types@~2.1.7 1.14.0 / 2015-09-16 =================== * Fix JSON strict parse error to match syntax errors * Provide static `require` analysis in `urlencoded` parser * deps: depd@~1.1.0 - Support web browser loading * deps: [email protected] * deps: raw-body@~2.1.3 - Fix sync 
callback when attaching data listener causes sync read * deps: type-is@~1.6.8 - Fix type error when given invalid type to match against - deps: mime-types@~2.1.6 1.13.3 / 2015-07-31 =================== * deps: type-is@~1.6.6 - deps: mime-types@~2.1.4 1.13.2 / 2015-07-05 =================== * deps: [email protected] * deps: [email protected] - Fix dropping parameters like `hasOwnProperty` - Fix user-visible incompatibilities from 3.1.0 - Fix various parsing edge cases * deps: raw-body@~2.1.2 - Fix error stack traces to skip `makeError` - deps: [email protected] * deps: type-is@~1.6.4 - deps: mime-types@~2.1.2 - perf: enable strict mode - perf: remove argument reassignment 1.13.1 / 2015-06-16 =================== * deps: [email protected] - Downgraded from 3.1.0 because of user-visible incompatibilities 1.13.0 / 2015-06-14 =================== * Add `statusCode` property on `Error`s, in addition to `status` * Change `type` default to `application/json` for JSON parser * Change `type` default to `application/x-www-form-urlencoded` for urlencoded parser * Provide static `require` analysis * Use the `http-errors` module to generate errors * deps: [email protected] - Slight optimizations * deps: [email protected] - The encoding UTF-16 without BOM now defaults to UTF-16LE when detection fails - Leading BOM is now removed when decoding * deps: on-finished@~2.3.0 - Add defined behavior for HTTP `CONNECT` requests - Add defined behavior for HTTP `Upgrade` requests - deps: [email protected] * deps: [email protected] - Fix dropping parameters like `hasOwnProperty` - Fix various parsing edge cases - Parsed object now has `null` prototype * deps: raw-body@~2.1.1 - Use `unpipe` module for unpiping requests - deps: [email protected] * deps: type-is@~1.6.3 - deps: mime-types@~2.1.1 - perf: reduce try block size - perf: remove bitwise operations * perf: enable strict mode * perf: remove argument reassignment * perf: remove delete call 1.12.4 / 2015-05-10 =================== * deps: debug@~2.2.0 * deps: [email protected] - Fix allowing parameters like `constructor` * deps: on-finished@~2.2.1 * deps: raw-body@~2.0.1 - Fix a false-positive when unpiping in Node.js 0.8 - deps: [email protected] * deps: type-is@~1.6.2 - deps: mime-types@~2.0.11 1.12.3 / 2015-04-15 =================== * Slight efficiency improvement when not debugging * deps: depd@~1.0.1 * deps: [email protected] - Add encoding alias UNICODE-1-1-UTF-7 * deps: [email protected] - Fix hanging callback if request aborts during read - deps: [email protected] 1.12.2 / 2015-03-16 =================== * deps: [email protected] - Fix error when parameter `hasOwnProperty` is present 1.12.1 / 2015-03-15 =================== * deps: debug@~2.1.3 - Fix high intensity foreground color for bold - deps: [email protected] * deps: type-is@~1.6.1 - deps: mime-types@~2.0.10 1.12.0 / 2015-02-13 =================== * add `debug` messages * accept a function for the `type` option * use `content-type` to parse `Content-Type` headers * deps: [email protected] - Gracefully support enumerables on `Object.prototype` * deps: [email protected] - deps: [email protected] * deps: type-is@~1.6.0 - fix argument reassignment - fix false-positives in `hasBody` `Transfer-Encoding` check - support wildcard for both type and subtype (`*/*`) - deps: mime-types@~2.0.9 1.11.0 / 2015-01-30 =================== * make internal `extended: true` depth limit infinity * deps: type-is@~1.5.6 - deps: mime-types@~2.0.8 1.10.2 / 2015-01-20 =================== * deps: [email protected] - Fix rare 
aliases of single-byte encodings * deps: [email protected] - deps: [email protected] 1.10.1 / 2015-01-01 =================== * deps: on-finished@~2.2.0 * deps: type-is@~1.5.5 - deps: mime-types@~2.0.7 1.10.0 / 2014-12-02 =================== * make internal `extended: true` array limit dynamic 1.9.3 / 2014-11-21 ================== * deps: [email protected] - Fix Windows-31J and X-SJIS encoding support * deps: [email protected] - Fix `arrayLimit` behavior * deps: [email protected] - deps: [email protected] * deps: type-is@~1.5.3 - deps: mime-types@~2.0.3 1.9.2 / 2014-10-27 ================== * deps: [email protected] - Fix parsing of mixed objects and values 1.9.1 / 2014-10-22 ================== * deps: on-finished@~2.1.1 - Fix handling of pipelined requests * deps: [email protected] - Fix parsing of mixed implicit and explicit arrays * deps: type-is@~1.5.2 - deps: mime-types@~2.0.2 1.9.0 / 2014-09-24 ================== * include the charset in "unsupported charset" error message * include the encoding in "unsupported content encoding" error message * deps: depd@~1.0.0 1.8.4 / 2014-09-23 ================== * fix content encoding to be case-insensitive 1.8.3 / 2014-09-19 ================== * deps: [email protected] - Fix issue with object keys starting with numbers truncated 1.8.2 / 2014-09-15 ================== * deps: [email protected] 1.8.1 / 2014-09-07 ================== * deps: [email protected] * deps: type-is@~1.5.1 1.8.0 / 2014-09-05 ================== * make empty-body-handling consistent between chunked requests - empty `json` produces `{}` - empty `raw` produces `new Buffer(0)` - empty `text` produces `''` - empty `urlencoded` produces `{}` * deps: [email protected] - Fix issue where first empty value in array is discarded * deps: type-is@~1.5.0 - fix `hasbody` to be true for `content-length: 0` 1.7.0 / 2014-09-01 ================== * add `parameterLimit` option to `urlencoded` parser * change `urlencoded` extended array limit to 100 * respond with 413 when over `parameterLimit` in `urlencoded` 1.6.7 / 2014-08-29 ================== * deps: [email protected] - Remove unnecessary cloning 1.6.6 / 2014-08-27 ================== * deps: [email protected] - Array parsing fix - Performance improvements 1.6.5 / 2014-08-16 ================== * deps: [email protected] 1.6.4 / 2014-08-14 ================== * deps: [email protected] 1.6.3 / 2014-08-10 ================== * deps: [email protected] 1.6.2 / 2014-08-07 ================== * deps: [email protected] - Fix parsing array of objects 1.6.1 / 2014-08-06 ================== * deps: [email protected] - Accept urlencoded square brackets - Accept empty values in implicit array notation 1.6.0 / 2014-08-05 ================== * deps: [email protected] - Complete rewrite - Limits array length to 20 - Limits object depth to 5 - Limits parameters to 1,000 1.5.2 / 2014-07-27 ================== * deps: [email protected] - Work-around v8 generating empty stack traces 1.5.1 / 2014-07-26 ================== * deps: [email protected] - Fix exception when global `Error.stackTraceLimit` is too low 1.5.0 / 2014-07-20 ================== * deps: [email protected] - Add `TRACE_DEPRECATION` environment variable - Remove non-standard grey color from color output - Support `--no-deprecation` argument - Support `--trace-deprecation` argument * deps: [email protected] - Added encoding UTF-7 * deps: [email protected] - deps: [email protected] - Added encoding UTF-7 - Fix `Cannot switch to old mode now` error on Node.js 0.10+ * deps: type-is@~1.3.2 1.4.3 / 2014-06-19 
================== * deps: [email protected] - fix global variable leak 1.4.2 / 2014-06-19 ================== * deps: [email protected] - improve type parsing 1.4.1 / 2014-06-19 ================== * fix urlencoded extended deprecation message 1.4.0 / 2014-06-19 ================== * add `text` parser * add `raw` parser * check accepted charset in content-type (accepts utf-8) * check accepted encoding in content-encoding (accepts identity) * deprecate `bodyParser()` middleware; use `.json()` and `.urlencoded()` as needed * deprecate `urlencoded()` without provided `extended` option * lazy-load urlencoded parsers * parsers split into files for reduced mem usage * support gzip and deflate bodies - set `inflate: false` to turn off * deps: [email protected] - Support all encodings from `iconv-lite` 1.3.1 / 2014-06-11 ================== * deps: [email protected] - Switch dependency from mime to [email protected] 1.3.0 / 2014-05-31 ================== * add `extended` option to urlencoded parser 1.2.2 / 2014-05-27 ================== * deps: [email protected] - assert stream encoding on node.js 0.8 - assert stream encoding on node.js < 0.10.6 - deps: bytes@1 1.2.1 / 2014-05-26 ================== * invoke `next(err)` after request fully read - prevents hung responses and socket hang ups 1.2.0 / 2014-05-11 ================== * add `verify` option * deps: [email protected] - support suffix matching 1.1.2 / 2014-05-11 ================== * improve json parser speed 1.1.1 / 2014-05-11 ================== * fix repeated limit parsing with every request 1.1.0 / 2014-05-10 ================== * add `type` option * deps: pin for safety and consistency 1.0.2 / 2014-04-14 ================== * use `type-is` module 1.0.1 / 2014-03-20 ================== * lower default limits to 100kb
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/body-parser/HISTORY.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/body-parser/HISTORY.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 16728 }
# body-parser

[![NPM Version][npm-version-image]][npm-url]
[![NPM Downloads][npm-downloads-image]][npm-url]
[![Build Status][ci-image]][ci-url]
[![Test Coverage][coveralls-image]][coveralls-url]
[![OpenSSF Scorecard Badge][ossf-scorecard-badge]][ossf-scorecard-visualizer]

Node.js body parsing middleware.

Parse incoming request bodies in a middleware before your handlers, available under the `req.body` property.

**Note** As `req.body`'s shape is based on user-controlled input, all properties and values in this object are untrusted and should be validated before trusting. For example, `req.body.foo.toString()` may fail in multiple ways, for example the `foo` property may not be there or may not be a string, and `toString` may not be a function but instead a string or other user input.

[Learn about the anatomy of an HTTP transaction in Node.js](https://nodejs.org/en/docs/guides/anatomy-of-an-http-transaction/).

_This does not handle multipart bodies_, due to their complex and typically large nature. For multipart bodies, you may be interested in the following modules:

* [busboy](https://www.npmjs.org/package/busboy#readme) and [connect-busboy](https://www.npmjs.org/package/connect-busboy#readme)
* [multiparty](https://www.npmjs.org/package/multiparty#readme) and [connect-multiparty](https://www.npmjs.org/package/connect-multiparty#readme)
* [formidable](https://www.npmjs.org/package/formidable#readme)
* [multer](https://www.npmjs.org/package/multer#readme)

This module provides the following parsers:

* [JSON body parser](#bodyparserjsonoptions)
* [Raw body parser](#bodyparserrawoptions)
* [Text body parser](#bodyparsertextoptions)
* [URL-encoded form body parser](#bodyparserurlencodedoptions)

Other body parsers you might be interested in:

- [body](https://www.npmjs.org/package/body#readme)
- [co-body](https://www.npmjs.org/package/co-body#readme)

## Installation

```sh
$ npm install body-parser
```

## API

```js
var bodyParser = require('body-parser')
```

The `bodyParser` object exposes various factories to create middlewares. All middlewares will populate the `req.body` property with the parsed body when the `Content-Type` request header matches the `type` option, or an empty object (`{}`) if there was no body to parse, the `Content-Type` was not matched, or an error occurred.

The various errors returned by this module are described in the [errors section](#errors).

### bodyParser.json([options])

Returns middleware that only parses `json` and only looks at requests where the `Content-Type` header matches the `type` option. This parser accepts any Unicode encoding of the body and supports automatic inflation of `gzip` and `deflate` encodings.

A new `body` object containing the parsed data is populated on the `request` object after the middleware (i.e. `req.body`).

#### Options

The `json` function takes an optional `options` object that may contain any of the following keys:

##### inflate

When set to `true`, then deflated (compressed) bodies will be inflated; when `false`, deflated bodies are rejected. Defaults to `true`.

##### limit

Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the [bytes](https://www.npmjs.com/package/bytes) library for parsing. Defaults to `'100kb'`.

##### reviver

The `reviver` option is passed directly to `JSON.parse` as the second argument.
You can find more information on this argument [in the MDN documentation about JSON.parse](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#Example.3A_Using_the_reviver_parameter). ##### strict When set to `true`, will only accept arrays and objects; when `false` will accept anything `JSON.parse` accepts. Defaults to `true`. ##### type The `type` option is used to determine what media type the middleware will parse. This option can be a string, array of strings, or a function. If not a function, `type` option is passed directly to the [type-is](https://www.npmjs.org/package/type-is#readme) library and this can be an extension name (like `json`), a mime type (like `application/json`), or a mime type with a wildcard (like `*/*` or `*/json`). If a function, the `type` option is called as `fn(req)` and the request is parsed if it returns a truthy value. Defaults to `application/json`. ##### verify The `verify` option, if supplied, is called as `verify(req, res, buf, encoding)`, where `buf` is a `Buffer` of the raw request body and `encoding` is the encoding of the request. The parsing can be aborted by throwing an error. ### bodyParser.raw([options]) Returns middleware that parses all bodies as a `Buffer` and only looks at requests where the `Content-Type` header matches the `type` option. This parser supports automatic inflation of `gzip` and `deflate` encodings. A new `body` object containing the parsed data is populated on the `request` object after the middleware (i.e. `req.body`). This will be a `Buffer` object of the body. #### Options The `raw` function takes an optional `options` object that may contain any of the following keys: ##### inflate When set to `true`, then deflated (compressed) bodies will be inflated; when `false`, deflated bodies are rejected. Defaults to `true`. ##### limit Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the [bytes](https://www.npmjs.com/package/bytes) library for parsing. Defaults to `'100kb'`. ##### type The `type` option is used to determine what media type the middleware will parse. This option can be a string, array of strings, or a function. If not a function, `type` option is passed directly to the [type-is](https://www.npmjs.org/package/type-is#readme) library and this can be an extension name (like `bin`), a mime type (like `application/octet-stream`), or a mime type with a wildcard (like `*/*` or `application/*`). If a function, the `type` option is called as `fn(req)` and the request is parsed if it returns a truthy value. Defaults to `application/octet-stream`. ##### verify The `verify` option, if supplied, is called as `verify(req, res, buf, encoding)`, where `buf` is a `Buffer` of the raw request body and `encoding` is the encoding of the request. The parsing can be aborted by throwing an error. ### bodyParser.text([options]) Returns middleware that parses all bodies as a string and only looks at requests where the `Content-Type` header matches the `type` option. This parser supports automatic inflation of `gzip` and `deflate` encodings. A new `body` string containing the parsed data is populated on the `request` object after the middleware (i.e. `req.body`). This will be a string of the body. 
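As a quick illustration of the text parser, here is a minimal sketch; the `/echo` route and its behavior are hypothetical, not part of the body-parser API:

```js
var express = require('express')
var bodyParser = require('body-parser')

var app = express()

// parse text/plain bodies into a string on req.body
app.use(bodyParser.text())

app.post('/echo', function (req, res) {
  // req.body is an empty object when no text body was parsed, so guard first
  if (typeof req.body !== 'string') return res.sendStatus(415)
  res.setHeader('Content-Type', 'text/plain')
  res.end(req.body.toUpperCase())
})
```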
#### Options

The `text` function takes an optional `options` object that may contain any of the following keys:

##### defaultCharset

Specify the default character set for the text content if the charset is not specified in the `Content-Type` header of the request. Defaults to `utf-8`.

##### inflate

When set to `true`, then deflated (compressed) bodies will be inflated; when `false`, deflated bodies are rejected. Defaults to `true`.

##### limit

Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the [bytes](https://www.npmjs.com/package/bytes) library for parsing. Defaults to `'100kb'`.

##### type

The `type` option is used to determine what media type the middleware will parse. This option can be a string, array of strings, or a function. If not a function, `type` option is passed directly to the [type-is](https://www.npmjs.org/package/type-is#readme) library and this can be an extension name (like `txt`), a mime type (like `text/plain`), or a mime type with a wildcard (like `*/*` or `text/*`). If a function, the `type` option is called as `fn(req)` and the request is parsed if it returns a truthy value. Defaults to `text/plain`.

##### verify

The `verify` option, if supplied, is called as `verify(req, res, buf, encoding)`, where `buf` is a `Buffer` of the raw request body and `encoding` is the encoding of the request. The parsing can be aborted by throwing an error.

### bodyParser.urlencoded([options])

Returns middleware that only parses `urlencoded` bodies and only looks at requests where the `Content-Type` header matches the `type` option. This parser accepts only UTF-8 encoding of the body and supports automatic inflation of `gzip` and `deflate` encodings.

A new `body` object containing the parsed data is populated on the `request` object after the middleware (i.e. `req.body`). This object will contain key-value pairs, where the value can be a string or array (when `extended` is `false`), or any type (when `extended` is `true`).

#### Options

The `urlencoded` function takes an optional `options` object that may contain any of the following keys:

##### extended

The `extended` option allows you to choose between parsing the URL-encoded data with the `querystring` library (when `false`) or the `qs` library (when `true`). The "extended" syntax allows for rich objects and arrays to be encoded into the URL-encoded format, allowing for a JSON-like experience with URL-encoded data. For more information, please [see the qs library](https://www.npmjs.org/package/qs#readme).

Defaults to `true`, but using the default has been deprecated. Please research into the difference between `qs` and `querystring` and choose the appropriate setting.

##### inflate

When set to `true`, then deflated (compressed) bodies will be inflated; when `false`, deflated bodies are rejected. Defaults to `true`.

##### limit

Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the [bytes](https://www.npmjs.com/package/bytes) library for parsing. Defaults to `'100kb'`.

##### parameterLimit

The `parameterLimit` option controls the maximum number of parameters that are allowed in the URL-encoded data. If a request contains more parameters than this value, a 413 will be returned to the client. Defaults to `1000`.

##### type

The `type` option is used to determine what media type the middleware will parse.
This option can be a string, array of strings, or a function. If not a function, `type` option is passed directly to the [type-is](https://www.npmjs.org/package/type-is#readme) library and this can be an extension name (like `urlencoded`), a mime type (like `application/x-www-form-urlencoded`), or a mime type with a wildcard (like `*/x-www-form-urlencoded`). If a function, the `type` option is called as `fn(req)` and the request is parsed if it returns a truthy value. Defaults to `application/x-www-form-urlencoded`. ##### verify The `verify` option, if supplied, is called as `verify(req, res, buf, encoding)`, where `buf` is a `Buffer` of the raw request body and `encoding` is the encoding of the request. The parsing can be aborted by throwing an error. #### depth The `depth` option is used to configure the maximum depth of the `qs` library when `extended` is `true`. This allows you to limit the amount of keys that are parsed and can be useful to prevent certain types of abuse. Defaults to `32`. It is recommended to keep this value as low as possible. ## Errors The middlewares provided by this module create errors using the [`http-errors` module](https://www.npmjs.com/package/http-errors). The errors will typically have a `status`/`statusCode` property that contains the suggested HTTP response code, an `expose` property to determine if the `message` property should be displayed to the client, a `type` property to determine the type of error without matching against the `message`, and a `body` property containing the read body, if available. The following are the common errors created, though any error can come through for various reasons. ### content encoding unsupported This error will occur when the request had a `Content-Encoding` header that contained an encoding but the "inflation" option was set to `false`. The `status` property is set to `415`, the `type` property is set to `'encoding.unsupported'`, and the `charset` property will be set to the encoding that is unsupported. ### entity parse failed This error will occur when the request contained an entity that could not be parsed by the middleware. The `status` property is set to `400`, the `type` property is set to `'entity.parse.failed'`, and the `body` property is set to the entity value that failed parsing. ### entity verify failed This error will occur when the request contained an entity that could not be failed verification by the defined `verify` option. The `status` property is set to `403`, the `type` property is set to `'entity.verify.failed'`, and the `body` property is set to the entity value that failed verification. ### request aborted This error will occur when the request is aborted by the client before reading the body has finished. The `received` property will be set to the number of bytes received before the request was aborted and the `expected` property is set to the number of expected bytes. The `status` property is set to `400` and `type` property is set to `'request.aborted'`. ### request entity too large This error will occur when the request body's size is larger than the "limit" option. The `limit` property will be set to the byte limit and the `length` property will be set to the request body's length. The `status` property is set to `413` and the `type` property is set to `'entity.too.large'`. ### request size did not match content length This error will occur when the request's length did not match the length from the `Content-Length` header. 
This typically occurs when the request is malformed, typically when the `Content-Length` header was calculated based on characters instead of bytes. The `status` property is set to `400` and the `type` property is set to `'request.size.invalid'`. ### stream encoding should not be set This error will occur when something called the `req.setEncoding` method prior to this middleware. This module operates directly on bytes only and you cannot call `req.setEncoding` when using this module. The `status` property is set to `500` and the `type` property is set to `'stream.encoding.set'`. ### stream is not readable This error will occur when the request is no longer readable when this middleware attempts to read it. This typically means something other than a middleware from this module read the request body already and the middleware was also configured to read the same request. The `status` property is set to `500` and the `type` property is set to `'stream.not.readable'`. ### too many parameters This error will occur when the content of the request exceeds the configured `parameterLimit` for the `urlencoded` parser. The `status` property is set to `413` and the `type` property is set to `'parameters.too.many'`. ### unsupported charset "BOGUS" This error will occur when the request had a charset parameter in the `Content-Type` header, but the `iconv-lite` module does not support it OR the parser does not support it. The charset is contained in the message as well as in the `charset` property. The `status` property is set to `415`, the `type` property is set to `'charset.unsupported'`, and the `charset` property is set to the charset that is unsupported. ### unsupported content encoding "bogus" This error will occur when the request had a `Content-Encoding` header that contained an unsupported encoding. The encoding is contained in the message as well as in the `encoding` property. The `status` property is set to `415`, the `type` property is set to `'encoding.unsupported'`, and the `encoding` property is set to the encoding that is unsupported. ### The input exceeded the depth This error occurs when using `bodyParser.urlencoded` with the `extended` property set to `true` and the input exceeds the configured `depth` option. The `status` property is set to `400`. It is recommended to review the `depth` option and evaluate if it requires a higher value. When the `depth` option is set to `32` (default value), the error will not be thrown. ## Examples ### Express/Connect top-level generic This example demonstrates adding a generic JSON and URL-encoded parser as a top-level middleware, which will parse the bodies of all incoming requests. This is the simplest setup. ```js var express = require('express') var bodyParser = require('body-parser') var app = express() // parse application/x-www-form-urlencoded app.use(bodyParser.urlencoded({ extended: false })) // parse application/json app.use(bodyParser.json()) app.use(function (req, res) { res.setHeader('Content-Type', 'text/plain') res.write('you posted:\n') res.end(JSON.stringify(req.body, null, 2)) }) ``` ### Express route-specific This example demonstrates adding body parsers specifically to the routes that need them. In general, this is the most recommended way to use body-parser with Express. 
```js var express = require('express') var bodyParser = require('body-parser') var app = express() // create application/json parser var jsonParser = bodyParser.json() // create application/x-www-form-urlencoded parser var urlencodedParser = bodyParser.urlencoded({ extended: false }) // POST /login gets urlencoded bodies app.post('/login', urlencodedParser, function (req, res) { res.send('welcome, ' + req.body.username) }) // POST /api/users gets JSON bodies app.post('/api/users', jsonParser, function (req, res) { // create user in req.body }) ``` ### Change accepted type for parsers All the parsers accept a `type` option which allows you to change the `Content-Type` that the middleware will parse. ```js var express = require('express') var bodyParser = require('body-parser') var app = express() // parse various different custom JSON types as JSON app.use(bodyParser.json({ type: 'application/*+json' })) // parse some custom thing into a Buffer app.use(bodyParser.raw({ type: 'application/vnd.custom-type' })) // parse an HTML body into a string app.use(bodyParser.text({ type: 'text/html' })) ``` ## License [MIT](LICENSE) [ci-image]: https://badgen.net/github/checks/expressjs/body-parser/master?label=ci [ci-url]: https://github.com/expressjs/body-parser/actions/workflows/ci.yml [coveralls-image]: https://badgen.net/coveralls/c/github/expressjs/body-parser/master [coveralls-url]: https://coveralls.io/r/expressjs/body-parser?branch=master [node-version-image]: https://badgen.net/npm/node/body-parser [node-version-url]: https://nodejs.org/en/download [npm-downloads-image]: https://badgen.net/npm/dm/body-parser [npm-url]: https://npmjs.org/package/body-parser [npm-version-image]: https://badgen.net/npm/v/body-parser [ossf-scorecard-badge]: https://api.scorecard.dev/projects/github.com/expressjs/body-parser/badge [ossf-scorecard-visualizer]: https://ossf.github.io/scorecard-visualizer/#/projects/github.com/expressjs/body-parser
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/body-parser/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/body-parser/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 19180 }
# Security Policies and Procedures

## Reporting a Bug

The Express team and community take all security bugs seriously. Thank you for improving the security of Express. We appreciate your efforts and responsible disclosure and will make every effort to acknowledge your contributions.

Report security bugs by emailing the current owner(s) of `body-parser`. This information can be found in the npm registry using the command `npm owner ls body-parser`. If unsure or unable to get the information from the above, open an issue in the [project issue tracker](https://github.com/expressjs/body-parser/issues) asking for the current contact information.

To ensure a timely response to your report, please make sure that the entirety of the report is contained within the email body and not solely behind a web link or an attachment.

At least one owner will acknowledge your email within 48 hours, and will send a more detailed response within 48 hours indicating the next steps in handling your report. After the initial reply to your report, the owners will endeavor to keep you informed of the progress towards a fix and full announcement, and may ask for additional information or guidance.
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/body-parser/SECURITY.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/body-parser/SECURITY.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 1192 }
# brace-expansion

[Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html), as known from sh/bash, in JavaScript.

[![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion)
[![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion)
[![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/)

[![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion)

## Example

```js
var expand = require('brace-expansion');

expand('file-{a,b,c}.jpg')
// => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']

expand('-v{,,}')
// => ['-v', '-v', '-v']

expand('file{0..2}.jpg')
// => ['file0.jpg', 'file1.jpg', 'file2.jpg']

expand('file-{a..c}.jpg')
// => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']

expand('file{2..0}.jpg')
// => ['file2.jpg', 'file1.jpg', 'file0.jpg']

expand('file{0..4..2}.jpg')
// => ['file0.jpg', 'file2.jpg', 'file4.jpg']

expand('file-{a..e..2}.jpg')
// => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg']

expand('file{00..10..5}.jpg')
// => ['file00.jpg', 'file05.jpg', 'file10.jpg']

expand('{{A..C},{a..c}}')
// => ['A', 'B', 'C', 'a', 'b', 'c']

expand('ppp{,config,oe{,conf}}')
// => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf']
```

## API

```js
var expand = require('brace-expansion');
```

### var expanded = expand(str)

Return an array of all possible and valid expansions of `str`. If none are found, `[str]` is returned.

Valid expansions are:

```js
/^(.*,)+(.+)?$/
// {a,b,...}
```

A comma-separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`.

```js
/^-?\d+\.\.-?\d+(\.\.-?\d+)?$/
// {x..y[..incr]}
```

A numeric sequence from `x` to `y` inclusive, with optional increment. If `x` or `y` start with a leading `0`, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too.

```js
/^[a-zA-Z]\.\.[a-zA-Z](\.\.-?\d+)?$/
// {x..y[..incr]}
```

An alphabetic sequence from `x` to `y` inclusive, with optional increment. `x` and `y` must be exactly one character, and if given, `incr` must be a number.

For compatibility reasons, the string `${` is not eligible for brace expansion.

## Installation

With [npm](https://npmjs.org) do:

```bash
npm install brace-expansion
```

## Contributors

- [Julian Gruber](https://github.com/juliangruber)
- [Isaac Z. Schlueter](https://github.com/isaacs)

## Sponsors

This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)!

Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)!

## Security contact information

To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure.
## License (MIT) Copyright (c) 2013 Julian Gruber &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/brace-expansion/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/brace-expansion/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 4251 }
# braces [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/braces.svg?style=flat)](https://www.npmjs.com/package/braces) [![NPM monthly downloads](https://img.shields.io/npm/dm/braces.svg?style=flat)](https://npmjs.org/package/braces) [![NPM total downloads](https://img.shields.io/npm/dt/braces.svg?style=flat)](https://npmjs.org/package/braces) [![Linux Build Status](https://img.shields.io/travis/micromatch/braces.svg?style=flat&label=Travis)](https://travis-ci.org/micromatch/braces)

> Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support for the Bash 4.3 braces specification, without sacrificing speed.

Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support.

## Install

Install with [npm](https://www.npmjs.com/):

```sh
$ npm install --save braces
```

## v3.0.0 Released!!

See the [changelog](CHANGELOG.md) for details.

## Why use braces?

Brace patterns make globs more powerful by adding the ability to match specific ranges and sequences of characters.

- **Accurate** - complete support for the [Bash 4.3 Brace Expansion](https://www.gnu.org/software/bash/) specification (passes all of the Bash braces tests)
- **[fast and performant](#benchmarks)** - Starts fast, runs fast and [scales well](#performance) as patterns increase in complexity.
- **Organized code base** - The parser and compiler are easy to maintain and update when edge cases crop up.
- **Well-tested** - Thousands of test assertions, and passes all of the Bash, minimatch, and [brace-expansion](https://github.com/juliangruber/brace-expansion) unit tests (as of the date this was written).
- **Safer** - You shouldn't have to worry about users defining aggressive or malicious brace patterns that can break your application. Braces takes measures to prevent malicious regex that can be used for DDoS attacks (see [catastrophic backtracking](https://www.regular-expressions.info/catastrophic.html)).
- [Supports lists](#lists) - (aka "sets") `a/{b,c}/d` => `['a/b/d', 'a/c/d']`
- [Supports sequences](#sequences) - (aka "ranges") `{01..03}` => `['01', '02', '03']`
- [Supports steps](#steps) - (aka "increments") `{2..10..2}` => `['2', '4', '6', '8', '10']`
- [Supports escaping](#escaping) - To prevent evaluation of special characters.

## Usage

The main export is a function that takes one or more brace `patterns` and `options`.

```js
const braces = require('braces');
// braces(patterns[, options]);

console.log(braces(['{01..05}', '{a..e}']));
//=> ['(0[1-5])', '([a-e])']

console.log(braces(['{01..05}', '{a..e}'], { expand: true }));
//=> ['01', '02', '03', '04', '05', 'a', 'b', 'c', 'd', 'e']
```

### Brace Expansion vs. Compilation

By default, brace patterns are compiled into strings that are optimized for creating regular expressions and matching.

**Compiled**

```js
console.log(braces('a/{x,y,z}/b'));
//=> ['a/(x|y|z)/b']
console.log(braces(['a/{01..20}/b', 'a/{1..5}/b']));
//=> [ 'a/(0[1-9]|1[0-9]|20)/b', 'a/([1-5])/b' ]
```

**Expanded**

Enable brace expansion by setting the `expand` option to true, or by using [braces.expand()](#expand) (returns an array similar to what you'd expect from Bash, or `echo {1..5}`, or [minimatch](https://github.com/isaacs/minimatch)):

```js
console.log(braces('a/{x,y,z}/b', { expand: true }));
//=> ['a/x/b', 'a/y/b', 'a/z/b']

console.log(braces.expand('{01..10}'));
//=> ['01','02','03','04','05','06','07','08','09','10']
```

### Lists

Expand lists (like Bash "sets"):

```js
console.log(braces('a/{foo,bar,baz}/*.js'));
//=> ['a/(foo|bar|baz)/*.js']

console.log(braces.expand('a/{foo,bar,baz}/*.js'));
//=> ['a/foo/*.js', 'a/bar/*.js', 'a/baz/*.js']
```

### Sequences

Expand ranges of characters (like Bash "sequences"):

```js
console.log(braces.expand('{1..3}')); // ['1', '2', '3']
console.log(braces.expand('a/{1..3}/b')); // ['a/1/b', 'a/2/b', 'a/3/b']
console.log(braces('{a..c}', { expand: true })); // ['a', 'b', 'c']
console.log(braces('foo/{a..c}', { expand: true })); // ['foo/a', 'foo/b', 'foo/c']

// supports zero-padded ranges
console.log(braces('a/{01..03}/b')); //=> ['a/(0[1-3])/b']
console.log(braces('a/{001..300}/b')); //=> ['a/(0{2}[1-9]|0[1-9][0-9]|[12][0-9]{2}|300)/b']
```

See [fill-range](https://github.com/jonschlinkert/fill-range) for all available range-expansion options.

### Stepped ranges

Steps, or increments, may be used with ranges:

```js
console.log(braces.expand('{2..10..2}'));
//=> ['2', '4', '6', '8', '10']

console.log(braces('{2..10..2}'));
//=> ['(2|4|6|8|10)']
```

When the [.optimize](#optimize) method is used, or [options.optimize](#optionsoptimize) is set to true, sequences are passed to [to-regex-range](https://github.com/jonschlinkert/to-regex-range) for expansion.

### Nesting

Brace patterns may be nested. The results of each expanded string are not sorted, and left to right order is preserved.

**"Expanded" braces**

```js
console.log(braces.expand('a{b,c,/{x,y}}/e'));
//=> ['ab/e', 'ac/e', 'a/x/e', 'a/y/e']

console.log(braces.expand('a/{x,{1..5},y}/c'));
//=> ['a/x/c', 'a/1/c', 'a/2/c', 'a/3/c', 'a/4/c', 'a/5/c', 'a/y/c']
```

**"Optimized" braces**

```js
console.log(braces('a{b,c,/{x,y}}/e'));
//=> ['a(b|c|/(x|y))/e']

console.log(braces('a/{x,{1..5},y}/c'));
//=> ['a/(x|([1-5])|y)/c']
```

### Escaping

**Escaping braces**

A brace pattern will not be expanded or evaluated if _either the opening or closing brace is escaped_:

```js
console.log(braces.expand('a\\{d,c,b}e'));
//=> ['a{d,c,b}e']

console.log(braces.expand('a{d,c,b\\}e'));
//=> ['a{d,c,b}e']
```

**Escaping commas**

Commas inside braces may also be escaped:

```js
console.log(braces.expand('a{b\\,c}d'));
//=> ['a{b,c}d']

console.log(braces.expand('a{d\\,c,b}e'));
//=> ['ad,ce', 'abe']
```

**Single items**

Following bash conventions, a brace pattern is also not expanded when it contains a single character:

```js
console.log(braces.expand('a{b}c'));
//=> ['a{b}c']
```

## Options

### options.maxLength

**Type**: `Number`

**Default**: `10,000`

**Description**: Limit the length of the input string. Useful when the input string is generated or your application allows users to pass a string, et cetera.

```js
console.log(braces('a/{b,c}/d', { maxLength: 3 }));
//=> throws an error
```

### options.expand

**Type**: `Boolean`

**Default**: `undefined`

**Description**: Generate an "expanded" brace pattern (alternatively you can use the `braces.expand()` method, which does the same thing).

```js
console.log(braces('a/{b,c}/d', { expand: true }));
//=> [ 'a/b/d', 'a/c/d' ]
```

### options.nodupes

**Type**: `Boolean`

**Default**: `undefined`

**Description**: Remove duplicates from the returned array.

### options.rangeLimit

**Type**: `Number`

**Default**: `1000`

**Description**: To prevent malicious patterns from being passed by users, an error is thrown when `braces.expand()` is used or `options.expand` is true and the generated range will exceed the `rangeLimit`. You can customize `options.rangeLimit` or set it to `Infinity` to disable this altogether.

**Examples**

```js
// pattern exceeds the "rangeLimit", so it's optimized automatically
console.log(braces.expand('{1..1000}'));
//=> ['([1-9]|[1-9][0-9]{1,2}|1000)']

// pattern does not exceed "rangeLimit", so it's NOT optimized
console.log(braces.expand('{1..100}'));
//=> ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100']
```

### options.transform

**Type**: `Function`

**Default**: `undefined`

**Description**: Customize range expansion.

**Example: Transforming non-numeric values**

```js
const alpha = braces.expand('x/{a..e}/y', {
  transform(value, index) {
    // When non-numeric values are passed, "value" is a character code.
    return 'foo/' + String.fromCharCode(value) + '-' + index;
  },
});
console.log(alpha);
//=> [ 'x/foo/a-0/y', 'x/foo/b-1/y', 'x/foo/c-2/y', 'x/foo/d-3/y', 'x/foo/e-4/y' ]
```

**Example: Transforming numeric values**

```js
const numeric = braces.expand('{1..5}', {
  transform(value) {
    // when numeric values are passed, "value" is a number
    return 'foo/' + value * 2;
  },
});
console.log(numeric);
//=> [ 'foo/2', 'foo/4', 'foo/6', 'foo/8', 'foo/10' ]
```

### options.quantifiers

**Type**: `Boolean`

**Default**: `undefined`

**Description**: In regular expressions, quantifiers can be used to specify how many times a token can be repeated. For example, `a{1,3}` will match the letter `a` one to three times.

Unfortunately, regex quantifiers happen to share the same syntax as [Bash lists](#lists).

The `quantifiers` option tells braces to detect when [regex quantifiers](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp#quantifiers) are defined in the given pattern, and not to try to expand them as lists.

**Examples**

```js
const braces = require('braces');
console.log(braces('a/b{1,3}/{x,y,z}'));
//=> [ 'a/b(1|3)/(x|y|z)' ]
console.log(braces('a/b{1,3}/{x,y,z}', { quantifiers: true }));
//=> [ 'a/b{1,3}/(x|y|z)' ]
console.log(braces('a/b{1,3}/{x,y,z}', { quantifiers: true, expand: true }));
//=> [ 'a/b{1,3}/x', 'a/b{1,3}/y', 'a/b{1,3}/z' ]
```

### options.keepEscaping

**Type**: `Boolean`

**Default**: `undefined`

**Description**: Do not strip backslashes that were used for escaping from the result.

## What is "brace expansion"?

Brace expansion is a type of parameter expansion that was made popular by unix shells for generating lists of strings, as well as regex-like matching when used alongside wildcards (globs).

In addition to "expansion", braces are also used for matching. In other words:

- [brace expansion](#brace-expansion) is for generating new lists
- [brace matching](#brace-matching) is for filtering existing lists

<details>
<summary><strong>More about brace expansion</strong> (click to expand)</summary>

There are two main types of brace expansion:

1. **lists**: which are defined using comma-separated values inside curly braces: `{a,b,c}`
2. **sequences**: which are defined using a starting value and an ending value, separated by two dots: `a{1..3}b`. Optionally, a third argument may be passed to define a "step" or increment to use: `a{1..100..10}b`. These are also sometimes referred to as "ranges".

Here are some example brace patterns to illustrate how they work:

**Sets**

```
{a,b,c}       => a b c
{a,b,c}{1,2}  => a1 a2 b1 b2 c1 c2
```

**Sequences**

```
{1..9}        => 1 2 3 4 5 6 7 8 9
{4..-4}       => 4 3 2 1 0 -1 -2 -3 -4
{1..20..3}    => 1 4 7 10 13 16 19
{a..j}        => a b c d e f g h i j
{j..a}        => j i h g f e d c b a
{a..z..3}     => a d g j m p s v y
```

**Combination**

Sets and sequences can be mixed together or used along with any other strings.

```
{a,b,c}{1..3}   => a1 a2 a3 b1 b2 b3 c1 c2 c3
foo/{a,b,c}/bar => foo/a/bar foo/b/bar foo/c/bar
```

The fact that braces can be "expanded" from relatively simple patterns makes them ideal for quickly generating test fixtures, file paths, and similar use cases.

## Brace matching

In addition to _expansion_, brace patterns are also useful for performing regular-expression-like matching.

For example, the pattern `foo/{1..3}/bar` would match any of the following strings:

```
foo/1/bar
foo/2/bar
foo/3/bar
```

But not:

```
baz/1/qux
baz/2/qux
baz/3/qux
```

Braces can also be combined with [glob patterns](https://github.com/jonschlinkert/micromatch) to perform more advanced wildcard matching. For example, the pattern `*/{1..3}/*` would match any of the following strings:

```
foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux
```

## Brace matching pitfalls

Although brace patterns offer a user-friendly way of matching ranges or sets of strings, there are also some major disadvantages and potential risks you should be aware of.

### tldr

**"brace bombs"**

- brace expansion can eat up a huge amount of processing resources
- as brace patterns increase _linearly in size_, the system resources required to expand the pattern increase exponentially
- users can accidentally (or intentionally) exhaust your system's resources resulting in the equivalent of a DoS attack (bonus: no programming knowledge is required!)

For a more detailed explanation with examples, see the [geometric complexity](#geometric-complexity) section.

### The solution

Jump to the [performance section](#performance) to see how Braces solves this problem in comparison to other libraries.
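To make the matching use case above concrete before we look at complexity, here is a small sketch using [micromatch](https://github.com/micromatch/micromatch), which compiles braces patterns under the hood (the exact call shown is our own illustration, not from this readme):

```js
const micromatch = require('micromatch');

// the braces pattern is compiled to a regex-like matcher, never expanded
console.log(micromatch(['foo/1/bar', 'foo/2/bar', 'foo/9/bar'], 'foo/{1..3}/bar'));
//=> ['foo/1/bar', 'foo/2/bar']
```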
### Geometric complexity

At minimum, brace patterns with sets limited to two elements have quadratic or `O(n^2)` complexity. But the complexity of the algorithm increases exponentially as the number of sets, _and elements per set_, increases, which is `O(n^c)`.

For example, the following sets demonstrate quadratic (`O(n^2)`) complexity:

```
{1,2}{3,4}      => (2X2)   => 13 14 23 24
{1,2}{3,4}{5,6} => (2X2X2) => 135 136 145 146 235 236 245 246
```

But add an element to a set, and we get an n-fold Cartesian product with `O(n^c)` complexity:

```
{1,2,3}{4,5,6}{7,8,9} => (3X3X3) => 147 148 149 157 158 159 167 168 169 247 248 249 257 258 259 267 268 269 347 348 349 357 358 359 367 368 369
```

Now, imagine how this complexity grows given that each element is a n-tuple:

```
{1..100}{1..100}         => (100X100)     => 10,000 elements (38.4 kB)
{1..100}{1..100}{1..100} => (100X100X100) => 1,000,000 elements (5.76 MB)
```

Although these examples are clearly contrived, they demonstrate how brace patterns can quickly grow out of control.

**More information**

Interested in learning more about brace expansion?

- [linuxjournal/bash-brace-expansion](http://www.linuxjournal.com/content/bash-brace-expansion)
- [rosettacode/Brace_expansion](https://rosettacode.org/wiki/Brace_expansion)
- [cartesian product](https://en.wikipedia.org/wiki/Cartesian_product)

</details>

## Performance

Braces is not only screaming fast, it's also more accurate than other brace expansion libraries.

### Better algorithms

Fortunately there is a solution to the ["brace bomb" problem](#brace-matching-pitfalls): _don't expand brace patterns into an array when they're used for matching_.

Instead, convert the pattern into an optimized regular expression. This is easier said than done, and braces is the only library that does this currently.

**The proof is in the numbers**

Minimatch gets exponentially slower as patterns increase in complexity, while braces does not. The following results were generated using `braces()` and `minimatch.braceExpand()`, respectively.

| **Pattern** | **braces** | **[minimatch][]** |
| --------------------------- | ------------------- | ---------------------------- |
| `{1..9007199254740991}`[^1] | `298 B` (5ms 459μs) | N/A (freezes) |
| `{1..1000000000000000}` | `41 B` (1ms 15μs) | N/A (freezes) |
| `{1..100000000000000}` | `40 B` (890μs) | N/A (freezes) |
| `{1..10000000000000}` | `39 B` (2ms 49μs) | N/A (freezes) |
| `{1..1000000000000}` | `38 B` (608μs) | N/A (freezes) |
| `{1..100000000000}` | `37 B` (397μs) | N/A (freezes) |
| `{1..10000000000}` | `35 B` (983μs) | N/A (freezes) |
| `{1..1000000000}` | `34 B` (798μs) | N/A (freezes) |
| `{1..100000000}` | `33 B` (733μs) | N/A (freezes) |
| `{1..10000000}` | `32 B` (5ms 632μs) | `78.89 MB` (16s 388ms 569μs) |
| `{1..1000000}` | `31 B` (1ms 381μs) | `6.89 MB` (1s 496ms 887μs) |
| `{1..100000}` | `30 B` (950μs) | `588.89 kB` (146ms 921μs) |
| `{1..10000}` | `29 B` (1ms 114μs) | `48.89 kB` (14ms 187μs) |
| `{1..1000}` | `28 B` (760μs) | `3.89 kB` (1ms 453μs) |
| `{1..100}` | `22 B` (345μs) | `291 B` (196μs) |
| `{1..10}` | `10 B` (533μs) | `20 B` (37μs) |
| `{1..3}` | `7 B` (190μs) | `5 B` (27μs) |

### Faster algorithms

When you need expansion, braces is still much faster.

_(the following results were generated using `braces.expand()` and `minimatch.braceExpand()`, respectively)_

| **Pattern** | **braces** | **[minimatch][]** |
| --------------- | --------------------------- | ---------------------------- |
| `{1..10000000}` | `78.89 MB` (2s 698ms 642μs) | `78.89 MB` (18s 601ms 974μs) |
| `{1..1000000}` | `6.89 MB` (458ms 576μs) | `6.89 MB` (1s 491ms 621μs) |
| `{1..100000}` | `588.89 kB` (20ms 728μs) | `588.89 kB` (156ms 919μs) |
| `{1..10000}` | `48.89 kB` (2ms 202μs) | `48.89 kB` (13ms 641μs) |
| `{1..1000}` | `3.89 kB` (1ms 796μs) | `3.89 kB` (1ms 958μs) |
| `{1..100}` | `291 B` (424μs) | `291 B` (211μs) |
| `{1..10}` | `20 B` (487μs) | `20 B` (72μs) |
| `{1..3}` | `5 B` (166μs) | `5 B` (27μs) |

If you'd like to run these comparisons yourself, see [test/support/generate.js](test/support/generate.js).

## Benchmarks

### Running benchmarks

Install dev dependencies:

```bash
npm i -d && npm run benchmark
```

### Latest results

Braces is more accurate, without sacrificing performance.

```bash
● expand - range (expanded)
  braces x 53,167 ops/sec ±0.12% (102 runs sampled)
  minimatch x 11,378 ops/sec ±0.10% (102 runs sampled)

● expand - range (optimized for regex)
  braces x 373,442 ops/sec ±0.04% (100 runs sampled)
  minimatch x 3,262 ops/sec ±0.18% (100 runs sampled)

● expand - nested ranges (expanded)
  braces x 33,921 ops/sec ±0.09% (99 runs sampled)
  minimatch x 10,855 ops/sec ±0.28% (100 runs sampled)

● expand - nested ranges (optimized for regex)
  braces x 287,479 ops/sec ±0.52% (98 runs sampled)
  minimatch x 3,219 ops/sec ±0.28% (101 runs sampled)

● expand - set (expanded)
  braces x 238,243 ops/sec ±0.19% (97 runs sampled)
  minimatch x 538,268 ops/sec ±0.31% (96 runs sampled)

● expand - set (optimized for regex)
  braces x 321,844 ops/sec ±0.10% (97 runs sampled)
  minimatch x 140,600 ops/sec ±0.15% (100 runs sampled)

● expand - nested sets (expanded)
  braces x 165,371 ops/sec ±0.42% (96 runs sampled)
  minimatch x 337,720 ops/sec ±0.28% (100 runs sampled)

● expand - nested sets (optimized for regex)
  braces x 242,948 ops/sec ±0.12% (99 runs sampled)
  minimatch x 87,403 ops/sec ±0.79% (96 runs sampled)
```

## About

<details>
<summary><strong>Contributing</strong></summary>

Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new).

</details>

<details>
<summary><strong>Running Tests</strong></summary>

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

```sh
$ npm install && npm test
```

</details>

<details>
<summary><strong>Building docs</strong></summary>

_(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly.
Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Contributors | **Commits** | **Contributor** | | ----------- | ------------------------------------------------------------- | | 197 | [jonschlinkert](https://github.com/jonschlinkert) | | 4 | [doowb](https://github.com/doowb) | | 1 | [es128](https://github.com/es128) | | 1 | [eush77](https://github.com/eush77) | | 1 | [hemanth](https://github.com/hemanth) | | 1 | [wtgtybhertgeghgtwtg](https://github.com/wtgtybhertgeghgtwtg) | ### Author **Jon Schlinkert** - [GitHub Profile](https://github.com/jonschlinkert) - [Twitter Profile](https://twitter.com/jonschlinkert) - [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). --- _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on April 08, 2019._
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/braces/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/braces/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 21430 }
# Browserslist [![Cult Of Martians][cult-img]][cult]

<img width="120" height="120" alt="Browserslist logo by Anton Popov" src="https://browsersl.ist/logo.svg" align="right">

The config to share target browsers and Node.js versions between different front-end tools. It is used in:

* [Autoprefixer]
* [Babel]
* [postcss-preset-env]
* [eslint-plugin-compat]
* [stylelint-no-unsupported-browser-features]
* [postcss-normalize]
* [obsolete-webpack-plugin]

All tools will find target browsers automatically, when you add the following to `package.json`:

```json
  "browserslist": [
    "defaults and fully supports es6-module",
    "maintained node versions"
  ]
```

Or in `.browserslistrc` config:

```yaml
# Browsers that we support

defaults and fully supports es6-module
maintained node versions
```

Developers set their version lists using queries like `last 2 versions` to be free from updating versions manually. Browserslist will use [`caniuse-lite`] with [Can I Use] data for these queries.

You can check how config works at our playground: [`browsersl.ist`](https://browsersl.ist/)

<a href="https://browsersl.ist/">
  <img src="/img/screenshot.webp" alt="browsersl.ist website">
</a>

<br>
<br>

<div align="center">
  <a href="https://evilmartians.com/?utm_source=browserslist"><img src="https://evilmartians.com/badges/sponsored-by-evil-martians.svg" alt="Sponsored by Evil Martians" width="236" height="54"></a>  <a href="https://cube.dev/?ref=eco-browserslist-github"><img src="https://user-images.githubusercontent.com/986756/154330861-d79ab8ec-aacb-4af8-9e17-1b28f1eccb01.svg" alt="Supported by Cube" width="227" height="46"></a>
</div>

[stylelint-no-unsupported-browser-features]: https://github.com/ismay/stylelint-no-unsupported-browser-features
[obsolete-webpack-plugin]: https://github.com/ElemeFE/obsolete-webpack-plugin
[eslint-plugin-compat]: https://github.com/amilajack/eslint-plugin-compat
[Browserslist Example]: https://github.com/browserslist/browserslist-example
[postcss-preset-env]: https://github.com/csstools/postcss-plugins/tree/main/plugin-packs/postcss-preset-env
[postcss-normalize]: https://github.com/csstools/postcss-normalize
[`browsersl.ist`]: https://browsersl.ist/
[`caniuse-lite`]: https://github.com/ben-eb/caniuse-lite
[Autoprefixer]: https://github.com/postcss/autoprefixer
[Can I Use]: https://caniuse.com/
[Babel]: https://github.com/babel/babel/tree/master/packages/babel-preset-env
[cult-img]: https://cultofmartians.com/assets/badges/badge.svg
[cult]: https://cultofmartians.com/done.html

## Docs

Read full docs **[here](https://github.com/browserslist/browserslist#readme)**.
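Beyond sharing a config, the package can also be queried directly from JavaScript. A quick sketch (the query strings are illustrative, and the resolved versions depend on your installed `caniuse-lite` data):

```js
const browserslist = require('browserslist');

// resolve a query list into concrete "browser version" strings
console.log(browserslist(['last 2 versions', 'not dead']));
//=> e.g. ['chrome 131', 'chrome 130', 'firefox 133', ...]
```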
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/browserslist/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/browserslist/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 2897 }
# Buffer From

A [ponyfill](https://ponyfill.com) for `Buffer.from`; uses the native implementation if available.

## Installation

```sh
npm install --save buffer-from
```

## Usage

```js
const bufferFrom = require('buffer-from')

console.log(bufferFrom([1, 2, 3, 4]))
//=> <Buffer 01 02 03 04>

const arr = new Uint8Array([1, 2, 3, 4])
console.log(bufferFrom(arr.buffer, 1, 2))
//=> <Buffer 02 03>

console.log(bufferFrom('test', 'utf8'))
//=> <Buffer 74 65 73 74>

const buf = bufferFrom('test')
console.log(bufferFrom(buf))
//=> <Buffer 74 65 73 74>
```

## API

### bufferFrom(array)

- `array` &lt;Array&gt;

Allocates a new `Buffer` using an `array` of octets.

### bufferFrom(arrayBuffer[, byteOffset[, length]])

- `arrayBuffer` &lt;ArrayBuffer&gt; The `.buffer` property of a TypedArray or ArrayBuffer
- `byteOffset` &lt;Integer&gt; Where to start copying from `arrayBuffer`. **Default:** `0`
- `length` &lt;Integer&gt; How many bytes to copy from `arrayBuffer`. **Default:** `arrayBuffer.length - byteOffset`

When passed a reference to the `.buffer` property of a TypedArray instance, the newly created `Buffer` will share the same allocated memory as the TypedArray.

The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`.

### bufferFrom(buffer)

- `buffer` &lt;Buffer&gt; An existing `Buffer` to copy data from

Copies the passed `buffer` data onto a new `Buffer` instance.

### bufferFrom(string[, encoding])

- `string` &lt;String&gt; A string to encode.
- `encoding` &lt;String&gt; The encoding of `string`. **Default:** `'utf8'`

Creates a new `Buffer` containing the given JavaScript string `string`. If provided, the `encoding` parameter identifies the character encoding of `string`.

## See also

- [buffer-alloc](https://github.com/LinusU/buffer-alloc) A ponyfill for `Buffer.alloc`
- [buffer-alloc-unsafe](https://github.com/LinusU/buffer-alloc-unsafe) A ponyfill for `Buffer.allocUnsafe`
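Returning to the `arrayBuffer` form documented above, a small sketch showing the shared memory in action (our own illustration, not from the readme):

```js
const bufferFrom = require('buffer-from')

const arr = new Uint8Array([1, 2, 3, 4])
const buf = bufferFrom(arr.buffer, 1, 2)

// the Buffer is a view over the same memory, so writes to the
// TypedArray are visible through the Buffer as well
arr[1] = 9
console.log(buf)
//=> <Buffer 09 03>
```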
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/buffer-from/readme.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/buffer-from/readme.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 1989 }
# bufferutil [![Version npm](https://img.shields.io/npm/v/bufferutil.svg?logo=npm)](https://www.npmjs.com/package/bufferutil) [![Linux/macOS/Windows Build](https://img.shields.io/github/actions/workflow/status/websockets/bufferutil/ci.yml?branch=master&label=build&logo=github)](https://github.com/websockets/bufferutil/actions?query=workflow%3ACI+branch%3Amaster) `bufferutil` is what makes `ws` fast. It provides some utilities to efficiently perform some operations such as masking and unmasking the data payload of WebSocket frames. ## Installation ``` npm install bufferutil --save-optional ``` The `--save-optional` flag tells npm to save the package in your package.json under the [`optionalDependencies`](https://docs.npmjs.com/files/package.json#optionaldependencies) key. ## API The module exports two functions. ### `bufferUtil.mask(source, mask, output, offset, length)` Masks a buffer using the given masking-key as specified by the WebSocket protocol. #### Arguments - `source` - The buffer to mask. - `mask` - A buffer representing the masking-key. - `output` - The buffer where to store the result. - `offset` - The offset at which to start writing. - `length` - The number of bytes to mask. #### Example ```js 'use strict'; const bufferUtil = require('bufferutil'); const crypto = require('crypto'); const source = crypto.randomBytes(10); const mask = crypto.randomBytes(4); bufferUtil.mask(source, mask, source, 0, source.length); ``` ### `bufferUtil.unmask(buffer, mask)` Unmasks a buffer using the given masking-key as specified by the WebSocket protocol. #### Arguments - `buffer` - The buffer to unmask. - `mask` - A buffer representing the masking-key. #### Example ```js 'use strict'; const bufferUtil = require('bufferutil'); const crypto = require('crypto'); const buffer = crypto.randomBytes(10); const mask = crypto.randomBytes(4); bufferUtil.unmask(buffer, mask); ``` ## License [MIT](LICENSE)
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/bufferutil/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/bufferutil/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 1949 }
3.1.2 / 2022-01-27
==================

  * Fix return value for un-parsable strings

3.1.1 / 2021-11-15
==================

  * Fix "thousandsSeparator" incorrectly formatting fractional part

3.1.0 / 2019-01-22
==================

  * Add petabyte (`pb`) support

3.0.0 / 2017-08-31
==================

  * Change "kB" to "KB" in format output
  * Remove support for Node.js 0.6
  * Remove support for ComponentJS

2.5.0 / 2017-03-24
==================

  * Add option "unit"

2.4.0 / 2016-06-01
==================

  * Add option "unitSeparator"

2.3.0 / 2016-02-15
==================

  * Drop partial bytes on all parsed units
  * Fix non-finite numbers to `.format` to return `null`
  * Fix parsing byte string that looks like hex
  * perf: hoist regular expressions

2.2.0 / 2015-11-13
==================

  * add option "decimalPlaces"
  * add option "fixedDecimals"

2.1.0 / 2015-05-21
==================

  * add `.format` export
  * add `.parse` export

2.0.2 / 2015-05-20
==================

  * remove map recreation
  * remove unnecessary object construction

2.0.1 / 2015-05-07
==================

  * fix browserify require
  * remove node.extend dependency

2.0.0 / 2015-04-12
==================

  * add option "case"
  * add option "thousandsSeparator"
  * return "null" on invalid parse input
  * support proper round-trip: bytes(bytes(num)) === num
  * units no longer case sensitive when parsing

1.0.0 / 2014-05-05
==================

  * add negative support. fixes #6

0.3.0 / 2014-03-19
==================

  * added terabyte support

0.2.1 / 2013-04-01
==================

  * add .component

0.2.0 / 2012-10-28
==================

  * bytes(200).should.eql('200b')

0.1.0 / 2012-07-04
==================

  * add bytes to string conversion [yields]
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/bytes/History.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/bytes/History.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 1774 }
# Bytes utility

[![NPM Version][npm-image]][npm-url]
[![NPM Downloads][downloads-image]][downloads-url]
[![Build Status][ci-image]][ci-url]
[![Test Coverage][coveralls-image]][coveralls-url]

Utility to parse a byte string (e.g. `1TB`) into a number of bytes (`1099511627776`) and vice versa.

## Installation

This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally):

```bash
$ npm install bytes
```

## Usage

```js
var bytes = require('bytes');
```

#### bytes(number|string value, [options]): number|string|null

Default export function. Delegates to either `bytes.format` or `bytes.parse` based on the type of `value`.

**Arguments**

| Name    | Type     | Description        |
|---------|----------|--------------------|
| value   | `number`\|`string` | Number value to format or string value to parse |
| options | `Object` | Conversion options for `format` |

**Returns**

| Name    | Type             | Description                                     |
|---------|------------------|-------------------------------------------------|
| results | `string`\|`number`\|`null` | Return null upon error. Numeric value in bytes, or string value otherwise. |

**Example**

```js
bytes(1024);
// output: '1KB'

bytes('1KB');
// output: 1024
```

#### bytes.format(number value, [options]): string|null

Format the given value in bytes into a string. If the value is negative, it is kept as such. If it is a float, it is rounded.

**Arguments**

| Name    | Type     | Description        |
|---------|----------|--------------------|
| value   | `number` | Value in bytes     |
| options | `Object` | Conversion options |

**Options**

| Property          | Type   | Description                                                                               |
|-------------------|--------|-------------------------------------------------------------------------------------------|
| decimalPlaces | `number`\|`null` | Maximum number of decimal places to include in output. Defaults to `2`. |
| fixedDecimals | `boolean`\|`null` | Whether to always display the maximum number of decimal places. Defaults to `false`. |
| thousandsSeparator | `string`\|`null` | Example of values: `' '`, `','` and `'.'`... Defaults to `''`. |
| unit | `string`\|`null` | The unit in which the result will be returned (B/KB/MB/GB/TB). Defaults to `''` (which means auto detect). |
| unitSeparator | `string`\|`null` | Separator to use between number and unit. Defaults to `''`. |

**Returns**

| Name    | Type             | Description                                     |
|---------|------------------|-------------------------------------------------|
| results | `string`\|`null` | Return null upon error. String value otherwise. |

**Example**

```js
bytes.format(1024);
// output: '1KB'

bytes.format(1000);
// output: '1000B'

bytes.format(1000, {thousandsSeparator: ' '});
// output: '1 000B'

bytes.format(1024 * 1.7, {decimalPlaces: 0});
// output: '2KB'

bytes.format(1024, {unitSeparator: ' '});
// output: '1 KB'
```

#### bytes.parse(string|number value): number|null

Parse the string value into an integer in bytes. If no unit is given, or `value` is a number, it is assumed the value is in bytes.

Supported units and abbreviations are as follows and are case-insensitive:

* `b` for bytes
* `kb` for kilobytes
* `mb` for megabytes
* `gb` for gigabytes
* `tb` for terabytes
* `pb` for petabytes

The units are in powers of two, not ten. This means 1kb = 1024b according to this parser.

**Arguments**

| Name          | Type   | Description        |
|---------------|--------|--------------------|
| value   | `string`\|`number` | String to parse, or number in bytes. |

**Returns**

| Name    | Type        | Description             |
|---------|-------------|-------------------------|
| results | `number`\|`null` | Return null upon error. Value in bytes otherwise. |

**Example**

```js
bytes.parse('1KB');
// output: 1024

bytes.parse('1024');
// output: 1024

bytes.parse(1024);
// output: 1024
```

## License

[MIT](LICENSE)

[ci-image]: https://badgen.net/github/checks/visionmedia/bytes.js/master?label=ci
[ci-url]: https://github.com/visionmedia/bytes.js/actions?query=workflow%3Aci
[coveralls-image]: https://badgen.net/coveralls/c/github/visionmedia/bytes.js/master
[coveralls-url]: https://coveralls.io/r/visionmedia/bytes.js?branch=master
[downloads-image]: https://badgen.net/npm/dm/bytes
[downloads-url]: https://npmjs.org/package/bytes
[npm-image]: https://badgen.net/npm/v/bytes
[npm-url]: https://npmjs.org/package/bytes
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/bytes/Readme.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/bytes/Readme.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 4735 }
# Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). ## [v1.0.7](https://github.com/ljharb/call-bind/compare/v1.0.6...v1.0.7) - 2024-02-12 ### Commits - [Refactor] use `es-define-property` [`09b76a0`](https://github.com/ljharb/call-bind/commit/09b76a01634440461d44a80c9924ec4b500f3b03) - [Deps] update `get-intrinsic`, `set-function-length` [`ad5136d`](https://github.com/ljharb/call-bind/commit/ad5136ddda2a45c590959829ad3dce0c9f4e3590) ## [v1.0.6](https://github.com/ljharb/call-bind/compare/v1.0.5...v1.0.6) - 2024-02-05 ### Commits - [Dev Deps] update `aud`, `npmignore`, `tape` [`d564d5c`](https://github.com/ljharb/call-bind/commit/d564d5ce3e06a19df4d499c77f8d1a9da44e77aa) - [Deps] update `get-intrinsic`, `set-function-length` [`cfc2bdc`](https://github.com/ljharb/call-bind/commit/cfc2bdca7b633df0e0e689e6b637f668f1c6792e) - [Refactor] use `es-errors`, so things that only need those do not need `get-intrinsic` [`64cd289`](https://github.com/ljharb/call-bind/commit/64cd289ae5862c250a4ca80aa8d461047c166af5) - [meta] add missing `engines.node` [`32a4038`](https://github.com/ljharb/call-bind/commit/32a4038857b62179f7f9b7b3df2c5260036be582) ## [v1.0.5](https://github.com/ljharb/call-bind/compare/v1.0.4...v1.0.5) - 2023-10-19 ### Commits - [Fix] throw an error on non-functions as early as possible [`f262408`](https://github.com/ljharb/call-bind/commit/f262408f822c840fbc268080f3ad7c429611066d) - [Deps] update `set-function-length` [`3fff271`](https://github.com/ljharb/call-bind/commit/3fff27145a1e3a76a5b74f1d7c3c43d0fa3b9871) ## [v1.0.4](https://github.com/ljharb/call-bind/compare/v1.0.3...v1.0.4) - 2023-10-19 ## [v1.0.3](https://github.com/ljharb/call-bind/compare/v1.0.2...v1.0.3) - 2023-10-19 ### Commits - [actions] reuse common workflows [`a994df6`](https://github.com/ljharb/call-bind/commit/a994df69f401f4bf735a4ccd77029b85d1549453) - [meta] use `npmignore` to autogenerate an npmignore file [`eef3ef2`](https://github.com/ljharb/call-bind/commit/eef3ef21e1f002790837fedb8af2679c761fbdf5) - [readme] flesh out content [`1845ccf`](https://github.com/ljharb/call-bind/commit/1845ccfd9976a607884cfc7157c93192cc16cf22) - [actions] use `node/install` instead of `node/run`; use `codecov` action [`5b47d53`](https://github.com/ljharb/call-bind/commit/5b47d53d2fd74af5ea0a44f1d51e503cd42f7a90) - [Refactor] use `set-function-length` [`a0e165c`](https://github.com/ljharb/call-bind/commit/a0e165c5dc61db781cbc919b586b1c2b8da0b150) - [Dev Deps] update `@ljharb/eslint-config`, `aud`, `tape` [`9c50103`](https://github.com/ljharb/call-bind/commit/9c50103f44137279a817317cf6cc421a658f85b4) - [meta] simplify "exports" [`019c6d0`](https://github.com/ljharb/call-bind/commit/019c6d06b0e1246ceed8e579f57e44441cbbf6d9) - [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `aud`, `auto-changelog`, `safe-publish-latest`, `tape` [`23bd718`](https://github.com/ljharb/call-bind/commit/23bd718a288d3b03042062b4ef5153b3cea83f11) - [actions] update codecov uploader [`62552d7`](https://github.com/ljharb/call-bind/commit/62552d79cc79e05825e99aaba134ae5b37f33da5) - [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `aud`, `auto-changelog`, `tape` [`ec81665`](https://github.com/ljharb/call-bind/commit/ec81665b300f87eabff597afdc8b8092adfa7afd) - [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `safe-publish-latest`, `tape` 
[`35d67fc`](https://github.com/ljharb/call-bind/commit/35d67fcea883e686650f736f61da5ddca2592de8) - [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `aud`, `tape` [`0266d8d`](https://github.com/ljharb/call-bind/commit/0266d8d2a45086a922db366d0c2932fa463662ff) - [Dev Deps] update `@ljharb/eslint-config`, `aud`, `tape` [`43a5b28`](https://github.com/ljharb/call-bind/commit/43a5b28a444e710e1bbf92adb8afb5cf7523a223) - [Deps] update `define-data-property`, `function-bind`, `get-intrinsic` [`780eb36`](https://github.com/ljharb/call-bind/commit/780eb36552514f8cc99c70821ce698697c2726a5) - [Dev Deps] update `aud`, `tape` [`90d50ad`](https://github.com/ljharb/call-bind/commit/90d50ad03b061e0268b3380b0065fcaec183dc05) - [meta] use `prepublishOnly` script for npm 7+ [`44c5433`](https://github.com/ljharb/call-bind/commit/44c5433b7980e02b4870007046407cf6fc543329) - [Deps] update `get-intrinsic` [`86bfbfc`](https://github.com/ljharb/call-bind/commit/86bfbfcf34afdc6eabc93ce3d408548d0e27d958) - [Deps] update `get-intrinsic` [`5c53354`](https://github.com/ljharb/call-bind/commit/5c5335489be0294c18cd7a8bb6e08226ee019ff5) - [actions] update checkout action [`4c393a8`](https://github.com/ljharb/call-bind/commit/4c393a8173b3c8e5b30d5b3297b3b94d48bf87f3) - [Deps] update `get-intrinsic` [`4e70bde`](https://github.com/ljharb/call-bind/commit/4e70bdec0626acb11616d66250fc14565e716e91) - [Deps] update `get-intrinsic` [`55ae803`](https://github.com/ljharb/call-bind/commit/55ae803a920bd93c369cd798c20de31f91e9fc60) ## [v1.0.2](https://github.com/ljharb/call-bind/compare/v1.0.1...v1.0.2) - 2021-01-11 ### Commits - [Fix] properly include the receiver in the bound length [`dbae7bc`](https://github.com/ljharb/call-bind/commit/dbae7bc676c079a0d33c0a43e9ef92cb7b01345d) ## [v1.0.1](https://github.com/ljharb/call-bind/compare/v1.0.0...v1.0.1) - 2021-01-08 ### Commits - [Tests] migrate tests to Github Actions [`b6db284`](https://github.com/ljharb/call-bind/commit/b6db284c36f8ccd195b88a6764fe84b7223a0da1) - [meta] do not publish github action workflow files [`ec7fe46`](https://github.com/ljharb/call-bind/commit/ec7fe46e60cfa4764ee943d2755f5e5a366e578e) - [Fix] preserve original function’s length when possible [`adbceaa`](https://github.com/ljharb/call-bind/commit/adbceaa3cac4b41ea78bb19d7ccdbaaf7e0bdadb) - [Tests] gather coverage data on every job [`d69e23c`](https://github.com/ljharb/call-bind/commit/d69e23cc65f101ba1d4c19bb07fa8eb0ec624be8) - [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `aud`, `tape` [`2fd3586`](https://github.com/ljharb/call-bind/commit/2fd3586c5d47b335364c14293114c6b625ae1f71) - [Deps] update `get-intrinsic` [`f23e931`](https://github.com/ljharb/call-bind/commit/f23e9318cc271c2add8bb38cfded85ee7baf8eee) - [Deps] update `get-intrinsic` [`72d9f44`](https://github.com/ljharb/call-bind/commit/72d9f44e184465ba8dd3fb48260bbcff234985f2) - [meta] fix FUNDING.yml [`e723573`](https://github.com/ljharb/call-bind/commit/e723573438c5a68dcec31fb5d96ea6b7e4a93be8) - [eslint] ignore coverage output [`15e76d2`](https://github.com/ljharb/call-bind/commit/15e76d28a5f43e504696401e5b31ebb78ee1b532) - [meta] add Automatic Rebase and Require Allow Edits workflows [`8fa4dab`](https://github.com/ljharb/call-bind/commit/8fa4dabb23ba3dd7bb92c9571c1241c08b56e4b6) ## v1.0.0 - 2020-10-30 ### Commits - Initial commit [`306cf98`](https://github.com/ljharb/call-bind/commit/306cf98c7ec9e7ef66b653ec152277ac1381eb50) - Tests [`e10d0bb`](https://github.com/ljharb/call-bind/commit/e10d0bbdadc7a10ecedc9a1c035112d3e368b8df) - 
Implementation [`43852ed`](https://github.com/ljharb/call-bind/commit/43852eda0f187327b7fad2423ca972149a52bd65) - npm init [`408f860`](https://github.com/ljharb/call-bind/commit/408f860b773a2f610805fd3613d0d71bac1b6249) - [meta] add Automatic Rebase and Require Allow Edits workflows [`fb349b2`](https://github.com/ljharb/call-bind/commit/fb349b2e48defbec8b5ec8a8395cc8f69f220b13) - [meta] add `auto-changelog` [`c4001fc`](https://github.com/ljharb/call-bind/commit/c4001fc43031799ef908211c98d3b0fb2b60fde4) - [meta] add "funding"; create `FUNDING.yml` [`d4d6d29`](https://github.com/ljharb/call-bind/commit/d4d6d2974a14bc2e98830468eda7fe6d6a776717) - [Tests] add `npm run lint` [`dedfb98`](https://github.com/ljharb/call-bind/commit/dedfb98bd0ecefb08ddb9a94061bd10cde4332af) - Only apps should have lockfiles [`54ac776`](https://github.com/ljharb/call-bind/commit/54ac77653db45a7361dc153d2f478e743f110650) - [meta] add `safe-publish-latest` [`9ea8e43`](https://github.com/ljharb/call-bind/commit/9ea8e435b950ce9b705559cd651039f9bf40140f)
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/call-bind/CHANGELOG.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/call-bind/CHANGELOG.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 8139 }
# call-bind <sup>[![Version Badge][npm-version-svg]][package-url]</sup> [![github actions][actions-image]][actions-url] [![coverage][codecov-image]][codecov-url] [![dependency status][deps-svg]][deps-url] [![dev dependency status][dev-deps-svg]][dev-deps-url] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][npm-badge-png]][package-url] Robustly `.call.bind()` a function. ## Getting started ```sh npm install --save call-bind ``` ## Usage/Examples ```js const assert = require('assert'); const callBind = require('call-bind'); const callBound = require('call-bind/callBound'); function f(a, b) { assert.equal(this, 1); assert.equal(a, 2); assert.equal(b, 3); assert.equal(arguments.length, 2); } const fBound = callBind(f); const slice = callBound('Array.prototype.slice'); delete Function.prototype.call; delete Function.prototype.bind; fBound(1, 2, 3); assert.deepEqual(slice([1, 2, 3, 4], 1, -1), [2, 3]); ``` ## Tests Clone the repo, `npm install`, and run `npm test` [package-url]: https://npmjs.org/package/call-bind [npm-version-svg]: https://versionbadg.es/ljharb/call-bind.svg [deps-svg]: https://david-dm.org/ljharb/call-bind.svg [deps-url]: https://david-dm.org/ljharb/call-bind [dev-deps-svg]: https://david-dm.org/ljharb/call-bind/dev-status.svg [dev-deps-url]: https://david-dm.org/ljharb/call-bind#info=devDependencies [npm-badge-png]: https://nodei.co/npm/call-bind.png?downloads=true&stars=true [license-image]: https://img.shields.io/npm/l/call-bind.svg [license-url]: LICENSE [downloads-image]: https://img.shields.io/npm/dm/call-bind.svg [downloads-url]: https://npm-stat.com/charts.html?package=call-bind [codecov-image]: https://codecov.io/gh/ljharb/call-bind/branch/main/graphs/badge.svg [codecov-url]: https://app.codecov.io/gh/ljharb/call-bind/ [actions-image]: https://img.shields.io/endpoint?url=https://github-actions-badge-u3jn4tfpocch.runkit.sh/ljharb/call-bind [actions-url]: https://github.com/ljharb/call-bind/actions
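One more minimal sketch (not from the package docs; the names are illustrative) of the common motivation — capturing a prototype method before user code can tamper with it:

```js
const callBind = require('call-bind');

// Capture the intrinsic up front; the returned function takes the
// receiver as its first argument.
const hasOwn = callBind(Object.prototype.hasOwnProperty);

// Even if the prototype method is later replaced...
Object.prototype.hasOwnProperty = () => { throw new Error('tampered'); };

// ...the captured binding keeps working.
console.log(hasOwn({ foo: 1 }, 'foo')); // true
console.log(hasOwn({ foo: 1 }, 'bar')); // false
```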
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/call-bind/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/call-bind/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 2025 }
# camelcase-css [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] > Convert a kebab-cased CSS property into a camelCased DOM property. ## Installation [Node.js](http://nodejs.org/) `>= 6` is required. Type this at the command line: ```shell npm install camelcase-css ``` ## Usage ```js const camelCaseCSS = require('camelcase-css'); camelCaseCSS('-webkit-border-radius'); //-> WebkitBorderRadius camelCaseCSS('-moz-border-radius'); //-> MozBorderRadius camelCaseCSS('-ms-border-radius'); //-> msBorderRadius camelCaseCSS('border-radius'); //-> borderRadius ``` [npm-image]: https://img.shields.io/npm/v/camelcase-css.svg [npm-url]: https://npmjs.org/package/camelcase-css [travis-image]: https://img.shields.io/travis/stevenvachon/camelcase-css.svg [travis-url]: https://travis-ci.org/stevenvachon/camelcase-css
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/camelcase-css/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/camelcase-css/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 869 }
# caniuse-lite A smaller version of caniuse-db, with only the essentials! ## Docs Read full docs **[here](https://github.com/browserslist/caniuse-lite#readme)**.
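A minimal usage sketch; the export names (`agents`, `features`, `feature`) are assumptions based on the full docs linked above, so verify the exact shape there:

```js
const lite = require('caniuse-lite');

// `features` holds packed per-feature data; `feature()` unpacks one entry.
const flexbox = lite.feature(lite.features.flexbox);

console.log(flexbox.title);            // human-readable feature name
console.log(Object.keys(lite.agents)); // browser ids: 'chrome', 'firefox', ...
console.log(flexbox.stats.firefox);    // version -> support flag ('y', 'n', 'a', ...)
```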
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/caniuse-lite/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/caniuse-lite/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 163 }
# Chokidar [![Weekly downloads](https://img.shields.io/npm/dw/chokidar.svg)](https://github.com/paulmillr/chokidar) [![Yearly downloads](https://img.shields.io/npm/dy/chokidar.svg)](https://github.com/paulmillr/chokidar) > Minimal and efficient cross-platform file watching library [![NPM](https://nodei.co/npm/chokidar.png)](https://www.npmjs.com/package/chokidar) ## Why? Node.js `fs.watch`: * Doesn't report filenames on MacOS. * Doesn't report events at all when using editors like Sublime on MacOS. * Often reports events twice. * Emits most changes as `rename`. * Does not provide an easy way to recursively watch file trees. * Does not support recursive watching on Linux. Node.js `fs.watchFile`: * Almost as bad at event handling. * Also does not provide any recursive watching. * Results in high CPU utilization. Chokidar resolves these problems. Initially made for **[Brunch](https://brunch.io/)** (an ultra-swift web app build tool), it is now used in [Microsoft's Visual Studio Code](https://github.com/microsoft/vscode), [gulp](https://github.com/gulpjs/gulp/), [karma](https://karma-runner.github.io/), [PM2](https://github.com/Unitech/PM2), [browserify](http://browserify.org/), [webpack](https://webpack.github.io/), [BrowserSync](https://www.browsersync.io/), and [many others](https://www.npmjs.com/browse/depended/chokidar). It has proven itself in production environments. Version 3 is out! Check out our blog post about it: [Chokidar 3: How to save 32TB of traffic every week](https://paulmillr.com/posts/chokidar-3-save-32tb-of-traffic/) ## How? Chokidar does still rely on the Node.js core `fs` module, but when using `fs.watch` and `fs.watchFile` for watching, it normalizes the events it receives, often checking for truth by getting file stats and/or dir contents. On MacOS, chokidar by default uses a native extension exposing the Darwin `FSEvents` API. This provides very efficient recursive watching compared with implementations like `kqueue` available on most \*nix platforms. Chokidar still does have to do some work to normalize the events received that way as well. On most other platforms, the `fs.watch`-based implementation is the default, which avoids polling and keeps CPU usage down. Be advised that chokidar will initiate watchers recursively for everything within scope of the paths that have been specified, so be judicious about not wasting system resources by watching much more than needed. ## Getting started Install with npm: ```sh npm install chokidar ``` Then `require` and use it in your code: ```javascript const chokidar = require('chokidar'); // One-liner for current directory chokidar.watch('.').on('all', (event, path) => { console.log(event, path); }); ``` ## API ```javascript // Example of a more typical implementation structure // Initialize watcher. const watcher = chokidar.watch('file, dir, glob, or array', { ignored: /(^|[\/\\])\../, // ignore dotfiles persistent: true }); // Something to use when events are received. const log = console.log.bind(console); // Add event listeners. watcher .on('add', path => log(`File ${path} has been added`)) .on('change', path => log(`File ${path} has been changed`)) .on('unlink', path => log(`File ${path} has been removed`)); // More possible events. watcher .on('addDir', path => log(`Directory ${path} has been added`)) .on('unlinkDir', path => log(`Directory ${path} has been removed`)) .on('error', error => log(`Watcher error: ${error}`)) .on('ready', () => log('Initial scan complete. 
Ready for changes')) .on('raw', (event, path, details) => { // internal log('Raw event info:', event, path, details); }); // 'add', 'addDir' and 'change' events also receive stat() results as second // argument when available: https://nodejs.org/api/fs.html#fs_class_fs_stats watcher.on('change', (path, stats) => { if (stats) console.log(`File ${path} changed size to ${stats.size}`); }); // Watch new files. watcher.add('new-file'); watcher.add(['new-file-2', 'new-file-3', '**/other-file*']); // Get list of actual paths being watched on the filesystem var watchedPaths = watcher.getWatched(); // Un-watch some files. await watcher.unwatch('new-file*'); // Stop watching. // The method is async! watcher.close().then(() => console.log('closed')); // Full list of options. See below for descriptions. // Do not use this example! chokidar.watch('file', { persistent: true, ignored: '*.txt', ignoreInitial: false, followSymlinks: true, cwd: '.', disableGlobbing: false, usePolling: false, interval: 100, binaryInterval: 300, alwaysStat: false, depth: 99, awaitWriteFinish: { stabilityThreshold: 2000, pollInterval: 100 }, ignorePermissionErrors: false, atomic: true // or a custom 'atomicity delay', in milliseconds (default 100) }); ``` `chokidar.watch(paths, [options])` * `paths` (string or array of strings). Paths to files, dirs to be watched recursively, or glob patterns. - Note: globs must not contain windows separators (`\`), because that's how they work by the standard — you'll need to replace them with forward slashes (`/`). - Note 2: for additional glob documentation, check out low-level library: [picomatch](https://github.com/micromatch/picomatch). * `options` (object) Options object as defined below: #### Persistence * `persistent` (default: `true`). Indicates whether the process should continue to run as long as files are being watched. If set to `false` when using `fsevents` to watch, no more events will be emitted after `ready`, even if the process continues to run. #### Path filtering * `ignored` ([anymatch](https://github.com/es128/anymatch)-compatible definition) Defines files/paths to be ignored. The whole relative or absolute path is tested, not just filename. If a function with two arguments is provided, it gets called twice per path - once with a single argument (the path), second time with two arguments (the path and the [`fs.Stats`](https://nodejs.org/api/fs.html#fs_class_fs_stats) object of that path). * `ignoreInitial` (default: `false`). If set to `false` then `add`/`addDir` events are also emitted for matching paths while instantiating the watching as chokidar discovers these file paths (before the `ready` event). * `followSymlinks` (default: `true`). When `false`, only the symlinks themselves will be watched for changes instead of following the link references and bubbling events through the link's path. * `cwd` (no default). The base directory from which watch `paths` are to be derived. Paths emitted with events will be relative to this. * `disableGlobbing` (default: `false`). If set to `true` then the strings passed to `.watch()` and `.add()` are treated as literal path names, even if they look like globs. #### Performance * `usePolling` (default: `false`). Whether to use fs.watchFile (backed by polling), or fs.watch. If polling leads to high CPU utilization, consider setting this to `false`. It is typically necessary to **set this to `true` to successfully watch files over a network**, and it may be necessary to successfully watch files in other non-standard situations. 
Setting to `true` explicitly on MacOS overrides the `useFsEvents` default. You may also set the CHOKIDAR_USEPOLLING env variable to true (1) or false (0) in order to override this option.
* _Polling-specific settings_ (effective when `usePolling: true`)
  * `interval` (default: `100`). Interval of file system polling, in milliseconds. You may also set the CHOKIDAR_INTERVAL env variable to override this option.
  * `binaryInterval` (default: `300`). Interval of file system polling for binary files. ([see list of binary extensions](https://github.com/sindresorhus/binary-extensions/blob/master/binary-extensions.json))
* `useFsEvents` (default: `true` on MacOS). Whether to use the `fsevents` watching interface if available. When set to `true` explicitly and `fsevents` is available this supersedes the `usePolling` setting. When set to `false` on MacOS, `usePolling: true` becomes the default.
* `alwaysStat` (default: `false`). If relying upon the [`fs.Stats`](https://nodejs.org/api/fs.html#fs_class_fs_stats) object that may get passed with `add`, `addDir`, and `change` events, set this to `true` to ensure it is provided even in cases where it wasn't already available from the underlying watch events.
* `depth` (default: `undefined`). If set, limits how many levels of subdirectories will be traversed.
* `awaitWriteFinish` (default: `false`). By default, the `add` event will fire when a file first appears on disk, before the entire file has been written. Furthermore, in some cases some `change` events will be emitted while the file is being written. In some cases, especially when watching for large files there will be a need to wait for the write operation to finish before responding to a file creation or modification. Setting `awaitWriteFinish` to `true` (or a truthy value) will poll file size, holding its `add` and `change` events until the size does not change for a configurable amount of time. The appropriate duration setting is heavily dependent on the OS and hardware. For accurate detection this parameter should be relatively high, making file watching much less responsive. Use with caution.
  * *`options.awaitWriteFinish` can be set to an object in order to adjust timing params:*
  * `awaitWriteFinish.stabilityThreshold` (default: 2000). Amount of time in milliseconds for a file size to remain constant before emitting its event.
  * `awaitWriteFinish.pollInterval` (default: 100). File size polling interval, in milliseconds.

#### Errors

* `ignorePermissionErrors` (default: `false`). Indicates whether to watch files that don't have read permissions if possible. If watching fails due to `EPERM` or `EACCES` with this set to `true`, the errors will be suppressed silently.
* `atomic` (default: `true` if `useFsEvents` and `usePolling` are `false`). Automatically filters out artifacts that occur when using editors that use "atomic writes" instead of writing directly to the source file. If a file is re-added within 100 ms of being deleted, Chokidar emits a `change` event rather than `unlink` then `add`. If the default of 100 ms does not work well for you, you can override it by setting `atomic` to a custom value, in milliseconds.

### Methods & Events

`chokidar.watch()` produces an instance of `FSWatcher`. Methods of `FSWatcher`:

* `.add(path / paths)`: Add files, directories, or glob patterns for tracking. Takes an array of strings or just one string.
* `.on(event, callback)`: Listen for an FS event. Available events: `add`, `addDir`, `change`, `unlink`, `unlinkDir`, `ready`, `raw`, `error`.
Additionally `all` is available which gets emitted with the underlying event name and path for every event other than `ready`, `raw`, and `error`. `raw` is internal, use it carefully.
* `.unwatch(path / paths)`: Stop watching files, directories, or glob patterns. Takes an array of strings or just one string.
* `.close()`: **async** Removes all listeners from watched files. Asynchronous, returns Promise. Use with `await` to ensure bugs don't happen.
* `.getWatched()`: Returns an object representing all the paths on the file system being watched by this `FSWatcher` instance. The object's keys are all the directories (using absolute paths unless the `cwd` option was used), and the values are arrays of the names of the items contained in each directory.

## CLI

If you need a CLI interface for your file watching, check out [chokidar-cli](https://github.com/open-cli-tools/chokidar-cli), allowing you to execute a command on each change, or get a stdio stream of change events.

## Install Troubleshooting

* `npm WARN optional dep failed, continuing fsevents@n.n.n`
  * This message is a normal part of how `npm` handles optional dependencies and is not indicative of a problem. Even if accompanied by other related error messages, Chokidar should function properly.

* `TypeError: fsevents is not a constructor`
  * Update chokidar by doing `rm -rf node_modules package-lock.json yarn.lock && npm install`, or update your dependency that uses chokidar.

* Chokidar is producing an `ENOSPC` error on Linux, like this:
  * `bash: cannot set terminal process group (-1): Inappropriate ioctl for device bash: no job control in this shell` `Error: watch /home/ ENOSPC`
  * This means Chokidar ran out of file handles and you'll need to increase their count by executing the following command in Terminal: `echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p`

## Changelog

For more detailed changelog, see [`full_changelog.md`](.github/full_changelog.md).
- **v3.5 (Jan 6, 2021):** Support for ARM Macs with Apple Silicon. Fixes for deleted symlinks.
- **v3.4 (Apr 26, 2020):** Support for directory-based symlinks. Fixes for macos file replacement.
- **v3.3 (Nov 2, 2019):** `FSWatcher#close()` method became async. That fixes IO race conditions related to close method.
- **v3.2 (Oct 1, 2019):** Improve Linux RAM usage by 50%. Race condition fixes. Windows glob fixes. Improve stability by using tight range of dependency versions.
- **v3.1 (Sep 16, 2019):** dotfiles are no longer filtered out by default. Use `ignored` option if needed. Improve initial Linux scan time by 50%.
- **v3 (Apr 30, 2019):** massive CPU & RAM consumption improvements; reduces deps / package size by a factor of 17x and bumps Node.js requirement to v8.16 and higher.
- **v2 (Dec 29, 2017):** Globs are now posix-style-only; without windows support. Tons of bugfixes.
- **v1 (Apr 7, 2015):** Glob support, symlink support, tons of bugfixes. Node 0.8+ is supported.
- **v0.1 (Apr 20, 2012):** Initial release, extracted from [Brunch](https://github.com/brunch/brunch/blob/9847a065aea300da99bd0753f90354cde9de1261/src/helpers.coffee#L66)

## Also

Why was chokidar named this way? What's the meaning behind it?

>Chowkidar is a transliteration of a Hindi word meaning 'watchman, gatekeeper', चौकीदार. This ultimately comes from Sanskrit _चतुष्क_ (crossway, quadrangle, consisting-of-four). This word is also used in other languages like Urdu as (چوکیدار) which is widely used in Pakistan and India.
## License MIT (c) Paul Miller (<https://paulmillr.com>), see [LICENSE](LICENSE) file.
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/chokidar/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/chokidar/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 14356 }
# class-variance-authority For documentation, visit [cva.style](https://cva.style).
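A brief sketch of the core `cva` API as described in the docs above (the class names below are arbitrary placeholders):

```js
import { cva } from 'class-variance-authority';

// cva(base, config) returns a function that assembles a className string
// from the base classes, the selected variants, and any defaults.
const button = cva('font-semibold rounded', {
  variants: {
    intent: {
      primary: 'bg-blue-500 text-white',
      secondary: 'bg-white text-gray-800 border',
    },
    size: {
      small: 'text-sm py-1 px-2',
      medium: 'text-base py-2 px-4',
    },
  },
  defaultVariants: { intent: 'primary', size: 'medium' },
});

button();
// => e.g. "font-semibold rounded bg-blue-500 text-white text-base py-2 px-4"
button({ intent: 'secondary', size: 'small' });
// => e.g. "font-semibold rounded bg-white text-gray-800 border text-sm py-1 px-2"
```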
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/class-variance-authority/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/class-variance-authority/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 84 }
# clsx [![CI](https://github.com/lukeed/clsx/workflows/CI/badge.svg)](https://github.com/lukeed/clsx/actions?query=workflow%3ACI) [![codecov](https://badgen.net/codecov/c/github/lukeed/clsx)](https://codecov.io/gh/lukeed/clsx) [![licenses](https://licenses.dev/b/npm/clsx)](https://licenses.dev/npm/clsx)

> A tiny (239B) utility for constructing `className` strings conditionally.<br>Also serves as a [faster](bench) & smaller drop-in replacement for the `classnames` module.

This module is available in three formats:

* **ES Module**: `dist/clsx.mjs`
* **CommonJS**: `dist/clsx.js`
* **UMD**: `dist/clsx.min.js`

## Install

```
$ npm install --save clsx
```

## Usage

```js
import clsx from 'clsx';
// or
import { clsx } from 'clsx';

// Strings (variadic)
clsx('foo', true && 'bar', 'baz');
//=> 'foo bar baz'

// Objects
clsx({ foo:true, bar:false, baz:isTrue() });
//=> 'foo baz'

// Objects (variadic)
clsx({ foo:true }, { bar:false }, null, { '--foobar':'hello' });
//=> 'foo --foobar'

// Arrays
clsx(['foo', 0, false, 'bar']);
//=> 'foo bar'

// Arrays (variadic)
clsx(['foo'], ['', 0, false, 'bar'], [['baz', [['hello'], 'there']]]);
//=> 'foo bar baz hello there'

// Kitchen sink (with nesting)
clsx('foo', [1 && 'bar', { baz:false, bat:null }, ['hello', ['world']]], 'cya');
//=> 'foo bar hello world cya'
```

## API

### clsx(...input)

Returns: `String`

#### input

Type: `Mixed`

The `clsx` function can take ***any*** number of arguments, each of which can be an Object, Array, Boolean, or String.

> **Important:** _Any_ falsey values are discarded!<br>Standalone Boolean values are discarded as well.

```js
clsx(true, false, '', null, undefined, 0, NaN);
//=> ''
```

## Modes

There are multiple "versions" of `clsx` available, which allows you to bring only the functionality you need!

#### `clsx`

> **Size (gzip):** 239 bytes<br>
> **Availability:** CommonJS, ES Module, UMD

The default `clsx` module; see [API](#API) for info.

```js
import { clsx } from 'clsx';
// or
import clsx from 'clsx';
```

#### `clsx/lite`

> **Size (gzip):** 140 bytes<br>
> **Availability:** CommonJS, ES Module<br>
> **CAUTION:** Accepts **ONLY** string arguments!

Ideal for applications that ***only*** use the string-builder pattern.

Any non-string arguments are ignored!

```js
import { clsx } from 'clsx/lite';
// or
import clsx from 'clsx/lite';

// string
clsx('hello', true && 'foo', false && 'bar');
// => "hello foo"

// NOTE: Any non-string input(s) ignored
clsx({ foo: true });
//=> ""
```

## Benchmarks

For snapshots of cross-browser results, check out the [`bench`](bench) directory~!

## Support

All versions of Node.js are supported.

All browsers that support [`Array.isArray`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/isArray#Browser_compatibility) are supported (IE9+).

>**Note:** For IE8 support and older, please install an older release of `clsx` and beware of [#17](https://github.com/lukeed/clsx/issues/17).

## Tailwind Support

Here are some additional (optional) steps to enable class autocompletion when using `clsx` with Tailwind CSS.

<details>
<summary>
  Visual Studio Code
</summary>

1. [Install the "Tailwind CSS IntelliSense" Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=bradlc.vscode-tailwindcss)
2. Add the following to your [`settings.json`](https://code.visualstudio.com/docs/getstarted/settings):
```json
{
  "tailwindCSS.experimental.classRegex": [
    ["clsx\\(([^)]*)\\)", "(?:'|\"|`)([^']*)(?:'|\"|`)"]
  ]
}
```

</details>

You may find the [`clsx/lite`](#clsxlite) module useful within Tailwind contexts. This is especially true if/when your application **only** composes classes in this pattern:

```js
clsx('text-base', props.active && 'text-primary', props.className);
```

## Related

- [obj-str](https://github.com/lukeed/obj-str) - A smaller (96B) and similar utility that only works with Objects.

## License

MIT © [Luke Edwards](https://lukeed.com)
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/clsx/readme.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/clsx/readme.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 4001 }
MIT License Copyright (c) 2022 Paco Coursey Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/cmdk/LICENSE.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/cmdk/LICENSE.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 1068 }
<p align="center">
<img src="./website/public/og.png" />
</p>

# ⌘K [![cmdk minzip package size](https://img.shields.io/bundlephobia/minzip/cmdk)](https://www.npmjs.com/package/cmdk?activeTab=code) [![cmdk package version](https://img.shields.io/npm/v/cmdk.svg?colorB=green)](https://www.npmjs.com/package/cmdk)

⌘K is a command menu React component that can also be used as an accessible combobox. You render items, it filters and sorts them automatically. ⌘K supports a fully composable API <sup><sup>[How?](/ARCHITECTURE.md)</sup></sup>, so you can wrap items in other components or even as static JSX.

Demo and examples: [cmdk.paco.me](https://cmdk.paco.me)

## Install

```bash
pnpm install cmdk
```

## Use

```tsx
import { Command } from 'cmdk'

const CommandMenu = () => {
  return (
    <Command label="Command Menu">
      <Command.Input />
      <Command.List>
        <Command.Empty>No results found.</Command.Empty>

        <Command.Group heading="Letters">
          <Command.Item>a</Command.Item>
          <Command.Item>b</Command.Item>
          <Command.Separator />
          <Command.Item>c</Command.Item>
        </Command.Group>

        <Command.Item>Apple</Command.Item>
      </Command.List>
    </Command>
  )
}
```

Or in a dialog:

```tsx
import { Command } from 'cmdk'

const CommandMenu = () => {
  const [open, setOpen] = React.useState(false)

  // Toggle the menu when ⌘K is pressed
  React.useEffect(() => {
    const down = (e) => {
      if (e.key === 'k' && (e.metaKey || e.ctrlKey)) {
        e.preventDefault()
        setOpen((open) => !open)
      }
    }

    document.addEventListener('keydown', down)
    return () => document.removeEventListener('keydown', down)
  }, [])

  return (
    <Command.Dialog open={open} onOpenChange={setOpen} label="Global Command Menu">
      <Command.Input />
      <Command.List>
        <Command.Empty>No results found.</Command.Empty>

        <Command.Group heading="Letters">
          <Command.Item>a</Command.Item>
          <Command.Item>b</Command.Item>
          <Command.Separator />
          <Command.Item>c</Command.Item>
        </Command.Group>

        <Command.Item>Apple</Command.Item>
      </Command.List>
    </Command.Dialog>
  )
}
```

## Parts and styling

All parts forward props, including `ref`, to an appropriate element. Each part has a specific data-attribute (starting with `cmdk-`) that can be used for styling.

### Command `[cmdk-root]`

Render this to show the command menu inline, or use [Dialog](#dialog-cmdk-dialog-cmdk-overlay) to render in an elevated context. Can be controlled with the `value` and `onValueChange` props.

> **Note**
>
> Values are always trimmed with the [trim()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/trim) method.

```tsx
const [value, setValue] = React.useState('apple')

return (
  <Command value={value} onValueChange={setValue}>
    <Command.Input />
    <Command.List>
      <Command.Item>Orange</Command.Item>
      <Command.Item>Apple</Command.Item>
    </Command.List>
  </Command>
)
```

You can provide a custom `filter` function that is called to rank each item. Note that the value will be trimmed.

```tsx
<Command
  filter={(value, search) => {
    if (value.includes(search)) return 1
    return 0
  }}
/>
```

A third argument, `keywords`, can also be provided to the filter function. Keywords act as aliases for the item value, and can also affect the rank of the item. Keywords are trimmed.
```tsx
<Command
  filter={(value, search, keywords) => {
    const extendValue = value + ' ' + keywords.join(' ')
    if (extendValue.includes(search)) return 1
    return 0
  }}
/>
```

Or disable filtering and sorting entirely:

```tsx
<Command shouldFilter={false}>
  <Command.List>
    {filteredItems.map((item) => {
      return (
        <Command.Item key={item} value={item}>
          {item}
        </Command.Item>
      )
    })}
  </Command.List>
</Command>
```

You can make the arrow keys wrap around the list (when you reach the end, it goes back to the first item) by setting the `loop` prop:

```tsx
<Command loop />
```

### Dialog `[cmdk-dialog]` `[cmdk-overlay]`

Props are forwarded to [Command](#command-cmdk-root). Composes Radix UI's Dialog component. The overlay is always rendered. See the [Radix Documentation](https://www.radix-ui.com/docs/primitives/components/dialog) for more information. Can be controlled with the `open` and `onOpenChange` props.

```tsx
const [open, setOpen] = React.useState(false)

return (
  <Command.Dialog open={open} onOpenChange={setOpen}>
    ...
  </Command.Dialog>
)
```

You can provide a `container` prop that accepts an HTML element that is forwarded to Radix UI's Dialog Portal component to specify which element the Dialog should portal into (defaults to `body`). See the [Radix Documentation](https://www.radix-ui.com/docs/primitives/components/dialog#portal) for more information.

```tsx
const containerElement = React.useRef(null)

return (
  <>
    <Command.Dialog container={containerElement.current} />
    <div ref={containerElement} />
  </>
)
```

### Input `[cmdk-input]`

All props are forwarded to the underlying `input` element. Can be controlled with the `value` and `onValueChange` props.

```tsx
const [search, setSearch] = React.useState('')

return <Command.Input value={search} onValueChange={setSearch} />
```

### List `[cmdk-list]`

Contains items and groups. Animate height using the `--cmdk-list-height` CSS variable.

```css
[cmdk-list] {
  min-height: 300px;
  height: var(--cmdk-list-height);
  max-height: 500px;
  transition: height 100ms ease;
}
```

To scroll item into view earlier near the edges of the viewport, use scroll-padding:

```css
[cmdk-list] {
  scroll-padding-block-start: 8px;
  scroll-padding-block-end: 8px;
}
```

### Item `[cmdk-item]` `[data-disabled?]` `[data-selected?]`

Item that becomes active on pointer enter. You should provide a unique `value` for each item, but it will be automatically inferred from the `.textContent`.

```tsx
<Command.Item
  onSelect={(value) => console.log('Selected', value)}
  // Value is implicitly "apple" because of the provided text content
>
  Apple
</Command.Item>
```

You can also provide a `keywords` prop to help with filtering. Keywords are trimmed.

```tsx
<Command.Item keywords={['fruit', 'apple']}>Apple</Command.Item>
```

You can force an item to always render, regardless of filtering, by passing the `forceMount` prop.

### Group `[cmdk-group]` `[hidden?]`

Groups items together with the given `heading` (`[cmdk-group-heading]`).

```tsx
<Command.Group heading="Fruit">
  <Command.Item>Apple</Command.Item>
</Command.Group>
```

Groups will not unmount from the DOM, rather the `hidden` attribute is applied to hide it from view. This may be relevant in your styling.

You can force a group to always render, regardless of filtering, by passing the `forceMount` prop.
### Separator `[cmdk-separator]` Visible when the search query is empty or `alwaysRender` is true, hidden otherwise. ### Empty `[cmdk-empty]` Automatically renders when there are no results for the search query. ### Loading `[cmdk-loading]` You should conditionally render this with `progress` while loading asynchronous items. ```tsx const [loading, setLoading] = React.useState(false) return <Command.List>{loading && <Command.Loading>Hang on…</Command.Loading>}</Command.List> ``` ### `useCommandState(state => state.selectedField)` Hook that composes [`useSyncExternalStore`](https://reactjs.org/docs/hooks-reference.html#usesyncexternalstore). Pass a function that returns a slice of the command menu state to re-render when that slice changes. This hook is provided for advanced use cases and should not be commonly used. A good use case would be to render a more detailed empty state, like so: ```tsx const search = useCommandState((state) => state.search) return <Command.Empty>No results found for "{search}".</Command.Empty> ``` ## Examples Code snippets for common use cases. ### Nested items Often selecting one item should navigate deeper, with a more refined set of items. For example selecting "Change theme…" should show new items "Dark theme" and "Light theme". We call these sets of items "pages", and they can be implemented with simple state: ```tsx const ref = React.useRef(null) const [open, setOpen] = React.useState(false) const [search, setSearch] = React.useState('') const [pages, setPages] = React.useState([]) const page = pages[pages.length - 1] return ( <Command onKeyDown={(e) => { // Escape goes to previous page // Backspace goes to previous page when search is empty if (e.key === 'Escape' || (e.key === 'Backspace' && !search)) { e.preventDefault() setPages((pages) => pages.slice(0, -1)) } }} > <Command.Input value={search} onValueChange={setSearch} /> <Command.List> {!page && ( <> <Command.Item onSelect={() => setPages([...pages, 'projects'])}>Search projects…</Command.Item> <Command.Item onSelect={() => setPages([...pages, 'teams'])}>Join a team…</Command.Item> </> )} {page === 'projects' && ( <> <Command.Item>Project A</Command.Item> <Command.Item>Project B</Command.Item> </> )} {page === 'teams' && ( <> <Command.Item>Team 1</Command.Item> <Command.Item>Team 2</Command.Item> </> )} </Command.List> </Command> ) ``` ### Show sub-items when searching If your items have nested sub-items that you only want to reveal when searching, render based on the search state: ```tsx const SubItem = (props) => { const search = useCommandState((state) => state.search) if (!search) return null return <Command.Item {...props} /> } return ( <Command> <Command.Input /> <Command.List> <Command.Item>Change theme…</Command.Item> <SubItem>Change theme to dark</SubItem> <SubItem>Change theme to light</SubItem> </Command.List> </Command> ) ``` ### Asynchronous results Render the items as they become available. Filtering and sorting will happen automatically. 
```tsx const [loading, setLoading] = React.useState(false) const [items, setItems] = React.useState([]) React.useEffect(() => { async function getItems() { setLoading(true) const res = await api.get('/dictionary') setItems(res) setLoading(false) } getItems() }, []) return ( <Command> <Command.Input /> <Command.List> {loading && <Command.Loading>Fetching words…</Command.Loading>} {items.map((item) => { return ( <Command.Item key={`word-${item}`} value={item}> {item} </Command.Item> ) })} </Command.List> </Command> ) ``` ### Use inside Popover We recommend using the [Radix UI popover](https://www.radix-ui.com/docs/primitives/components/popover) component. ⌘K relies on the Radix UI Dialog component, so this will reduce your bundle size a bit due to shared dependencies. ```bash $ pnpm install @radix-ui/react-popover ``` Render `Command` inside of the popover content: ```tsx import * as Popover from '@radix-ui/react-popover' return ( <Popover.Root> <Popover.Trigger>Toggle popover</Popover.Trigger> <Popover.Content> <Command> <Command.Input /> <Command.List> <Command.Item>Apple</Command.Item> </Command.List> </Command> </Popover.Content> </Popover.Root> ) ``` ### Drop in stylesheets You can find global stylesheets to drop in as a starting point for styling. See [website/styles/cmdk](website/styles/cmdk) for examples. ## FAQ **Accessible?** Yes. Labeling, aria attributes, and DOM ordering tested with Voice Over and Chrome DevTools. [Dialog](#dialog-cmdk-dialog-cmdk-overlay) composes an accessible Dialog implementation. **Virtualization?** No. Good performance up to 2,000-3,000 items, though. Read below to bring your own. **Filter/sort items manually?** Yes. Pass `shouldFilter={false}` to [Command](#command-cmdk-root). Better memory usage and performance. Bring your own virtualization this way. **React 18 safe?** Yes, required. Uses React 18 hooks like `useId` and `useSyncExternalStore`. **Unstyled?** Yes, use the listed CSS selectors. **Hydration mismatch?** No, likely a bug in your code. Ensure the `open` prop to `Command.Dialog` is `false` on the server. **React strict mode safe?** Yes. Open an issue if you notice an issue. **Weird/wrong behavior?** Make sure your `Command.Item` has a `key` and unique `value`. **Concurrent mode safe?** Maybe, but concurrent mode is not yet real. Uses risky approaches like manual DOM ordering. **React server component?** No, it's a client component. **Listen for ⌘K automatically?** No, do it yourself to have full control over keybind context. **React Native?** No, and no plans to support it. If you build a React Native version, let us know and we'll link your repository here. ## History Written in 2019 by Paco ([@pacocoursey](https://twitter.com/pacocoursey)) to see if a composable combobox API was possible. Used for the Vercel command menu and autocomplete by Rauno ([@raunofreiberg](https://twitter.com/raunofreiberg)) in 2020. Re-written independently in 2022 with a simpler and more performant approach. Ideas and help from Shu ([@shuding\_](https://twitter.com/shuding_)). [use-descendants](https://github.com/pacocoursey/use-descendants) was extracted from the 2019 version. ## Testing First, install dependencies and Playwright browsers: ```bash pnpm install pnpm playwright install ``` Then ensure you've built the library: ```bash pnpm build ``` Then run the tests using your local build against real browser engines: ```bash pnpm test ```
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/cmdk/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/cmdk/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 14011 }
# 1.0.0 - 2016-01-07 - Removed: unused speed test - Added: Automatic routing between previously unsupported conversions ([#27](https://github.com/Qix-/color-convert/pull/27)) - Removed: `xxx2xxx()` and `xxx2xxxRaw()` functions ([#27](https://github.com/Qix-/color-convert/pull/27)) - Removed: `convert()` class ([#27](https://github.com/Qix-/color-convert/pull/27)) - Changed: all functions to lookup dictionary ([#27](https://github.com/Qix-/color-convert/pull/27)) - Changed: `ansi` to `ansi256` ([#27](https://github.com/Qix-/color-convert/pull/27)) - Fixed: argument grouping for functions requiring only one argument ([#27](https://github.com/Qix-/color-convert/pull/27)) # 0.6.0 - 2015-07-23 - Added: methods to handle [ANSI](https://en.wikipedia.org/wiki/ANSI_escape_code#Colors) 16/256 colors: - rgb2ansi16 - rgb2ansi - hsl2ansi16 - hsl2ansi - hsv2ansi16 - hsv2ansi - hwb2ansi16 - hwb2ansi - cmyk2ansi16 - cmyk2ansi - keyword2ansi16 - keyword2ansi - ansi162rgb - ansi162hsl - ansi162hsv - ansi162hwb - ansi162cmyk - ansi162keyword - ansi2rgb - ansi2hsl - ansi2hsv - ansi2hwb - ansi2cmyk - ansi2keyword ([#18](https://github.com/harthur/color-convert/pull/18)) # 0.5.3 - 2015-06-02 - Fixed: hsl2hsv does not return `NaN` anymore when using `[0,0,0]` ([#15](https://github.com/harthur/color-convert/issues/15)) --- Check out commit logs for older releases
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/color-convert/CHANGELOG.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/color-convert/CHANGELOG.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 1416 }
# color-convert

[![Build Status](https://travis-ci.org/Qix-/color-convert.svg?branch=master)](https://travis-ci.org/Qix-/color-convert)

Color-convert is a color conversion library for JavaScript and node. It converts between `rgb`, `hsl`, `hsv`, `hwb`, `cmyk`, `ansi`, `ansi16`, `hex` strings, and CSS `keyword`s in all directions (keyword conversions round to the closest named color):

```js
var convert = require('color-convert');

convert.rgb.hsl(140, 200, 100);             // [96, 48, 59]
convert.keyword.rgb('blue');                // [0, 0, 255]

var rgbChannels = convert.rgb.channels;     // 3
var cmykChannels = convert.cmyk.channels;   // 4
var ansiChannels = convert.ansi16.channels; // 1
```

# Install

```console
$ npm install color-convert
```

# API

Simply get the property of the _from_ and _to_ conversion that you're looking for.

All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on `.raw` to the function.

All 'from' functions have a hidden property called `.channels` that indicates the number of channels the function expects (not including alpha).

```js
var convert = require('color-convert');

// Hex to LAB
convert.hex.lab('DEADBF');         // [ 76, 21, -2 ]
convert.hex.lab.raw('DEADBF');     // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ]

// RGB to CMYK
convert.rgb.cmyk(167, 255, 4);     // [ 35, 0, 98, 0 ]
convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ]
```

### Arrays

All functions that accept multiple arguments also support passing an array.

Note that this does **not** apply to functions that convert from a color that only requires one value (e.g. `keyword`, `ansi256`, `hex`, etc.)

```js
var convert = require('color-convert');

convert.rgb.hex(123, 45, 67);   // '7B2D43'
convert.rgb.hex([123, 45, 67]); // '7B2D43'
```

## Routing

Conversions that don't have an _explicitly_ defined conversion (in [conversions.js](conversions.js)), but can be converted by means of sub-conversions (e.g. XYZ -> **RGB** -> CMYK), are automatically routed together. This allows just about any color model supported by `color-convert` to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> **LAB** -> **XYZ** -> **RGB** -> Hex). Keep in mind that extensive conversions _may_ result in a loss of precision, and exist only to be complete. For a list of "direct" (single-step) conversions, see [conversions.js](conversions.js).

# Contribute

If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request.

# License

Copyright &copy; 2011-2016, Heather Arthur and Josh Junon. Licensed under the [MIT License](LICENSE).
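For a concrete sense of the Routing section above, a small sketch — the intermediate route shown in the comment is illustrative, as the actual path is resolved internally:

```js
var convert = require('color-convert');

// No direct lch->hex conversion is defined in conversions.js, so the call
// is routed through intermediate models (roughly LCH -> LAB -> XYZ -> RGB -> hex).
convert.lch.hex(50, 80, 30);     // a 6-digit hex string (routed result)
convert.lch.rgb.raw(50, 80, 30); // routed conversions also get a `.raw` variant
```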
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/color-convert/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/color-convert/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 2852 }
A JSON with color names and their values. Based on http://dev.w3.org/csswg/css-color/#named-colors.

[![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/)

```js
var colors = require('color-name');
colors.red //[255,0,0]
```

<a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a>
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/color-name/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/color-name/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 372 }
<header class="readme-only">

# Color.js: Let’s get serious about color

[![Netlify Status](https://api.netlify.com/api/v1/badges/a6208d72-3d48-43ab-9132-b9f31f828609/deploy-status)](https://app.netlify.com/sites/colorjs/deploys) [![npm](https://img.shields.io/npm/dw/colorjs.io)](https://npmjs.com/package/colorjs.io)

[Official website](https://colorjs.io) • [Contribution guide](CONTRIBUTING.md)

Color.js is a color conversion and modification library originally created by two of the editors of the CSS Color specifications: Lea Verou and Chris Lilley. They continue to work on it, but are also joined by an exceptional small grassroots team of co-maintainers.

## Features

- **Color space agnostic**: Each color object is basically a list of coords and a color space reference. Operations are color space agnostic. Modules for <a href="https://colorjs.io/docs/spaces.html">a wide variety of color spaces</a>, including Lab/LCh, OKLab/OKLCh, sRGB and friends (HSL/HSV/HWB), Display P3, J<sub>z</sub>a<sub>z</sub>b<sub>z</sub>, REC.2100 and many <a href="https://colorjs.io/docs/spaces.html">more</a>.
- **Doesn't gloss over color science**: Actual <a href="docs/gamut-mapping.html">gamut mapping</a> instead of naïve clipping, multiple <a href="https://colorjs.io/docs/color-difference.html">DeltaE</a> methods (76, CMC, 2000, J<sub>z</sub>), multiple <a href="https://colorjs.io/docs/adaptation.html">chromatic adaptation</a> methods (von Kries, Bradford, CAT02, CAT16), all with sensible defaults
- **Up to date with CSS Color 4**: Every <a href="https://drafts.csswg.org/css-color-4/">CSS Color 4</a> format & color space supported for both <a href="docs/the-color-object.html">input</a> and <a href="https://colorjs.io/docs/output.html">output</a>, whether your browser supports it or not.
- **Readable, object-oriented API**: Color objects for multiple operations on the same color, and static `Color.something()` functions for one-off calculations
- **Modular & Extensible**: Use only what you need, or a bundle. Client-side or Node. Deep extensibility with <a href="https://colorjs.io/api/#Hooks-hooks.js">hooks</a>.
- **Fast & efficient**: <a href="https://colorjs.io/docs/procedural.html">Procedural, tree-shakeable API</a> available for performance sensitive tasks and reduced bundle size

</header>

<section>

## Impact

- Has been used to create demos for several W3C specifications
- Has been used by browsers to test their CSS Color 4/5 implementations
- Over [2 million total npm downloads](https://limonte.dev/total-npm-downloads/?package=colorjs.io)!
- Used by several [high impact projects](https://www.npmjs.com/browse/depended/colorjs.io), including [Sass](https://sass-lang.com/), [Open Props](https://open-props.style/), [axe](https://www.deque.com/axe/) accessibility testing engine, and [OddContrast](https://www.oddcontrast.com/) and [CSS HD Gradients](https://gradient.style/) color tools
- Parts of Color.js’s API are used as a testing ground for the design of a [native `Color` object for the Web platform](https://github.com/wicg/color-api).

</section>

<section class="cn-ignore">

## Installation

Color.js is designed to make simple things easy, and complex things possible, and that extends to installation as well.
For quick experiments, you can just import Color.js directly from the CDN (kindly provided by the awesome folks at [Netlify](https://netlify.com)) with all modules included:

```js
import Color from "https://colorjs.io/dist/color.js";
```

You can also install via npm if you’d prefer:

```
npm install colorjs.io
```

Whether you’re using NPM, the CDN, or local files, Color.js allows you to also import specific modules by directly importing from `src`:

- `https://colorjs.io/src/` for the CDN
- `node_modules/colorjs.io/src/` for NPM

For example:

```js
import Color from "https://colorjs.io/src/color.js";
import p3 from "https://colorjs.io/src/spaces/p3.js";
import rec2020 from "https://colorjs.io/src/spaces/rec2020.js";
import deltaE200 from "https://colorjs.io/src/deltaE/deltaE2000.js";
```

Warning: To use `import` statements in a browser, your `<script>` needs `type="module"`

Are you old school and prefer to simply have a global `Color` variable? We’ve got you covered! Just include the following script in your HTML:

```html
<script src="https://colorjs.io/dist/color.global.js"></script>
```

<p class="read-more"><a href="https://colorjs.io/get">Read more about installation</a></p>

</section>

<section>

## Reading colors

Any color from CSS Color Level 4 should work:

```js
let color = new Color("slategray");
let color2 = new Color("hwb(60 30% 40% / .5)");
let color3 = new Color("color(display-p3 0 1 0 / .9)");
let color4 = new Color("lch(50% 80 30)");
```

You can also create `Color` objects manually:

```js
let color2 = new Color("hwb", [60, 30, 40], .5);
let color3 = new Color({space: "p3", coords: [0, 1, 0], alpha: .9});
```

<p class="read-more"><a href="https://colorjs.io/docs/the-color-object.html">Read more about color objects</a></p>

</section>

<section>

<h2>Manipulating colors</h2>

You can use properties to modify coordinates of any color space and convert back:

```js
let color = new Color("slategray");
color.lch.l = 80; // Set coord directly in any color space
color.lch.c *= 1.2; // saturate by increasing LCH chroma by 20%
color.hwb.w += 10; // any other color space also available
```

To modify coordinates in any color space you use `color.set()` and `color.setAll()`:

```js
let color = new Color("slategray");
// Multiple coordinates
color.set({
  "lch.l": 80, // set lightness to 80
  "lch.c": c => c * 1.2 // Relative manipulation
});

// Set single coordinate
color.set("hwb.w", w => w + 10);
```

Coordinates of the color's color space are available without a prefix:

```js
let color = new Color("slategray").to("lch");
// Multiple coordinates
color.set({
  l: 80, // set lightness to 80
  c: c => c * 1.2 // Relative manipulation
});

// Set single coordinate
color.set("h", 30);
```

Chaining-style modifications are also supported:

```js
let color = new Color("lch(50% 50 10)");
color = color.set({
  h: h => h + 180,
  c: 60
}).lighten();
```

The same prefix-free access also works with properties:

```js
let color = new Color("slategray").to("lch");
color.l = 80; // Set LCH lightness
color.c *= 1.2; // saturate by increasing LCH chroma
```

<p class="read-more"><a href="https://colorjs.io/docs/manipulation.html">Read more about color manipulation</a></p>

</section>

<section>

## Converting between color spaces & stringifying

Convert to any color space:
```js let color = new Color("slategray"); color.to("lch") // Convert to LCH ``` Output in any color space ```js let color = new Color("slategray"); color + ""; // default stringification color.to("p3").toString({precision: 3}); ``` Clip to gamut or don't ```js let color = new Color("p3", [0, 1, 0]); color.to("srgb") + ""; // Default toString() color.to("srgb").toString({inGamut: false}); ``` <p class="read-more"><a href="https://colorjs.io/docs/output.html">Read more about output</a></p> </section> <section> ## Interpolation Get a function that accepts a percentage: ```js let color = new Color("p3", [0, 1, 0]); let redgreen = color.range("red", { space: "lch", // interpolation space outputSpace: "srgb" }); redgreen(.5); // midpoint ``` Interpolation by discrete steps: ```js let color = new Color("p3", [0, 1, 0]); color.steps("red", { space: "lch", outputSpace: "srgb", maxDeltaE: 3, // max deltaE between consecutive steps steps: 10 // min number of steps }); ``` Shortcut for specific points in the range: ```js let color = new Color("p3", [0, 1, 0]); let redgreen = color.mix("red", .5, {space: "lch", outputSpace: "srgb"}); let reddishGreen = color.mix("red", .25, {space: "lch", outputSpace: "srgb"}); ``` Static syntax (every color method has a static one too): ```js Color.mix("color(display-p3 0 1 0)", "red", .5); ``` <p class="read-more"><a href="https://colorjs.io/docs/interpolation.html">Read more about interpolation</a></p> </section>
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/colorjs.io/README.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/colorjs.io/README.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 8341 }
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html). (Format adopted after v3.0.0.)

<!-- markdownlint-disable MD024 -->

## [4.1.1] (2020-02-02)

### Fixed

* TypeScript definition for `.action()` should include Promise for async ([#1157])

## [4.1.0] (2020-01-06)

### Added

* two routines to change how option values are handled, and eliminate name clashes with command properties ([#933] [#1102])
  * see storeOptionsAsProperties and passCommandToAction in README
* `.parseAsync` to use instead of `.parse` if you supply async action handlers ([#806] [#1118])

### Fixed

* Remove trailing blanks from wrapped help text ([#1096])

### Changed

* update dependencies
* extend security coverage for Commander 2.x to 2020-02-03
* improvements to README
* improvements to TypeScript definition documentation
* move old versions out of main CHANGELOG
* removed explicit use of `ts-node` in tests

## [4.0.1] (2019-11-12)

### Fixed

* display help when requested, even if there are missing required options ([#1091])

## [4.0.0] (2019-11-02)

### Added

* automatically wrap and indent help descriptions for options and commands ([#1051])
* `.exitOverride()` allows override of calls to `process.exit` for additional error handling and to keep program running ([#1040])
* support for declaring required options with `.requiredOption()` ([#1071])
* GitHub Actions support ([#1027])
* translation links in README

### Changed

* dev: switch tests from Sinon+Should to Jest with major rewrite of tests ([#1035])
* call default subcommand even when there are unknown options ([#1047])
* *Breaking* Commander is only officially supported on Node 8 and above, and requires Node 6 ([#1053])

### Fixed

* *Breaking* keep command object out of program.args when action handler called ([#1048])
  * also, action handler now passed array of unknown arguments
* complain about unknown options when program argument supplied and action handler ([#1049])
  * this changes parameters to `command:*` event to include unknown arguments
* removed deprecated `customFds` option from call to `child_process.spawn` ([#1052])
* rework TypeScript declarations to bring all types into imported namespace ([#1081])

### Migration Tips

#### Testing for no arguments

If you were previously using code like:

```js
if (!program.args.length) ...
```

a partial replacement is:

```js
if (program.rawArgs.length < 3) ...
```

## [4.0.0-1] Prerelease (2019-10-08)

(Released in 4.0.0)

## [4.0.0-0] Prerelease (2019-10-01)

(Released in 4.0.0)

## [2.20.1] (2019-09-29)

### Fixed

* Improve tracking of executable subcommands.

### Changed

* update development dependencies

## [3.0.2] (2019-09-27)

### Fixed

* Improve tracking of executable subcommands.

### Changed

* update development dependencies

## [3.0.1] (2019-08-30)

### Added

* .name and .usage to README ([#1010])
* Table of Contents to README ([#1010])
* TypeScript definition for `executableFile` in CommandOptions ([#1028])

### Changed

* consistently use `const` rather than `var` in README ([#1026])

### Fixed

* help for sub commands with custom executableFile ([#1018])

## [3.0.0] / 2019-08-08

* Add option to specify executable file name ([#999])
  * e.g. `.command('clone', 'clone description', { executableFile: 'myClone' })`
* Change docs for `.command` to contrast action handler vs git-style executable.
  ([#938] [#990])
* **Breaking** Change TypeScript to use overloaded function for `.command`. ([#938] [#990])
* Change to use straight quotes around strings in error messages (like 'this' instead of `this') ([#915])
* Add TypeScript "reference types" for node ([#974])
* Add support for hyphen as an option argument in subcommands ([#697])
* Add support for a short option flag and its value to be concatenated for action handler subcommands ([#599])
  * e.g. `-p 80` can also be supplied as `-p80`
* Add executable arguments to spawn in win32, for git-style executables ([#611])
  * e.g. `node --harmony myCommand.js clone`
* Add parent command as prefix of subcommand in help ([#980])
* Add optional custom description to `.version` ([#963])
  * e.g. `program.version('0.0.1', '-v, --vers', 'output the current version')`
* Add `.helpOption(flags, description)` routine to customise help flags and description ([#963])
  * e.g. `.helpOption('-e, --HELP', 'read more information')`
* Fix behavior of --no-* options ([#795])
  * can now define both `--foo` and `--no-foo`
  * **Breaking** custom event listeners: `--no-foo` on cli now emits `option:no-foo` (previously `option:foo`)
  * **Breaking** default value: defining `--no-foo` after defining `--foo` leaves the default value unchanged (previously set it to false)
* allow boolean default value, such as from environment ([#987])
* Increment inspector port for spawned subcommands ([#991])
  * e.g. `node --inspect myCommand.js clone`

### Migration Tips

The custom event for a negated option like `--no-foo` is `option:no-foo` (previously `option:foo`).

```js
program
  .option('--no-foo')
  .on('option:no-foo', () => {
    console.log('removing foo');
  });
```

When using TypeScript, adding a command does not allow an explicit `undefined` for an unwanted executable description (e.g. for a command with an action handler).

```js
program
  .command('action1', undefined, { noHelp: true }) // No longer valid
  .command('action2', { noHelp: true }) // Correct
```

## 3.0.0-0 Prerelease / 2019-07-28

(Released as 3.0.0)

## 2.20.0 / 2019-04-02

* fix: resolve symbolic links completely when hunting for subcommands (#935)
* Update index.d.ts (#930)
* Update Readme.md (#924)
* Remove --save option as it isn't required anymore (#918)
* Add link to the license file (#900)
* Added example of receiving args from options (#858)
* Added missing semicolon (#882)
* Add extension to .eslintrc (#876)

## 2.19.0 / 2018-10-02

* Removed newline after Options and Commands headers (#864)
* Bugfix - Error output (#862)
* Fix to change default value to string (#856)

## 2.18.0 / 2018-09-07

* Standardize help output (#853)
* chmod 644 travis.yml (#851)
* add support for executing TypeScript sub-commands via ts-node (#849)

## 2.17.1 / 2018-08-07

* Fix bug in command emit (#844)

## 2.17.0 / 2018-08-03

* fixed newline output after help information (#833)
* Fix to emit the action even without command (#778)
* npm update (#823)

## 2.16.0 / 2018-06-29

* Remove Makefile and `test/run` (#821)
* Make 'npm test' run on Windows (#820)
* Add badge to display install size (#807)
* chore: cache node_modules (#814)
* chore: remove Node.js 4 (EOL), add Node.js 10 (#813)
* fixed typo in readme (#812)
* Fix types (#804)
* Update eslint to resolve vulnerabilities in lodash (#799)
* updated readme with custom event listeners.
  (#791)
* fix tests (#794)

## 2.15.0 / 2018-03-07

* Update downloads badge to point to graph of downloads over time instead of duplicating link to npm
* Arguments description

## 2.14.1 / 2018-02-07

* Fix typing of help function

## 2.14.0 / 2018-02-05

* only register the option:version event once
* Fixes issue #727: Passing empty string for option on command is set to undefined
* enable eqeqeq rule
* resolves #754 add linter configuration to project
* resolves #560 respect custom name for version option
* document how to override the version flag
* document using options per command

## 2.13.0 / 2018-01-09

* Do not print default for --no-
* remove trailing spaces in command help
* Update CI's Node.js to LTS and latest version
* typedefs: Command and Option types added to commander namespace

## 2.12.2 / 2017-11-28

* fix: typings are not shipped

## 2.12.1 / 2017-11-23

* Move @types/node to dev dependency

## 2.12.0 / 2017-11-22

* add attributeName() method to Option objects
* Documentation updated for options with --no prefix
* typings: `outputHelp` takes a string as the first parameter
* typings: use overloads
* feat(typings): update to match js api
* Print default value in option help
* Fix translation error
* Fail when using same command and alias (#491)
* feat(typings): add help callback
* fix bug when description is added after command with options (#662)
* Format js code
* Rename History.md to CHANGELOG.md (#668)
* feat(typings): add typings to support TypeScript (#646)
* use current node

## 2.11.0 / 2017-07-03

* Fix help section order and padding (#652)
* feature: support for signals to subcommands (#632)
* Fixed #37, --help should not display first (#447)
* Fix translation errors. (#570)
* Add package-lock.json
* Remove engines
* Upgrade package version
* Prefix events to prevent conflicts between commands and options (#494)
* Removing dependency on graceful-readlink
* Support setting name in #name function and make it chainable
* Add .vscode directory to .gitignore (Visual Studio Code metadata)
* Updated link to ruby commander in readme files

## 2.10.0 / 2017-06-19

* Update .travis.yml. Drop support for older node.js versions.
* Fix require arguments in README.md
* On SemVer you do not start from 0.0.1
* Add missing semicolon in readme
* Add save param to npm install
* node v6 travis test
* Update Readme_zh-CN.md
* Allow literal '--' to be passed-through as an argument
* Test subcommand alias help
* link build badge to master branch
* Support the alias of Git style sub-command
* added keyword commander for better search result on npm
* Fix Sub-Subcommands
* test node.js stable
* Fixes TypeError when a command has an option called `--description`
* Update README.md to make it beginner friendly and elaborate on the difference between angled and square brackets.
* Add Chinese Readme file

## 2.9.0 / 2015-10-13

* Add option `isDefault` to set default subcommand #415 @Qix-
* Add callback to allow filtering or post-processing of help text #434 @djulien
* Fix `undefined` text in help information close #414 #416 @zhiyelee

## 2.8.1 / 2015-04-22

* Back out `support multiline description` Close #396 #397

## 2.8.0 / 2015-04-07

* Add `process.execArgv` support, execution args like `--harmony` will be passed to sub-commands #387 @DigitalIO @zhiyelee
* Fix bug in Git-style sub-commands #372 @zhiyelee
* Allow commands to be hidden from help #383 @tonylukasavage
* When git-style sub-commands are in use, yet none are called, display help #382 @claylo
* Add ability to specify arguments syntax for top-level command #258 @rrthomas
* Support multiline descriptions #208 @zxqfox

## 2.7.1 / 2015-03-11

* Revert #347 (fix collisions when option and first arg have same name) which causes a bug in #367.

## 2.7.0 / 2015-03-09

* Fix git-style bug when installed globally. Close #335 #349 @zhiyelee
* Fix collisions when option and first arg have same name. Close #346 #347 @tonylukasavage
* Add support for camelCase on `opts()`. Close #353 @nkzawa
* Add node.js 0.12 and io.js to travis.yml
* Allow RegEx options. #337 @palanik
* Fixes exit code when sub-command failing. Close #260 #332 @pirelenito
* git-style `bin` files in $PATH make sense. Close #196 #327 @zhiyelee

## 2.6.0 / 2014-12-30

* added `Command#allowUnknownOption` method. Close #138 #318 @doozr @zhiyelee
* Add application description to the help msg. Close #112 @dalssoft

## 2.5.1 / 2014-12-15

* fixed two bugs incurred by variadic arguments. Close #291 @Quentin01 #302 @zhiyelee

## 2.5.0 / 2014-10-24

* add support for variadic arguments. Closes #277 @whitlockjc

## 2.4.0 / 2014-10-17

* fixed a bug on executing the coercion function of subcommands option. Closes #270
* added `Command.prototype.name` to retrieve command name. Closes #264 #266 @tonylukasavage
* added `Command.prototype.opts` to retrieve all the options as a simple object of key-value pairs. Closes #262 @tonylukasavage
* fixed a bug on subcommand name. Closes #248 @jonathandelgado
* fixed `normalize` not honoring the option terminator. Closes #216 @abbr

## 2.3.0 / 2014-07-16

* add command aliases. Closes PR #210
* fix: Typos. Closes #99
* fix: Unused fs module. Closes #217

## 2.2.0 / 2014-03-29

* add passing of previous option value
* fix: support subcommands on windows. Closes #142
* Now the defaultValue is passed as the second argument of the coercion function.
## 2.1.0 / 2013-11-21 * add: allow cflag style option params, unit test, fixes #174 ## 2.0.0 / 2013-07-18 * remove input methods (.prompt, .confirm, etc) ## Older versions * [1.x](./changelogs/CHANGELOG-1.md) * [0.x](./changelogs/CHANGELOG-0.md) [#599]: https://github.com/tj/commander.js/issues/599 [#611]: https://github.com/tj/commander.js/issues/611 [#697]: https://github.com/tj/commander.js/issues/697 [#795]: https://github.com/tj/commander.js/issues/795 [#806]: https://github.com/tj/commander.js/issues/806 [#915]: https://github.com/tj/commander.js/issues/915 [#938]: https://github.com/tj/commander.js/issues/938 [#963]: https://github.com/tj/commander.js/issues/963 [#974]: https://github.com/tj/commander.js/issues/974 [#980]: https://github.com/tj/commander.js/issues/980 [#987]: https://github.com/tj/commander.js/issues/987 [#990]: https://github.com/tj/commander.js/issues/990 [#991]: https://github.com/tj/commander.js/issues/991 [#993]: https://github.com/tj/commander.js/issues/993 [#999]: https://github.com/tj/commander.js/issues/999 [#1010]: https://github.com/tj/commander.js/pull/1010 [#1018]: https://github.com/tj/commander.js/pull/1018 [#1026]: https://github.com/tj/commander.js/pull/1026 [#1027]: https://github.com/tj/commander.js/pull/1027 [#1028]: https://github.com/tj/commander.js/pull/1028 [#1035]: https://github.com/tj/commander.js/pull/1035 [#1040]: https://github.com/tj/commander.js/pull/1040 [#1047]: https://github.com/tj/commander.js/pull/1047 [#1048]: https://github.com/tj/commander.js/pull/1048 [#1049]: https://github.com/tj/commander.js/pull/1049 [#1051]: https://github.com/tj/commander.js/pull/1051 [#1052]: https://github.com/tj/commander.js/pull/1052 [#1053]: https://github.com/tj/commander.js/pull/1053 [#1071]: https://github.com/tj/commander.js/pull/1071 [#1081]: https://github.com/tj/commander.js/pull/1081 [#1091]: https://github.com/tj/commander.js/pull/1091 [#1096]: https://github.com/tj/commander.js/pull/1096 [#1102]: https://github.com/tj/commander.js/pull/1102 [#1118]: https://github.com/tj/commander.js/pull/1118 [#1157]: https://github.com/tj/commander.js/pull/1157 [Unreleased]: https://github.com/tj/commander.js/compare/master...develop [4.1.1]: https://github.com/tj/commander.js/compare/v4.0.0..v4.1.1 [4.1.0]: https://github.com/tj/commander.js/compare/v4.0.1..v4.1.0 [4.0.1]: https://github.com/tj/commander.js/compare/v4.0.0..v4.0.1 [4.0.0]: https://github.com/tj/commander.js/compare/v3.0.2..v4.0.0 [4.0.0-1]: https://github.com/tj/commander.js/compare/v4.0.0-0..v4.0.0-1 [4.0.0-0]: https://github.com/tj/commander.js/compare/v3.0.2...v4.0.0-0 [3.0.2]: https://github.com/tj/commander.js/compare/v3.0.1...v3.0.2 [3.0.1]: https://github.com/tj/commander.js/compare/v3.0.0...v3.0.1 [3.0.0]: https://github.com/tj/commander.js/compare/v2.20.1...v3.0.0 [2.20.1]: https://github.com/tj/commander.js/compare/v2.20.0...v2.20.1
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/commander/CHANGELOG.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/commander/CHANGELOG.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 15233 }
# Commander.js

[![Build Status](https://api.travis-ci.org/tj/commander.js.svg?branch=master)](http://travis-ci.org/tj/commander.js)
[![NPM Version](http://img.shields.io/npm/v/commander.svg?style=flat)](https://www.npmjs.org/package/commander)
[![NPM Downloads](https://img.shields.io/npm/dm/commander.svg?style=flat)](https://npmcharts.com/compare/commander?minimal=true)
[![Install Size](https://packagephobia.now.sh/badge?p=commander)](https://packagephobia.now.sh/result?p=commander)

The complete solution for [node.js](http://nodejs.org) command-line interfaces, inspired by Ruby's [commander](https://github.com/commander-rb/commander).

Read this in other languages: English | [简体中文](./Readme_zh-CN.md)

- [Commander.js](#commanderjs)
  - [Installation](#installation)
  - [Declaring program variable](#declaring-program-variable)
  - [Options](#options)
    - [Common option types, boolean and value](#common-option-types-boolean-and-value)
    - [Default option value](#default-option-value)
    - [Other option types, negatable boolean and flag|value](#other-option-types-negatable-boolean-and-flagvalue)
    - [Custom option processing](#custom-option-processing)
    - [Required option](#required-option)
    - [Version option](#version-option)
  - [Commands](#commands)
    - [Specify the argument syntax](#specify-the-argument-syntax)
    - [Action handler (sub)commands](#action-handler-subcommands)
    - [Git-style executable (sub)commands](#git-style-executable-subcommands)
  - [Automated --help](#automated---help)
    - [Custom help](#custom-help)
    - [.usage and .name](#usage-and-name)
    - [.outputHelp(cb)](#outputhelpcb)
    - [.helpOption(flags, description)](#helpoptionflags-description)
    - [.help(cb)](#helpcb)
  - [Custom event listeners](#custom-event-listeners)
  - [Bits and pieces](#bits-and-pieces)
    - [Avoiding option name clashes](#avoiding-option-name-clashes)
    - [TypeScript](#typescript)
    - [Node options such as --harmony](#node-options-such-as---harmony)
    - [Node debugging](#node-debugging)
    - [Override exit handling](#override-exit-handling)
  - [Examples](#examples)
  - [License](#license)
  - [Support](#support)
    - [Commander for enterprise](#commander-for-enterprise)

## Installation

```bash
npm install commander
```

## Declaring _program_ variable

Commander exports a global object which is convenient for quick programs.
This is used in the examples in this README for brevity.

```js
const program = require('commander');
program.version('0.0.1');
```

For larger programs which may use commander in multiple ways, including unit testing, it is better to create a local Command object to use.

```js
const commander = require('commander');
const program = new commander.Command();
program.version('0.0.1');
```

## Options

Options are defined with the `.option()` method, also serving as documentation for the options. Each option can have a short flag (single character) and a long name, separated by a comma or space.

The options can be accessed as properties on the Command object. Multi-word options such as "--template-engine" are camel-cased, becoming `program.templateEngine` etc. Multiple short flags may be combined as a single arg, for example `-abc` is equivalent to `-a -b -c`.

See also optional new behaviour to [avoid name clashes](#avoiding-option-name-clashes).

### Common option types, boolean and value

The two most used option types are a boolean flag and an option which takes a value (declared using angle brackets). Both are `undefined` unless specified on the command line.
```js
const program = require('commander');

program
  .option('-d, --debug', 'output extra debugging')
  .option('-s, --small', 'small pizza size')
  .option('-p, --pizza-type <type>', 'flavour of pizza');

program.parse(process.argv);

if (program.debug) console.log(program.opts());
console.log('pizza details:');
if (program.small) console.log('- small pizza size');
if (program.pizzaType) console.log(`- ${program.pizzaType}`);
```

```bash
$ pizza-options -d
{ debug: true, small: undefined, pizzaType: undefined }
pizza details:
$ pizza-options -p
error: option '-p, --pizza-type <type>' argument missing
$ pizza-options -ds -p vegetarian
{ debug: true, small: true, pizzaType: 'vegetarian' }
pizza details:
- small pizza size
- vegetarian
$ pizza-options --pizza-type=cheese
pizza details:
- cheese
```

`program.parse(arguments)` processes the arguments, leaving any args not consumed by the options as the `program.args` array.

### Default option value

You can specify a default value for an option which takes a value.

```js
const program = require('commander');

program
  .option('-c, --cheese <type>', 'add the specified type of cheese', 'blue');

program.parse(process.argv);

console.log(`cheese: ${program.cheese}`);
```

```bash
$ pizza-options
cheese: blue
$ pizza-options --cheese stilton
cheese: stilton
```

### Other option types, negatable boolean and flag|value

You can specify a boolean option long name with a leading `no-` to set the option value to false when used.
Defined alone this also makes the option true by default.

If you define `--foo` first, adding `--no-foo` does not change the default value from what it would otherwise be. You can specify a default boolean value for a boolean flag and it can be overridden on the command line.

```js
const program = require('commander');

program
  .option('--no-sauce', 'Remove sauce')
  .option('--cheese <flavour>', 'cheese flavour', 'mozzarella')
  .option('--no-cheese', 'plain with no cheese')
  .parse(process.argv);

const sauceStr = program.sauce ? 'sauce' : 'no sauce';
const cheeseStr = (program.cheese === false) ? 'no cheese' : `${program.cheese} cheese`;
console.log(`You ordered a pizza with ${sauceStr} and ${cheeseStr}`);
```

```bash
$ pizza-options
You ordered a pizza with sauce and mozzarella cheese
$ pizza-options --sauce
error: unknown option '--sauce'
$ pizza-options --cheese=blue
You ordered a pizza with sauce and blue cheese
$ pizza-options --no-sauce --no-cheese
You ordered a pizza with no sauce and no cheese
```

You can specify an option which functions as a flag but may also take a value (declared using square brackets).

```js
const program = require('commander');

program
  .option('-c, --cheese [type]', 'Add cheese with optional type');

program.parse(process.argv);

if (program.cheese === undefined) console.log('no cheese');
else if (program.cheese === true) console.log('add cheese');
else console.log(`add cheese type ${program.cheese}`);
```

```bash
$ pizza-options
no cheese
$ pizza-options --cheese
add cheese
$ pizza-options --cheese mozzarella
add cheese type mozzarella
```

### Custom option processing

You may specify a function to do custom processing of option values. The callback function receives two parameters, the user-specified value and the previous value for the option. It returns the new value for the option.

This allows you to coerce the option value to the desired type, or accumulate values, or do entirely custom processing.

You can optionally specify the default/starting value for the option after the function.
```js
const program = require('commander');

function myParseInt(value, dummyPrevious) {
  // parseInt takes a string and an optional radix
  return parseInt(value);
}

function increaseVerbosity(dummyValue, previous) {
  return previous + 1;
}

function collect(value, previous) {
  return previous.concat([value]);
}

function commaSeparatedList(value, dummyPrevious) {
  return value.split(',');
}

program
  .option('-f, --float <number>', 'float argument', parseFloat)
  .option('-i, --integer <number>', 'integer argument', myParseInt)
  .option('-v, --verbose', 'verbosity that can be increased', increaseVerbosity, 0)
  .option('-c, --collect <value>', 'repeatable value', collect, [])
  .option('-l, --list <items>', 'comma separated list', commaSeparatedList)
  ;

program.parse(process.argv);

if (program.float !== undefined) console.log(`float: ${program.float}`);
if (program.integer !== undefined) console.log(`integer: ${program.integer}`);
if (program.verbose > 0) console.log(`verbosity: ${program.verbose}`);
if (program.collect.length > 0) console.log(program.collect);
if (program.list !== undefined) console.log(program.list);
```

```bash
$ custom -f 1e2
float: 100
$ custom --integer 2
integer: 2
$ custom -v -v -v
verbosity: 3
$ custom -c a -c b -c c
[ 'a', 'b', 'c' ]
$ custom --list x,y,z
[ 'x', 'y', 'z' ]
```

### Required option

You may specify a required (mandatory) option using `.requiredOption`. The option must be specified on the command line, or by having a default value. The method is otherwise the same as `.option` in format, taking flags and description, and optional default value or custom processing.

```js
const program = require('commander');

program
  .requiredOption('-c, --cheese <type>', 'pizza must have cheese');

program.parse(process.argv);
```

```
$ pizza
error: required option '-c, --cheese <type>' not specified
```

### Version option

The optional `version` method adds handling for displaying the command version. The default option flags are `-V` and `--version`, and when present the command prints the version number and exits.

```js
program.version('0.0.1');
```

```bash
$ ./examples/pizza -V
0.0.1
```

You may change the flags and description by passing additional parameters to the `version` method, using the same syntax for flags as the `option` method. The version flags can be named anything, but a long name is required.

```js
program.version('0.0.1', '-v, --vers', 'output the current version');
```

## Commands

You can specify (sub)commands for your top-level command using `.command`. There are two ways these can be implemented: using an action handler attached to the command, or as a separate executable file (described in more detail later). In the first parameter to `.command` you specify the command name and any command arguments. The arguments may be `<required>` or `[optional]`, and the last argument may also be `variadic...`.

For example:

```js
// Command implemented using action handler (description is supplied separately to `.command`)
// Returns new command for configuring.
program
  .command('clone <source> [destination]')
  .description('clone a repository into a newly created directory')
  .action((source, destination) => {
    console.log('clone command called');
  });

// Command implemented using separate executable file (description is second parameter to `.command`)
// Returns top-level command for adding more commands.
program
  .command('start <service>', 'start named service')
  .command('stop [service]', 'stop named service, or all if no name supplied');
```

### Specify the argument syntax

You use `.arguments` to specify the arguments for the top-level command, and for subcommands they are included in the `.command` call. Angled brackets (e.g. `<required>`) indicate required input. Square brackets (e.g. `[optional]`) indicate optional input.

```js
const program = require('commander');

let cmdValue;
let envValue;

program
  .version('0.1.0')
  .arguments('<cmd> [env]')
  .action(function (cmd, env) {
    cmdValue = cmd;
    envValue = env;
  });

program.parse(process.argv);

if (typeof cmdValue === 'undefined') {
  console.error('no command given!');
  process.exit(1);
}
console.log('command:', cmdValue);
console.log('environment:', envValue || 'no environment given');
```

The last argument of a command can be variadic, and only the last argument. To make an argument variadic you append `...` to the argument name. For example:

```js
const program = require('commander');

program
  .version('0.1.0')
  .command('rmdir <dir> [otherDirs...]')
  .action(function (dir, otherDirs) {
    console.log('rmdir %s', dir);
    if (otherDirs) {
      otherDirs.forEach(function (oDir) {
        console.log('rmdir %s', oDir);
      });
    }
  });

program.parse(process.argv);
```

The variadic argument is passed to the action handler as an array. (And this also applies to `program.args`.)

### Action handler (sub)commands

You can add options to a command that uses an action handler. The action handler gets passed a parameter for each argument you declared, and one additional argument which is the command object itself. This command argument has the values for the command-specific options added as properties.

```js
const program = require('commander');

program
  .command('rm <dir>')
  .option('-r, --recursive', 'Remove recursively')
  .action(function (dir, cmdObj) {
    console.log('remove ' + dir + (cmdObj.recursive ? ' recursively' : ''));
  });

program.parse(process.argv);
```

You may supply an `async` action handler, in which case you call `.parseAsync` rather than `.parse`.

```js
async function run() { /* code goes here */ }

async function main() {
  program
    .command('run')
    .action(run);
  await program.parseAsync(process.argv);
}

main();
```

A command's options on the command line are validated when the command is used. Any unknown options will be reported as an error. However, if an action-based command does not define an action, then the options are not validated.

Configuration options can be passed with the call to `.command()`. Specifying `true` for `opts.noHelp` will remove the command from the generated help output.

### Git-style executable (sub)commands

When `.command()` is invoked with a description argument, this tells commander that you're going to use separate executables for sub-commands, much like `git(1)` and other popular tools.
Commander will search for the executables in the directory of the entry script (like `./examples/pm`) with the name `program-subcommand`, like `pm-install`, `pm-search`. You can specify a custom name with the `executableFile` configuration option.

You handle the options for an executable (sub)command in the executable, and don't declare them at the top-level.
```js
// file: ./examples/pm
const program = require('commander');

program
  .version('0.1.0')
  .command('install [name]', 'install one or more packages')
  .command('search [query]', 'search with optional query')
  .command('update', 'update installed packages', {executableFile: 'myUpdateSubCommand'})
  .command('list', 'list packages installed', {isDefault: true})
  .parse(process.argv);
```

Configuration options can be passed with the call to `.command()`. Specifying `true` for `opts.noHelp` will remove the command from the generated help output. Specifying `true` for `opts.isDefault` will run the subcommand if no other subcommand is specified. Specifying a name with `executableFile` will override the default constructed name.

If the program is designed to be installed globally, make sure the executables have proper modes, like `755`.

## Automated --help

The help information is auto-generated based on the information commander already knows about your program, so the following `--help` info comes for free:

```bash
$ ./examples/pizza --help
Usage: pizza [options]

An application for pizzas ordering

Options:
  -V, --version        output the version number
  -p, --peppers        Add peppers
  -P, --pineapple      Add pineapple
  -b, --bbq            Add bbq sauce
  -c, --cheese <type>  Add the specified type of cheese (default: "marble")
  -C, --no-cheese      You do not want any cheese
  -h, --help           output usage information
```

### Custom help

You can display arbitrary `-h, --help` information by listening for "--help". Commander will automatically exit once you are done, so that the remainder of your program does not execute and cause undesired behaviors. For example, in the following executable "stuff" will not be output when `--help` is used.

```js
#!/usr/bin/env node

const program = require('commander');

program
  .version('0.1.0')
  .option('-f, --foo', 'enable some foo')
  .option('-b, --bar', 'enable some bar')
  .option('-B, --baz', 'enable some baz');

// must be before .parse() since
// node's emit() is immediate

program.on('--help', function(){
  console.log('');
  console.log('Examples:');
  console.log('  $ custom-help --help');
  console.log('  $ custom-help -h');
});

program.parse(process.argv);

console.log('stuff');
```

Yields the following help output when `node script-name.js -h` or `node script-name.js --help` are run:

```Text
Usage: custom-help [options]

Options:
  -h, --help     output usage information
  -V, --version  output the version number
  -f, --foo      enable some foo
  -b, --bar      enable some bar
  -B, --baz      enable some baz

Examples:
  $ custom-help --help
  $ custom-help -h
```

### .usage and .name

These allow you to customise the usage description in the first line of the help. The name is otherwise deduced from the (full) program arguments. Given:

```js
program
  .name("my-command")
  .usage("[global options] command")
```

The help will start with:

```Text
Usage: my-command [global options] command
```

### .outputHelp(cb)

Output help information without exiting.
Optional callback cb allows post-processing of help text before it is displayed.

If you want to display help by default (e.g. if no command was provided), you can use something like:

```js
const program = require('commander');
const colors = require('colors');

program
  .version('0.1.0')
  .command('getstream [url]', 'get stream URL')
  .parse(process.argv);

if (!process.argv.slice(2).length) {
  program.outputHelp(make_red);
}

function make_red(txt) {
  return colors.red(txt); // display the help text in red on the console
}
```

### .helpOption(flags, description)

Override the default help flags and description.
```js
program
  .helpOption('-e, --HELP', 'read more information');
```

### .help(cb)

Output help information and exit immediately.
Optional callback cb allows post-processing of help text before it is displayed.

## Custom event listeners

You can execute custom actions by listening to command and option events.

```js
program.on('option:verbose', function () {
  process.env.VERBOSE = this.verbose;
});

// error on unknown commands
program.on('command:*', function () {
  console.error('Invalid command: %s\nSee --help for a list of available commands.', program.args.join(' '));
  process.exit(1);
});
```

## Bits and pieces

### Avoiding option name clashes

The original and default behaviour is that the option values are stored as properties on the program, and the action handler is passed a command object with the options values stored as properties. This is very convenient to code, but the downside is possible clashes with existing properties of Command.

There are two new routines to change the behaviour, and the default behaviour may change in the future:

- `storeOptionsAsProperties`: whether to store option values as properties on command object, or store separately (specify false) and access using `.opts()`
- `passCommandToAction`: whether to pass command to action handler, or just the options (specify false)

```js
// file: ./examples/storeOptionsAsProperties.action.js
program
  .storeOptionsAsProperties(false)
  .passCommandToAction(false);

program
  .name('my-program-name')
  .option('-n,--name <name>');

program
  .command('show')
  .option('-a,--action <action>')
  .action((options) => {
    console.log(options.action);
  });

program.parse(process.argv);

const programOptions = program.opts();
console.log(programOptions.name);
```

### TypeScript

The Commander package includes its TypeScript Definition file, but also requires the node types which you need to install yourself. e.g.

```bash
npm install commander
npm install --save-dev @types/node
```

If you use `ts-node` and git-style sub-commands written as `.ts` files, you need to call your program through node to get the sub-commands called correctly. e.g.

```bash
node -r ts-node/register pm.ts
```

### Node options such as `--harmony`

You can enable the `--harmony` option in two ways:

- Use `#! /usr/bin/env node --harmony` in the sub-command scripts. (Note Windows does not support this pattern.)
- Use the `--harmony` option when calling the command, like `node --harmony examples/pm publish`.

The `--harmony` option will be preserved when spawning the sub-command process.

### Node debugging

If you are using the node inspector for [debugging](https://nodejs.org/en/docs/guides/debugging-getting-started/) git-style executable (sub)commands using `node --inspect` et al, the inspector port is incremented by 1 for the spawned subcommand.

### Override exit handling

By default Commander calls `process.exit` when it detects errors, or after displaying the help or version. You can override this behaviour and optionally supply a callback. The default override throws a `CommanderError`.

The override callback is passed a `CommanderError` with properties `exitCode` number, `code` string, and `message`. The default override behaviour is to throw the error, except for async handling of executable subcommand completion, which carries on. The normal display of error messages or version or help is not affected by the override, which is called after the display.

```js
program.exitOverride();

try {
  program.parse(process.argv);
} catch (err) {
  // custom processing...
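  // For example, a sketch only (not a prescribed pattern), acting on the
  // CommanderError properties documented above (exitCode, code, message):
  if (err.exitCode !== 0) {
    console.error(`${err.code}: ${err.message}`);
  }
  // otherwise (e.g. help or version was displayed) carry on without exiting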
}
```

## Examples

```js
const program = require('commander');

program
  .version('0.1.0')
  .option('-C, --chdir <path>', 'change the working directory')
  .option('-c, --config <path>', 'set config path. defaults to ./deploy.conf')
  .option('-T, --no-tests', 'ignore test hook');

program
  .command('setup [env]')
  .description('run setup commands for all envs')
  .option("-s, --setup_mode [mode]", "Which setup mode to use")
  .action(function(env, options){
    const mode = options.setup_mode || "normal";
    env = env || 'all';
    console.log('setup for %s env(s) with %s mode', env, mode);
  });

program
  .command('exec <cmd>')
  .alias('ex')
  .description('execute the given remote cmd')
  .option("-e, --exec_mode <mode>", "Which exec mode to use")
  .action(function(cmd, options){
    console.log('exec "%s" using %s mode', cmd, options.exec_mode);
  }).on('--help', function() {
    console.log('');
    console.log('Examples:');
    console.log('');
    console.log('  $ deploy exec sequential');
    console.log('  $ deploy exec async');
  });

program
  .command('*')
  .action(function(env){
    console.log('deploying "%s"', env);
  });

program.parse(process.argv);
```

More demos can be found in the [examples](https://github.com/tj/commander.js/tree/master/examples) directory.

## License

[MIT](https://github.com/tj/commander.js/blob/master/LICENSE)

## Support

Commander 4.x is supported on Node 8 and above, and is likely to work with Node 6, but it is not tested.
(For versions of Node below Node 6, use Commander 3.x or 2.x.)

The main forum for free and community support is the project [Issues](https://github.com/tj/commander.js/issues) on GitHub.

### Commander for enterprise

Available as part of the Tidelift Subscription

The maintainers of Commander and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. [Learn more.](https://tidelift.com/subscription/pkg/npm-commander?utm_source=npm-commander&utm_medium=referral&utm_campaign=enterprise&utm_term=repo)
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/commander/Readme.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/commander/Readme.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 23465 }
0.5.4 / 2021-12-10
==================

  * deps: safe-buffer@5.2.1

0.5.3 / 2018-12-17
==================

  * Use `safe-buffer` for improved Buffer API

0.5.2 / 2016-12-08
==================

  * Fix `parse` to accept any linear whitespace character

0.5.1 / 2016-01-17
==================

  * perf: enable strict mode

0.5.0 / 2014-10-11
==================

  * Add `parse` function

0.4.0 / 2014-09-21
==================

  * Expand non-Unicode `filename` to the full ISO-8859-1 charset

0.3.0 / 2014-09-20
==================

  * Add `fallback` option
  * Add `type` option

0.2.0 / 2014-09-19
==================

  * Reduce ambiguity of file names with hex escape in buggy browsers

0.1.2 / 2014-09-19
==================

  * Fix periodic invalid Unicode filename header

0.1.1 / 2014-09-19
==================

  * Fix invalid characters appearing in `filename*` parameter

0.1.0 / 2014-09-18
==================

  * Make the `filename` argument optional

0.0.0 / 2014-09-18
==================

  * Initial release
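For orientation, here is a minimal sketch of the API these entries refer to: `contentDisposition()` with the `type` and `fallback` options added in 0.3.0, and the `parse` function added in 0.5.0. The printed values are illustrative assumptions, not guaranteed output.

```js
// Minimal sketch of the content-disposition API referenced above.
// Assumes the package's signatures: contentDisposition(filename, options)
// and contentDisposition.parse(headerValue).
const contentDisposition = require('content-disposition');

// `type` and `fallback` were added in 0.3.0; `fallback` supplies an
// ISO-8859-1 filename for clients that cannot handle the Unicode one.
const header = contentDisposition('€ rates.pdf', {
  type: 'attachment',
  fallback: 'EUR rates.pdf'
});

// `parse` was added in 0.5.0 and splits a header value back into parts.
const parsed = contentDisposition.parse(header);
console.log(parsed.type);                // e.g. 'attachment'
console.log(parsed.parameters.filename); // e.g. '€ rates.pdf'
```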
{ "source": "ammaarreshi/Gemini-Search", "title": "node_modules/content-disposition/HISTORY.md", "url": "https://github.com/ammaarreshi/Gemini-Search/blob/main/node_modules/content-disposition/HISTORY.md", "date": "2025-01-04T14:07:19", "stars": 1910, "description": "Perplexity style AI Search engine clone built with Gemini 2.0 Flash and Grounding", "file_size": 1019 }