esaminu_console-boilerpsdfsdflate-teasfrafsdmplate-rs-25
.github scripts runfe.sh workflows deploy-to-console.yml readme.yml tests.yml .gitpod.yml README.md contract README.md build.sh deploy.sh package-lock.json package.json src contract.ts model.ts utils.ts tsconfig.json integration-tests package-lock.json package.json src main.ava.ts package-lock.json package.json
# Donation 💸 [![](https://img.shields.io/badge/⋈%20Examples-Basics-green)](https://docs.near.org/tutorials/welcome) [![](https://img.shields.io/badge/Gitpod-Ready-orange)](https://gitpod.io/#/https://github.com/near-examples/donation-js) [![](https://img.shields.io/badge/Contract-js-yellow)](https://docs.near.org/develop/contracts/anatomy) [![](https://img.shields.io/badge/Frontend-JS-yellow)](https://docs.near.org/develop/integrate/frontend) [![Build Status](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fnear-examples%2Fdonation-js%2Fbadge&style=flat&label=Tests)](https://actions-badge.atrox.dev/near-examples/donation-js/goto) Our Donation example lets you forward money to an account while keeping track of it. It is one of the simplest examples of making a contract receive and send money. ![](https://docs.near.org/assets/images/donation-7cf65e5e131274fd1ae9aa34bc465bb8.png) # What This Example Shows 1. How to receive and transfer $NEAR on a contract. 2. How to divide a project into multiple modules. 3. How to handle the storage costs. 4. How to handle transaction results. 5. How to use a `Map`. <br /> # Quickstart Clone this repository locally or [**open it in gitpod**](https://gitpod.io/#/github.com/near-examples/donation-js). Then follow these steps: ### 1. Install Dependencies ```bash npm install ``` ### 2. Test the Contract Deploy your contract in a sandbox and simulate interactions from users. ```bash npm test ``` ### 3. Deploy the Contract Build the contract and deploy it to a testnet account. ```bash npm run deploy ``` --- # Learn More 1. Learn more about the contract through its [README](./contract/README.md). 2. Check [**our documentation**](https://docs.near.org/develop/welcome). # Donation Contract The smart contract exposes methods to handle donating $NEAR to a `beneficiary`. ```ts @call donate() { // Get who is calling the method and how much $NEAR they attached let donor = near.predecessorAccountId(); let donationAmount: bigint = near.attachedDeposit() as bigint; let donatedSoFar = this.donations.get(donor) === null ? BigInt(0) : BigInt(this.donations.get(donor) as string) let toTransfer = donationAmount; // This is the user's first donation, let's register it, which increases storage if(donatedSoFar == BigInt(0)) { assert(donationAmount > STORAGE_COST, `Attach at least ${STORAGE_COST} yoctoNEAR`); // Subtract the storage cost from the amount to transfer toTransfer -= STORAGE_COST } // Persist in storage the amount donated so far donatedSoFar += donationAmount this.donations.set(donor, donatedSoFar.toString()) // Send the money to the beneficiary const promise = near.promiseBatchCreate(this.beneficiary) near.promiseBatchActionTransfer(promise, toTransfer) // Return the total amount donated so far return donatedSoFar.toString() } ``` <br /> # Quickstart 1. Make sure you have installed [node.js](https://nodejs.org/en/download/package-manager/) >= 16. 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract to the NEAR testnet by running: ```bash npm run deploy ``` Once finished, check the `neardev/dev-account` file to find the address at which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` The contract will be automatically initialized with a default `beneficiary`.
To initialize the contract yourself do: ```bash # Use near-cli to initialize contract (optional) near call <dev-account> init '{"beneficiary":"<account>"}' --accountId <dev-account> ``` <br /> ## 2. Get Beneficiary `beneficiary` is a read-only method (`view` method) that returns the beneficiary of the donations. `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash near view <dev-account> beneficiary ``` <br /> ## 3. Donate `donate` forwards any attached money to the `beneficiary` while keeping track of it. `donate` is a payable method which can only be invoked using a NEAR account. The account needs to attach money and pay GAS for the transaction. ```bash # Use near-cli to donate 1 NEAR near call <dev-account> donate --amount 1 --accountId <account> ``` **Tip:** If you would like to `donate` using your own account, first log in to NEAR using: ```bash # Use near-cli to log in to your NEAR account near login ``` and then use the logged-in account to sign the transaction: `--accountId <your-account>`.
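For completeness, here is a minimal sketch of making the same payable `donate` call from JavaScript with near-api-js instead of near-cli. This is an illustrative snippet, not part of the example's frontend; the contract id placeholder and the 30 TGas figure are assumptions.

```ts
// Hedged sketch: donating 1 NEAR through near-api-js (browser wallet flow assumed).
import { connect, keyStores, WalletConnection, utils } from "near-api-js";

const near = await connect({
  networkId: "testnet",
  nodeUrl: "https://rpc.testnet.near.org",
  walletUrl: "https://wallet.testnet.near.org",
  keyStore: new keyStores.BrowserLocalStorageKeyStore(),
});
const wallet = new WalletConnection(near, "donation-example"); // app key prefix is arbitrary

// Call the payable `donate` method, attaching 1 NEAR converted to yoctoNEAR.
await wallet.account().functionCall({
  contractId: "<dev-account>", // the account printed in ./neardev/dev-account
  methodName: "donate",
  args: {},
  gas: "30000000000000", // 30 TGas (assumed; adjust as needed)
  attachedDeposit: utils.format.parseNearAmount("1"),
});
```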
near-tips_subgraph-metrics
networks.json package.json src near-tips-testnet.ts tsconfig.json
kasperdoggames_near-mintbase-hack
README.md constants mintbase.ts next-env.d.ts next.config.js package-lock.json package.json pages api validate.ts postcss.config.js public vercel.svg services apolloClient.ts support jsSHA.ts nearUtils.ts tailwind.config.js tsconfig.json
### Mintbase - NEAR - Crypto Tickets Uses NEAR message signing, the NEAR block height, and NFT ownership through Mintbase to implement blockchain-based tickets backed by an NFT hosted on Mintbase. Built from the `create-mintbase-app` boilerplate. ## How it works It uses a number of criteria to validate an NFT as a ticket. 1. The ability to sign a message (the block height) with a NEAR access key and validate it at the kiosk end proves the request came from a specific NEAR wallet. 2. The NEAR block height is used to provide an expiry time by comparing the signed block height with the block height at the time of verification by the kiosk. As the block creation time is known and fairly consistent, a rough expiry window (30s) can be defined. 3. Checking the NFT tokenId owner ensures it is owned by the same account as the one that signed the block height in (1). ## Technologies - Mintbase - Next-js - Tailwindcss - Near blockchain/wallet
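To make the flow above concrete, here is a hedged sketch of the sign-and-verify idea using near-api-js. The variable names, RPC endpoint, and the 30-block freshness window are assumptions for illustration, not this project's actual code.

```ts
// Hedged sketch of the ticket-validation flow described above.
import { KeyPair, providers } from "near-api-js";

const provider = new providers.JsonRpcProvider({ url: "https://rpc.mainnet.near.org" });

// Wallet side: sign the current block height with a NEAR access key.
const keyPair = KeyPair.fromString("ed25519:<private-key>"); // placeholder key
const signedHeight = (await provider.block({ finality: "final" })).header.height;
const message = Buffer.from(String(signedHeight));
const { signature, publicKey } = keyPair.sign(message);

// Kiosk side: verify the signature and that the signed height is recent (~30s at ~1 block/s).
const currentHeight = (await provider.block({ finality: "final" })).header.height;
const fresh = currentHeight - signedHeight < 30;
const ticketSignatureValid = publicKey.verify(message, signature) && fresh;
// A real kiosk would additionally check NFT ownership on the Mintbase contract (step 3).
```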
near_create-near-app
index.js jest.config.js templates frontend next-app next.config.js src app hello-components page.js hello-near page.js layout.js page.js components cards.js navigation.js vm-component.js config.js wallets near-wallet.js next-page next.config.js src components cards.js navigation.js vm-component.js config.js layout.js pages _app.js _document.js hello-components index.js hello-near index.js index.js wallets near-wallet.js
FroVolod_near-social
.github workflows code_style.yml deploy-mainnet.yml deploy-testnet.yml publish-to-npm.yml release-plz.yml release.yml CHANGELOG.md Cargo.toml Cross.toml README.md docs media new-project.svg release-plz.toml src common.rs components delete mod.rs selected mod.rs sign_as mod.rs deploy mod.rs sign_as.rs diff mod.rs download mod.rs mod.rs consts.rs extensions mod.rs self_update mod.rs main.rs project mod.rs new mod.rs social_db data delete mod.rs sign_as.rs mod.rs set data mod.rs with_json.rs with_json_file.rs with_text.rs with_text_file.rs mod.rs sign_as.rs view mod.rs manage_profile mod.rs mod.rs permissions grant_write_access account_id mod.rs function_call_access_key mod.rs mod.rs sign_as mod.rs storage_deposit mod.rs mod.rs prepaid_storage mod.rs profile_management mod.rs view_profile as_json.rs as_text.rs mod.rs socialdb_types.rs
# BOS CLI A command line utility that helps develop components for the [NEAR Blockchain Operating System](https://near.org/blog/near-announces-the-blockchain-operating-system/) by allowing developers to use standard developer tools (their favorite code editor and standard source code version control) and then deploy their components to SocialDB in one command. <p> <img src="docs/media/new-project.svg" alt="" width="1200"> </p> ## Command groups - `project` - Project management - `components` - Working with components (Download, Deploy, etc.) - `socialdb` - SocialDb management ### project - Project management - `new` allows you to initialize, edit and then deploy a new component to your near.social account. ### components - Working with components (Download, Deploy, etc.) - `deploy` allows you to upload/publish components from your local `./src` folder to a near.social account. - `diff` shows changes between deployed and local components. - `download` allows you to download the existing components from any near.social account to the local `./src` folder. - `delete` allows you to delete the existing components from any near.social account. > *Note:* > > *By default, the Social DB prefix is computed as `<account-id>/widget/<component-folder>.<component-name>`. > If you wish, you can change the default folder (`widget`) using the CLI option `--social-db-folder`:* > ```sh > bos components --social-db-folder "component_beta" download ... > ``` ### socialdb - SocialDb management #### data - Data management: viewing, adding, updating, deleting information by a given key - `view` allows you to view information by a given key. - `set` allows you to add or update information by a given key. - `delete` allows you to delete information by the specified key. #### manage-profile - Profile management: view, update - `view-profile` allows you to view the profile for an account. - `update-profile` allows you to update the profile for an account. #### prepaid-storage - Storage management: deposit, withdrawal, balance review - `view-balance` allows you to view the storage balance for an account. - `deposit` allows you to make a storage deposit for the account. - `withdraw` allows you to withdraw a deposit from storage for an account ID. #### permissions - Granting access permissions to a different account - `grant-write-access` allows you to grant write access to a function-call access key or to another account. More commands are still on the way; see the [issues tracker](https://github.com/FroVolod/bos-cli-rs/issues) and propose more features there. ## Install You can find binary releases of the `bos` CLI for your OS on the [Releases page](https://github.com/bos-cli-rs/bos-cli-rs/releases/). ### Install prebuilt binaries via shell script (macOS, Linux, WSL) ```sh curl --proto '=https' --tlsv1.2 -LsSf https://github.com/bos-cli-rs/bos-cli-rs/releases/latest/download/bos-cli-installer.sh | sh ``` ### Install prebuilt binaries via PowerShell script (Windows) ```sh irm https://github.com/bos-cli-rs/bos-cli-rs/releases/latest/download/bos-cli-installer.ps1 | iex ``` ### Run prebuilt binaries with npx (Node.js) ```sh npx bos-cli ``` ### Install prebuilt binaries into your npm project (Node.js) ```sh npm install bos-cli ``` ### Install from source code (Cargo) Before getting to installation, make sure you have [Rust](https://rustup.rs) and system dependencies installed on your computer.
To install system dependencies: * on Ubuntu Linux: `apt install pkg-config libudev-dev` * on Fedora Linux: `dnf install pkg-config libudev-devel` Once system dependencies and Rust are installed, you can install the latest released `bos-cli` from source by using the following command: ```bash cargo install bos-cli ``` or install the most recent version from the git repository: ```bash $ cargo install --git https://github.com/bos-cli-rs/bos-cli-rs ``` ### GitHub Actions #### Reusable Workflow This repo contains a reusable workflow which you can directly leverage from your component repository. 1. Prepare an access key that will be used for component deployment. It is recommended to use a dedicated function-call-only access key, so you need to: 1.1. Add a new access key to your account, explicitly adding permissions to call the `set` method. Here is the [near CLI](https://near.cli.rs) command to do that: ```bash near account add-key "ACCOUNT_ID" grant-function-call-access --allowance '1 NEAR' --receiver-account-id social.near --method-names 'set' autogenerate-new-keypair print-to-terminal network-config mainnet ``` 1.2. Grant write permission to the key (replace `PUBLIC_KEY` with the one you added to the account on the previous step, and `ACCOUNT_ID` with the account id where you want to deploy BOS components): ```bash near contract call-function as-transaction social.near grant_write_permission json-args '{"public_key": "PUBLIC_KEY", "keys": ["ACCOUNT_ID/widget"]}' prepaid-gas '100.000 TeraGas' attached-deposit '1 NEAR' sign-as "ACCOUNT_ID" network-config mainnet ``` > Note: The attached deposit is going to be used to cover the storage costs associated with the data you store on BOS; 1 NEAR is enough to store 100 kB of data (component code, metadata, etc.). 2. In your repo, go to _Settings > Secrets and Variables > Actions_ and create a new repository secret named `SIGNER_PRIVATE_KEY` with the private key in `ed25519:<private_key>` format (if you followed (1.1), it will be printed in your terminal). 3. Create a file at `.github/workflows/deploy-mainnet.yml` in your component repo with the following contents. See the [workflow definition](./.github/workflows/deploy-mainnet.yml) for explanations of the inputs: ```yml name: Deploy Components to Mainnet on: push: branches: [main] jobs: deploy-mainnet: uses: bos-cli-rs/bos-cli-rs/.github/workflows/deploy-mainnet.yml@master with: deploy-account-address: <FILL> signer-account-address: <FILL> signer-public-key: <FILL> secrets: SIGNER_PRIVATE_KEY: ${{ secrets.SIGNER_PRIVATE_KEY }} ``` 4. Commit and push the workflow. 5. On changes to the `main` branch, updated components in `src` will be deployed! #### Custom Workflow Copy the contents of `.github/workflows/deploy-mainnet.yml` to your repo as a starting point.
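For orientation, here is a minimal sketch of what a component file under the local `./src` folder could look like before publishing it with `bos components deploy`. The file name, folder, and markup are illustrative assumptions, not part of this repository.

```jsx
// src/HelloWorld.jsx (would be published as <account-id>/widget/HelloWorld)
// BOS components are plain JSX bodies; `props` is injected by the runtime.
const name = props.name ?? "world";

return <div className="p-2">Hello from BOS, {name}!</div>;
```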
near-examples_coingecko-oracle
README.md contract babel.config.json jsconfig.json package-lock.json package.json src helpers.js index.js tests contract.ava.js scripts dev-delete.sh dev-deploy.sh extract-env-keys.sh oracle_deploy.sh server requirements.txt src main.py utils.py tests e2e test_oracle.py integration api test_coingecko.py
# Simple Coingecko Oracle This repo contains a simple Coingecko oracle contract written in JavaScript (with [near-sdk-js](https://github.com/near/near-sdk-js)) that can be used to query the price of a cryptocurrency from within the network, in this case NEAR. The goal behind this example is to show how oracles can obtain their data from the outside world, and how they can be used to provide data to the network. The project is divided into two parts: - The first part is the oracle, which is a smart contract that can be queried from within the network for information and is deployed to `testnet`. - The second part is the server, which holds the keys to the only account that can push data to the oracle. The folder structure matches this pattern too: ``` ├── .github ⟵ contains GitHub Actions yml files for testing and running the code that pushes data to the testnet oracle ├── contract └── src ⟵ oracle contract code └── tests ⟵ integration tests for the oracle ├── scripts ⟵ contains helper scripts, e.g. deploying contract in GitHub CI ├── server └── src ⟵ server code that pushes data to the oracle └── tests └── e2e ⟵ end-to-end tests for the server and contract └── integration/api ⟵ integration tests for the server and source data ``` ## Schematic Overview This diagram demonstrates the interaction between the oracle, the network (other smart contracts and accounts) and the server: ![Schematic Overview](./.assets/schematic.png "Schematic Overview") ## The Oracle Contract This contract is deployed to `oracle.idea404.testnet` and can be queried by calling the `getPrices` method. You can try it out by running this command in your shell (requires having the [near-cli](https://github.com/near/near-cli) installed): ```sh near view oracle.idea404.testnet getPrices '' ``` ## Feeding Data to the Oracle Calling the `main.py` module in the `server` folder will push data to the oracle. It requires a parameter for the oracle account ID to which it will send the price data. It is run from the project root by the machine holding the keys for the `coingecko-feed.idea404.testnet` account, as follows (you can also see how it is invoked on a fixed cron schedule in `run.yaml` under `.github/workflows`): ```sh python -m server.src.main -o oracle.idea404.testnet ```
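If you prefer to query the oracle programmatically rather than through near-cli, here is a hedged sketch of the same `getPrices` view call over RPC with near-api-js. It assumes the method takes no arguments and is not part of this repository.

```ts
// Hedged sketch: calling the oracle's `getPrices` view method (no account or keys required).
import { providers } from "near-api-js";

const provider = new providers.JsonRpcProvider({ url: "https://rpc.testnet.near.org" });

const response: any = await provider.query({
  request_type: "call_function",
  account_id: "oracle.idea404.testnet",
  method_name: "getPrices",
  args_base64: Buffer.from(JSON.stringify({})).toString("base64"),
  finality: "optimistic",
});

// `result` is the raw byte array returned by the view call.
const prices = JSON.parse(Buffer.from(response.result).toString());
console.log(prices);
```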
andrew-sol_workspaces-reproduction
Cargo.toml README.md run.sh src lib.rs staking_farm.rs types.rs utils.rs validator.rs
## Setup ```sh chmod +x run.sh ``` ## Run the tests ```bash ./run.sh ``` or ```bash cargo run --example integration-tests ```
leohhhn_fetch_USN_price
.gitpod.yml .theia settings.json .travis.yml README-Gitpod.md README.md contract Cargo.toml src lib.rs frontend assets css global.css js main.js near config.js utils.js index.html integration-tests README.md rs Cargo.toml src tests.rs ts main.ava.ts neardev dev-account.env shared-test-staging test.near.json shared-test test.near.json package.json
Smart Contract for fetching USN's price via priceoracle.near =================================
evienear_evie_front
README.md babel.config.js changelog.md package.json public index.html themes dark back.svg buy-limit-outline.svg buy-limit.svg buy-outline.svg buy.svg calendar-outline.svg calendar.svg education-outline.svg education.svg faq-outline.svg faq.svg games-outline.svg games.svg logo.svg search.svg sell-outline.svg sell.svg stats-outline.svg stats.svg theme.css toggle.svg light back.svg buy-limit-outline.svg buy-limit.svg buy-outline.svg buy.svg calendar-outline.svg calendar.svg education-outline.svg education.svg faq-outline.svg faq.svg games-outline.svg games.svg logo.svg search.svg sell-outline.svg sell.svg stats-outline.svg stats.svg theme.css toggle.svg src Routes.js assets buttons auto.svg dlt.svg doge.svg mintbase.svg paras.svg xdn.svg chains algorand.svg aptos.svg aurora.svg avax.svg binance.svg cardano.svg ethereum.svg near.svg polygon.svg solana.svg doggy-slider thumb.svg icons camera.svg close.svg select.svg trash-can-outline.svg logo my-near-wallet-icon.svg near-wallet-icon.svg near.svg sender-icon.svg markets apollo42.near.svg higgsfield.near.svg market.tradeport.near.svg marketplace.paras.near.svg zoneart.near.svg upload.svg components Layout i18n.js config.js main.js plugins i18n.js vuetify.js services api.js store index.js vue.config.js
# Evie
Learn-NEAR_NCD--Connect-IoT
.cache 0a ef6a5257c569e0a3869d2bd4358962.json 0b 46b0f669dbdf7973b1861cff27eabf.json 0d a1f476b9a50fce603bd425e84c11ed.json 11 15e2bbcbaacab4516e5f83df779c2b.json 14 a904ff3b797f3e027ace8c2cab5e84.json 15 2f0d4ea5db93a16bb214cc4f2b92bc.json 70f6a510e5aa6f83597222c744c9d9.json 16 54f41d4f544b3ff8ac87937810d4f0.json 17 8951bb94a5f0fd733a11ca0203a12d.json c25464edc8d4471cb0edb8118ced91.json 25 c3870514da06e78ca653c74c6eef0d.json c568638344cba4b155949f59e413cd.json 27 88b6e98e46dfdc2531a3cb259374be.json eb3c821682cb31f37bcbe2e95577b9.json 28 93bada5d6b7826d00ded3d77ac1158.json 29 03937b2914d7d20aedd9a700182e2b.json 2c ac8c343ca060d86cbf4d7d77494e93.json 2f 451cdec484f773ac370525536010f3.json a72407af3f31909f9f185ddd101655.json 31 b379b13655d81e90a53b4bcd88d013.json 32 212d0f8dffa36a997e1c3bbb071f09.json 2c24f07217787157180eefb8ca4b28.json 33 58c2a505cf5f67b62562e1c3620491.json e86f8b5bd06725fb35785bd8e19f05.json 3a 68bbcaf090726d58530a575aa820f4.json 3c 74748148d94748548ec7aee984008e.json 3d 30577862439fe97c7f8bc355f809ca.json 42 2fa699704ecb01a09fa029f61b70ee.json 44 00c7305adc0e69e5f2801a52012bc1.json e228745a72238843916564f7d9939a.json 46 9e4def45a1c257e34187481b50a9d0.json 49 de346b9d167cedbab28cb786376581.json 4b 0c167d069d7cc83a25e4dfe0212a3c.json 4c 48070024e7b43445576dda4a2211ce.json 4f e0daa870dc93c38d8ef13fa3f69d04.json 50 24f544ad6e215f8ce449300dc3f79d.json 52 d8e97d9e6f28f988f0b3739cce55cf.json 57 bec642824ebe0124a890ee04654323.json 59 4a1e7ab2baf1696b2533b10aecf3dc.json 5a 22551b884dd1fcc79a07209e215db1.json 62 14ce70ba972abb5f7ed080fcc042e9.json 67 f242d4ce39f73502b8513021565e81.json 6a 232b56b2425781267a6bbc0286052c.json 70 70eaeffb79b0eb279196781bfb9192.json 73 d621b800d5136ced9880104ad781df.json 76 d5ae003d43d273b9f0a6345862881b.json 78 fb18435a887896ecc89afce5760f35.json 7a b383320498574694be4379fd94b1db.json 82 4fd1648c87b0407c5a1fd43ea85e62.json 83 86e02dc1b3278facf03fd2bd963d83.json 8a 13cacbae6e30cd68b148ca0ec8edbc.json f45a35c3b7a767f47fb37af697a201.json 8f 8ba4cb88bb34fabce8ad9479228d85.json 91 20dfd55b2bfe73c29e26062662c181.json 961fc45ef4452f24bab5ec3d77fb57.json fbb7477cad406ba33f756383abc805.json 98 42f9c5a53580bb8a3c7870f0a6d2ab.json 99 936dc9062702bfcb7ba121cb3f06c5.json 9a d8582b802edc43614ec160c6922aa0.json 9d ffe2003cb75e512c8d3383ef455d48.json a2 3c5e8616e7393b53dd3515f19e634f.json a4 38c1e818ed9429b2cdd14e4e1df553.json ad 68a33b58d686b81d3546345f01c054.json 8e4da9b487c2b042ea64c2198213d8.json b1 32f06d2a375c359729945da50058b5.json 42b6856aa2439f373fede82cde9a48.json e36b44b3d068d816874dcf106872ac.json b2 c5e7964ea8344e69059eb938e77911.json bb 841168d71d3b9b75a595825910da7a.json bf a0da81e162611d6d01b6b75f46ce9f.json c1 4b59faa75f91c0694041df414fd583.json c3 69fd3f57ea7f972827315a7af892fc.json 8d319de6f7dc74f17045eb815bb6e9.json c6 91cd034927c2e2b1375e8c99f46fad.json aea2806ddf449ff7f82bacd1fa1dbd.json c8 123f23b57d40461868adee05a1b1f2.json cb 7d98648a4b55a045ba31e820234105.json 818405f66f547f9e60990f9176e05d.json cc 4d796fcc3f3bb7b23c5572741cfd80.json cd db1b674168f095c27710359c1fab99.json d3 98f53950c8d5c0e92ca838d0a35dda.json d9 5c152b271c7072b295e917c4b79d8b.json df acdad9c26c5576a5054021332401b8.json e1 b680c33ce9a6de6414cd3f89d4fd38.json e3 61e30af4e41dc501b225ce1dc0fe35.json f3 96956346662717e45bdd67cc3a395a.json fa 81a133bb00510bf0040514ed253447.json fc c7372786cdf98a522b71254439d2ab.json fd 793b7b78c6a76ce456df2f14b48f0e.json README.md contract README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts main.spec.ts as_types.d.ts 
index.ts model.ts tsconfig.json compile.js package-lock.json package.json neardev dev-account.env shared-test-staging test.near.json shared-test test.near.json package.json
# 🚧 connectiot ================== > Project built for the NCD Bootcamp NEAR Hispano. # ConnectIoT is a service that gives us access, through the blockchain, to different IoT devices and lets us monitor them according to the data they collect over time # ConnectIoT supports the following operations: 1. create new devices 2. change the arguments of already-created devices 3. view the device type 4. delete devices 5. authenticate users who want to access the devices 6. request permission to access a device 7. validate the device type according to the data it reports 8. view access requests 🏁 Prerequisites 1. node.js >=12 installed (https://nodejs.org) 2. yarn installed ```bash npm install --global yarn ``` 3. install dependencies ```bash yarn install --frozen-lockfile ``` 4. create a NEAR account on [testnet](https://docs.near.org/docs/develop/basics/create-account#creating-a-testnet-account) 5. install the NEAR CLI ```bash npm install --global near-cli ``` 6. authorize the app to access your NEAR account ```bash near login ``` 🐑 Clone the repository ```bash git clone https://github.com/EbanCuMo/ConnectIoT cd ConnectIoT ``` 🏗 Install and build the contract ```bash yarn install yarn build:contract:debug ``` 🚀 Deploy the contract ```bash yarn dev:deploy:contract ``` 🚂 Run commands Once the contract is deployed, we will use the Account Id returned by that operation, which is the contract's account Id, to run the commands [it is used as CONTRACT_ACCOUNT_ID in the command examples]. We will use OWNER_ACCOUNT_ID for the account Id that owns a device. We will use YOUR_ACCOUNT_ID for the account Id used to request access to a device.
### Create a New Device ```bash near call CONTRACT_ACCOUNT_ID setState '{"ownerId": "OWNER_ACCOUNT_ID","deviceId": "myOximeter","deviceType": "Oximeter","timestamp": "Thu Sep 30 2021 20:09:33 GMT-0500","args": {"bpm":75,"spo2":99}}' --accountId OWNER_ACCOUNT_ID ``` ### Change a device's arguments ```bash near call CONTRACT_ACCOUNT_ID updateState '{"deviceId":"myOximeter","deviceType": "Oximeter","timestamp": "Thu Sep 30 2021 20:09:33 GMT-0500","args": {"bpm":70,"spo2":98}}' --accountId OWNER_ACCOUNT_ID ``` ### View a device's arguments ```bash near call CONTRACT_ACCOUNT_ID getState '{"deviceId":"myOximeter","deviceType": "Oximeter"}' --accountId OWNER_ACCOUNT_ID ``` ### Delete a device ```bash near call CONTRACT_ACCOUNT_ID deleteDevice '{"deviceId":"myOximeter","deviceType": "Oximeter"}' --accountId OWNER_ACCOUNT_ID ``` ### Authenticate users who want to access the device ```bash near call CONTRACT_ACCOUNT_ID authenticate '{"deviceId": "myOximeter","deviceType": "Oximeter","accountId": "ACCOUNT_ID"}' --accountId OWNER_ACCOUNT_ID ``` ### Request permission to access a device ```bash near call CONTRACT_ACCOUNT_ID askForPermission '{"deviceId": "myOximeter","deviceType": "Oximeter"}' --accountId YOUR_ACCOUNT_ID ``` ### Validate the device type according to its arguments ```bash near call CONTRACT_ACCOUNT_ID validateData '{"deviceId": "myOximeter","deviceType": "Oximeter","jsonArgs": "{bpm:70,spo2:98}"}' --accountId OWNER_ACCOUNT_ID ``` ### View access requests ```bash near call CONTRACT_ACCOUNT_ID getRequests '{"deviceId": "myOximeter","deviceType": "Oximeter"}' --accountId OWNER_ACCOUNT_ID ``` ### Use case: ConnectIoT will be of great help to the medical sector and the services it offers, since with this smart contract it is possible to access the continuous data that patients' smart devices collect. With it, doctors will be able to track oxygenation levels, temperature, weight, hydration, minimum physical activity, and much more. Medical services will be able to tackle the problems they face more efficiently and will have a complete record of validated, real data from their patients. The designs for this application can be seen at the following link: https://marvelapp.com/project/5880174 connectiot Smart Contract ================== A [smart contract] written in [AssemblyScript] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install [Node.js] ≥ 12 Exploring The Code ================== 1. The main smart contract code lives in `assembly/index.ts`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard AssemblyScript tests using [as-pect]. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [AssemblyScript]: https://www.assemblyscript.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [as-pect]: https://www.npmjs.com/package/@as-pect/cli
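As an alternative to near-cli, the same `setState` call could be issued from a Node.js script. The sketch below is illustrative only: the account ids, credentials path, and gas amount are assumptions, and it is not part of this repository.

```ts
// Hedged sketch: calling the contract's setState method with near-api-js.
import os from "os";
import path from "path";
import { connect, keyStores } from "near-api-js";

const keyStore = new keyStores.UnencryptedFileSystemKeyStore(
  path.join(os.homedir(), ".near-credentials") // where `near login` stores keys
);
const near = await connect({ networkId: "testnet", nodeUrl: "https://rpc.testnet.near.org", keyStore });
const owner = await near.account("OWNER_ACCOUNT_ID");

await owner.functionCall({
  contractId: "CONTRACT_ACCOUNT_ID",
  methodName: "setState",
  args: {
    ownerId: "OWNER_ACCOUNT_ID",
    deviceId: "myOximeter",
    deviceType: "Oximeter",
    timestamp: new Date().toString(),
    args: { bpm: 75, spo2: 99 },
  },
  gas: "30000000000000", // 30 TGas (assumed)
});
```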
open-web-academy_NEAR-Social-Base
README.md config paths.js presets loadPreset.js webpack.analyze.js webpack.development.js webpack.production.js functions [[accountId]] widget [[index]].js common.js magic img account [index].js nft [[index]].js sitemap index.js posts [index].js profiles index.js sources [index].js widgets index.js package-lock.json package.json public index.html manifest.json robots.txt src App.js components Editor FileTab.js OpenModal.js RenameModal.js common buttons BlueButton.js Button.js GrayBorderButton.js icons ArrowUpRight.js Book.js Close.js Code.js Diff.js Fork.js Home.js LogOut.js NearSocialLogo.js Pretend.js QR.js StopPretending.js User.js UserCircle.js Withdraw.js navigation Logotype.js MobileQRModal.js NavigationButton.js NavigationWrapper.js NotificationWidget.js PretendModal.js SignInButton.js StarButton.js desktop DesktopNavigation.js DevActionsDropdown.js UserDropdown.js mobile Menu.js MobileMenuButton.js MobileNavigation.js Navigation.js data near.js web3.js widgets.js hooks useHashRouterLegacy.js useQuery.js useScrollBlock.js images near_social_combo.svg near_social_icon.svg index.css index.js pages EditorPage.js EmbedPage.js SignInPage.js ViewPage.js webpack.config.js
# NEAR Social Gateway Base This repository contains the code needed to run the NEAR Social gateway locally. ## Installing dependencies: From the project's root folder, run the following command: npm install ## Run the server: npm start
mattlockyer_near-bp
README.md babel.config.js contract Cargo.toml README.md build.js src lib.rs gulpfile.js package.json src App.css App.js App.test.js __mocks__ fileMock.js assets gray_near_logo.svg logo.svg near.svg config.js index.html index.js jest.init.js main.test.js state near.js store.js wallet login index.html
<br /> <br /> <p> <img src="https://nearprotocol.com/wp-content/themes/near-19/assets/img/logo.svg?t=1553011311" width="240"> </p> <br /> <br /> ## Template for NEAR dapps ### Requirements ##### IMPORTANT: Make sure you have the latest version of NEAR Shell and Node Version > 10.x 1. [Node.js](https://nodejs.org/en/download/package-manager/) 2. (optional) near-shell ``` npm i -g near-shell ``` 3. (optional) yarn ``` npm i -g yarn ``` ### To run on NEAR testnet ```bash npm install && npm run dev ``` with yarn: ```bash yarn && yarn dev ``` The server that starts is for static assets and by default serves them to http://localhost:1234. Navigate there in your browser to see the app running! NOTE: Both contract and client-side code will auto-reload once you change source files. ### To run tests ```bash npm test ``` with yarn: ```bash yarn test ``` ### Deploy #### Step 1: Create account for the contract You'll now want to authorize NEAR Shell on your NEAR account, which will allow NEAR Shell to deploy contracts on your NEAR account's behalf (and spend your NEAR account balance to do so). Type the command `near login` which opens a webpage at NEAR Wallet. Follow the instructions there and it will create a key for you, stored in the `/neardev` directory. #### Step 2: Modify the `src/config.js` line that sets the account name of the contract. Set it to the account id from step 1. NOTE: When you use [create-near-app](https://github.com/nearprotocol/create-near-app) to create the project, it will infer and pre-populate the contract name based on the project folder name. ```javascript const CONTRACT_NAME = 'react-template'; /* TODO: Change this to your contract's name! */ const DEFAULT_ENV = 'development'; ... ``` #### Step 3: Check the scripts in `package.json`; for both the frontend and the backend, run the command: ```bash npm run deploy ``` with yarn: ```bash yarn deploy ``` NOTE: This uses [gh-pages](https://github.com/tschaub/gh-pages) to publish the resulting website on GitHub Pages. It will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy:pages` script in `package.json` to deploy elsewhere. ### To Explore - `assembly/main.ts` for the contract code - `src/index.html` for the front-end HTML - `src/index.js` for the JavaScript front-end code and how to integrate contracts - `src/App.js` for the main React component - `src/main.test.js` for the JavaScript integration tests of smart contract - `src/App.test.js` for the main React component tests # Status Message Records the status messages of the accounts that call this contract. ## Testing To run tests: ```bash cargo test --package status-message -- --nocapture ```
near_near-sdk-abi
.github workflows test.yml CHANGELOG.md Cargo.toml README.md examples delegator-generation Cargo.toml build.rs build.sh src adder.json lib.rs tests workspaces.rs delegator-macro Cargo.toml build.sh src adder.json lib.rs tests workspaces.rs near-sdk-abi-impl Cargo.toml README.md src lib.rs near-sdk-abi-macros Cargo.toml README.md src lib.rs near-sdk-abi Cargo.toml README.md src lib.rs res adder.json
<!-- markdownlint-disable MD014 --> <div align="center"> <h1><code>near-sdk-abi</code></h1> <p> <strong>Utility library for making typesafe cross-contract calls with <a href="https://github.com/near/near-sdk-rs">near-sdk-rs</a> smart contracts</strong> </p> <p> <a href="https://github.com/near/near-sdk-abi/actions/workflows/test.yml?query=branch%3Amain"><img src="https://github.com/near/near-sdk-abi/actions/workflows/test.yml/badge.svg" alt="Github CI Build" /></a> <a href="https://crates.io/crates/near-sdk-abi"><img src="https://img.shields.io/crates/v/near-sdk-abi.svg?style=flat-square" alt="Crates.io version" /></a> <a href="https://crates.io/crates/near-sdk-abi"><img src="https://img.shields.io/crates/d/near-sdk-abi.svg?style=flat-square" alt="Downloads" /></a> </p> </div> ## Release notes **Release notes and unreleased changes can be found in the [CHANGELOG](CHANGELOG.md)** ## Usage This crate supports two sets of APIs for users with different needs: * **Macro-driven**. Gives you a cross-contract binding in a single macro invocation. * **Generation-based**. Gives you more control and is transparent about what code you end up using, but requires more setup. ### Macro API Check out the [`delegator-macro`](https://github.com/near/near-sdk-abi/tree/main/examples/delegator-macro) example for a standalone smart contract using the macro API to make a cross-contract call. To generate a trait named `ContractName` with an ext interface named `ext_name` based on ABI located at `path/to/abi.json` (relative to the current file's directory): ```rust near_abi_ext! { mod ext_name trait ContractName for "path/to/abi.json" } ``` Now, assuming you have an `ext_account_id: near_sdk::AccountId` representing the contract account id, you can make a cross-contract call like this: ```rust let promise = ext_adder::ext(ext_account_id).my_method_name(arg1, arg2); ``` ### Generation API Check out the [`delegator-generation`](https://github.com/near/near-sdk-abi/tree/main/examples/delegator-generation) example for a standalone project using the generation API to make a cross-contract call. First, we need our package to have a `build.rs` file that runs the generation step. The following snippet will generate the contract trait in `abi.rs` under `path/to/out/dir`: ```rust fn main() -> anyhow::Result<()> { near_sdk_abi::Generator::new("path/to/out/dir".into()) .file(near_sdk_abi::AbiFile::new("path/to/abi.json")) .generate()?; Ok(()) } ``` The resulting file, however, is not included in your source set by itself. You have to include it manually; the recommended way is to create a mod with a custom path: ```rust #[path = "path/to/out/dir/abi.rs"] mod mymod; ``` Now, assuming you have an `ext_account_id: near_sdk::AccountId` representing the contract account id, you can make a cross-contract call like this: ```rust let promise = ext_adder::ext(ext_account_id).my_method_name(arg1, arg2); ``` ## Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as below, without any additional terms or conditions. ## License Licensed under either of * Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or <http://www.apache.org/licenses/LICENSE-2.0>) * MIT license ([LICENSE-MIT](LICENSE-MIT) or <http://opensource.org/licenses/MIT>) at your option.
hangleang_nft-marketplace-factory
.commitlintrc.yml .prettierrc.yml .release-it.json .solhint.json .vscode extensions.json settings.json README.md contract .eslintrc.yml .mocharc.json .prettierrc.yml .solcover.js .solhint.json .vscode extensions.json settings.json LICENSE.md README.md deploy 000_deploy_mocks.ts 001_deploy_greeter.ts 002_deploy_token_factory.ts 003_deploy_contract_registry.ts 004_deploy_contract_factory.ts 005_deploy_marketplace_only.ts 006_add_impls.ts 007_deploy_tests.ts implementations deploy_erc1155drop.ts deploy_erc1155token.ts deploy_erc721drop.ts deploy_erc721token.ts docs index.md exports goerli.json localhost.json mumbai.json hardhat.config.ts helper-config.ts helper-functions.ts package.json scripts upgrade_marketplace.ts tasks accounts.ts test drop erc1155drop.test.ts factory.test.ts greeter.test.ts marketplace.test.ts token erc721token.test.ts utils.ts tsconfig.json | frontend .eslintrc.json .yarnrc.yml README.md next.config.js package.json public vercel.svg src assets icons book.svg code.svg moon.svg sun.svg pages api hello.ts styles Header.module.css Home.module.css globals.css tsconfig.json package.json tsconfig.json
# blank-project This app was initialized with [create-evm-app] # Quick Start If you haven't installed dependencies during setup: yarn install Build and deploy your contract to a local network: yarn deploy Test your contract: yarn test If you have a frontend, run `yarn start`. This will run a dev server. # Exploring The Code 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads `/frontend/index.js`; this is your entry point for learning how the frontend connects to the NEAR blockchain. 3. Test your contract: `yarn test`; this will run the tests in the `integration-tests` directory. # Hardhat Template [![Github Actions][gha-badge]][gha] [![Hardhat][hardhat-badge]][hardhat] [![License: MIT][license-badge]][license] [gha]: https://github.com/paulrberg/hardhat-template/actions [gha-badge]: https://github.com/paulrberg/hardhat-template/actions/workflows/ci.yml/badge.svg [hardhat]: https://hardhat.org/ [hardhat-badge]: https://img.shields.io/badge/Built%20with-Hardhat-FFDB1C.svg [license]: https://opensource.org/licenses/MIT [license-badge]: https://img.shields.io/badge/License-MIT-blue.svg A Hardhat-based template for developing Solidity smart contracts, with sensible defaults. - [Hardhat](https://github.com/nomiclabs/hardhat): compile, run and test smart contracts - [TypeChain](https://github.com/ethereum-ts/TypeChain): generate TypeScript bindings for smart contracts - [Deploy](https://github.com/wighawag/hardhat-deploy): replicable deployments and easy testing - [Ethers](https://github.com/ethers-io/ethers.js/): renowned Ethereum library and wallet implementation - [Solhint](https://github.com/protofire/solhint): code linter - [SolCoverage](https://github.com/sc-forks/solidity-coverage): code coverage - [Prettier](https://github.com/prettier-solidity/prettier-plugin-solidity): code formatter - [DocsGen](https://github.com/OpenZeppelin/solidity-docgen): docs generator ## Getting Started Click the [`Use this template`](https://github.com/paulrberg/hardhat-template/generate) button at the top of the page to create a new repository with this repo as the initial state. ## Features This template builds upon the frameworks and libraries mentioned above, so for details about their specific features, please consult their respective documentation. For example, for Hardhat, you can refer to the [Hardhat Tutorial](https://hardhat.org/tutorial) and the [Hardhat Docs](https://hardhat.org/docs). You might be particularly interested in reading the [Testing Contracts](https://hardhat.org/tutorial/testing-contracts) section. ### Sensible Defaults This template comes with sensible default configurations in the following files: ```text ├── .editorconfig ├── .eslintignore ├── .eslintrc.yml ├── .gitignore ├── .prettierignore ├── .prettierrc.yml ├── .solcover.js ├── .solhintignore ├── .solhint.json ├── .npmrc └── hardhat.config.ts ``` ### GitHub Actions This template comes with GitHub Actions pre-configured. Your contracts will be linted and tested on every push and pull request made to the `main` branch. Note though that to make this work, you must set your `INFURA_API_KEY` and your `MNEMONIC` as GitHub secrets. You can edit the CI script in [.github/workflows/ci.yml](./.github/workflows/ci.yml).
## Usage ### Prerequisites Before being able to run any command, you need to create a `.env` file and set a BIP-39 compatible mnemonic as an environment variable. You can follow the example in `.env.example`. If you don't already have a mnemonic, you can use this [website](https://iancoleman.io/bip39/) to generate one. Then, proceed with installing dependencies: ```sh $ pnpm install ``` ### Build Compile and build the smart contracts with Hardhat: ```sh $ pnpm build ``` ### TypeChain Compile the smart contracts and generate TypeChain bindings: ```sh $ pnpm typechain ``` ### Lint Solidity Lint the Solidity code: ```sh $ pnpm lint:sol ``` ### Lint TypeScript Lint the TypeScript code: ```sh $ pnpm lint:ts ``` ### Test Run the tests with Hardhat: ```sh $ pnpm test ``` ### Coverage Generate the code coverage report: ```sh $ pnpm coverage ``` ### Generate Docs Generate docs for smart contracts: ```sh $ pnpm docgen ``` ### Report Gas See the gas usage per unit test and average gas per method call: ```sh $ pnpm report-gas ``` ### Up Localnet Start up a local network: ```sh $ pnpm localnet ``` ### Deploy Deploy the contracts to the default network (Hardhat): ```sh $ pnpm run deploy ``` For a specific network, add the `--network` flag: ```sh $ pnpm run deploy --network <network-name> ``` ### Export Export the deployed contracts to a file with a simple format containing only contract addresses and ABIs: ```sh $ pnpm export <filepath> --network <network-name> ``` ### Clean Delete the smart contract artifacts, the coverage reports and the Hardhat cache: ```sh $ pnpm clean ``` ## Tips ### Command-line completion If you want more flexibility than the existing scripts in the template, but are tired of typing `npx hardhat` all the time and don't want to remember every command, try the [hardhat-completion](https://hardhat.org/hardhat-runner/docs/guides/command-line-completion#command-line-completion) package. Instead of typing `npx hardhat`, use `hh` and then press tab to see the available commands. ### Syntax Highlighting If you use VSCode, you can get Solidity syntax highlighting with the [hardhat-solidity](https://marketplace.visualstudio.com/items?itemName=NomicFoundation.hardhat-solidity) extension. ## Using GitPod [GitPod](https://www.gitpod.io/) is an open-source developer platform for remote development. To view the coverage report generated by `pnpm coverage`, just click `Go Live` from the status bar to turn the server on/off. ## License [MIT](./LICENSE.md) © Paul Razvan Berg This is a [Next.js](https://nextjs.org/) project bootstrapped with [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app). ## Getting Started First, run the development server: ```bash npm run dev # or yarn dev ``` Open [http://localhost:3000](http://localhost:3000) with your browser to see the result. You can start editing the page by modifying `pages/index.tsx`. The page auto-updates as you edit the file. [API routes](https://nextjs.org/docs/api-routes/introduction) can be accessed on [http://localhost:3000/api/hello](http://localhost:3000/api/hello). This endpoint can be edited in `pages/api/hello.ts`. The `pages/api` directory is mapped to `/api/*`. Files in this directory are treated as [API routes](https://nextjs.org/docs/api-routes/introduction) instead of React pages. ## Learn More To learn more about Next.js, take a look at the following resources: - [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial. You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js/) - your feedback and contributions are welcome! ## Deploy on Vercel The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js. Check out our [Next.js deployment documentation](https://nextjs.org/docs/deployment) for more details.
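To give a concrete picture of what `pnpm run deploy` executes, here is a hedged sketch of a hardhat-deploy script in the style of the files under `deploy/` (for example `001_deploy_greeter.ts`). The contract name and constructor argument are assumptions; consult the actual scripts in this repository for the real deployments.

```ts
// Hedged sketch of a hardhat-deploy script (not the repository's actual code).
import { HardhatRuntimeEnvironment } from "hardhat/types";
import { DeployFunction } from "hardhat-deploy/types";

const func: DeployFunction = async (hre: HardhatRuntimeEnvironment) => {
  const { deployments, getNamedAccounts } = hre;
  const { deploy } = deployments;
  const { deployer } = await getNamedAccounts(); // named accounts come from hardhat.config.ts

  await deploy("Greeter", {
    from: deployer,
    args: ["Hello, world!"], // constructor arguments (assumed)
    log: true,
  });
};

export default func;
func.tags = ["Greeter"];
```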
geepara_edu-near
README.md build.sh contract Cargo.toml README.md neardev dev-account.env src approval.rs counter.rs enumeration.rs events.rs internal.rs lib.rs metadata.rs nft_core.rs owner.rs royalty.rs series.rs tests enumeration_tests.rs lib_tests.rs mint_tests.rs mod.rs nft_core_tests.rs frontend babel.config.js next-env.d.ts next.config.js public assets badge.svg banner-icon-2.svg banner-icon.svg discover_content.svg generate_resume.svg issue_badges.svg location.svg search.svg sell_courses.svg student_icon.svg take_classes.svg teach_classes.svg teacher.svg vercel.svg src hooks useRedux.ts useScroll.ts useSearch.ts json footer.json home.json jobfilter.json menu.json near config.ts utils.ts pages api jobs.ts redux slices searchSlice.ts store.ts styles Mint.module.css globals.css spinner_ripple.css types index.d.ts index.ts utils getEvents.ts getTimeDifference.ts job.ts toSlug.ts view Mint skills_updated.json index.ts tsconfig.json integration-tests package-lock.json package.json src main.ava.ts neardev dev-account.env package-lock.json package.json
# Welcome To Educoin's Near Minting Smart Contract! ================================= # Available Functions * nft_mint * nft_metadata * nft_transfer * nft_token * nft_approve * nft_is_approved * nft_revoke * nft_revoke_all * nft_total_supply * nft_tokens * nft_supply_for_owner * nft_tokens_for_owner # Functions ## nft_mint ### Description This function is the workhorse of this contract. This function will mint a token and store it on the blockchain. ### Arguments ```shell { token_id: String, metadata: { title: Optional<String>, description: Optional<String>, media: Optional<String>, media_hash: Optional<String>, copies: Optional<Number>, issued_at: Optional<Number>, expires_at: Optional<Number>, starts_at: Optional<Number>, updated_at: Optional<Number>, extra: Optional<String>, reference: Optional<String>, reference_hash: Optional<Base64VecU8>, token_type: Optional<"Content" | "Badge"> }, receiver_id: String, perpetual_royalties: Optional<{AccountId: String, Amount: Number}> } ``` ### Usage ```console $ near call $NFT_CONTRACT nft_mint '{"token_id": "TestToken", "receiver_id": "receiver.testnet", "metadata": { "title": "Welcome" } }' --accountId myaccount.testnet --deposit 0.1 ``` ## nft_metadata ### Description Returns the Metadata for the contract ### Arguments None ### Usage ```console $ near call $NFT_CONTRACT nft_metadata ``` ## nft_transfer ### Description Transfer an NFT from your account to another ### Arguments ``` { receiver_id: String, token_id: String, approval_id: Optional<Number>, memo: Optional<String> } ``` ### Usage ```console $ near call $NFT_CONTRACT nft_transfer '{ "receiver_id": "another.testnet", "token_id": "tokenid" }' --accountId myAccount.testnet --deposit 0.1 ``` ## nft_token ### Description Get token information for a given token ID ### Arguments ``` { token_id: String } ``` ### Usage ```console $ near view $NFT_CONTRACT nft_token '{"token_id": "an_existing_id"}' ``` ## nft_approve ### Description Let an account ID transfer your tokens on your behalf ### Arguments ``` { token_id: String, account_id: String, msg: Optional<String> } ``` ### Usage ```console $ near call $NFT_CONTRACT nft_approve '{"token_id": "an_existing_id", "account_id": "an_account.testnet"}' --accountId myAccount.testnet --deposit 0.1 ``` ## nft_is_approved ### Description Check to see if a passed-in account has access to approve the token ID ### Arguments ``` { token_id: String, approved_account_id: String, approval_id: Optional<Number> } ``` ### Usage ```console $ near call $NFT_CONTRACT nft_is_approved '{"token_id": "an_existing_id", "approved_account_id": "hello.testnet"}' --accountId myAccount.testnet --deposit 0.1 ``` ## nft_revoke ### Description Remove a specific account from transferring the token on your behalf ### Arguments ``` { token_id: String, account_id: String } ``` ### Usage ```console $ near call $NFT_CONTRACT nft_revoke '{ "token_id": "anToken", "account_id": "anaccount.testnet" }' --accountId myaccount.testnet --deposit 0.1 ``` ## nft_revoke_all ### Description Revoke all accounts from transferring the token on your behalf ### Arguments ``` { token_id: String } ``` ### Usage ```console $ near call $NFT_CONTRACT nft_revoke_all '{ "token_id": "anToken" }' --accountId myaccount.testnet --deposit 0.1 ``` ## nft_total_supply ### Description Get the number of NFTs on the contract ### Arguments None ### Usage ```console $ near view $NFT_CONTRACT nft_total_supply ``` ## nft_tokens ### Description Query for NFT tokens on the contract regardless of the owner ### Arguments ``` { 
from_index: Optional<Number>, limit: Optional<Number> } ``` ### Usage ```console $ near view $NFT_CONTRACT nft_tokens ``` ## nft_supply_for_owner ### Description Get the total supply of NFTs for a given owner ### Arguments ``` { account_id: String } ``` ### Usage ```console $ near view $NFT_CONTRACT nft_supply_for_owner '{"account_id": "myaccount.testnet"}' ``` ## nft_tokens_for_owner ### Description Query for all the tokens for an owner ### Arguments ``` { account_id: String, from_index: Optional<Number>, limit: Optional<Number> } ``` ### Usage ```console $ near view $NFT_CONTRACT nft_tokens_for_owner '{"account_id": "myaccount.testnet"}' ``` # hackathon-mint-page ## How to Run - First Time: - `yarn install` - `yarn dev` - Every time after: - `yarn dev`
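For reference, here is a minimal sketch of how a frontend such as the mint page above might query the contract's view methods with near-api-js. The contract account, owner account, and near-api-js version (~1.x) are assumptions, not part of this repo:

```ts
// Hypothetical sketch: read-only view calls against the minting contract with near-api-js.
import { connect, keyStores } from "near-api-js";

async function listTokensForOwner(contractId: string, ownerId: string) {
  // View calls need no signing key, so an empty in-memory key store is enough.
  const near = await connect({
    networkId: "testnet",
    nodeUrl: "https://rpc.testnet.near.org",
    keyStore: new keyStores.InMemoryKeyStore(),
  });
  const viewer = await near.account(ownerId);

  // Mirrors `near view $NFT_CONTRACT nft_tokens_for_owner '{"account_id": "..."}'`.
  const tokens = await viewer.viewFunction(contractId, "nft_tokens_for_owner", {
    account_id: ownerId,
    limit: 10,
  });
  console.log(tokens);
}

listTokensForOwner("nft-contract.example.testnet", "myaccount.testnet").catch(console.error);
```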
NEARWEEK_ui.news
.eslintrc.js .github ISSUE_TEMPLATE BOUNTY.yml README.md __tests__ MANUAL_TESTING.md mocks browser.ts handlers.ts index.ts server.ts config currency.ts grants.ts near.ts constants index.ts contexts GrantContext.ts form-schemas fullMilestoneSubmissionFormSchema.ts grantApplicationFormSchema.ts milestoneSubmissionFormSchema.ts hooks useAccountSignature.ts useDaoContract.ts useGrant.ts useGrantStatus.ts useMilestonesStatus.ts useOnceCall.ts modules hellosign-embedded-react useHellosignEmbedded.ts near-api-react config index.ts context NearContext.ts hooks useContract.ts useNear.ts useNetworkId.ts useSigner.ts useWallet.ts next-env.d.ts next-i18next.config.js next.config.js package-lock.json package.json prettier.config.js public images logo.svg locales en common.json grant.json grants.json home.json login.json logout.json milestone.json mockServiceWorker.js vercel.svg services apiService.ts currencyConverter.ts helloSignService.ts invoiceService.ts sputnikContractService.ts styles AccountDropdown.module.css Home.module.css Login.module.css globals.css tsconfig.json types AttachmentInterface.ts GrantApplicationInterface.ts MilestoneInterface.ts NearApiSignatureInterface.ts SputnikContractInterface.ts utilities budgetCalculator.ts createProposalDescription.ts createValidationUtilities.ts getCountry.ts grantFormCustomExtraValidators.ts parseCookies.ts parseMilestonesDates.ts
# ui.grants [![NEAR](https://img.shields.io/badge/NEAR-%E2%8B%88-111111.svg)](https://near.org/) [![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](LICENSE) > Easy-to-set-up, end-to-end grant application form for DAOs on NEAR Protocol ## Repositories - [ui.grants](https://github.com/NEARFoundation/ui.grants) - [api.grants](https://github.com/NEARFoundation/api.grants) - [admin.grants](https://github.com/NEARFoundation/admin.grants) ## Technology stack - Blockchain: **[NEAR](https://near.org/)** - Smart Contracts: **[Sputnik DAO Factory V2](https://github.com/near-daos/sputnik-dao-contract/tree/main/sputnikdao-factory2), [Sputnik DAO V2](https://github.com/near-daos/sputnik-dao-contract/tree/main/sputnikdao2)** - Package manager: **[NPM](https://www.npmjs.com/)** - Core programming language: **[TypeScript](https://www.typescriptlang.org/)** - Application framework: **[NextJS](https://nextjs.org/)** - Code quality: **[Eslint](https://eslint.org/), [Prettier](https://prettier.io/)** - UI Framework: **[Mantine UI](https://mantine.dev/)** - Internationalization: **[next-i18next](https://github.com/isaachinman/next-i18next)** - Form validation: **[zod](https://github.com/colinhacks/zod)** - HTTP client: **[Axios](https://github.com/axios/axios)** - Query hooks: **[React Query](https://react-query.tanstack.com/)** - Contract Signature: **[hellosign](https://github.com/HelloFax/hellosign-nodejs-sdk)** - KYC: **[KYC DAO](https://github.com/kycdao)** - Scheduling: **[Calendly](https://developer.calendly.com/)** ## Guides ### Configuration ```bash cp .env.dist .env # set up variables on .env ``` ### Installation ```bash npm install ``` Set up `.env` as described in Configuration. ### Development ```bash npm run dev ``` Open [http://localhost:3000](http://localhost:3000) with your browser to see the result. ### Deployment ```bash npm install && npm run build npm run start ``` ### Testing No automated tests are implemented yet. Check the [Manual Testing](__tests__/MANUAL_TESTING.md) guide. ## Authors - [Sandoche](https://github.com/sandoche)
near-everything_organize
README.md package.json postcss.config.js public index.html manifest.json robots.txt src App.js app api.js firebase.js store.js components AccessibleNavigationAnnouncer.js Badge.js Button.js Card.js CardBody.js Cards ImageCard.js ItemCard.js RequestCard.js DarkMode.js FileUpload.js Header.js Input.js MainHeader.js RoundIcon.js Select.js ThemedSuspense.js Typography PageTitle.js SectionTitle.js containers Layout.js context ThemeContext.js features auth PhoneNumberVerification.js SubmitPhoneNumberButton.js authApi.js authSlice.js itemDeck itemDeckApi.js itemDeckSlice.js labels labelsSlice.js requestDeck requestDeckApi.js requestDeckSlice.js hooks useItems.js icons bell.svg buttons.svg cards.svg cart.svg charts.svg chat.svg dropdown.svg edit.svg forbidden.svg forms.svg github.svg heart.svg home.svg image.svg index.js mail.svg menu.svg modals.svg money.svg moon.svg outlineCog.svg outlineLogout.svg outlinePerson.svg pages.svg people.svg search.svg sun.svg tables.svg trash.svg twitter.svg index.css index.js logo.svg pages 404.js Item.js ItemDeck.js Login.js Main.js Organize.js Request.js RequestDeck.js routes index.js serviceWorker.js setupTests.js themes default.js utils categories.js mergeDeep.js useDarkMode.js tailwind.config.js
This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app), using the [Redux](https://redux.js.org/) and [Redux Toolkit](https://redux-toolkit.js.org/) template. ## Available Scripts In the project directory, you can run: ### `npm start` Runs the app in the development mode.<br /> Open [http://localhost:3000](http://localhost:3000) to view it in the browser. The page will reload if you make edits.<br /> You will also see any lint errors in the console. ### `npm test` Launches the test runner in the interactive watch mode.<br /> See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. ### `npm run build` Builds the app for production to the `build` folder.<br /> It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.<br /> Your app is ready to be deployed! See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. ### `npm run eject` **Note: this is a one-way operation. Once you `eject`, you can’t go back!** If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (Webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own. You don’t have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it. ## Learn More You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). To learn React, check out the [React documentation](https://reactjs.org/).
JuEnPeHa_global_near_form
Cargo.toml src insert_functions.rs lib.rs modifier_functions.rs questions.rs view_functions.rs
Pasidalee_Near-Election-Platform
README.md contract asconfig.json assembly as_types.d.ts index.ts model.ts tsconfig.json package-lock.json package.json package.json public index.html manifest.json robots.txt src App.css App.js App.test.js components Wallet.js utils Cover.js Loader.js Notifications.js votingPlatform AddCandidate.js AddVoter.js Candidate.js CandidatesandVoters.js index.css index.js reportWebVitals.js utils config.js near.js votingPlatform.js
# NEAR Election Platform [link to dapp](https://Pasidalee.github.io/Near-Election-Platform) ## 1. Tech Stack This boilerplate uses the following tech stack: [React](https://reactjs.org/) - A JavaScript library for building user interfaces. [near-sdk-as](https://github.com/near/near-sdk-as) - An AssemblyScript SDK for writing smart contracts on the NEAR Protocol Testnet. [Bootstrap](https://getbootstrap.com/) - A CSS framework that provides responsive, mobile-first layouts. ## 2. Quick Start To get this project up and running locally, follow these simple steps: ### 2.1 Clone the repository: git clone https://github.com/Pasidalee/Near-Election-Platform.git ### 2.2 Navigate to the directory: cd Near-Election-Platform ### 2.3 Install the dependencies: npm install ### 2.4 Run the dapp: npm start To properly test the dapp you will need to have a Near testnet account. [Create Account](https://wallet.testnet.near.org/) ## 3. Smart-Contract Deployment ### 3.1 Navigate to the contract directory: cd contract ### 3.2 Install the dependencies: yarn install npm install near-cli ### 3.3 Compile the smart contract yarn asb ### 3.4 Deploy the smart contract to the Near Protocol Testnet near deploy --accountId={ACCOUNT_ID} --wasmFile=build/release/near-marketplace-voting.wasm This command will deploy the contract to the accountId on the testnet. The accountId now becomes the contract name. ## 4. Admin Section This section contains Node.js terminal scripts that are run to control the operation of the voting platform. ### 4.1 Setting the admin near call {contractname} init '{"admin":"{adminAccount}"}' --accountId={contractname} This sets the admin account, giving that account access to functions such as registering voters and adding new election candidates to the platform.
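As a rough illustration, the same `init` call could be made from a Node.js script with near-api-js instead of near-cli. The contract and admin account names below are placeholders, and the near-api-js version (~1.x) is an assumption:

```ts
// Hypothetical sketch: set the admin account programmatically, mirroring
// `near call {contractname} init '{"admin":"{adminAccount}"}' --accountId={contractname}`.
import { connect, keyStores } from "near-api-js";
import os from "os";
import path from "path";

async function setAdmin(contractName: string, adminAccount: string) {
  // Re-use the credentials created by `near login`.
  const keyStore = new keyStores.UnencryptedFileSystemKeyStore(
    path.join(os.homedir(), ".near-credentials")
  );
  const near = await connect({ networkId: "testnet", nodeUrl: "https://rpc.testnet.near.org", keyStore });

  // The call is signed by the contract account itself, as in the README command.
  const contractAccount = await near.account(contractName);
  await contractAccount.functionCall({
    contractId: contractName,
    methodName: "init",
    args: { admin: adminAccount },
  });
}

setAdmin("election.example.testnet", "admin.example.testnet").catch(console.error);
```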
nearsend_nearsend-core
Cargo.toml LICENSE.md Migrate.md Readme.md ft Cargo.toml src lib.rs package.json scripts config.js migrate.mainnet.js migrate.testnet.js shell-script build.sh src events.rs lib.rs tests sim macros.rs main.rs utils.rs
monocrowd_near-try-vote
.gitpod.yml README.md babel.config.js contract Cargo.toml README.md compile.js src lib.rs package.json src App.js __mocks__ fileMock.js assets logo-black.svg logo-white.svg config.js global.css index.html index.js jest.init.js main.test.js utils.js wallet login index.html
near-try-vote ================== This [React] app was initialized with [create-near-app] Quick Start =========== near view voting.happybits.testnet view_candidates {} ``` [ { candidate_id: 'Tom', metadata: null, votes: 1 }, { candidate_id: 'Bill', metadata: null, votes: 1 }, { candidate_id: 'Peter', metadata: null, votes: 0 } ] ``` # `try-vote` will check the current candidate info returned by `near view voting.happybits.testnet view_candidates {}` and tell you what the impact of your vote would be - 'game changing vote!!' - near call try-vote.happybits.testnet try_vote '{"candidate_id": "Tom"}' --accountId $YOUR_ACCOUNT - 'thanks vote' - near call try-vote.happybits.testnet try_vote '{"candidate_id": "Peter"}' --accountId $YOUR_ACCOUNT - 'you can be a candidate to win' - if there are no candidates - 'You have no choice' - if there is only one candidate near-try-vote Smart Contract ================== A [smart contract] written in [Rust] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install Rust with the [correct target] Exploring The Code ================== 1. The main smart contract code lives in `src/lib.rs`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [Rust]: https://www.rust-lang.org/ [create-near-app]: https://github.com/near/create-near-app [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html
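A minimal near-api-js sketch of the same flow (querying candidates, then casting a try-vote) is shown below; the caller account name and the near-api-js version (~1.x) are assumptions:

```ts
// Hypothetical sketch: mirror the CLI commands above with near-api-js.
import { connect, keyStores, providers } from "near-api-js";
import os from "os";
import path from "path";

async function voteFor(candidateId: string, yourAccount: string) {
  const keyStore = new keyStores.UnencryptedFileSystemKeyStore(
    path.join(os.homedir(), ".near-credentials")
  );
  const near = await connect({ networkId: "testnet", nodeUrl: "https://rpc.testnet.near.org", keyStore });
  const account = await near.account(yourAccount);

  // `near view voting.happybits.testnet view_candidates {}`
  const candidates = await account.viewFunction("voting.happybits.testnet", "view_candidates", {});
  console.log("current standings:", candidates);

  // `near call try-vote.happybits.testnet try_vote '{"candidate_id": "..."}' --accountId $YOUR_ACCOUNT`
  const outcome = await account.functionCall({
    contractId: "try-vote.happybits.testnet",
    methodName: "try_vote",
    args: { candidate_id: candidateId },
  });
  // Extract the contract's return value, e.g. 'game changing vote!!' or 'thanks vote'.
  console.log(providers.getTransactionLastResult(outcome));
}

voteFor("Tom", "your-account.testnet").catch(console.error);
```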
gagdiez_influences
README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts main.spec.ts as_types.d.ts main.ts model.ts tsconfig.json package.json src assets style.css blockchain.js config.js index.html index.js
# Influences ![banner](imgs/banner.png) Influences is a decentralized social network that allows fans to subscribe to their favourite influencers for a monthly fee. Influencers in turn share images and videos. The main advantage of Influences is that it is completely decentralized. Its front-end is hosted on Skynet, the decentralized storage platform of Sia. Moreover, all the content that the influencers upload also goes to Skynet! Meanwhile, the back-end is implemented as a smart contract hosted on NEAR (currently on testnet). Visit it now at: [Influences](https://siasky.net/hns/influences) See the video of how it works here: [video](https://siasky.net/AAAC6btJgA0btFFpdz5zIaTxCdkeJgxIzX8G_Y_1Rmtr7w)
near_abomonation
.github ISSUE_TEMPLATE BOUNTY.yml .travis.yml Cargo.toml README.md benches bench.rs clone.rs recycler.rs serde.rs src abomonated.rs lib.rs tests tests.rs
# Abomonation A mortifying serialization library for Rust Abomonation (spelling intentional) is a serialization library for Rust based on the very simple idea that if someone presents data for serialization it will copy those exact bits, and then follow any pointers and copy those bits, and so on. When deserializing it recovers the exact bits, and then corrects pointers to aim at the serialized forms of the chased data. **Warning**: Abomonation should not be used on any data you care strongly about, or from any computer you value the data on. The `encode` and `decode` methods do things that may be undefined behavior, and you shouldn't stand for that. Specifically, `encode` exposes padding bytes to `memcpy`, and `decode` doesn't much respect alignment. Please consult the [abomonation documentation](https://frankmcsherry.github.com/abomonation) for more specific information. Here is an example of using Abomonation. It is very easy to use. Frighteningly easy. ```rust extern crate abomonation; use abomonation::{encode, decode}; // create some test data out of abomonation-approved types let vector = (0..256u64).map(|i| (i, format!("{}", i))).collect::<Vec<_>>(); // encode vector into a Vec<u8> let mut bytes = Vec::new(); unsafe { encode(&vector, &mut bytes); } // unsafely decode a &Vec<(u64, String)> from binary data (maybe your utf8 are lies!). if let Some((result, remaining)) = unsafe { decode::<Vec<(u64, String)>>(&mut bytes) } { assert!(result == &vector); assert!(remaining.len() == 0); } ``` When you use Abomonation things may go really fast. That is because it does so little work, and mostly just copies large hunks of memory. Typing `cargo bench` will trigger Rust's benchmarking infrastructure (or an error if you are not using nightly. bad luck). The tests repeatedly encode `Vec<u64>`, `Vec<String>`, and `Vec<Vec<(u64, String)>>` giving numbers like: test u64_enc ... bench: 131 ns/iter (+/- 58) = 62717 MB/s test string10_enc ... bench: 8,784 ns/iter (+/- 2,791) = 3966 MB/s test vec_u_s_enc ... bench: 8,964 ns/iter (+/- 1,439) = 4886 MB/s They also repeatedly decode the same data, giving numbers like: test u64_dec ... bench: 2 ns/iter (+/- 1) = 4108000 MB/s test string10_dec ... bench: 1,058 ns/iter (+/- 349) = 32930 MB/s test vec_u_s_dec ... bench: 1,232 ns/iter (+/- 223) = 35551 MB/s These throughputs are so high because there is very little to do: internal pointers need to be corrected, but in their absence (*e.g.* `u64`) there is literally nothing to do. Be warned that these numbers are not *goodput*, but rather the total number of bytes moved, which is equal to the in-memory representation of the data. On a 64-bit system, a `String` requires 24 bytes plus one byte per character, which can be a lot of overhead for small strings. ## unsafe_abomonate! Abomonation comes with the `unsafe_abomonate!` macro implementing `Abomonation` for structs which are essentially equivalent to a tuple of other `Abomonable` types. To use the macro, you must put the `#[macro_use]` modifier before `extern crate abomonation;`. Please note that `unsafe_abomonate!` synthesizes unsafe implementations of `Abomonation`, and it should be considered unsafe to invoke. ```rust #[macro_use] extern crate abomonation; use abomonation::{encode, decode}; #[derive(Eq, PartialEq)] struct MyStruct { pub a: String, pub b: u64, pub c: Vec<u8>, } // (type : field1, field2 ..)
unsafe_abomonate!(MyStruct : a, b, c); // create some test data out of abomonation-approved types let record = MyStruct{ a: "test".to_owned(), b: 0, c: vec![0, 1, 2] }; // encode the record into a Vec<u8> let mut bytes = Vec::new(); unsafe { encode(&record, &mut bytes); } // decode a &MyStruct from binary data if let Some((result, remaining)) = unsafe { decode::<MyStruct>(&mut bytes) } { assert!(result == &record); assert!(remaining.len() == 0); } ``` Be warned that implementing `Abomonable` for types can be a giant disaster and is entirely discouraged.
Panasthetik_near-art-exhibitions
README.md contract Cargo.toml build.bat build.sh src lib.rs models.rs utils.rs test.sh package.json src App.js components AddArtists.js AddEndorsement.js ArtistNews.js CreateExhibitions.js DeleteExhibitionForm.js ExhibitionNews.js ExhibitionsIndex.js ListArtists.js ListExhibitions.js config.js global.css index.html index.js pages AddArtistPage.js AddExhibition.js ArtistsPage.js DeleteExhibition.js Exhibition.js ExhibitionsMain.js ExhibitionsPage.js HomePage.js utils.js
# Near Art Exhibitions Art exhibitions and artist portfolio site on Near Protocol with Rust, WASM, Parcel and React. Based on the 'Near Starter Template' also posted on this GitHub and documented in my Medium articles. Can be expanded / adapted to suit a variety of needs in Fine Art or Photography for those looking to build on Near Protocol. Permissions and other functions should be added for production deployment. Also possible would be a native token with DAO functionality for the curation and voting, but it's beyond the scope of this template. # Features (high-level): 1) User can add an exhibition with title, description and 5 image links. 2) User can add an artist to an exhibition with portfolio (5 image links). 3) Users can 'Endorse' artists and 'Vote' on exhibitions. 4) News feed on Home page updates when an Artist or Exhibition is added (UNIX timestamp, date needs to be re-formatted in Rust to be properly displayable in React). 5) User can delete an exhibition (Rust indexing doesn't update (-1 value) ID in the vector - to fix). 6) Artists page lists all artists by exhibition (by ID). # Testing You can run "cargo build" in the 'contract' folder, and it will compile a debug target in regular Rust. You can then run "bash ./test.sh" in the 'contract' folder and it will run the 5 unit tests. # In Progress (to-do's) 1) Redo the exhibitions ID to be a unique number. 2) Reformat the date in Rust (from "block.timestamp()" to UTC). 3) Allow-list, "only owner" for 'Delete Exhibition'. 4) Add donations to Artist, Exhibition or both. 5) News feed lists donations and endorsements. 6) IPFS or Pinata for the images. # Instructions In progress - install and deploy instructions coming soon. You can compile the Rust code to WASM and re-deploy a fresh copy of this contract to the Near Testnet by following the instructions in the Near Starter Template articles posted elsewhere on this GitHub. Be sure to include the contract address in the "config.js" Near configuration file before launching the React app with "npm start".
kiethase_Contract-depo
Cargo.toml build.sh src account_deposit.rs errors.rs lib.rs storage_impl.rs token_receivers.rs
# Getting Started with Create React App This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app). ## Available Scripts In the project directory, you can run: ### `npm start` Runs the app in the development mode.\ Open [http://localhost:3000](http://localhost:3000) to view it in your browser. The page will reload when you make changes.\ You may also see any lint errors in the console. ### `npm test` Launches the test runner in the interactive watch mode.\ See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. ### `npm run build` Builds the app for production to the `build` folder.\ It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.\ Your app is ready to be deployed! See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. ### `npm run eject` **Note: this is a one-way operation. Once you `eject`, you can't go back!** If you aren't satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you're on your own. You don't have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn't feel obligated to use this feature. However we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it. ## Learn More You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). To learn React, check out the [React documentation](https://reactjs.org/). ### Code Splitting This section has moved here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting) ### Analyzing the Bundle Size This section has moved here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size) ### Making a Progressive Web App This section has moved here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app) ### Advanced Configuration This section has moved here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration) ### Deployment This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment) ### `npm run build` fails to minify This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify)
here-wallet_subscriptions-service
README.md example index.html package-lock.json package.json tsconfig.json here-sdk package.json src index.ts tsconfig.json here-service index.html package-lock.json package.json src Account.ts assets cabinet-grotesk README.md index.css icons back.svg close.svg done.svg global.svg warning.svg index.css manrope README.md index.css styled.ts templates.ts types.ts tsconfig.json tsconfig.json
# Installing Webfonts Follow these simple steps. ## 1. Put the `cabinet-grotesk/` folder into a folder called `fonts/`. ## 2. Put `cabinet-grotesk.css` into your `css/` folder. ## 3. (Optional) You may adapt the `url('path')` in `cabinet-grotesk.css` depending on your website's file system. ## 4. Import `cabinet-grotesk.css` at the top of your main stylesheet. ``` @import url('cabinet-grotesk.css'); ``` ## 5. ``` font-family: 'CabinetGrotesk-Variable'; font-family: 'CabinetGrotesk-Thin'; font-family: 'CabinetGrotesk-Extralight'; font-family: 'CabinetGrotesk-Light'; font-family: 'CabinetGrotesk-Regular'; font-family: 'CabinetGrotesk-Medium'; font-family: 'CabinetGrotesk-Bold'; font-family: 'CabinetGrotesk-Extrabold'; font-family: 'CabinetGrotesk-Black'; ``` # Installing Webfonts Follow these simple steps. ## 1. Put the `manrope/` folder into a folder called `fonts/`. ## 2. Put `manrope.css` into your `css/` folder. ## 3. (Optional) You may adapt the `url('path')` in `manrope.css` depending on your website's file system. ## 4. Import `manrope.css` at the top of your main stylesheet. ``` @import url('manrope.css'); ``` ## 5. ``` font-family: 'Manrope-Variable'; font-family: 'Manrope-ExtraLight'; font-family: 'Manrope-Light'; font-family: 'Manrope-Regular'; font-family: 'Manrope-Medium'; font-family: 'Manrope-SemiBold'; font-family: 'Manrope-Bold'; font-family: 'Manrope-ExtraBold'; ``` # Subscriptions services
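The `styled.ts` file in the tree suggests styled-components is used in here-service; here is a hypothetical snippet showing how one of the installed font families above might be applied. The component names and usage are illustrative only:

```ts
// Hypothetical sketch: applying the installed font families with styled-components.
// Assumes `cabinet-grotesk.css` and `manrope.css` (with their @font-face rules) are imported globally.
import styled from "styled-components";

export const Title = styled.h1`
  font-family: "CabinetGrotesk-Bold", sans-serif;
`;

export const Body = styled.p`
  font-family: "Manrope-Regular", sans-serif;
`;
```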
NEAR-Edu_stats.gallery-server
.github pull_request_template.md workflows check.yml docker.yml README.md build.js docker-compose.yml lint-staged.config.js migrations 20210928143424-create-cache.js 20211008192100-account-modified.js 20211017120050-add-badge-caching.js 20211106113018-add-caching-for-leaderboards.js 20211229004058-add-action-receipt-actions.js 20220121104939-add-pk-to-account-table.js 20220125021655-add-badges.js 20220213010535-add-unique-index-account-badge.js 20220213091252-correct-deploy-badge-require-value.js 20220221053909-correct-stake-badge-required-value.js package.json sqls 20210928143424-create-cache-down.sql 20210928143424-create-cache-up.sql 20211008192100-account-modified-down.sql 20211008192100-account-modified-up.sql 20211017120050-add-badge-caching-down.sql 20211017120050-add-badge-caching-up.sql 20211106113018-add-caching-for-leaderboards-down.sql 20211106113018-add-caching-for-leaderboards-up.sql 20211229004058-add-action-receipt-actions-down.sql 20211229004058-add-action-receipt-actions-up.sql 20220121104939-add-pk-to-account-table-down.sql 20220121104939-add-pk-to-account-table-up.sql 20220125021655-add-badges-down.sql 20220125021655-add-badges-up.sql 20220213010535-add-unique-index-account-badge-down.sql 20220213010535-add-unique-index-account-badge-up.sql 20220213091252-correct-deploy-badge-require-value-down.sql 20220213091252-correct-deploy-badge-require-value-up.sql 20220221053909-correct-stake-badge-required-value-down.sql 20220221053909-correct-stake-badge-required-value-up.sql package.json package-lock.json package.json prettier.config.js src badges.ts controllers badge index.ts crons CronJob.ts appActionReceipts.ts appActionReceiptsInvalidator.ts cache.ts index.ts onChainTransactions.ts transactionInvalidator.ts image-renderers drawBadge.ts drawLevel.ts drawRoundedRectangle.ts drawScore.ts image.ts index.ts poll.ts queries Params.ts access-keys.sql.ts account-activity-distribution.sql.ts account-creation.sql.ts account-relation-strength.sql.ts actions.sql.ts all-accounts.sql.ts badge-deploy.sql.ts badge-function-details.sql.ts badge-nft.sql.ts badge-service insertOrUpdate.sql.ts badge-stake.sql.ts badge-transfer.sql.ts cache leaderboard-balance.sql.ts leaderboard-score.sql.ts distinct-receivers.sql.ts distinct-senders.sql.ts gas-spent.sql.ts gas-tokens-spent.sql.ts method-usage.sql.ts most-active-nft-within-range.sql.ts most-active-wallet-within-range.sql.ts new-accounts-count.sql.ts new-accounts-list.sql.ts received-transaction-count.sql.ts recent-transaction-actions.sql.ts score-calculate.sql.ts score-from-cache.sql.ts sent-transaction-count.sql.ts top-accounts.sql.ts total-received.sql.ts total-sent.sql.ts routes.ts services badge badgeService.ts deploy.ts determineAchievedBadges.ts nft.ts stake.ts transfer.ts utils clipString.ts constants.ts humanize.ts level.ts retry.ts sleep.ts tsconfig.json typings.d.ts
# Set up ## Runtime Dependencies 1. node.js 16.x 2. npm v7 and above ## Connect to cache database & run migrations You will need a PostgreSQL connection string with full permissions. Create a file named `database.json` at the project root with the following contents: ```json { "dev": { "driver": "pg", "user": "<user>", "password": "<password>", "host": "<host>", "database": "<database>", "ssl": true } } ``` [See `db-migrate` docs for more information](https://db-migrate.readthedocs.io/en/latest/Getting%20Started/configuration/). Note that this database can take some time to update the first time, so feel free to use [one of the dumps](https://drive.google.com/drive/folders/1pzmvYi7aMAAceuItqza7LfwOQQdNYoxi?usp=sharing) to get started. ## Configure environment Create a `.env` file with the following contents: ```env ENDPOINT=<comma-separated list of endpoint names> DB_CONNECTION=<comma-separated list of indexer connection strings> PORT=<optional, default 3000> CACHE_DB_CONNECTION=<cache connection string> REDIS_URL=<redis connection string> # Don't run cache updates, optional # NO_UPDATE_CACHE=1 ``` For example: ```env ENDPOINT=mainnet,testnet DB_CONNECTION=postgres://user:pass@mainnet_host/db_name,postgres://user:pass@testnet_host/db_name CACHE_DB_CONNECTION=postgres://user:pass@cache_host/db_name REDIS_URL=redis://host:port NO_UPDATE_CACHE=1 ``` You can use the public indexer endpoints found [here](https://github.com/near/near-indexer-for-explorer#shared-public-access). # Run Node: ``` $ npm run start ``` Docker: ``` $ docker-compose up ```
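The `ENDPOINT` and `DB_CONNECTION` lists in the example above line up positionally (the first endpoint name pairs with the first indexer connection string, and so on). The sketch below illustrates that pairing as a configuration-parsing example; it is not the server's actual startup code:

```ts
// Illustrative sketch: pair each ENDPOINT name with its DB_CONNECTION string by position.
// Mirrors the example .env above (mainnet -> mainnet_host, testnet -> testnet_host).
const endpoints = (process.env.ENDPOINT ?? "").split(",").map((s) => s.trim());
const connections = (process.env.DB_CONNECTION ?? "").split(",").map((s) => s.trim());

if (endpoints.length !== connections.length) {
  throw new Error("ENDPOINT and DB_CONNECTION must have the same number of entries");
}

const indexerByNetwork = new Map<string, string>(
  endpoints.map((name, i) => [name, connections[i]])
);

console.log([...indexerByNetwork.keys()]); // e.g. [ 'mainnet', 'testnet' ]
```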
LoboVH_NFT-contract-NEAR-PROTOCOL
README.md nft-contract Cargo.toml README.md build.sh src approval.rs enumeration.rs events.rs internal.rs lib.rs metadata.rs mint.rs nft_core.rs royalty.rs package.json
# NFT-contract-NEAR-PROTOCOL ## Create an NFT contract implementing all NEAR blockchain NFT standards ### Prerequisites [Rust](https://docs.near.org/docs/develop/contracts/rust/intro#installing-the-rust-toolchain) [NEAR Wallet](https://docs.near.org/docs/develop/basics/create-account) [NEAR-CLI](https://docs.near.org/docs/tools/near-cli#setup) ### Clone this Repo Use `git clone https://github.com/LoboVH/NFT-contract-NEAR-PROTOCOL.git` to get this project onto your local machine. `cd NFT-contract-NEAR-PROTOCOL` Run ```yarn yarn build ``` to build the contract. Log in to your newly created account with near-cli by running the following command in your terminal ```near near login ``` Set an environment variable for your account ID. In the command below, replace `YOUR_ACCOUNT_NAME` with the account name you just logged in with, including the `.testnet` portion: ```near export NFT_CONTRACT_ID="YOUR_ACCOUNT_NAME" ``` Test that the environment variable is set correctly by running: ```near echo $NFT_CONTRACT_ID ``` In the root of your NFT project run the following command to deploy your smart contract. ```near near deploy --wasmFile out/main.wasm --accountId $NFT_CONTRACT_ID ``` ### Initializing the Contract Initialize with default metadata. ```near near call $NFT_CONTRACT_ID new_default_meta '{"owner_id": "'$NFT_CONTRACT_ID'"}' --accountId $NFT_CONTRACT_ID ``` To view the contract's metadata: ```near near view $NFT_CONTRACT_ID nft_metadata ``` ### Minting NFT There are many fields that could potentially be stored on-chain: check the `TokenMetadata` struct at `nft-contract/src/metadata.rs`. To mint an NFT with a title and description, update the respective fields in the following command: ```near near call $NFT_CONTRACT_ID nft_mint '{"token_id": "token-1", "metadata": {"title": "ADD TITLE", "description": "ADD DESCRIPTION", "media": "ADD MEDIA LINK"}, "receiver_id": "'$NFT_CONTRACT_ID'"}' --accountId $NFT_CONTRACT_ID --amount 0.1 ``` To view information about the NFT. ```near near view $NFT_CONTRACT_ID nft_token '{"token_id": "token-1"}' ``` Navigate to the collectibles tab in the `NEAR wallet`; this should list all the NFTs that you own. To get the total supply of NFTs owned by your account, call the nft_supply_for_owner function and set the account_id parameter: ```near near view $NFT_CONTRACT_ID nft_supply_for_owner '{"account_id": "YOUR ACCOUNT NAME"}' ``` ### Transfer function Make a new account and transfer the token to that account. Replace `"NEW ACCOUNT NAME"` with the new account ID. If you run the following command, it will transfer the token "token-1" to the account `"NEW ACCOUNT"` with the memo. Take note that you're also attaching exactly 1 yoctoNEAR by using the --depositYocto flag. 
```near near call $NFT_CONTRACT_ID nft_transfer '{"receiver_id": "NEW ACCOUNT NAME", "token_id": "token-1", "memo": "ADD YOUR MEMO"}' --accountId $NFT_CONTRACT_ID --depositYocto 1 ``` ### Royalties To mint a token with perpetual royalties, run: ```near near call $NFT_CONTRACT_ID nft_mint '{"token_id": "royalty-token", "metadata": {"title": "ADD TITLE", "description": "ADD DESCRIPTION", "media": "ADD MEDIA LINK"}, "receiver_id": "'$NFT_CONTRACT_ID'", "perpetual_royalties": {"ROYALTY ACCOUNT NAME": ROYALTY_POINTS, "ROYALTY ACCOUNT NAME": ROYALTY_POINTS, "ROYALTY ACCOUNT NAME": ROYALTY_POINTS}}' --accountId $NFT_CONTRACT_ID --amount 0.1 ``` Update the `token_id` and `metadata` fields accordingly. Replace `"ROYALTY ACCOUNT NAME"` with a suitable royalty account name and replace `ROYALTY_POINTS` with the royalty percentage for the respective account. The minimum royalty you can give is 0.01%, i.e. 1 point; 10000 ROYALTY_POINTS is 100%. To calculate the NFT payout for a given balance: ```near near view $NFT_CONTRACT_ID nft_payout '{"token_id": "royalty-token", "balance": "BALANCE_IN_YOCTO-NEAR", "max_len_payout": 100}' ``` Note that `balance` is in yoctoNEAR. ### Approve account Export an environment variable for the approval account (any other account, or [create a sub account](https://docs.near.org/docs/tutorials/contracts/nfts/approvals#creating-sub-account)): ```near export APPROVAL_NFT_CONTRACT_ID="APPROVAL_ACCOUNT_NAME" ``` Mint a token with a token ID "approval-token", updating the metadata as needed: ```near near call $NFT_CONTRACT_ID nft_mint '{"token_id": "approval-token", "metadata": {"title": "Title", "description": "Description", "media": "Media Link"}, "receiver_id": "'$NFT_CONTRACT_ID'"}' --accountId $NFT_CONTRACT_ID --amount 0.1 ``` ### Approving an account At this point, you should have two accounts. One stored under $NFT_CONTRACT_ID and the other under the $APPROVAL_NFT_CONTRACT_ID environment variable. You can use both of these accounts to test things out. Execute the following command to approve the account stored under $APPROVAL_NFT_CONTRACT_ID to have access to transfer your NFT with the ID "approval-token". ```near near call $NFT_CONTRACT_ID nft_approve '{"token_id": "approval-token", "account_id": "'$APPROVAL_NFT_CONTRACT_ID'"}' --accountId $NFT_CONTRACT_ID --deposit 0.1 ``` Run this command to check the new approved account ID being returned. ```near near view $NFT_CONTRACT_ID nft_tokens_for_owner '{"account_id": "'$NFT_CONTRACT_ID'", "limit": 10}' ``` #### Transferring an NFT as an approved account You can now use the other account to transfer the NFT to itself (which resets the approved account IDs). Pass the correct approval ID, which is 0. ```near near call $NFT_CONTRACT_ID nft_transfer '{"receiver_id": "'$APPROVAL_NFT_CONTRACT_ID'", "token_id": "approval-token", "approval_id": 0}' --accountId $APPROVAL_NFT_CONTRACT_ID --depositYocto 1 ``` # TBD
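To make the royalty-points arithmetic concrete, here is a small illustrative helper (not part of this repo) that computes one account's payout from its points, following the 10000-points-equals-100% rule above:

```ts
// Illustrative sketch of the royalty-points math: 10000 points == 100% of the sale balance.
// Balances are in yoctoNEAR, so use bigint to avoid losing precision.
function royaltyPayout(balanceYocto: bigint, royaltyPoints: bigint): bigint {
  return (balanceYocto * royaltyPoints) / 10_000n;
}

// Example: 500 points (5%) on a 10 NEAR sale (10^25 yoctoNEAR) pays out 0.5 NEAR.
const tenNear = 10n * 10n ** 24n;
console.log(royaltyPayout(tenNear, 500n)); // 500000000000000000000000n (0.5 NEAR)
```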
NEARBuilders_maps
README.md bos.config.json data.json package-lock.json package.json
# maps.near ## Getting started 1. Install packages ```cmd npm install ``` 2. Start dev environment ```cmd npm run dev ``` This will start a gateway at [127.0.0.1:8080](http://127.0.0.1:8080) which will render your local widgets. The entry point for this app is [maps.near/widget/app](http://127.0.0.1:8080/maps.near/widget/app)
PiparHQ_merchant
Cargo.toml README.md build.sh src affiliate.rs approval.rs enumeration.rs events.rs factory.rs internal.rs lib.rs metadata.rs nft_core.rs owner.rs reward.rs royalty.rs series.rs target .rustc_info.json debug .fingerprint Inflector-e80133f0d873a89e lib-inflector.json ahash-0ea456d44955ceb2 build-script-build-script-build.json ahash-146413374edcc6be lib-ahash.json ahash-75de2a8567409b0a run-build-script-build-script-build.json arrayref-c13f4daf9141ff10 lib-arrayref.json arrayvec-4b25329b5c477333 lib-arrayvec.json arrayvec-5ed8e9a0f838a07f lib-arrayvec.json autocfg-3936825b0293837c lib-autocfg.json base64-0c3f16fadc1da635 lib-base64.json base64-7f77bc7365859151 lib-base64.json bitvec-83ca9af5df3aa113 lib-bitvec.json blake2-3042a9a886fcdf87 lib-blake2.json block-buffer-102429723281fa2e lib-block-buffer.json block-buffer-20b3646e53e662e8 lib-block-buffer.json borsh-1168a506675739d0 lib-borsh.json borsh-derive-f416a1eeafdb6031 lib-borsh-derive.json borsh-derive-internal-2c4448ac5ffebae2 lib-borsh-derive-internal.json borsh-schema-derive-internal-81458834846d6b34 lib-borsh-schema-derive-internal.json bs58-a1ee981bf5fa2a05 lib-bs58.json byte-slice-cast-e98726a35bb6c61f lib-byte-slice-cast.json byteorder-c1f379e1522f1ef6 lib-byteorder.json bytesize-e9bb087e1eaabb72 lib-bytesize.json c2-chacha-f22e8ea2f9962c5e lib-c2-chacha.json cc-7f8e6df7ed2cade2 lib-cc.json cfg-if-1cc4882d2e315207 lib-cfg-if.json cfg-if-30d7f2d28beeeb1b lib-cfg-if.json cfg-if-7372481e614c58dd lib-cfg-if.json chrono-3f3ed1eb1a4be3f9 lib-chrono.json cipher-3fdc9af8c4abdfb9 lib-cipher.json convert_case-5edeb42cc79bfdbf lib-convert_case.json core-foundation-sys-0ae7d9beeb7d3b58 lib-core-foundation-sys.json cpufeatures-317cb3e604dbdcfa lib-cpufeatures.json crunchy-0322e332b3ec8289 lib-crunchy.json crunchy-1e4810b237e7579b build-script-build-script-build.json crunchy-6050ff8e1cd01d84 run-build-script-build-script-build.json crypto-common-fbd876bb7100e04c lib-crypto-common.json crypto-mac-2ec09d725d6bdcbd lib-crypto-mac.json curve25519-dalek-b5037398d7577b7d lib-curve25519-dalek.json derive_more-c9233a087f5c9746 lib-derive_more.json digest-4fedda34bb2d75d0 lib-digest.json digest-7db87c22153058bf lib-digest.json dyn-clone-d2e27b301c705649 lib-dyn-clone.json easy-ext-7a928631cacd1e93 lib-easy-ext.json ed25519-961d1528f0b9cbc5 lib-ed25519.json ed25519-dalek-52b3cf0fc124abc5 lib-ed25519-dalek.json equivalent-ea0d860523f0903a lib-equivalent.json fixed-hash-b8a524a08d9fc683 lib-fixed-hash.json funty-5c293ef059cd9097 lib-funty.json generic-array-14d371e49e7d702d build-script-build-script-build.json generic-array-46aa735c1d9d8d90 run-build-script-build-script-build.json generic-array-d7bfb0603b875341 lib-generic_array.json getrandom-6a73bb18c9683d61 build-script-build-script-build.json getrandom-7d34e7d8dbe9731d lib-getrandom.json getrandom-a020d3a53d85421e run-build-script-build-script-build.json getrandom-c28c71914f7477d9 lib-getrandom.json hashbrown-7302087157b4c8e7 lib-hashbrown.json hashbrown-f8a5b34dd80811fd lib-hashbrown.json heck-f53c7857da10975a lib-heck.json hex-fb12a001e1673ece lib-hex.json iana-time-zone-b0919317c75cf72a lib-iana-time-zone.json impl-codec-d2ec6bbba2dbf9b9 lib-impl-codec.json impl-trait-for-tuples-914bf5e96206cd03 lib-impl-trait-for-tuples.json indexmap-1d50fbf8846ab25c lib-indexmap.json itoa-48345fbcae012e71 lib-itoa.json keccak-e294fd1d5314e1dd lib-keccak.json lazy_static-f922ee928b4d529a lib-lazy_static.json libc-2d2cc7e09de2199b run-build-script-build-script-build.json libc-57eb845a56a45465 
build-script-build-script-build.json libc-ee78d775f691ab1e lib-libc.json memory_units-65568a0325aa157f lib-memory_units.json near-abi-2ec0e8a98f4f402a lib-near-abi.json near-account-id-e6cc965963d6d8de lib-near-account-id.json near-crypto-75943256abadd40c lib-near-crypto.json near-primitives-core-d3e464049b28fbad lib-near-primitives-core.json near-primitives-d6d89eef7bf6c1ac lib-near-primitives.json near-rpc-error-core-fc57b0aa33e606f5 lib-near-rpc-error-core.json near-rpc-error-macro-49134a363dc3b239 lib-near-rpc-error-macro.json near-sdk-dad30d35fc0d6f83 lib-near-sdk.json near-sdk-macros-5d6cc0e32b26ad32 lib-near-sdk-macros.json near-sys-fe41686c1c2c82f0 lib-near-sys.json near-vm-errors-a2847bbfdcfef355 lib-near-vm-errors.json near-vm-logic-5afe57f408e6486f lib-near-vm-logic.json num-bigint-10cada73a2856123 run-build-script-build-script-build.json num-bigint-ba46fa44fc1222cd build-script-build-script-build.json num-bigint-da03afbaff77d9c6 lib-num-bigint.json num-integer-51afd3190b5476b7 build-script-build-script-build.json num-integer-bc3d41f9e551238c run-build-script-build-script-build.json num-integer-eff989353409442e lib-num-integer.json num-rational-479e7f733c15ad37 lib-num-rational.json num-rational-47ca0984fed0294d build-script-build-script-build.json num-rational-62e125b49c275883 run-build-script-build-script-build.json num-traits-1274095eb20247e7 build-script-build-script-build.json num-traits-67c608e60482db69 run-build-script-build-script-build.json num-traits-de7bfecb7da142b4 lib-num-traits.json once_cell-43a9c7b59790676b lib-once_cell.json once_cell-6c76acca5e158921 lib-once_cell.json opaque-debug-a3337a4b35c8c00e lib-opaque-debug.json parity-scale-codec-28c91308536b173b lib-parity-scale-codec.json parity-scale-codec-derive-5987aeb7787708cf lib-parity-scale-codec-derive.json parity-secp256k1-51329ce022143282 run-build-script-build-script-build.json parity-secp256k1-730eca21e68640b3 lib-secp256k1.json parity-secp256k1-7b9f7592437e7b25 build-script-build-script-build.json pipar_store-acbff1c277e708fc lib-pipar_store.json pipar_store-d4b08b0e01d5dd10 test-lib-pipar_store.json ppv-lite86-acd224a756fd208c lib-ppv-lite86.json primitive-types-069c17cb7f597ed3 lib-primitive-types.json proc-macro-crate-2fcc90dba2f8adaf lib-proc-macro-crate.json proc-macro-crate-f021ab89088df26e lib-proc-macro-crate.json proc-macro2-08f586a118d95ee2 build-script-build-script-build.json proc-macro2-172b2919012ba177 run-build-script-build-script-build.json proc-macro2-3b1bcd019f70cd98 lib-proc-macro2.json quote-325d01d8306a3eb2 lib-quote.json quote-7b6e73978ac0b6ef build-script-build-script-build.json quote-ca0d5a0d3167125f run-build-script-build-script-build.json radium-6dd089b3862f651e run-build-script-build-script-build.json radium-83d04fdd0665788c lib-radium.json radium-f89e6679aacd06d2 build-script-build-script-build.json rand-1856dd7ce07b7cbc lib-rand.json rand-9b9dfa87bbb4d1c8 lib-rand.json rand_chacha-9fbbcb62c25aa148 lib-rand_chacha.json rand_chacha-bec625f92f57e942 lib-rand_chacha.json rand_core-0872be3c5c3200ba lib-rand_core.json rand_core-541c95c45ecc393d lib-rand_core.json reed-solomon-erasure-7053241e968cb1b9 lib-reed-solomon-erasure.json reed-solomon-erasure-88fcd5d11b5cbe2b run-build-script-build-script-build.json reed-solomon-erasure-efe9a6831b061980 build-script-build-script-build.json ripemd-f8c74091c0056f0b lib-ripemd.json rustc-hex-7cde8fc5dbf5ad74 lib-rustc-hex.json rustversion-b3a61a41f09e36a6 run-build-script-build-script-build.json rustversion-c5bbcbc909b16387 
build-script-build-script-build.json rustversion-e89fbe7185e070ac lib-rustversion.json ryu-6537036c019955a8 lib-ryu.json schemars-60d408c2179e5ff1 build-script-build-script-build.json schemars-6c0ece18e3f13da0 lib-schemars.json schemars-a738730651ae1951 run-build-script-build-script-build.json schemars_derive-48f323cbbcda1f41 lib-schemars_derive.json semver-058cc4b94d2e30e6 run-build-script-build-script-build.json semver-0b5402e1103ab6ad build-script-build-script-build.json semver-5774ecc6d1c897d6 lib-semver.json serde-043578ca7e169bd2 lib-serde.json serde-11e2f49771ab6554 run-build-script-build-script-build.json serde-34e192e7879d3f35 run-build-script-build-script-build.json serde-385477f7320e8c7a build-script-build-script-build.json serde-842bc66b7f1eb134 build-script-build-script-build.json serde-8a16bd8235b2f2fa lib-serde.json serde_derive-7d8373340457b830 lib-serde_derive.json serde_derive_internals-9eeea32b23032ca2 lib-serde_derive_internals.json serde_json-1ead7b7b9774179f lib-serde_json.json serde_json-29f8d346b66dbca0 build-script-build-script-build.json serde_json-32cf63492bb02aad run-build-script-build-script-build.json sha2-41ad12c6639ba0e7 lib-sha2.json sha2-d95acb25a943160f lib-sha2.json sha3-91239e5aa44b5cf3 lib-sha3.json signature-3b44bc0765318baf lib-signature.json smallvec-cf249ee32454c859 lib-smallvec.json smart-default-aebdd7c15dbb5ec9 lib-smart-default.json spin-1eada6c21182491e lib-spin.json static_assertions-a6e951094df62b70 lib-static_assertions.json strum-aec86fc3adb5c6a1 lib-strum.json strum_macros-5048e0ba0fa4a9df lib-strum_macros.json subtle-705566a687a5fcaa lib-subtle.json syn-239286cdeabdb889 run-build-script-build-script-build.json syn-7a8dc973d4ce2cb6 build-script-build-script-build.json syn-c5f6c7dc24b7db29 lib-syn.json syn-f73d634c51382ee8 lib-syn.json tap-eaedd486a44964e7 lib-tap.json thiserror-2edee1b7d05273f7 build-script-build-script-build.json thiserror-702ac101d09cc068 lib-thiserror.json thiserror-d70f0c6d16a46703 run-build-script-build-script-build.json thiserror-impl-452caf318aafdea8 lib-thiserror-impl.json time-71d0c09981794c75 lib-time.json toml-6935792611f22c18 lib-toml.json toml_datetime-4a5291c26fbeb8a6 lib-toml_datetime.json toml_edit-3f532429235ab58a lib-toml_edit.json typenum-4c28b1aa938baf81 run-build-script-build-script-main.json typenum-6a8b853b5be3561d lib-typenum.json typenum-7c52df23ac9e694f build-script-build-script-main.json uint-5f5583b2a7345354 lib-uint.json unicode-ident-1a7a93f5f418835f lib-unicode-ident.json version_check-9f5b1902f3f8fdcd lib-version_check.json wee_alloc-8edfc0e4af906ee2 build-script-build-script-build.json wee_alloc-b939c7b00afe81d3 run-build-script-build-script-build.json wee_alloc-c99bc597947bcdce lib-wee_alloc.json winnow-302c038725304bb7 lib-winnow.json wyz-65fe601d21da6e6a lib-wyz.json zeroize-792461883bb3affb lib-zeroize.json zeroize_derive-05852810e6099ec6 lib-zeroize_derive.json zeropool-bn-f69a8e71a2fae23e lib-zeropool-bn.json
Merchant Store Smart Contract For Pipar Marketplace ===================================================
NearNet_near-nft-transfer
README.md | | market-contract Cargo.toml README.md build.sh src external.rs internal.rs lib.rs nft_callbacks.rs sale.rs sale_views.rs nft-contract Cargo.toml README.md build.sh src approval.rs enumeration.rs events.rs internal.rs lib.rs metadata.rs mint.rs nft_core.rs royalty.rs package.json
# NEAR NFT-Tutorial with an allow list Welcome to NEAR's NFT tutorial, where we will help you parse the details around NEAR's [NEP-171 standard](https://nomicon.io/Standards/NonFungibleToken/Core.html) (Non-Fungible Token Standard), and show you how to build your own NFT smart contract from the ground up, improving your understanding about the NFT standard along the way. ## Prerequisites * [NEAR Wallet Account](https://wallet.testnet.near.org) * [Rust Toolchain](https://docs.near.org/docs/develop/contracts/rust/intro#installing-the-rust-toolchain) * [NEAR-CLI](https://docs.near.org/docs/tools/near-cli#setup) * [yarn](https://classic.yarnpkg.com/en/docs/install#mac-stable) ## Tutorial Stages Each branch you will find in this repo corresponds to various stages of this tutorial with a partially completed contract at each stage. You are welcome to start from any stage you want to learn the most about. | Branch | Docs Tutorial | Description | | ------------- | ------------------------------------------------------------------------------------------------ | ----------- | | 1.skeleton | [Contract Architecture](https://docs.near.org/docs/tutorials/contracts/nfts/skeleton) | You'll learn the basic architecture of the NFT smart contract, and you'll compile this skeleton code with the Rust toolchain. | | 2.minting | [Minting](https://docs.near.org/docs/tutorials/contracts/nfts/minting) | Here you'll flesh out the skeleton so the smart contract can mint a non-fungible token | | 3.enumeration | [Enumeration](https://docs.near.org/docs/tutorials/contracts/nfts/enumeration) | Here you'll find different enumeration methods that can be used to return the smart contract's states. | | 4.core | [Core](https://docs.near.org/docs/tutorials/contracts/nfts/core) | In this tutorial you'll extend the NFT contract using the core standard, which will allow you to transfer non-fungible tokens. | | 5.approval | [Approval](https://docs.near.org/docs/tutorials/contracts/nfts/approvals) | Here you'll expand the contract allowing other accounts to transfer NFTs on your behalf. | | 6.royalty | [Royalty](https://docs.near.org/docs/tutorials/contracts/nfts/royalty) | Here you'll add the ability for non-fungible tokens to have royalties. This will allow people to get a percentage of the purchase price when an NFT is purchased. | | 7.events | ----------- | This allows indexers to know what functions are being called and make it easier and more reliable to keep track of information that can be used to populate the collectibles tab in the wallet for example. (tutorial docs have yet to be implemented) | | 8.marketplace | ----------- | ----------- | The tutorial series also contains a very helpful section on [**Upgrading Smart Contracts**](https://docs.near.org/docs/tutorials/contracts/nfts/upgrade-contract). Definitely go and check it out as this is a common pain point. # Quick-Start If you want to see the full completed contract go ahead and clone and build this repo using ```=bash git clone https://github.com/near-examples/nft-tutorial.git cd nft-tutorial git switch 6.royalty yarn build ``` Now that you've cloned and built the contract we can try a few things. ## Mint An NFT Once you've created your near wallet go ahead and login to your wallet with your cli and follow the on-screen prompts ```=bash near login ``` Once you're logged in, you have to deploy the contract. 
Make a subaccount with the name of your choosing ```=bash near create-account nft-example.your-account.testnet --masterAccount your-account.testnet --initialBalance 10 ``` After you've created your sub account, deploy the contract to that sub account. Set these variables to your sub account names ```=bash NFT_CONTRACT_ID=nft-example.your-account.testnet MAIN_ACCOUNT=your-account.testnet ``` Verify your new variables have the correct values ```=bash echo $NFT_CONTRACT_ID echo $MAIN_ACCOUNT ``` ### Deploy Your Contract ```=bash near deploy --accountId $NFT_CONTRACT_ID --wasmFile out/main.wasm ``` ### Initialize Your Contract ```=bash near call $NFT_CONTRACT_ID new_default_meta '{"owner_id": "'$NFT_CONTRACT_ID'"}' --accountId $NFT_CONTRACT_ID ``` ### View Contract Metadata ```=bash near view $NFT_CONTRACT_ID nft_metadata ``` ### Minting Token ```bash= near call $NFT_CONTRACT_ID nft_mint '{"token_id": "token-1", "metadata": {"title": "My Non Fungible Team Token", "description": "The Team Most Certainly Goes :)", "media": "https://bafybeiftczwrtyr3k7a2k4vutd3amkwsmaqyhrdzlhvpt33dyjivufqusq.ipfs.dweb.link/goteam-gif.gif"}, "receiver_id": "'$MAIN_ACCOUNT'"}' --accountId $MAIN_ACCOUNT --amount 0.1 ``` After you've minted the token go to wallet.testnet.near.org as `your-account.testnet`, look in the collections tab, and check out your new sample NFT! ## View NFT Information After you've minted your NFT you can make a view call to get a response containing the `token_id`, `owner_id` and the `metadata` ```bash= near view $NFT_CONTRACT_ID nft_token '{"token_id": "token-1"}' ``` ## Transferring NFTs To transfer an NFT go ahead and make another [testnet wallet account](https://wallet.testnet.near.org). Then run the following ```bash= MAIN_ACCOUNT_2=your-second-wallet-account.testnet ``` Verify the correct variable names with this ```=bash echo $NFT_CONTRACT_ID echo $MAIN_ACCOUNT echo $MAIN_ACCOUNT_2 ``` To initiate the transfer: ```bash= near call $NFT_CONTRACT_ID nft_transfer '{"receiver_id": "'$MAIN_ACCOUNT_2'", "token_id": "token-1", "memo": "Go Team :)"}' --accountId $MAIN_ACCOUNT --depositYocto 1 ``` In this call you are depositing 1 yoctoNEAR for security and so that the user will be redirected to the NEAR wallet. ## Errata Large Changes: * **2022-02-12**: updated the enumeration methods `nft_tokens` and `nft_tokens_for_owner` to no longer use any `to_vector` operations to save GAS. In addition, the default limit was changed from 0 to 50. PR found [here](https://github.com/near-examples/nft-tutorial/pull/17). Small Changes: * **2022-02-22**: changed `token_id` parameter type in nft_payout from `String` to `TokenId` for consistency as per pythonicode's suggestion # TBD # TBD
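For completeness, a rough near-api-js equivalent of the minting and view steps above is sketched here; the account names and the near-api-js version (~1.x, with bn.js available) are assumptions rather than part of the tutorial:

```ts
// Hypothetical sketch: mint token-1 and read it back, mirroring the CLI steps above.
import { connect, keyStores, utils } from "near-api-js";
import BN from "bn.js";
import os from "os";
import path from "path";

async function mintAndView(contractId: string, ownerId: string) {
  const keyStore = new keyStores.UnencryptedFileSystemKeyStore(
    path.join(os.homedir(), ".near-credentials")
  );
  const near = await connect({ networkId: "testnet", nodeUrl: "https://rpc.testnet.near.org", keyStore });
  const owner = await near.account(ownerId);

  // nft_mint with 0.1 NEAR attached to cover storage, as in the CLI command (media omitted for brevity).
  await owner.functionCall({
    contractId,
    methodName: "nft_mint",
    args: {
      token_id: "token-1",
      metadata: { title: "My Non Fungible Team Token", description: "The Team Most Certainly Goes :)" },
      receiver_id: ownerId,
    },
    attachedDeposit: new BN(utils.format.parseNearAmount("0.1")!),
  });

  // nft_token view call returns token_id, owner_id and metadata.
  const token = await owner.viewFunction(contractId, "nft_token", { token_id: "token-1" });
  console.log(token);
}

mintAndView("nft-example.your-account.testnet", "your-account.testnet").catch(console.error);
```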
NEAR-Analytics_NEAR-ATLAS-VM
CHANGELOG.md config paths.js presets loadPreset.js webpack.analyze.js webpack.development.js webpack.production.js dist index.js index.js.LICENSE.txt package.json postcss.config.js readme.md src index.js lib components Commit.js ConfirmTransactions.js Markdown.js SecureIframe.js Widget.js graphs BarEl.js BubbleEl.js ChartEl.js CohortEl.js LineEl.js PieEl.js ScatterEl.js react_chart ResizableBox.js components Area.js Band.js Bar.js BarHorizontal.js BarHorizontalStacked.js BarStacked.js Bubble.js CustomStyles.js DarkMode.js DynamicContainer.js InteractionMode.js Line.js MultipleAxes.js custom_charts Bar.js main.js useDemoConfig.js remark hashtags.js mentions.js tables BasicTable.js Table.js data account.js cache.js commitData.js near.js utils.js vm vm.js styles global.css tailwind.config.js webpack.config.js
keypom_nft-tutorial-series
.github workflows tests.yml README.md | | integration-tests rs Cargo.toml src helpers.rs tests.rs ts package.json src assertions.ts main.ava.ts utils.ts market-contract Cargo.toml README.md build.sh src external.rs internal.rs lib.rs nft_callbacks.rs sale.rs sale_views.rs tests.rs nft-contract Cargo.toml README.md build.sh src approval.rs enumeration.rs events.rs internal.rs lib.rs metadata.rs nft_core.rs owner.rs royalty.rs series.rs package.json
# TBD # NEAR NFT-Tutorial Welcome to NEAR's NFT tutorial, where we will help you parse the details of NEAR's [NEP-171 standard](https://nomicon.io/Standards/NonFungibleToken/Core.html) (Non-Fungible Token Standard), and show you how to build your own NFT smart contract from the ground up, improving your understanding of the NFT standard along the way. ## Prerequisites * [NEAR Wallet Account](https://wallet.testnet.near.org) * [Rust Toolchain](https://docs.near.org/develop/prerequisites) * [NEAR-CLI](https://docs.near.org/tools/near-cli#setup) * [yarn](https://classic.yarnpkg.com/en/docs/install#mac-stable) ## Tutorial Stages Each branch in this repo corresponds to a stage of this tutorial, with a partially completed contract at that stage. You are welcome to start from whichever stage you want to learn the most about. | Branch | Docs Tutorial | Description | | ------------- | ------------------------------------------------------------------------------------------------ | ----------- | | 1.skeleton | [Contract Architecture](https://docs.near.org/docs/tutorials/contracts/nfts/skeleton) | You'll learn the basic architecture of the NFT smart contract, and you'll compile this skeleton code with the Rust toolchain. | | 2.minting | [Minting](https://docs.near.org/docs/tutorials/contracts/nfts/minting) | Here you'll flesh out the skeleton so the smart contract can mint a non-fungible token. | | 3.enumeration | [Enumeration](https://docs.near.org/docs/tutorials/contracts/nfts/enumeration) | Here you'll find different enumeration methods that can be used to return the smart contract's state. | | 4.core | [Core](https://docs.near.org/docs/tutorials/contracts/nfts/core) | In this tutorial you'll extend the NFT contract using the core standard, which will allow you to transfer non-fungible tokens. | | 5.approval | [Approval](https://docs.near.org/docs/tutorials/contracts/nfts/approvals) | Here you'll expand the contract, allowing other accounts to transfer NFTs on your behalf. | | 6.royalty | [Royalty](https://docs.near.org/docs/tutorials/contracts/nfts/royalty) | Here you'll add the ability for non-fungible tokens to have royalties. This will allow people to get a percentage of the purchase price when an NFT is purchased. | | 7.events | ----------- | This allows indexers to know what functions are being called, making it easier and more reliable to keep track of information that can be used, for example, to populate the collectibles tab in the wallet. (Tutorial docs have yet to be written.) | | 8.marketplace | ----------- | ----------- | The tutorial series also contains a very helpful section on [**Upgrading Smart Contracts**](https://docs.near.org/docs/tutorials/contracts/nfts/upgrade-contract). Definitely go and check it out, as this is a common pain point. # Quick-Start If you want to see the full completed contract, go ahead and clone and build this repo using ```bash git clone https://github.com/near-examples/nft-tutorial.git cd nft-tutorial git switch 6.royalty yarn build ``` Now that you've cloned and built the contract we can try a few things. ## Mint An NFT Once you've created your NEAR wallet, go ahead and log in with your CLI and follow the on-screen prompts ```bash near login ``` Once you're logged in, you need to deploy the contract.
Make a subaccount with a name of your choosing ```bash near create-account nft-example.your-account.testnet --masterAccount your-account.testnet --initialBalance 10 ``` After you've created your subaccount, deploy the contract to it. Set these variables to your subaccount and main account names ```bash NFT_CONTRACT_ID=nft-example.your-account.testnet MAIN_ACCOUNT=your-account.testnet ``` Verify your new variables have the correct values ```bash echo $NFT_CONTRACT_ID echo $MAIN_ACCOUNT ``` ### Deploy Your Contract ```bash near deploy --accountId $NFT_CONTRACT_ID --wasmFile out/main.wasm ``` ### Initialize Your Contract ```bash near call $NFT_CONTRACT_ID new_default_meta '{"owner_id": "'$NFT_CONTRACT_ID'"}' --accountId $NFT_CONTRACT_ID ``` ### View the Contract's Metadata ```bash near view $NFT_CONTRACT_ID nft_metadata ``` ### Minting a Token ```bash near call $NFT_CONTRACT_ID nft_mint '{"token_id": "token-1", "metadata": {"title": "My Non Fungible Team Token", "description": "The Team Most Certainly Goes :)", "media": "https://bafybeiftczwrtyr3k7a2k4vutd3amkwsmaqyhrdzlhvpt33dyjivufqusq.ipfs.dweb.link/goteam-gif.gif"}, "receiver_id": "'$MAIN_ACCOUNT'"}' --accountId $MAIN_ACCOUNT --amount 0.1 ``` After you've minted the token, go to [wallet.testnet.near.org](https://wallet.testnet.near.org), log in as `your-account.testnet`, look in the collectibles tab and check out your new sample NFT! ## View NFT Information After you've minted your NFT you can make a view call to get a response containing the `token_id`, `owner_id` and the `metadata` ```bash near view $NFT_CONTRACT_ID nft_token '{"token_id": "token-1"}' ``` ## Transferring NFTs To transfer an NFT, go ahead and make another [testnet wallet account](https://wallet.testnet.near.org). Then run the following ```bash MAIN_ACCOUNT_2=your-second-wallet-account.testnet ``` Verify the variables have the correct values ```bash echo $NFT_CONTRACT_ID echo $MAIN_ACCOUNT echo $MAIN_ACCOUNT_2 ``` To initiate the transfer: ```bash near call $NFT_CONTRACT_ID nft_transfer '{"receiver_id": "'$MAIN_ACCOUNT_2'", "token_id": "token-1", "memo": "Go Team :)"}' --accountId $MAIN_ACCOUNT --depositYocto 1 ``` In this call you attach exactly 1 yoctoNEAR as a security measure, which also means the user is redirected to the NEAR wallet to confirm the transfer. ## Errata Large Changes: * **2022-06-21**: updated the Rust SDK to version 4.0.0. PR found [here](https://github.com/near-examples/nft-tutorial/pull/32) * **2022-02-12**: updated the enumeration methods `nft_tokens` and `nft_tokens_for_owner` to no longer use any `to_vector` operations to save GAS. In addition, the default limit was changed from 0 to 50. PR found [here](https://github.com/near-examples/nft-tutorial/pull/17). Small Changes: * **2022-02-22**: changed the `token_id` parameter type in `nft_payout` from `String` to `TokenId` for consistency, as per pythonicode's suggestion # TBD
MiguelIslasH_CompaniesPhoneDirectory
README.md as-pect.config.js asconfig.json package.json scripts 1.dev-deploy.sh 2.use-contract.sh 3.cleanup.sh README.md src as_types.d.ts simple __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts models company.ts enums.ts singleton __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts
## Setting up your terminal The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. It uses the following setup: ```sh # set your terminal up to have 2 windows, A and B like this: ┌─────────────────────────────────┬─────────────────────────────────┐ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ └─────────────────────────────────┴─────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _helper scripts_ ```sh 1.dev-deploy.sh # helper: build and deploy contracts 2.use-contract.sh # helper: call methods on ContractPromise 3.cleanup.sh # helper: delete build and deploy artifacts ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linuo-watch-command#191068) # `near-sdk-as` Starter Kit This is a good project to use as a starting point for your AssemblyScript project. ## Samples This repository includes a complete project structure for AssemblyScript contracts targeting the NEAR platform. The example here is very basic. It's a simple contract demonstrating the following concepts: - a single contract - the difference between `view` vs. `change` methods - basic contract storage There are 2 AssemblyScript contracts in this project, each in their own folder: - **simple** in the `src/simple` folder - **singleton** in the `src/singleton` folder ### Simple We say that an AssemblyScript contract is written in the "simple style" when the `index.ts` file (the contract entry point) includes a series of exported functions. In this case, all exported functions become public contract methods. ```ts // return the string 'hello world' export function helloWorld(): string {} // read the given key from account (contract) storage export function read(key: string): string {} // write the given value at the given key to account (contract) storage export function write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} ``` ### Singleton We say that an AssemblyScript contract is written in the "singleton style" when the `index.ts` file (the contract entry point) has a single exported class (the name of the class doesn't matter) that is decorated with `@nearBindgen`. In this case, all methods on the class become public contract methods unless marked `private`. Also, all instance variables are stored as a serialized instance of the class under a special storage key named `STATE`. 
AssemblyScript uses JSON for storage serialization (as opposed to Rust contracts, which use a custom binary serialization format called borsh). ```ts @nearBindgen export class Contract { // return the string 'hello world' helloWorld(): string {} // read the given key from account (contract) storage read(key: string): string {} // write the given value at the given key to account (contract) storage @mutateState() write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} } ``` ## Usage ### Getting started (see below for video recordings of each of the following steps) 1. clone this repo to a local folder 2. run `yarn` 3. run `./scripts/1.dev-deploy.sh` 4. run `./scripts/2.use-contract.sh` 5. run `./scripts/2.use-contract.sh` again (yes, run it twice to see the changes) 6. run `./scripts/3.cleanup.sh` ### Videos **`1.dev-deploy.sh`** This video shows the build and deployment of the contract. [![asciicast](https://asciinema.org/a/409575.svg)](https://asciinema.org/a/409575) **`2.use-contract.sh`** This video shows contract methods being called. You should run the script twice to see the effect it has on contract state. [![asciicast](https://asciinema.org/a/409577.svg)](https://asciinema.org/a/409577) **`3.cleanup.sh`** This video shows the cleanup script running. Make sure you add the `BENEFICIARY` environment variable. The script will remind you if you forget. ```sh export BENEFICIARY=<your-account-here> # this account receives contract account balance ``` [![asciicast](https://asciinema.org/a/409580.svg)](https://asciinema.org/a/409580) ### Other documentation - See `./scripts/README.md` for documentation about the scripts - Watch this video where Willem Wyndham walks us through refactoring a simple example of a NEAR smart contract written in AssemblyScript https://youtu.be/QP7aveSqRPo ``` There are 2 "styles" of implementing AssemblyScript NEAR contracts: - the contract interface can either be a collection of exported functions - or the contract interface can be the methods of an exported class We call the second style "Singleton" because there is only one instance of the class which is serialized to the blockchain storage. Rust contracts written for NEAR do this by default with the contract struct.
0:00 noise (to cut) 0:10 Welcome 0:59 Create project starting with "npm init" 2:20 Customize the project for AssemblyScript development 9:25 Import the Counter example and get unit tests passing 18:30 Adapt the Counter example to a Singleton style contract 21:49 Refactoring unit tests to access the new methods 24:45 Review and summary ``` ## The file system ```sh ├── README.md # this file ├── as-pect.config.js # configuration for as-pect (AssemblyScript unit testing) ├── asconfig.json # configuration for AssemblyScript compiler (supports multiple contracts) ├── package.json # NodeJS project manifest ├── scripts │   ├── 1.dev-deploy.sh # helper: build and deploy contracts │   ├── 2.use-contract.sh # helper: call methods on ContractPromise │   ├── 3.cleanup.sh # helper: delete build and deploy artifacts │   └── README.md # documentation for helper scripts ├── src │   ├── as_types.d.ts # AssemblyScript headers for type hints │   ├── simple # Contract 1: "Simple example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 1 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 1 │   ├── singleton # Contract 2: "Singleton-style example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 2 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 2 │   ├── tsconfig.json # Typescript configuration │   └── utils.ts # common contract utility functions └── yarn.lock # project manifest version lock ``` You may clone this repo to get started OR create everything from scratch. Please note that, in order to create the AssemblyScript and tests folder structure, you may use the command `asp --init` which will create the following folders and files: ``` ./assembly/ ./assembly/tests/ ./assembly/tests/example.spec.ts ./assembly/tests/as-pect.d.ts ```
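If you would rather poke at the deployed contract by hand instead of going through `2.use-contract.sh`, a minimal near-cli session might look like the following. This is only a sketch: it assumes `CONTRACT` and `OWNER` are exported as described in the terminal setup above, and that the JSON argument names match the `key`/`value` parameters of the simple-style example.

```bash
# write a value into contract storage (simple-style contract)
near call $CONTRACT write '{"key": "some-key", "value": "hello world"}' --accountId $OWNER

# read it back with a free view call
near view $CONTRACT read '{"key": "some-key"}'
```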
gurambalavadze_near-integration-blog-contract
README.md asconfig.json assembly as_types.d.ts index.ts model.ts tsconfig.json package.json
# Near Integration with ButterCMS Blog Contract ## Blog smart contract written in AssemblyScript, used in the [NEAR integration with ButterCMS project](https://github.com/gurambalavadze/near-integration-buttercms). This is a simple smart contract with which users can purchase articles by paying NEAR tokens (at least one NEAR) to the author of the article; they can also see the list of their purchased articles. When a user wants to read an article on the website, the smart contract is in charge of verifying the user's access to the article. ## Project Structure ``` src\ |--assembly\ # assemblyscript codes |--as_types.d.ts # typescript type definition |--index.ts # smart contract methods |--model.ts # payment model definition |--tsconfig.json # typescript config file |--build\ # build directory |--release\ |--blog-contract.wasm # web assembly build |--asconfig.json # assemblyscript config file |--package.json # package config file ``` ## Contract methods There are 3 functions that we have to call using the "call" method (these methods need authentication, so we can't call any of them with "view", even though two of them will not change the storage) - authorizeUser(postId) this function checks whether the user has access to the article with the specified postId. The result is either a Payment or null - buyPost(postId, author) this function will transfer a certain amount of NEAR tokens (at least one) to the account id of the author of the post and store the payment in a hash table so the buyer has access to the article - getPayments() returns a list of the user's payment information, such as article name, amount of tokens, etc. ## Deployment In order to deploy the contract to your NEAR account, first clone the repository: ```bash git clone https://github.com/gurambalavadze/near-integration-blog-contract.git ``` Then go to the directory ```bash cd near-integration-blog-contract ``` Install npm packages ```bash yarn ``` Build the smart contract ```bash yarn asb ``` You should now see the generated .wasm file in the ./build/release directory. In order to test the smart contract you need an account on testnet. Go to [Testnet Wallet](https://wallet.testnet.near.org/) and create an account if you don't have one. We will assume the account name is myaccount.testnet for the rest of the deployment stages. Please replace that with your real account name. Now log in to your top-level account ```bash yarn near login ``` This will redirect you to the NEAR wallet and you will be asked to grant permissions to near-cli to access your account. Give permission to the app in order to continue the deployment. If you install near-cli globally ( `yarn global add near-cli` ) you don't need to use yarn for running near commands. Because each NEAR account can only have one contract deployed on it, we create a sub-account for this smart contract ```bash yarn near create-account ${SUBACCOUNT_ID}.${ACCOUNT_ID} --masterAccount ${ACCOUNT_ID} --initialBalance ${INITIAL_BALANCE} ``` For example ```bash yarn near create-account mycontract.myaccount.testnet --masterAccount myaccount.testnet --initialBalance 1 ``` The amount of initialBalance doesn't matter for this contract. Now we can deploy the smart contract ```bash yarn near deploy --accountId=mycontract.myaccount.testnet --wasmFile=build/release/blog-contract.wasm ``` Congratulations! You deployed the smart contract. Now let's test it!
## Commands Before we test the smart contract, let's create two sub-accounts: one for the author and one for the buyer ```bash yarn near create-account author.myaccount.testnet --masterAccount myaccount.testnet --initialBalance 1 yarn near create-account buyer.myaccount.testnet --masterAccount myaccount.testnet --initialBalance 5 ``` We give the buyer an initial balance of 5 NEAR so it can buy some articles. ### Buy an article The buyPost function in ./assembly/index.ts takes two parameters, post and author, and this is the function we want to call ```bash yarn near call mycontract.myaccount.testnet buyPost '{ "post": "test", "author": "author.myaccount.testnet" }' --depositYocto=1000000000000000000000000 --accountId=buyer.myaccount.testnet ``` We pay 1 NEAR for the "test" article to the author. We specified the amount in yocto; 1 yoctoNEAR is 10^-24 of a NEAR token. If you run ```bash yarn near state author.myaccount.testnet ``` you will see the balance has increased by one NEAR. ### Authorize user Now that we have bought an article, we have access to it ```bash yarn near call mycontract.myaccount.testnet authorizeUser '{"post": "test"}' --accountId=buyer.myaccount.testnet ``` You will see the payment details ### Get list of user payments Run ```bash yarn near call mycontract.myaccount.testnet getPayments --accountId=buyer.myaccount.testnet ``` to see the list of the buyer's payments
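As a quick sanity check after running the commands above, you can inspect both sub-accounts again; the account names follow the examples in this section:

```bash
# the author's balance should have grown by the 1 NEAR paid for the "test" article
yarn near state author.myaccount.testnet

# the buyer's balance should have dropped by roughly the same amount (plus gas)
yarn near state buyer.myaccount.testnet
```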
noemk2_crossword_zero_to_hero
.gitpod.yml README.md contract Cargo.toml README.md src lib.rs frontend App.js __mocks__ fileMock.js assets css global.css img logo-black.svg logo-white.svg js near config.js utils.js index.html index.js integration-tests rs Cargo.toml src tests.rs ts package.json src main.ava.ts package-lock.json package.json
near-blank-project Smart Contract ================== A [smart contract] written in [Rust] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install Rust with [correct target] Exploring The Code ================== 1. The main smart contract code lives in `src/lib.rs`. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [Rust]: https://www.rust-lang.org/ [create-near-app]: https://github.com/near/create-near-app [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html near-blank-project ================== This [React] app was initialized with [create-near-app] Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `yarn install` 3. Run the local development server: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. Exploring The Code ================== 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/assets/js/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `yarn run test`. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contract gets deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `yarn install`, but for best ergonomics you may want to install it globally: yarn install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. 
Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Step 3: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. builds & deploys smart contract to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [React]: https://reactjs.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/docs/concepts/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages
near-rocket-rpc_rocket-dashboard
.vscode extensions.json README.md index.html package-lock.json package.json src assets base.css logo.svg main.css main.js socketio.js utils near.js vite.config.js
# rocket-dashboard This template should help get you started developing with Vue 3 in Vite. ## Recommended IDE Setup [VSCode](https://code.visualstudio.com/) + [Volar](https://marketplace.visualstudio.com/items?itemName=Vue.volar) (and disable Vetur) + [TypeScript Vue Plugin (Volar)](https://marketplace.visualstudio.com/items?itemName=Vue.vscode-typescript-vue-plugin). ## Customize configuration See [Vite Configuration Reference](https://vitejs.dev/config/). ## Project Setup ```sh npm install ``` ### Compile and Hot-Reload for Development ```sh npm run dev ``` ### Compile and Minify for Production ```sh npm run build ```
near_docs-generator
README.md action.yml builder .eslintrc.yml babel.config.js docusaurus.config.js near-api-js.sh near-sdk-js.sh package.json dev-attach.sh dev.sh docker-compose.yml docs-bot README.md api on-docs-build.js on-release.js app.yml package.json entrypoint.sh shell-scripts funcs.sh test.sh
# Docs Bot See [README](../README.md) for Docs Generator. # docs-generator This is: - A GitHub Action that should run on the docs repo (`near/docs`) - A GitHub app (`./docs-bot`) that should be installed on the docs repo (`near/docs`) ### GitHub Action This is a containerized action (see `Dockerfile`). Inputs: - `source_repo`: Source repo to generate docs for (`near/near-api-js` and others, or your fork - ex: `maxhr/near--near-api-js`) - `release_version`: The git tag to check out; this should match the release version of the package (`v1.0.0`) - `builder_name`: Name of the builder file in `./builder`. Today: `near-api-js`. Soon also: `near-cli | near-sdk-js` - `github_token`: If you run `dev.sh` it's your Personal Access Token with repo permissions. When running in a GitHub workflow, GH provides it automatically as an env var. `entrypoint.sh`: - Pulls source and docs - Builds the docs - the `/builder` dir contains build files that match the `builder_name` input (ex: `builder/near-api-js.sh`) - Creates a PR in the docs repo (the repo that this action runs on) ### GitHub App (Docs Bot) The app (`./docs-bot`) is published on Vercel (https://docs-bot.vercel.app). Its purpose is to trigger `repository_dispatch` and create PRs in the docs repo. It should be installed on the docs repo, and its `https://docs-bot.vercel.app/api/on-release` endpoint can be called from a source-code repo's workflow when a new version gets released. This makes it possible to trigger docs builds automatically. You can also invoke the GitHub Action (described above) manually with a `workflow_dispatch` event. See the workflows in the docs repo to see how it's configured for manual and automatic listeners. See the workflows in the `near-api-js` repo to see how it's being triggered automatically. ## Contributing You need a GitHub access token with repo permissions to run `./dev.sh`. Make sure you have it in your `~/.github-token`. `./dev.sh` will run the docker container with the needed params. - `GITHUB_REPOSITORY_OWNER` - should be `near`, or your username if you forked - `GITHUB_REPOSITORY` - `near/docs` or your fork - `SOURCE_REPO` - for example `near/near-api-js` - `BUILDER_NAME` - at the moment `near-api-js`, others soon. This will run `builder/near-api-js.sh` - `SOURCE_TAG` - the published package version to check out (ex: `v1.0.0`) - `GITHUB_TOKEN` - access token. GitHub provides it in an Action workflow. For local dev you need a Personal Access Token. `./dev-attach.sh` will attach to the container without running the entrypoint file. You can use it to run `entrypoint.sh` manually for debugging.
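For local debugging, a hypothetical `./dev.sh` invocation with the parameters listed above might look like this; the variable names are taken from the list, but whether `dev.sh` reads them from the environment or asks for them interactively depends on the script itself:

```bash
# illustrative only - values are examples from the parameter list above
export GITHUB_REPOSITORY_OWNER=near
export GITHUB_REPOSITORY=near/docs
export SOURCE_REPO=near/near-api-js
export BUILDER_NAME=near-api-js
export SOURCE_TAG=v1.0.0
export GITHUB_TOKEN=$(cat ~/.github-token)   # Personal Access Token for local dev
./dev.sh
```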
Learn-NEAR-Club_token-sale
README.md contract Cargo.toml compile.js src lib.rs package.json
Token Sale ================== This is a smart contract running on NEAR Protocol. It could be used to run a token sale. # Sale rules * There are 2 periods: Sale and Grace. * In Sale period: users can deposit and withdraw. * In Grace period: users can only withdraw. * At any point of time, the price is calculated by the total deposit divided by the total number of tokens for sale. * After the grace period ends, the sale finishes and the price is finalized. * At that point, tokens are allocated to users based on their deposit. Users can redeem to their wallets.
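As a rough worked example of the pricing rule (the numbers are made up purely for illustration and do not come from the contract):

```bash
# 1,000 NEAR deposited in total, 4,000 tokens for sale
echo "price: $((1000 * 1000 / 4000)) milliNEAR per token"                 # 250 milliNEAR = 0.25 NEAR
echo "allocation for a 100 NEAR deposit: $((100 * 4000 / 1000)) tokens"   # 400 tokens
```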
nsejim_js-rust-near-smart-contract-state-compatibility-test
.gitpod.yml README.md contracts js README.md babel.config.json build.sh deploy.sh neardev dev-account.env package-lock.json package.json src contract.ts tsconfig.json rust Cargo.toml README.md build.sh deploy.sh neardev dev-account.env src lib.rs integration-tests js package-lock.json package.json src main.ava.ts rust Cargo.toml src tests.rs package-lock.json package.json
# Introduction This repo tests whether RUST and JS smart contracts can share the same contract state by default, since at the end of the day they both run on the NEAR blockchain as WASM code. By default, the answer is NO. The reason is that the two SDKs store data in different ways. A solution could be a converter of the state structure from one SDK to the other. To test that, we first deploy a JS smart contract and change its state, then deploy a RUST smart contract and try to read that state. ## Test procedure ### Install Dependencies yarn ### Build both JS and RUST smart contracts yarn build ### Deploy JS Smart Contract Build and deploy your JS contract to TestNet with a temporary dev account: yarn deploy:js ### Change the greeting message near call dev-1667426209688-37311552113285 set_greeting '{"message": "Hi"}' --accountId ${ACCOUNT_ID} ### View the greeting message near view dev-1667426209688-37311552113285 get_greeting ### Deploy RUST Smart Contract Build and deploy your RUST contract to TestNet on the same dev account: yarn deploy:rust ### Try to view the greeting message near view dev-1667426209688-37311552113285 get_greeting <pre> The contract will panic due to a deserialization error. JS and RUST smart contracts cannot share the same state by default because of the way JS contracts serialize/deserialize their state. </pre> # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [Rust](https://www.rust-lang.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"message":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`.
# Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```ts @NearBindgen({}) class HelloNear { greeting: string = "Hello"; @view // This method is read-only and can be called for free get_greeting(): string { return this.greeting; } @call // This method changes the state, for which it cost gas set_greeting({ greeting }: { greeting: string }): void { // Record a log permanently to the blockchain! near.log(`Saving greeting ${greeting}`); this.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [node.js](https://nodejs.org/en/download/package-manager/) >= 16. 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash npm run deploy ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `call` method. `Call` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"greeting":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`.
IcarusCan_my-learn-near-nft
.gitpod.yml README.md contract README.md babel.config.json build.sh build approval.js builder.c code.h contract.js enumeration.js internal.js metadata.js methods.h mint.js nft_core.js royalty.js deploy.sh package-lock.json package.json src approval.ts contract.ts enumeration.ts internal.ts metadata.ts mint.ts nft_core.ts royalty.ts tsconfig.json package-lock.json package.json
near-blank-project ================== This app was initialized with [create-near-app] Quick Start =========== If you haven't installed dependencies during setup: npm install Build and deploy your contract to TestNet with a temporary dev account: npm run deploy Test your contract: npm test If you have a frontend, run `npm start`. This will run a dev server. Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```ts @NearBindgen({}) class HelloNear { greeting: string = "Hello"; @view // This method is read-only and can be called for free get_greeting(): string { return this.greeting; } @call // This method changes the state, for which it cost gas set_greeting({ greeting }: { greeting: string }): void { // Record a log permanently to the blockchain! near.log(`Saving greeting ${greeting}`); this.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [node.js](https://nodejs.org/en/download/package-manager/) >= 16. 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash npm run deploy ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, for which it is a `call` method. `Call` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"greeting":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first login into NEAR using: ```bash # Use near-cli to login your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`.
LiboShen_mooncake-nft
.gitpod.yml README.md contract Cargo.toml README.md src facai_gen.rs karma.rs lib.rs linkdrop.rs frontend assets e_0.svg e_1.svg e_2.svg e_3.svg e_4.svg e_5.svg e_6.svg e_7.svg e_8.svg e_9.svg facai.svg index.html package.json src NftImages.js main.css tailwind.config.js vite.config.js integration-tests Cargo.toml src tests.rs package.json scripts init.js migrate.js mint_2022.js
Hello NEAR! ================================= A [smart contract] written in [Rust] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install Rust with [correct target] Exploring The Code ================== 1. The main smart contract code lives in `src/lib.rs`. 2. There are two functions to the smart contract: `get_greeting` and `set_greeting`. 3. Tests: You can run smart contract tests with the `cargo test`. [smart contract]: https://docs.near.org/develop/welcome [Rust]: https://www.rust-lang.org/ [create-near-app]: https://github.com/near/create-near-app [correct target]: https://docs.near.org/develop/prerequisites#rust-and-wasm [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html 🥮 Mooncake NFT ================== https://mooncakenft.art/ [Mooncake](https://en.wikipedia.org/wiki/Mooncake) themed Algorithm Generated Art NFT. - Uniquely generated glitch art for every minted NFT. - A karma system (with a leaderboard) to gamify the NFT gifting.
kendricktan_ynatm
.circleci check_npm_version.sh config.yml README.md index.js package.json tests common.js contracts StateMachine.json ethers.test.js web3.test.js
# You Need A Transaction Manager (YNATM) [![circleci](https://badgen.net/circleci/github/kendricktan/ynatm)](https://app.circleci.com/pipelines/github/kendricktan/ynatm) [![npm](https://badgen.net/npm/v/ynatm)](https://www.npmjs.com/package/ynatm) **(For Ethereum)** With the recent spike in gas prices, you can't just send a 1 GWEI gas price for your Ethereum tx and hope that it will get mined. This small module helps you guarantee that your transaction gets mined within a reasonable time frame, by bumping up the gas price (up to a threshold) until your transaction gets mined. ## Examples ### Quickstart ```bash npm install ynatm ``` ```javascript const ynatm = require("ynatm"); const nonce = await provider.getTransactionCount(SENDER_ADDRESS); const txOptions = { from: SENDER_ADDRESS, to: RECIPIENT_ADDRESS, nonce }; const tx = await ynatm.send({ sendTransactionFunction: (gasPrice) => wallet.sendTransaction({ ...txOptions, gasPrice }), minGasPrice: ynatm.toGwei(1), maxGasPrice: ynatm.toGwei(20), gasPriceScalingFunction: ynatm.LINEAR(5), // Scales the gasPrice by 5 GWEI between each try delay: 15000, // Waits 15 seconds between each try }); ``` ### Contract Interaction Since `ynatm` is framework agnostic, you can also use it for contract interaction like so: ```javascript const ynatm = require("ynatm"); const nonce = await provider.getTransactionCount(SENDER_ADDRESS); const options = { from: SENDER_ADDRESS, nonce }; const ethersSendContractFunction = async (gasPrice) => { const tx = await MyContract.functionName(params, { ...options, gasPrice }); const txRecp = await tx.wait(1); // wait for 1 confirmation return txRecp; }; const web3SendContractFunction = (gasPrice) => { // Web3 by default waits for the receipt return MyContract.methods.functionName(params).send({ ...options, gasPrice }); }; const tx = await ynatm.send({ sendTransactionFunction: ethersSendContractFunction, // or web3SendContractFunction minGasPrice: ynatm.toGwei(1), maxGasPrice: ynatm.toGwei(20), gasPriceScalingFunction: ynatm.LINEAR(5), // Scales the gasPrice by 5 GWEI between each try delay: 15000, // Waits 15 seconds between each try }); ``` ### Custom `gasPriceScalingFunction` You can define your own `gasPriceScalingFunction`, which takes in a destructured object containing the following keys: - `x`: the x'th try (retry count) - `y`: current gasPrice - `c`: constant, `minGasPrice` ```javascript const customGasScalingFunction = ({ x, y, c }) => { return ... } ``` ### Immediate Error Handling with `rejectImmediatelyOnCondition` The expected behavior when the transaction manager hits an error is to: 1. Check if the error meets the condition specified in `rejectImmediatelyOnCondition` (defaults to checking for reverts) - If the condition is met, all future transactions are cancelled and the promise is rejected 2. Check whether all the transactions have failed - If all transactions have failed, reject with the last error 3. Keep trying That means that if you've queued up 5 invalid transactions, all 5 of them will need to fail before an error is thrown.
If you'd like to speed up the process and immediately throw an error as soon as the first failing transaction matches certain criteria, you can do so by overriding `rejectImmediatelyOnCondition` like so: ```javascript const ynatm = require("ynatm"); const rejectOnTheseMessages = (err) => { const errMsg = err.toString().toLowerCase(); const conditions = ["revert", "gas", "nonce", "invalid"]; for (const i of conditions) { if (errMsg.includes(i)) { return true; } } return false; }; const nonce = await provider.getTransactionCount(SENDER_ADDRESS); const tx = { from: SENDER_ADDRESS, to: RECIPIENT_ADDRESS, nonce, data: '0x' } await ynatm.send({ sendTransactionFunction: (gasPrice) => wallet.sendTransaction({ ...tx, gasPrice }), minGasPrice: ynatm.toGwei(1), maxGasPrice: ynatm.toGwei(20), gasPriceScalingFunction: ynatm.LINEAR(5), delay: 15000, rejectImmediatelyOnCondition: rejectOnTheseMessages, }); ``` ## Testing ```bash # Terminal 1 yes '' | geth --dev --dev.period 15 --http --http.addr '0.0.0.0' --http.port 8545 --http.api 'eth,net,web3,account,admin,personal' --unlock '0' --allow-insecure-unlock # Terminal 2 yarn test ``` If you don't have `geth` installed locally, you can also use `docker` ```bash # Terminal 1 docker run -p 127.0.0.1:8545:8545/tcp --entrypoint /bin/sh ethereum/client-go:v1.9.14 -c "yes '' | geth --dev --dev.period 15 --http --http.addr '0.0.0.0' --http.port 8545 --http.api 'eth,net,web3,account,admin,personal' --unlock '0' --allow-insecure-unlock" # Terminal 2 yarn test ```
kasodon_near-todo-app
README.md config-overrides.js contract babel.config.json build.sh build builder.c code.h contract.js methods.h deploy.sh package-lock.json package.json src index.js index.ts model.js model.ts test main.ava.js tsconfig.json package.json public index.html manifest.json robots.txt src App.js App.test.js delete.svg index.js logo.svg near-wallet.js preloader.js reportWebVitals.js setupTests.js
# Getting Started with NEAR Todo App This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app). ## Project Structure This project has a `contract` folder as a submodule. This is where your smart contract code lives. The root folder has the project's main files, which were generated by create-react-app. ## Running this project In the root project directory, run: ### `npm install` Installs all the project dependencies. ### `npm run build` Builds the smart contract code and compiles it to WASM, making it ready for deployment. Also builds the app frontend for production to the `build` folder. ### `npm run deploy` Before you run this command, go to the `contract/deploy.sh` script and replace <your-example-contract.testnet-account-id> with your testnet accountId. Then log in to the testnet account using the [NEAR CLI](https://docs.near.org/tools/near-cli) to get a full-access key for the account, and run the command. The compiled WASM file will be deployed to the NEAR testnet. ### `npm start` Before starting the project, go to `src/index.ts` and insert your <your-example-contract.testnet-account-id> in the CONTRACT_ADDRESS variable. Then run `npm start`. ### `npm run test` This tests the contract code and also the frontend code. ## Learn More You can learn more in the [NEAR official docs](https://docs.near.org). To learn React, check out the [React documentation](https://reactjs.org/).
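Putting the steps above in order, a typical first run might look like the following; the placeholder account id in `contract/deploy.sh` and in `src/index.ts` still has to be replaced by hand as described above:

```bash
npm install      # install dependencies
near login       # authorize NEAR CLI with your testnet account
npm run build    # compile the contract to WASM and build the frontend
npm run deploy   # deploy the compiled WASM to the NEAR testnet
npm start        # serve the frontend locally
```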
onchezz_guessing-game-contract
Cargo.toml README.md build.bat build.sh src lib.rs test.sh
# Rust Smart Contract Template ## Getting started To get started with this template: 1. Click the "Use this template" button to create a new repo based on this template 2. Update line 2 of `Cargo.toml` with your project name 3. Update line 4 of `Cargo.toml` with your project author names 4. Set up the [prerequisites](https://github.com/near/near-sdk-rs#pre-requisites) 5. Begin writing your smart contract in `src/lib.rs` 6. Test the contract: `cargo test -- --nocapture` 7. Build the contract: `RUSTFLAGS='-C link-arg=-s' cargo build --target wasm32-unknown-unknown --release` **Get more info at:** * [Rust Smart Contract Quick Start](https://docs.near.org/docs/develop/contracts/rust/intro) * [Rust SDK Book](https://www.near-sdk.io/)
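Once the contract is built, deploying it with near-cli follows the same pattern used throughout these examples. A minimal sketch, assuming a `guessing-game.your-account.testnet` sub-account (a placeholder) and that the compiled wasm file name matches the crate name in your `Cargo.toml`:

```bash
near login
near create-account guessing-game.your-account.testnet --masterAccount your-account.testnet --initialBalance 10
near deploy --accountId guessing-game.your-account.testnet --wasmFile target/wasm32-unknown-unknown/release/your_project_name.wasm
```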
etasdemir_leasify
README.md as-pect.config.js asconfig.json package.json scripts 1.dev-deploy.sh 2.use-contract.sh 3.cleanup.sh README.md src as_types.d.ts leasify __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly Constants.ts index.ts models Asset.ts Lessee.ts Lessor.ts utils.ts tsconfig.json
## Setting up your terminal The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. It uses the following setup: ```sh # set your terminal up to have 2 windows, A and B like this: ┌─────────────────────────────────┬─────────────────────────────────┐ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ └─────────────────────────────────┴─────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _helper scripts_ ```sh 1.dev-deploy.sh # helper: build and deploy contracts 2.use-contract.sh # helper: call methods on ContractPromise 3.cleanup.sh # helper: delete build and deploy artifacts ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linux-watch-command#191068) # Welcome to Leasify! **Leasify** is a trustless asset leasing smart contract deployed on the NEAR testnet. It is written in AssemblyScript. With Leasify, you can buy assets and lease them out, earning periodic income from your assets. Alternatively, you might want to lease an asset instead of buying it; you then only need to pay the lease price periodically. # Installation 1. Clone this repository: <br/> `git clone https://github.com/etasdemir/leasify.git` 2. Install dependencies: <br/> `yarn install` 3. Build and deploy the contract: <br/> `yarn dev-deploy` > It will make a release build and deploy the smart contract. 4. The contract is now deployed. First you need to generate some mock assets: <br/> `near call $CONTRACT_ID generateMockAssets --accountId $ACCOUNT_ID --gas=300000000000000` > **Note:** You need to increase the gas amount for some commands by adding --gas=300000000000000 to the CLI command 5. At this step, you can freely interact with the contract. # Methods and Usage ## Usage Every function can be called with: <br/> `near call $CONTRACT_ID methodName '{"argName1": "argValue1"}' --accountId $ACCOUNT_ID --gas=300000000000000` ## Contract Methods 1. Generate mock assets. It is only used in the development environment. Before interacting with the contract, this method should be called. <br/> `generateMockAssets(): string` 2. Get buyable assets. Buyable means not owned by anybody. <br/> `getBuyableAssets(): Array<Asset>` 3. Get leasable assets. Leasable means not leased by anybody. <br/> `getLeasebleAssets(): Array<Asset>` 4. Get asset info by id. <br/> `getAssetById(assetId: string): Asset` ## Lessor Methods 1. Get lessor by public account id. Returns Lessor.
<br/> `getLessor(sender: AccountId): Lessor` <br/> Usage: <br/> `near call dev-1650223408078-59961683836754 getLessor '{"sender": "erentasdemir.testnet"}' --accountId erentasdemir.testnet --gas=300000000000000` 2. Buy an asset. Returns true on success <br/> `buyAsset(assetId: string): bool` 3. Get accumulated income. <br/> `getAccumulatedIncome(): Balance` 4. Get assets owned by the calling account. <br/> `getOwnedAssets(): Array<Asset>` 5. Transfer accumulated income from the contract to the asset owner's account. <br/> `transferAccumulatedIncome(amount: Money): bool` 6. Sell the asset. <br/> `sellAsset(assetId: string): bool` ## Lessee Methods 1. Get lessee by public account id. <br/> `getLessee(sender: AccountId): Lessee` > **Note:** When an asset is leased, the lessee pays the asset's deposit amount. It will be used for future implementations; after the asset is released, the deposit will be transferred back to the lessee's account (see the 2nd and 3rd items under Future Implementations). 2. Lease the asset with the given id. <br/> `leaseAsset(assetId: string): bool` 3. Pay the periodic lease amount. <br/> `payLease(assetId: string): bool` 4. Get assets leased by the calling account. <br/> `getLeasedAssets(): Array<Asset>` 5. Release the asset. <br/> `releaseAsset(assetId: string): bool` # Future Implementations 1. Periodic income could be dynamic (e.g. a percentage) in case the asset price changes. 2. If a lessee does not pay the periodic lease amount, an interest rate should be added to the next lease amount. 3. If a lessee still does not pay, and the lease amount plus interest reaches the deposit amount, automatically release the asset and transfer the deposit to the asset owner. 4. Partial lessor system: instead of only one lessor per asset, more than one person can buy a part of an asset (N lessors - 1 asset). 5. Allow users to add new assets to the contract. 6. Gas usage should be reduced. 7. Mobile app and web implementation to interact with the contract more easily.
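Putting the method lists above together, an end-to-end session could look roughly like this. It is only a sketch: the account ids are placeholders, the `assetId` would come from `getBuyableAssets`, and any attached deposits the methods may require are omitted:

```bash
# lessor buys an asset and later checks accumulated income
near call $CONTRACT_ID buyAsset '{"assetId": "asset-1"}' --accountId lessor.testnet --gas=300000000000000
near call $CONTRACT_ID getAccumulatedIncome --accountId lessor.testnet --gas=300000000000000

# lessee leases the same asset and pays one lease period
near call $CONTRACT_ID leaseAsset '{"assetId": "asset-1"}' --accountId lessee.testnet --gas=300000000000000
near call $CONTRACT_ID payLease '{"assetId": "asset-1"}' --accountId lessee.testnet --gas=300000000000000
```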
profullstackdeveloper_NFT-marketplace-launchbar-vue
.eslintrc.js README.md babel.config.js deploy.sh package.json public index.html src assets images add.svg arrow-left.svg arrow.svg blockchain-avalanche.svg blockchain-bsc-theme.svg blockchain-bsc.svg blockchain-ethereum.svg blockchain-flow.svg blockchain-immux-theme.svg blockchain-immux.svg blockchain-moonriver.svg blockchain-near-dark.svg blockchain-near-theme.svg blockchain-near.svg blockchain-polygon.svg blockchain-solana.svg copy.svg discord--theme.svg discord.svg logo--light.svg logo.svg phone.svg search.svg select-caret--theme.svg select-caret.svg telegram--theme.svg telegram.svg trash.svg twitter--theme.svg twitter.svg user.svg main.js mixins maskTextMixin.js titleMixin.js router index.js store index.js modules blockchains.js filters.js theme.js user.js vue.config.js
# launchbar Install node 14.16.0 ## Project setup ``` npm install OR yarn install ``` ### Compiles and hot-reloads for development ``` npm run serve OR yarn serve ``` ### Compiles and minifies for production ``` npm run build OR yarn build ``` ### Lints and fixes files ``` npm run lint OR yarn lint ``` ### Customize configuration See [Configuration Reference](https://cli.vuejs.org/config/).
maxhr_hello-near-rs
.github workflows tests.yml .gitpod.yml contract Cargo.toml README.md src lib.rs frontend assets css global.css img logo-black.svg logo-white.svg js index.js near config.js utils.js index.html integration-tests rs Cargo.toml src tests.rs ts main.ava.ts neardev shared-test test.near.json package.json
near-blank-project Smart Contract ================== A [smart contract] written in [Rust] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install Rust with [correct target] Exploring The Code ================== 1. The main smart contract code lives in `src/lib.rs`. You can compile it with the `./compile` script. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [Rust]: https://www.rust-lang.org/ [create-near-app]: https://github.com/near/create-near-app [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html
Learn-NEAR_NCD--BlockLab
README.md as-pect.config.js asconfig.json assembly __test__ as-pect.d.ts index.unit.spec.ts as_types.d.ts index.ts models.ts tsconfig.json USUARIOS SERVICIOS COMENTARIOS VALORACIONES Métodos del smart contract de USUARIOS Métodos del smart contract de ANALITICAS neardev dev-account.env node_modules @as-covers assembly CONTRIBUTING.md README.md index.ts package.json tsconfig.json core CONTRIBUTING.md README.md package.json glue README.md lib index.d.ts index.js package.json transform README.md lib index.d.ts index.js util.d.ts util.js node_modules visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js toString.d.ts toString.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformRange.d.ts transformRange.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json package.json @as-pect assembly README.md assembly index.ts internal Actual.ts Expectation.ts Expected.ts Reflect.ts ReflectedValueType.ts Test.ts assert.ts call.ts comparison toIncludeComparison.ts toIncludeEqualComparison.ts log.ts noOp.ts package.json types as-pect.d.ts as-pect.portable.d.ts env.d.ts cli README.md init as-pect.config.js env.d.ts example.spec.ts init-types.d.ts portable-types.d.ts lib as-pect.cli.amd.d.ts as-pect.cli.amd.js help.d.ts help.js index.d.ts index.js init.d.ts init.js portable.d.ts portable.js run.d.ts run.js test.d.ts test.js types.d.ts types.js util CommandLineArg.d.ts CommandLineArg.js IConfiguration.d.ts IConfiguration.js asciiArt.d.ts asciiArt.js collectReporter.d.ts collectReporter.js getTestEntryFiles.d.ts getTestEntryFiles.js removeFile.d.ts removeFile.js strings.d.ts strings.js writeFile.d.ts writeFile.js worklets ICommand.d.ts ICommand.js compiler.d.ts compiler.js package.json core README.md lib as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js reporter CombinationReporter.d.ts CombinationReporter.js EmptyReporter.d.ts EmptyReporter.js IReporter.d.ts IReporter.js SummaryReporter.d.ts SummaryReporter.js VerboseReporter.d.ts VerboseReporter.js test IWarning.d.ts IWarning.js TestContext.d.ts TestContext.js TestNode.d.ts TestNode.js transform assemblyscript.d.ts assemblyscript.js createAddReflectedValueKeyValuePairsMember.d.ts createAddReflectedValueKeyValuePairsMember.js createGenericTypeParameter.d.ts createGenericTypeParameter.js createStrictEqualsMember.d.ts createStrictEqualsMember.js emptyTransformer.d.ts emptyTransformer.js hash.d.ts hash.js index.d.ts index.js util IAspectExports.d.ts IAspectExports.js IWriteable.d.ts IWriteable.js ReflectedValue.d.ts ReflectedValue.js TestNodeType.d.ts TestNodeType.js rTrace.d.ts rTrace.js stringifyReflectedValue.d.ts stringifyReflectedValue.js timeDifference.d.ts timeDifference.js wasmTools.d.ts wasmTools.js package.json csv-reporter index.ts lib as-pect.csv-reporter.amd.d.ts as-pect.csv-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json json-reporter index.ts lib as-pect.json-reporter.amd.d.ts as-pect.json-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json snapshots __tests__ snapshot.spec.ts jest.config.js lib Snapshot.d.ts Snapshot.js SnapshotDiff.d.ts SnapshotDiff.js SnapshotDiffResult.d.ts 
SnapshotDiffResult.js as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js parser grammar.d.ts grammar.js package.json src Snapshot.ts SnapshotDiff.ts SnapshotDiffResult.ts index.ts parser grammar.ts tsconfig.json @assemblyscript loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json @babel code-frame README.md lib index.js package.json helper-validator-identifier README.md lib identifier.js index.js keyword.js package.json scripts generate-identifier-regex.js highlight README.md lib index.js node_modules ansi-styles index.js package.json readme.md chalk index.js package.json readme.md templates.js types index.d.ts color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name .eslintrc.json README.md index.js package.json test.js escape-string-regexp index.js package.json readme.md has-flag index.js package.json readme.md supports-color browser.js index.js package.json readme.md package.json @eslint eslintrc CHANGELOG.md README.md conf config-schema.js environments.js eslint-all.js eslint-recommended.js lib cascading-config-array-factory.js config-array-factory.js config-array config-array.js config-dependency.js extracted-config.js ignore-pattern.js index.js override-tester.js flat-compat.js index.js shared ajv.js config-ops.js config-validator.js deprecation-warnings.js naming.js relative-module-resolver.js types.js package.json @humanwhocodes config-array README.md api.js package.json object-schema .eslintrc.js .travis.yml README.md package.json src index.js merge-strategy.js object-schema.js validation-strategy.js tests merge-strategy.js object-schema.js validation-strategy.js @types uuid README.md index.d.ts package.json acorn-jsx README.md index.d.ts index.js package.json xhtml.js acorn CHANGELOG.md README.md dist acorn.d.ts acorn.js acorn.mjs.d.ts bin.js package.json ajv .tonic_example.js README.md dist ajv.bundle.js ajv.min.js lib ajv.d.ts ajv.js cache.js compile async.js equal.js error_classes.js formats.js index.js resolve.js rules.js schema_obj.js ucs2length.js util.js data.js definition_schema.js dotjs README.md _limit.js _limitItems.js _limitLength.js _limitProperties.js allOf.js anyOf.js comment.js const.js contains.js custom.js dependencies.js enum.js format.js if.js index.js items.js multipleOf.js not.js oneOf.js pattern.js properties.js propertyNames.js ref.js required.js uniqueItems.js validate.js keyword.js refs data.json json-schema-draft-04.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json package.json scripts .eslintrc.yml bundle.js compile-dots.js ansi-colors README.md index.js package.json symbols.js types index.d.ts ansi-regex index.d.ts index.js package.json readme.md ansi-styles index.d.ts index.js package.json readme.md argparse CHANGELOG.md README.md index.js lib action.js action append.js append constant.js count.js help.js store.js store constant.js false.js true.js subparsers.js version.js action_container.js argparse.js argument error.js exclusive.js group.js argument_parser.js const.js help added_formatters.js formatter.js namespace.js utils.js package.json as-bignum README.md assembly __tests__ as-pect.d.ts i128.spec.as.ts safe_u128.spec.as.ts u128.spec.as.ts u256.spec.as.ts utils.ts fixed fp128.ts fp256.ts index.ts safe fp128.ts fp256.ts types.ts globals.ts index.ts integer i128.ts i256.ts index.ts safe i128.ts i256.ts i64.ts index.ts u128.ts u256.ts u64.ts u128.ts u256.ts tsconfig.json utils.ts package.json asbuild README.md dist cli.d.ts cli.js 
commands build.d.ts build.js fmt.d.ts fmt.js index.d.ts index.js init cmd.d.ts cmd.js files asconfigJson.d.ts asconfigJson.js aspecConfig.d.ts aspecConfig.js assembly_files.d.ts assembly_files.js eslintConfig.d.ts eslintConfig.js gitignores.d.ts gitignores.js index.d.ts index.js indexJs.d.ts indexJs.js packageJson.d.ts packageJson.js test_files.d.ts test_files.js index.d.ts index.js interfaces.d.ts interfaces.js run.d.ts run.js test.d.ts test.js index.d.ts index.js main.d.ts main.js utils.d.ts utils.js index.js node_modules cliui CHANGELOG.md LICENSE.txt README.md index.js package.json wrap-ansi index.js package.json readme.md y18n CHANGELOG.md README.md index.js package.json yargs-parser CHANGELOG.md LICENSE.txt README.md index.js lib tokenize-arg-string.js package.json yargs CHANGELOG.md README.md build lib apply-extends.d.ts apply-extends.js argsert.d.ts argsert.js command.d.ts command.js common-types.d.ts common-types.js completion-templates.d.ts completion-templates.js completion.d.ts completion.js is-promise.d.ts is-promise.js levenshtein.d.ts levenshtein.js middleware.d.ts middleware.js obj-filter.d.ts obj-filter.js parse-command.d.ts parse-command.js process-argv.d.ts process-argv.js usage.d.ts usage.js validation.d.ts validation.js yargs.d.ts yargs.js yerror.d.ts yerror.js index.js locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json yargs.js package.json assemblyscript-json .eslintrc.js .travis.yml README.md assembly JSON.ts decoder.ts encoder.ts index.ts tsconfig.json util index.ts index.js package.json temp-docs README.md classes decoderstate.md json.arr.md json.bool.md json.float.md json.integer.md json.null.md json.num.md json.obj.md json.str.md json.value.md jsondecoder.md jsonencoder.md jsonhandler.md throwingjsonhandler.md modules json.md assemblyscript-regex .eslintrc.js .github workflows benchmark.yml release.yml test.yml README.md as-pect.config.js asconfig.empty.json asconfig.json assembly __spec_tests__ generated.spec.ts __tests__ alterations.spec.ts as-pect.d.ts boundary-assertions.spec.ts capture-group.spec.ts character-classes.spec.ts character-sets.spec.ts characters.ts empty.ts quantifiers.spec.ts range-quantifiers.spec.ts regex.spec.ts utils.ts char.ts env.ts index.ts nfa matcher.ts nfa.ts types.ts walker.ts parser node.ts parser.ts string-iterator.ts walker.ts regexp.ts tsconfig.json util.ts benchmark benchmark.js package.json spec test-generator.js ts index.ts tsconfig.json assemblyscript-temporal .github workflows node.js.yml release.yml .vscode launch.json README.md as-pect.config.js asconfig.empty.json asconfig.json assembly __tests__ README.md as-pect.d.ts date.spec.ts duration.spec.ts empty.ts plaindate.spec.ts plaindatetime.spec.ts plainmonthday.spec.ts plaintime.spec.ts plainyearmonth.spec.ts timezone.spec.ts zoneddatetime.spec.ts constants.ts date.ts duration.ts enums.ts env.ts index.ts instant.ts now.ts plaindate.ts plaindatetime.ts plainmonthday.ts plaintime.ts plainyearmonth.ts timezone.ts tsconfig.json tz __tests__ index.spec.ts rule.spec.ts zone.spec.ts iana.ts index.ts rule.ts zone.ts utils.ts zoneddatetime.ts development.md package.json tzdb README.md iana theory.html zoneinfo2tdf.pl assemblyscript README.md cli README.md asc.d.ts asc.js asc.json shim README.md fs.js path.js process.js transform.d.ts transform.js util colors.d.ts colors.js find.d.ts find.js mkdirp.d.ts 
mkdirp.js options.d.ts options.js utf8.d.ts utf8.js dist asc.js assemblyscript.d.ts assemblyscript.js sdk.js index.d.ts index.js lib loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json rtrace README.md bin rtplot.js index.d.ts index.js package.json umd index.d.ts index.js package.json package-lock.json package.json std README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings Date.ts Math.ts Reflect.ts asyncify.ts console.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts polyfills.ts process.ts reference.ts regexp.ts rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts set.ts shared feature.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts uri.ts util casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts uri.ts vector.ts wasi index.ts portable.json portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json astral-regex index.d.ts index.js package.json readme.md axios CHANGELOG.md README.md UPGRADE_GUIDE.md dist axios.js axios.min.js index.d.ts index.js lib adapters README.md http.js xhr.js axios.js cancel Cancel.js CancelToken.js isCancel.js core Axios.js InterceptorManager.js README.md buildFullPath.js createError.js dispatchRequest.js enhanceError.js mergeConfig.js settle.js transformData.js defaults.js helpers README.md bind.js buildURL.js combineURLs.js cookies.js deprecatedMethod.js isAbsoluteURL.js isURLSameOrigin.js normalizeHeaderName.js parseHeaders.js spread.js utils.js package.json balanced-match .github FUNDING.yml LICENSE.md README.md index.js package.json base-x LICENSE.md README.md package.json src index.d.ts index.js binary-install README.md example binary.js package.json run.js index.js package.json src binary.js binaryen README.md index.d.ts package-lock.json package.json wasm.d.ts bn.js CHANGELOG.md README.md lib bn.js package.json brace-expansion README.md index.js package.json bs58 CHANGELOG.md README.md index.js package.json callsites index.d.ts index.js package.json readme.md camelcase index.d.ts index.js package.json readme.md chalk index.d.ts package.json readme.md source index.js templates.js util.js chownr README.md chownr.js package.json cliui CHANGELOG.md LICENSE.txt README.md build lib index.js string-utils.js package.json color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name README.md index.js package.json commander CHANGELOG.md Readme.md index.js package.json typings index.d.ts concat-map .travis.yml example map.js index.js package.json test map.js cross-spawn CHANGELOG.md README.md index.js lib enoent.js parse.js util escape.js readShebang.js resolveCommand.js package.json csv-stringify README.md lib browser index.js sync.js es5 index.d.ts index.js sync.d.ts sync.js index.d.ts index.js sync.d.ts sync.js package.json debug README.md package.json src browser.js common.js index.js node.js decamelize index.js package.json readme.md deep-is .travis.yml example cmp.js index.js package.json test NaN.js cmp.js neg-vs-pos-0.js diff CONTRIBUTING.md README.md dist diff.js lib convert dmp.js xml.js diff array.js base.js character.js css.js json.js line.js sentence.js word.js index.es6.js 
index.js patch apply.js create.js merge.js parse.js util array.js distance-iterator.js params.js package.json release-notes.md runtime.js discontinuous-range .travis.yml README.md index.js package.json test main-test.js doctrine CHANGELOG.md README.md lib doctrine.js typed.js utility.js package.json emoji-regex LICENSE-MIT.txt README.md es2015 index.js text.js index.d.ts index.js package.json text.js enquirer CHANGELOG.md README.md index.d.ts index.js lib ansi.js combos.js completer.js interpolate.js keypress.js placeholder.js prompt.js prompts autocomplete.js basicauth.js confirm.js editable.js form.js index.js input.js invisible.js list.js multiselect.js numeral.js password.js quiz.js scale.js select.js snippet.js sort.js survey.js text.js toggle.js render.js roles.js state.js styles.js symbols.js theme.js timer.js types array.js auth.js boolean.js index.js number.js string.js utils.js package.json env-paths index.d.ts index.js package.json readme.md escalade dist index.js index.d.ts package.json readme.md sync index.d.ts index.js escape-string-regexp index.d.ts index.js package.json readme.md eslint-scope CHANGELOG.md README.md lib definition.js index.js pattern-visitor.js reference.js referencer.js scope-manager.js scope.js variable.js node_modules estraverse README.md estraverse.js gulpfile.js package.json package.json eslint-utils README.md index.js package.json eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json eslint CHANGELOG.md README.md bin eslint.js conf category-list.json config-schema.js default-cli-options.js eslint-all.js eslint-recommended.js replacements.json lib api.js cli-engine cli-engine.js file-enumerator.js formatters checkstyle.js codeframe.js compact.js html.js jslint-xml.js json-with-metadata.js json.js junit.js stylish.js table.js tap.js unix.js visualstudio.js hash.js index.js lint-result-cache.js load-rules.js xml-escape.js cli.js config default-config.js flat-config-array.js flat-config-schema.js rule-validator.js eslint eslint.js index.js init autoconfig.js config-file.js config-initializer.js config-rule.js npm-utils.js source-code-utils.js linter apply-disable-directives.js code-path-analysis code-path-analyzer.js code-path-segment.js code-path-state.js code-path.js debug-helpers.js fork-context.js id-generator.js config-comment-parser.js index.js interpolate.js linter.js node-event-generator.js report-translator.js rule-fixer.js rules.js safe-emitter.js source-code-fixer.js timing.js options.js rule-tester index.js rule-tester.js rules accessor-pairs.js array-bracket-newline.js array-bracket-spacing.js array-callback-return.js array-element-newline.js arrow-body-style.js arrow-parens.js arrow-spacing.js block-scoped-var.js block-spacing.js brace-style.js callback-return.js camelcase.js capitalized-comments.js class-methods-use-this.js comma-dangle.js comma-spacing.js comma-style.js complexity.js computed-property-spacing.js consistent-return.js consistent-this.js constructor-super.js curly.js default-case-last.js default-case.js default-param-last.js dot-location.js dot-notation.js eol-last.js eqeqeq.js for-direction.js func-call-spacing.js func-name-matching.js func-names.js func-style.js function-call-argument-newline.js function-paren-newline.js generator-star-spacing.js getter-return.js global-require.js grouped-accessor-pairs.js guard-for-in.js handle-callback-err.js id-blacklist.js id-denylist.js id-length.js id-match.js implicit-arrow-linebreak.js indent-legacy.js indent.js index.js init-declarations.js 
jsx-quotes.js key-spacing.js keyword-spacing.js line-comment-position.js linebreak-style.js lines-around-comment.js lines-around-directive.js lines-between-class-members.js max-classes-per-file.js max-depth.js max-len.js max-lines-per-function.js max-lines.js max-nested-callbacks.js max-params.js max-statements-per-line.js max-statements.js multiline-comment-style.js multiline-ternary.js new-cap.js new-parens.js newline-after-var.js newline-before-return.js newline-per-chained-call.js no-alert.js no-array-constructor.js no-async-promise-executor.js no-await-in-loop.js no-bitwise.js no-buffer-constructor.js no-caller.js no-case-declarations.js no-catch-shadow.js no-class-assign.js no-compare-neg-zero.js no-cond-assign.js no-confusing-arrow.js no-console.js no-const-assign.js no-constant-condition.js no-constructor-return.js no-continue.js no-control-regex.js no-debugger.js no-delete-var.js no-div-regex.js no-dupe-args.js no-dupe-class-members.js no-dupe-else-if.js no-dupe-keys.js no-duplicate-case.js no-duplicate-imports.js no-else-return.js no-empty-character-class.js no-empty-function.js no-empty-pattern.js no-empty.js no-eq-null.js no-eval.js no-ex-assign.js no-extend-native.js no-extra-bind.js no-extra-boolean-cast.js no-extra-label.js no-extra-parens.js no-extra-semi.js no-fallthrough.js no-floating-decimal.js no-func-assign.js no-global-assign.js no-implicit-coercion.js no-implicit-globals.js no-implied-eval.js no-import-assign.js no-inline-comments.js no-inner-declarations.js no-invalid-regexp.js no-invalid-this.js no-irregular-whitespace.js no-iterator.js no-label-var.js no-labels.js no-lone-blocks.js no-lonely-if.js no-loop-func.js no-loss-of-precision.js no-magic-numbers.js no-misleading-character-class.js no-mixed-operators.js no-mixed-requires.js no-mixed-spaces-and-tabs.js no-multi-assign.js no-multi-spaces.js no-multi-str.js no-multiple-empty-lines.js no-native-reassign.js no-negated-condition.js no-negated-in-lhs.js no-nested-ternary.js no-new-func.js no-new-object.js no-new-require.js no-new-symbol.js no-new-wrappers.js no-new.js no-nonoctal-decimal-escape.js no-obj-calls.js no-octal-escape.js no-octal.js no-param-reassign.js no-path-concat.js no-plusplus.js no-process-env.js no-process-exit.js no-promise-executor-return.js no-proto.js no-prototype-builtins.js no-redeclare.js no-regex-spaces.js no-restricted-exports.js no-restricted-globals.js no-restricted-imports.js no-restricted-modules.js no-restricted-properties.js no-restricted-syntax.js no-return-assign.js no-return-await.js no-script-url.js no-self-assign.js no-self-compare.js no-sequences.js no-setter-return.js no-shadow-restricted-names.js no-shadow.js no-spaced-func.js no-sparse-arrays.js no-sync.js no-tabs.js no-template-curly-in-string.js no-ternary.js no-this-before-super.js no-throw-literal.js no-trailing-spaces.js no-undef-init.js no-undef.js no-undefined.js no-underscore-dangle.js no-unexpected-multiline.js no-unmodified-loop-condition.js no-unneeded-ternary.js no-unreachable-loop.js no-unreachable.js no-unsafe-finally.js no-unsafe-negation.js no-unsafe-optional-chaining.js no-unused-expressions.js no-unused-labels.js no-unused-vars.js no-use-before-define.js no-useless-backreference.js no-useless-call.js no-useless-catch.js no-useless-computed-key.js no-useless-concat.js no-useless-constructor.js no-useless-escape.js no-useless-rename.js no-useless-return.js no-var.js no-void.js no-warning-comments.js no-whitespace-before-property.js no-with.js nonblock-statement-body-position.js object-curly-newline.js 
object-curly-spacing.js object-property-newline.js object-shorthand.js one-var-declaration-per-line.js one-var.js operator-assignment.js operator-linebreak.js padded-blocks.js padding-line-between-statements.js prefer-arrow-callback.js prefer-const.js prefer-destructuring.js prefer-exponentiation-operator.js prefer-named-capture-group.js prefer-numeric-literals.js prefer-object-spread.js prefer-promise-reject-errors.js prefer-reflect.js prefer-regex-literals.js prefer-rest-params.js prefer-spread.js prefer-template.js quote-props.js quotes.js radix.js require-atomic-updates.js require-await.js require-jsdoc.js require-unicode-regexp.js require-yield.js rest-spread-spacing.js semi-spacing.js semi-style.js semi.js sort-imports.js sort-keys.js sort-vars.js space-before-blocks.js space-before-function-paren.js space-in-parens.js space-infix-ops.js space-unary-ops.js spaced-comment.js strict.js switch-colon-spacing.js symbol-description.js template-curly-spacing.js template-tag-spacing.js unicode-bom.js use-isnan.js utils ast-utils.js fix-tracker.js keywords.js lazy-loading-rule-map.js patterns letters.js unicode index.js is-combining-character.js is-emoji-modifier.js is-regional-indicator-symbol.js is-surrogate-pair.js valid-jsdoc.js valid-typeof.js vars-on-top.js wrap-iife.js wrap-regex.js yield-star-spacing.js yoda.js shared ajv.js ast-utils.js config-validator.js deprecation-warnings.js logging.js relative-module-resolver.js runtime-info.js string-utils.js traverser.js types.js source-code index.js source-code.js token-store backward-token-comment-cursor.js backward-token-cursor.js cursor.js cursors.js decorative-cursor.js filter-cursor.js forward-token-comment-cursor.js forward-token-cursor.js index.js limit-cursor.js padded-token-cursor.js skip-cursor.js utils.js messages all-files-ignored.js extend-config-missing.js failed-to-read-json.js file-not-found.js no-config-found.js plugin-conflict.js plugin-invalid.js plugin-missing.js print-config-with-directory-path.js whitespace-found.js node_modules eslint-visitor-keys CHANGELOG.md README.md lib index.js visitor-keys.json package.json package.json espree CHANGELOG.md README.md espree.js lib ast-node-types.js espree.js features.js options.js token-translator.js visitor-keys.js package.json esprima README.md bin esparse.js esvalidate.js dist esprima.js package.json esquery README.md dist esquery.esm.js esquery.esm.min.js esquery.js esquery.lite.js esquery.lite.min.js esquery.min.js license.txt package.json parser.js esrecurse README.md esrecurse.js gulpfile.babel.js package.json estraverse README.md estraverse.js gulpfile.js package.json esutils README.md lib ast.js code.js keyword.js utils.js package.json fast-deep-equal README.md es6 index.d.ts index.js react.d.ts react.js index.d.ts index.js package.json react.d.ts react.js fast-json-stable-stringify .eslintrc.yml .github FUNDING.yml .travis.yml README.md benchmark index.js test.json example key_cmp.js nested.js str.js value_cmp.js index.d.ts index.js package.json test cmp.js nested.js str.js to-json.js fast-levenshtein LICENSE.md README.md levenshtein.js package.json file-entry-cache README.md cache.js changelog.md package.json find-up index.d.ts index.js package.json readme.md flat-cache README.md changelog.md package.json src cache.js del.js utils.js flatted .github FUNDING.yml README.md SPECS.md cjs index.js package.json es.js esm index.js index.js min.js package.json php flatted.php types.d.ts follow-redirects README.md http.js https.js index.js node_modules debug .coveralls.yml 
.travis.yml CHANGELOG.md README.md karma.conf.js node.js package.json src browser.js debug.js index.js node.js ms index.js license.md package.json readme.md package.json fs-minipass README.md index.js package.json fs.realpath README.md index.js old.js package.json functional-red-black-tree README.md bench test.js package.json rbtree.js test test.js get-caller-file LICENSE.md README.md index.d.ts index.js package.json glob-parent CHANGELOG.md README.md index.js package.json glob README.md changelog.md common.js glob.js package.json sync.js globals globals.json index.d.ts index.js package.json readme.md has-flag index.d.ts index.js package.json readme.md hasurl README.md index.js package.json ignore CHANGELOG.md README.md index.d.ts index.js legacy.js package.json import-fresh index.d.ts index.js package.json readme.md imurmurhash README.md imurmurhash.js imurmurhash.min.js package.json inflight README.md inflight.js package.json inherits README.md inherits.js inherits_browser.js package.json is-extglob README.md index.js package.json is-fullwidth-code-point index.d.ts index.js package.json readme.md is-glob README.md index.js package.json isarray .travis.yml README.md component.json index.js package.json test.js isexe README.md index.js mode.js package.json test basic.js windows.js isobject README.md index.js package.json js-base64 LICENSE.md README.md base64.d.ts base64.js package.json js-tokens CHANGELOG.md README.md index.js package.json js-yaml CHANGELOG.md README.md bin js-yaml.js dist js-yaml.js js-yaml.min.js index.js lib js-yaml.js js-yaml common.js dumper.js exception.js loader.js mark.js schema.js schema core.js default_full.js default_safe.js failsafe.js json.js type.js type binary.js bool.js float.js int.js js function.js regexp.js undefined.js map.js merge.js null.js omap.js pairs.js seq.js set.js str.js timestamp.js package.json json-schema-traverse .eslintrc.yml .travis.yml README.md index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js json-stable-stringify-without-jsonify .travis.yml example key_cmp.js nested.js str.js value_cmp.js index.js package.json test cmp.js nested.js replacer.js space.js str.js to-json.js levn README.md lib cast.js index.js parse-string.js package.json line-column README.md lib line-column.js package.json locate-path index.d.ts index.js package.json readme.md lodash.clonedeep README.md index.js package.json lodash.merge README.md index.js package.json lodash.sortby README.md index.js package.json lodash.truncate README.md index.js package.json long README.md dist long.js index.js package.json src long.js lru-cache README.md index.js package.json minimatch README.md minimatch.js package.json minimist .travis.yml example parse.js index.js package.json test all_bool.js bool.js dash.js default_bool.js dotted.js kv_short.js long.js num.js parse.js parse_modified.js proto.js short.js stop_early.js unknown.js whitespace.js minipass README.md index.js package.json minizlib README.md constants.js index.js package.json mkdirp bin cmd.js usage.txt index.js package.json moo README.md moo.js package.json ms index.js license.md package.json readme.md natural-compare README.md index.js package.json near-mock-vm assembly __tests__ main.ts context.ts index.ts outcome.ts vm.ts bin bin.js package.json pkg near_mock_vm.d.ts near_mock_vm.js package.json vm dist cli.d.ts cli.js context.d.ts context.js index.d.ts index.js memory.d.ts memory.js runner.d.ts runner.js utils.d.ts utils.js index.js near-sdk-as README.md as-pect.config.js as_types.d.ts 
asconfig.json asp.asconfig.json assembly __tests__ as-pect.d.ts assert.spec.ts avl-tree.spec.ts bignum.spec.ts contract.spec.ts contract.ts data.txt datetime.spec.ts empty.ts generic.ts includeBytes.spec.ts main.ts max-heap.spec.ts model.ts near.spec.ts persistent-set.spec.ts promise.spec.ts rollback.spec.ts roundtrip.spec.ts runtime.spec.ts unordered-map.spec.ts util.ts utils.spec.ts as_types.d.ts bindgen.ts index.ts json.lib.ts tsconfig.json vm __tests__ vm.include.ts index.ts compiler.js imports.js out assembly __tests__ ason.ts model.ts ~lib as-bignum integer safe u128.ts package.json near-sdk-bindgen README.md assembly index.ts compiler.js dist JSONBuilder.d.ts JSONBuilder.js classExporter.d.ts classExporter.js index.d.ts index.js transformer.d.ts transformer.js typeChecker.d.ts typeChecker.js utils.d.ts utils.js index.js package.json near-sdk-core README.md asconfig.json assembly as_types.d.ts base58.ts base64.ts bignum.ts collections avlTree.ts index.ts maxHeap.ts persistentDeque.ts persistentMap.ts persistentSet.ts persistentUnorderedMap.ts persistentVector.ts util.ts contract.ts datetime.ts env env.ts index.ts runtime_api.ts index.ts logging.ts math.ts promise.ts storage.ts tsconfig.json util.ts docs assets css main.css js main.js search.json classes _sdk_core_assembly_collections_avltree_.avltree.html _sdk_core_assembly_collections_avltree_.avltreenode.html _sdk_core_assembly_collections_avltree_.childparentpair.html _sdk_core_assembly_collections_avltree_.nullable.html _sdk_core_assembly_collections_persistentdeque_.persistentdeque.html _sdk_core_assembly_collections_persistentmap_.persistentmap.html _sdk_core_assembly_collections_persistentset_.persistentset.html _sdk_core_assembly_collections_persistentunorderedmap_.persistentunorderedmap.html _sdk_core_assembly_collections_persistentvector_.persistentvector.html _sdk_core_assembly_contract_.context-1.html _sdk_core_assembly_contract_.contractpromise.html _sdk_core_assembly_contract_.contractpromiseresult.html _sdk_core_assembly_math_.rng.html _sdk_core_assembly_promise_.contractpromisebatch.html _sdk_core_assembly_storage_.storage-1.html globals.html index.html modules _sdk_core_assembly_base58_.base58.html _sdk_core_assembly_base58_.html _sdk_core_assembly_base64_.base64.html _sdk_core_assembly_base64_.html _sdk_core_assembly_collections_avltree_.html _sdk_core_assembly_collections_index_.collections.html _sdk_core_assembly_collections_index_.html _sdk_core_assembly_collections_persistentdeque_.html _sdk_core_assembly_collections_persistentmap_.html _sdk_core_assembly_collections_persistentset_.html _sdk_core_assembly_collections_persistentunorderedmap_.html _sdk_core_assembly_collections_persistentvector_.html _sdk_core_assembly_collections_util_.html _sdk_core_assembly_contract_.html _sdk_core_assembly_env_env_.env.html _sdk_core_assembly_env_env_.html _sdk_core_assembly_env_index_.html _sdk_core_assembly_env_runtime_api_.html _sdk_core_assembly_index_.html _sdk_core_assembly_logging_.html _sdk_core_assembly_logging_.logging.html _sdk_core_assembly_math_.html _sdk_core_assembly_math_.math.html _sdk_core_assembly_promise_.html _sdk_core_assembly_storage_.html _sdk_core_assembly_util_.html _sdk_core_assembly_util_.util.html package.json near-sdk-simulator __tests__ avl-tree-contract.spec.ts cross.spec.ts empty.spec.ts exportAs.spec.ts singleton-no-constructor.spec.ts singleton.spec.ts asconfig.js asconfig.json assembly __tests__ avlTreeContract.ts empty.ts exportAs.ts model.ts sentences.ts singleton-fail.ts 
singleton-no-constructor.ts singleton.ts words.ts as_types.d.ts tsconfig.json dist bin.d.ts bin.js context.d.ts context.js index.d.ts index.js runtime.d.ts runtime.js types.d.ts types.js utils.d.ts utils.js jest.config.js out assembly __tests__ empty.ts exportAs.ts model.ts sentences.ts singleton copy.ts singleton-no-constructor.ts singleton.ts package.json src context.ts index.ts runtime.ts types.ts utils.ts tsconfig.json near-vm getBinary.js install.js package.json run.js uninstall.js nearley LICENSE.txt README.md bin nearley-railroad.js nearley-test.js nearley-unparse.js nearleyc.js lib compile.js generate.js lint.js nearley-language-bootstrapped.js nearley.js stream.js unparse.js package.json once README.md once.js package.json optionator CHANGELOG.md README.md lib help.js index.js util.js package.json p-limit index.d.ts index.js package.json readme.md p-locate index.d.ts index.js package.json readme.md p-try index.d.ts index.js package.json readme.md parent-module index.js package.json readme.md path-exists index.d.ts index.js package.json readme.md path-is-absolute index.js package.json readme.md path-key index.d.ts index.js package.json readme.md prelude-ls CHANGELOG.md README.md lib Func.js List.js Num.js Obj.js Str.js index.js package.json progress CHANGELOG.md Readme.md index.js lib node-progress.js package.json punycode LICENSE-MIT.txt README.md package.json punycode.es6.js punycode.js railroad-diagrams README.md example.html generator.html package.json railroad-diagrams.css railroad-diagrams.js railroad_diagrams.py randexp README.md lib randexp.js package.json regexpp README.md index.d.ts index.js package.json require-directory .travis.yml index.js package.json require-from-string index.js package.json readme.md require-main-filename CHANGELOG.md LICENSE.txt README.md index.js package.json resolve-from index.js package.json readme.md ret README.md lib index.js positions.js sets.js types.js util.js package.json rimraf CHANGELOG.md README.md bin.js package.json rimraf.js safe-buffer README.md index.d.ts index.js package.json semver CHANGELOG.md README.md bin semver.js classes comparator.js index.js range.js semver.js functions clean.js cmp.js coerce.js compare-build.js compare-loose.js compare.js diff.js eq.js gt.js gte.js inc.js lt.js lte.js major.js minor.js neq.js parse.js patch.js prerelease.js rcompare.js rsort.js satisfies.js sort.js valid.js index.js internal constants.js debug.js identifiers.js parse-options.js re.js package.json preload.js ranges gtr.js intersects.js ltr.js max-satisfying.js min-satisfying.js min-version.js outside.js simplify.js subset.js to-comparators.js valid.js set-blocking CHANGELOG.md LICENSE.txt README.md index.js package.json shebang-command index.js package.json readme.md shebang-regex index.d.ts index.js package.json readme.md slice-ansi index.js package.json readme.md sprintf-js README.md bower.json demo angular.html dist angular-sprintf.min.js sprintf.min.js gruntfile.js package.json src angular-sprintf.js sprintf.js test test.js string-width index.d.ts index.js package.json readme.md strip-ansi index.d.ts index.js package.json readme.md strip-json-comments index.d.ts index.js package.json readme.md supports-color browser.js index.js package.json readme.md table README.md dist alignString.d.ts alignString.js alignTableData.d.ts alignTableData.js calculateCellHeight.d.ts calculateCellHeight.js calculateCellWidths.d.ts calculateCellWidths.js calculateColumnWidths.d.ts calculateColumnWidths.js calculateRowHeights.d.ts calculateRowHeights.js 
createStream.d.ts createStream.js drawBorder.d.ts drawBorder.js drawContent.d.ts drawContent.js drawHeader.d.ts drawHeader.js drawRow.d.ts drawRow.js drawTable.d.ts drawTable.js generated validators.d.ts validators.js getBorderCharacters.d.ts getBorderCharacters.js index.d.ts index.js makeStreamConfig.d.ts makeStreamConfig.js makeTableConfig.d.ts makeTableConfig.js mapDataUsingRowHeights.d.ts mapDataUsingRowHeights.js padTableData.d.ts padTableData.js stringifyTableData.d.ts stringifyTableData.js table.d.ts table.js truncateTableData.d.ts truncateTableData.js types api.d.ts api.js internal.d.ts internal.js utils.d.ts utils.js validateConfig.d.ts validateConfig.js validateTableData.d.ts validateTableData.js wrapCell.d.ts wrapCell.js wrapString.d.ts wrapString.js wrapWord.d.ts wrapWord.js node_modules ajv .runkit_example.js README.md dist 2019.d.ts 2019.js 2020.d.ts 2020.js ajv.d.ts ajv.js compile codegen code.d.ts code.js index.d.ts index.js scope.d.ts scope.js errors.d.ts errors.js index.d.ts index.js jtd parse.d.ts parse.js serialize.d.ts serialize.js types.d.ts types.js names.d.ts names.js ref_error.d.ts ref_error.js resolve.d.ts resolve.js rules.d.ts rules.js util.d.ts util.js validate applicability.d.ts applicability.js boolSchema.d.ts boolSchema.js dataType.d.ts dataType.js defaults.d.ts defaults.js index.d.ts index.js keyword.d.ts keyword.js subschema.d.ts subschema.js core.d.ts core.js jtd.d.ts jtd.js refs data.json json-schema-2019-09 index.d.ts index.js meta applicator.json content.json core.json format.json meta-data.json validation.json schema.json json-schema-2020-12 index.d.ts index.js meta applicator.json content.json core.json format-annotation.json meta-data.json unevaluated.json validation.json schema.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json jtd-schema.d.ts jtd-schema.js runtime equal.d.ts equal.js parseJson.d.ts parseJson.js quote.d.ts quote.js timestamp.d.ts timestamp.js ucs2length.d.ts ucs2length.js validation_error.d.ts validation_error.js standalone index.d.ts index.js instance.d.ts instance.js types index.d.ts index.js json-schema.d.ts json-schema.js jtd-schema.d.ts jtd-schema.js vocabularies applicator additionalItems.d.ts additionalItems.js additionalProperties.d.ts additionalProperties.js allOf.d.ts allOf.js anyOf.d.ts anyOf.js contains.d.ts contains.js dependencies.d.ts dependencies.js dependentSchemas.d.ts dependentSchemas.js if.d.ts if.js index.d.ts index.js items.d.ts items.js items2020.d.ts items2020.js not.d.ts not.js oneOf.d.ts oneOf.js patternProperties.d.ts patternProperties.js prefixItems.d.ts prefixItems.js properties.d.ts properties.js propertyNames.d.ts propertyNames.js thenElse.d.ts thenElse.js code.d.ts code.js core id.d.ts id.js index.d.ts index.js ref.d.ts ref.js discriminator index.d.ts index.js types.d.ts types.js draft2020.d.ts draft2020.js draft7.d.ts draft7.js dynamic dynamicAnchor.d.ts dynamicAnchor.js dynamicRef.d.ts dynamicRef.js index.d.ts index.js recursiveAnchor.d.ts recursiveAnchor.js recursiveRef.d.ts recursiveRef.js errors.d.ts errors.js format format.d.ts format.js index.d.ts index.js jtd discriminator.d.ts discriminator.js elements.d.ts elements.js enum.d.ts enum.js error.d.ts error.js index.d.ts index.js metadata.d.ts metadata.js nullable.d.ts nullable.js optionalProperties.d.ts optionalProperties.js properties.d.ts properties.js ref.d.ts ref.js type.d.ts type.js union.d.ts union.js values.d.ts values.js metadata.d.ts metadata.js next.d.ts next.js unevaluated index.d.ts index.js 
unevaluatedItems.d.ts unevaluatedItems.js unevaluatedProperties.d.ts unevaluatedProperties.js validation const.d.ts const.js dependentRequired.d.ts dependentRequired.js enum.d.ts enum.js index.d.ts index.js limitContains.d.ts limitContains.js limitItems.d.ts limitItems.js limitLength.d.ts limitLength.js limitNumber.d.ts limitNumber.js limitProperties.d.ts limitProperties.js multipleOf.d.ts multipleOf.js pattern.d.ts pattern.js required.d.ts required.js uniqueItems.d.ts uniqueItems.js lib 2019.ts 2020.ts ajv.ts compile codegen code.ts index.ts scope.ts errors.ts index.ts jtd parse.ts serialize.ts types.ts names.ts ref_error.ts resolve.ts rules.ts util.ts validate applicability.ts boolSchema.ts dataType.ts defaults.ts index.ts keyword.ts subschema.ts core.ts jtd.ts refs data.json json-schema-2019-09 index.ts meta applicator.json content.json core.json format.json meta-data.json validation.json schema.json json-schema-2020-12 index.ts meta applicator.json content.json core.json format-annotation.json meta-data.json unevaluated.json validation.json schema.json json-schema-draft-06.json json-schema-draft-07.json json-schema-secure.json jtd-schema.ts runtime equal.ts parseJson.ts quote.ts timestamp.ts ucs2length.ts validation_error.ts standalone index.ts instance.ts types index.ts json-schema.ts jtd-schema.ts vocabularies applicator additionalItems.ts additionalProperties.ts allOf.ts anyOf.ts contains.ts dependencies.ts dependentSchemas.ts if.ts index.ts items.ts items2020.ts not.ts oneOf.ts patternProperties.ts prefixItems.ts properties.ts propertyNames.ts thenElse.ts code.ts core id.ts index.ts ref.ts discriminator index.ts types.ts draft2020.ts draft7.ts dynamic dynamicAnchor.ts dynamicRef.ts index.ts recursiveAnchor.ts recursiveRef.ts errors.ts format format.ts index.ts jtd discriminator.ts elements.ts enum.ts error.ts index.ts metadata.ts nullable.ts optionalProperties.ts properties.ts ref.ts type.ts union.ts values.ts metadata.ts next.ts unevaluated index.ts unevaluatedItems.ts unevaluatedProperties.ts validation const.ts dependentRequired.ts enum.ts index.ts limitContains.ts limitItems.ts limitLength.ts limitNumber.ts limitProperties.ts multipleOf.ts pattern.ts required.ts uniqueItems.ts package.json json-schema-traverse .eslintrc.yml .github FUNDING.yml workflows build.yml publish.yml README.md index.d.ts index.js package.json spec .eslintrc.yml fixtures schema.js index.spec.js package.json tar README.md index.js lib create.js extract.js get-write-flag.js header.js high-level-opt.js large-numbers.js list.js mkdir.js mode-fix.js normalize-windows-path.js pack.js parse.js path-reservations.js pax.js read-entry.js replace.js strip-absolute-path.js strip-trailing-slashes.js types.js unpack.js update.js warn-mixin.js winchars.js write-entry.js package.json text-table .travis.yml example align.js center.js dotalign.js doubledot.js table.js index.js package.json test align.js ansi-colors.js center.js dotalign.js doubledot.js table.js tr46 LICENSE.md README.md index.js lib mappingTable.json regexes.js package.json ts-mixer CHANGELOG.md README.md dist cjs decorator.js index.js mixin-tracking.js mixins.js proxy.js settings.js types.js util.js esm index.js index.min.js types decorator.d.ts index.d.ts mixin-tracking.d.ts mixins.d.ts proxy.d.ts settings.d.ts types.d.ts util.d.ts package.json type-check README.md lib check.js index.js parse-type.js package.json type-fest base.d.ts index.d.ts package.json readme.md source async-return-type.d.ts asyncify.d.ts basic.d.ts conditional-except.d.ts 
conditional-keys.d.ts conditional-pick.d.ts entries.d.ts entry.d.ts except.d.ts fixed-length-array.d.ts iterable-element.d.ts literal-union.d.ts merge-exclusive.d.ts merge.d.ts mutable.d.ts opaque.d.ts package-json.d.ts partial-deep.d.ts promisable.d.ts promise-value.d.ts readonly-deep.d.ts require-at-least-one.d.ts require-exactly-one.d.ts set-optional.d.ts set-required.d.ts set-return-type.d.ts stringified.d.ts tsconfig-json.d.ts union-to-intersection.d.ts utilities.d.ts value-of.d.ts ts41 camel-case.d.ts delimiter-case.d.ts index.d.ts kebab-case.d.ts pascal-case.d.ts snake-case.d.ts universal-url README.md browser.js index.js package.json uri-js README.md dist es5 uri.all.d.ts uri.all.js uri.all.min.d.ts uri.all.min.js esnext index.d.ts index.js regexps-iri.d.ts regexps-iri.js regexps-uri.d.ts regexps-uri.js schemes http.d.ts http.js https.d.ts https.js mailto.d.ts mailto.js urn-uuid.d.ts urn-uuid.js urn.d.ts urn.js ws.d.ts ws.js wss.d.ts wss.js uri.d.ts uri.js util.d.ts util.js package.json uuid CHANGELOG.md CONTRIBUTING.md LICENSE.md README.md dist esm-browser index.js md5.js nil.js parse.js regex.js rng.js sha1.js stringify.js v1.js v3.js v35.js v4.js v5.js validate.js version.js esm-node index.js md5.js nil.js parse.js regex.js rng.js sha1.js stringify.js v1.js v3.js v35.js v4.js v5.js validate.js version.js index.js md5-browser.js md5.js nil.js parse.js regex.js rng-browser.js rng.js sha1-browser.js sha1.js stringify.js umd uuid.min.js uuidNIL.min.js uuidParse.min.js uuidStringify.min.js uuidValidate.min.js uuidVersion.min.js uuidv1.min.js uuidv3.min.js uuidv4.min.js uuidv5.min.js uuid-bin.js v1.js v3.js v35.js v4.js v5.js validate.js version.js package.json uuidv4 .eslintrc.json .releaserc.json CHANGELOG.md LICENSE.txt README.md build lib uuidv4.d.ts uuidv4.js lib uuidv4.ts package.json tsconfig.json v8-compile-cache CHANGELOG.md README.md package.json v8-compile-cache.js visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js URLSearchParams-impl.js URLSearchParams.js infra.js public-api.js url-state-machine.js urlencoded.js utils.js package.json which-module CHANGELOG.md README.md index.js package.json which CHANGELOG.md README.md package.json which.js word-wrap README.md index.d.ts index.js package.json wrap-ansi index.js package.json readme.md wrappy README.md package.json wrappy.js y18n CHANGELOG.md README.md build lib cjs.js index.js platform-shims node.js package.json yallist README.md iterator.js package.json yallist.js yargs-parser CHANGELOG.md LICENSE.txt README.md browser.js build lib index.js string-utils.js tokenize-arg-string.js yargs-parser-types.js yargs-parser.js package.json yargs CHANGELOG.md README.md build lib argsert.js command.js completion-templates.js completion.js middleware.js parse-command.js typings common-types.js yargs-parser-types.js usage.js utils apply-extends.js is-promise.js levenshtein.js obj-filter.js process-argv.js 
set-blocking.js which-module.js validation.js yargs-factory.js yerror.js helpers index.js package.json locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json package-lock.json package.json
# regexpp [![npm version](https://img.shields.io/npm/v/regexpp.svg)](https://www.npmjs.com/package/regexpp) [![Downloads/month](https://img.shields.io/npm/dm/regexpp.svg)](http://www.npmtrends.com/regexpp) [![Build Status](https://github.com/mysticatea/regexpp/workflows/CI/badge.svg)](https://github.com/mysticatea/regexpp/actions) [![codecov](https://codecov.io/gh/mysticatea/regexpp/branch/master/graph/badge.svg)](https://codecov.io/gh/mysticatea/regexpp) [![Dependency Status](https://david-dm.org/mysticatea/regexpp.svg)](https://david-dm.org/mysticatea/regexpp) A regular expression parser for ECMAScript. ## 💿 Installation ```bash $ npm install regexpp ``` - require Node.js 8 or newer. ## 📖 Usage ```ts import { AST, RegExpParser, RegExpValidator, RegExpVisitor, parseRegExpLiteral, validateRegExpLiteral, visitRegExpAST } from "regexpp" ``` ### parseRegExpLiteral(source, options?) Parse a given regular expression literal then make AST object. This is equivalent to `new RegExpParser(options).parseLiteral(source)`. - **Parameters:** - `source` (`string | RegExp`) The source code to parse. - `options?` ([`RegExpParser.Options`]) The options to parse. - **Return:** - The AST of the regular expression. ### validateRegExpLiteral(source, options?) Validate a given regular expression literal. This is equivalent to `new RegExpValidator(options).validateLiteral(source)`. - **Parameters:** - `source` (`string`) The source code to validate. - `options?` ([`RegExpValidator.Options`]) The options to validate. ### visitRegExpAST(ast, handlers) Visit each node of a given AST. This is equivalent to `new RegExpVisitor(handlers).visit(ast)`. - **Parameters:** - `ast` ([`AST.Node`]) The AST to visit. - `handlers` ([`RegExpVisitor.Handlers`]) The callbacks. ### RegExpParser #### new RegExpParser(options?) - **Parameters:** - `options?` ([`RegExpParser.Options`]) The options to parse. #### parser.parseLiteral(source, start?, end?) Parse a regular expression literal. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"/abc/g"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - **Return:** - The AST of the regular expression. #### parser.parsePattern(source, start?, end?, uFlag?) Parse a regular expression pattern. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"abc"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - `uFlag?` (`boolean`) The flag to enable Unicode mode. - **Return:** - The AST of the regular expression pattern. #### parser.parseFlags(source, start?, end?) Parse a regular expression flags. - **Parameters:** - `source` (`string`) The source code to parse. E.g. `"gim"`. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - **Return:** - The AST of the regular expression flags. ### RegExpValidator #### new RegExpValidator(options) - **Parameters:** - `options` ([`RegExpValidator.Options`]) The options to validate. #### validator.validateLiteral(source, start, end) Validate a regular expression literal. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. 
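To complement the API reference above, here is a small usage sketch of `parseRegExpLiteral` and `visitRegExpAST` in TypeScript; the pattern and the specific handlers used are illustrative only (handlers follow the `on<NodeType>Enter`/`on<NodeType>Leave` naming of [`RegExpVisitor.Handlers`]):

```ts
import { parseRegExpLiteral, visitRegExpAST } from "regexpp";

// Parse a regular expression literal into an AST...
const ast = parseRegExpLiteral("/a(b+)c/u");

// ...then walk it, reacting to the node types you care about.
visitRegExpAST(ast, {
  onCapturingGroupEnter(node) {
    console.log("capturing group:", node.raw); // "(b+)"
  },
  onQuantifierEnter(node) {
    console.log("quantifier:", node.raw); // "b+"
  },
});
```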
#### validator.validatePattern(source, start, end, uFlag) Validate a regular expression pattern. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. - `uFlag?` (`boolean`) The flag to enable Unicode mode. #### validator.validateFlags(source, start, end) Validate a regular expression flags. - **Parameters:** - `source` (`string`) The source code to validate. - `start?` (`number`) The start index in the source code. Default is `0`. - `end?` (`number`) The end index in the source code. Default is `source.length`. ### RegExpVisitor #### new RegExpVisitor(handlers) - **Parameters:** - `handlers` ([`RegExpVisitor.Handlers`]) The callbacks. #### visitor.visit(ast) Validate a regular expression literal. - **Parameters:** - `ast` ([`AST.Node`]) The AST to visit. ## 📰 Changelog - [GitHub Releases](https://github.com/mysticatea/regexpp/releases) ## 🍻 Contributing Welcome contributing! Please use GitHub's Issues/PRs. ### Development Tools - `npm test` runs tests and measures coverage. - `npm run build` compiles TypeScript source code to `index.js`, `index.js.map`, and `index.d.ts`. - `npm run clean` removes the temporary files which are created by `npm test` and `npm run build`. - `npm run lint` runs ESLint. - `npm run update:test` updates test fixtures. - `npm run update:ids` updates `src/unicode/ids.ts`. - `npm run watch` runs tests with `--watch` option. [`AST.Node`]: src/ast.ts#L4 [`RegExpParser.Options`]: src/parser.ts#L539 [`RegExpValidator.Options`]: src/validator.ts#L127 [`RegExpVisitor.Handlers`]: src/visitor.ts#L204 # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. 
Example [_app.js_](./examples/node/app.js):

```js
var debug = require('debug')('http')
  , http = require('http')
  , name = 'My App';

// fake app

debug('booting %o', name);

http.createServer(function(req, res){
  debug(req.method + ' ' + req.url);
  res.end('hello\n');
}).listen(3000, function(){
  debug('listening');
});

// fake worker of some kind

require('./worker');
```

Example [_worker.js_](./examples/node/worker.js):

```js
var a = require('debug')('worker:a')
  , b = require('debug')('worker:b');

function work() {
  a('doing lots of uninteresting work');
  setTimeout(work, Math.random() * 1000);
}

work();

function workb() {
  b('doing some work');
  setTimeout(workb, Math.random() * 2000);
}

workb();
```

The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples:

<img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png">
<img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png">
<img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png">

#### Windows note

On Windows the environment variable is set using the `set` command.

```cmd
set DEBUG=*,-not_this
```

Note that PowerShell uses different syntax to set environment variables.

```cmd
$env:DEBUG = "*,-not_this"
```

Then, run the program to be debugged as usual.

## Namespace Colors

Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to.

#### Node.js

In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors.

<img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png">

#### Web Browser

Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version).

<img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png">

## Millisecond diff

When actively developing an application it can be useful to see the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well; the "+NNNms" will show you how much time was spent between calls.

<img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png">

When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below:

<img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png">

## Conventions

If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names.
If you have more than one debugger you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output.

## Wildcards

The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session". Instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`.

You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:".

## Environment Variables

When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging:

| Name                | Purpose                                            |
|---------------------|----------------------------------------------------|
| `DEBUG`             | Enables/disables specific debugging namespaces.    |
| `DEBUG_HIDE_DATE`   | Hide date from debug output (non-TTY).             |
| `DEBUG_COLORS`      | Whether or not to use colors in the debug output.  |
| `DEBUG_DEPTH`       | Object inspection depth.                           |
| `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects.      |

__Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list.

## Formatters

Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters:

| Formatter | Representation |
|-----------|----------------|
| `%O`      | Pretty-print an Object on multiple lines. |
| `%o`      | Pretty-print an Object all on a single line. |
| `%s`      | String. |
| `%d`      | Number (both integer and float). |
| `%j`      | JSON. Replaced with the string '[Circular]' if the argument contains circular references. |
| `%%`      | Single percent sign ('%'). This does not consume an argument. |

### Custom formatters

You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like:

```js
const createDebug = require('debug')
createDebug.formatters.h = (v) => {
  return v.toString('hex')
}

// …elsewhere
const debug = createDebug('foo')
debug('this is hex: %h', new Buffer('hello world'))
//   foo this is hex: 68656c6c6f20776f726c6421 +0ms
```

## Browser Support

You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself.

Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`:

```js
localStorage.debug = 'worker:*'
```

And then refresh the page.
```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. [[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a 
href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. 
[[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # lodash.sortby v4.7.0 The [lodash](https://lodash.com/) method `_.sortBy` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.sortby ``` In Node.js: ```js var sortBy = require('lodash.sortby'); ``` See the [documentation](https://lodash.com/docs#sortBy) or [package source](https://github.com/lodash/lodash/blob/4.7.0-npm-packages/lodash.sortby) for more details. # brace-expansion [Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html), as known from sh/bash, in JavaScript. 
[![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion) [![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion) [![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/) [![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion) ## Example ```js var expand = require('brace-expansion'); expand('file-{a,b,c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('-v{,,}') // => ['-v', '-v', '-v'] expand('file{0..2}.jpg') // => ['file0.jpg', 'file1.jpg', 'file2.jpg'] expand('file-{a..c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('file{2..0}.jpg') // => ['file2.jpg', 'file1.jpg', 'file0.jpg'] expand('file{0..4..2}.jpg') // => ['file0.jpg', 'file2.jpg', 'file4.jpg'] expand('file-{a..e..2}.jpg') // => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg'] expand('file{00..10..5}.jpg') // => ['file00.jpg', 'file05.jpg', 'file10.jpg'] expand('{{A..C},{a..c}}') // => ['A', 'B', 'C', 'a', 'b', 'c'] expand('ppp{,config,oe{,conf}}') // => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf'] ``` ## API ```js var expand = require('brace-expansion'); ``` ### var expanded = expand(str) Return an array of all possible and valid expansions of `str`. If none are found, `[str]` is returned. Valid expansions are: ```js /^(.*,)+(.+)?$/ // {a,b,...} ``` A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` A numeric sequence from `x` to `y` inclusive, with optional increment. If `x` or `y` start with a leading `0`, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` An alphabetic sequence from `x` to `y` inclusive, with optional increment. `x` and `y` must be exactly one character, and if given, `incr` must be a number. For compatibility reasons, the string `${` is not eligible for brace expansion. ## Installation With [npm](https://npmjs.org) do: ```bash npm install brace-expansion ``` ## Contributors - [Julian Gruber](https://github.com/juliangruber) - [Isaac Z. Schlueter](https://github.com/isaacs) ## Sponsors This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)! Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)! ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

# fs-minipass

Filesystem streams based on [minipass](http://npm.im/minipass).

4 classes are exported:

- ReadStream
- ReadStreamSync
- WriteStream
- WriteStreamSync

When using `ReadStreamSync`, all of the data is made available immediately upon consuming the stream. Nothing is buffered in memory when the stream is constructed. If the stream is piped to a writer, then it will synchronously `read()` and emit data into the writer as fast as the writer can consume it. (That is, it will respect backpressure.) If you call `stream.read()` then it will read the entire file and return the contents.

When using `WriteStreamSync`, every write is flushed to the file synchronously. If your writes all come in a single tick, then it'll write it all out in a single tick. It's as synchronous as you are.

The async versions work much like their node builtin counterparts, with the exception of introducing significantly less Stream machinery overhead.

## USAGE

It's just streams, you pipe them or read() them or write() to them.

```js
const fsm = require('fs-minipass')
const readStream = new fsm.ReadStream('file.txt')
const writeStream = new fsm.WriteStream('output.txt')
writeStream.write('some file header or whatever\n')
readStream.pipe(writeStream)
```

## ReadStream(path, options)

Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option.

Options:

- `fd` Pass in a numeric file descriptor, if the file is already open.
- `readSize` The size of reads to do, defaults to 16MB
- `size` The size of the file, if known. Prevents zero-byte read() call at the end.
- `autoClose` Set to `false` to prevent the file descriptor from being closed when the file is done being read.

## WriteStream(path, options)

Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option.

Options:

- `fd` Pass in a numeric file descriptor, if the file is already open.
- `mode` The mode to create the file with. Defaults to `0o666`.
- `start` The position in the file to start writing. If not specified, then the file will start writing at position zero, and be truncated by default.
- `autoClose` Set to `false` to prevent the file descriptor from being closed when the stream is ended.
- `flags` Flags to use when opening the file. Irrelevant if `fd` is passed in, since file won't be opened in that case. Defaults to `'a'` if a `start` position is specified, or `'w'` otherwise.

The AssemblyScript Runtime
==========================

The runtime provides the functionality necessary to dynamically allocate and deallocate memory of objects, arrays and buffers, as well as collect garbage that is no longer used. The current implementation is either a Two-Color Mark & Sweep (TCMS) garbage collector that must be called manually when the execution stack is unwound or an Incremental Tri-Color Mark & Sweep (ITCMS) garbage collector that is fully automated with a shadow stack, implemented on top of a Two-Level Segregate Fit (TLSF) memory manager.
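For orientation before the details below, here is a minimal host-side sketch of driving the exported runtime from JavaScript, using the `__new`, `__pin`, `__unpin` and `__collect` functions documented under Interface; the `instance` object and `MY_CLASS_ID` are hypothetical placeholders:

```js
// Host-side sketch only; assumes the module was compiled with --exportRuntime.
// `instance` (a WebAssembly.Instance) and `MY_CLASS_ID` are hypothetical placeholders.
const { memory, __new, __pin, __unpin, __collect } = instance.exports;

// Allocate a 12-byte managed object and pin it so it cannot be collected
// while the host still holds the raw pointer.
const ptr = __pin(__new(12, MY_CLASS_ID));

// ... read or write the object via new Uint8Array(memory.buffer, ptr, 12) ...

__unpin(ptr);  // allow the object to become collectable again
__collect();   // optionally force a full collection
```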
It's not designed to be the fastest of its kind, but intentionally focuses on simplicity and ease of integration until we can replace it with the real deal, i.e. Wasm GC. Interface --------- ### Garbage collector / `--exportRuntime` * **__new**(size: `usize`, id: `u32` = 0): `usize`<br /> Dynamically allocates a GC object of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. GC-allocated objects cannot be used with `__realloc` and `__free`. * **__pin**(ptr: `usize`): `usize`<br /> Pins the object pointed to by `ptr` externally so it and its directly reachable members and indirectly reachable objects do not become garbage collected. * **__unpin**(ptr: `usize`): `void`<br /> Unpins the object pointed to by `ptr` externally so it can become garbage collected. * **__collect**(): `void`<br /> Performs a full garbage collection. ### Internals * **__alloc**(size: `usize`): `usize`<br /> Dynamically allocates a chunk of memory of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. * **__realloc**(ptr: `usize`, size: `usize`): `usize`<br /> Dynamically changes the size of a chunk of memory, possibly moving it to a new address. * **__free**(ptr: `usize`): `void`<br /> Frees a dynamically allocated chunk of memory by its address. * **__renew**(ptr: `usize`, size: `usize`): `usize`<br /> Like `__realloc`, but for `__new`ed GC objects. * **__link**(parentPtr: `usize`, childPtr: `usize`, expectMultiple: `bool`): `void`<br /> Introduces a link from a parent object to a child object, i.e. upon `parent.field = child`. * **__visit**(ptr: `usize`, cookie: `u32`): `void`<br /> Concrete visitor implementation called during traversal. Cookie can be used to indicate one of multiple operations. * **__visit_globals**(cookie: `u32`): `void`<br /> Calls `__visit` on each global that is of a managed type. * **__visit_members**(ptr: `usize`, cookie: `u32`): `void`<br /> Calls `__visit` on each member of the object pointed to by `ptr`. * **__typeinfo**(id: `u32`): `RTTIFlags`<br /> Obtains the runtime type information for objects with the specified runtime id. Runtime type information is a set of flags indicating whether a type is managed, an array or similar, and what the relevant alignments when creating an instance externally are etc. * **__instanceof**(ptr: `usize`, classId: `u32`): `bool`<br /> Tests if the object pointed to by `ptr` is an instance of the specified class id. ITCMS / `--runtime incremental` ----- The Incremental Tri-Color Mark & Sweep garbage collector maintains a separate shadow stack of managed values in the background to achieve full automation. Maintaining another stack introduces some overhead compared to the simpler Two-Color Mark & Sweep garbage collector, but makes it independent of whether the execution stack is unwound or not when it is invoked, so the garbage collector can run interleaved with the program. There are several constants one can experiment with to tweak ITCMS's automation: * `--use ASC_GC_GRANULARITY=1024`<br /> How often to interrupt. The default of 1024 means "interrupt each 1024 bytes allocated". * `--use ASC_GC_STEPFACTOR=200`<br /> How long to interrupt. The default of 200% means "run at double the speed of allocations". * `--use ASC_GC_IDLEFACTOR=200`<br /> How long to idle. The default of 200% means "wait for memory to double before kicking in again". 
* `--use ASC_GC_MARKCOST=1`<br />
  How costly it is to mark one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`.
* `--use ASC_GC_SWEEPCOST=10`<br />
  How costly it is to sweep one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`.

TCMS / `--runtime minimal`
----

If automation and low pause times aren't strictly necessary, using the Two-Color Mark & Sweep garbage collector instead by invoking collection manually at appropriate times when the execution stack is unwound may be more performant, as it is simpler and has less overhead. The execution stack is typically unwound when invoking the collector externally, at a place that is not indirectly called from Wasm.

STUB / `--runtime stub`
----

The stub is a maximally minimal runtime substitute, consisting of a simple and fast bump allocator with no means of freeing up memory again, except when freeing the respective most recently allocated object on top of the bump. Useful where memory is not a concern, and/or where it is sufficient to destroy the whole module including any potential garbage after execution.

See also: [Garbage collection](https://www.assemblyscript.org/garbage-collection.html)

<table><thead>
  <tr>
    <th>Linux</th>
    <th>OS X</th>
    <th>Windows</th>
    <th>Coverage</th>
    <th>Downloads</th>
  </tr>
</thead><tbody><tr>
  <td colspan="2" align="center">
    <a href="https://travis-ci.org/kaelzhang/node-ignore">
    <img src="https://travis-ci.org/kaelzhang/node-ignore.svg?branch=master" alt="Build Status" /></a>
  </td>
  <td align="center">
    <a href="https://ci.appveyor.com/project/kaelzhang/node-ignore">
    <img src="https://ci.appveyor.com/api/projects/status/github/kaelzhang/node-ignore?branch=master&svg=true" alt="Windows Build Status" /></a>
  </td>
  <td align="center">
    <a href="https://codecov.io/gh/kaelzhang/node-ignore">
    <img src="https://codecov.io/gh/kaelzhang/node-ignore/branch/master/graph/badge.svg" alt="Coverage Status" /></a>
  </td>
  <td align="center">
    <a href="https://www.npmjs.org/package/ignore">
    <img src="http://img.shields.io/npm/dm/ignore.svg" alt="npm module downloads per month" /></a>
  </td>
</tr></tbody></table>

# ignore

`ignore` is a manager, filter and parser which is implemented in pure JavaScript according to the .gitignore [spec](http://git-scm.com/docs/gitignore).

Pay attention that [`minimatch`](https://www.npmjs.org/package/minimatch) does not work in the gitignore way. To filter filenames according to a .gitignore file, I recommend this module.

##### Tested on

- Linux + Node: `0.8` - `7.x`
- Windows + Node: `0.10` - `7.x`, node < `0.10` is not tested due to the lack of support of appveyor.

Actually, `ignore` does not rely on any specific version of node.

Since `4.0.0`, `ignore` no longer supports node < 6 by default; to use it in node < 6, `require('ignore/legacy')`. For details, see [CHANGELOG](https://github.com/kaelzhang/node-ignore/blob/master/CHANGELOG.md).

## Table Of Main Contents

- [Usage](#usage)
- [`Pathname` Conventions](#pathname-conventions)
- [Guide for 2.x -> 3.x](#upgrade-2x---3x)
- [Guide for 3.x -> 4.x](#upgrade-3x---4x)
- See Also:
  - [`glob-gitignore`](https://www.npmjs.com/package/glob-gitignore) matches files using patterns and filters them according to gitignore rules.
## Usage

```js
import ignore from 'ignore'
const ig = ignore().add(['.abc/*', '!.abc/d/'])
```

### Filter the given paths

```js
const paths = [
  '.abc/a.js',    // filtered out
  '.abc/d/e.js'   // included
]

ig.filter(paths)        // ['.abc/d/e.js']
ig.ignores('.abc/a.js') // true
```

### As the filter function

```js
paths.filter(ig.createFilter()); // ['.abc/d/e.js']
```

### Win32 paths will be handled

```js
ig.filter(['.abc\\a.js', '.abc\\d\\e.js'])
// if the code above runs on windows, the result will be
// ['.abc\\d\\e.js']
```

## Why another ignore?

- `ignore` is a standalone module, and is much simpler so that it could easily work with other programs, unlike [isaacs](https://npmjs.org/~isaacs)'s [fstream-ignore](https://npmjs.org/package/fstream-ignore) which must work with the modules of the fstream family.

- `ignore` only contains utility methods to filter paths according to the specified ignore rules, so
  - `ignore` never tries to find out ignore rules by traversing directories or fetching from git configurations.
  - `ignore` doesn't care about sub-modules of git projects.

- Exactly according to [gitignore man page](http://git-scm.com/docs/gitignore), fixes some known matching issues of fstream-ignore, such as:
  - '`/*.js`' should only match '`a.js`', but not '`abc/a.js`'.
  - '`**/foo`' should match '`foo`' anywhere.
  - Prevent re-including a file if a parent directory of that file is excluded.
  - Handle trailing whitespaces:
    - `'a '` (one space) should not match `'a  '` (two spaces).
    - `'a \ '` matches `'a '`

- All test cases are verified with the result of `git check-ignore`.

# Methods

## .add(pattern: string | Ignore): this
## .add(patterns: Array<string | Ignore>): this

- **pattern** `String | Ignore` An ignore pattern string, or the `Ignore` instance
- **patterns** `Array<String | Ignore>` Array of ignore patterns.

Adds a rule or several rules to the current manager.

Returns `this`

Notice that a line starting with `'#'` (hash) is treated as a comment. Put a backslash (`'\'`) in front of the first hash for patterns that begin with a hash, if you want to ignore a file with a hash at the beginning of the filename.

```js
ignore().add('#abc').ignores('#abc')    // false
ignore().add('\#abc').ignores('#abc')   // true
```

`pattern` could either be a line of ignore pattern or a string of multiple ignore patterns, which means we could just `ignore().add()` the content of an ignore file:

```js
ignore()
  .add(fs.readFileSync(filenameOfGitignore).toString())
  .filter(filenames)
```

`pattern` could also be an `ignore` instance, so that we could easily inherit the rules of another `Ignore` instance.

## <strike>.addIgnoreFile(path)</strike>

REMOVED in `3.x` for now.

To upgrade `ignore@2.x` up to `3.x`, use

```js
import fs from 'fs'

if (fs.existsSync(filename)) {
  ignore().add(fs.readFileSync(filename).toString())
}
```

instead.

## .filter(paths: Array<Pathname>): Array<Pathname>

```ts
type Pathname = string
```

Filters the given array of pathnames, and returns the filtered array.

- **paths** `Array.<Pathname>` The array of `pathname`s to be filtered.

### `Pathname` Conventions:

#### 1. `Pathname` should be a `path.relative()`d pathname

`Pathname` should be a string that has been `path.join()`ed, or the return value of `path.relative()` to the current directory.

```js
// WRONG
ig.ignores('./abc')

// WRONG, for it will never happen.
// If the gitignore rule locates at the root directory,
// `'/abc'` should be changed to `'abc'`.
// ``` // path.relative('/', '/abc') -> 'abc' // ``` ig.ignores('/abc') // Right ig.ignores('abc') // Right ig.ignores(path.join('./abc')) // path.join('./abc') -> 'abc' ``` In other words, each `Pathname` here should be a relative path to the directory of the gitignore rules. Suppose the dir structure is: ``` /path/to/your/repo |-- a | |-- a.js | |-- .b | |-- .c |-- .DS_store ``` Then the `paths` might be like this: ```js [ 'a/a.js' '.b', '.c/.DS_store' ] ``` Usually, you could use [`glob`](http://npmjs.org/package/glob) with `option.mark = true` to fetch the structure of the current directory: ```js import glob from 'glob' glob('**', { // Adds a / character to directory matches. mark: true }, (err, files) => { if (err) { return console.error(err) } let filtered = ignore().add(patterns).filter(files) console.log(filtered) }) ``` #### 2. filenames and dirnames `node-ignore` does NO `fs.stat` during path matching, so for the example below: ```js ig.add('config/') // `ig` does NOT know if 'config' is a normal file, directory or something ig.ignores('config') // And it returns `false` ig.ignores('config/') // returns `true` ``` Specially for people who develop some library based on `node-ignore`, it is important to understand that. ## .ignores(pathname: Pathname): boolean > new in 3.2.0 Returns `Boolean` whether `pathname` should be ignored. ```js ig.ignores('.abc/a.js') // true ``` ## .createFilter() Creates a filter function which could filter an array of paths with `Array.prototype.filter`. Returns `function(path)` the filter function. ## `options.ignorecase` since 4.0.0 Similar as the `core.ignorecase` option of [git-config](https://git-scm.com/docs/git-config), `node-ignore` will be case insensitive if `options.ignorecase` is set to `true` (default value), otherwise case sensitive. ```js const ig = ignore({ ignorecase: false }) ig.add('*.png') ig.ignores('*.PNG') // false ``` **** # Upgrade Guide ## Upgrade 2.x -> 3.x - All `options` of 2.x are unnecessary and removed, so just remove them. - `ignore()` instance is no longer an [`EventEmitter`](nodejs.org/api/events.html), and all events are unnecessary and removed. - `.addIgnoreFile()` is removed, see the [.addIgnoreFile](#addignorefilepath) section for details. ## Upgrade 3.x -> 4.x Since `4.0.0`, `ignore` will no longer support node < 6, to use `ignore` in node < 6: ```js var ignore = require('ignore/legacy') ``` **** # Collaborators - [@whitecolor](https://github.com/whitecolor) *Alex* - [@SamyPesse](https://github.com/SamyPesse) *Samy Pessé* - [@azproduction](https://github.com/azproduction) *Mikhail Davydov* - [@TrySound](https://github.com/TrySound) *Bogdan Chadkin* - [@JanMattner](https://github.com/JanMattner) *Jan Mattner* - [@ntwb](https://github.com/ntwb) *Stephen Edgar* - [@kasperisager](https://github.com/kasperisager) *Kasper Isager* - [@sandersn](https://github.com/sandersn) *Nathan Shively-Sanders* # randexp.js randexp will generate a random string that matches a given RegExp Javascript object. 
[![Build Status](https://secure.travis-ci.org/fent/randexp.js.svg)](http://travis-ci.org/fent/randexp.js) [![Dependency Status](https://david-dm.org/fent/randexp.js.svg)](https://david-dm.org/fent/randexp.js) [![codecov](https://codecov.io/gh/fent/randexp.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/randexp.js) # Usage ```js var RandExp = require('randexp'); // supports grouping and piping new RandExp(/hello+ (world|to you)/).gen(); // => hellooooooooooooooooooo world // sets and ranges and references new RandExp(/<([a-z]\w{0,20})>foo<\1>/).gen(); // => <m5xhdg>foo<m5xhdg> // wildcard new RandExp(/random stuff: .+/).gen(); // => random stuff: l3m;Hf9XYbI [YPaxV>U*4-_F!WXQh9>;rH3i l!8.zoh?[utt1OWFQrE ^~8zEQm]~tK // ignore case new RandExp(/xxx xtreme dragon warrior xxx/i).gen(); // => xxx xtReME dRAGON warRiOR xXX // dynamic regexp shortcut new RandExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i'); // is the same as new RandExp(new RegExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i')); ``` If you're only going to use `gen()` once with a regexp and want slightly shorter syntax for it ```js var randexp = require('randexp').randexp; randexp(/[1-6]/); // 4 randexp('great|good( job)?|excellent'); // great ``` If you miss the old syntax ```js require('randexp').sugar(); /yes|no|maybe|i don't know/.gen(); // maybe ``` # Motivation Regular expressions are used in every language, every programmer is familiar with them. Regex can be used to easily express complex strings. What better way to generate a random string than with a language you can use to express the string you want? Thanks to [String-Random](http://search.cpan.org/~steve/String-Random-0.22/lib/String/Random.pm) for giving me the idea to make this in the first place and [randexp](https://github.com/benburkert/randexp) for the sweet `.gen()` syntax. # Default Range The default generated character range includes printable ASCII. In order to add or remove characters, a `defaultRange` attribute is exposed. you can `subtract(from, to)` and `add(from, to)` ```js var randexp = new RandExp(/random stuff: .+/); randexp.defaultRange.subtract(32, 126); randexp.defaultRange.add(0, 65535); randexp.gen(); // => random stuff: 湐箻ໜ䫴␩⶛㳸長���邓蕲뤀쑡篷皇硬剈궦佔칗븛뀃匫鴔事좍ﯣ⭼ꝏ䭍詳蒂䥂뽭 ``` # Custom PRNG The default randomness is provided by `Math.random()`. If you need to use a seedable or cryptographic PRNG, you can override `RandExp.prototype.randInt` or `randexp.randInt` (where `randexp` is an instance of `RandExp`). `randInt(from, to)` accepts an inclusive range and returns a randomly selected number within that range. # Infinite Repetitionals Repetitional tokens such as `*`, `+`, and `{3,}` have an infinite max range. In this case, randexp looks at its min and adds 100 to it to get a useable max value. If you want to use another int other than 100 you can change the `max` property in `RandExp.prototype` or the RandExp instance. ```js var randexp = new RandExp(/no{1,}/); randexp.max = 1000000; ``` With `RandExp.sugar()` ```js var regexp = /(hi)*/; regexp.max = 1000000; ``` # Bad Regular Expressions There are some regular expressions which can never match any string. * Ones with badly placed positionals such as `/a^/` and `/$c/m`. Randexp will ignore positional tokens. * Back references to non-existing groups like `/(a)\1\2/`. Randexp will ignore those references, returning an empty string for them. If the group exists only after the reference is used such as in `/\1 (hey)/`, it will too be ignored. 
* Custom negated character sets with two sets inside that cancel each other out. Example: `/[^\w\W]/`. If you give this to randexp, it will return an empty string for this set since it can't match anything. # Projects based on randexp.js ## JSON-Schema Faker Use generators to populate JSON Schema samples. See: [jsf on github](https://github.com/json-schema-faker/json-schema-faker/) and [jsf demo page](http://json-schema-faker.js.org/). # Install ### Node.js npm install randexp ### Browser Download the [minified version](https://github.com/fent/randexp.js/releases) from the latest release. # Tests Tests are written with [mocha](https://mochajs.org) ```bash npm test ``` # License MIT discontinuous-range =================== ``` DiscontinuousRange(1, 10).subtract(4, 6); // [ 1-3, 7-10 ] ``` [![Build Status](https://travis-ci.org/dtudury/discontinuous-range.png)](https://travis-ci.org/dtudury/discontinuous-range) this is a pretty simple module, but it exists to service another project so this'll be pretty lacking documentation. reading the test to see how this works may help. otherwise, here's an example that I think pretty much sums it up ###Example ``` var all_numbers = new DiscontinuousRange(1, 100); var bad_numbers = DiscontinuousRange(13).add(8).add(60,80); var good_numbers = all_numbers.clone().subtract(bad_numbers); console.log(good_numbers.toString()); //[ 1-7, 9-12, 14-59, 81-100 ] var random_good_number = good_numbers.index(Math.floor(Math.random() * good_numbers.length)); ``` # tr46.js > An implementation of the [Unicode TR46 specification](http://unicode.org/reports/tr46/). ## Installation [Node.js](http://nodejs.org) `>= 6` is required. To install, type this at the command line: ```shell npm install tr46 ``` ## API ### `toASCII(domainName[, options])` Converts a string of Unicode symbols to a case-folded Punycode string of ASCII symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`processingOption`](#processingOption) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) * [`verifyDNSLength`](#verifyDNSLength) ### `toUnicode(domainName[, options])` Converts a case-folded Punycode string of ASCII symbols to a string of Unicode symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) ## Options ### `checkBidi` Type: `Boolean` Default value: `false` When set to `true`, any bi-directional text within the input will be checked for validation. ### `checkHyphens` Type: `Boolean` Default value: `false` When set to `true`, the positions of any hyphen characters within the input will be checked for validation. ### `checkJoiners` Type: `Boolean` Default value: `false` When set to `true`, any word joiner characters within the input will be checked for validation. ### `processingOption` Type: `String` Default value: `"nontransitional"` When set to `"transitional"`, symbols within the input will be validated according to the older IDNA2003 protocol. When set to `"nontransitional"`, the current IDNA2008 protocol will be used. ### `useSTD3ASCIIRules` Type: `Boolean` Default value: `false` When set to `true`, input will be validated according to [STD3 Rules](http://unicode.org/reports/tr46/#STD3_Rules). ### `verifyDNSLength` Type: `Boolean` Default value: `false` When set to `true`, the length of each DNS label within the input will be checked for validation. These files are compiled dot templates from dot folder. 
Do NOT edit them directly, edit the templates and run `npm run build` from main ajv folder. # is-extglob [![NPM version](https://img.shields.io/npm/v/is-extglob.svg?style=flat)](https://www.npmjs.com/package/is-extglob) [![NPM downloads](https://img.shields.io/npm/dm/is-extglob.svg?style=flat)](https://npmjs.org/package/is-extglob) [![Build Status](https://img.shields.io/travis/jonschlinkert/is-extglob.svg?style=flat)](https://travis-ci.org/jonschlinkert/is-extglob) > Returns true if a string has an extglob. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-extglob ``` ## Usage ```js var isExtglob = require('is-extglob'); ``` **True** ```js isExtglob('?(abc)'); isExtglob('@(abc)'); isExtglob('!(abc)'); isExtglob('*(abc)'); isExtglob('+(abc)'); ``` **False** Escaped extglobs: ```js isExtglob('\\?(abc)'); isExtglob('\\@(abc)'); isExtglob('\\!(abc)'); isExtglob('\\*(abc)'); isExtglob('\\+(abc)'); ``` Everything else... ```js isExtglob('foo.js'); isExtglob('!foo.js'); isExtglob('*.js'); isExtglob('**/abc.js'); isExtglob('abc/*.js'); isExtglob('abc/(aaa|bbb).js'); isExtglob('abc/[a-z].js'); isExtglob('abc/{a,b}.js'); isExtglob('abc/?.js'); isExtglob('abc.js'); isExtglob('abc/def/ghi.js'); ``` ## History **v2.0** Adds support for escaping. Escaped exglobs no longer return true. ## About ### Related projects * [has-glob](https://www.npmjs.com/package/has-glob): Returns `true` if an array has a glob pattern. | [homepage](https://github.com/jonschlinkert/has-glob "Returns `true` if an array has a glob pattern.") * [is-glob](https://www.npmjs.com/package/is-glob): Returns `true` if the given string looks like a glob pattern or an extglob pattern… [more](https://github.com/jonschlinkert/is-glob) | [homepage](https://github.com/jonschlinkert/is-glob "Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a bet") * [micromatch](https://www.npmjs.com/package/micromatch): Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | [homepage](https://github.com/jonschlinkert/micromatch "Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch.") ### Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Building docs _(This document was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme) (a [verb](https://github.com/verbose/verb) generator), please don't edit the readme directly. Any changes to the readme must be made in [.verb.md](.verb.md).)_ To generate the readme and API documentation with [verb](https://github.com/verbose/verb): ```sh $ npm install -g verb verb-generate-readme && verb ``` ### Running tests Install dev dependencies: ```sh $ npm install -d && npm test ``` ### Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](http://twitter.com/jonschlinkert) ### License Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT license](https://github.com/jonschlinkert/is-extglob/blob/master/LICENSE). 
*** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.1.31, on October 12, 2016._ # Web IDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [Web IDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). The goal is that you should be able to write code like ```js "use strict"; const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a Web IDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different Web IDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the Web IDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the Web IDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). Each method also accepts a second, optional, parameter for miscellaneous options. For conversion methods that throw errors, a string option `{ context }` may be provided to provide more information in the error message. (For example, `conversions["float"](NaN, { context: "Argument 1 of Interface's operation" })` will throw an error with message `"Argument 1 of Interface's operation is not a finite floating-point value."`) Specific conversions may also accept other options, the details of which can be found below. 
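Putting the above together, a small usage sketch (the converted values follow the Web IDL rules described in the rest of this document; `doStuff` is the example operation from above):

```js
"use strict";
const conversions = require("webidl-conversions");

// Each exported method converts a JavaScript value per the Web IDL rules.
console.log(conversions["boolean"](0));          // false
console.log(conversions["unsigned long"]("42")); // 42

// Methods that can throw accept a { context } option to improve the error message.
try {
  conversions["float"](NaN, { context: "Argument 1 of doStuff" });
} catch (err) {
  console.log(err.message); // "Argument 1 of doStuff is not a finite floating-point value."
}
```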
## Conversions implemented Conversions for all of the basic types from the Web IDL specification are implemented: - [`any`](https://heycam.github.io/webidl/#es-any) - [`void`](https://heycam.github.io/webidl/#es-void) - [`boolean`](https://heycam.github.io/webidl/#es-boolean) - [Integer types](https://heycam.github.io/webidl/#es-integer-types), which can additionally be provided the boolean options `{ clamp, enforceRange }` as a second parameter - [`float`](https://heycam.github.io/webidl/#es-float), [`unrestricted float`](https://heycam.github.io/webidl/#es-unrestricted-float) - [`double`](https://heycam.github.io/webidl/#es-double), [`unrestricted double`](https://heycam.github.io/webidl/#es-unrestricted-double) - [`DOMString`](https://heycam.github.io/webidl/#es-DOMString), which can additionally be provided the boolean option `{ treatNullAsEmptyString }` as a second parameter - [`ByteString`](https://heycam.github.io/webidl/#es-ByteString), [`USVString`](https://heycam.github.io/webidl/#es-USVString) - [`object`](https://heycam.github.io/webidl/#es-object) - [`Error`](https://heycam.github.io/webidl/#es-Error) - [Buffer source types](https://heycam.github.io/webidl/#es-buffer-source-types) Additionally, for convenience, the following derived type definitions are implemented: - [`ArrayBufferView`](https://heycam.github.io/webidl/#ArrayBufferView) - [`BufferSource`](https://heycam.github.io/webidl/#BufferSource) - [`DOMTimeStamp`](https://heycam.github.io/webidl/#DOMTimeStamp) - [`Function`](https://heycam.github.io/webidl/#Function) - [`VoidFunction`](https://heycam.github.io/webidl/#VoidFunction) (although it will not censor the return type) Derived types, such as nullable types, promise types, sequences, records, etc. are not handled by this library. You may wish to investigate the [webidl2js](https://github.com/jsdom/webidl2js) project. ### A note on the `long long` types The `long long` and `unsigned long long` Web IDL types can hold values that cannot be stored in JavaScript numbers, so the conversion is imperfect. For example, converting the JavaScript number `18446744073709552000` to a Web IDL `long long` is supposed to produce the Web IDL value `-18446744073709551232`. Since we are representing our Web IDL values in JavaScript, we can't represent `-18446744073709551232`, so we instead the best we could do is `-18446744073709552000` as the output. This library actually doesn't even get that far. Producing those results would require doing accurate modular arithmetic on 64-bit intermediate values, but JavaScript does not make this easy. We could pull in a big-integer library as a dependency, but in lieu of that, we for now have decided to just produce inaccurate results if you pass in numbers that are not strictly between `Number.MIN_SAFE_INTEGER` and `Number.MAX_SAFE_INTEGER`. ## Background What's actually going on here, conceptually, is pretty weird. Let's try to explain. Web IDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on Web IDL values, i.e. instances of Web IDL types. For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a Web IDL value of [Web IDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules. 
Separately from its type system, Web IDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given Web IDL operation, how does that get converted into a Web IDL value? For example, a JavaScript `true` passed in the position of a Web IDL `boolean` argument becomes a Web IDL `true`. But, a JavaScript `true` passed in the position of a [Web IDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a Web IDL `1`. And so on. Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the Web IDL algorithms, they don't actually use Web IDL values, since those aren't "real" outside of specs. Instead, implementations apply the Web IDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting Web IDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is payoff of Web IDL, in a nutshell. And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given Web IDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ Web IDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ Web IDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a Web IDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't use this Seriously, why would you ever use this? You really shouldn't. Web IDL is … strange, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from Web IDL. In general, your JavaScript should not be trying to become more like Web IDL; if anything, we should fix Web IDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in Web IDL. Its main consumer is the [jsdom](https://github.com/tmpvar/jsdom) project. # lru cache A cache object that deletes the least-recently-used items. 
[![Build Status](https://travis-ci.org/isaacs/node-lru-cache.svg?branch=master)](https://travis-ci.org/isaacs/node-lru-cache) [![Coverage Status](https://coveralls.io/repos/isaacs/node-lru-cache/badge.svg?service=github)](https://coveralls.io/github/isaacs/node-lru-cache) ## Installation: ```javascript npm install lru-cache --save ``` ## Usage: ```javascript var LRU = require("lru-cache") , options = { max: 500 , length: function (n, key) { return n * 2 + key.length } , dispose: function (key, n) { n.close() } , maxAge: 1000 * 60 * 60 } , cache = new LRU(options) , otherCache = new LRU(50) // sets just the max size cache.set("key", "value") cache.get("key") // "value" // non-string keys ARE fully supported // but note that it must be THE SAME object, not // just a JSON-equivalent object. var someObject = { a: 1 } cache.set(someObject, 'a value') // Object keys are not toString()-ed cache.set('[object Object]', 'a different value') assert.equal(cache.get(someObject), 'a value') // A similar object with same keys/values won't work, // because it's a different object identity assert.equal(cache.get({ a: 1 }), undefined) cache.reset() // empty the cache ``` If you put more stuff in it, then items will fall out. If you try to put an oversized thing in it, then it'll fall out right away. ## Options * `max` The maximum size of the cache, checked by applying the length function to all values in the cache. Not setting this is kind of silly, since that's the whole purpose of this lib, but it defaults to `Infinity`. Setting it to a non-number or negative number will throw a `TypeError`. Setting it to 0 makes it be `Infinity`. * `maxAge` Maximum age in ms. Items are not pro-actively pruned out as they age, but if you try to get an item that is too old, it'll drop it and return undefined instead of giving it to you. Setting this to a negative value will make everything seem old! Setting it to a non-number will throw a `TypeError`. * `length` Function that is used to calculate the length of stored items. If you're storing strings or buffers, then you probably want to do something like `function(n, key){return n.length}`. The default is `function(){return 1}`, which is fine if you want to store `max` like-sized things. The item is passed as the first argument, and the key is passed as the second argumnet. * `dispose` Function that is called on items when they are dropped from the cache. This can be handy if you want to close file descriptors or do other cleanup tasks when items are no longer accessible. Called with `key, value`. It's called *before* actually removing the item from the internal cache, so if you want to immediately put it back in, you'll have to do that in a `nextTick` or `setTimeout` callback or it won't do anything. * `stale` By default, if you set a `maxAge`, it'll only actually pull stale items out of the cache when you `get(key)`. (That is, it's not pre-emptively doing a `setTimeout` or anything.) If you set `stale:true`, it'll return the stale value before deleting it. If you don't set this, then it'll return `undefined` when you try to get a stale entry, as if it had already been deleted. * `noDisposeOnSet` By default, if you set a `dispose()` method, then it'll be called whenever a `set()` operation overwrites an existing key. If you set this option, `dispose()` will only be called when a key falls out of the cache, not when it is overwritten. 
* `updateAgeOnGet` When using time-expiring entries with `maxAge`, setting this to `true` will make each item's effective time update to the current time whenever it is retrieved from cache, causing it to not expire. (It can still fall out of cache based on recency of use, of course.) ## API * `set(key, value, maxAge)` * `get(key) => value` Both of these will update the "recently used"-ness of the key. They do what you think. `maxAge` is optional and overrides the cache `maxAge` option if provided. If the key is not found, `get()` will return `undefined`. The key and val can be any value. * `peek(key)` Returns the key value (or `undefined` if not found) without updating the "recently used"-ness of the key. (If you find yourself using this a lot, you *might* be using the wrong sort of data structure, but there are some use cases where it's handy.) * `del(key)` Deletes a key out of the cache. * `reset()` Clear the cache entirely, throwing away all values. * `has(key)` Check if a key is in the cache, without updating the recent-ness or deleting it for being stale. * `forEach(function(value,key,cache), [thisp])` Just like `Array.prototype.forEach`. Iterates over all the keys in the cache, in order of recent-ness. (Ie, more recently used items are iterated over first.) * `rforEach(function(value,key,cache), [thisp])` The same as `cache.forEach(...)` but items are iterated over in reverse order. (ie, less recently used items are iterated over first.) * `keys()` Return an array of the keys in the cache. * `values()` Return an array of the values in the cache. * `length` Return total length of objects in cache taking into account `length` options function. * `itemCount` Return total quantity of objects currently in cache. Note, that `stale` (see options) items are returned as part of this item count. * `dump()` Return an array of the cache entries ready for serialization and usage with 'destinationCache.load(arr)`. * `load(cacheEntriesArray)` Loads another cache entries array, obtained with `sourceCache.dump()`, into the cache. The destination cache is reset before loading new entries * `prune()` Manually iterates over the entire cache proactively pruning old entries # fast-deep-equal The fastest deep equal with ES6 Map, Set and Typed arrays support. [![Build Status](https://travis-ci.org/epoberezkin/fast-deep-equal.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-deep-equal) [![npm](https://img.shields.io/npm/v/fast-deep-equal.svg)](https://www.npmjs.com/package/fast-deep-equal) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-deep-equal/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-deep-equal?branch=master) ## Install ```bash npm install fast-deep-equal ``` ## Features - ES5 compatible - works in node.js (8+) and browsers (IE9+) - checks equality of Date and RegExp objects by value. 
ES6 equal (`require('fast-deep-equal/es6')`) also supports:

- Maps
- Sets
- Typed arrays

## Usage

```javascript
var equal = require('fast-deep-equal');
console.log(equal({foo: 'bar'}, {foo: 'bar'})); // true
```

To support ES6 Maps, Sets and Typed arrays equality use:

```javascript
var equal = require('fast-deep-equal/es6');
console.log(equal(new Int16Array([1, 2]), new Int16Array([1, 2]))); // true
```

To use with React (avoiding the traversal of React elements' _owner property that contains circular references and is not needed when comparing the elements - borrowed from [react-fast-compare](https://github.com/FormidableLabs/react-fast-compare)):

```javascript
var equal = require('fast-deep-equal/react');
var equal = require('fast-deep-equal/es6/react');
```

## Performance benchmark

Node.js v12.6.0:

```
fast-deep-equal x 261,950 ops/sec ±0.52% (89 runs sampled)
fast-deep-equal/es6 x 212,991 ops/sec ±0.34% (92 runs sampled)
fast-equals x 230,957 ops/sec ±0.83% (85 runs sampled)
nano-equal x 187,995 ops/sec ±0.53% (88 runs sampled)
shallow-equal-fuzzy x 138,302 ops/sec ±0.49% (90 runs sampled)
underscore.isEqual x 74,423 ops/sec ±0.38% (89 runs sampled)
lodash.isEqual x 36,637 ops/sec ±0.72% (90 runs sampled)
deep-equal x 2,310 ops/sec ±0.37% (90 runs sampled)
deep-eql x 35,312 ops/sec ±0.67% (91 runs sampled)
ramda.equals x 12,054 ops/sec ±0.40% (91 runs sampled)
util.isDeepStrictEqual x 46,440 ops/sec ±0.43% (90 runs sampled)
assert.deepStrictEqual x 456 ops/sec ±0.71% (88 runs sampled)

The fastest is fast-deep-equal
```

To run benchmark (requires node.js 6+):

```bash
npm run benchmark
```

__Please note__: this benchmark runs against the available test cases. To choose the most performant library for your application, it is recommended to benchmark against your data and to NOT expect this benchmark to reflect the performance difference in your application.

## Enterprise support

The fast-deep-equal package is part of the [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-deep-equal?utm_source=npm-fast-deep-equal&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides centralised commercial support to open-source software users, in addition to the support provided by software maintainers.

## Security contact

To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues.

## License

[MIT](https://github.com/epoberezkin/fast-deep-equal/blob/master/LICENSE)

# eslint-visitor-keys

[![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys)

Constants and utilities about visitor keys to traverse AST.

## 💿 Installation

Use [npm] to install.

```bash
$ npm install eslint-visitor-keys
```

### Requirements

- [Node.js] 4.0.0 or later.

## 📖 Usage

```js
const evk = require("eslint-visitor-keys")
```

### evk.KEYS

> type: `{ [type: string]: string[] | undefined }`

Visitor keys. These keys are frozen. This is an object. Keys are the type of [ESTree] nodes.
Their values are an array of property names which have child nodes. For example: ``` console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"] ``` ### evk.getKeys(node) > type: `(node: object) => string[]` Get the visitor keys of a given AST node. This is similar to `Object.keys(node)` of ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`. This will be used to traverse unknown nodes. For example: ``` const node = { type: "AssignmentExpression", left: { type: "Identifier", name: "foo" }, right: { type: "Literal", value: 0 } } console.log(evk.getKeys(node)) // → ["type", "left", "right"] ``` ### evk.unionWith(additionalKeys) > type: `(additionalKeys: object) => { [type: string]: string[] | undefined }` Make the union set with `evk.KEYS` and the given keys. - The order of keys is, `additionalKeys` is at first, then `evk.KEYS` is concatenated after that. - It removes duplicated keys as keeping the first one. For example: ``` console.log(evk.unionWith({ MethodDefinition: ["decorators"] })) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... } ``` ## 📰 Change log See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases). ## 🍻 Contributing Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/). ### Development commands - `npm test` runs tests and measures code coverage. - `npm run lint` checks source codes with ESLint. - `npm run coverage` opens the code coverage report of the previous test with your default browser. - `npm run release` publishes this package to [npm] registory. [npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree long.js ======= A Long class for representing a 64 bit two's-complement integer value derived from the [Closure Library](https://github.com/google/closure-library) for stand-alone use and extended with unsigned support. [![Build Status](https://travis-ci.org/dcodeIO/long.js.svg)](https://travis-ci.org/dcodeIO/long.js) Background ---------- As of [ECMA-262 5th Edition](http://ecma262-5.com/ELS5_HTML.htm#Section_8.5), "all the positive and negative integers whose magnitude is no greater than 2<sup>53</sup> are representable in the Number type", which is "representing the doubleprecision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic". The [maximum safe integer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) in JavaScript is 2<sup>53</sup>-1. Example: 2<sup>64</sup>-1 is 1844674407370955**1615** but in JavaScript it evaluates to 1844674407370955**2000**. Furthermore, bitwise operators in JavaScript "deal only with integers in the range −2<sup>31</sup> through 2<sup>31</sup>−1, inclusive, or in the range 0 through 2<sup>32</sup>−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2<sup>32</sup> integer values." In some use cases, however, it is required to be able to reliably work with and perform bitwise operations on the full 64 bits. This is where long.js comes into play. Usage ----- The class is compatible with CommonJS and AMD loaders and is exposed globally as `Long` if neither is available. ```javascript var Long = require("long"); var longVal = new Long(0xFFFFFFFF, 0x7FFFFFFF); console.log(longVal.toString()); ... 
``` API --- ### Constructor * new **Long**(low: `number`, high: `number`, unsigned?: `boolean`)<br /> Constructs a 64 bit two's-complement integer, given its low and high 32 bit values as *signed* integers. See the from* functions below for more convenient ways of constructing Longs. ### Fields * Long#**low**: `number`<br /> The low 32 bits as a signed value. * Long#**high**: `number`<br /> The high 32 bits as a signed value. * Long#**unsigned**: `boolean`<br /> Whether unsigned or not. ### Constants * Long.**ZERO**: `Long`<br /> Signed zero. * Long.**ONE**: `Long`<br /> Signed one. * Long.**NEG_ONE**: `Long`<br /> Signed negative one. * Long.**UZERO**: `Long`<br /> Unsigned zero. * Long.**UONE**: `Long`<br /> Unsigned one. * Long.**MAX_VALUE**: `Long`<br /> Maximum signed value. * Long.**MIN_VALUE**: `Long`<br /> Minimum signed value. * Long.**MAX_UNSIGNED_VALUE**: `Long`<br /> Maximum unsigned value. ### Utility * Long.**isLong**(obj: `*`): `boolean`<br /> Tests if the specified object is a Long. * Long.**fromBits**(lowBits: `number`, highBits: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the 64 bit integer that comes by concatenating the given low and high bits. Each is assumed to use 32 bits. * Long.**fromBytes**(bytes: `number[]`, unsigned?: `boolean`, le?: `boolean`): `Long`<br /> Creates a Long from its byte representation. * Long.**fromBytesLE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its little endian byte representation. * Long.**fromBytesBE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its big endian byte representation. * Long.**fromInt**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given 32 bit integer value. * Long.**fromNumber**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given value, provided that it is a finite number. Otherwise, zero is returned. * Long.**fromString**(str: `string`, unsigned?: `boolean`, radix?: `number`)<br /> Long.**fromString**(str: `string`, radix: `number`)<br /> Returns a Long representation of the given string, written using the specified radix. * Long.**fromValue**(val: `*`, unsigned?: `boolean`): `Long`<br /> Converts the specified value to a Long using the appropriate from* function for its type. ### Methods * Long#**add**(addend: `Long | number | string`): `Long`<br /> Returns the sum of this and the specified Long. * Long#**and**(other: `Long | number | string`): `Long`<br /> Returns the bitwise AND of this Long and the specified. * Long#**compare**/**comp**(other: `Long | number | string`): `number`<br /> Compares this Long's value with the specified's. Returns `0` if they are the same, `1` if the this is greater and `-1` if the given one is greater. * Long#**divide**/**div**(divisor: `Long | number | string`): `Long`<br /> Returns this Long divided by the specified. * Long#**equals**/**eq**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value equals the specified's. * Long#**getHighBits**(): `number`<br /> Gets the high 32 bits as a signed integer. * Long#**getHighBitsUnsigned**(): `number`<br /> Gets the high 32 bits as an unsigned integer. * Long#**getLowBits**(): `number`<br /> Gets the low 32 bits as a signed integer. * Long#**getLowBitsUnsigned**(): `number`<br /> Gets the low 32 bits as an unsigned integer. * Long#**getNumBitsAbs**(): `number`<br /> Gets the number of bits needed to represent the absolute value of this Long. 
* Long#**greaterThan**/**gt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than the specified's. * Long#**greaterThanOrEqual**/**gte**/**ge**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than or equal the specified's. * Long#**isEven**(): `boolean`<br /> Tests if this Long's value is even. * Long#**isNegative**(): `boolean`<br /> Tests if this Long's value is negative. * Long#**isOdd**(): `boolean`<br /> Tests if this Long's value is odd. * Long#**isPositive**(): `boolean`<br /> Tests if this Long's value is positive. * Long#**isZero**/**eqz**(): `boolean`<br /> Tests if this Long's value equals zero. * Long#**lessThan**/**lt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than the specified's. * Long#**lessThanOrEqual**/**lte**/**le**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than or equal the specified's. * Long#**modulo**/**mod**/**rem**(divisor: `Long | number | string`): `Long`<br /> Returns this Long modulo the specified. * Long#**multiply**/**mul**(multiplier: `Long | number | string`): `Long`<br /> Returns the product of this and the specified Long. * Long#**negate**/**neg**(): `Long`<br /> Negates this Long's value. * Long#**not**(): `Long`<br /> Returns the bitwise NOT of this Long. * Long#**notEquals**/**neq**/**ne**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value differs from the specified's. * Long#**or**(other: `Long | number | string`): `Long`<br /> Returns the bitwise OR of this Long and the specified. * Long#**shiftLeft**/**shl**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits shifted to the left by the given amount. * Long#**shiftRight**/**shr**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits arithmetically shifted to the right by the given amount. * Long#**shiftRightUnsigned**/**shru**/**shr_u**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits logically shifted to the right by the given amount. * Long#**subtract**/**sub**(subtrahend: `Long | number | string`): `Long`<br /> Returns the difference of this and the specified Long. * Long#**toBytes**(le?: `boolean`): `number[]`<br /> Converts this Long to its byte representation. * Long#**toBytesLE**(): `number[]`<br /> Converts this Long to its little endian byte representation. * Long#**toBytesBE**(): `number[]`<br /> Converts this Long to its big endian byte representation. * Long#**toInt**(): `number`<br /> Converts the Long to a 32 bit integer, assuming it is a 32 bit integer. * Long#**toNumber**(): `number`<br /> Converts the Long to a the nearest floating-point representation of this value (double, 53 bit mantissa). * Long#**toSigned**(): `Long`<br /> Converts this Long to signed. * Long#**toString**(radix?: `number`): `string`<br /> Converts the Long to a string written in the specified radix. * Long#**toUnsigned**(): `Long`<br /> Converts this Long to unsigned. * Long#**xor**(other: `Long | number | string`): `Long`<br /> Returns the bitwise XOR of this Long and the given one. 
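As a quick illustration of the API above, the sketch below combines a few of the documented constructors and methods; the numeric values are arbitrary examples, not taken from the library's own documentation.

```javascript
var Long = require("long");

// 2^53 + 1 cannot be represented exactly as a JavaScript Number
var a = Long.fromString("9007199254740993", true); // unsigned
var b = Long.fromNumber(1, true);

var sum = a.add(b);
console.log(sum.toString());              // "9007199254740994"
console.log(a.lessThan(b));               // false
console.log(sum.shiftLeft(1).toString()); // "18014398509481988"
```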
Building
--------

To build an UMD bundle to `dist/long.js`, run:

```
$> npm install
$> npm run build
```

Running the [tests](./tests):

```
$> npm test
```

## Timezone support

In order to provide support for timezones, without relying on the JavaScript host or any other time-zone aware environment, this library makes use of the IANA Timezone Database directly:

https://www.iana.org/time-zones

The database files are parsed by the scripts in this folder, which emit AssemblyScript code which is used to process the various rules at runtime.

# flatted

[![Downloads](https://img.shields.io/npm/dm/flatted.svg)](https://www.npmjs.com/package/flatted) [![Coverage Status](https://coveralls.io/repos/github/WebReflection/flatted/badge.svg?branch=main)](https://coveralls.io/github/WebReflection/flatted?branch=main) [![Build Status](https://travis-ci.com/WebReflection/flatted.svg?branch=main)](https://travis-ci.com/WebReflection/flatted) [![License: ISC](https://img.shields.io/badge/License-ISC-yellow.svg)](https://opensource.org/licenses/ISC) ![WebReflection status](https://offline.report/status/webreflection.svg)

![snow flake](./flatted.jpg)

<sup>**Social Media Photo by [Matt Seymour](https://unsplash.com/@mattseymour) on [Unsplash](https://unsplash.com/)**</sup>

A super light (0.5K) and fast circular JSON parser, directly from the creator of [CircularJSON](https://github.com/WebReflection/circular-json/#circularjson).

Now available also for **[PHP](./php/flatted.php)**.

```sh
npm i flatted
```

Usable via [CDN](https://unpkg.com/flatted) or as a regular module.

```js
// ESM
import {parse, stringify, toJSON, fromJSON} from 'flatted';

// CJS
const {parse, stringify, toJSON, fromJSON} = require('flatted');

const a = [{}];
a[0].a = a;
a.push(a);

stringify(a); // [["1","0"],{"a":"0"}]
```

## toJSON and fromJSON

If you'd like to implicitly survive JSON serialization, these two helpers help:

```js
import {toJSON, fromJSON} from 'flatted';

class RecursiveMap extends Map {
  static fromJSON(any) {
    return new this(fromJSON(any));
  }
  toJSON() {
    return toJSON([...this.entries()]);
  }
}

const recursive = new RecursiveMap;
const same = {};
same.same = same;
recursive.set('same', same);

const asString = JSON.stringify(recursive);
const asMap = RecursiveMap.fromJSON(JSON.parse(asString));
asMap.get('same') === asMap.get('same').same; // true
```

## Flatted VS JSON

As it is for every other specialized format capable of serializing and deserializing circular data, you should never `JSON.parse(Flatted.stringify(data))`, and you should never `Flatted.parse(JSON.stringify(data))`.

The only way this could work is to `Flatted.parse(Flatted.stringify(data))`, as it is also for _CircularJSON_ or any other, otherwise there's no guaranteed data integrity.

Also please note this project serializes and deserializes only data compatible with JSON, so that sockets, or anything else with internal classes different from those allowed by the JSON standard, won't be serialized and deserialized as expected.

### New in V1: Exact same JSON API

* Added a [reviver](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#Syntax) parameter to `.parse(string, reviver)` and revive your own objects.
* Added a [replacer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#Syntax) and a `space` parameter to `.stringify(object, replacer, space)` for feature parity with the JSON signature; see the sketch after this list.
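As a minimal sketch of that API parity, the round trip below serializes a circular structure with `stringify` (passing `null` as the replacer and `2` as the space) and restores it with `parse`; the object and property names are just illustrative examples.

```js
const { parse, stringify } = require('flatted');

const user = { name: 'Ada', notes: [] };
user.notes.push(user); // circular reference

// replacer, space and reviver behave like their JSON counterparts
const text = stringify(user, null, 2);
const copy = parse(text);

console.log(copy.name);              // 'Ada'
console.log(copy.notes[0] === copy); // true - the cycle survives the round trip
```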
### Compatibility All ECMAScript engines compatible with `Map`, `Set`, `Object.keys`, and `Array.prototype.reduce` will work, even if polyfilled. ### How does it work ? While stringifying, all Objects, including Arrays, and strings, are flattened out and replaced as unique index. `*` Once parsed, all indexes will be replaced through the flattened collection. <sup><sub>`*` represented as string to avoid conflicts with numbers</sub></sup> ```js // logic example var a = [{one: 1}, {two: '2'}]; a[0].a = a; // a is the main object, will be at index '0' // {one: 1} is the second object, index '1' // {two: '2'} the third, in '2', and it has a string // which will be found at index '3' Flatted.stringify(a); // [["1","2"],{"one":1,"a":"0"},{"two":"3"},"2"] // a[one,two] {one: 1, a} {two: '2'} '2' ``` # levn [![Build Status](https://travis-ci.org/gkz/levn.png)](https://travis-ci.org/gkz/levn) <a name="levn" /> __Light ECMAScript (JavaScript) Value Notation__ Levn is a library which allows you to parse a string into a JavaScript value based on an expected type. It is meant for short amounts of human entered data (eg. config files, command line arguments). Levn aims to concisely describe JavaScript values in text, and allow for the extraction and validation of those values. Levn uses [type-check](https://github.com/gkz/type-check) for its type format, and to validate the results. MIT license. Version 0.4.1. __How is this different than JSON?__ levn is meant to be written by humans only, is (due to the previous point) much more concise, can be validated against supplied types, has regex and date literals, and can easily be extended with custom types. On the other hand, it is probably slower and thus less efficient at transporting large amounts of data, which is fine since this is not its purpose. npm install levn For updates on levn, [follow me on twitter](https://twitter.com/gkzahariev). ## Quick Examples ```js var parse = require('levn').parse; parse('Number', '2'); // 2 parse('String', '2'); // '2' parse('String', 'levn'); // 'levn' parse('String', 'a b'); // 'a b' parse('Boolean', 'true'); // true parse('Date', '#2011-11-11#'); // (Date object) parse('Date', '2011-11-11'); // (Date object) parse('RegExp', '/[a-z]/gi'); // /[a-z]/gi parse('RegExp', 're'); // /re/ parse('Int', '2'); // 2 parse('Number | String', 'str'); // 'str' parse('Number | String', '2'); // 2 parse('[Number]', '[1,2,3]'); // [1,2,3] parse('(String, Boolean)', '(hi, false)'); // ['hi', false] parse('{a: String, b: Number}', '{a: str, b: 2}'); // {a: 'str', b: 2} // at the top level, you can ommit surrounding delimiters parse('[Number]', '1,2,3'); // [1,2,3] parse('(String, Boolean)', 'hi, false'); // ['hi', false] parse('{a: String, b: Number}', 'a: str, b: 2'); // {a: 'str', b: 2} // wildcard - auto choose type parse('*', '[hi,(null,[42]),{k: true}]'); // ['hi', [null, [42]], {k: true}] ``` ## Usage `require('levn');` returns an object that exposes three properties. `VERSION` is the current version of the library as a string. `parse` and `parsedTypeParse` are functions. 
```js // parse(type, input, options); parse('[Number]', '1,2,3'); // [1, 2, 3] // parsedTypeParse(parsedType, input, options); var parsedType = require('type-check').parseType('[Number]'); parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3] ``` ### parse(type, input, options) `parse` casts the string `input` into a JavaScript value according to the specified `type` in the [type format](https://github.com/gkz/type-check#type-format) (and taking account the optional `options`) and returns the resulting JavaScript value. ##### arguments * type - `String` - the type written in the [type format](https://github.com/gkz/type-check#type-format) which to check against * input - `String` - the value written in the [levn format](#levn-format) * options - `Maybe Object` - an optional parameter specifying additional [options](#options) ##### returns `*` - the resulting JavaScript value ##### example ```js parse('[Number]', '1,2,3'); // [1, 2, 3] ``` ### parsedTypeParse(parsedType, input, options) `parsedTypeParse` casts the string `input` into a JavaScript value according to the specified `type` which has already been parsed (and taking account the optional `options`) and returns the resulting JavaScript value. You can parse a type using the [type-check](https://github.com/gkz/type-check) library's `parseType` function. ##### arguments * type - `Object` - the type in the parsed type format which to check against * input - `String` - the value written in the [levn format](#levn-format) * options - `Maybe Object` - an optional parameter specifying additional [options](#options) ##### returns `*` - the resulting JavaScript value ##### example ```js var parsedType = require('type-check').parseType('[Number]'); parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3] ``` ## Levn Format Levn can use the type information you provide to choose the appropriate value to produce from the input. For the same input, it will choose a different output value depending on the type provided. For example, `parse('Number', '2')` will produce the number `2`, but `parse('String', '2')` will produce the string `"2"`. If you do not provide type information, and simply use `*`, levn will parse the input according the unambiguous "explicit" mode, which we will now detail - you can also set the `explicit` option to true manually in the [options](#options). * `"string"`, `'string'` are parsed as a String, eg. `"a msg"` is `"a msg"` * `#date#` is parsed as a Date, eg. `#2011-11-11#` is `new Date('2011-11-11')` * `/regexp/flags` is parsed as a RegExp, eg. `/re/gi` is `/re/gi` * `undefined`, `null`, `NaN`, `true`, and `false` are all their JavaScript equivalents * `[element1, element2, etc]` is an Array, and the casting procedure is recursively applied to each element. Eg. `[1,2,3]` is `[1,2,3]`. * `(element1, element2, etc)` is an tuple, and the casting procedure is recursively applied to each element. Eg. `(1, a)` is `(1, a)` (is `[1, 'a']`). * `{key1: val1, key2: val2, ...}` is an Object, and the casting procedure is recursively applied to each property. Eg. `{a: 1, b: 2}` is `{a: 1, b: 2}`. * Any test which does not fall under the above, and which does not contain special characters (`[``]``(``)``{``}``:``,`) is a string, eg. `$12- blah` is `"$12- blah"`. If you do provide type information, you can make your input more concise as the program already has some information about what it expects. 
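For instance, the same input can yield different values depending on how much type information you supply; the short sketch below contrasts typed parsing with the wildcard's explicit mode, reusing the results shown in the Quick Examples above.

```js
var parse = require('levn').parse;

// with a concrete type, levn accepts the more concise forms
parse('Date', '2011-11-11');   // Date object - the surrounding # can be dropped
parse('RegExp', 're');         // /re/ - the surrounding / can be dropped
parse('[Number]', '1,2,3');    // [1, 2, 3] - brackets dropped at the top level

// with the wildcard '*', only the explicit notation is recognised
parse('*', '#2011-11-11#');    // Date object
parse('*', 're');              // 're' (a plain string)
```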
Please see the [type format](https://github.com/gkz/type-check#type-format) section of [type-check](https://github.com/gkz/type-check) for more information about how to specify types. There are some rules about what levn can do with the information:

* If a String is expected, and only a String, all characters of the input (including any special ones) will become part of the output. Eg. `[({})]` is `"[({})]"`, and `"hi"` is `'"hi"'`.
* If a Date is expected, the surrounding `#` can be omitted from date literals. Eg. `2011-11-11` is `new Date('2011-11-11')`.
* If a RegExp is expected, no flags need to be specified, and if the regex does not use any of the special characters, the opening and closing `/` can be omitted - this has the effect of setting the source of the regex to the input. Eg. `regex` is `/regex/`.
* If an Array is expected, and it is the root node (at the top level), the opening `[` and closing `]` can be omitted. Eg. `1,2,3` is `[1,2,3]`.
* If a tuple is expected, and it is the root node (at the top level), the opening `(` and closing `)` can be omitted. Eg. `1, a` is `(1, a)` (is `[1, 'a']`).
* If an Object is expected, and it is the root node (at the top level), the opening `{` and closing `}` can be omitted. Eg. `a: 1, b: 2` is `{a: 1, b: 2}`.

If you list multiple types (eg. `Number | String`), it will first attempt to cast to the first type and then validate - if the validation fails it will move on to the next type and so forth, left to right. You must be careful as some types will succeed with any input, such as String. Thus put String at the end of your list. In non-explicit mode, Date and RegExp will succeed with a large variety of input - also be careful with these and list them near the end if not last in your list.

Whitespace between special characters and elements is inconsequential.

## Options

Options is an object. It is an optional parameter to the `parse` and `parsedTypeParse` functions.

### Explicit

A `Boolean`. By default it is `false`.

__Example:__

```js
parse('RegExp', 're', {explicit: false});          // /re/
parse('RegExp', 're', {explicit: true});           // Error: ... does not type check...
parse('RegExp | String', 're', {explicit: true});  // 're'
```

`explicit` sets whether to be in explicit mode or not. Using `*` automatically activates explicit mode. For more information, read the [levn format](#levn-format) section.

### customTypes

An `Object`. Empty `{}` by default.

__Example:__

```js
var options = {
  customTypes: {
    Even: {
      typeOf: 'Number',
      validate: function (x) { return x % 2 === 0; },
      cast: function (x) { return {type: 'Just', value: parseInt(x)}; }
    }
  }
}

parse('Even', '2', options); // 2
parse('Even', '3', options); // Error: Value: "3" does not type check...
```

__Another Example:__

```js
function Person(name, age){
  this.name = name;
  this.age = age;
}

var options = {
  customTypes: {
    Person: {
      typeOf: 'Object',
      validate: function (x) { return x instanceof Person; },
      cast: function (value, options, typesCast) {
        var name, age;
        if ({}.toString.call(value).slice(8, -1) !== 'Object') {
          return {type: 'Nothing'};
        }
        name = typesCast(value.name, [{type: 'String'}], options);
        age = typesCast(value.age, [{type: 'Number'}], options);
        return {type: 'Just', value: new Person(name, age)};
      }
    }
  }
}

parse('Person', '{name: Laura, age: 25}', options); // Person {name: 'Laura', age: 25}
```

`customTypes` is an object whose keys are the name of the types, and whose values are an object with three properties, `typeOf`, `validate`, and `cast`.
For more information about `typeOf` and `validate`, please see the [custom types](https://github.com/gkz/type-check#custom-types) section of type-check. `cast` is a function which receives three arguments, the value under question, options, and the typesCast function. In `cast`, attempt to cast the value into the specified type. If you are successful, return an object in the format `{type: 'Just', value: CAST-VALUE}`, if you know it won't work, return `{type: 'Nothing'}`. You can use the `typesCast` function to cast any child values. Remember to pass `options` to it. In your function you can also check for `options.explicit` and act accordingly. ## Technical About `levn` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It uses [type-check](https://github.com/gkz/type-check) to both parse types and validate values. It also uses the [prelude.ls](http://preludels.com/) library. # assemblyscript-json ![npm version](https://img.shields.io/npm/v/assemblyscript-json) ![npm downloads per month](https://img.shields.io/npm/dm/assemblyscript-json) JSON encoder / decoder for AssemblyScript. Special thanks to https://github.com/MaxGraey/bignum.wasm for basic unit testing infra for AssemblyScript. ## Installation `assemblyscript-json` is available as a [npm package](https://www.npmjs.com/package/assemblyscript-json). You can install `assemblyscript-json` in your AssemblyScript project by running: `npm install --save assemblyscript-json` ## Usage ### Parsing JSON ```typescript import { JSON } from "assemblyscript-json"; // Parse an object using the JSON object let jsonObj: JSON.Obj = <JSON.Obj>(JSON.parse('{"hello": "world", "value": 24}')); // We can then use the .getX functions to read from the object if you know it's type // This will return the appropriate JSON.X value if the key exists, or null if the key does not exist let worldOrNull: JSON.Str | null = jsonObj.getString("hello"); // This will return a JSON.Str or null if (worldOrNull != null) { // use .valueOf() to turn the high level JSON.Str type into a string let world: string = worldOrNull.valueOf(); } let numOrNull: JSON.Num | null = jsonObj.getNum("value"); if (numOrNull != null) { // use .valueOf() to turn the high level JSON.Num type into a f64 let value: f64 = numOrNull.valueOf(); } // If you don't know the value type, get the parent JSON.Value let valueOrNull: JSON.Value | null = jsonObj.getValue("hello"); if (valueOrNull != null) { let value = <JSON.Value>valueOrNull; // Next we could figure out what type we are if(value.isString) { // value.isString would be true, so we can cast to a string let innerString = (<JSON.Str>value).valueOf(); let jsonString = (<JSON.Str>value).stringify(); // Do something with string value } } ``` ### Encoding JSON ```typescript import { JSONEncoder } from "assemblyscript-json"; // Create encoder let encoder = new JSONEncoder(); // Construct necessary object encoder.pushObject("obj"); encoder.setInteger("int", 10); encoder.setString("str", ""); encoder.popObject(); // Get serialized data let json: Uint8Array = encoder.serialize(); // Or get serialized data as string let jsonString: string = encoder.stringify(); assert(jsonString, '"obj": {"int": 10, "str": ""}'); // True! ``` ### Custom JSON Deserializers ```typescript import { JSONDecoder, JSONHandler } from "assemblyscript-json"; // Events need to be received by custom object extending JSONHandler. // NOTE: All methods are optional to implement. 
class MyJSONEventsHandler extends JSONHandler { setString(name: string, value: string): void { // Handle field } setBoolean(name: string, value: bool): void { // Handle field } setNull(name: string): void { // Handle field } setInteger(name: string, value: i64): void { // Handle field } setFloat(name: string, value: f64): void { // Handle field } pushArray(name: string): bool { // Handle array start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popArray(): void { // Handle array end } pushObject(name: string): bool { // Handle object start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popObject(): void { // Handle object end } } // Create decoder let decoder = new JSONDecoder<MyJSONEventsHandler>(new MyJSONEventsHandler()); // Create a byte buffer of our JSON. NOTE: Deserializers work on UTF8 string buffers. let jsonString = '{"hello": "world"}'; let jsonBuffer = Uint8Array.wrap(String.UTF8.encode(jsonString)); // Parse JSON decoder.deserialize(jsonBuffer); // This will send events to MyJSONEventsHandler ``` Feel free to look through the [tests](https://github.com/nearprotocol/assemblyscript-json/tree/master/assembly/__tests__) for more usage examples. ## Reference Documentation Reference API Documentation can be found in the [docs directory](./docs). ## License [MIT](./LICENSE) # yargs-parser ![ci](https://github.com/yargs/yargs-parser/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/yargs-parser) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. 
<img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/main/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js const argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```console $ node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js const argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```console { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js const parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## Deno Example As of `v19` `yargs-parser` supports [Deno](https://github.com/denoland/deno): ```typescript import parser from "https://deno.land/x/yargs_parser/deno.ts"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` ## ESM Example As of `v19` `yargs-parser` supports ESM (_both in Node.js and in the browser_): **Node.js:** ```js import parser from 'yargs-parser' const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` **Browsers:** ```html <!doctype html> <body> <script type="module"> import parser from "https://unpkg.com/[email protected]/browser.js"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) </script> </body> ``` ## API ### parser(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. 
* `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```console $ node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```console $ node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```console $ node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```console $ node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```console $ node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```console $ node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```console $ node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```console $ node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### parse positional numbers * default: `true` * key: `parse-positional-numbers` Should positional keys that look like numbers be treated as such. ```console $ node example.js 99.3 { _: [99.3] } ``` _if disabled:_ ```console $ node example.js 99.3 { _: ['99.3'] } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```console $ node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```console $ node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. 
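A hedged sketch of what enabling this might look like, assuming a key `tag` that is declared as an array and also supplied through a configuration object; the key name and values are made up for illustration, and the exact merge order may differ.

```js
const parser = require('yargs-parser')

// hypothetical invocation: values for "tag" come from both the CLI and a config object
const argv = parser(['--tag', 'cli'], {
  array: ['tag'],
  configObjects: [{ tag: ['from-config'] }],
  configuration: { 'combine-arrays': true }
})

// with combine-arrays enabled, values from the configuration and the command
// line are merged into a single array (e.g. tag: ['from-config', 'cli']);
// with it disabled, the command-line value replaces the configured one.
console.log(argv.tag)
```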
### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```console $ node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```console $ node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```console $ node example --arr 1 2 { _: [], arr: [1, 2] } ``` _if disabled:_ ```console $ node example --arr 1 2 { _: [2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```console $ node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```console $ node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```console $ node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```console $ node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```console $ node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```console $ node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. 
_If disabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC ## assemblyscript-temporal An implementation of temporal within AssemblyScript, with an initial focus on non-timezone-aware classes and functionality. ### Why? AssemblyScript has minimal `Date` support, however, the JS Date API itself is terrible and people tend not to use it that often. As a result libraries like moment / luxon have become staple replacements. However, there is now a [relatively mature TC39 proposal](https://github.com/tc39/proposal-temporal) that adds greatly improved date support to JS. The goal of this project is to implement Temporal for AssemblyScript. ### Usage This library currently supports the following types: #### `PlainDateTime` A `PlainDateTime` represents a calendar date and wall-clock time that does not carry time zone information, e.g. December 7th, 1995 at 3:00 PM (in the Gregorian calendar). For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaindatetime.html), this implementation follows the specification as closely as possible. You can create a `PlainDateTime` from individual components, a string or an object literal: ```javascript datetime = new PlainDateTime(1976, 11, 18, 15, 23, 30, 123, 456, 789); datetime.year; // 2019; datetime.month; // 11; // ... datetime.nanosecond; // 789; datetime = PlainDateTime.from("1976-11-18T12:34:56"); datetime.toString(); // "1976-11-18T12:34:56" datetime = PlainDateTime.from({ year: 1966, month: 3, day: 3 }); datetime.toString(); // "1966-03-03T00:00:00" ``` There are various ways you can manipulate a date: ```javascript // use 'with' to copy a date but with various property values overriden datetime = new PlainDateTime(1976, 11, 18, 15, 23, 30, 123, 456, 789); datetime.with({ year: 2019 }).toString(); // "2019-11-18T15:23:30.123456789" // use 'add' or 'substract' to add / subtract a duration datetime = PlainDateTime.from("2020-01-12T15:00"); datetime.add({ months: 1 }).toString(); // "2020-02-12T15:00:00"); // add / subtract support Duration objects or object literals datetime.add(new Duration(1)).toString(); // "2021-01-12T15:00:00"); ``` You can compare dates and check for equality ```javascript dt1 = PlainDateTime.from("1976-11-18"); dt2 = PlainDateTime.from("2019-10-29"); PlainDateTime.compare(dt1, dt1); // 0 PlainDateTime.compare(dt1, dt2); // -1 dt1.equals(dt1); // true ``` Currently `PlainDateTime` only supports the ISO 8601 (Gregorian) calendar. #### `PlainDate` A `PlainDate` object represents a calendar date that is not associated with a particular time or time zone, e.g. August 24th, 2006. 
For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaindate.html), this implementation follows the specification as closely as possible. The `PlainDate` API is almost identical to `PlainDateTime`, so see above for API usage examples. #### `PlainTime` A `PlainTime` object represents a wall-clock time that is not associated with a particular date or time zone, e.g. 7:39 PM. For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plaintime.html), this implementation follows the specification as closely as possible. The `PlainTime` API is almost identical to `PlainDateTime`, so see above for API usage examples. #### `PlainMonthDay` A date without a year component. This is useful to express things like "Bastille Day is on the 14th of July". For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plainmonthday.html) , this implementation follows the specification as closely as possible. ```javascript const monthDay = PlainMonthDay.from({ month: 7, day: 14 }); // => 07-14 const date = monthDay.toPlainDate({ year: 2030 }); // => 2030-07-14 date.dayOfWeek; // => 7 ``` The `PlainMonthDay` API is almost identical to `PlainDateTime`, so see above for more API usage examples. #### `PlainYearMonth` A date without a day component. This is useful to express things like "the October 2020 meeting". For detailed documentation see the [TC39 Temporal proposal website](https://tc39.es/proposal-temporal/docs/plainyearmonth.html) , this implementation follows the specification as closely as possible. The `PlainYearMonth` API is almost identical to `PlainDateTime`, so see above for API usage examples. #### `now` The `now` object has several methods which give information about the current time and date. ```javascript dateTime = now.plainDateTimeISO(); dateTime.toString(); // 2021-04-01T12:05:47.357 ``` ## Contributing This project is open source, MIT licensed and your contributions are very much welcomed. There is a [brief document that outlines implementation progress and priorities](./development.md). # minimatch A minimal matching utility. [![Build Status](https://secure.travis-ci.org/isaacs/minimatch.svg)](http://travis-ci.org/isaacs/minimatch) This is the matching library used internally by npm. It works by converting glob expressions into JavaScript `RegExp` objects. ## Usage ```javascript var minimatch = require("minimatch") minimatch("bar.foo", "*.foo") // true! minimatch("bar.foo", "*.bar") // false! minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy! ``` ## Features Supports these glob features: * Brace Expansion * Extended glob matching * "Globstar" `**` matching See: * `man sh` * `man bash` * `man 3 fnmatch` * `man 5 gitignore` ## Minimatch Class Create a minimatch object by instantiating the `minimatch.Minimatch` class. ```javascript var Minimatch = require("minimatch").Minimatch var mm = new Minimatch(pattern, options) ``` ### Properties * `pattern` The original pattern the minimatch object represents. * `options` The options supplied to the constructor. * `set` A 2-dimensional array of regexp or string expressions. Each row in the array corresponds to a brace-expanded pattern. Each item in the row corresponds to a single path-part. 
For example, the pattern `{a,b/c}/d` would expand to a set of patterns like: [ [ a, d ] , [ b, c, d ] ] If a portion of the pattern doesn't have any "magic" in it (that is, it's something like `"foo"` rather than `fo*o?`), then it will be left as a string rather than converted to a regular expression. * `regexp` Created by the `makeRe` method. A single regular expression expressing the entire pattern. This is useful in cases where you wish to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled. * `negate` True if the pattern is negated. * `comment` True if the pattern is a comment. * `empty` True if the pattern is `""`. ### Methods * `makeRe` Generate the `regexp` member if necessary, and return it. Will return `false` if the pattern is invalid. * `match(fname)` Return true if the filename matches the pattern, or false otherwise. * `matchOne(fileArray, patternArray, partial)` Take a `/`-split filename, and match it against a single row in the `regExpSet`. This method is mainly for internal use, but is exposed so that it can be used by a glob-walker that needs to avoid excessive filesystem calls. All other methods are internal, and will be called as necessary. ### minimatch(path, pattern, options) Main export. Tests a path against the pattern using the options. ```javascript var isJS = minimatch(file, "*.js", { matchBase: true }) ``` ### minimatch.filter(pattern, options) Returns a function that tests its supplied argument, suitable for use with `Array.filter`. Example: ```javascript var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true})) ``` ### minimatch.match(list, pattern, options) Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself. ```javascript var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})) ``` ### minimatch.makeRe(pattern, options) Make a regular expression object from the pattern. ## Options All options are `false` by default. ### debug Dump a ton of stuff to stderr. ### nobrace Do not expand `{a,b}` and `{1..3}` brace sets. ### noglobstar Disable `**` matching against multiple folder names. ### dot Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot. Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot` is set. ### noext Disable "extglob" style patterns like `+(a|b)`. ### nocase Perform a case-insensitive match. ### nonull When a match is not found by `minimatch.match`, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches. ### matchBase If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. ### nocomment Suppress the behavior of treating `#` at the start of a pattern as a comment. ### nonegate Suppress the behavior of treating a leading `!` character as negation. ### flipNegate Returns from negate expressions the same as if they were not negated. (Ie, true on a hit, false on a miss.) ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional. If the pattern starts with a `!` character, then it is negated. 
Set the `nonegate` flag to suppress this behavior, and treat leading `!` characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like `!(a|B)`. Multiple `!` characters at the start of a pattern will negate the pattern multiple times. If a pattern starts with `#`, then it is treated as a comment, and will not match anything. Use `\#` to match a literal `#` at the start of a line, or set the `nocomment` flag to suppress this behavior. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.1, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. If an escaped pattern has no matches, and the `nonull` flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, `minimatch.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. ## Follow Redirects Drop-in replacement for Nodes `http` and `https` that automatically follows redirects. [![npm version](https://img.shields.io/npm/v/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) [![Build Status](https://travis-ci.org/follow-redirects/follow-redirects.svg?branch=master)](https://travis-ci.org/follow-redirects/follow-redirects) [![Coverage Status](https://coveralls.io/repos/follow-redirects/follow-redirects/badge.svg?branch=master)](https://coveralls.io/r/follow-redirects/follow-redirects?branch=master) [![Dependency Status](https://david-dm.org/follow-redirects/follow-redirects.svg)](https://david-dm.org/follow-redirects/follow-redirects) [![npm downloads](https://img.shields.io/npm/dm/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) `follow-redirects` provides [request](https://nodejs.org/api/http.html#http_http_request_options_callback) and [get](https://nodejs.org/api/http.html#http_http_get_options_callback) methods that behave identically to those found on the native [http](https://nodejs.org/api/http.html#http_http_request_options_callback) and [https](https://nodejs.org/api/https.html#https_https_request_options_callback) modules, with the exception that they will seamlessly follow redirects. ```javascript var http = require('follow-redirects').http; var https = require('follow-redirects').https; http.get('http://bit.ly/900913', function (response) { response.on('data', function (chunk) { console.log(chunk); }); }).on('error', function (err) { console.error(err); }); ``` You can inspect the final redirected URL through the `responseUrl` property on the `response`. If no redirection happened, `responseUrl` is the original request URL. 
```javascript https.request({ host: 'bitly.com', path: '/UHfDGO', }, function (response) { console.log(response.responseUrl); // 'http://duckduckgo.com/robots.txt' }); ``` ## Options ### Global options Global options are set directly on the `follow-redirects` module: ```javascript var followRedirects = require('follow-redirects'); followRedirects.maxRedirects = 10; followRedirects.maxBodyLength = 20 * 1024 * 1024; // 20 MB ``` The following global options are supported: - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. ### Per-request options Per-request options are set by passing an `options` object: ```javascript var url = require('url'); var followRedirects = require('follow-redirects'); var options = url.parse('http://bit.ly/900913'); options.maxRedirects = 10; http.request(options); ``` In addition to the [standard HTTP](https://nodejs.org/api/http.html#http_http_request_options_callback) and [HTTPS options](https://nodejs.org/api/https.html#https_https_request_options_callback), the following per-request options are supported: - `followRedirects` (default: `true`) – whether redirects should be followed. - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. - `agents` (default: `undefined`) – sets the `agent` option per protocol, since HTTP and HTTPS use different agents. Example value: `{ http: new http.Agent(), https: new https.Agent() }` - `trackRedirects` (default: `false`) – whether to store the redirected response details into the `redirects` array on the response object. ### Advanced usage By default, `follow-redirects` will use the Node.js default implementations of [`http`](https://nodejs.org/api/http.html) and [`https`](https://nodejs.org/api/https.html). To enable features such as caching and/or intermediate request tracking, you might instead want to wrap `follow-redirects` around custom protocol implementations: ```javascript var followRedirects = require('follow-redirects').wrap({ http: require('your-custom-http'), https: require('your-custom-https'), }); ``` Such custom protocols only need an implementation of the `request` method. ## Browserify Usage Due to the way `XMLHttpRequest` works, the `browserify` versions of `http` and `https` already follow redirects. If you are *only* targeting the browser, then this library has little value for you. If you want to write cross platform code for node and the browser, `follow-redirects` provides a great solution for making the native node modules behave the same as they do in browserified builds in the browser. To avoid bundling unnecessary code you should tell browserify to swap out `follow-redirects` with the standard modules when bundling. To make this easier, you need to change how you require the modules: ```javascript var http = require('follow-redirects/http'); var https = require('follow-redirects/https'); ``` You can then replace `follow-redirects` in your browserify configuration like so: ```javascript "browser": { "follow-redirects/http" : "http", "follow-redirects/https" : "https" } ``` The `browserify-http` module has not kept pace with node development, and no long behaves identically to the native module when running in the browser. 
If you are experiencing problems, you may want to check out [browserify-http-2](https://www.npmjs.com/package/http-browserify-2). It is more actively maintained and attempts to address a few of the shortcomings of `browserify-http`. In that case, your browserify config should look something like this:

```javascript
"browser": {
  "follow-redirects/http" : "browserify-http-2/http",
  "follow-redirects/https" : "browserify-http-2/https"
}
```

## Contributing

Pull Requests are always welcome. Please [file an issue](https://github.com/follow-redirects/follow-redirects/issues) detailing your proposal before you invest your valuable time. Additional features and bug fixes should be accompanied by tests. You can run the test suite locally with a simple `npm test` command.

## Debug Logging

`follow-redirects` uses the excellent [debug](https://www.npmjs.com/package/debug) for logging. To turn on logging, set the environment variable `DEBUG=follow-redirects` for debug output from just this module. When running the test suite it is sometimes advantageous to set `DEBUG=*` to see output from the express server as well.

## Authors

- Olivier Lalonde ([email protected])
- James Talmage ([email protected])
- [Ruben Verborgh](https://ruben.verborgh.org/)

## License

[MIT License](https://github.com/follow-redirects/follow-redirects/blob/master/LICENSE)

<h1 align="center">Enquirer</h1>

<p align="center">
  <a href="https://npmjs.org/package/enquirer">
    <img src="https://img.shields.io/npm/v/enquirer.svg" alt="version">
  </a>
  <a href="https://travis-ci.org/enquirer/enquirer">
    <img src="https://img.shields.io/travis/enquirer/enquirer.svg" alt="travis">
  </a>
  <a href="https://npmjs.org/package/enquirer">
    <img src="https://img.shields.io/npm/dm/enquirer.svg" alt="downloads">
  </a>
</p>

<br>
<br>

<p align="center">
  <b>Stylish CLI prompts that are user-friendly, intuitive and easy to create.</b><br>
  <sub>>_ Prompts should be more like conversations than inquisitions▌</sub>
</p>

<br>

<p align="center">
  <sub>(Example shows Enquirer's <a href="#survey-prompt">Survey Prompt</a>)</sub>
  <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/survey-prompt.gif" alt="Enquirer Survey Prompt" width="750"><br>
  <sub>The terminal in all examples is <a href="https://hyper.is/">Hyper</a>, theme is <a href="https://github.com/jonschlinkert/hyper-monokai-extended">hyper-monokai-extended</a>.</sub><br><br>
  <a href="#built-in-prompts"><strong>See more prompt examples</strong></a>
</p>

<br>
<br>

Created by [jonschlinkert](https://github.com/jonschlinkert) and [doowb](https://github.com/doowb), Enquirer is fast, easy to use, and lightweight enough for small projects, while also being powerful and customizable enough for the most advanced use cases.

* **Fast** - [Loads in ~4ms](#-performance) (that's about _3-4 times faster than a [single frame of an HD movie](http://www.endmemo.com/sconvert/framespersecondframespermillisecond.php) at 60fps_)
* **Lightweight** - Only one dependency, the excellent [ansi-colors](https://github.com/doowb/ansi-colors) by [Brian Woodward](https://github.com/doowb).
* **Easy to implement** - Uses promises and async/await and sensible defaults to make prompts easy to create and implement.
* **Easy to use** - Thrill your users with a better experience! Navigating around input and choices is a breeze. You can even create [quizzes](examples/fun/countdown.js), or [record](examples/fun/record.js) and [playback](examples/fun/play.js) key bindings to aid with tutorials and videos.
* **Intuitive** - Keypress combos are available to simplify usage. * **Flexible** - All prompts can be used standalone or chained together. * **Stylish** - Easily override semantic styles and symbols for any part of the prompt. * **Extensible** - Easily create and use custom prompts by extending Enquirer's built-in [prompts](#-prompts). * **Pluggable** - Add advanced features to Enquirer using plugins. * **Validation** - Optionally validate user input with any prompt. * **Well tested** - All prompts are well-tested, and tests are easy to create without having to use brittle, hacky solutions to spy on prompts or "inject" values. * **Examples** - There are numerous [examples](examples) available to help you get started. If you like Enquirer, please consider starring or tweeting about this project to show your support. Thanks! <br> <p align="center"> <b>>_ Ready to start making prompts your users will love? ▌</b><br> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/heartbeat.gif" alt="Enquirer Select Prompt with heartbeat example" width="750"> </p> <br> <br> ## ❯ Getting started Get started with Enquirer, the most powerful and easy-to-use Node.js library for creating interactive CLI prompts. * [Install](#-install) * [Usage](#-usage) * [Enquirer](#-enquirer) * [Prompts](#-prompts) - [Built-in Prompts](#-prompts) - [Custom Prompts](#-custom-prompts) * [Key Bindings](#-key-bindings) * [Options](#-options) * [Release History](#-release-history) * [Performance](#-performance) * [About](#-about) <br> ## ❯ Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install enquirer --save ``` Install with [yarn](https://yarnpkg.com/en/): ```sh $ yarn add enquirer ``` <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/npm-install.gif" alt="Install Enquirer with NPM" width="750"> </p> _(Requires Node.js 8.6 or higher. Please let us know if you need support for an earlier version by creating an [issue](../../issues/new).)_ <br> ## ❯ Usage ### Single prompt The easiest way to get started with enquirer is to pass a [question object](#prompt-options) to the `prompt` method. ```js const { prompt } = require('enquirer'); const response = await prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); // { username: 'jonschlinkert' } ``` _(Examples with `await` need to be run inside an `async` function)_ ### Multiple prompts Pass an array of ["question" objects](#prompt-options) to run a series of prompts. ```js const response = await prompt([ { type: 'input', name: 'name', message: 'What is your name?' }, { type: 'input', name: 'username', message: 'What is your username?' } ]); console.log(response); // { name: 'Edward Chan', username: 'edwardmchan' } ``` ### Different ways to run enquirer #### 1. By importing the specific `built-in prompt` ```js const { Confirm } = require('enquirer'); const prompt = new Confirm({ name: 'question', message: 'Did you like enquirer?' }); prompt.run() .then(answer => console.log('Answer:', answer)); ``` #### 2. By passing the options to `prompt` ```js const { prompt } = require('enquirer'); prompt({ type: 'confirm', name: 'question', message: 'Did you like enquirer?' 
}) .then(answer => console.log('Answer:', answer)); ``` **Jump to**: [Getting Started](#-getting-started) · [Prompts](#-prompts) · [Options](#-options) · [Key Bindings](#-key-bindings) <br> ## ❯ Enquirer **Enquirer is a prompt runner** Add Enquirer to your JavaScript project with following line of code. ```js const Enquirer = require('enquirer'); ``` The main export of this library is the `Enquirer` class, which has methods and features designed to simplify running prompts. ```js const { prompt } = require('enquirer'); const question = [ { type: 'input', name: 'username', message: 'What is your username?' }, { type: 'password', name: 'password', message: 'What is your password?' } ]; let answers = await prompt(question); console.log(answers); ``` **Prompts control how values are rendered and returned** Each individual prompt is a class with special features and functionality for rendering the types of values you want to show users in the terminal, and subsequently returning the types of values you need to use in your application. **How can I customize prompts?** Below in this guide you will find information about creating [custom prompts](#-custom-prompts). For now, we'll focus on how to customize an existing prompt. All of the individual [prompt classes](#built-in-prompts) in this library are exposed as static properties on Enquirer. This allows them to be used directly without using `enquirer.prompt()`. Use this approach if you need to modify a prompt instance, or listen for events on the prompt. **Example** ```js const { Input } = require('enquirer'); const prompt = new Input({ name: 'username', message: 'What is your username?' }); prompt.run() .then(answer => console.log('Username:', answer)) .catch(console.error); ``` ### [Enquirer](index.js#L20) Create an instance of `Enquirer`. **Params** * `options` **{Object}**: (optional) Options to use with all prompts. * `answers` **{Object}**: (optional) Answers object to initialize with. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); ``` ### [register()](index.js#L42) Register a custom prompt type. **Params** * `type` **{String}** * `fn` **{Function|Prompt}**: `Prompt` class, or a function that returns a `Prompt` class. * `returns` **{Object}**: Returns the Enquirer instance **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); enquirer.register('customType', require('./custom-prompt')); ``` ### [prompt()](index.js#L78) Prompt function that takes a "question" object or array of question objects, and returns an object with responses from the user. **Params** * `questions` **{Array|Object}**: Options objects for one or more prompts to run. * `returns` **{Promise}**: Promise that returns an "answers" object with the user's responses. **Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); const response = await enquirer.prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); ``` ### [use()](index.js#L160) Use an enquirer plugin. **Params** * `plugin` **{Function}**: Plugin function that takes an instance of Enquirer. * `returns` **{Object}**: Returns the Enquirer instance. 
**Example** ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); const plugin = enquirer => { // do stuff to enquire instance }; enquirer.use(plugin); ``` ### [Enquirer#prompt](index.js#L210) Prompt function that takes a "question" object or array of question objects, and returns an object with responses from the user. **Params** * `questions` **{Array|Object}**: Options objects for one or more prompts to run. * `returns` **{Promise}**: Promise that returns an "answers" object with the user's responses. **Example** ```js const { prompt } = require('enquirer'); const response = await prompt({ type: 'input', name: 'username', message: 'What is your username?' }); console.log(response); ``` <br> ## ❯ Prompts This section is about Enquirer's prompts: what they look like, how they work, how to run them, available options, and how to customize the prompts or create your own prompt concept. **Getting started with Enquirer's prompts** * [Prompt](#prompt) - The base `Prompt` class used by other prompts - [Prompt Options](#prompt-options) * [Built-in prompts](#built-in-prompts) * [Prompt Types](#prompt-types) - The base `Prompt` class used by other prompts * [Custom prompts](#%E2%9D%AF-custom-prompts) - Enquirer 2.0 introduced the concept of prompt "types", with the goal of making custom prompts easier than ever to create and use. ### Prompt The base `Prompt` class is used to create all other prompts. ```js const { Prompt } = require('enquirer'); class MyCustomPrompt extends Prompt {} ``` See the documentation for [creating custom prompts](#-custom-prompts) to learn more about how this works. #### Prompt Options Each prompt takes an options object (aka "question" object), that implements the following interface: ```js { // required type: string | function, name: string | function, message: string | function | async function, // optional skip: boolean | function | async function, initial: string | function | async function, format: function | async function, result: function | async function, validate: function | async function, } ``` Each property of the options object is described below: | **Property** | **Required?** | **Type** | **Description** | | ------------ | ------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `type` | yes | `string\|function` | Enquirer uses this value to determine the type of prompt to run, but it's optional when prompts are run directly. | | `name` | yes | `string\|function` | Used as the key for the answer on the returned values (answers) object. | | `message` | yes | `string\|function` | The message to display when the prompt is rendered in the terminal. | | `skip` | no | `boolean\|function` | If `true` it will not ask that prompt. | | `initial` | no | `string\|function` | The default value to return if the user does not supply a value. | | `format` | no | `function` | Function to format user input in the terminal. | | `result` | no | `function` | Function to format the final submitted value before it's returned. | | `validate` | no | `function` | Function to validate the submitted value before it's returned. This function may return a boolean or a string. If a string is returned it will be used as the validation error message. | **Example usage** ```js const { prompt } = require('enquirer'); const question = { type: 'input', name: 'username', message: 'What is your username?' 
}; prompt(question) .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` <br> ### Built-in prompts * [AutoComplete Prompt](#autocomplete-prompt) * [BasicAuth Prompt](#basicauth-prompt) * [Confirm Prompt](#confirm-prompt) * [Form Prompt](#form-prompt) * [Input Prompt](#input-prompt) * [Invisible Prompt](#invisible-prompt) * [List Prompt](#list-prompt) * [MultiSelect Prompt](#multiselect-prompt) * [Numeral Prompt](#numeral-prompt) * [Password Prompt](#password-prompt) * [Quiz Prompt](#quiz-prompt) * [Survey Prompt](#survey-prompt) * [Scale Prompt](#scale-prompt) * [Select Prompt](#select-prompt) * [Sort Prompt](#sort-prompt) * [Snippet Prompt](#snippet-prompt) * [Toggle Prompt](#toggle-prompt) ### AutoComplete Prompt Prompt that auto-completes as the user types, and returns the selected value as a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/autocomplete-prompt.gif" alt="Enquirer AutoComplete Prompt" width="750"> </p> **Example Usage** ```js const { AutoComplete } = require('enquirer'); const prompt = new AutoComplete({ name: 'flavor', message: 'Pick your favorite flavor', limit: 10, initial: 2, choices: [ 'Almond', 'Apple', 'Banana', 'Blackberry', 'Blueberry', 'Cherry', 'Chocolate', 'Cinnamon', 'Coconut', 'Cranberry', 'Grape', 'Nougat', 'Orange', 'Pear', 'Pineapple', 'Raspberry', 'Strawberry', 'Vanilla', 'Watermelon', 'Wintergreen' ] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **AutoComplete Options** | Option | Type | Default | Description | | ----------- | ---------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | | `highlight` | `function` | `dim` version of primary style | The color to use when "highlighting" characters in the list that match user input. | | `multiple` | `boolean` | `false` | Allow multiple choices to be selected. | | `suggest` | `function` | Greedy match, returns true if choice message contains input string. | Function that filters choices. Takes user input and a choices array, and returns a list of matching choices. | | `initial` | `number` | 0 | Preselected item in the list of choices. | | `footer` | `function` | None | Function that displays [footer text](https://github.com/enquirer/enquirer/blob/6c2819518a1e2ed284242a99a685655fbaabfa28/examples/autocomplete/option-footer.js#L10) | **Related prompts** * [Select](#select-prompt) * [MultiSelect](#multiselect-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### BasicAuth Prompt Prompt that asks for username and password to authenticate the user. The default implementation of `authenticate` function in `BasicAuth` prompt is to compare the username and password with the values supplied while running the prompt. The implementer is expected to override the `authenticate` function with a custom logic such as making an API request to a server to authenticate the username and password entered and expect a token back. 
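As a rough, non-authoritative sketch of what such an override could look like: the `verifyWithApi` helper is hypothetical, and the `(value, state)` signature is the one shown in the AuthPrompt example further below.

```js
const { BasicAuth } = require('enquirer');

// Hypothetical helper: call your own service and resolve to true/false.
async function verifyWithApi(username, password) {
  // e.g. POST the credentials to an auth endpoint and inspect the response.
  return username === 'rajat-sr' && password === '123'; // placeholder logic
}

class ApiBasicAuth extends BasicAuth {
  // Replace the default "compare with the supplied options" behavior.
  async authenticate(value, state) {
    return verifyWithApi(value.username, value.password);
  }
}

new ApiBasicAuth({ name: 'auth', message: 'Please sign in' })
  .run()
  .then(ok => console.log('Authenticated?', ok))
  .catch(console.error);
```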
<p align="center"> <img src="https://user-images.githubusercontent.com/13731210/61570485-7ffd9c00-aaaa-11e9-857a-d47dc7008284.gif" alt="Enquirer BasicAuth Prompt" width="750"> </p> **Example Usage** ```js const { BasicAuth } = require('enquirer'); const prompt = new BasicAuth({ name: 'password', message: 'Please enter your password', username: 'rajat-sr', password: '123', showPassword: true }); prompt .run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Confirm Prompt Prompt that returns `true` or `false`. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/confirm-prompt.gif" alt="Enquirer Confirm Prompt" width="750"> </p> **Example Usage** ```js const { Confirm } = require('enquirer'); const prompt = new Confirm({ name: 'question', message: 'Want to answer?' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Numeral](#numeral-prompt) * [Password](#password-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Form Prompt Prompt that allows the user to enter and submit multiple values on a single terminal screen. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/form-prompt.gif" alt="Enquirer Form Prompt" width="750"> </p> **Example Usage** ```js const { Form } = require('enquirer'); const prompt = new Form({ name: 'user', message: 'Please provide the following information:', choices: [ { name: 'firstname', message: 'First Name', initial: 'Jon' }, { name: 'lastname', message: 'Last Name', initial: 'Schlinkert' }, { name: 'username', message: 'GitHub username', initial: 'jonschlinkert' } ] }); prompt.run() .then(value => console.log('Answer:', value)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Input Prompt Prompt that takes user input and returns a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/input-prompt.gif" alt="Enquirer Input Prompt" width="750"> </p> **Example Usage** ```js const { Input } = require('enquirer'); const prompt = new Input({ message: 'What is your username?', initial: 'jonschlinkert' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.log); ``` You can use [data-store](https://github.com/jonschlinkert/data-store) to store [input history](https://github.com/enquirer/enquirer/blob/master/examples/input/option-history.js) that the user can cycle through (see [source](https://github.com/enquirer/enquirer/blob/8407dc3579123df5e6e20215078e33bb605b0c37/lib/prompts/input.js)). **Related prompts** * [Confirm](#confirm-prompt) * [Numeral](#numeral-prompt) * [Password](#password-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Invisible Prompt Prompt that takes user input, hides it from the terminal, and returns a string. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/invisible-prompt.gif" alt="Enquirer Invisible Prompt" width="750"> </p> **Example Usage** ```js const { Invisible } = require('enquirer'); const prompt = new Invisible({ name: 'secret', message: 'What is your secret?' 
}); prompt.run() .then(answer => console.log('Answer:', { secret: answer })) .catch(console.error); ``` **Related prompts** * [Password](#password-prompt) * [Input](#input-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### List Prompt Prompt that returns a list of values, created by splitting the user input. The default split character is `,` with optional trailing whitespace. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/list-prompt.gif" alt="Enquirer List Prompt" width="750"> </p> **Example Usage** ```js const { List } = require('enquirer'); const prompt = new List({ name: 'keywords', message: 'Type comma-separated keywords' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Sort](#sort-prompt) * [Select](#select-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### MultiSelect Prompt Prompt that allows the user to select multiple items from a list of options. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/multiselect-prompt.gif" alt="Enquirer MultiSelect Prompt" width="750"> </p> **Example Usage** ```js const { MultiSelect } = require('enquirer'); const prompt = new MultiSelect({ name: 'value', message: 'Pick your favorite colors', limit: 7, choices: [ { name: 'aqua', value: '#00ffff' }, { name: 'black', value: '#000000' }, { name: 'blue', value: '#0000ff' }, { name: 'fuchsia', value: '#ff00ff' }, { name: 'gray', value: '#808080' }, { name: 'green', value: '#008000' }, { name: 'lime', value: '#00ff00' }, { name: 'maroon', value: '#800000' }, { name: 'navy', value: '#000080' }, { name: 'olive', value: '#808000' }, { name: 'purple', value: '#800080' }, { name: 'red', value: '#ff0000' }, { name: 'silver', value: '#c0c0c0' }, { name: 'teal', value: '#008080' }, { name: 'white', value: '#ffffff' }, { name: 'yellow', value: '#ffff00' } ] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); // Answer: ['aqua', 'blue', 'fuchsia'] ``` **Example key-value pairs** Optionally, pass a `result` function and use the `.map` method to return an object of key-value pairs of the selected names and values: [example](./examples/multiselect/option-result.js) ```js const { MultiSelect } = require('enquirer'); const prompt = new MultiSelect({ name: 'value', message: 'Pick your favorite colors', limit: 7, choices: [ { name: 'aqua', value: '#00ffff' }, { name: 'black', value: '#000000' }, { name: 'blue', value: '#0000ff' }, { name: 'fuchsia', value: '#ff00ff' }, { name: 'gray', value: '#808080' }, { name: 'green', value: '#008000' }, { name: 'lime', value: '#00ff00' }, { name: 'maroon', value: '#800000' }, { name: 'navy', value: '#000080' }, { name: 'olive', value: '#808000' }, { name: 'purple', value: '#800080' }, { name: 'red', value: '#ff0000' }, { name: 'silver', value: '#c0c0c0' }, { name: 'teal', value: '#008080' }, { name: 'white', value: '#ffffff' }, { name: 'yellow', value: '#ffff00' } ], result(names) { return this.map(names); } }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); // Answer: { aqua: '#00ffff', blue: '#0000ff', fuchsia: '#ff00ff' } ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Numeral Prompt Prompt that takes a number as input. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/numeral-prompt.gif" alt="Enquirer Numeral Prompt" width="750"> </p> **Example Usage** ```js const { NumberPrompt } = require('enquirer'); const prompt = new NumberPrompt({ name: 'number', message: 'Please enter a number' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Confirm](#confirm-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Password Prompt Prompt that takes user input and masks it in the terminal. Also see the [invisible prompt](#invisible-prompt) <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/password-prompt.gif" alt="Enquirer Password Prompt" width="750"> </p> **Example Usage** ```js const { Password } = require('enquirer'); const prompt = new Password({ name: 'password', message: 'What is your password?' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Input](#input-prompt) * [Invisible](#invisible-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Quiz Prompt Prompt that allows the user to play multiple-choice quiz questions. <p align="center"> <img src="https://user-images.githubusercontent.com/13731210/61567561-891d4780-aa6f-11e9-9b09-3d504abd24ed.gif" alt="Enquirer Quiz Prompt" width="750"> </p> **Example Usage** ```js const { Quiz } = require('enquirer'); const prompt = new Quiz({ name: 'countries', message: 'How many countries are there in the world?', choices: ['165', '175', '185', '195', '205'], correctChoice: 3 }); prompt .run() .then(answer => { if (answer.correct) { console.log('Correct!'); } else { console.log(`Wrong! Correct answer is ${answer.correctAnswer}`); } }) .catch(console.error); ``` **Quiz Options** | Option | Type | Required | Description | | ----------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------------------ | | `choices` | `array` | Yes | The list of possible answers to the quiz question. | | `correctChoice`| `number` | Yes | Index of the correct choice from the `choices` array. | **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Survey Prompt Prompt that allows the user to provide feedback for a list of questions. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/survey-prompt.gif" alt="Enquirer Survey Prompt" width="750"> </p> **Example Usage** ```js const { Survey } = require('enquirer'); const prompt = new Survey({ name: 'experience', message: 'Please rate your experience', scale: [ { name: '1', message: 'Strongly Disagree' }, { name: '2', message: 'Disagree' }, { name: '3', message: 'Neutral' }, { name: '4', message: 'Agree' }, { name: '5', message: 'Strongly Agree' } ], margin: [0, 0, 2, 1], choices: [ { name: 'interface', message: 'The website has a friendly interface.' }, { name: 'navigation', message: 'The website is easy to navigate.' }, { name: 'images', message: 'The website usually has good images.' }, { name: 'upload', message: 'The website makes it easy to upload images.' }, { name: 'colors', message: 'The website has a pleasing color palette.' 
} ] }); prompt.run() .then(value => console.log('ANSWERS:', value)) .catch(console.error); ``` **Related prompts** * [Scale](#scale-prompt) * [Snippet](#snippet-prompt) * [Select](#select-prompt) *** ### Scale Prompt A more compact version of the [Survey prompt](#survey-prompt), the Scale prompt allows the user to quickly provide feedback using a [Likert Scale](https://en.wikipedia.org/wiki/Likert_scale). <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/scale-prompt.gif" alt="Enquirer Scale Prompt" width="750"> </p> **Example Usage** ```js const { Scale } = require('enquirer'); const prompt = new Scale({ name: 'experience', message: 'Please rate your experience', scale: [ { name: '1', message: 'Strongly Disagree' }, { name: '2', message: 'Disagree' }, { name: '3', message: 'Neutral' }, { name: '4', message: 'Agree' }, { name: '5', message: 'Strongly Agree' } ], margin: [0, 0, 2, 1], choices: [ { name: 'interface', message: 'The website has a friendly interface.', initial: 2 }, { name: 'navigation', message: 'The website is easy to navigate.', initial: 2 }, { name: 'images', message: 'The website usually has good images.', initial: 2 }, { name: 'upload', message: 'The website makes it easy to upload images.', initial: 2 }, { name: 'colors', message: 'The website has a pleasing color palette.', initial: 2 } ] }); prompt.run() .then(value => console.log('ANSWERS:', value)) .catch(console.error); ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Select Prompt Prompt that allows the user to select from a list of options. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/select-prompt.gif" alt="Enquirer Select Prompt" width="750"> </p> **Example Usage** ```js const { Select } = require('enquirer'); const prompt = new Select({ name: 'color', message: 'Pick a flavor', choices: ['apple', 'grape', 'watermelon', 'cherry', 'orange'] }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [AutoComplete](#autocomplete-prompt) * [MultiSelect](#multiselect-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Sort Prompt Prompt that allows the user to sort items in a list. **Example** In this [example](https://github.com/enquirer/enquirer/raw/master/examples/sort/prompt.js), custom styling is applied to the returned values to make it easier to see what's happening. 
<p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/sort-prompt.gif" alt="Enquirer Sort Prompt" width="750"> </p> **Example Usage** ```js const colors = require('ansi-colors'); const { Sort } = require('enquirer'); const prompt = new Sort({ name: 'colors', message: 'Sort the colors in order of preference', hint: 'Top is best, bottom is worst', numbered: true, choices: ['red', 'white', 'green', 'cyan', 'yellow'].map(n => ({ name: n, message: colors[n](n) })) }); prompt.run() .then(function(answer = []) { console.log(answer); console.log('Your preferred order of colors is:'); console.log(answer.map(key => colors[key](key)).join('\n')); }) .catch(console.error); ``` **Related prompts** * [List](#list-prompt) * [Select](#select-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Snippet Prompt Prompt that allows the user to replace placeholders in a snippet of code or text. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/snippet-prompt.gif" alt="Prompts" width="750"> </p> **Example Usage** ```js const semver = require('semver'); const { Snippet } = require('enquirer'); const prompt = new Snippet({ name: 'username', message: 'Fill out the fields in package.json', required: true, fields: [ { name: 'author_name', message: 'Author Name' }, { name: 'version', validate(value, state, item, index) { if (item && item.name === 'version' && !semver.valid(value)) { return prompt.styles.danger('version should be a valid semver value'); } return true; } } ], template: `{ "name": "\${name}", "description": "\${description}", "version": "\${version}", "homepage": "https://github.com/\${username}/\${name}", "author": "\${author_name} (https://github.com/\${username})", "repository": "\${username}/\${name}", "license": "\${license:ISC}" } ` }); prompt.run() .then(answer => console.log('Answer:', answer.result)) .catch(console.error); ``` **Related prompts** * [Survey](#survey-prompt) * [AutoComplete](#autocomplete-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Toggle Prompt Prompt that allows the user to toggle between two values then returns `true` or `false`. <p align="center"> <img src="https://raw.githubusercontent.com/enquirer/enquirer/master/media/toggle-prompt.gif" alt="Enquirer Toggle Prompt" width="750"> </p> **Example Usage** ```js const { Toggle } = require('enquirer'); const prompt = new Toggle({ message: 'Want to answer?', enabled: 'Yep', disabled: 'Nope' }); prompt.run() .then(answer => console.log('Answer:', answer)) .catch(console.error); ``` **Related prompts** * [Confirm](#confirm-prompt) * [Input](#input-prompt) * [Sort](#sort-prompt) **↑ back to:** [Getting Started](#-getting-started) · [Prompts](#-prompts) *** ### Prompt Types There are 5 (soon to be 6!) type classes: * [ArrayPrompt](#arrayprompt) - [Options](#options) - [Properties](#properties) - [Methods](#methods) - [Choices](#choices) - [Defining choices](#defining-choices) - [Choice properties](#choice-properties) - [Related prompts](#related-prompts) * [AuthPrompt](#authprompt) * [BooleanPrompt](#booleanprompt) * DatePrompt (Coming Soon!) * [NumberPrompt](#numberprompt) * [StringPrompt](#stringprompt) Each type is a low-level class that may be used as a starting point for creating higher level prompts. Continue reading to learn how. ### ArrayPrompt The `ArrayPrompt` class is used for creating prompts that display a list of choices in the terminal. 
For example, Enquirer uses this class as the basis for the [Select](#select) and [Survey](#survey) prompts.

#### Options

In addition to the [options](#options) available to all prompts, Array prompts also support the following options.

| **Option**  | **Required?** | **Type**           | **Description**                                                                                                          |
| ----------- | ------------- | ------------------ | ------------------------------------------------------------------------------------------------------------------------ |
| `autofocus` | `no`          | `string\|number`   | The index or name of the choice that should have focus when the prompt loads. Only one choice may have focus at a time. |
| `stdin`     | `no`          | `stream`           | The input stream to use for emitting keypress events. Defaults to `process.stdin`.                                       |
| `stdout`    | `no`          | `stream`           | The output stream to use for writing the prompt to the terminal. Defaults to `process.stdout`.                           |

#### Properties

Array prompts have the following instance properties and getters.

| **Property name** | **Type** | **Description** |
| ----------------- | -------- | --------------- |
| `choices`  | `array`  | Array of choices that have been normalized from choices passed on the prompt options. |
| `cursor`   | `number` | Position of the cursor relative to the _user input (string)_. |
| `enabled`  | `array`  | Returns an array of enabled choices. |
| `focused`  | `array`  | Returns the currently selected choice in the visible list of choices. This is similar to the concept of focus in HTML and CSS. Focused choices are always visible (on-screen). When a list of choices is longer than the list of visible choices, and an off-screen choice is _focused_, the list will scroll to the focused choice and re-render. Equivalent to `prompt.choices[prompt.index]`. |
| `index`    | `number` | Position of the pointer in the _visible list (array) of choices_. |
| `limit`    | `number` | The number of choices to display on-screen. |
| `selected` | `array`  | Either a list of enabled choices (when `options.multiple` is true) or the currently focused choice. |
| `visible`  | `string` | |

#### Methods

| **Method**    | **Description** |
| ------------- | --------------- |
| `pointer()`   | Returns the visual symbol to use to identify the choice that currently has focus. The `❯` symbol is often used for this. The pointer is not always visible, as with the `autocomplete` prompt. |
| `indicator()` | Returns the visual symbol that indicates whether or not a choice is checked/enabled. |
| `focus()`     | Sets focus on a choice, if it can be focused. |

#### Choices

Array prompts support the `choices` option, which is the array of choices users will be able to select from when rendered in the terminal.
**Type**: `string|object` **Example** ```js const { prompt } = require('enquirer'); const questions = [{ type: 'select', name: 'color', message: 'Favorite color?', initial: 1, choices: [ { name: 'red', message: 'Red', value: '#ff0000' }, //<= choice object { name: 'green', message: 'Green', value: '#00ff00' }, //<= choice object { name: 'blue', message: 'Blue', value: '#0000ff' } //<= choice object ] }]; let answers = await prompt(questions); console.log('Answer:', answers.color); ``` #### Defining choices Whether defined as a string or object, choices are normalized to the following interface: ```js { name: string; message: string | undefined; value: string | undefined; hint: string | undefined; disabled: boolean | string | undefined; } ``` **Example** ```js const question = { name: 'fruit', message: 'Favorite fruit?', choices: ['Apple', 'Orange', 'Raspberry'] }; ``` Normalizes to the following when the prompt is run: ```js const question = { name: 'fruit', message: 'Favorite fruit?', choices: [ { name: 'Apple', message: 'Apple', value: 'Apple' }, { name: 'Orange', message: 'Orange', value: 'Orange' }, { name: 'Raspberry', message: 'Raspberry', value: 'Raspberry' } ] }; ``` #### Choice properties The following properties are supported on `choice` objects. | **Option** | **Type** | **Description** | | ----------- | ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `name` | `string` | The unique key to identify a choice | | `message` | `string` | The message to display in the terminal. `name` is used when this is undefined. | | `value` | `string` | Value to associate with the choice. Useful for creating key-value pairs from user choices. `name` is used when this is undefined. | | `choices` | `array` | Array of "child" choices. | | `hint` | `string` | Help message to display next to a choice. | | `role` | `string` | Determines how the choice will be displayed. Currently the only role supported is `separator`. Additional roles may be added in the future (like `heading`, etc). Please create a [feature request] | | `enabled` | `boolean` | Enabled a choice by default. This is only supported when `options.multiple` is true or on prompts that support multiple choices, like [MultiSelect](#-multiselect). | | `disabled` | `boolean\|string` | Disable a choice so that it cannot be selected. This value may either be `true`, `false`, or a message to display. | | `indicator` | `string\|function` | Custom indicator to render for a choice (like a check or radio button). | #### Related prompts * [AutoComplete](#autocomplete-prompt) * [Form](#form-prompt) * [MultiSelect](#multiselect-prompt) * [Select](#select-prompt) * [Survey](#survey-prompt) *** ### AuthPrompt The `AuthPrompt` is used to create prompts to log in user using any authentication method. For example, Enquirer uses this class as the basis for the [BasicAuth Prompt](#basicauth-prompt). You can also find prompt examples in `examples/auth/` folder that utilizes `AuthPrompt` to create OAuth based authentication prompt or a prompt that authenticates using time-based OTP, among others. `AuthPrompt` has a factory function that creates an instance of `AuthPrompt` class and it expects an `authenticate` function, as an argument, which overrides the `authenticate` function of the `AuthPrompt` class. 
#### Methods | **Method** | **Description** | | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `authenticate()` | Contain all the authentication logic. This function should be overridden to implement custom authentication logic. The default `authenticate` function throws an error if no other function is provided. | #### Choices Auth prompt supports the `choices` option, which is the similar to the choices used in [Form Prompt](#form-prompt). **Example** ```js const { AuthPrompt } = require('enquirer'); function authenticate(value, state) { if (value.username === this.options.username && value.password === this.options.password) { return true; } return false; } const CustomAuthPrompt = AuthPrompt.create(authenticate); const prompt = new CustomAuthPrompt({ name: 'password', message: 'Please enter your password', username: 'rajat-sr', password: '1234567', choices: [ { name: 'username', message: 'username' }, { name: 'password', message: 'password' } ] }); prompt .run() .then(answer => console.log('Authenticated?', answer)) .catch(console.error); ``` #### Related prompts * [BasicAuth Prompt](#basicauth-prompt) *** ### BooleanPrompt The `BooleanPrompt` class is used for creating prompts that display and return a boolean value. ```js const { BooleanPrompt } = require('enquirer'); const prompt = new BooleanPrompt({ header: '========================', message: 'Do you love enquirer?', footer: '========================', }); prompt.run() .then(answer => console.log('Selected:', answer)) .catch(console.error); ``` **Returns**: `boolean` *** ### NumberPrompt The `NumberPrompt` class is used for creating prompts that display and return a numerical value. ```js const { NumberPrompt } = require('enquirer'); const prompt = new NumberPrompt({ header: '************************', message: 'Input the Numbers:', footer: '************************', }); prompt.run() .then(answer => console.log('Numbers are:', answer)) .catch(console.error); ``` **Returns**: `string|number` (number, or number formatted as a string) *** ### StringPrompt The `StringPrompt` class is used for creating prompts that display and return a string value. ```js const { StringPrompt } = require('enquirer'); const prompt = new StringPrompt({ header: '************************', message: 'Input the String:', footer: '************************' }); prompt.run() .then(answer => console.log('String is:', answer)) .catch(console.error); ``` **Returns**: `string` <br> ## ❯ Custom prompts With Enquirer 2.0, custom prompts are easier than ever to create and use. **How do I create a custom prompt?** Custom prompts are created by extending either: * Enquirer's `Prompt` class * one of the built-in [prompts](#-prompts), or * low-level [types](#-types). <!-- Example: HaiKarate Custom Prompt --> ```js const { Prompt } = require('enquirer'); class HaiKarate extends Prompt { constructor(options = {}) { super(options); this.value = options.initial || 0; this.cursorHide(); } up() { this.value++; this.render(); } down() { this.value--; this.render(); } render() { this.clear(); // clear previously rendered prompt from the terminal this.write(`${this.state.message}: ${this.value}`); } } // Use the prompt by creating an instance of your custom prompt class. 
const prompt = new HaiKarate({ message: 'How many sprays do you want?', initial: 10 }); prompt.run() .then(answer => console.log('Sprays:', answer)) .catch(console.error); ``` If you want to be able to specify your prompt by `type` so that it may be used alongside other prompts, you will need to first create an instance of `Enquirer`. ```js const Enquirer = require('enquirer'); const enquirer = new Enquirer(); ``` Then use the `.register()` method to add your custom prompt. ```js enquirer.register('haikarate', HaiKarate); ``` Now you can do the following when defining "questions". ```js let spritzer = require('cologne-drone'); let answers = await enquirer.prompt([ { type: 'haikarate', name: 'cologne', message: 'How many sprays do you need?', initial: 10, async onSubmit(name, value) { await spritzer.activate(value); //<= activate drone return value; } } ]); ``` <br> ## ❯ Key Bindings ### All prompts These key combinations may be used with all prompts. | **command** | **description** | | -------------------------------- | -------------------------------------- | | <kbd>ctrl</kbd> + <kbd>c</kbd> | Cancel the prompt. | | <kbd>ctrl</kbd> + <kbd>g</kbd> | Reset the prompt to its initial state. | <br> ### Move cursor These combinations may be used on prompts that support user input (eg. [input prompt](#input-prompt), [password prompt](#password-prompt), and [invisible prompt](#invisible-prompt)). | **command** | **description** | | ------------------------------ | ---------------------------------------- | | <kbd>left</kbd> | Move the cursor back one character. | | <kbd>right</kbd> | Move the cursor forward one character. | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move cursor to the start of the line | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move cursor to the end of the line | | <kbd>ctrl</kbd> + <kbd>b</kbd> | Move cursor back one character | | <kbd>ctrl</kbd> + <kbd>f</kbd> | Move cursor forward one character | | <kbd>ctrl</kbd> + <kbd>x</kbd> | Toggle between first and cursor position | <br> ### Edit Input These key combinations may be used on prompts that support user input (eg. [input prompt](#input-prompt), [password prompt](#password-prompt), and [invisible prompt](#invisible-prompt)). | **command** | **description** | | ------------------------------ | ---------------------------------------- | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move cursor to the start of the line | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move cursor to the end of the line | | <kbd>ctrl</kbd> + <kbd>b</kbd> | Move cursor back one character | | <kbd>ctrl</kbd> + <kbd>f</kbd> | Move cursor forward one character | | <kbd>ctrl</kbd> + <kbd>x</kbd> | Toggle between first and cursor position | <br> | **command (Mac)** | **command (Windows)** | **description** | | ----------------------------------- | -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | | <kbd>delete</kbd> | <kbd>backspace</kbd> | Delete one character to the left. | | <kbd>fn</kbd> + <kbd>delete</kbd> | <kbd>delete</kbd> | Delete one character to the right. | | <kbd>option</kbd> + <kbd>up</kbd> | <kbd>alt</kbd> + <kbd>up</kbd> | Scroll to the previous item in history ([Input prompt](#input-prompt) only, when [history is enabled](examples/input/option-history.js)). 
| | <kbd>option</kbd> + <kbd>down</kbd> | <kbd>alt</kbd> + <kbd>down</kbd> | Scroll to the next item in history ([Input prompt](#input-prompt) only, when [history is enabled](examples/input/option-history.js)). | ### Select choices These key combinations may be used on prompts that support _multiple_ choices, such as the [multiselect prompt](#multiselect-prompt), or the [select prompt](#select-prompt) when the `multiple` options is true. | **command** | **description** | | ----------------- | -------------------------------------------------------------------------------------------------------------------- | | <kbd>space</kbd> | Toggle the currently selected choice when `options.multiple` is true. | | <kbd>number</kbd> | Move the pointer to the choice at the given index. Also toggles the selected choice when `options.multiple` is true. | | <kbd>a</kbd> | Toggle all choices to be enabled or disabled. | | <kbd>i</kbd> | Invert the current selection of choices. | | <kbd>g</kbd> | Toggle the current choice group. | <br> ### Hide/show choices | **command** | **description** | | ------------------------------- | ---------------------------------------------- | | <kbd>fn</kbd> + <kbd>up</kbd> | Decrease the number of visible choices by one. | | <kbd>fn</kbd> + <kbd>down</kbd> | Increase the number of visible choices by one. | <br> ### Move/lock Pointer | **command** | **description** | | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------- | | <kbd>number</kbd> | Move the pointer to the choice at the given index. Also toggles the selected choice when `options.multiple` is true. | | <kbd>up</kbd> | Move the pointer up. | | <kbd>down</kbd> | Move the pointer down. | | <kbd>ctrl</kbd> + <kbd>a</kbd> | Move the pointer to the first _visible_ choice. | | <kbd>ctrl</kbd> + <kbd>e</kbd> | Move the pointer to the last _visible_ choice. | | <kbd>shift</kbd> + <kbd>up</kbd> | Scroll up one choice without changing pointer position (locks the pointer while scrolling). | | <kbd>shift</kbd> + <kbd>down</kbd> | Scroll down one choice without changing pointer position (locks the pointer while scrolling). | <br> | **command (Mac)** | **command (Windows)** | **description** | | -------------------------------- | --------------------- | ---------------------------------------------------------- | | <kbd>fn</kbd> + <kbd>left</kbd> | <kbd>home</kbd> | Move the pointer to the first choice in the choices array. | | <kbd>fn</kbd> + <kbd>right</kbd> | <kbd>end</kbd> | Move the pointer to the last choice in the choices array. | <br> ## ❯ Release History Please see [CHANGELOG.md](CHANGELOG.md). ## ❯ Performance ### System specs MacBook Pro, Intel Core i7, 2.5 GHz, 16 GB. ### Load time Time it takes for the module to load the first time (average of 3 runs): ``` enquirer: 4.013ms inquirer: 286.717ms ``` <br> ## ❯ About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Todo We're currently working on documentation for the following items. Please star and watch the repository for updates! 
* [ ] Customizing symbols * [ ] Customizing styles (palette) * [ ] Customizing rendered input * [ ] Customizing returned values * [ ] Customizing key bindings * [ ] Question validation * [ ] Choice validation * [ ] Skipping questions * [ ] Async choices * [ ] Async timers: loaders, spinners and other animations * [ ] Links to examples </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` ```sh $ yarn && yarn test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> #### Contributors | **Commits** | **Contributor** | | --- | --- | | 283 | [jonschlinkert](https://github.com/jonschlinkert) | | 82 | [doowb](https://github.com/doowb) | | 32 | [rajat-sr](https://github.com/rajat-sr) | | 20 | [318097](https://github.com/318097) | | 15 | [g-plane](https://github.com/g-plane) | | 12 | [pixelass](https://github.com/pixelass) | | 5 | [adityavyas611](https://github.com/adityavyas611) | | 5 | [satotake](https://github.com/satotake) | | 3 | [tunnckoCore](https://github.com/tunnckoCore) | | 3 | [Ovyerus](https://github.com/Ovyerus) | | 3 | [sw-yx](https://github.com/sw-yx) | | 2 | [DanielRuf](https://github.com/DanielRuf) | | 2 | [GabeL7r](https://github.com/GabeL7r) | | 1 | [AlCalzone](https://github.com/AlCalzone) | | 1 | [hipstersmoothie](https://github.com/hipstersmoothie) | | 1 | [danieldelcore](https://github.com/danieldelcore) | | 1 | [ImgBotApp](https://github.com/ImgBotApp) | | 1 | [jsonkao](https://github.com/jsonkao) | | 1 | [knpwrs](https://github.com/knpwrs) | | 1 | [yeskunall](https://github.com/yeskunall) | | 1 | [mischah](https://github.com/mischah) | | 1 | [renarsvilnis](https://github.com/renarsvilnis) | | 1 | [sbugert](https://github.com/sbugert) | | 1 | [stephencweiss](https://github.com/stephencweiss) | | 1 | [skellock](https://github.com/skellock) | | 1 | [whxaxes](https://github.com/whxaxes) | #### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) #### Credit Thanks to [derhuerst](https://github.com/derhuerst), creator of prompt libraries such as [prompt-skeleton](https://github.com/derhuerst/prompt-skeleton), which influenced some of the concepts we used in our prompts. #### License Copyright © 2018-present, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). # minizlib A fast zlib stream built on [minipass](http://npm.im/minipass) and Node.js's zlib binding. This module was created to serve the needs of [node-tar](http://npm.im/tar) and [minipass-fetch](http://npm.im/minipass-fetch). Brotli is supported in versions of node with a Brotli binding. ## How does this differ from the streams in `require('zlib')`? First, there are no convenience methods to compress or decompress a buffer. If you want those, use the built-in `zlib` module. This is only streams. 
That being said, Minipass streams make it fairly easy to use as one-liners: `new zlib.Deflate().end(data).read()` will return the deflate-compressed result.

This module compresses and decompresses the data as fast as you feed it in. It is synchronous, and runs on the main process thread. Zlib and Brotli operations can be high CPU, but they're very fast, and doing it this way means much less bookkeeping and artificial deferral.

Node's built-in zlib streams are built on top of `stream.Transform`. They do the maximally safe thing with respect to consistent asynchrony, buffering, and backpressure.

See [Minipass](http://npm.im/minipass) for more on the differences between Node.js core streams and Minipass streams, and the convenience methods provided by that class.

## Classes

- Deflate
- Inflate
- Gzip
- Gunzip
- DeflateRaw
- InflateRaw
- Unzip
- BrotliCompress (Node v10 and higher)
- BrotliDecompress (Node v10 and higher)

## USAGE

```js
const zlib = require('minizlib')
const input = sourceOfCompressedData()
const decode = new zlib.BrotliDecompress()
const output = whereToWriteTheDecodedData()
input.pipe(decode).pipe(output)
```

## REPRODUCIBLE BUILDS

To create reproducible gzip compressed files across different operating systems, set `portable: true` in the options. This causes minizlib to set the `OS` indicator in byte 9 of the extended gzip header to `0xFF` for 'unknown'.

[![NPM version](https://img.shields.io/npm/v/esprima.svg)](https://www.npmjs.com/package/esprima)
[![npm download](https://img.shields.io/npm/dm/esprima.svg)](https://www.npmjs.com/package/esprima)
[![Build Status](https://img.shields.io/travis/jquery/esprima/master.svg)](https://travis-ci.org/jquery/esprima)
[![Coverage Status](https://img.shields.io/codecov/c/github/jquery/esprima/master.svg)](https://codecov.io/github/jquery/esprima)

**Esprima** ([esprima.org](http://esprima.org), BSD license) is a high performance, standard-compliant [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) parser written in ECMAScript (also popularly known as [JavaScript](https://en.wikipedia.org/wiki/JavaScript)). Esprima is created and maintained by [Ariya Hidayat](https://twitter.com/ariyahidayat), with the help of [many contributors](https://github.com/jquery/esprima/contributors).

### Features

- Full support for ECMAScript 2017 ([ECMA-262 8th Edition](http://www.ecma-international.org/publications/standards/Ecma-262.htm))
- Sensible [syntax tree format](https://github.com/estree/estree/blob/master/es5.md) as standardized by [ESTree project](https://github.com/estree/estree)
- Experimental support for [JSX](https://facebook.github.io/jsx/), a syntax extension for [React](https://facebook.github.io/react/)
- Optional tracking of syntax node location (index-based and line-column)
- [Heavily tested](http://esprima.org/test/ci.html) (~1500 [unit tests](https://github.com/jquery/esprima/tree/master/test/fixtures) with [full code coverage](https://codecov.io/github/jquery/esprima))

### API

Esprima can be used to perform [lexical analysis](https://en.wikipedia.org/wiki/Lexical_analysis) (tokenization) or [syntactic analysis](https://en.wikipedia.org/wiki/Parsing) (parsing) of a JavaScript program.
A simple example on Node.js REPL: ```javascript > var esprima = require('esprima'); > var program = 'const answer = 42'; > esprima.tokenize(program); [ { type: 'Keyword', value: 'const' }, { type: 'Identifier', value: 'answer' }, { type: 'Punctuator', value: '=' }, { type: 'Numeric', value: '42' } ] > esprima.parseScript(program); { type: 'Program', body: [ { type: 'VariableDeclaration', declarations: [Object], kind: 'const' } ], sourceType: 'script' } ``` For more information, please read the [complete documentation](http://esprima.org/doc). argparse ======== [![Build Status](https://secure.travis-ci.org/nodeca/argparse.svg?branch=master)](http://travis-ci.org/nodeca/argparse) [![NPM version](https://img.shields.io/npm/v/argparse.svg)](https://www.npmjs.org/package/argparse) CLI arguments parser for node.js. Javascript port of python's [argparse](http://docs.python.org/dev/library/argparse.html) module (original version 3.2). That's a full port, except some very rare options, recorded in issue tracker. **NB. Difference with original.** - Method names changed to camelCase. See [generated docs](http://nodeca.github.com/argparse/). - Use `defaultValue` instead of `default`. - Use `argparse.Const.REMAINDER` instead of `argparse.REMAINDER`, and similarly for constant values `OPTIONAL`, `ZERO_OR_MORE`, and `ONE_OR_MORE` (aliases for `nargs` values `'?'`, `'*'`, `'+'`, respectively), and `SUPPRESS`. Example ======= test.js file: ```javascript #!/usr/bin/env node 'use strict'; var ArgumentParser = require('../lib/argparse').ArgumentParser; var parser = new ArgumentParser({ version: '0.0.1', addHelp:true, description: 'Argparse example' }); parser.addArgument( [ '-f', '--foo' ], { help: 'foo bar' } ); parser.addArgument( [ '-b', '--bar' ], { help: 'bar foo' } ); parser.addArgument( '--baz', { help: 'baz bar' } ); var args = parser.parseArgs(); console.dir(args); ``` Display help: ``` $ ./test.js -h usage: example.js [-h] [-v] [-f FOO] [-b BAR] [--baz BAZ] Argparse example Optional arguments: -h, --help Show this help message and exit. -v, --version Show program's version number and exit. -f FOO, --foo FOO foo bar -b BAR, --bar BAR bar foo --baz BAZ baz bar ``` Parse arguments: ``` $ ./test.js -f=3 --bar=4 --baz 5 { foo: '3', bar: '4', baz: '5' } ``` More [examples](https://github.com/nodeca/argparse/tree/master/examples). ArgumentParser objects ====================== ``` new ArgumentParser({parameters hash}); ``` Creates a new ArgumentParser object. **Supported params:** - ```description``` - Text to display before the argument help. - ```epilog``` - Text to display after the argument help. - ```addHelp``` - Add a -h/–help option to the parser. (default: true) - ```argumentDefault``` - Set the global default value for arguments. (default: null) - ```parents``` - A list of ArgumentParser objects whose arguments should also be included. - ```prefixChars``` - The set of characters that prefix optional arguments. (default: ‘-‘) - ```formatterClass``` - A class for customizing the help output. - ```prog``` - The name of the program (default: `path.basename(process.argv[1])`) - ```usage``` - The string describing the program usage (default: generated) - ```conflictHandler``` - Usually unnecessary, defines strategy for resolving conflicting optionals. **Not supported yet** - ```fromfilePrefixChars``` - The set of characters that prefix files from which additional arguments should be read. 
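For instance, a small sketch that combines a few of these parameters; the program name, description, and epilog text here are illustrative only, not taken from the library's examples:

```javascript
var ArgumentParser = require('argparse').ArgumentParser;

var parser = new ArgumentParser({
  description: 'Uploads files to a server',          // shown before the argument help
  epilog: 'See the project wiki for more examples',  // shown after the argument help
  prog: 'uploader',                                  // overrides path.basename(process.argv[1])
  addHelp: true                                      // adds -h/--help (the default)
});

var args = parser.parseArgs();
console.dir(args);
```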
Details in [original ArgumentParser guide](http://docs.python.org/dev/library/argparse.html#argumentparser-objects) addArgument() method ==================== ``` ArgumentParser.addArgument(name or flag or [name] or [flags...], {options}) ``` Defines how a single command-line argument should be parsed. - ```name or flag or [name] or [flags...]``` - Either a positional name (e.g., `'foo'`), a single option (e.g., `'-f'` or `'--foo'`), an array of a single positional name (e.g., `['foo']`), or an array of options (e.g., `['-f', '--foo']`). Options: - ```action``` - The basic type of action to be taken when this argument is encountered at the command line. - ```nargs```- The number of command-line arguments that should be consumed. - ```constant``` - A constant value required by some action and nargs selections. - ```defaultValue``` - The value produced if the argument is absent from the command line. - ```type``` - The type to which the command-line argument should be converted. - ```choices``` - A container of the allowable values for the argument. - ```required``` - Whether or not the command-line option may be omitted (optionals only). - ```help``` - A brief description of what the argument does. - ```metavar``` - A name for the argument in usage messages. - ```dest``` - The name of the attribute to be added to the object returned by parseArgs(). Details in [original add_argument guide](http://docs.python.org/dev/library/argparse.html#the-add-argument-method) Action (some details) ================ ArgumentParser objects associate command-line arguments with actions. These actions can do just about anything with the command-line arguments associated with them, though most actions simply add an attribute to the object returned by parseArgs(). The action keyword argument specifies how the command-line arguments should be handled. The supported actions are: - ```store``` - Just stores the argument’s value. This is the default action. - ```storeConst``` - Stores value, specified by the const keyword argument. (Note that the const keyword argument defaults to the rather unhelpful None.) The 'storeConst' action is most commonly used with optional arguments, that specify some sort of flag. - ```storeTrue``` and ```storeFalse``` - Stores values True and False respectively. These are special cases of 'storeConst'. - ```append``` - Stores a list, and appends each argument value to the list. This is useful to allow an option to be specified multiple times. - ```appendConst``` - Stores a list, and appends value, specified by the const keyword argument to the list. (Note, that the const keyword argument defaults is None.) The 'appendConst' action is typically used when multiple arguments need to store constants to the same list. - ```count``` - Counts the number of times a keyword argument occurs. For example, used for increasing verbosity levels. - ```help``` - Prints a complete help message for all the options in the current parser and then exits. By default a help action is automatically added to the parser. See ArgumentParser for details of how the output is created. - ```version``` - Prints version information and exit. Expects a `version=` keyword argument in the addArgument() call. 
Details in [original action guide](http://docs.python.org/dev/library/argparse.html#action) Sub-commands ============ ArgumentParser.addSubparsers() Many programs split their functionality into a number of sub-commands, for example, the svn program can invoke sub-commands like `svn checkout`, `svn update`, and `svn commit`. Splitting up functionality this way can be a particularly good idea when a program performs several different functions which require different kinds of command-line arguments. `ArgumentParser` supports creation of such sub-commands with `addSubparsers()` method. The `addSubparsers()` method is normally called with no arguments and returns an special action object. This object has a single method `addParser()`, which takes a command name and any `ArgumentParser` constructor arguments, and returns an `ArgumentParser` object that can be modified as usual. Example: sub_commands.js ```javascript #!/usr/bin/env node 'use strict'; var ArgumentParser = require('../lib/argparse').ArgumentParser; var parser = new ArgumentParser({ version: '0.0.1', addHelp:true, description: 'Argparse examples: sub-commands', }); var subparsers = parser.addSubparsers({ title:'subcommands', dest:"subcommand_name" }); var bar = subparsers.addParser('c1', {addHelp:true}); bar.addArgument( [ '-f', '--foo' ], { action: 'store', help: 'foo3 bar3' } ); var bar = subparsers.addParser( 'c2', {aliases:['co'], addHelp:true} ); bar.addArgument( [ '-b', '--bar' ], { action: 'store', type: 'int', help: 'foo3 bar3' } ); var args = parser.parseArgs(); console.dir(args); ``` Details in [original sub-commands guide](http://docs.python.org/dev/library/argparse.html#sub-commands) Contributors ============ - [Eugene Shkuropat](https://github.com/shkuropat) - [Paul Jacobson](https://github.com/hpaulj) [others](https://github.com/nodeca/argparse/graphs/contributors) License ======= Copyright (c) 2012 [Vitaly Puzrin](https://github.com/puzrin). Released under the MIT license. See [LICENSE](https://github.com/nodeca/argparse/blob/master/LICENSE) for details. # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. 
Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows command prompt notes ##### CMD On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Example: ```cmd set DEBUG=* & node app.js ``` ##### PowerShell (VS Code default) PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Example: ```cmd $env:DEBUG='app';node app.js ``` Then, run the program to be debugged as usual. npm script example: ```js "windowsDebug": "@powershell -Command $env:DEBUG='*';node app.js", ``` ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see when the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well, the "+NNNms" will show you how much time was spent between calls. 
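A minimal sketch of that pattern (the URL is a stand-in; the "+NNNms" diff appears on stderr when it is a TTY):

```js
var debug = require('debug')('http');
var http = require('http');

debug('requesting resource');   // first call starts at "+0ms"

// Hypothetical request; the next debug() call will show the elapsed time
// since the previous one as "+NNNms".
http.get('http://example.com/', function (res) {
  debug('response received %d', res.statusCode);
});
```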
<img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debuggers you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. | | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. 
For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Extend You can simply extend debugger ```js const log = require('debug')('auth'); //creates new debug instance with extended namespace const logSign = log.extend('sign'); const logLogin = log.extend('login'); log('hello'); // auth hello logSign('hello'); //auth:sign hello logLogin('hello'); //auth:login hello ``` ## Set dynamically You can also enable debug dynamically by calling the `enable()` method : ```js let debug = require('debug'); console.log(1, debug.enabled('test')); debug.enable('test'); console.log(2, debug.enabled('test')); debug.disable(); console.log(3, debug.enabled('test')); ``` print : ``` 1 false 2 true 3 false ``` Usage : `enable(namespaces)` `namespaces` can include modes separated by a colon and wildcards. Note that calling `enable()` completely overrides previously set DEBUG variable : ``` $ DEBUG=foo node -e 'var dbg = require("debug"); dbg.enable("bar"); console.log(dbg.enabled("foo"))' => false ``` `disable()` Will disable all namespaces. The functions returns the namespaces currently enabled (and skipped). This can be useful if you want to disable debugging temporarily without knowing what was enabled to begin with. For example: ```js let debug = require('debug'); debug.enable('foo:*,-foo:bar'); let namespaces = debug.disable(); debug.enable(namespaces); ``` Note: There is no guarantee that the string will be identical to the initial enable string, but semantically they will be identical. ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... 
} ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. [[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a 
href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a 
href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # sprintf.js **sprintf.js** is a complete open source JavaScript sprintf implementation for the *browser* and *node.js*. 
Its prototype is simple:

    string sprintf(string format , [mixed arg1 [, mixed arg2 [ ,...]]])

The placeholders in the format string are marked by `%` and are followed by one or more of these elements, in this order:

* An optional number followed by a `$` sign that selects which argument index to use for the value. If not specified, arguments will be placed in the same order as the placeholders in the input string.
* An optional `+` sign that forces the result to be preceded by a plus or minus sign on numeric values. By default, only the `-` sign is used on negative numbers.
* An optional padding specifier that says what character to use for padding (if specified). Possible values are `0` or any other character preceded by a `'` (single quote). The default is to pad with *spaces*.
* An optional `-` sign, that causes sprintf to left-align the result of this placeholder. The default is to right-align the result.
* An optional number, that says how many characters the result should have. If the value to be returned is shorter than this number, the result will be padded. When used with the `j` (JSON) type specifier, the padding length specifies the tab size used for indentation.
* An optional precision modifier, consisting of a `.` (dot) followed by a number, that says how many digits should be displayed for floating point numbers. When used with the `g` type specifier, it specifies the number of significant digits. When used on a string, it causes the result to be truncated.
* A type specifier that can be any of:
    * `%` — yields a literal `%` character
    * `b` — yields an integer as a binary number
    * `c` — yields an integer as the character with that ASCII value
    * `d` or `i` — yields an integer as a signed decimal number
    * `e` — yields a float using scientific notation
    * `u` — yields an integer as an unsigned decimal number
    * `f` — yields a float as is; see notes on precision above
    * `g` — yields a float as is; see notes on precision above
    * `o` — yields an integer as an octal number
    * `s` — yields a string as is
    * `x` — yields an integer as a hexadecimal number (lower-case)
    * `X` — yields an integer as a hexadecimal number (upper-case)
    * `j` — yields a JavaScript object or array as a JSON encoded string

## JavaScript `vsprintf`

`vsprintf` is the same as `sprintf` except that it accepts an array of arguments, rather than a variable number of arguments:

    vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"])

## Argument swapping

You can also swap the arguments. That is, the order of the placeholders doesn't have to match the order of the arguments. You can do that by simply indicating in the format string which arguments the placeholders refer to:

    sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants")

And, of course, you can repeat the placeholders without having to increase the number of arguments.

## Named arguments

Format strings may contain replacement fields rather than positional placeholders. Instead of referring to a certain argument, you can now refer to a certain key within an object.
Replacement fields are surrounded by rounded parentheses - `(` and `)` - and begin with a keyword that refers to a key: var user = { name: "Dolly" } sprintf("Hello %(name)s", user) // Hello Dolly Keywords in replacement fields can be optionally followed by any number of keywords or indexes: var users = [ {name: "Dolly"}, {name: "Molly"}, {name: "Polly"} ] sprintf("Hello %(users[0].name)s, %(users[1].name)s and %(users[2].name)s", {users: users}) // Hello Dolly, Molly and Polly Note: mixing positional and named placeholders is not (yet) supported ## Computed values You can pass in a function as a dynamic value and it will be invoked (with no arguments) in order to compute the value on-the-fly. sprintf("Current timestamp: %d", Date.now) // Current timestamp: 1398005382890 sprintf("Current date and time: %s", function() { return new Date().toString() }) # AngularJS You can now use `sprintf` and `vsprintf` (also aliased as `fmt` and `vfmt` respectively) in your AngularJS projects. See `demo/`. # Installation ## Via Bower bower install sprintf ## Or as a node.js module npm install sprintf-js ### Usage var sprintf = require("sprintf-js").sprintf, vsprintf = require("sprintf-js").vsprintf sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants") vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"]) # License **sprintf.js** is licensed under the terms of the 3-clause BSD license. # uuidv4 uuidv4 creates v4 UUIDs. ## Status | Category | Status | | ---------------- | --------------------------------------------------------------------------------------------------- | | Version | [![npm](https://img.shields.io/npm/v/uuidv4)](https://www.npmjs.com/package/uuidv4) | | Dependencies | ![David](https://img.shields.io/david/thenativeweb/uuidv4) | | Dev dependencies | ![David](https://img.shields.io/david/dev/thenativeweb/uuidv4) | | Build | ![GitHub Actions](https://github.com/thenativeweb/uuidv4/workflows/Release/badge.svg?branch=main) | | License | ![GitHub](https://img.shields.io/github/license/thenativeweb/uuidv4) | ## Please note This module will be deprecated in the future in favour of module [uuid](https://www.npmjs.com/package/uuid). Most of the functionality of this module is already included in `uuid` since version `8.3.0`, so most of the functions of this module have already been marked as deprecated. ## Installation ```shell $ npm install uuidv4 ``` ## Quick start First you need to integrate uuidv4 into your project by using the `require` function: ```javascript const { uuid } = require('uuidv4'); ``` If you use TypeScript, use the following code instead: ```typescript import { uuid } from 'uuidv4'; ``` Then you can create UUIDs. To do so simply call the `uuid` function: ```javascript console.log(uuid()); // => '11bf5b37-e0b8-42e0-8dcf-dc8c4aefc000' ``` ### Verifying a UUID To verify whether a given value is a UUID, use the `isUuid` function: ```javascript import { isUuid } from 'uuidv4'; console.log(isUuid('75442486-0878-440c-9db1-a7006c25a39f')); // => true ``` _Please note that the `isUuid` function returns `true` for both, `v4` and `v5` UUIDs. 
In addition, `isUuid` returns `true` for `empty()`._ #### Using a regular expression If you want to perform the verification on your own using a regular expression, use the `regex` property, and access its `v4` or `v5` property, depending on what you need: ```javascript import { regex } from 'uuidv4'; console.log(regex.v4); console.log(regex.v5); ``` _Please note that the regular expressions also consider `empty()` to be a valid UUID._ #### Using a JSON schema If you want to perform the verification on your own using a JSON schema, use the `jsonSchema` property, and access its `v4` or `v5` property, depending on what you need: ```javascript import { jsonSchema } from 'uuidv4'; console.log(jsonSchema.v4); console.log(jsonSchema.v5); ``` _Please note that the JSON schemas also consider `empty()` to be a valid UUID._ ### Getting a UUID from a string From time to time you need an identifier that looks like a UUID, but is actually inferred from a string. For that, use the `fromString` function, which returns a UUID `v5`: ```javascript import { fromString } from 'uuidv4'; console.log(fromString('the native web')); // => 'cdb63720-9628-5ef6-bbca-2e5ce6094f3c' ``` By default, the `fromString` function uses a pre-configured namespace. If you want to use your own namespace, provide a UUID as second parameter: ```javascript import { fromString } from 'uuidv4'; console.log(fromString('the native web', '004aadf4-8e1a-4450-905b-6039179f52da')); // => 'b1c4a89e-4905-5e3c-b57f-dc92627d011e' ``` ### Getting the empty UUID If you need a UUID that consists only of zeros, use the `empty` function: ```javascript import { empty } from 'uuidv4'; console.log(empty()); // => '00000000-0000-0000-0000-000000000000' ``` ## Running quality assurance To run quality assurance for this module use [roboter](https://www.npmjs.com/package/roboter): ```shell $ npx roboter ``` # AssemblyScript Loader A convenient loader for [AssemblyScript](https://assemblyscript.org) modules. Demangles module exports to a friendly object structure compatible with TypeScript definitions and provides useful utility to read/write data from/to memory. [Documentation](https://assemblyscript.org/loader.html) # fast-json-stable-stringify Deterministic `JSON.stringify()` - a faster version of [@substack](https://github.com/substack)'s json-stable-strigify without [jsonify](https://github.com/substack/jsonify). You can also pass in a custom comparison function. [![Build Status](https://travis-ci.org/epoberezkin/fast-json-stable-stringify.svg?branch=master)](https://travis-ci.org/epoberezkin/fast-json-stable-stringify) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/fast-json-stable-stringify/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/fast-json-stable-stringify?branch=master) # example ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; console.log(stringify(obj)); ``` output: ``` {"a":3,"b":[{"x":4,"y":5,"z":6},7],"c":8} ``` # methods ``` js var stringify = require('fast-json-stable-stringify') ``` ## var str = stringify(obj, opts) Return a deterministic stringified string `str` from the object `obj`. ## options ### cmp If `opts` is given, you can supply an `opts.cmp` to have a custom comparison function for object keys. 
Your function `opts.cmp` is called with these parameters: ``` js opts.cmp({ key: akey, value: avalue }, { key: bkey, value: bvalue }) ``` For example, to sort on the object key names in reverse order you could write: ``` js var stringify = require('fast-json-stable-stringify'); var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 }; var s = stringify(obj, function (a, b) { return a.key < b.key ? 1 : -1; }); console.log(s); ``` which results in the output string: ``` {"c":8,"b":[{"z":6,"y":5,"x":4},7],"a":3} ``` Or if you wanted to sort on the object values in reverse order, you could write: ``` var stringify = require('fast-json-stable-stringify'); var obj = { d: 6, c: 5, b: [{z:3,y:2,x:1},9], a: 10 }; var s = stringify(obj, function (a, b) { return a.value < b.value ? 1 : -1; }); console.log(s); ``` which outputs: ``` {"d":6,"c":5,"b":[{"z":3,"y":2,"x":1},9],"a":10} ``` ### cycles Pass `true` in `opts.cycles` to stringify circular property as `__cycle__` - the result will not be a valid JSON string in this case. TypeError will be thrown in case of circular object without this option. # install With [npm](https://npmjs.org) do: ``` npm install fast-json-stable-stringify ``` # benchmark To run benchmark (requires Node.js 6+): ``` node benchmark ``` Results: ``` fast-json-stable-stringify x 17,189 ops/sec ±1.43% (83 runs sampled) json-stable-stringify x 13,634 ops/sec ±1.39% (85 runs sampled) fast-stable-stringify x 20,212 ops/sec ±1.20% (84 runs sampled) faster-stable-stringify x 15,549 ops/sec ±1.12% (84 runs sampled) The fastest is fast-stable-stringify ``` ## Enterprise support fast-json-stable-stringify package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-fast-json-stable-stringify?utm_source=npm-fast-json-stable-stringify&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues. # license [MIT](https://github.com/epoberezkin/fast-json-stable-stringify/blob/master/LICENSE) Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor). This package exports standard `inherits` from node.js `util` module in node environment, but also provides alternative browser-friendly implementation through [browser field](https://gist.github.com/shtylman/4339901). Alternative implementation is a literal copy of standard one located in standalone module to avoid requiring of `util`. It also has a shim for old browsers with no `Object.create` support. While keeping you sure you are using standard `inherits` implementation in node.js environment, it allows bundlers such as [browserify](https://github.com/substack/node-browserify) to not include full `util` package to your client code if all you need is just `inherits` function. It worth, because browser shim for `util` package is large and `inherits` is often the single function you need from it. It's recommended to use this package instead of `require('util').inherits` for any code that has chances to be used not only in node.js but in browser too. 
## usage ```js var inherits = require('inherits'); // then use exactly as the standard one ``` ## note on version ~1.0 Version ~1.0 had completely different motivation and is not compatible neither with 2.0 nor with standard node.js `inherits`. If you are using version ~1.0 and planning to switch to ~2.0, be careful: * new version uses `super_` instead of `super` for referencing superclass * new version overwrites current prototype while old one preserves any existing fields on it # lodash.clonedeep v4.5.0 The [lodash](https://lodash.com/) method `_.cloneDeep` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.clonedeep ``` In Node.js: ```js var cloneDeep = require('lodash.clonedeep'); ``` See the [documentation](https://lodash.com/docs#cloneDeep) or [package source](https://github.com/lodash/lodash/blob/4.5.0-npm-packages/lodash.clonedeep) for more details. iMurmurHash.js ============== An incremental implementation of the MurmurHash3 (32-bit) hashing algorithm for JavaScript based on [Gary Court's implementation](https://github.com/garycourt/murmurhash-js) with [kazuyukitanimura's modifications](https://github.com/kazuyukitanimura/murmurhash-js). This version works significantly faster than the non-incremental version if you need to hash many small strings into a single hash, since string concatenation (to build the single string to pass the non-incremental version) is fairly costly. In one case tested, using the incremental version was about 50% faster than concatenating 5-10 strings and then hashing. Installation ------------ To use iMurmurHash in the browser, [download the latest version](https://raw.github.com/jensyt/imurmurhash-js/master/imurmurhash.min.js) and include it as a script on your site. ```html <script type="text/javascript" src="/scripts/imurmurhash.min.js"></script> <script> // Your code here, access iMurmurHash using the global object MurmurHash3 </script> ``` --- To use iMurmurHash in Node.js, install the module using NPM: ```bash npm install imurmurhash ``` Then simply include it in your scripts: ```javascript MurmurHash3 = require('imurmurhash'); ``` Quick Example ------------- ```javascript // Create the initial hash var hashState = MurmurHash3('string'); // Incrementally add text hashState.hash('more strings'); hashState.hash('even more strings'); // All calls can be chained if desired hashState.hash('and').hash('some').hash('more'); // Get a result hashState.result(); // returns 0xe4ccfe6b ``` Functions --------- ### MurmurHash3 ([string], [seed]) Get a hash state object, optionally initialized with the given _string_ and _seed_. _Seed_ must be a positive integer if provided. Calling this function without the `new` keyword will return a cached state object that has been reset. This is safe to use as long as the object is only used from a single thread and no other hashes are created while operating on this one. If this constraint cannot be met, you can use `new` to create a new state object. For example: ```javascript // Use the cached object, calling the function again will return the same // object (but reset, so the current state would be lost) hashState = MurmurHash3(); ... // Create a new object that can be safely used however you wish. Calling the // function again will simply return a new state object, and no state loss // will occur, at the cost of creating more objects. 
hashState = new MurmurHash3(); ``` Both methods can be mixed however you like if you have different use cases. --- ### MurmurHash3.prototype.hash (string) Incrementally add _string_ to the hash. This can be called as many times as you want for the hash state object, including after a call to `result()`. Returns `this` so calls can be chained. --- ### MurmurHash3.prototype.result () Get the result of the hash as a 32-bit positive integer. This performs the tail and finalizer portions of the algorithm, but does not store the result in the state object. This means that it is perfectly safe to get results and then continue adding strings via `hash`. ```javascript // Do the whole string at once MurmurHash3('this is a test string').result(); // 0x70529328 // Do part of the string, get a result, then the other part var m = MurmurHash3('this is a'); m.result(); // 0xbfc4f834 m.hash(' test string').result(); // 0x70529328 (same as above) ``` --- ### MurmurHash3.prototype.reset ([seed]) Reset the state object for reuse, optionally using the given _seed_ (defaults to 0 like the constructor). Returns `this` so calls can be chained. --- License (MIT) ------------- Copyright (c) 2013 Gary Court, Jens Taylor Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. [![Build Status](https://travis-ci.org/isaacs/rimraf.svg?branch=master)](https://travis-ci.org/isaacs/rimraf) [![Dependency Status](https://david-dm.org/isaacs/rimraf.svg)](https://david-dm.org/isaacs/rimraf) [![devDependency Status](https://david-dm.org/isaacs/rimraf/dev-status.svg)](https://david-dm.org/isaacs/rimraf#info=devDependencies) The [UNIX command](http://en.wikipedia.org/wiki/Rm_(Unix)) `rm -rf` for node. Install with `npm install rimraf`, or just drop rimraf.js somewhere. ## API `rimraf(f, [opts], callback)` The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with `opts.disableGlob` (defaults to `false`). This might be handy, for instance, if you have filenames that contain globbing wildcard characters. The callback will be called with an error if there is one. Certain errors are handled for you: * Windows: `EBUSY` and `ENOTEMPTY` - rimraf will back off a maximum of `opts.maxBusyTries` times before giving up, adding 100ms of wait between each attempt. The default `maxBusyTries` is 3. * `ENOENT` - If the file doesn't exist, rimraf will return successfully, since your desired outcome is already the case. 
* `EMFILE` - Since `readdir` requires opening a file descriptor, it's possible to hit `EMFILE` if too many file descriptors are in use. In the sync case, there's nothing to be done for this. But in the async case, rimraf will gradually back off with timeouts up to `opts.emfileWait` ms, which defaults to 1000. ## options * unlink, chmod, stat, lstat, rmdir, readdir, unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync In order to use a custom file system library, you can override specific fs functions on the options object. If any of these functions are present on the options object, then the supplied function will be used instead of the default fs method. Sync methods are only relevant for `rimraf.sync()`, of course. For example: ```javascript var myCustomFS = require('some-custom-fs') rimraf('some-thing', myCustomFS, callback) ``` * maxBusyTries If an `EBUSY`, `ENOTEMPTY`, or `EPERM` error code is encountered on Windows systems, then rimraf will retry with a linear backoff wait of 100ms longer on each try. The default maxBusyTries is 3. Only relevant for async usage. * emfileWait If an `EMFILE` error is encountered, then rimraf will retry repeatedly with a linear backoff of 1ms longer on each try, until the timeout counter hits this max. The default limit is 1000. If you repeatedly encounter `EMFILE` errors, then consider using [graceful-fs](http://npm.im/graceful-fs) in your program. Only relevant for async usage. * glob Set to `false` to disable [glob](http://npm.im/glob) pattern matching. Set to an object to pass options to the glob module. The default glob options are `{ nosort: true, silent: true }`. Glob version 6 is used in this module. Relevant for both sync and async usage. * disableGlob Set to any non-falsey value to disable globbing entirely. (Equivalent to setting `glob: false`.) ## rimraf.sync It can remove stuff synchronously, too. But that's not so good. Use the async API. It's better. ## CLI If installed with `npm install rimraf -g` it can be used as a global command `rimraf <path> [<path> ...]` which is useful for cross platform support. ## mkdirp If you need to create a directory recursively, check out [mkdirp](https://github.com/substack/node-mkdirp). # json-schema-traverse Traverse JSON Schema passing each schema object to callback [![build](https://github.com/epoberezkin/json-schema-traverse/workflows/build/badge.svg)](https://github.com/epoberezkin/json-schema-traverse/actions?query=workflow%3Abuild) [![npm](https://img.shields.io/npm/v/json-schema-traverse)](https://www.npmjs.com/package/json-schema-traverse) [![coverage](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master) ## Install ``` npm install json-schema-traverse ``` ## Usage ```javascript const traverse = require('json-schema-traverse'); const schema = { properties: { foo: {type: 'string'}, bar: {type: 'integer'} } }; traverse(schema, {cb}); // cb is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // Or: traverse(schema, {cb: {pre, post}}); // pre is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // // post is called 3 times with: // 1. {type: 'string'} // 2. {type: 'integer'} // 3. root schema ``` Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. 
Schema references ($ref) are not resolved; they are passed as is. Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed.

Callback is passed these parameters:

- _schema_: the current schema object
- _JSON pointer_: from the root schema to the current schema object
- _root schema_: the schema object passed to `traverse`
- _parent JSON pointer_: from the root schema to the parent schema object (see below)
- _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.)
- _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema
- _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'`

## Traverse objects in all unknown keywords

```javascript
const traverse = require('json-schema-traverse');
const schema = {
  mySchema: {
    minimum: 1,
    maximum: 2
  }
};

traverse(schema, {allKeys: true, cb});
// cb is called 2 times with:
// 1. root schema
// 2. mySchema
```

Without option `allKeys: true` callback will be called only with root schema.

## Enterprise support

json-schema-traverse package is a part of [Tidelift enterprise subscription](https://tidelift.com/subscription/pkg/npm-json-schema-traverse?utm_source=npm-json-schema-traverse&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) - it provides a centralised commercial support to open-source software users, in addition to the support provided by software maintainers.

## Security contact

To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues.

## License

[MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE)

# <img src="./logo.png" alt="bn.js" width="160" height="160" />

> BigNum in pure javascript

[![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js)

## Install

`npm install --save bn.js`

## Usage

```js
const BN = require('bn.js');

var a = new BN('dead', 16);
var b = new BN('101010', 2);

var res = a.add(b);
console.log(res.toString(10));  // 57047
```

**Note**: decimals are not supported in this library.

## Notation

### Prefixes

There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name:

* `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs
* `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like `mod()`. In such cases if the result will be negative - modulo will be added to the result to make it positive

### Postfixes

* `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported.
* `rn` - both argument and return value of the function are plain JavaScript Numbers. Decimals are not supported.
### Examples * `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a` * `a.umod(b)` - reduce `a` modulo `b`, returning positive value * `a.iushln(13)` - shift bits of `a` left by 13 ## Instructions Prefixes/postfixes are put in parens at the of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian). ### Utilities * `a.clone()` - clone number * `a.toString(base, length)` - convert to base-string and pad with zeroes * `a.toNumber()` - convert to Javascript Number (limited to 53 bits) * `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`) * `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding * `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array` * `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)` * `a.bitLength()` - get number of bits occupied * `a.zeroBits()` - return number of less-significant consequent zero bits (example: `1010000` has 4 zero bits) * `a.byteLength()` - return number of bytes occupied * `a.isNeg()` - true if the number is negative * `a.isEven()` - no comments * `a.isOdd()` - no comments * `a.isZero()` - no comments * `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`) * `a.lt(b)` - `a` less than `b` (`n`) * `a.lte(b)` - `a` less than or equals `b` (`n`) * `a.gt(b)` - `a` greater than `b` (`n`) * `a.gte(b)` - `a` greater than or equals `b` (`n`) * `a.eq(b)` - `a` equals `b` (`n`) * `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width * `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width * `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance * `BN.max(a, b)` - return `a` if `a` bigger than `b` * `BN.min(a, b)` - return `a` if `a` less than `b` ### Arithmetics * `a.neg()` - negate sign (`i`) * `a.abs()` - absolute value (`i`) * `a.add(b)` - addition (`i`, `n`, `in`) * `a.sub(b)` - subtraction (`i`, `n`, `in`) * `a.mul(b)` - multiply (`i`, `n`, `in`) * `a.sqr()` - square (`i`) * `a.pow(b)` - raise `a` to the power of `b` * `a.div(b)` - divide (`divn`, `idivn`) * `a.mod(b)` - reduct (`u`, `n`) (but no `umodn`) * `a.divmod(b)` - quotient and modulus obtained by dividing * `a.divRound(b)` - rounded division ### Bit operations * `a.or(b)` - or (`i`, `u`, `iu`) * `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future) * `a.xor(b)` - xor (`i`, `u`, `iu`) * `a.setn(b, value)` - set specified bit to `value` * `a.shln(b)` - shift left (`i`, `u`, `iu`) * `a.shrn(b)` - shift right (`i`, `u`, `iu`) * `a.testn(b)` - test if specified bit is set * `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`) * `a.bincn(b)` - add `1 << b` to the number * `a.notn(w)` - not (for the width specified by `w`) (`i`) ### Reduction * `a.gcd(b)` - GCD * `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., gcd: ... }`) * `a.invm(b)` - inverse `a` modulo `b` ## Fast reduction When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using special algorithm for [Mersenne Prime][1]. 
### Reduction context To enable this tricks one should create a reduction context: ```js var red = BN.red(num); ``` where `num` is just a BN instance. Or: ```js var red = BN.red(primeName); ``` Where `primeName` is either of these [Mersenne Primes][1]: * `'k256'` * `'p224'` * `'p192'` * `'p25519'` Or: ```js var red = BN.mont(num); ``` To reduce numbers with [Montgomery trick][0]. `.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`. ### Converting numbers Before performing anything in reduction context - numbers should be converted to it. Usually, this means that one should: * Convert inputs to reducted ones * Operate on them in reduction context * Convert outputs back from the reduction context Here is how one may convert numbers to `red`: ```js var redA = a.toRed(red); ``` Where `red` is a reduction context created using instructions above Here is how to convert them back: ```js var a = redA.fromRed(); ``` ### Red instructions Most of the instructions from the very start of this readme have their counterparts in red context: * `a.redAdd(b)`, `a.redIAdd(b)` * `a.redSub(b)`, `a.redISub(b)` * `a.redShl(num)` * `a.redMul(b)`, `a.redIMul(b)` * `a.redSqr()`, `a.redISqr()` * `a.redSqrt()` - square root modulo reduction context's prime * `a.redInvm()` - modular inverse of the number * `a.redNeg()` * `a.redPow(b)` - modular exponentiation ### Number Size Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers. ## LICENSE This software is licensed under the MIT License. [0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication [1]: https://en.wikipedia.org/wiki/Mersenne_prime # axios // adapters The modules under `adapters/` are modules that handle dispatching a request and settling a returned `Promise` once a response is received. ## Example ```js var settle = require('./../core/settle'); module.exports = function myAdapter(config) { // At this point: // - config has been merged with defaults // - request transformers have already run // - request interceptors have already run // Make the request using config provided // Upon response settle the Promise return new Promise(function(resolve, reject) { var response = { data: responseData, status: request.status, statusText: request.statusText, headers: responseHeaders, config: config, request: request }; settle(resolve, reject, response); // From here: // - response transformers will run // - response interceptors will run }); } ``` # isobject [![NPM version](https://img.shields.io/npm/v/isobject.svg?style=flat)](https://www.npmjs.com/package/isobject) [![NPM downloads](https://img.shields.io/npm/dm/isobject.svg?style=flat)](https://npmjs.org/package/isobject) [![Build Status](https://img.shields.io/travis/jonschlinkert/isobject.svg?style=flat)](https://travis-ci.org/jonschlinkert/isobject) Returns true if the value is an object and not an array or null. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install isobject --save ``` Use [is-plain-object](https://github.com/jonschlinkert/is-plain-object) if you want only objects that are created by the `Object` constructor. 
## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install isobject ``` Install with [bower](http://bower.io/) ```sh $ bower install isobject ``` ## Usage ```js var isObject = require('isobject'); ``` **True** All of the following return `true`: ```js isObject({}); isObject(Object.create({})); isObject(Object.create(Object.prototype)); isObject(Object.create(null)); isObject({}); isObject(new Foo); isObject(/foo/); ``` **False** All of the following return `false`: ```js isObject(); isObject(function () {}); isObject(1); isObject([]); isObject(undefined); isObject(null); ``` ## Related projects You might also be interested in these projects: [merge-deep](https://www.npmjs.com/package/merge-deep): Recursively merge values in a javascript object. | [homepage](https://github.com/jonschlinkert/merge-deep) * [extend-shallow](https://www.npmjs.com/package/extend-shallow): Extend an object with the properties of additional objects. node.js/javascript util. | [homepage](https://github.com/jonschlinkert/extend-shallow) * [is-plain-object](https://www.npmjs.com/package/is-plain-object): Returns true if an object was created by the `Object` constructor. | [homepage](https://github.com/jonschlinkert/is-plain-object) * [kind-of](https://www.npmjs.com/package/kind-of): Get the native type of a value. | [homepage](https://github.com/jonschlinkert/kind-of) ## Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](https://github.com/jonschlinkert/isobject/issues/new). ## Building docs Generate readme and API documentation with [verb](https://github.com/verbose/verb): ```sh $ npm install verb && npm run docs ``` Or, if [verb](https://github.com/verbose/verb) is installed globally: ```sh $ verb ``` ## Running tests Install dev dependencies: ```sh $ npm install -d && npm test ``` ## Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](http://twitter.com/jonschlinkert) ## License Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT license](https://github.com/jonschlinkert/isobject/blob/master/LICENSE). *** _This file was generated by [verb](https://github.com/verbose/verb), v0.9.0, on April 25, 2016._ # ansi-colors [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/ansi-colors.svg?style=flat)](https://www.npmjs.com/package/ansi-colors) [![NPM monthly downloads](https://img.shields.io/npm/dm/ansi-colors.svg?style=flat)](https://npmjs.org/package/ansi-colors) [![NPM total downloads](https://img.shields.io/npm/dt/ansi-colors.svg?style=flat)](https://npmjs.org/package/ansi-colors) [![Linux Build Status](https://img.shields.io/travis/doowb/ansi-colors.svg?style=flat&label=Travis)](https://travis-ci.org/doowb/ansi-colors) > Easily add ANSI colors to your text and symbols in the terminal. A faster drop-in replacement for chalk, kleur and turbocolor (without the dependencies and rendering bugs). Please consider following this project's author, [Brian Woodward](https://github.com/doowb), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save ansi-colors ``` ![image](https://user-images.githubusercontent.com/383994/39635445-8a98a3a6-4f8b-11e8-89c1-068c45d4fff8.png) ## Why use this? 
ansi-colors is _the fastest Node.js library for terminal styling_. A more performant drop-in replacement for chalk, with no dependencies.

* _Blazing fast_ - Fastest terminal styling library in node.js, 10-20x faster than chalk!
* _Drop-in replacement_ for [chalk](https://github.com/chalk/chalk).
* _No dependencies_ (Chalk has 7 dependencies in its tree!)
* _Safe_ - Does not modify the `String.prototype` like [colors](https://github.com/Marak/colors.js).
* Supports [nested colors](#nested-colors), **and does not have the [nested styling bug](#nested-styling-bug) that is present in [colorette](https://github.com/jorgebucaran/colorette), [chalk](https://github.com/chalk/chalk), and [kleur](https://github.com/lukeed/kleur)**.
* Supports [chained colors](#chained-colors).
* [Toggle color support](#toggle-color-support) on or off.

## Usage

```js
const c = require('ansi-colors');

console.log(c.red('This is a red string!'));
console.log(c.green('This is a green string!'));
console.log(c.cyan('This is a cyan string!'));
console.log(c.yellow('This is a yellow string!'));
```

![image](https://user-images.githubusercontent.com/383994/39653848-a38e67da-4fc0-11e8-89ae-98c65ebe9dcf.png)

## Chained colors

```js
console.log(c.bold.red('this is a bold red message'));
console.log(c.bold.yellow.italic('this is a bold yellow italicized message'));
console.log(c.green.bold.underline('this is a bold green underlined message'));
```

![image](https://user-images.githubusercontent.com/383994/39635780-7617246a-4f8c-11e8-89e9-05216cc54e38.png)

## Nested colors

```js
console.log(c.yellow(`foo ${c.red.bold('red')} bar ${c.cyan('cyan')} baz`));
```

![image](https://user-images.githubusercontent.com/383994/39635817-8ed93d44-4f8c-11e8-8afd-8c3ea35f5fbe.png)

### Nested styling bug

`ansi-colors` does not have the nested styling bug found in [colorette](https://github.com/jorgebucaran/colorette), [chalk](https://github.com/chalk/chalk), and [kleur](https://github.com/lukeed/kleur).

```js
const { bold, red } = require('ansi-colors');
console.log(bold(`foo ${red.dim('bar')} baz`));

const colorette = require('colorette');
console.log(colorette.bold(`foo ${colorette.red(colorette.dim('bar'))} baz`));

const kleur = require('kleur');
console.log(kleur.bold(`foo ${kleur.red.dim('bar')} baz`));

const chalk = require('chalk');
console.log(chalk.bold(`foo ${chalk.red.dim('bar')} baz`));
```

**Results in the following** (sans icons and labels)

![image](https://user-images.githubusercontent.com/383994/47280326-d2ee0580-d5a3-11e8-9611-ea6010f0a253.png)

## Toggle color support

Easily enable/disable colors.

```js
const c = require('ansi-colors');

// disable colors manually
c.enabled = false;

// or use a library to automatically detect support
c.enabled = require('color-support').hasBasic;

console.log(c.red('I will only be colored red if the terminal supports colors'));
```

## Strip ANSI codes

Use the `.unstyle` method to strip ANSI codes from a string.

```js
console.log(c.unstyle(c.blue.bold('foo bar baz')));
//=> 'foo bar baz'
```

## Available styles

**Note** that bright and bright-background colors are not always supported.
| Colors | Background Colors | Bright Colors | Bright Background Colors | | ------- | ----------------- | ------------- | ------------------------ | | black | bgBlack | blackBright | bgBlackBright | | red | bgRed | redBright | bgRedBright | | green | bgGreen | greenBright | bgGreenBright | | yellow | bgYellow | yellowBright | bgYellowBright | | blue | bgBlue | blueBright | bgBlueBright | | magenta | bgMagenta | magentaBright | bgMagentaBright | | cyan | bgCyan | cyanBright | bgCyanBright | | white | bgWhite | whiteBright | bgWhiteBright | | gray | | | | | grey | | | | _(`gray` is the U.S. spelling, `grey` is more commonly used in the Canada and U.K.)_ ### Style modifiers * dim * **bold** * hidden * _italic_ * underline * inverse * ~~strikethrough~~ * reset ## Aliases Create custom aliases for styles. ```js const colors = require('ansi-colors'); colors.alias('primary', colors.yellow); colors.alias('secondary', colors.bold); console.log(colors.primary.secondary('Foo')); ``` ## Themes A theme is an object of custom aliases. ```js const colors = require('ansi-colors'); colors.theme({ danger: colors.red, dark: colors.dim.gray, disabled: colors.gray, em: colors.italic, heading: colors.bold.underline, info: colors.cyan, muted: colors.dim, primary: colors.blue, strong: colors.bold, success: colors.green, underline: colors.underline, warning: colors.yellow }); // Now, we can use our custom styles alongside the built-in styles! console.log(colors.danger.strong.em('Error!')); console.log(colors.warning('Heads up!')); console.log(colors.info('Did you know...')); console.log(colors.success.bold('It worked!')); ``` ## Performance **Libraries tested** * ansi-colors v3.0.4 * chalk v2.4.1 ### Mac > MacBook Pro, Intel Core i7, 2.3 GHz, 16 GB. **Load time** Time it takes to load the first time `require()` is called: * ansi-colors - `1.915ms` * chalk - `12.437ms` **Benchmarks** ``` # All Colors ansi-colors x 173,851 ops/sec ±0.42% (91 runs sampled) chalk x 9,944 ops/sec ±2.53% (81 runs sampled))) # Chained colors ansi-colors x 20,791 ops/sec ±0.60% (88 runs sampled) chalk x 2,111 ops/sec ±2.34% (83 runs sampled) # Nested colors ansi-colors x 59,304 ops/sec ±0.98% (92 runs sampled) chalk x 4,590 ops/sec ±2.08% (82 runs sampled) ``` ### Windows > Windows 10, Intel Core i7-7700k CPU @ 4.2 GHz, 32 GB **Load time** Time it takes to load the first time `require()` is called: * ansi-colors - `1.494ms` * chalk - `11.523ms` **Benchmarks** ``` # All Colors ansi-colors x 193,088 ops/sec ±0.51% (95 runs sampled)) chalk x 9,612 ops/sec ±3.31% (77 runs sampled))) # Chained colors ansi-colors x 26,093 ops/sec ±1.13% (94 runs sampled) chalk x 2,267 ops/sec ±2.88% (80 runs sampled)) # Nested colors ansi-colors x 67,747 ops/sec ±0.49% (93 runs sampled) chalk x 4,446 ops/sec ±3.01% (82 runs sampled)) ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. 
Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_

To generate the readme, run the following command:

```sh
$ npm install -g verbose/verb#dev verb-generate-readme && verb
```

</details>

### Related projects

You might also be interested in these projects:

* [ansi-wrap](https://www.npmjs.com/package/ansi-wrap): Create ansi colors by passing the open and close codes. | [homepage](https://github.com/jonschlinkert/ansi-wrap "Create ansi colors by passing the open and close codes.")
* [strip-color](https://www.npmjs.com/package/strip-color): Strip ANSI color codes from a string. No dependencies. | [homepage](https://github.com/jonschlinkert/strip-color "Strip ANSI color codes from a string. No dependencies.")

### Contributors

| **Commits** | **Contributor** |
| --- | --- |
| 48 | [jonschlinkert](https://github.com/jonschlinkert) |
| 42 | [doowb](https://github.com/doowb) |
| 6 | [lukeed](https://github.com/lukeed) |
| 2 | [Silic0nS0ldier](https://github.com/Silic0nS0ldier) |
| 1 | [dwieeb](https://github.com/dwieeb) |
| 1 | [jorgebucaran](https://github.com/jorgebucaran) |
| 1 | [madhavarshney](https://github.com/madhavarshney) |
| 1 | [chapterjason](https://github.com/chapterjason) |

### Author

**Brian Woodward**

* [GitHub Profile](https://github.com/doowb)
* [Twitter Profile](https://twitter.com/doowb)
* [LinkedIn Profile](https://linkedin.com/in/woodwardbrian)

### License

Copyright © 2019, [Brian Woodward](https://github.com/doowb).
Released under the [MIT License](LICENSE).

***

_This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on July 01, 2019._

# Acorn-JSX

[![Build Status](https://travis-ci.org/acornjs/acorn-jsx.svg?branch=master)](https://travis-ci.org/acornjs/acorn-jsx)
[![NPM version](https://img.shields.io/npm/v/acorn-jsx.svg)](https://www.npmjs.org/package/acorn-jsx)

This is a plugin for [Acorn](http://marijnhaverbeke.nl/acorn/) - a tiny, fast JavaScript parser, written completely in JavaScript.

It was created as an experimental alternative: a faster [React.js JSX](http://facebook.github.io/react/docs/jsx-in-depth.html) parser. Later, it replaced the [official parser](https://github.com/facebookarchive/esprima) and these days is used by many prominent development tools.

## Transpiler

Please note that this tool only parses source code to a JSX AST, which is useful for various language tools and services. If you want to transpile your code to regular ES5-compliant JavaScript with source maps, check out the [Babel](https://babeljs.io/) and [Buble](https://buble.surge.sh/) transpilers, which use `acorn-jsx` under the hood.

## Usage

Requiring this module provides you with an Acorn plugin that you can use like this:

```javascript
var acorn = require("acorn");
var jsx = require("acorn-jsx");
acorn.Parser.extend(jsx()).parse("my(<jsx/>, 'code');");
```

Note that the official spec doesn't support a mix of XML namespaces and object-style access in tag names (#27) like in `<namespace:Object.Property />`, so it was deprecated in `[email protected]`. If you still want to opt in to support for such constructions, you can pass the following option:

```javascript
acorn.Parser.extend(jsx({ allowNamespacedObjects: true }))
```

Also, since most apps use the pure React transformer, a new option was introduced that allows you to prohibit namespaces completely:

```javascript
acorn.Parser.extend(jsx({ allowNamespaces: false }))
```

Note that by default `allowNamespaces` is enabled for spec compliance.
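As a rough sketch of what the extended parser returns, JSX expressions show up as `JSXElement` nodes inside an otherwise standard ESTree program (the `ecmaVersion` value below is just an illustrative choice):

```javascript
const acorn = require("acorn");
const jsx = require("acorn-jsx");

// Extend the parser once and reuse it
const JSXParser = acorn.Parser.extend(jsx());

const ast = JSXParser.parse("const el = <div id=\"app\">hi</div>;", { ecmaVersion: 2020 });

// The initializer of the declaration is a JSXElement node
const decl = ast.body[0].declarations[0];
console.log(decl.init.type);                     //=> "JSXElement"
console.log(decl.init.openingElement.name.name); //=> "div"
```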
## License

This plugin is issued under the [MIT license](./LICENSE).

# eslint-utils

[![npm version](https://img.shields.io/npm/v/eslint-utils.svg)](https://www.npmjs.com/package/eslint-utils)
[![Downloads/month](https://img.shields.io/npm/dm/eslint-utils.svg)](http://www.npmtrends.com/eslint-utils)
[![Build Status](https://github.com/mysticatea/eslint-utils/workflows/CI/badge.svg)](https://github.com/mysticatea/eslint-utils/actions)
[![Coverage Status](https://codecov.io/gh/mysticatea/eslint-utils/branch/master/graph/badge.svg)](https://codecov.io/gh/mysticatea/eslint-utils)
[![Dependency Status](https://david-dm.org/mysticatea/eslint-utils.svg)](https://david-dm.org/mysticatea/eslint-utils)

## 🏁 Goal

This package provides utility functions and classes for making ESLint custom rules.

For example:

- [getStaticValue](https://eslint-utils.mysticatea.dev/api/ast-utils.html#getstaticvalue) evaluates the static value of an AST node.
- [ReferenceTracker](https://eslint-utils.mysticatea.dev/api/scope-utils.html#referencetracker-class) checks references to the members of modules/globals, handling assignments and destructuring.

## 📖 Usage

See [documentation](https://eslint-utils.mysticatea.dev/).

## 📰 Changelog

See [releases](https://github.com/mysticatea/eslint-utils/releases).

## ❤️ Contributing

Contributions are welcome! Please use GitHub's Issues/PRs.

### Development Tools

- `npm test` runs tests and measures coverage.
- `npm run clean` removes the coverage result of the `npm test` command.
- `npm run coverage` shows the coverage result of the last `npm test` command.
- `npm run lint` runs ESLint.
- `npm run watch` runs tests on each file change.

# word-wrap

[![NPM version](https://img.shields.io/npm/v/word-wrap.svg?style=flat)](https://www.npmjs.com/package/word-wrap)
[![NPM monthly downloads](https://img.shields.io/npm/dm/word-wrap.svg?style=flat)](https://npmjs.org/package/word-wrap)
[![NPM total downloads](https://img.shields.io/npm/dt/word-wrap.svg?style=flat)](https://npmjs.org/package/word-wrap)
[![Linux Build Status](https://img.shields.io/travis/jonschlinkert/word-wrap.svg?style=flat&label=Travis)](https://travis-ci.org/jonschlinkert/word-wrap)

> Wrap words to a specified length.

## Install

Install with [npm](https://www.npmjs.com/):

```sh
$ npm install --save word-wrap
```

## Usage

```js
var wrap = require('word-wrap');

wrap('Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.');
```

Results in (wrapped to the default width of 50 with the default two-space indent):

```
  Lorem ipsum dolor sit amet, consectetur adipiscing
  elit, sed do eiusmod tempor incididunt ut labore
  et dolore magna aliqua. Ut enim ad minim veniam,
  quis nostrud exercitation ullamco laboris nisi ut
  aliquip ex ea commodo consequat.
```

## Options

![image](https://cloud.githubusercontent.com/assets/383994/6543728/7a381c08-c4f6-11e4-8b7d-b6ba197569c9.png)

### options.width

Type: `Number`

Default: `50`

The width of the text before wrapping to a new line.

**Example:**

```js
wrap(str, {width: 60});
```

### options.indent

Type: `String`

Default: `'  '` (two spaces)

The string to use at the beginning of each line.

**Example:**

```js
wrap(str, {indent: ' '});
```

### options.newline

Type: `String`

Default: `\n`

The string to use at the end of each line.

**Example:**

```js
wrap(str, {newline: '\n\n'});
```

### options.escape

Type: `function`

Default: `function(str){return str;}`

An escape function to run on each line after splitting them.
**Example:** ```js var xmlescape = require('xml-escape'); wrap(str, { escape: function(string){ return xmlescape(string); } }); ``` ### options.trim Type: `Boolean` Default: `false` Trim trailing whitespace from the returned string. This option is included since `.trim()` would also strip the leading indentation from the first line. **Example:** ```js wrap(str, {trim: true}); ``` ### options.cut Type: `Boolean` Default: `false` Break a word between any two letters when the word is longer than the specified width. **Example:** ```js wrap(str, {cut: true}); ``` ## About ### Related projects * [common-words](https://www.npmjs.com/package/common-words): Updated list (JSON) of the 100 most common words in the English language. Useful for… [more](https://github.com/jonschlinkert/common-words) | [homepage](https://github.com/jonschlinkert/common-words "Updated list (JSON) of the 100 most common words in the English language. Useful for excluding these words from arrays.") * [shuffle-words](https://www.npmjs.com/package/shuffle-words): Shuffle the words in a string and optionally the letters in each word using the… [more](https://github.com/jonschlinkert/shuffle-words) | [homepage](https://github.com/jonschlinkert/shuffle-words "Shuffle the words in a string and optionally the letters in each word using the Fisher-Yates algorithm. Useful for creating test fixtures, benchmarking samples, etc.") * [unique-words](https://www.npmjs.com/package/unique-words): Return the unique words in a string or array. | [homepage](https://github.com/jonschlinkert/unique-words "Return the unique words in a string or array.") * [wordcount](https://www.npmjs.com/package/wordcount): Count the words in a string. Support for english, CJK and Cyrillic. | [homepage](https://github.com/jonschlinkert/wordcount "Count the words in a string. Support for english, CJK and Cyrillic.") ### Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Contributors | **Commits** | **Contributor** | | --- | --- | | 43 | [jonschlinkert](https://github.com/jonschlinkert) | | 2 | [lordvlad](https://github.com/lordvlad) | | 2 | [hildjj](https://github.com/hildjj) | | 1 | [danilosampaio](https://github.com/danilosampaio) | | 1 | [2fd](https://github.com/2fd) | | 1 | [toddself](https://github.com/toddself) | | 1 | [wolfgang42](https://github.com/wolfgang42) | | 1 | [zachhale](https://github.com/zachhale) | ### Building docs _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` ### Running tests Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` ### Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](https://twitter.com/jonschlinkert) ### License Copyright © 2017, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). 
*** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.6.0, on June 02, 2017._ # fs.realpath A backwards-compatible fs.realpath for Node v6 and above In Node v6, the JavaScript implementation of fs.realpath was replaced with a faster (but less resilient) native implementation. That raises new and platform-specific errors and cannot handle long or excessively symlink-looping paths. This module handles those cases by detecting the new errors and falling back to the JavaScript implementation. On versions of Node prior to v6, it has no effect. ## USAGE ```js var rp = require('fs.realpath') // async version rp.realpath(someLongAndLoopingPath, function (er, real) { // the ELOOP was handled, but it was a bit slower }) // sync version var real = rp.realpathSync(someLongAndLoopingPath) // monkeypatch at your own risk! // This replaces the fs.realpath/fs.realpathSync builtins rp.monkeypatch() // un-do the monkeypatching rp.unmonkeypatch() ``` semver(1) -- The semantic versioner for npm =========================================== ## Install ```bash npm install semver ```` ## Usage As a node module: ```js const semver = require('semver') semver.valid('1.2.3') // '1.2.3' semver.valid('a.b.c') // null semver.clean(' =v1.2.3 ') // '1.2.3' semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true semver.gt('1.2.3', '9.8.7') // false semver.lt('1.2.3', '9.8.7') // true semver.minVersion('>=1.0.0') // '1.0.0' semver.valid(semver.coerce('v2')) // '2.0.0' semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7' ``` You can also just load the module for the function that you care about, if you'd like to minimize your footprint. ```js // load the whole API at once in a single object const semver = require('semver') // or just load the bits you need // all of them listed here, just pick and choose what you want // classes const SemVer = require('semver/classes/semver') const Comparator = require('semver/classes/comparator') const Range = require('semver/classes/range') // functions for working with versions const semverParse = require('semver/functions/parse') const semverValid = require('semver/functions/valid') const semverClean = require('semver/functions/clean') const semverInc = require('semver/functions/inc') const semverDiff = require('semver/functions/diff') const semverMajor = require('semver/functions/major') const semverMinor = require('semver/functions/minor') const semverPatch = require('semver/functions/patch') const semverPrerelease = require('semver/functions/prerelease') const semverCompare = require('semver/functions/compare') const semverRcompare = require('semver/functions/rcompare') const semverCompareLoose = require('semver/functions/compare-loose') const semverCompareBuild = require('semver/functions/compare-build') const semverSort = require('semver/functions/sort') const semverRsort = require('semver/functions/rsort') // low-level comparators between versions const semverGt = require('semver/functions/gt') const semverLt = require('semver/functions/lt') const semverEq = require('semver/functions/eq') const semverNeq = require('semver/functions/neq') const semverGte = require('semver/functions/gte') const semverLte = require('semver/functions/lte') const semverCmp = require('semver/functions/cmp') const semverCoerce = require('semver/functions/coerce') // working with ranges const semverSatisfies = require('semver/functions/satisfies') const semverMaxSatisfying = require('semver/ranges/max-satisfying') const 
semverMinSatisfying = require('semver/ranges/min-satisfying') const semverToComparators = require('semver/ranges/to-comparators') const semverMinVersion = require('semver/ranges/min-version') const semverValidRange = require('semver/ranges/valid') const semverOutside = require('semver/ranges/outside') const semverGtr = require('semver/ranges/gtr') const semverLtr = require('semver/ranges/ltr') const semverIntersects = require('semver/ranges/intersects') const simplifyRange = require('semver/ranges/simplify') const rangeSubset = require('semver/ranges/subset') ``` As a command-line utility: ``` $ semver -h A JavaScript implementation of the https://semver.org/ specification Copyright Isaac Z. Schlueter Usage: semver [options] <version> [<version> [...]] Prints valid versions sorted by SemVer precedence Options: -r --range <range> Print versions that match the specified range. -i --increment [<level>] Increment a version by the specified level. Level can be one of: major, minor, patch, premajor, preminor, prepatch, or prerelease. Default level is 'patch'. Only one version may be specified. --preid <identifier> Identifier to be used to prefix premajor, preminor, prepatch or prerelease version increments. -l --loose Interpret versions and ranges loosely -p --include-prerelease Always include prerelease versions in range matching -c --coerce Coerce a string into SemVer if possible (does not imply --loose) --rtl Coerce version strings right to left --ltr Coerce version strings left to right (default) Program exits successfully if any valid version satisfies all supplied ranges, and prints all satisfying versions. If no satisfying versions are found, then exits failure. Versions are printed in ascending order, so supplying multiple versions to the utility will just sort them. ``` ## Versions A "version" is described by the `v2.0.0` specification found at <https://semver.org/>. A leading `"="` or `"v"` character is stripped off and ignored. ## Ranges A `version range` is a set of `comparators` which specify versions that satisfy the range. A `comparator` is composed of an `operator` and a `version`. The set of primitive `operators` is: * `<` Less than * `<=` Less than or equal to * `>` Greater than * `>=` Greater than or equal to * `=` Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included. For example, the comparator `>=1.2.7` would match the versions `1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` or `1.1.0`. Comparators can be joined by whitespace to form a `comparator set`, which is satisfied by the **intersection** of all of the comparators it includes. A range is composed of one or more comparator sets, joined by `||`. A version matches a range if and only if every comparator in at least one of the `||`-separated comparator sets is satisfied by the version. For example, the range `>=1.2.7 <1.3.0` would match the versions `1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, or `1.1.0`. The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, `1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. ### Prerelease Tags If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then it will only be allowed to satisfy comparator sets if at least one comparator with the same `[major, minor, patch]` tuple also has a prerelease tag. 
For example, the range `>1.2.3-alpha.3` would be allowed to match the version `1.2.3-alpha.7`, but it would *not* be satisfied by `3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater than" `1.2.3-alpha.3` according to the SemVer sort rules. The version range only accepts prerelease tags on the `1.2.3` version. The version `3.4.5` *would* satisfy the range, because it does not have a prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author's design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics. Second, a user who has opted into using a prerelease version has clearly indicated the intent to use *that specific* set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the *next* set of prerelease versions. Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the `includePrerelease` flag on the options object to any [functions](https://github.com/npm/node-semver#functions) that do range matching. #### Prerelease Identifiers The method `.inc` takes an additional `identifier` string argument that will append the value of the string as a prerelease identifier: ```javascript semver.inc('1.2.3', 'prerelease', 'beta') // '1.2.4-beta.0' ``` command-line example: ```bash $ semver 1.2.3 -i prerelease --preid beta 1.2.4-beta.0 ``` Which then can be used to increment further: ```bash $ semver 1.2.4-beta.0 -i prerelease 1.2.4-beta.1 ``` ### Advanced Range Syntax Advanced range syntax desugars to primitive comparators in deterministic ways. Advanced ranges may be combined in the same way as primitive comparators using white space or `||`. #### Hyphen Ranges `X.Y.Z - A.B.C` Specifies an inclusive set. * `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes. * `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts. * `1.2.3 - 2.3` := `>=1.2.3 <2.4.0-0` * `1.2.3 - 2` := `>=1.2.3 <3.0.0-0` #### X-Ranges `1.2.x` `1.X` `1.2.*` `*` Any of `X`, `x`, or `*` may be used to "stand in" for one of the numeric values in the `[major, minor, patch]` tuple. * `*` := `>=0.0.0` (Any version satisfies) * `1.x` := `>=1.0.0 <2.0.0-0` (Matching major version) * `1.2.x` := `>=1.2.0 <1.3.0-0` (Matching major and minor versions) A partial version range is treated as an X-Range, so the special character is in fact optional. * `""` (empty string) := `*` := `>=0.0.0` * `1` := `1.x.x` := `>=1.0.0 <2.0.0-0` * `1.2` := `1.2.x` := `>=1.2.0 <1.3.0-0` #### Tilde Ranges `~1.2.3` `~1.2` `~1` Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not. 
* `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0-0` * `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0-0` (Same as `1.2.x`) * `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0-0` (Same as `1.x`) * `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0-0` * `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0-0` (Same as `0.2.x`) * `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0-0` (Same as `0.x`) * `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0-0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. #### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` Allows changes that do not modify the left-most non-zero element in the `[major, minor, patch]` tuple. In other words, this allows patch and minor updates for versions `1.0.0` and above, patch updates for versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. Many authors treat a `0.x` version as if the `x` were the major "breaking-change" indicator. Caret ranges are ideal when an author may make breaking changes between `0.2.4` and `0.3.0` releases, which is a common practice. However, it presumes that there will *not* be breaking changes between `0.2.4` and `0.2.5`. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices. * `^1.2.3` := `>=1.2.3 <2.0.0-0` * `^0.2.3` := `>=0.2.3 <0.3.0-0` * `^0.0.3` := `>=0.0.3 <0.0.4-0` * `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0-0` Note that prereleases in the `1.2.3` version will be allowed, if they are greater than or equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but `1.2.4-beta.2` would not, because it is a prerelease of a different `[major, minor, patch]` tuple. * `^0.0.3-beta` := `>=0.0.3-beta <0.0.4-0` Note that prereleases in the `0.0.3` version *only* will be allowed, if they are greater than or equal to `beta`. So, `0.0.3-pr.2` would be allowed. When parsing caret ranges, a missing `patch` value desugars to the number `0`, but will allow flexibility within that value, even if the major and minor versions are both `0`. * `^1.2.x` := `>=1.2.0 <2.0.0-0` * `^0.0.x` := `>=0.0.0 <0.1.0-0` * `^0.0` := `>=0.0.0 <0.1.0-0` A missing `minor` and `patch` values will desugar to zero, but also allow flexibility within those values, even if the major version is zero. * `^1.x` := `>=1.0.0 <2.0.0-0` * `^0.x` := `>=0.0.0 <1.0.0-0` ### Range Grammar Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors: ```bnf range-set ::= range ( logical-or range ) * logical-or ::= ( ' ' ) * '||' ( ' ' ) * range ::= hyphen | simple ( ' ' simple ) * | '' hyphen ::= partial ' - ' partial simple ::= primitive | partial | tilde | caret primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )? xr ::= 'x' | 'X' | '*' | nr nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) * tilde ::= '~' partial caret ::= '^' partial qualifier ::= ( '-' pre )? ( '+' build )? pre ::= parts build ::= parts parts ::= part ( '.' part ) * part ::= nr | [-0-9A-Za-z]+ ``` ## Functions All methods and classes take a final `options` object argument. All options in this object are `false` by default. The options supported are: - `loose` Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) 
For backwards compatibility reasons, if the `options` argument is a boolean value instead of an object, it is interpreted to be the `loose` param. - `includePrerelease` Set to suppress the [default behavior](https://github.com/npm/node-semver#prerelease-tags) of excluding prerelease tagged versions from ranges unless they are explicitly opted into. Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse. * `valid(v)`: Return the parsed version, or null if it's not valid. * `inc(v, release)`: Return the version incremented by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if it's not valid * `premajor` in one call will bump the version up to the next major version and down to a prerelease of that major version. `preminor`, and `prepatch` work the same way. * If called from a non-prerelease version, the `prerelease` will work the same as `prepatch`. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it. * `prerelease(v)`: Returns an array of prerelease components, or null if none exist. Example: `prerelease('1.2.3-alpha.1') -> ['alpha', 1]` * `major(v)`: Return the major version number. * `minor(v)`: Return the minor version number. * `patch(v)`: Return the patch version number. * `intersects(r1, r2, loose)`: Return true if the two supplied ranges or comparators intersect. * `parse(v)`: Attempt to parse a string as a semantic version, returning either a `SemVer` object or `null`. ### Comparison * `gt(v1, v2)`: `v1 > v2` * `gte(v1, v2)`: `v1 >= v2` * `lt(v1, v2)`: `v1 < v2` * `lte(v1, v2)`: `v1 <= v2` * `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent, even if they're not the exact same string. You already know how to compare strings. * `neq(v1, v2)`: `v1 != v2` The opposite of `eq`. * `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call the corresponding function above. `"==="` and `"!=="` do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided. * `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if `v2` is greater. Sorts in ascending order if passed to `Array.sort()`. * `rcompare(v1, v2)`: The reverse of compare. Sorts an array of versions in descending order when passed to `Array.sort()`. * `compareBuild(v1, v2)`: The same as `compare` but considers `build` when two versions are equal. Sorts in ascending order if passed to `Array.sort()`. `v2` is greater. Sorts in ascending order if passed to `Array.sort()`. * `diff(v1, v2)`: Returns difference between two versions by the release type (`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`), or null if the versions are the same. ### Comparators * `intersects(comparator)`: Return true if the comparators intersect ### Ranges * `validRange(range)`: Return the valid range or null if it's not valid * `satisfies(version, range)`: Return true if the version satisfies the range. * `maxSatisfying(versions, range)`: Return the highest version in the list that satisfies the range, or `null` if none of them do. * `minSatisfying(versions, range)`: Return the lowest version in the list that satisfies the range, or `null` if none of them do. * `minVersion(range)`: Return the lowest version that can possibly match the given range. * `gtr(version, range)`: Return `true` if version is greater than all the versions possible in the range. 
* `ltr(version, range)`: Return `true` if version is less than all the versions possible in the range. * `outside(version, range, hilo)`: Return true if the version is outside the bounds of the range in either the high or low direction. The `hilo` argument must be either the string `'>'` or `'<'`. (This is the function called by `gtr` and `ltr`.) * `intersects(range)`: Return true if any of the ranges comparators intersect * `simplifyRange(versions, range)`: Return a "simplified" range that matches the same items in `versions` list as the range specified. Note that it does *not* guarantee that it would match the same versions in all cases, only for the set of versions provided. This is useful when generating ranges by joining together multiple versions with `||` programmatically, to provide the user with something a bit more ergonomic. If the provided range is shorter in string-length than the generated range, then that is returned. * `subset(subRange, superRange)`: Return `true` if the `subRange` range is entirely contained by the `superRange` range. Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, *or* satisfy a range! For example, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9` until `2.0.0`, so the version `1.2.10` would not be greater than the range (because `2.0.1` satisfies, which is higher), nor less than the range (since `1.2.8` satisfies, which is lower), and it also does not satisfy the range. If you want to know if a version satisfies or does not satisfy a range, use the `satisfies(version, range)` function. ### Coercion * `coerce(version, options)`: Coerces a string to semver if possible This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., `1`, `1.2`, `1.2.3`) up to the max permitted length (256 characters). Longer versions are simply truncated (`4.6.3.9.2-alpha2` becomes `4.6.3`). All surrounding text is simply ignored (`v3.4 replaces v3.3.1` becomes `3.4.0`). Only text which lacks digits will fail coercion (`version one` is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (`10000000000000000.4.7.4` becomes `4.7.4`). The maximum value for any semver component is `Number.MAX_SAFE_INTEGER || (2**53 - 1)`; higher value components are invalid (`9999999999999999.4.7.4` is likely invalid). If the `options.rtl` flag is set, then `coerce` will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, `1.2.3.4` will return `2.3.4` in rtl mode, not `4.0.0`. `1.2.3/4` will return `4.0.0`, because the `4` is not a part of any other overlapping SemVer tuple. ### Clean * `clean(version)`: Clean a string to be a valid semver if possible This will return a cleaned and trimmed semver version. If the provided version is not valid a null will be returned. This does not work for ranges. ex. 
* `s.clean(' = v 2.1.5foo')`: `null`
* `s.clean(' = v 2.1.5foo', { loose: true })`: `'2.1.5-foo'`
* `s.clean(' = v 2.1.5-foo')`: `null`
* `s.clean(' = v 2.1.5-foo', { loose: true })`: `'2.1.5-foo'`
* `s.clean('=v2.1.5')`: `'2.1.5'`
* `s.clean(' =v2.1.5')`: `'2.1.5'`
* `s.clean(' 2.1.5 ')`: `'2.1.5'`
* `s.clean('~1.0.0')`: `null`

## Exported Modules

<!--
TODO: Make sure that all of these items are documented (classes aren't, eg),
and then pull the module name into the documentation for that specific thing.
-->

You may pull in just the part of this semver utility that you need, if you are sensitive to packing and tree-shaking concerns. The main `require('semver')` export uses getter functions to lazily load the parts of the API that are used.

The following modules are available:

* `require('semver')`
* `require('semver/classes')`
* `require('semver/classes/comparator')`
* `require('semver/classes/range')`
* `require('semver/classes/semver')`
* `require('semver/functions/clean')`
* `require('semver/functions/cmp')`
* `require('semver/functions/coerce')`
* `require('semver/functions/compare')`
* `require('semver/functions/compare-build')`
* `require('semver/functions/compare-loose')`
* `require('semver/functions/diff')`
* `require('semver/functions/eq')`
* `require('semver/functions/gt')`
* `require('semver/functions/gte')`
* `require('semver/functions/inc')`
* `require('semver/functions/lt')`
* `require('semver/functions/lte')`
* `require('semver/functions/major')`
* `require('semver/functions/minor')`
* `require('semver/functions/neq')`
* `require('semver/functions/parse')`
* `require('semver/functions/patch')`
* `require('semver/functions/prerelease')`
* `require('semver/functions/rcompare')`
* `require('semver/functions/rsort')`
* `require('semver/functions/satisfies')`
* `require('semver/functions/sort')`
* `require('semver/functions/valid')`
* `require('semver/ranges/gtr')`
* `require('semver/ranges/intersects')`
* `require('semver/ranges/ltr')`
* `require('semver/ranges/max-satisfying')`
* `require('semver/ranges/min-satisfying')`
* `require('semver/ranges/min-version')`
* `require('semver/ranges/outside')`
* `require('semver/ranges/to-comparators')`
* `require('semver/ranges/valid')`

bs58
====

[![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58)

JavaScript component to compute base 58 encoding. This encoding is typically used for cryptocurrencies such as Bitcoin.

**Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library.

Install
-------

    npm i --save bs58

API
---

### encode(input)

`input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`.

**example**:

```js
const bs58 = require('bs58')

const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex')
const address = bs58.encode(bytes)
console.log(address)
// => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS
```

### decode(input)

`input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html).

**example**:

```js
const bs58 = require('bs58')

const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS'
const bytes = bs58.decode(address)
console.log(bytes.toString('hex'))
// => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187
```

Hack / Test
-----------

Uses JavaScript standard style.
Read more: [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Credits ------- - [Mike Hearn](https://github.com/mikehearn) for original Java implementation - [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript - [Stephan Pair](https://github.com/gasteve) for buffer improvements - [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib - [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency License ------- MIT # wrappy Callback wrapping utility ## USAGE ```javascript var wrappy = require("wrappy") // var wrapper = wrappy(wrapperFunction) // make sure a cb is called only once // See also: http://npm.im/once for this specific use case var once = wrappy(function (cb) { var called = false return function () { if (called) return called = true return cb.apply(this, arguments) } }) function printBoo () { console.log('boo') } // has some rando property printBoo.iAmBooPrinter = true var onlyPrintOnce = once(printBoo) onlyPrintOnce() // prints 'boo' onlyPrintOnce() // does nothing // random property is retained! assert.equal(onlyPrintOnce.iAmBooPrinter, true) ``` # fast-levenshtein - Levenshtein algorithm in Javascript [![Build Status](https://secure.travis-ci.org/hiddentao/fast-levenshtein.png)](http://travis-ci.org/hiddentao/fast-levenshtein) [![NPM module](https://badge.fury.io/js/fast-levenshtein.png)](https://badge.fury.io/js/fast-levenshtein) [![NPM downloads](https://img.shields.io/npm/dm/fast-levenshtein.svg?maxAge=2592000)](https://www.npmjs.com/package/fast-levenshtein) [![Follow on Twitter](https://img.shields.io/twitter/url/http/shields.io.svg?style=social&label=Follow&maxAge=2592000)](https://twitter.com/hiddentao) An efficient Javascript implementation of the [Levenshtein algorithm](http://en.wikipedia.org/wiki/Levenshtein_distance) with locale-specific collator support. ## Features * Works in node.js and in the browser. * Better performance than other implementations by not needing to store the whole matrix ([more info](http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm)). * Locale-sensitive string comparisions if needed. * Comprehensive test suite and performance benchmark. * Small: <1 KB minified and gzipped ## Installation ### node.js Install using [npm](http://npmjs.org/): ```bash $ npm install fast-levenshtein ``` ### Browser Using bower: ```bash $ bower install fast-levenshtein ``` If you are not using any module loader system then the API will then be accessible via the `window.Levenshtein` object. 
## Examples **Default usage** ```javascript var levenshtein = require('fast-levenshtein'); var distance = levenshtein.get('back', 'book'); // 2 var distance = levenshtein.get('我愛你', '我叫你'); // 1 ``` **Locale-sensitive string comparisons** It supports using [Intl.Collator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Collator) for locale-sensitive string comparisons: ```javascript var levenshtein = require('fast-levenshtein'); levenshtein.get('mikailovitch', 'Mikhaïlovitch', { useCollator: true}); // 1 ``` ## Building and Testing To build the code and run the tests: ```bash $ npm install -g grunt-cli $ npm install $ npm run build ``` ## Performance _Thanks to [Titus Wormer](https://github.com/wooorm) for [encouraging me](https://github.com/hiddentao/fast-levenshtein/issues/1) to do this._ Benchmarked against other node.js levenshtein distance modules (on Macbook Air 2012, Core i7, 8GB RAM): ```bash Running suite Implementation comparison [benchmark/speed.js]... >> levenshtein-edit-distance x 234 ops/sec ±3.02% (73 runs sampled) >> levenshtein-component x 422 ops/sec ±4.38% (83 runs sampled) >> levenshtein-deltas x 283 ops/sec ±3.83% (78 runs sampled) >> natural x 255 ops/sec ±0.76% (88 runs sampled) >> levenshtein x 180 ops/sec ±3.55% (86 runs sampled) >> fast-levenshtein x 1,792 ops/sec ±2.72% (95 runs sampled) Benchmark done. Fastest test is fast-levenshtein at 4.2x faster than levenshtein-component ``` You can run this benchmark yourself by doing: ```bash $ npm install $ npm run build $ npm run benchmark ``` ## Contributing If you wish to submit a pull request please update and/or create new tests for any changes you make and ensure the grunt build passes. See [CONTRIBUTING.md](https://github.com/hiddentao/fast-levenshtein/blob/master/CONTRIBUTING.md) for details. ## License MIT - see [LICENSE.md](https://github.com/hiddentao/fast-levenshtein/blob/master/LICENSE.md) [![build status](https://secure.travis-ci.org/dankogai/js-base64.png)](http://travis-ci.org/dankogai/js-base64) # base64.js Yet another [Base64] transcoder. [Base64]: http://en.wikipedia.org/wiki/Base64 ## HEADS UP In version 3.0 `js-base64` switch to ES2015 module so it is no longer compatible with legacy browsers like IE (see below). And since version 3.3 it is written in TypeScript. Now `base64.mjs` is compiled from `base64.ts` then `base64.js` is generated from `base64.mjs`. ## Install ```shell $ npm install --save js-base64 ``` ## Usage ### In Browser Locally… ```html <script src="base64.js"></script> ``` … or Directly from CDN. In which case you don't even need to install. ```html <script src="https://cdn.jsdelivr.net/npm/[email protected]/base64.min.js"></script> ``` This good old way loads `Base64` in the global context (`window`). Though `Base64.noConflict()` is made available, you should consider using ES6 Module to avoid tainting `window`. ### As an ES6 Module locally… ```javascript import { Base64 } from 'js-base64'; ``` ```javascript // or if you prefer no Base64 namespace import { encode, decode } from 'js-base64'; ``` or even remotely. 
```html
<script type="module">
// note jsdelivr.net does not automatically minify .mjs
import { Base64 } from 'https://cdn.jsdelivr.net/npm/[email protected]/base64.mjs';
</script>
```

```html
<script type="module">
// or if you prefer no Base64 namespace
import { encode, decode } from 'https://cdn.jsdelivr.net/npm/[email protected]/base64.mjs';
</script>
```

### node.js (commonjs)

```javascript
const {Base64} = require('js-base64');
```

Unlike the case above, the global context is no longer modified.

You can also use [esm] to `import` instead of `require`.

[esm]: https://github.com/standard-things/esm

```javascript
require=require('esm')(module);
import {Base64} from 'js-base64';
```

## SYNOPSIS

```javascript
let latin = 'dankogai';
let utf8 = '小飼弾'
let u8s = new Uint8Array([100,97,110,107,111,103,97,105]);
Base64.encode(latin);             // ZGFua29nYWk=
Base64.encode(latin, true);       // ZGFua29nYWk skips padding
Base64.encodeURI(latin);          // ZGFua29nYWk
Base64.btoa(latin);               // ZGFua29nYWk=
Base64.btoa(utf8);                // raises exception
Base64.fromUint8Array(u8s);       // ZGFua29nYWk=
Base64.fromUint8Array(u8s, true); // ZGFua29nYWk which is URI safe
Base64.encode(utf8);              // 5bCP6aO85by+
Base64.encode(utf8, true)         // 5bCP6aO85by-
Base64.encodeURI(utf8);           // 5bCP6aO85by-
```

```javascript
Base64.decode(      'ZGFua29nYWk=');// dankogai
Base64.decode(      'ZGFua29nYWk'); // dankogai
Base64.atob(        'ZGFua29nYWk=');// dankogai
Base64.atob(        '5bCP6aO85by+');// a garbled string of raw bytes, which is nonsense
Base64.toUint8Array('ZGFua29nYWk=');// u8s above
Base64.decode(      '5bCP6aO85by+');// 小飼弾
// note .decodeURI() is unnecessary since it accepts both flavors
Base64.decode(      '5bCP6aO85by-');// 小飼弾
```

```javascript
Base64.isValid(0);      // false: 0 is not string
Base64.isValid('');     // true: a valid Base64-encoded empty byte
Base64.isValid('ZA=='); // true: a valid Base64-encoded 'd'
Base64.isValid('Z A='); // true: whitespaces are okay
Base64.isValid('ZA');   // true: padding ='s can be omitted
Base64.isValid('++');   // true: can be non URL-safe
Base64.isValid('--');   // true: or URL-safe
Base64.isValid('+-');   // false: can't mix both
```

### Built-in Extensions

By default `Base64` leaves built-in prototypes untouched. But you can extend them as below.

```javascript
// you have to explicitly extend String.prototype
Base64.extendString();
// once extended, you can do the following
'dankogai'.toBase64();        // ZGFua29nYWk=
'小飼弾'.toBase64();           // 5bCP6aO85by+
'小飼弾'.toBase64(true);       // 5bCP6aO85by-
'小飼弾'.toBase64URI();        // 5bCP6aO85by- an alias of .toBase64(true)
'小飼弾'.toBase64URL();        // 5bCP6aO85by- an alias of .toBase64URI()
'ZGFua29nYWk='.fromBase64();  // dankogai
'5bCP6aO85by+'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.fromBase64();  // 小飼弾
'5bCP6aO85by-'.toUint8Array();// u8s above
```

```javascript
// you have to explicitly extend Uint8Array.prototype
Base64.extendUint8Array();
// once extended, you can do the following
u8s.toBase64();    // 'ZGFua29nYWk='
u8s.toBase64URI(); // 'ZGFua29nYWk'
u8s.toBase64URL(); // 'ZGFua29nYWk' an alias of .toBase64URI()
```

```javascript
// extend all at once
Base64.extendBuiltins()
```

## `.decode()` vs `.atob` (and `.encode()` vs `btoa()`)

Suppose you have:

```
var pngBase64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=";
```

Which is a Base64-encoded 1x1 transparent PNG, **DO NOT USE** `Base64.decode(pngBase64)`. Use `Base64.atob(pngBase64)` instead.
`Base64.decode()` decodes to UTF-8 string while `Base64.atob()` decodes to bytes, which is compatible to browser built-in `atob()` (Which is absent in node.js).  The same rule applies to the opposite direction. Or even better, `Base64.toUint8Array(pngBase64)`. ### If you really, really need an ES5 version You can transpiles to an ES5 that runs on IE11. Do the following in your shell. ```shell $ make base64.es5.js ``` # minipass A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough) [It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers. Supports pipe()ing (including multi-pipe() and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea. There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory. There is also no `unpipe()` method. Once you start piping, there is no stopping it! If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called. `objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set. This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. Once you're done transforming the data however you want, call `super.write()` with the transform output. For some examples of streams that extend Minipass in various ways, check out: - [minizlib](http://npm.im/minizlib) - [fs-minipass](http://npm.im/fs-minipass) - [tar](http://npm.im/tar) - [minipass-collect](http://npm.im/minipass-collect) - [minipass-flush](http://npm.im/minipass-flush) - [minipass-pipeline](http://npm.im/minipass-pipeline) - [tap](http://npm.im/tap) - [tap-parser](http://npm.im/tap) - [treport](http://npm.im/tap) - [minipass-fetch](http://npm.im/minipass-fetch) - [pacote](http://npm.im/pacote) - [make-fetch-happen](http://npm.im/make-fetch-happen) - [cacache](http://npm.im/cacache) - [ssri](http://npm.im/ssri) - [npm-registry-fetch](http://npm.im/npm-registry-fetch) - [minipass-json-stream](http://npm.im/minipass-json-stream) - [minipass-sized](http://npm.im/minipass-sized) ## Differences from Node.js Streams There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams. Please read these caveats if you are familiar with noode-core streams and intend to use Minipass streams in your programs. ### Timing Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer. Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them. This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately. 
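To make the timing concrete, here is a minimal sketch (the variable names are illustrative): with Minipass, the `'data'` listener fires during the `write()` call itself, whereas a core `PassThrough` would emit on a later tick.

```js
const Minipass = require('minipass')

const mp = new Minipass({ encoding: 'utf8' })

let seen = null
mp.on('data', chunk => { seen = chunk })

mp.write('hello')
// The listener has already run by the time write() returns:
console.log(seen) //=> 'hello'
// A core PassThrough would still show null here and emit on a later tick.
```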
However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time. This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms. ### No High/Low Water Marks Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value. Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns). If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it. ### Hazards of Buffering (or: Why Minipass Is So Fast) Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller. This minimizes buffering. Consider this case: ```js const {PassThrough} = require('stream') const p1 = new PassThrough({ highWaterMark: 1024 }) const p2 = new PassThrough({ highWaterMark: 1024 }) const p3 = new PassThrough({ highWaterMark: 1024 }) const p4 = new PassThrough({ highWaterMark: 1024 }) p1.pipe(p2).pipe(p3).pipe(p4) p4.on('data', () => console.log('made it through')) // this returns false and buffers, then writes to p2 on next tick (1) // p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2) // p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3) // p4 returns false and buffers, pausing p3, then emits 'data' and 'drain' // on next tick (4) // p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and // 'drain' on next tick (5) // p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6) // p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next // tick (7) p1.write(Buffer.alloc(2048)) // returns false ``` Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through! Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket. Consider the Minipass case: ```js const m1 = new Minipass() const m2 = new Minipass() const m3 = new Minipass() const m4 = new Minipass() m1.pipe(m2).pipe(m3).pipe(m4) m4.on('data', () => console.log('made it through')) // m1 is flowing, so it writes the data to m2 immediately // m2 is flowing, so it writes the data to m3 immediately // m3 is flowing, so it writes the data to m4 immediately // m4 is flowing, so it fires the 'data' event immediately, returns true // m4's write returned true, so m3 is still flowing, returns true // m3's write returned true, so m2 is still flowing, returns true // m2's write returned true, so m1 is still flowing, returns true // No event deferrals or buffering along the way! 
m1.write(Buffer.alloc(2048)) // returns true ``` It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing. As always, the faster implementation is the one that does less stuff and waits less time to do it. ### Immediately emit `end` for empty streams (when not paused) If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately. If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event. ### Emit `end` When Asked One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted. Ie, if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. (You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.) To prevent calling handlers multiple times who would not expect multiple ends to occur, all listeners are removed from the `'end'` event whenever it is emitted. ### Impact of "immediate flow" on Tee-streams A "tee stream" is a stream piping to multiple destinations: ```js const tee = new Minipass() t.pipe(dest1) t.pipe(dest2) t.write('foo') // goes to both destinations ``` Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer. ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone src.pipe(dest2) // gets nothing! ``` The solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead. ```js // Safe example: tee to both places const src = new Minipass() src.write('foo') const tee = new Minipass() tee.pipe(dest1) tee.pipe(dest2) src.pipe(tee) // tee gets 'foo', pipes to both locations ``` The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second: ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.on('data', handler1) // receives 'foo' right away src.on('data', handler2) // nothing to see here! ``` Using a dedicated tee-stream can be used in this case as well: ```js // Safe example: tee to both data handlers const src = new Minipass() src.write('foo') const tee = new Minipass() tee.on('data', handler1) tee.on('data', handler2) src.pipe(tee) ``` ## USAGE It's a stream! Use it like a stream and it'll most likely do what you want. ```js const Minipass = require('minipass') const mp = new Minipass(options) // optional: { encoding, objectMode } mp.write('foo') mp.pipe(someOtherStream) mp.end('bar') ``` ### OPTIONS * `encoding` How would you like the data coming _out_ of the stream to be encoded? 
Accepts any values that can be passed to `Buffer.toString()`.
* `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value.

### API

Implements the user-facing portions of Node.js's `Readable` and `Writable` streams.

### Methods

* `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or true if it's still in "flowing" mode.
* `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed.
* `setEncoding(encoding)` - Set the encoding for data coming out of the stream. This can only be done once.
* `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed.
* `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted.
* `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations.
* `on(ev, fn)`, `emit(ev, data)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".)
* `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`.
* `collect()` - Return a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data.
* `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data.
* `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying.
* `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered.

### Properties

* `bufferLength` Read-only. Total number of bytes buffered, or in the case of objectMode, the total number of objects.
* `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.)
* `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted.
* `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediately re-emit it if it has already been emitted.
* `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` is called.
* `readable` Whether the stream is readable. Default `true`.
* `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.) * `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.) * `destroyed` A getter that indicates whether the stream was destroyed. * `paused` True if the stream has been explicitly paused, otherwise false. * `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`. ### Events * `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream. * `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted. * `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`. * `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`. * `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise. * `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream. * `readable` Emitted when data is buffered and ready to be read by a consumer. * `resume` Emitted when stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.) ### Static Methods * `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and false otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.) ## EXAMPLES Here are some examples of things you can do with Minipass streams. ### simple "are you done yet" promise ```js mp.promise().then(() => { // stream is finished }, er => { // stream emitted an error }) ``` ### collecting ```js mp.collect().then(all => { // all is an array of all the data emitted // encoding is supported in this case, so // so the result will be a collection of strings if // an encoding is specified, or buffers/objects if not. // // In an async function, you may do // const data = await stream.collect() }) ``` ### collecting into a single blob This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way: ```js mp.concat().then(onebigchunk => { // onebigchunk is a string if the stream // had an encoding set, or a buffer otherwise. }) ``` ### iteration You can iterate over streams synchronously or asynchronously in platforms that support it. Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration. 
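For example, a small sketch of that flattening (assuming `minipass` is installed):

```js
const Minipass = require('minipass')

const mp = new Minipass({ encoding: 'utf8' })
mp.write('foo')
mp.write('bar')

// string mode: the buffered writes are concatenated, so the loop runs once
for (let chunk of mp) {
  console.log(chunk) // 'foobar'
}
```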
To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option.

```js
const mp = new Minipass({ objectMode: true })
mp.write('a')
mp.write('b')
for (let letter of mp) {
  console.log(letter) // a, b
}
mp.write('c')
mp.write('d')
for (let letter of mp) {
  console.log(letter) // c, d
}
mp.write('e')
mp.end()
for (let letter of mp) {
  console.log(letter) // e
}
for (let letter of mp) {
  console.log(letter) // nothing
}
```

Asynchronous iteration will continue until the end event is reached, consuming all of the data.

```js
const mp = new Minipass({ encoding: 'utf8' })

// some source of some data
let i = 5
const inter = setInterval(() => {
  if (i --> 0)
    mp.write(Buffer.from('foo\n', 'utf8'))
  else {
    mp.end()
    clearInterval(inter)
  }
}, 100)

// consume the data with asynchronous iteration
async function consume () {
  for await (let chunk of mp) {
    console.log(chunk)
  }
  return 'ok'
}

consume().then(res => console.log(res))
// logs `foo\n` 5 times, and then `ok`
```

### subclass that `console.log()`s everything written into it

```js
class Logger extends Minipass {
  write (chunk, encoding, callback) {
    console.log('WRITE', chunk, encoding)
    return super.write(chunk, encoding, callback)
  }
  end (chunk, encoding, callback) {
    console.log('END', chunk, encoding)
    return super.end(chunk, encoding, callback)
  }
}

someSource.pipe(new Logger()).pipe(someDest)
```

### same thing, but using an inline anonymous class

```js
// js classes are fun
someSource
  .pipe(new (class extends Minipass {
    emit (ev, ...data) {
      // let's also log events, because debugging some weird thing
      console.log('EMIT', ev)
      return super.emit(ev, ...data)
    }
    write (chunk, encoding, callback) {
      console.log('WRITE', chunk, encoding)
      return super.write(chunk, encoding, callback)
    }
    end (chunk, encoding, callback) {
      console.log('END', chunk, encoding)
      return super.end(chunk, encoding, callback)
    }
  }))
  .pipe(someDest)
```

### subclass that defers 'end' for some reason

```js
class SlowEnd extends Minipass {
  emit (ev, ...args) {
    if (ev === 'end') {
      console.log('going to end, hold on a sec')
      setTimeout(() => {
        console.log('ok, ready to end now')
        super.emit('end', ...args)
      }, 100)
    } else {
      return super.emit(ev, ...args)
    }
  }
}
```

### transform that creates newline-delimited JSON

```js
class NDJSONEncode extends Minipass {
  write (obj, cb) {
    try {
      // JSON.stringify can throw, emit an error on that
      return super.write(JSON.stringify(obj) + '\n', 'utf8', cb)
    } catch (er) {
      this.emit('error', er)
    }
  }
  end (obj, cb) {
    if (typeof obj === 'function') {
      cb = obj
      obj = undefined
    }
    if (obj !== undefined) {
      this.write(obj)
    }
    return super.end(cb)
  }
}
```

### transform that parses newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor (options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write (chunk, encoding, cb) {
    if (typeof chunk === 'string' &&
        typeof encoding === 'string' &&
        encoding !== 'utf8') {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    // split into complete lines, keeping any trailing partial line buffered
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      let parsed
      try {
        // JSON.parse can throw; emit an error and skip the bad line
        parsed = JSON.parse(jsonData[i])
      } catch (er) {
        this.emit('error', er)
        continue
      }
      super.write(parsed)
    }
    if (cb) cb()
  }
}
```

# yargs-parser [![Build
Status](https://travis-ci.org/yargs/yargs-parser.svg)](https://travis-ci.org/yargs/yargs-parser) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. <img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js var argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```sh node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js var argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```sh { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js var parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## API ### require('yargs-parser')(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. 
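For example, a short sketch combining several of these hints (the flags and defaults here are made up for illustration; key order in the output may differ):

```js
var parse = require('yargs-parser')

var argv = parse('--name World -v -v -v --no-color', {
  alias: { name: ['n'] },        // --name and -n populate the same key
  count: ['v'],                  // repeated -v flags become a counter
  boolean: ['color'],            // --no-color negates to false
  default: { greeting: 'Hello' } // filled in when not passed
})

console.log(argv)
// { _: [], name: 'World', n: 'World', v: 3, color: false, greeting: 'Hello' }
```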
### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```sh node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```sh node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```sh node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```sh node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```sh node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```sh node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```sh node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```sh node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```sh node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```sh node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. 
### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```sh node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```sh node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```sh node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```sh node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```sh node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```sh node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```sh node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```sh node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```sh node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```sh node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. 
_If disabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC # `asbuild` [![Stars](https://img.shields.io/github/stars/AssemblyScript/asbuild.svg?style=social&maxAge=3600&label=Star)](https://github.com/AssemblyScript/asbuild/stargazers) *A simple build tool for [AssemblyScript](https://assemblyscript.org) projects, similar to `cargo`, etc.* ## 🚩 Table of Contents - [Installing](#-installing) - [Usage](#-usage) - [`asb init`](#asb-init---create-an-empty-project) - [`asb test`](#asb-test---run-as-pect-tests) - [`asb fmt`](#asb-fmt---format-as-files-using-eslint) - [`asb run`](#asb-run---run-a-wasi-binary) - [`asb build`](#asb-build---compile-the-project-using-asc) - [Background](#-background) ## 🔧 Installing Install it globally ``` npm install -g asbuild ``` Or, locally as dev dependencies ``` npm install --save-dev asbuild ``` ## 💡 Usage ``` Build tool for AssemblyScript projects. Usage: asb [command] [options] Commands: asb Alias of build command, to maintain back-ward compatibility [default] asb build Compile a local package and all of its dependencies [aliases: compile, make] asb init [baseDir] Create a new AS package in an given directory asb test Run as-pect tests asb fmt [paths..] This utility formats current module using eslint. [aliases: format, lint] Options: --version Show version number [boolean] --help Show help [boolean] ``` ### `asb init` - Create an empty project ``` asb init [baseDir] Create a new AS package in an given directory Positionals: baseDir Create a sample AS project in this directory [string] [default: "."] Options: --version Show version number [boolean] --help Show help [boolean] --yes Skip the interactive prompt [boolean] [default: false] ``` ### `asb test` - Run as-pect tests ``` asb test Run as-pect tests USAGE: asb test [options] -- [aspect_options] Options: --version Show version number [boolean] --help Show help [boolean] --verbose, --vv Print out arguments passed to as-pect [boolean] [default: false] ``` ### `asb fmt` - Format AS files using ESlint ``` asb fmt [paths..] This utility formats current module using eslint. Positionals: paths Paths to format [array] [default: ["."]] Initialisation: --init Generates recommended eslint config for AS Projects [boolean] Miscellaneous --lint, --dry-run Tries to fix problems without saving the changes to the file system [boolean] [default: false] Options: --version Show version number [boolean] --help Show help ``` ### `asb run` - Run a WASI binary ``` asb run Run a WASI binary USAGE: asb run [options] [binary path] -- [binary options] Positionals: binary path to Wasm binary [string] [required] Options: --version Show version number [boolean] --help Show help [boolean] --preopen, -p comma separated list of directories to open. 
[default: "."] ``` ### `asb build` - Compile the project using asc ``` asb build Compile a local package and all of its dependencies USAGE: asb build [entry_file] [options] -- [asc_options] Options: --version Show version number [boolean] --help Show help [boolean] --baseDir, -d Base directory of project. [string] [default: "."] --config, -c Path to asconfig file [string] [default: "./asconfig.json"] --wat Output wat file to outDir [boolean] [default: false] --outDir Directory to place built binaries. Default "./build/<target>/" [string] --target Target for compilation [string] [default: "release"] --verbose Print out arguments passed to asc [boolean] [default: false] Examples: asb build Build release of 'assembly/index.ts to build/release/packageName.wasm asb build --target release Build a release binary asb build -- --measure Pass argument to 'asc' ``` #### Defaults ##### Project structure ``` project/ package.json asconfig.json assembly/ index.ts build/ release/ project.wasm debug/ project.wasm ``` - If no entry file passed and no `entry` field is in `asconfig.json`, `project/assembly/index.ts` is assumed. - `asconfig.json` allows for options for different compile targets, e.g. release, debug, etc. `asc` defaults to the release target. - The default build directory is `./build`, and artifacts are placed at `./build/<target>/packageName.wasm`. ##### Workspaces If a `workspace` field is added to a top level `asconfig.json` file, then each path in the array is built and placed into the top level `outDir`. For example, `asconfig.json`: ```json { "workspaces": ["a", "b"] } ``` Running `asb` in the directory below will use the top level build directory to place all the binaries. ``` project/ package.json asconfig.json a/ asconfig.json assembly/ index.ts b/ asconfig.json assembly/ index.ts build/ release/ a.wasm b.wasm debug/ a.wasm b.wasm ``` To see an example in action check out the [test workspace](./tests/build_test) ## 📖 Background Asbuild started as wrapper around `asc` to provide an easier CLI interface and now has been extened to support other commands like `init`, `test` and `fmt` just like `cargo` to become a one stop build tool for AS Projects. ## 📜 License This library is provided under the open-source [MIT license](https://choosealicense.com/licenses/mit/). # near-sdk-core This package contain a convenient interface for interacting with NEAR's host runtime. To see the functions that are provided by the host node see [`env.ts`](./assembly/env/env.ts). # base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. **WARNING:** This module is **NOT RFC3548** compliant, it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. 
## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base. Base | Alphabet ------------- | ------------- 2 | `01` 8 | `01234567` 11 | `0123456789a` 16 | `0123456789abcdef` 32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ` 32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32) 36 | `0123456789abcdefghijklmnopqrstuvwxyz` 58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz` 62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ` 64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/` 66 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~` ## How it works It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. Then for every leading zero in the input (not significant as a number) it will encode as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base. For example, a base58 alphabet packs roughly 5.858 bits per character. This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character. While unusual, this does mean that no padding is required and it works for bases like 43. ## LICENSE [MIT](LICENSE) A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets. ### Estraverse [![Build Status](https://secure.travis-ci.org/estools/estraverse.svg)](http://travis-ci.org/estools/estraverse) Estraverse ([estraverse](http://github.com/estools/estraverse)) is [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) traversal functions from [esmangle project](http://github.com/estools/esmangle). ### Documentation You can find usage docs at [wiki page](https://github.com/estools/estraverse/wiki/Usage). ### Example Usage The following code will output all variables declared at the root of a file. ```javascript estraverse.traverse(ast, { enter: function (node, parent) { if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration') return estraverse.VisitorOption.Skip; }, leave: function (node, parent) { if (node.type == 'VariableDeclarator') console.log(node.id.name); } }); ``` We can use `this.skip`, `this.remove` and `this.break` functions instead of using Skip, Remove and Break. ```javascript estraverse.traverse(ast, { enter: function (node) { this.break(); } }); ``` And estraverse provides `estraverse.replace` function. When returning node from `enter`/`leave`, current node is replaced with it. ```javascript result = estraverse.replace(tree, { enter: function (node) { // Replace it with replaced. if (node.type === 'Literal') return replaced; } }); ``` By passing `visitor.keys` mapping, we can extend estraverse traversing functionality. 
```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Extending the existing traversing rules. keys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } }); ``` By passing `visitor.fallback` option, we can control the behavior when encountering unknown nodes. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Iterating the child **nodes** of unknown nodes. fallback: 'iteration' }); ``` When `visitor.fallback` is a function, we can determine which keys to visit on each node. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Skip the `argument` property of each node fallback: function(node) { return Object.keys(node).filter(function(key) { return key !== 'argument'; }); } }); ``` ### License Copyright (C) 2012-2016 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# require-main-filename [![Build Status](https://travis-ci.org/yargs/require-main-filename.png)](https://travis-ci.org/yargs/require-main-filename) [![Coverage Status](https://coveralls.io/repos/yargs/require-main-filename/badge.svg?branch=master)](https://coveralls.io/r/yargs/require-main-filename?branch=master) [![NPM version](https://img.shields.io/npm/v/require-main-filename.svg)](https://www.npmjs.com/package/require-main-filename) `require.main.filename` is great for figuring out the entry point for the current application. This can be combined with a module like [pkg-conf](https://www.npmjs.com/package/pkg-conf) to, _as if by magic_, load top-level configuration. Unfortunately, `require.main.filename` sometimes fails when an application is executed with an alternative process manager, e.g., [iisnode](https://github.com/tjanczuk/iisnode). `require-main-filename` is a shim that addresses this problem. ## Usage ```js var main = require('require-main-filename')() // use main as an alternative to require.main.filename. ``` ## License ISC # balanced-match Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well! [![build status](https://secure.travis-ci.org/juliangruber/balanced-match.svg)](http://travis-ci.org/juliangruber/balanced-match) [![downloads](https://img.shields.io/npm/dm/balanced-match.svg)](https://www.npmjs.org/package/balanced-match) [![testling badge](https://ci.testling.com/juliangruber/balanced-match.png)](https://ci.testling.com/juliangruber/balanced-match) ## Example Get the first matching pair of braces: ```js var balanced = require('balanced-match'); console.log(balanced('{', '}', 'pre{in{nested}}post')); console.log(balanced('{', '}', 'pre{first}between{second}post')); console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post')); ``` The matches are: ```bash $ node example.js { start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' } { start: 3, end: 9, pre: 'pre', body: 'first', post: 'between{second}post' } { start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' } ``` ## API ### var m = balanced(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an object with those keys: * **start** the index of the first match of `a` * **end** the index of the matching `b` * **pre** the preamble, `a` and `b` not included * **body** the match, `a` and `b` not included * **post** the postscript, `a` and `b` not included If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`. ### var r = balanced.range(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an array with indexes: `[ <a index>, <b index> ]`. If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`. ## Installation With [npm](https://npmjs.org) do: ```bash npm install balanced-match ``` ## Security contact information To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. 
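As a companion to the `balanced()` example above, a quick sketch of the index-only variant described in the API section (same input string, so the indexes match the `start`/`end` shown earlier):

```js
var balanced = require('balanced-match');

console.log(balanced.range('{', '}', 'pre{in{nested}}post'));
// => [ 3, 14 ]
```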
## License (MIT) Copyright (c) 2013 Julian Gruber &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # eslint-visitor-keys [![npm version](https://img.shields.io/npm/v/eslint-visitor-keys.svg)](https://www.npmjs.com/package/eslint-visitor-keys) [![Downloads/month](https://img.shields.io/npm/dm/eslint-visitor-keys.svg)](http://www.npmtrends.com/eslint-visitor-keys) [![Build Status](https://travis-ci.org/eslint/eslint-visitor-keys.svg?branch=master)](https://travis-ci.org/eslint/eslint-visitor-keys) [![Dependency Status](https://david-dm.org/eslint/eslint-visitor-keys.svg)](https://david-dm.org/eslint/eslint-visitor-keys) Constants and utilities about visitor keys to traverse AST. ## 💿 Installation Use [npm] to install. ```bash $ npm install eslint-visitor-keys ``` ### Requirements - [Node.js] 10.0.0 or later. ## 📖 Usage ```js const evk = require("eslint-visitor-keys") ``` ### evk.KEYS > type: `{ [type: string]: string[] | undefined }` Visitor keys. This keys are frozen. This is an object. Keys are the type of [ESTree] nodes. Their values are an array of property names which have child nodes. For example: ``` console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"] ``` ### evk.getKeys(node) > type: `(node: object) => string[]` Get the visitor keys of a given AST node. This is similar to `Object.keys(node)` of ES Standard, but some keys are excluded: `parent`, `leadingComments`, `trailingComments`, and names which start with `_`. This will be used to traverse unknown nodes. For example: ``` const node = { type: "AssignmentExpression", left: { type: "Identifier", name: "foo" }, right: { type: "Literal", value: 0 } } console.log(evk.getKeys(node)) // → ["type", "left", "right"] ``` ### evk.unionWith(additionalKeys) > type: `(additionalKeys: object) => { [type: string]: string[] | undefined }` Make the union set with `evk.KEYS` and the given keys. - The order of keys is, `additionalKeys` is at first, then `evk.KEYS` is concatenated after that. - It removes duplicated keys as keeping the first one. For example: ``` console.log(evk.unionWith({ MethodDefinition: ["decorators"] })) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... } ``` ## 📰 Change log See [GitHub releases](https://github.com/eslint/eslint-visitor-keys/releases). ## 🍻 Contributing Welcome. See [ESLint contribution guidelines](https://eslint.org/docs/developer-guide/contributing/). ### Development commands - `npm test` runs tests and measures code coverage. - `npm run lint` checks source codes with ESLint. 
- `npm run coverage` opens the code coverage report of the previous test with your default browser. - `npm run release` publishes this package to [npm] registory. [npm]: https://www.npmjs.com/ [Node.js]: https://nodejs.org/en/ [ESTree]: https://github.com/estree/estree # URI.js URI.js is an [RFC 3986](http://www.ietf.org/rfc/rfc3986.txt) compliant, scheme extendable URI parsing/validating/resolving library for all JavaScript environments (browsers, Node.js, etc). It is also compliant with the IRI ([RFC 3987](http://www.ietf.org/rfc/rfc3987.txt)), IDNA ([RFC 5890](http://www.ietf.org/rfc/rfc5890.txt)), IPv6 Address ([RFC 5952](http://www.ietf.org/rfc/rfc5952.txt)), IPv6 Zone Identifier ([RFC 6874](http://www.ietf.org/rfc/rfc6874.txt)) specifications. URI.js has an extensive test suite, and works in all (Node.js, web) environments. It weighs in at 6.4kb (gzipped, 17kb deflated). ## API ### Parsing URI.parse("uri://user:[email protected]:123/one/two.three?q1=a1&q2=a2#body"); //returns: //{ // scheme : "uri", // userinfo : "user:pass", // host : "example.com", // port : 123, // path : "/one/two.three", // query : "q1=a1&q2=a2", // fragment : "body" //} ### Serializing URI.serialize({scheme : "http", host : "example.com", fragment : "footer"}) === "http://example.com/#footer" ### Resolving URI.resolve("uri://a/b/c/d?q", "../../g") === "uri://a/g" ### Normalizing URI.normalize("HTTP://ABC.com:80/%7Esmith/home.html") === "http://abc.com/~smith/home.html" ### Comparison URI.equal("example://a/b/c/%7Bfoo%7D", "eXAMPLE://a/./b/../b/%63/%7bfoo%7d") === true ### IP Support //IPv4 normalization URI.normalize("//192.068.001.000") === "//192.68.1.0" //IPv6 normalization URI.normalize("//[2001:0:0DB8::0:0001]") === "//[2001:0:db8::1]" //IPv6 zone identifier support URI.parse("//[2001:db8::7%25en1]"); //returns: //{ // host : "2001:db8::7%en1" //} ### IRI Support //convert IRI to URI URI.serialize(URI.parse("http://examplé.org/rosé")) === "http://xn--exampl-gva.org/ros%C3%A9" //convert URI to IRI URI.serialize(URI.parse("http://xn--exampl-gva.org/ros%C3%A9"), {iri:true}) === "http://examplé.org/rosé" ### Options All of the above functions can accept an additional options argument that is an object that can contain one or more of the following properties: * `scheme` (string) Indicates the scheme that the URI should be treated as, overriding the URI's normal scheme parsing behavior. * `reference` (string) If set to `"suffix"`, it indicates that the URI is in the suffix format, and the validator will use the option's `scheme` property to determine the URI's scheme. * `tolerant` (boolean, false) If set to `true`, the parser will relax URI resolving rules. * `absolutePath` (boolean, false) If set to `true`, the serializer will not resolve a relative `path` component. * `iri` (boolean, false) If set to `true`, the serializer will unescape non-ASCII characters as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `unicodeSupport` (boolean, false) If set to `true`, the parser will unescape non-ASCII characters in the parsed output as per [RFC 3987](http://www.ietf.org/rfc/rfc3987.txt). * `domainHost` (boolean, false) If set to `true`, the library will treat the `host` component as a domain name, and convert IDNs (International Domain Names) as per [RFC 5891](http://www.ietf.org/rfc/rfc5891.txt). ## Scheme Extendable URI.js supports inserting custom [scheme](http://en.wikipedia.org/wiki/URI_scheme) dependent processing rules. 
Currently, URI.js has built in support for the following schemes: * http \[[RFC 2616](http://www.ietf.org/rfc/rfc2616.txt)\] * https \[[RFC 2818](http://www.ietf.org/rfc/rfc2818.txt)\] * ws \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\] * wss \[[RFC 6455](http://www.ietf.org/rfc/rfc6455.txt)\] * mailto \[[RFC 6068](http://www.ietf.org/rfc/rfc6068.txt)\] * urn \[[RFC 2141](http://www.ietf.org/rfc/rfc2141.txt)\] * urn:uuid \[[RFC 4122](http://www.ietf.org/rfc/rfc4122.txt)\] ### HTTP/HTTPS Support URI.equal("HTTP://ABC.COM:80", "http://abc.com/") === true URI.equal("https://abc.com", "HTTPS://ABC.COM:443/") === true ### WS/WSS Support URI.parse("wss://example.com/foo?bar=baz"); //returns: //{ // scheme : "wss", // host: "example.com", // resourceName: "/foo?bar=baz", // secure: true, //} URI.equal("WS://ABC.COM:80/chat#one", "ws://abc.com/chat") === true ### Mailto Support URI.parse("mailto:[email protected],[email protected]?subject=SUBSCRIBE&body=Sign%20me%20up!"); //returns: //{ // scheme : "mailto", // to : ["[email protected]", "[email protected]"], // subject : "SUBSCRIBE", // body : "Sign me up!" //} URI.serialize({ scheme : "mailto", to : ["[email protected]"], subject : "REMOVE", body : "Please remove me", headers : { cc : "[email protected]" } }) === "mailto:[email protected][email protected]&subject=REMOVE&body=Please%20remove%20me" ### URN Support URI.parse("urn:example:foo"); //returns: //{ // scheme : "urn", // nid : "example", // nss : "foo", //} #### URN UUID Support URI.parse("urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6"); //returns: //{ // scheme : "urn", // nid : "uuid", // uuid : "f81d4fae-7dec-11d0-a765-00a0c91e6bf6", //} ## Usage To load in a browser, use the following tag: <script type="text/javascript" src="uri-js/dist/es5/uri.all.min.js"></script> To load in a CommonJS/Module environment, first install with npm/yarn by running on the command line: npm install uri-js # OR yarn add uri-js Then, in your code, load it using: const URI = require("uri-js"); If you are writing your code in ES6+ (ESNEXT) or TypeScript, you would load it using: import * as URI from "uri-js"; Or you can load just what you need using named exports: import { parse, serialize, resolve, resolveComponents, normalize, equal, removeDotSegments, pctEncChar, pctDecChars, escapeComponent, unescapeComponent } from "uri-js"; ## Breaking changes ### Breaking changes from 3.x URN parsing has been completely changed to better align with the specification. Scheme is now always `urn`, but has two new properties: `nid` which contains the Namspace Identifier, and `nss` which contains the Namespace Specific String. The `nss` property will be removed by higher order scheme handlers, such as the UUID URN scheme handler. The UUID of a URN can now be found in the `uuid` property. ### Breaking changes from 2.x URI validation has been removed as it was slow, exposed a vulnerabilty, and was generally not useful. ### Breaking changes from 1.x The `errors` array on parsed components is now an `error` string. 
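To tie the usage notes above together, a small sketch using the named exports (the values reuse the earlier examples; this is illustrative, not an exhaustive API tour):

```javascript
import { parse, serialize, equal } from "uri-js";

const parts = parse("http://abc.com/~smith/home.html");
console.log(parts.scheme); // "http"
console.log(parts.host);   // "abc.com"

// serialize() is the inverse of parse()
console.log(serialize({ scheme: "http", host: "example.com", fragment: "footer" }));
// "http://example.com/#footer"

console.log(equal("HTTP://ABC.COM:80", "http://abc.com/")); // true
```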
[![NPM registry](https://img.shields.io/npm/v/as-bignum.svg?style=for-the-badge)](https://www.npmjs.com/package/as-bignum)[![Build Status](https://img.shields.io/travis/com/MaxGraey/as-bignum/master?style=for-the-badge)](https://travis-ci.com/MaxGraey/as-bignum)[![NPM license](https://img.shields.io/badge/license-Apache%202.0-ba68c8.svg?style=for-the-badge)](LICENSE.md)

## WebAssembly fixed length big numbers written in [AssemblyScript](https://github.com/AssemblyScript/assemblyscript)

### Status: Work in progress

Provides wide numeric types such as `u128`, `u256`, `i128`, `i256`, fixed-point types, and their arithmetic operations. The `safe` namespace contains equivalents with overflow/underflow traps. All of these types are useful for financial and cryptographic applications and provide deterministic behavior.

### Install

> yarn add as-bignum

or

> npm i as-bignum

### Usage via AssemblyScript

```ts
import { u128 } from "as-bignum";

declare function logF64(value: f64): void;
declare function logU128(hi: u64, lo: u64): void;

var a = u128.One;
var b = u128.from(-32);                // same as u128.from<i32>(-32)
var c = new u128(0x1, -0xF);
var d = u128.from(0x0123456789ABCDEF); // same as u128.from<i64>(0x0123456789ABCDEF)
var e = u128.from('0x0123456789ABCDEF01234567');
var f = u128.fromString('11100010101100101', 2); // same as u128.from('0b11100010101100101')

var r = d / c + (b << 5) + e;
logF64(r.as<f64>());
logU128(r.hi, r.lo);
```

### Usage via JavaScript/Typescript

```ts
TODO
```

### List of types

- [x] [`u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u128.ts) unsigned type (tested)
- [ ] [`u256`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u256.ts) unsigned type (very basic)
- [ ] `i128` signed type
- [ ] `i256` signed type
---
- [x] [`safe.u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/safe/u128.ts) unsigned type (tested)
- [ ] `safe.u256` unsigned type
- [ ] `safe.i128` signed type
- [ ] `safe.i256` signed type
---
- [ ] [`fp128<Q>`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/fixed/fp128.ts) generic fixed point signed type٭ (very basic for now)
- [ ] `fp256<Q>` generic fixed point signed type٭
---
- [ ] `safe.fp128<Q>` generic fixed point signed type٭
- [ ] `safe.fp256<Q>` generic fixed point signed type٭

٭ _typename_ `Q` _is a type representing count of fractional bits_

<!--
  -- This file is auto-generated from README_js.md. Changes should be made there.
  -->

# uuid [![CI](https://github.com/uuidjs/uuid/workflows/CI/badge.svg)](https://github.com/uuidjs/uuid/actions?query=workflow%3ACI) [![Browser](https://github.com/uuidjs/uuid/workflows/Browser/badge.svg)](https://github.com/uuidjs/uuid/actions?query=workflow%3ABrowser)

For the creation of [RFC4122](http://www.ietf.org/rfc/rfc4122.txt) UUIDs

- **Complete** - Support for RFC4122 version 1, 3, 4, and 5 UUIDs
- **Cross-platform** - Support for ...
  - CommonJS, [ECMAScript Modules](#ecmascript-modules) and [CDN builds](#cdn-builds)
  - Node 8, 10, 12, 14
  - Chrome, Safari, Firefox, Edge, IE 11 browsers
  - Webpack and rollup.js module bundlers
  - [React Native / Expo](#react-native--expo)
- **Secure** - Cryptographically-strong random values
- **Small** - Zero-dependency, small footprint, plays nice with "tree shaking" packagers
- **CLI** - Includes the [`uuid` command line](#command-line) utility

**Upgrading from `[email protected]`?** Your code is probably okay, but check out [Upgrading From `[email protected]`](#upgrading-from-uuid3x) for details.
## Quickstart To create a random UUID... **1. Install** ```shell npm install uuid ``` **2. Create a UUID** (ES6 module syntax) ```javascript import { v4 as uuidv4 } from 'uuid'; uuidv4(); // ⇨ '9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d' ``` ... or using CommonJS syntax: ```javascript const { v4: uuidv4 } = require('uuid'); uuidv4(); // ⇨ '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed' ``` For timestamp UUIDs, namespace UUIDs, and other options read on ... ## API Summary | | | | | --- | --- | --- | | [`uuid.NIL`](#uuidnil) | The nil UUID string (all zeros) | New in `[email protected]` | | [`uuid.parse()`](#uuidparsestr) | Convert UUID string to array of bytes | New in `[email protected]` | | [`uuid.stringify()`](#uuidstringifyarr-offset) | Convert array of bytes to UUID string | New in `[email protected]` | | [`uuid.v1()`](#uuidv1options-buffer-offset) | Create a version 1 (timestamp) UUID | | | [`uuid.v3()`](#uuidv3name-namespace-buffer-offset) | Create a version 3 (namespace w/ MD5) UUID | | | [`uuid.v4()`](#uuidv4options-buffer-offset) | Create a version 4 (random) UUID | | | [`uuid.v5()`](#uuidv5name-namespace-buffer-offset) | Create a version 5 (namespace w/ SHA-1) UUID | | | [`uuid.validate()`](#uuidvalidatestr) | Test a string to see if it is a valid UUID | New in `[email protected]` | | [`uuid.version()`](#uuidversionstr) | Detect RFC version of a UUID | New in `[email protected]` | ## API ### uuid.NIL The nil UUID string (all zeros). Example: ```javascript import { NIL as NIL_UUID } from 'uuid'; NIL_UUID; // ⇨ '00000000-0000-0000-0000-000000000000' ``` ### uuid.parse(str) Convert UUID string to array of bytes | | | | --------- | ---------------------------------------- | | `str` | A valid UUID `String` | | _returns_ | `Uint8Array[16]` | | _throws_ | `TypeError` if `str` is not a valid UUID | Note: Ordering of values in the byte arrays used by `parse()` and `stringify()` follows the left &Rarr; right order of hex-pairs in UUID strings. As shown in the example below. Example: ```javascript import { parse as uuidParse } from 'uuid'; // Parse a UUID const bytes = uuidParse('6ec0bd7f-11c0-43da-975e-2a8ad9ebae0b'); // Convert to hex strings to show byte order (for documentation purposes) [...bytes].map((v) => v.toString(16).padStart(2, '0')); // ⇨ // [ // '6e', 'c0', 'bd', '7f', // '11', 'c0', '43', 'da', // '97', '5e', '2a', '8a', // 'd9', 'eb', 'ae', '0b' // ] ``` ### uuid.stringify(arr[, offset]) Convert array of bytes to UUID string | | | | -------------- | ---------------------------------------------------------------------------- | | `arr` | `Array`-like collection of 16 values (starting from `offset`) between 0-255. | | [`offset` = 0] | `Number` Starting index in the Array | | _returns_ | `String` | | _throws_ | `TypeError` if a valid UUID string cannot be generated | Note: Ordering of values in the byte arrays used by `parse()` and `stringify()` follows the left &Rarr; right order of hex-pairs in UUID strings. As shown in the example below. 
Example: ```javascript import { stringify as uuidStringify } from 'uuid'; const uuidBytes = [ 0x6e, 0xc0, 0xbd, 0x7f, 0x11, 0xc0, 0x43, 0xda, 0x97, 0x5e, 0x2a, 0x8a, 0xd9, 0xeb, 0xae, 0x0b, ]; uuidStringify(uuidBytes); // ⇨ '6ec0bd7f-11c0-43da-975e-2a8ad9ebae0b' ``` ### uuid.v1([options[, buffer[, offset]]]) Create an RFC version 1 (timestamp) UUID | | | | --- | --- | | [`options`] | `Object` with one or more of the following properties: | | [`options.node` ] | RFC "node" field as an `Array[6]` of byte values (per 4.1.6) | | [`options.clockseq`] | RFC "clock sequence" as a `Number` between 0 - 0x3fff | | [`options.msecs`] | RFC "timestamp" field (`Number` of milliseconds, unix epoch) | | [`options.nsecs`] | RFC "timestamp" field (`Number` of nanseconds to add to `msecs`, should be 0-10,000) | | [`options.random`] | `Array` of 16 random bytes (0-255) | | [`options.rng`] | Alternative to `options.random`, a `Function` that returns an `Array` of 16 random bytes (0-255) | | [`buffer`] | `Array \| Buffer` If specified, uuid will be written here in byte-form, starting at `offset` | | [`offset` = 0] | `Number` Index to start writing UUID bytes in `buffer` | | _returns_ | UUID `String` if no `buffer` is specified, otherwise returns `buffer` | | _throws_ | `Error` if more than 10M UUIDs/sec are requested | Note: The default [node id](https://tools.ietf.org/html/rfc4122#section-4.1.6) (the last 12 digits in the UUID) is generated once, randomly, on process startup, and then remains unchanged for the duration of the process. Note: `options.random` and `options.rng` are only meaningful on the very first call to `v1()`, where they may be passed to initialize the internal `node` and `clockseq` fields. Example: ```javascript import { v1 as uuidv1 } from 'uuid'; uuidv1(); // ⇨ '2c5ea4c0-4067-11e9-8bad-9b1deb4d3b7d' ``` Example using `options`: ```javascript import { v1 as uuidv1 } from 'uuid'; const v1options = { node: [0x01, 0x23, 0x45, 0x67, 0x89, 0xab], clockseq: 0x1234, msecs: new Date('2011-11-01').getTime(), nsecs: 5678, }; uuidv1(v1options); // ⇨ '710b962e-041c-11e1-9234-0123456789ab' ``` ### uuid.v3(name, namespace[, buffer[, offset]]) Create an RFC version 3 (namespace w/ MD5) UUID API is identical to `v5()`, but uses "v3" instead. &#x26a0;&#xfe0f; Note: Per the RFC, "_If backward compatibility is not an issue, SHA-1 [Version 5] is preferred_." 
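The v3 section above defers to v5 for its API but ships no inline example; here is a brief sketch (the custom namespace value below is an illustrative placeholder, as in the v5 examples further down):

```javascript
import { v3 as uuidv3 } from 'uuid';

// The RFC DNS and URL namespaces are available as v3.DNS and v3.URL, as with v5.
uuidv3('hello.example.com', uuidv3.DNS); // deterministic: the same name and namespace always produce the same UUID

// Or with a custom namespace UUID (illustrative placeholder value).
const MY_NAMESPACE = '1b671a64-40d5-491e-99b0-da01ff1f3341';
uuidv3('Hello, World!', MY_NAMESPACE);
```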
### uuid.v4([options[, buffer[, offset]]]) Create an RFC version 4 (random) UUID | | | | --- | --- | | [`options`] | `Object` with one or more of the following properties: | | [`options.random`] | `Array` of 16 random bytes (0-255) | | [`options.rng`] | Alternative to `options.random`, a `Function` that returns an `Array` of 16 random bytes (0-255) | | [`buffer`] | `Array \| Buffer` If specified, uuid will be written here in byte-form, starting at `offset` | | [`offset` = 0] | `Number` Index to start writing UUID bytes in `buffer` | | _returns_ | UUID `String` if no `buffer` is specified, otherwise returns `buffer` | Example: ```javascript import { v4 as uuidv4 } from 'uuid'; uuidv4(); // ⇨ '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed' ``` Example using predefined `random` values: ```javascript import { v4 as uuidv4 } from 'uuid'; const v4options = { random: [ 0x10, 0x91, 0x56, 0xbe, 0xc4, 0xfb, 0xc1, 0xea, 0x71, 0xb4, 0xef, 0xe1, 0x67, 0x1c, 0x58, 0x36, ], }; uuidv4(v4options); // ⇨ '109156be-c4fb-41ea-b1b4-efe1671c5836' ``` ### uuid.v5(name, namespace[, buffer[, offset]]) Create an RFC version 5 (namespace w/ SHA-1) UUID | | | | --- | --- | | `name` | `String \| Array` | | `namespace` | `String \| Array[16]` Namespace UUID | | [`buffer`] | `Array \| Buffer` If specified, uuid will be written here in byte-form, starting at `offset` | | [`offset` = 0] | `Number` Index to start writing UUID bytes in `buffer` | | _returns_ | UUID `String` if no `buffer` is specified, otherwise returns `buffer` | Note: The RFC `DNS` and `URL` namespaces are available as `v5.DNS` and `v5.URL`. Example with custom namespace: ```javascript import { v5 as uuidv5 } from 'uuid'; // Define a custom namespace. Readers, create your own using something like // https://www.uuidgenerator.net/ const MY_NAMESPACE = '1b671a64-40d5-491e-99b0-da01ff1f3341'; uuidv5('Hello, World!', MY_NAMESPACE); // ⇨ '630eb68f-e0fa-5ecc-887a-7c7a62614681' ``` Example with RFC `URL` namespace: ```javascript import { v5 as uuidv5 } from 'uuid'; uuidv5('https://www.w3.org/', uuidv5.URL); // ⇨ 'c106a26a-21bb-5538-8bf2-57095d1976c1' ``` ### uuid.validate(str) Test a string to see if it is a valid UUID | | | | --------- | --------------------------------------------------- | | `str` | `String` to validate | | _returns_ | `true` if string is a valid UUID, `false` otherwise | Example: ```javascript import { validate as uuidValidate } from 'uuid'; uuidValidate('not a UUID'); // ⇨ false uuidValidate('6ec0bd7f-11c0-43da-975e-2a8ad9ebae0b'); // ⇨ true ``` Using `validate` and `version` together it is possible to do per-version validation, e.g. validate for only v4 UUIds. ```javascript import { version as uuidVersion } from 'uuid'; import { validate as uuidValidate } from 'uuid'; function uuidValidateV4(uuid) { return uuidValidate(uuid) && uuidVersion(uuid) === 4; } const v1Uuid = 'd9428888-122b-11e1-b85c-61cd3cbb3210'; const v4Uuid = '109156be-c4fb-41ea-b1b4-efe1671c5836'; uuidValidateV4(v4Uuid); // ⇨ true uuidValidateV4(v1Uuid); // ⇨ false ``` ### uuid.version(str) Detect RFC version of a UUID | | | | --------- | ---------------------------------------- | | `str` | A valid UUID `String` | | _returns_ | `Number` The RFC version of the UUID | | _throws_ | `TypeError` if `str` is not a valid UUID | Example: ```javascript import { version as uuidVersion } from 'uuid'; uuidVersion('45637ec4-c85f-11ea-87d0-0242ac130003'); // ⇨ 1 uuidVersion('6ec0bd7f-11c0-43da-975e-2a8ad9ebae0b'); // ⇨ 4 ``` ## Command Line UUIDs can be generated from the command line using `uuid`. 
```shell $ uuid ddeb27fb-d9a0-4624-be4d-4615062daed4 ``` The default is to generate version 4 UUIDS, however the other versions are supported. Type `uuid --help` for details: ```shell $ uuid --help Usage: uuid uuid v1 uuid v3 <name> <namespace uuid> uuid v4 uuid v5 <name> <namespace uuid> uuid --help Note: <namespace uuid> may be "URL" or "DNS" to use the corresponding UUIDs defined by RFC4122 ``` ## ECMAScript Modules This library comes with [ECMAScript Modules](https://www.ecma-international.org/ecma-262/6.0/#sec-modules) (ESM) support for Node.js versions that support it ([example](./examples/node-esmodules/)) as well as bundlers like [rollup.js](https://rollupjs.org/guide/en/#tree-shaking) ([example](./examples/browser-rollup/)) and [webpack](https://webpack.js.org/guides/tree-shaking/) ([example](./examples/browser-webpack/)) (targeting both, Node.js and browser environments). ```javascript import { v4 as uuidv4 } from 'uuid'; uuidv4(); // ⇨ '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed' ``` To run the examples you must first create a dist build of this library in the module root: ```shell npm run build ``` ## CDN Builds ### ECMAScript Modules To load this module directly into modern browsers that [support loading ECMAScript Modules](https://caniuse.com/#feat=es6-module) you can make use of [jspm](https://jspm.org/): ```html <script type="module"> import { v4 as uuidv4 } from 'https://jspm.dev/uuid'; console.log(uuidv4()); // ⇨ '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed' </script> ``` ### UMD To load this module directly into older browsers you can use the [UMD (Universal Module Definition)](https://github.com/umdjs/umd) builds from any of the following CDNs: **Using [UNPKG](https://unpkg.com/uuid@latest/dist/umd/)**: ```html <script src="https://unpkg.com/uuid@latest/dist/umd/uuidv4.min.js"></script> ``` **Using [jsDelivr](https://cdn.jsdelivr.net/npm/uuid@latest/dist/umd/)**: ```html <script src="https://cdn.jsdelivr.net/npm/uuid@latest/dist/umd/uuidv4.min.js"></script> ``` **Using [cdnjs](https://cdnjs.com/libraries/uuid)**: ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/uuid/8.1.0/uuidv4.min.js"></script> ``` These CDNs all provide the same [`uuidv4()`](#uuidv4options-buffer-offset) method: ```html <script> uuidv4(); // ⇨ '55af1e37-0734-46d8-b070-a1e42e4fc392' </script> ``` Methods for the other algorithms ([`uuidv1()`](#uuidv1options-buffer-offset), [`uuidv3()`](#uuidv3name-namespace-buffer-offset) and [`uuidv5()`](#uuidv5name-namespace-buffer-offset)) are available from the files `uuidv1.min.js`, `uuidv3.min.js` and `uuidv5.min.js` respectively. ## "getRandomValues() not supported" This error occurs in environments where the standard [`crypto.getRandomValues()`](https://developer.mozilla.org/en-US/docs/Web/API/Crypto/getRandomValues) API is not supported. This issue can be resolved by adding an appropriate polyfill: ### React Native / Expo 1. Install [`react-native-get-random-values`](https://github.com/LinusU/react-native-get-random-values#readme) 1. Import it _before_ `uuid`. Since `uuid` might also appear as a transitive dependency of some other imports it's safest to just import `react-native-get-random-values` as the very first thing in your entry point: ```javascript import 'react-native-get-random-values'; import { v4 as uuidv4 } from 'uuid'; ``` Note: If you are using Expo, you must be using at least `[email protected]` and `[email protected]`. 
### Web Workers / Service Workers (Edge <= 18) [In Edge <= 18, Web Crypto is not supported in Web Workers or Service Workers](https://caniuse.com/#feat=cryptography) and we are not aware of a polyfill (let us know if you find one, please). ## Upgrading From `[email protected]` ### Only Named Exports Supported When Using with Node.js ESM `[email protected]` did not come with native ECMAScript Module (ESM) support for Node.js. Importing it in Node.js ESM consequently imported the CommonJS source with a default export. This library now comes with true Node.js ESM support and only provides named exports. Instead of doing: ```javascript import uuid from 'uuid'; uuid.v4(); ``` you will now have to use the named exports: ```javascript import { v4 as uuidv4 } from 'uuid'; uuidv4(); ``` ### Deep Requires No Longer Supported Deep requires like `require('uuid/v4')` [which have been deprecated in `[email protected]`](#deep-requires-now-deprecated) are no longer supported. ## Upgrading From `[email protected]` "_Wait... what happened to `[email protected]` - `[email protected]`?!?_" In order to avoid confusion with RFC [version 4](#uuidv4options-buffer-offset) and [version 5](#uuidv5name-namespace-buffer-offset) UUIDs, and a possible [version 6](http://gh.peabody.io/uuidv6/), releases 4 thru 6 of this module have been skipped. ### Deep Requires Now Deprecated `[email protected]` encouraged the use of deep requires to minimize the bundle size of browser builds: ```javascript const uuidv4 = require('uuid/v4'); // <== NOW DEPRECATED! uuidv4(); ``` As of `[email protected]` this library now provides ECMAScript modules builds, which allow packagers like Webpack and Rollup to do "tree-shaking" to remove dead code. Instead, use the `import` syntax: ```javascript import { v4 as uuidv4 } from 'uuid'; uuidv4(); ``` ... or for CommonJS: ```javascript const { v4: uuidv4 } = require('uuid'); uuidv4(); ``` ### Default Export Removed `[email protected]` was exporting the Version 4 UUID method as a default export: ```javascript const uuid = require('uuid'); // <== REMOVED! ``` This usage pattern was already discouraged in `[email protected]` and has been removed in `[email protected]`. ---- Markdown generated from [README_js.md](README_js.md) by [![RunMD Logo](http://i.imgur.com/h0FVyzU.png)](https://github.com/broofa/runmd) # isarray `Array#isArray` for older browsers. [![build status](https://secure.travis-ci.org/juliangruber/isarray.svg)](http://travis-ci.org/juliangruber/isarray) [![downloads](https://img.shields.io/npm/dm/isarray.svg)](https://www.npmjs.org/package/isarray) [![browser support](https://ci.testling.com/juliangruber/isarray.png) ](https://ci.testling.com/juliangruber/isarray) ## Usage ```js var isArray = require('isarray'); console.log(isArray([])); // => true console.log(isArray({})); // => false ``` ## Installation With [npm](http://npmjs.org) do ```bash $ npm install isarray ``` Then bundle for the browser with [browserify](https://github.com/substack/browserify). 
With [component](http://component.io) do ```bash $ component install juliangruber/isarray ``` ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # assemblyscript-regex A regex engine for AssemblyScript. [AssemblyScript](https://www.assemblyscript.org/) is a new language, based on TypeScript, that runs on WebAssembly. AssemblyScript has a lightweight standard library, but lacks support for Regular Expression. The project fills that gap! This project exposes an API that mirrors the JavaScript [RegExp](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp) class: ```javascript const regex = new RegExp("fo*", "g"); const str = "table football, foul"; let match: Match | null = regex.exec(str); while (match != null) { // first iteration // match.index = 6 // match.matches[0] = "foo" // second iteration // match.index = 16 // match.matches[0] = "fo" match = regex.exec(str); } ``` ## Project status The initial focus of this implementation has been feature support and functionality over performance. It currently supports a sufficient number of regex features to be considered useful, including most character classes, common assertions, groups, alternations, capturing groups and quantifiers. The next phase of development will focussed on more extensive testing and performance. The project currently has reasonable unit test coverage, focussed on positive and negative test cases on a per-feature basis. It also includes a more exhaustive test suite with test cases borrowed from another regex library. ### Feature support Based on the classfication within the [MDN cheatsheet](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions/Cheatsheet) **Character sets** - [x] . - [x] \d - [x] \D - [x] \w - [x] \W - [x] \s - [x] \S - [x] \t - [x] \r - [x] \n - [x] \v - [x] \f - [ ] [\b] - [ ] \0 - [ ] \cX - [x] \xhh - [x] \uhhhh - [ ] \u{hhhh} or \u{hhhhh} - [x] \ **Assertions** - [x] ^ - [x] $ - [ ] \b - [ ] \B **Other assertions** - [ ] x(?=y) Lookahead assertion - [ ] x(?!y) Negative lookahead assertion - [ ] (?<=y)x Lookbehind assertion - [ ] (?<!y)x Negative lookbehind assertion **Groups and ranges** - [x] x|y - [x] [xyz][a-c] - [x] [^xyz][^a-c] - [x] (x) capturing group - [ ] \n back reference - [ ] (?<Name>x) named capturing group - [x] (?:x) Non-capturing group **Quantifiers** - [x] x\* - [x] x+ - [x] x? - [x] x{n} - [x] x{n,} - [x] x{n,m} - [ ] x\*? / x+? / ... 
**RegExp** - [x] global - [ ] sticky - [x] case insensitive - [x] multiline - [x] dotAll - [ ] unicode ### Development This project is open source, MIT licenced and your contributions are very much welcomed. To get started, check out the repository and install dependencies: ``` $ npm install ``` A few general points about the tools and processes this project uses: - This project uses prettier for code formatting and eslint to provide additional syntactic checks. These are both run on `npm test` and as part of the CI build. - The unit tests are executed using [as-pect](https://github.com/jtenner/as-pect) - a native AssemblyScript test runner - The specification tests are within the `spec` folder. The `npm run test:generate` target transforms these tests into as-pect tests which execute as part of the standard build / test cycle - In order to support improved debugging you can execute this library as TypeScript (rather than WebAssembly), via the `npm run tsrun` target. assemblyscript-json # assemblyscript-json ## Table of contents ### Namespaces - [JSON](modules/json.md) ### Classes - [DecoderState](classes/decoderstate.md) - [JSONDecoder](classes/jsondecoder.md) - [JSONEncoder](classes/jsonencoder.md) - [JSONHandler](classes/jsonhandler.md) - [ThrowingJSONHandler](classes/throwingjsonhandler.md) ## Test Strategy - tests are copied from the [polyfill implementation](https://github.com/tc39/proposal-temporal/tree/main/polyfill/test) - tests should be removed if they relate to features that do not make sense for TS/AS, i.e. tests that validate the shape of an object do not make sense in a language with compile-time type checking - tests that fail because a feature has not been implemented yet should be left as failures. # set-blocking [![Build Status](https://travis-ci.org/yargs/set-blocking.svg)](https://travis-ci.org/yargs/set-blocking) [![NPM version](https://img.shields.io/npm/v/set-blocking.svg)](https://www.npmjs.com/package/set-blocking) [![Coverage Status](https://coveralls.io/repos/yargs/set-blocking/badge.svg?branch=)](https://coveralls.io/r/yargs/set-blocking?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) set blocking `stdio` and `stderr` ensuring that terminal output does not truncate. ```js const setBlocking = require('set-blocking') setBlocking(true) console.log(someLargeStringToOutput) ``` ## Historical Context/Word of Warning This was created as a shim to address the bug discussed in [node #6456](https://github.com/nodejs/node/issues/6456). This bug crops up on newer versions of Node.js (`0.12+`), truncating terminal output. You should be mindful of the side-effects caused by using `set-blocking`: * if your module sets blocking to `true`, it will effect other modules consuming your library. In [yargs](https://github.com/yargs/yargs/blob/master/yargs.js#L653) we only call `setBlocking(true)` once we already know we are about to call `process.exit(code)`. * this patch will not apply to subprocesses spawned with `isTTY = true`, this is the [default `spawn()` behavior](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options). ## License ISC # which Like the unix `which` utility. Finds the first instance of a specified executable in the PATH environment variable. Does not cache the results, so `hash -r` is not needed when the PATH changes. 
## USAGE ```javascript var which = require('which') // async usage which('node', function (er, resolvedPath) { // er is returned if no "node" is found on the PATH // if it is found, then the absolute path to the exec is returned }) // or promise which('node').then(resolvedPath => { ... }).catch(er => { ... not found ... }) // sync usage // throws if not found var resolved = which.sync('node') // if nothrow option is used, returns null if not found resolved = which.sync('node', {nothrow: true}) // Pass options to override the PATH and PATHEXT environment vars. which('node', { path: someOtherPath }, function (er, resolved) { if (er) throw er console.log('found at %j', resolved) }) ``` ## CLI USAGE Same as the BSD `which(1)` binary. ``` usage: which [-as] program ... ``` ## OPTIONS You may pass an options object as the second argument. - `path`: Use instead of the `PATH` environment variable. - `pathExt`: Use instead of the `PATHEXT` environment variable. - `all`: Return all matches, instead of just the first one. Note that this means the function returns an array of strings instead of a single string. <p align="center"> <img width="250" src="https://raw.githubusercontent.com/yargs/yargs/master/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> ![ci](https://github.com/yargs/yargs/workflows/ci/badge.svg) [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments: ``` mocha [spec..] Run tests with Mocha Commands mocha inspect [spec..] Run tests with Mocha [default] mocha init <path> create a client-side Mocha setup at <path> Rules & Behavior --allow-uncaught Allow uncaught errors to propagate [boolean] --async-only, -A Require all tests to use a callback (async) or return a Promise [boolean] ``` * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage ### Simple Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') const argv = yargs(hideBin(process.argv)).argv if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! 
``` ### Complex Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') yargs(hideBin(process.argv)) .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## Supported Platforms ### TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ### Deno As of `v16`, `yargs` supports [Deno](https://github.com/denoland/deno): ```typescript import yargs from 'https://deno.land/x/yargs/deno.ts' import { Arguments } from 'https://deno.land/x/yargs/deno-types.ts' yargs(Deno.args) .command('download <files...>', 'download a list of files', (yargs: any) => { return yargs.positional('files', { describe: 'a list of files to do something with' }) }, (argv: Arguments) => { console.info(argv) }) .strictCommands() .demandCommand(1) .argv ``` ### ESM As of `v16`,`yargs` supports ESM imports: ```js import yargs from 'yargs' import { hideBin } from 'yargs/helpers' yargs(hideBin(process.argv)) .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => { console.info(argv) }) .demandCommand(1) .argv ``` ### Usage in Browser See examples of using yargs in the browser in [docs](/docs/browser.md). ## Community Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). ## Documentation ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Bundling yargs](/docs/bundling.md) * [Contributing](/contributing.md) ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). 
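The Description section above mentions generated help menus and bash-completion shortcuts; here is a small CommonJS sketch of both (the `greet` command and its options are illustrative names, not part of yargs itself):

```javascript
#!/usr/bin/env node
// Illustrative command: "greet" and its options are made-up names, not yargs built-ins.
const yargs = require('yargs/yargs')
const { hideBin } = require('yargs/helpers')

yargs(hideBin(process.argv))
  .command('greet <name>', 'print a greeting', (yargs) => {
    return yargs.positional('name', { describe: 'who to greet', type: 'string' })
  }, (argv) => {
    const message = `Ahoy, ${argv.name}!`
    console.log(argv.loud ? message.toUpperCase() : message)
  })
  .option('loud', { alias: 'l', type: 'boolean', description: 'shout the greeting' })
  .demandCommand(1)
  .completion() // registers a `completion` command that prints a bash completion script
  .help()       // --help prints a menu generated from the definitions above
  .argv
```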
[npm-url]: https://www.npmjs.com/package/yargs
[npm-image]: https://img.shields.io/npm/v/yargs.svg
[standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg
[standard-url]: http://standardjs.com/
[conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg
[conventional-commits-url]: https://conventionalcommits.org/
[slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg
[slack-url]: http://devtoolscommunity.herokuapp.com
[type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs
[coverage-image]: https://img.shields.io/nycrc/yargs/yargs
[coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc

### esutils [![Build Status](https://secure.travis-ci.org/estools/esutils.svg)](http://travis-ci.org/estools/esutils)

esutils ([esutils](http://github.com/estools/esutils)) is a utility box for ECMAScript language tools.

### API

### ast

#### ast.isExpression(node)

Returns true if `node` is an Expression as defined in ECMA262 edition 5.1 section [11](https://es5.github.io/#x11).

#### ast.isStatement(node)

Returns true if `node` is a Statement as defined in ECMA262 edition 5.1 section [12](https://es5.github.io/#x12).

#### ast.isIterationStatement(node)

Returns true if `node` is an IterationStatement as defined in ECMA262 edition 5.1 section [12.6](https://es5.github.io/#x12.6).

#### ast.isSourceElement(node)

Returns true if `node` is a SourceElement as defined in ECMA262 edition 5.1 section [14](https://es5.github.io/#x14).

#### ast.trailingStatement(node)

Returns the trailing `Statement` (`Statement?`) if `node` has one.

```js
if (cond)
    consequent;
```

Given this `IfStatement`, it returns the `consequent;` statement.

#### ast.isProblematicIfStatement(node)

Returns true if `node` is a problematic IfStatement. If `node` is a problematic `IfStatement`, `node` cannot be represented as one-to-one JavaScript code.

```js
{
    type: 'IfStatement',
    consequent: {
        type: 'WithStatement',
        body: {
            type: 'IfStatement',
            consequent: {type: 'EmptyStatement'}
        }
    },
    alternate: {type: 'EmptyStatement'}
}
```

The above node cannot be represented as JavaScript code, since the top level `else` alternate belongs to an inner `IfStatement`.

### code

#### code.isDecimalDigit(code)

Return true if provided code is a decimal digit.

#### code.isHexDigit(code)

Return true if provided code is a hexadecimal digit.

#### code.isOctalDigit(code)

Return true if provided code is an octal digit.

#### code.isWhiteSpace(code)

Return true if provided code is white space. White space characters are formally defined in ECMA262.

#### code.isLineTerminator(code)

Return true if provided code is a line terminator. Line terminator characters are formally defined in ECMA262.

#### code.isIdentifierStart(code)

Return true if provided code can be the first character of an ECMA262 Identifier. They are formally defined in ECMA262.

#### code.isIdentifierPart(code)

Return true if provided code can be the trailing character of an ECMA262 Identifier. They are formally defined in ECMA262.

### keyword

#### keyword.isKeywordES5(id, strict)

Returns `true` if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 sections [7.6.1.1](http://es5.github.io/#x7.6.1.1) and [7.6.1.2](http://es5.github.io/#x7.6.1.2), respectively. If the `strict` flag is truthy, this function additionally checks whether `id` is a Keyword or Future Reserved Word under strict mode.
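Before the remaining `keyword` predicates below, here is a brief usage sketch of the API above (assuming `esutils` is installed; the sample nodes and strings are arbitrary):

```javascript
// Usage sketch for the esutils predicates documented above; inputs are arbitrary samples.
const esutils = require('esutils');

// ast predicates operate on ESTree-style nodes.
esutils.ast.isExpression({ type: 'Identifier', name: 'x' }); // true
esutils.ast.isStatement({ type: 'EmptyStatement' });         // true

// code predicates operate on character codes.
esutils.code.isDecimalDigit('7'.charCodeAt(0));    // true
esutils.code.isIdentifierStart('$'.charCodeAt(0)); // true

// keyword predicates operate on identifier strings; the second argument enables strict mode.
esutils.keyword.isKeywordES5('class', false); // true ('class' is a Future Reserved Word)
esutils.keyword.isKeywordES5('let', false);   // false ('let' is reserved only in strict mode)
esutils.keyword.isKeywordES5('let', true);    // true
```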
#### keyword.isKeywordES6(id, strict) Returns `true` if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 sections [11.6.2.1](http://ecma-international.org/ecma-262/6.0/#sec-keywords) and [11.6.2.2](http://ecma-international.org/ecma-262/6.0/#sec-future-reserved-words), respectively. If the `strict` flag is truthy, this function additionally checks whether `id` is a Keyword or Future Reserved Word under strict mode. #### keyword.isReservedWordES5(id, strict) Returns `true` if provided identifier string is a Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 section [7.6.1](http://es5.github.io/#x7.6.1). If the `strict` flag is truthy, this function additionally checks whether `id` is a Reserved Word under strict mode. #### keyword.isReservedWordES6(id, strict) Returns `true` if provided identifier string is a Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 section [11.6.2](http://ecma-international.org/ecma-262/6.0/#sec-reserved-words). If the `strict` flag is truthy, this function additionally checks whether `id` is a Reserved Word under strict mode. #### keyword.isRestrictedWord(id) Returns `true` if provided identifier string is one of `eval` or `arguments`. They are restricted in strict mode code throughout ECMA262 edition 5.1 and in ECMA262 edition 6 section [12.1.1](http://ecma-international.org/ecma-262/6.0/#sec-identifiers-static-semantics-early-errors). #### keyword.isIdentifierNameES5(id) Return true if provided identifier string is an IdentifierName as specified in ECMA262 edition 5.1 section [7.6](https://es5.github.io/#x7.6). #### keyword.isIdentifierNameES6(id) Return true if provided identifier string is an IdentifierName as specified in ECMA262 edition 6 section [11.6](http://ecma-international.org/ecma-262/6.0/#sec-names-and-keywords). #### keyword.isIdentifierES5(id, strict) Return true if provided identifier string is an Identifier as specified in ECMA262 edition 5.1 section [7.6](https://es5.github.io/#x7.6). If the `strict` flag is truthy, this function additionally checks whether `id` is an Identifier under strict mode. #### keyword.isIdentifierES6(id, strict) Return true if provided identifier string is an Identifier as specified in ECMA262 edition 6 section [12.1](http://ecma-international.org/ecma-262/6.0/#sec-identifiers). If the `strict` flag is truthy, this function additionally checks whether `id` is an Identifier under strict mode. ### License Copyright (C) 2013 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. [![npm version](https://img.shields.io/npm/v/eslint.svg)](https://www.npmjs.com/package/eslint) [![Downloads](https://img.shields.io/npm/dm/eslint.svg)](https://www.npmjs.com/package/eslint) [![Build Status](https://github.com/eslint/eslint/workflows/CI/badge.svg)](https://github.com/eslint/eslint/actions) [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint.svg?type=shield)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint?ref=badge_shield) <br /> [![Open Collective Backers](https://img.shields.io/opencollective/backers/eslint)](https://opencollective.com/eslint) [![Open Collective Sponsors](https://img.shields.io/opencollective/sponsors/eslint)](https://opencollective.com/eslint) [![Follow us on Twitter](https://img.shields.io/twitter/follow/geteslint?label=Follow&style=social)](https://twitter.com/intent/user?screen_name=geteslint) # ESLint [Website](https://eslint.org) | [Configuring](https://eslint.org/docs/user-guide/configuring) | [Rules](https://eslint.org/docs/rules/) | [Contributing](https://eslint.org/docs/developer-guide/contributing) | [Reporting Bugs](https://eslint.org/docs/developer-guide/contributing/reporting-bugs) | [Code of Conduct](https://eslint.org/conduct) | [Twitter](https://twitter.com/geteslint) | [Mailing List](https://groups.google.com/group/eslint) | [Chat Room](https://eslint.org/chat) ESLint is a tool for identifying and reporting on patterns found in ECMAScript/JavaScript code. In many ways, it is similar to JSLint and JSHint with a few exceptions: * ESLint uses [Espree](https://github.com/eslint/espree) for JavaScript parsing. * ESLint uses an AST to evaluate patterns in code. * ESLint is completely pluggable, every single rule is a plugin and you can add more at runtime. ## Table of Contents 1. [Installation and Usage](#installation-and-usage) 2. [Configuration](#configuration) 3. [Code of Conduct](#code-of-conduct) 4. [Filing Issues](#filing-issues) 5. [Frequently Asked Questions](#faq) 6. [Releases](#releases) 7. [Security Policy](#security-policy) 8. [Semantic Versioning Policy](#semantic-versioning-policy) 9. [Stylistic Rule Updates](#stylistic-rule-updates) 10. [License](#license) 11. [Team](#team) 12. [Sponsors](#sponsors) 13. [Technology Sponsors](#technology-sponsors) ## <a name="installation-and-usage"></a>Installation and Usage Prerequisites: [Node.js](https://nodejs.org/) (`^10.12.0`, or `>=12.0.0`) built with SSL support. (If you are using an official Node.js distribution, SSL is always built in.) You can install ESLint using npm: ``` $ npm install eslint --save-dev ``` You should then set up a configuration file: ``` $ ./node_modules/.bin/eslint --init ``` After that, you can run ESLint on any file or directory like this: ``` $ ./node_modules/.bin/eslint yourfile.js ``` ## <a name="configuration"></a>Configuration After running `eslint --init`, you'll have a `.eslintrc` file in your directory. 
In it, you'll see some rules configured like this: ```json { "rules": { "semi": ["error", "always"], "quotes": ["error", "double"] } } ``` The names `"semi"` and `"quotes"` are the names of [rules](https://eslint.org/docs/rules) in ESLint. The first value is the error level of the rule and can be one of these values: * `"off"` or `0` - turn the rule off * `"warn"` or `1` - turn the rule on as a warning (doesn't affect exit code) * `"error"` or `2` - turn the rule on as an error (exit code will be 1) The three error levels allow you fine-grained control over how ESLint applies rules (for more configuration options and details, see the [configuration docs](https://eslint.org/docs/user-guide/configuring)). ## <a name="code-of-conduct"></a>Code of Conduct ESLint adheres to the [JS Foundation Code of Conduct](https://eslint.org/conduct). ## <a name="filing-issues"></a>Filing Issues Before filing an issue, please be sure to read the guidelines for what you're reporting: * [Bug Report](https://eslint.org/docs/developer-guide/contributing/reporting-bugs) * [Propose a New Rule](https://eslint.org/docs/developer-guide/contributing/new-rules) * [Proposing a Rule Change](https://eslint.org/docs/developer-guide/contributing/rule-changes) * [Request a Change](https://eslint.org/docs/developer-guide/contributing/changes) ## <a name="faq"></a>Frequently Asked Questions ### I'm using JSCS, should I migrate to ESLint? Yes. [JSCS has reached end of life](https://eslint.org/blog/2016/07/jscs-end-of-life) and is no longer supported. We have prepared a [migration guide](https://eslint.org/docs/user-guide/migrating-from-jscs) to help you convert your JSCS settings to an ESLint configuration. We are now at or near 100% compatibility with JSCS. If you try ESLint and believe we are not yet compatible with a JSCS rule/configuration, please create an issue (mentioning that it is a JSCS compatibility issue) and we will evaluate it as per our normal process. ### Does Prettier replace ESLint? No, ESLint does both traditional linting (looking for problematic patterns) and style checking (enforcement of conventions). You can use ESLint for everything, or you can combine both using Prettier to format your code and ESLint to catch possible errors. ### Why can't ESLint find my plugins? * Make sure your plugins (and ESLint) are both in your project's `package.json` as devDependencies (or dependencies, if your project uses ESLint at runtime). * Make sure you have run `npm install` and all your dependencies are installed. * Make sure your plugins' peerDependencies have been installed as well. You can use `npm view eslint-plugin-myplugin peerDependencies` to see what peer dependencies `eslint-plugin-myplugin` has. ### Does ESLint support JSX? Yes, ESLint natively supports parsing JSX syntax (this must be enabled in [configuration](https://eslint.org/docs/user-guide/configuring)). Please note that supporting JSX syntax *is not* the same as supporting React. React applies specific semantics to JSX syntax that ESLint doesn't recognize. We recommend using [eslint-plugin-react](https://www.npmjs.com/package/eslint-plugin-react) if you are using React and want React semantics. ### What ECMAScript versions does ESLint support? ESLint has full support for ECMAScript 3, 5 (default), 2015, 2016, 2017, 2018, 2019, and 2020. You can set your desired ECMAScript syntax (and other settings, like global variables or your target environments) through [configuration](https://eslint.org/docs/user-guide/configuring). 
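For instance, here is a minimal `.eslintrc.js` sketch (one of the configuration file formats ESLint accepts) that sets the ECMAScript version, environments, and a couple of the rules shown earlier; the specific values are illustrative, not recommendations:

```javascript
// .eslintrc.js: illustrative sketch, not a recommended configuration.
module.exports = {
  env: {
    browser: true,
    es2020: true,
  },
  parserOptions: {
    ecmaVersion: 2020,            // desired ECMAScript syntax
    sourceType: 'module',         // enable import/export
    ecmaFeatures: { jsx: true },  // JSX parsing (see the JSX question above)
  },
  rules: {
    semi: ['error', 'always'],
    quotes: ['error', 'double'],
  },
};
```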
### What about experimental features? ESLint's parser only officially supports the latest final ECMAScript standard. We will make changes to core rules in order to avoid crashes on stage 3 ECMAScript syntax proposals (as long as they are implemented using the correct experimental ESTree syntax). We may make changes to core rules to better work with language extensions (such as JSX, Flow, and TypeScript) on a case-by-case basis. In other cases (including if rules need to warn on more or fewer cases due to new syntax, rather than just not crashing), we recommend you use other parsers and/or rule plugins. If you are using Babel, you can use the [babel-eslint](https://github.com/babel/babel-eslint) parser and [eslint-plugin-babel](https://github.com/babel/eslint-plugin-babel) to use any option available in Babel. Once a language feature has been adopted into the ECMAScript standard (stage 4 according to the [TC39 process](https://tc39.github.io/process-document/)), we will accept issues and pull requests related to the new feature, subject to our [contributing guidelines](https://eslint.org/docs/developer-guide/contributing). Until then, please use the appropriate parser and plugin(s) for your experimental feature. ### Where to ask for help? Join our [Mailing List](https://groups.google.com/group/eslint) or [Chatroom](https://eslint.org/chat). ### Why doesn't ESLint lock dependency versions? Lock files like `package-lock.json` are helpful for deployed applications. They ensure that dependencies are consistent between environments and across deployments. Packages like `eslint` that get published to the npm registry do not include lock files. `npm install eslint` as a user will respect version constraints in ESLint's `package.json`. ESLint and its dependencies will be included in the user's lock file if one exists, but ESLint's own lock file would not be used. We intentionally don't lock dependency versions so that we have the latest compatible dependency versions in development and CI that our users get when installing ESLint in a project. The Twilio blog has a [deeper dive](https://www.twilio.com/blog/lockfiles-nodejs) to learn more. ## <a name="releases"></a>Releases We have scheduled releases every two weeks on Friday or Saturday. You can follow a [release issue](https://github.com/eslint/eslint/issues?q=is%3Aopen+is%3Aissue+label%3Arelease) for updates about the scheduling of any particular release. ## <a name="security-policy"></a>Security Policy ESLint takes security seriously. We work hard to ensure that ESLint is safe for everyone and that security issues are addressed quickly and responsibly. Read the full [security policy](https://github.com/eslint/.github/blob/master/SECURITY.md). ## <a name="semantic-versioning-policy"></a>Semantic Versioning Policy ESLint follows [semantic versioning](https://semver.org). However, due to the nature of ESLint as a code quality tool, it's not always clear when a minor or major version bump occurs. To help clarify this for everyone, we've defined the following semantic versioning policy for ESLint: * Patch release (intended to not break your lint build) * A bug fix in a rule that results in ESLint reporting fewer linting errors. * A bug fix to the CLI or core (including formatters). * Improvements to documentation. * Non-user-facing changes such as refactoring code, adding, deleting, or modifying tests, and increasing test coverage. * Re-releasing after a failed release (i.e., publishing a release that doesn't work for anyone). 
* Minor release (might break your lint build) * A bug fix in a rule that results in ESLint reporting more linting errors. * A new rule is created. * A new option to an existing rule that does not result in ESLint reporting more linting errors by default. * A new addition to an existing rule to support a newly-added language feature (within the last 12 months) that will result in ESLint reporting more linting errors by default. * An existing rule is deprecated. * A new CLI capability is created. * New capabilities to the public API are added (new classes, new methods, new arguments to existing methods, etc.). * A new formatter is created. * `eslint:recommended` is updated and will result in strictly fewer linting errors (e.g., rule removals). * Major release (likely to break your lint build) * `eslint:recommended` is updated and may result in new linting errors (e.g., rule additions, most rule option updates). * A new option to an existing rule that results in ESLint reporting more linting errors by default. * An existing formatter is removed. * Part of the public API is removed or changed in an incompatible way. The public API includes: * Rule schemas * Configuration schema * Command-line options * Node.js API * Rule, formatter, parser, plugin APIs According to our policy, any minor update may report more linting errors than the previous release (ex: from a bug fix). As such, we recommend using the tilde (`~`) in `package.json` e.g. `"eslint": "~3.1.0"` to guarantee the results of your builds. ## <a name="stylistic-rule-updates"></a>Stylistic Rule Updates Stylistic rules are frozen according to [our policy](https://eslint.org/blog/2020/05/changes-to-rules-policies) on how we evaluate new rules and rule changes. This means: * **Bug fixes**: We will still fix bugs in stylistic rules. * **New ECMAScript features**: We will also make sure stylistic rules are compatible with new ECMAScript features. * **New options**: We will **not** add any new options to stylistic rules unless an option is the only way to fix a bug or support a newly-added ECMAScript feature. ## <a name="license"></a>License [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint.svg?type=large)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Feslint%2Feslint?ref=badge_large) ## <a name="team"></a>Team These folks keep the project moving and are resources for help. <!-- NOTE: This section is autogenerated. Do not manually edit.--> <!--teamstart--> ### Technical Steering Committee (TSC) The people who manage releases, review feature requests, and meet regularly to ensure ESLint is properly maintained. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/nzakas"> <img src="https://github.com/nzakas.png?s=75" width="75" height="75"><br /> Nicholas C. Zakas </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/btmills"> <img src="https://github.com/btmills.png?s=75" width="75" height="75"><br /> Brandon Mills </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/mdjermanovic"> <img src="https://github.com/mdjermanovic.png?s=75" width="75" height="75"><br /> Milos Djermanovic </a> </td></tr></tbody></table> ### Reviewers The people who review and implement new features. 
<table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/mysticatea"> <img src="https://github.com/mysticatea.png?s=75" width="75" height="75"><br /> Toru Nagashima </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/aladdin-add"> <img src="https://github.com/aladdin-add.png?s=75" width="75" height="75"><br /> 薛定谔的猫 </a> </td></tr></tbody></table> ### Committers The people who review and fix bugs and help triage issues. <table><tbody><tr><td align="center" valign="top" width="11%"> <a href="https://github.com/brettz9"> <img src="https://github.com/brettz9.png?s=75" width="75" height="75"><br /> Brett Zamir </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/bmish"> <img src="https://github.com/bmish.png?s=75" width="75" height="75"><br /> Bryan Mishkin </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/g-plane"> <img src="https://github.com/g-plane.png?s=75" width="75" height="75"><br /> Pig Fang </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/anikethsaha"> <img src="https://github.com/anikethsaha.png?s=75" width="75" height="75"><br /> Anix </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/yeonjuan"> <img src="https://github.com/yeonjuan.png?s=75" width="75" height="75"><br /> YeonJuan </a> </td><td align="center" valign="top" width="11%"> <a href="https://github.com/snitin315"> <img src="https://github.com/snitin315.png?s=75" width="75" height="75"><br /> Nitin Kumar </a> </td></tr></tbody></table> <!--teamend--> ## <a name="sponsors"></a>Sponsors The following companies, organizations, and individuals support ESLint's ongoing maintenance and development. [Become a Sponsor](https://opencollective.com/eslint) to get your logo on our README and website. <!-- NOTE: This section is autogenerated. 
Do not manually edit.--> <!--sponsorsstart--> <h3>Platinum Sponsors</h3> <p><a href="https://automattic.com"><img src="https://images.opencollective.com/photomatt/d0ef3e1/logo.png" alt="Automattic" height="undefined"></a></p><h3>Gold Sponsors</h3> <p><a href="https://nx.dev"><img src="https://images.opencollective.com/nx/0efbe42/logo.png" alt="Nx (by Nrwl)" height="96"></a> <a href="https://google.com/chrome"><img src="https://images.opencollective.com/chrome/dc55bd4/logo.png" alt="Chrome's Web Framework & Tools Performance Fund" height="96"></a> <a href="https://www.salesforce.com"><img src="https://images.opencollective.com/salesforce/ca8f997/logo.png" alt="Salesforce" height="96"></a> <a href="https://www.airbnb.com/"><img src="https://images.opencollective.com/airbnb/d327d66/logo.png" alt="Airbnb" height="96"></a> <a href="https://coinbase.com"><img src="https://avatars.githubusercontent.com/u/1885080?v=4" alt="Coinbase" height="96"></a> <a href="https://substack.com/"><img src="https://avatars.githubusercontent.com/u/53023767?v=4" alt="Substack" height="96"></a></p><h3>Silver Sponsors</h3> <p><a href="https://retool.com/"><img src="https://images.opencollective.com/retool/98ea68e/logo.png" alt="Retool" height="64"></a> <a href="https://liftoff.io/"><img src="https://images.opencollective.com/liftoff/5c4fa84/logo.png" alt="Liftoff" height="64"></a></p><h3>Bronze Sponsors</h3> <p><a href="https://www.crosswordsolver.org/anagram-solver/"><img src="https://images.opencollective.com/anagram-solver/2666271/logo.png" alt="Anagram Solver" height="32"></a> <a href="null"><img src="https://images.opencollective.com/bugsnag-stability-monitoring/c2cef36/logo.png" alt="Bugsnag Stability Monitoring" height="32"></a> <a href="https://mixpanel.com"><img src="https://images.opencollective.com/mixpanel/cd682f7/logo.png" alt="Mixpanel" height="32"></a> <a href="https://www.vpsserver.com"><img src="https://images.opencollective.com/vpsservercom/logo.png" alt="VPS Server" height="32"></a> <a href="https://icons8.com"><img src="https://images.opencollective.com/icons8/7fa1641/logo.png" alt="Icons8: free icons, photos, illustrations, and music" height="32"></a> <a href="https://discord.com"><img src="https://images.opencollective.com/discordapp/f9645d9/logo.png" alt="Discord" height="32"></a> <a href="https://themeisle.com"><img src="https://images.opencollective.com/themeisle/d5592fe/logo.png" alt="ThemeIsle" height="32"></a> <a href="https://www.firesticktricks.com"><img src="https://images.opencollective.com/fire-stick-tricks/b8fbe2c/logo.png" alt="Fire Stick Tricks" height="32"></a> <a href="https://www.practiceignition.com"><img src="https://avatars.githubusercontent.com/u/5753491?v=4" alt="Practice Ignition" height="32"></a></p> <!--sponsorsend--> ## <a name="technology-sponsors"></a>Technology Sponsors * Site search ([eslint.org](https://eslint.org)) is sponsored by [Algolia](https://www.algolia.com) * Hosting for ([eslint.org](https://eslint.org)) is sponsored by [Netlify](https://www.netlify.com) * Password management is sponsored by [1Password](https://www.1password.com) ### Esrecurse [![Build Status](https://travis-ci.org/estools/esrecurse.svg?branch=master)](https://travis-ci.org/estools/esrecurse) Esrecurse ([esrecurse](https://github.com/estools/esrecurse)) is [ECMAScript](https://www.ecma-international.org/publications/standards/Ecma-262.htm) recursive traversing functionality. ### Example Usage The following code will output all variables declared at the root of a file. 
```javascript
esrecurse.visit(ast, {
    XXXStatement: function (node) {
        this.visit(node.left);
        // do something...
        this.visit(node.right);
    }
});
```

We can also use a `Visitor` instance.

```javascript
var visitor = new esrecurse.Visitor({
    XXXStatement: function (node) {
        this.visit(node.left);
        // do something...
        this.visit(node.right);
    }
});

visitor.visit(ast);
```

We can inherit from the `Visitor` class easily.

```javascript
class Derived extends esrecurse.Visitor {
    constructor() {
        super(null);
    }

    XXXStatement(node) {
    }
}
```

```javascript
function DerivedVisitor() {
    esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */);
}
util.inherits(DerivedVisitor, esrecurse.Visitor);
DerivedVisitor.prototype.XXXStatement = function (node) {
    this.visit(node.left);
    // do something...
    this.visit(node.right);
};
```

And you can invoke the default visiting operation inside a custom visit operation.

```javascript
function DerivedVisitor() {
    esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */);
}
util.inherits(DerivedVisitor, esrecurse.Visitor);
DerivedVisitor.prototype.XXXStatement = function (node) {
    // do something...
    this.visitChildren(node);
};
```

The `childVisitorKeys` option customizes the behaviour of `this.visitChildren(node)`.

We can use user-defined node types.

```javascript
// This tree contains a user-defined `TestExpression` node.
var tree = {
    type: 'TestExpression',

    // This 'argument' is the property containing the other **node**.
    argument: {
        type: 'Literal',
        value: 20
    },

    // This 'extended' is the property not containing the other **node**.
    extended: true
};
esrecurse.visit(
    ast,
    {
        Literal: function (node) {
            // do something...
        }
    },
    {
        // Extending the existing traversing rules.
        childVisitorKeys: {
            // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ]
            TestExpression: ['argument']
        }
    }
);
```

We can use the `fallback` option as well. If the `fallback` option is `"iteration"`, `esrecurse` will visit all enumerable properties of unknown nodes. Please note that circular references cause a stack overflow. An AST might have circular references in additional properties added for some purpose (e.g. `node.parent`).

```javascript
esrecurse.visit(
    ast,
    {
        Literal: function (node) {
            // do something...
        }
    },
    {
        fallback: 'iteration'
    }
);
```

If the `fallback` option is a function, `esrecurse` calls this function to determine the enumerable properties of unknown nodes. Please note that circular references cause a stack overflow. An AST might have circular references in additional properties added for some purpose (e.g. `node.parent`).

```javascript
esrecurse.visit(
    ast,
    {
        Literal: function (node) {
            // do something...
        }
    },
    {
        fallback: function (node) {
            return Object.keys(node).filter(function(key) {
                return key !== 'argument'
            });
        }
    }
);
```

### License

Copyright (C) 2014 [Yusuke Suzuki](https://github.com/Constellation) (twitter: [@Constellation](https://twitter.com/Constellation)) and other contributors.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # Acorn A tiny, fast JavaScript parser written in JavaScript. ## Community Acorn is open source software released under an [MIT license](https://github.com/acornjs/acorn/blob/master/acorn/LICENSE). You are welcome to [report bugs](https://github.com/acornjs/acorn/issues) or create pull requests on [github](https://github.com/acornjs/acorn). For questions and discussion, please use the [Tern discussion forum](https://discuss.ternjs.net). ## Installation The easiest way to install acorn is from [`npm`](https://www.npmjs.com/): ```sh npm install acorn ``` Alternately, you can download the source and build acorn yourself: ```sh git clone https://github.com/acornjs/acorn.git cd acorn npm install ``` ## Interface **parse**`(input, options)` is the main interface to the library. The `input` parameter is a string, `options` can be undefined or an object setting some of the options listed below. The return value will be an abstract syntax tree object as specified by the [ESTree spec](https://github.com/estree/estree). ```javascript let acorn = require("acorn"); console.log(acorn.parse("1 + 1")); ``` When encountering a syntax error, the parser will raise a `SyntaxError` object with a meaningful message. The error object will have a `pos` property that indicates the string offset at which the error occurred, and a `loc` object that contains a `{line, column}` object referring to that same position. Options can be provided by passing a second argument, which should be an object containing any of these fields: - **ecmaVersion**: Indicates the ECMAScript version to parse. Must be either 3, 5, 6 (2015), 7 (2016), 8 (2017), 9 (2018), 10 (2019) or 11 (2020, partial support). This influences support for strict mode, the set of reserved words, and support for new syntax features. Default is 10. **NOTE**: Only 'stage 4' (finalized) ECMAScript features are being implemented by Acorn. Other proposed new features can be implemented through plugins. - **sourceType**: Indicate the mode the code should be parsed in. Can be either `"script"` or `"module"`. This influences global strict mode and parsing of `import` and `export` declarations. **NOTE**: If set to `"module"`, then static `import` / `export` syntax will be valid, even if `ecmaVersion` is less than 6. - **onInsertedSemicolon**: If given a callback, that callback will be called whenever a missing semicolon is inserted by the parser. The callback will be given the character offset of the point where the semicolon is inserted as argument, and if `locations` is on, also a `{line, column}` object representing this position. - **onTrailingComma**: Like `onInsertedSemicolon`, but for trailing commas. - **allowReserved**: If `false`, using a reserved word will generate an error. 
Defaults to `true` for `ecmaVersion` 3, `false` for higher versions. When given the value `"never"`, reserved words and keywords can also not be used as property names (as in Internet Explorer's old parser).

- **allowReturnOutsideFunction**: By default, a return statement at the top level raises an error. Set this to `true` to accept such code.

- **allowImportExportEverywhere**: By default, `import` and `export` declarations can only appear at a program's top level. Setting this option to `true` allows them anywhere a statement is allowed.

- **allowAwaitOutsideFunction**: By default, `await` expressions can only appear inside `async` functions. Setting this option to `true` allows top-level `await` expressions. They are still not allowed in non-`async` functions, though.

- **allowHashBang**: When this is enabled (off by default), if the code starts with the characters `#!` (as in a shellscript), the first line will be treated as a comment.

- **locations**: When `true`, each node has a `loc` object attached with `start` and `end` subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form. Default is `false`.

- **onToken**: If a function is passed for this option, each found token will be passed in the same format as tokens returned from `tokenizer().getToken()`. If an array is passed, each found token is pushed to it. Note that you are not allowed to call the parser from the callback—that will corrupt its internal state.

- **onComment**: If a function is passed for this option, whenever a comment is encountered the function will be called with the following parameters:

  - `block`: `true` if the comment is a block comment, `false` if it is a line comment.
  - `text`: The content of the comment.
  - `start`: Character offset of the start of the comment.
  - `end`: Character offset of the end of the comment.

  When the `locations` option is on, the `{line, column}` locations of the comment's start and end are passed as two additional parameters.

  If an array is passed for this option, each found comment is pushed to it as an object in Esprima format:

```javascript
{
  "type": "Line" | "Block",
  "value": "comment text",
  "start": Number,
  "end": Number,
  // If `locations` option is on:
  "loc": {
    "start": {line: Number, column: Number},
    "end": {line: Number, column: Number}
  },
  // If `ranges` option is on:
  "range": [Number, Number]
}
```

  Note that you are not allowed to call the parser from the callback—that will corrupt its internal state.

- **ranges**: Nodes have their start and end character offsets recorded in `start` and `end` properties (directly on the node, rather than the `loc` object, which holds line/column data). To also add a [semi-standardized](https://bugzilla.mozilla.org/show_bug.cgi?id=745678) `range` property holding a `[start, end]` array with the same numbers, set the `ranges` option to `true`.

- **program**: It is possible to parse multiple files into a single AST by passing the tree produced by parsing the first file as the `program` option in subsequent parses. This will add the toplevel forms of the parsed file to the "Program" (top) node of an existing parse tree.

- **sourceFile**: When the `locations` option is `true`, you can pass this option to add a `source` attribute in every node's `loc` object. Note that the contents of this option are not examined or processed in any way; you are free to use whatever format you choose.
- **directSourceFile**: Like `sourceFile`, but a `sourceFile` property will be added (regardless of the `location` option) directly to the nodes, rather than the `loc` object. - **preserveParens**: If this option is `true`, parenthesized expressions are represented by (non-standard) `ParenthesizedExpression` nodes that have a single `expression` property containing the expression inside parentheses. **parseExpressionAt**`(input, offset, options)` will parse a single expression in a string, and return its AST. It will not complain if there is more of the string left after the expression. **tokenizer**`(input, options)` returns an object with a `getToken` method that can be called repeatedly to get the next token, a `{start, end, type, value}` object (with added `loc` property when the `locations` option is enabled and `range` property when the `ranges` option is enabled). When the token's type is `tokTypes.eof`, you should stop calling the method, since it will keep returning that same token forever. In ES6 environment, returned result can be used as any other protocol-compliant iterable: ```javascript for (let token of acorn.tokenizer(str)) { // iterate over the tokens } // transform code to array of tokens: var tokens = [...acorn.tokenizer(str)]; ``` **tokTypes** holds an object mapping names to the token type objects that end up in the `type` properties of tokens. **getLineInfo**`(input, offset)` can be used to get a `{line, column}` object for a given program string and offset. ### The `Parser` class Instances of the **`Parser`** class contain all the state and logic that drives a parse. It has static methods `parse`, `parseExpressionAt`, and `tokenizer` that match the top-level functions by the same name. When extending the parser with plugins, you need to call these methods on the extended version of the class. To extend a parser with plugins, you can use its static `extend` method. ```javascript var acorn = require("acorn"); var jsx = require("acorn-jsx"); var JSXParser = acorn.Parser.extend(jsx()); JSXParser.parse("foo(<bar/>)"); ``` The `extend` method takes any number of plugin values, and returns a new `Parser` class that includes the extra parser logic provided by the plugins. ## Command line interface The `bin/acorn` utility can be used to parse a file from the command line. It accepts as arguments its input file and the following options: - `--ecma3|--ecma5|--ecma6|--ecma7|--ecma8|--ecma9|--ecma10`: Sets the ECMAScript version to parse. Default is version 9. - `--module`: Sets the parsing mode to `"module"`. Is set to `"script"` otherwise. - `--locations`: Attaches a "loc" object to each node with "start" and "end" subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form. - `--allow-hash-bang`: If the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment. - `--compact`: No whitespace is used in the AST output. - `--silent`: Do not output the AST, just return the exit status. - `--help`: Print the usage information and quit. The utility spits out the syntax tree as JSON data. 
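To tie the options above to the programmatic interface, here is a small, hypothetical sketch; the input string and the chosen option values are only for illustration:

```javascript
// Minimal sketch: parse a small program while collecting comments and
// location info, then report where each top-level statement starts.
const acorn = require("acorn");

const code = "#!/usr/bin/env node\n// greet\nlet msg = 'hi'; console.log(msg);";
const comments = [];

const ast = acorn.parse(code, {
  ecmaVersion: 11,       // ES2020
  sourceType: "script",
  locations: true,       // attach one-based line / zero-based column info
  allowHashBang: true,   // treat the leading #! line as a comment
  onComment: comments    // push each found comment into this array
});

for (const node of ast.body) {
  const { line, column } = acorn.getLineInfo(code, node.start);
  console.log(`${node.type} at line ${line}, column ${column}`);
}
console.log(`collected ${comments.length} comment(s)`);
```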
## Existing plugins - [`acorn-jsx`](https://github.com/RReverser/acorn-jsx): Parse [Facebook JSX syntax extensions](https://github.com/facebook/jsx) Plugins for ECMAScript proposals: - [`acorn-stage3`](https://github.com/acornjs/acorn-stage3): Parse most stage 3 proposals, bundling: - [`acorn-class-fields`](https://github.com/acornjs/acorn-class-fields): Parse [class fields proposal](https://github.com/tc39/proposal-class-fields) - [`acorn-import-meta`](https://github.com/acornjs/acorn-import-meta): Parse [import.meta proposal](https://github.com/tc39/proposal-import-meta) - [`acorn-private-methods`](https://github.com/acornjs/acorn-private-methods): parse [private methods, getters and setters proposal](https://github.com/tc39/proposal-private-methods)n # [nearley](http://nearley.js.org) ↗️ [![JS.ORG](https://img.shields.io/badge/js.org-nearley-ffb400.svg?style=flat-square)](http://js.org) [![npm version](https://badge.fury.io/js/nearley.svg)](https://badge.fury.io/js/nearley) nearley is a simple, fast and powerful parsing toolkit. It consists of: 1. [A powerful, modular DSL for describing languages](https://nearley.js.org/docs/grammar) 2. [An efficient, lightweight Earley parser](https://nearley.js.org/docs/parser) 3. [Loads of tools, editor plug-ins, and other goodies!](https://nearley.js.org/docs/tooling) nearley is a **streaming** parser with support for catching **errors** gracefully and providing _all_ parsings for **ambiguous** grammars. It is compatible with a variety of **lexers** (we recommend [moo](http://github.com/tjvr/moo)). It comes with tools for creating **tests**, **railroad diagrams** and **fuzzers** from your grammars, and has support for a variety of editors and platforms. It works in both node and the browser. Unlike most other parser generators, nearley can handle *any* grammar you can define in BNF (and more!). In particular, while most existing JS parsers such as PEGjs and Jison choke on certain grammars (e.g. [left recursive ones](http://en.wikipedia.org/wiki/Left_recursion)), nearley handles them easily and efficiently by using the [Earley parsing algorithm](https://en.wikipedia.org/wiki/Earley_parser). nearley is used by a wide variety of projects: - [artificial intelligence](https://github.com/ChalmersGU-AI-course/shrdlite-course-project) and - [computational linguistics](https://wiki.eecs.yorku.ca/course_archive/2014-15/W/6339/useful_handouts) classes at universities; - [file format parsers](https://github.com/raymond-h/node-dmi); - [data-driven markup languages](https://github.com/idyll-lang/idyll-compiler); - [compilers for real-world programming languages](https://github.com/sizigi/lp5562); - and nearley itself! The nearley compiler is bootstrapped. nearley is an npm [staff pick](https://www.npmjs.com/package/npm-collection-staff-picks). ## Documentation Please visit our website https://nearley.js.org to get started! You will find a tutorial, detailed reference documents, and links to several real-world examples to get inspired. ## Contributing Please read [this document](.github/CONTRIBUTING.md) *before* working on nearley. If you are interested in contributing but unsure where to start, take a look at the issues labeled "up for grabs" on the issue tracker, or message a maintainer (@kach or @tjvr on Github). nearley is MIT licensed. A big thanks to Nathan Dinsmore for teaching me how to Earley, Aria Stewart for helping structure nearley into a mature module, and Robin Windels for bootstrapping the grammar. 
Additionally, Jacob Edelman wrote an experimental JavaScript parser with nearley and contributed ideas for EBNF support. Joshua T. Corbin refactored the compiler to be much, much prettier. Bojidar Marinov implemented postprocessors-in-other-languages. Shachar Itzhaky fixed a subtle bug with nullables. ## Citing nearley If you are citing nearley in academic work, please use the following BibTeX entry. ```bibtex @misc{nearley, author = "Kartik Chandra and Tim Radvan", title = "{nearley}: a parsing toolkit for {JavaScript}", year = {2014}, doi = {10.5281/zenodo.3897993}, url = {https://github.com/kach/nearley} } ``` # emoji-regex [![Build status](https://travis-ci.org/mathiasbynens/emoji-regex.svg?branch=master)](https://travis-ci.org/mathiasbynens/emoji-regex) _emoji-regex_ offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard. This repository contains a script that generates this regular expression based on [the data from Unicode v12](https://github.com/mathiasbynens/unicode-12.0.0). Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard. ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install emoji-regex ``` In [Node.js](https://nodejs.org/): ```js const emojiRegex = require('emoji-regex'); // Note: because the regular expression has the global flag set, this module // exports a function that returns the regex rather than exporting the regular // expression itself, to make it impossible to (accidentally) mutate the // original regular expression. const text = ` \u{231A}: ⌚ default emoji presentation character (Emoji_Presentation) \u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji \u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base) \u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier `; const regex = emojiRegex(); let match; while (match = regex.exec(text)) { const emoji = match[0]; console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`); } ``` Console output: ``` Matched sequence ⌚ — code points: 1 Matched sequence ⌚ — code points: 1 Matched sequence ↔️ — code points: 2 Matched sequence ↔️ — code points: 2 Matched sequence 👩 — code points: 1 Matched sequence 👩 — code points: 1 Matched sequence 👩🏿 — code points: 2 Matched sequence 👩🏿 — code points: 2 ``` To match emoji in their textual representation as well (i.e. emoji that are not `Emoji_Presentation` symbols and that aren’t forced to render as emoji by a variation selector), `require` the other regex: ```js const emojiRegex = require('emoji-regex/text.js'); ``` Additionally, in environments which support ES2015 Unicode escapes, you may `require` ES2015-style versions of the regexes: ```js const emojiRegex = require('emoji-regex/es2015/index.js'); const emojiRegexText = require('emoji-regex/es2015/text.js'); ``` ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License _emoji-regex_ is available under the [MIT](https://mths.be/mit) license. 
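As a small, hypothetical extension of the emoji-regex usage shown above, the returned regex can also be used to strip or count emoji; swap in `emoji-regex/text.js` if textual presentations should match as well:

```javascript
// Hypothetical helper built on the usage shown above: count and strip
// emoji from a string. Require 'emoji-regex/text.js' instead to also
// match text-presentation emoji.
const emojiRegex = require('emoji-regex');

function stripEmoji(input) {
  // emojiRegex() returns a fresh RegExp with the global flag set,
  // so each call gets its own lastIndex state.
  return input.replace(emojiRegex(), '');
}

const sample = 'On time ⌚ and heading ↔️ home 👩🏿';
console.log(stripEmoji(sample));                         // emoji removed
console.log((sample.match(emojiRegex()) || []).length);  // number of emoji matched
```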
[![npm version](https://img.shields.io/npm/v/espree.svg)](https://www.npmjs.com/package/espree) [![Build Status](https://travis-ci.org/eslint/espree.svg?branch=master)](https://travis-ci.org/eslint/espree) [![npm downloads](https://img.shields.io/npm/dm/espree.svg)](https://www.npmjs.com/package/espree) [![Bountysource](https://www.bountysource.com/badge/tracker?tracker_id=9348450)](https://www.bountysource.com/trackers/9348450-eslint?utm_source=9348450&utm_medium=shield&utm_campaign=TRACKER_BADGE) # Espree Espree started out as a fork of [Esprima](http://esprima.org) v1.2.2, the last stable published released of Esprima before work on ECMAScript 6 began. Espree is now built on top of [Acorn](https://github.com/ternjs/acorn), which has a modular architecture that allows extension of core functionality. The goal of Espree is to produce output that is similar to Esprima with a similar API so that it can be used in place of Esprima. ## Usage Install: ``` npm i espree ``` And in your Node.js code: ```javascript const espree = require("espree"); const ast = espree.parse(code); ``` ## API ### `parse()` `parse` parses the given code and returns a abstract syntax tree (AST). It takes two parameters. - `code` [string]() - the code which needs to be parsed. - `options (Optional)` [Object]() - read more about this [here](#options). ```javascript const espree = require("espree"); const ast = espree.parse(code, options); ``` **Example :** ```js const ast = espree.parse('let foo = "bar"', { ecmaVersion: 6 }); console.log(ast); ``` <details><summary>Output</summary> <p> ``` Node { type: 'Program', start: 0, end: 15, body: [ Node { type: 'VariableDeclaration', start: 0, end: 15, declarations: [Array], kind: 'let' } ], sourceType: 'script' } ``` </p> </details> ### `tokenize()` `tokenize` returns the tokens of a given code. It takes two parameters. - `code` [string]() - the code which needs to be parsed. - `options (Optional)` [Object]() - read more about this [here](#options). Even if `options` is empty or undefined or `options.tokens` is `false`, it assigns it to `true` in order to get the `tokens` array **Example :** ```js const tokens = espree.tokenize('let foo = "bar"', { ecmaVersion: 6 }); console.log(tokens); ``` <details><summary>Output</summary> <p> ``` Token { type: 'Keyword', value: 'let', start: 0, end: 3 }, Token { type: 'Identifier', value: 'foo', start: 4, end: 7 }, Token { type: 'Punctuator', value: '=', start: 8, end: 9 }, Token { type: 'String', value: '"bar"', start: 10, end: 15 } ``` </p> </details> ### `version` Returns the current `espree` version ### `VisitorKeys` Returns all visitor keys for traversing the AST from [eslint-visitor-keys](https://github.com/eslint/eslint-visitor-keys) ### `latestEcmaVersion` Returns the latest ECMAScript supported by `espree` ### `supportedEcmaVersions` Returns an array of all supported ECMAScript versions ## Options ```js const options = { // attach range information to each node range: false, // attach line/column location information to each node loc: false, // create a top-level comments array containing all comments comment: false, // create a top-level tokens array containing all tokens tokens: false, // Set to 3, 5 (default), 6, 7, 8, 9, 10, 11, or 12 to specify the version of ECMAScript syntax you want to use. // You can also set to 2015 (same as 6), 2016 (same as 7), 2017 (same as 8), 2018 (same as 9), 2019 (same as 10), 2020 (same as 11), or 2021 (same as 12) to use the year-based naming. 
ecmaVersion: 5, // specify which type of script you're parsing ("script" or "module") sourceType: "script", // specify additional language features ecmaFeatures: { // enable JSX parsing jsx: false, // enable return in global scope globalReturn: false, // enable implied strict mode (if ecmaVersion >= 5) impliedStrict: false } } ``` ## Esprima Compatibility Going Forward The primary goal is to produce the exact same AST structure and tokens as Esprima, and that takes precedence over anything else. (The AST structure being the [ESTree](https://github.com/estree/estree) API with JSX extensions.) Separate from that, Espree may deviate from what Esprima outputs in terms of where and how comments are attached, as well as what additional information is available on AST nodes. That is to say, Espree may add more things to the AST nodes than Esprima does but the overall AST structure produced will be the same. Espree may also deviate from Esprima in the interface it exposes. ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/espree/issues). Espree is licensed under a permissive BSD 2-clause license. ## Security Policy We work hard to ensure that Espree is safe for everyone and that security issues are addressed quickly and responsibly. Read the full [security policy](https://github.com/eslint/.github/blob/master/SECURITY.md). ## Build Commands * `npm test` - run all linting and tests * `npm run lint` - run all linting * `npm run browserify` - creates a version of Espree that is usable in a browser ## Differences from Espree 2.x * The `tokenize()` method does not use `ecmaFeatures`. Any string will be tokenized completely based on ECMAScript 6 semantics. * Trailing whitespace no longer is counted as part of a node. * `let` and `const` declarations are no longer parsed by default. You must opt-in by using an `ecmaVersion` newer than `5` or setting `sourceType` to `module`. * The `esparse` and `esvalidate` binary scripts have been removed. * There is no `tolerant` option. We will investigate adding this back in the future. ## Known Incompatibilities In an effort to help those wanting to transition from other parsers to Espree, the following is a list of noteworthy incompatibilities with other parsers. These are known differences that we do not intend to change. ### Esprima 1.2.2 * Esprima counts trailing whitespace as part of each AST node while Espree does not. In Espree, the end of a node is where the last token occurs. * Espree does not parse `let` and `const` declarations by default. * Error messages returned for parsing errors are different. * There are two addition properties on every node and token: `start` and `end`. These represent the same data as `range` and are used internally by Acorn. ### Esprima 2.x * Esprima 2.x uses a different comment attachment algorithm that results in some comments being added in different places than Espree. The algorithm Espree uses is the same one used in Esprima 1.2.2. ## Frequently Asked Questions ### Why another parser [ESLint](http://eslint.org) had been relying on Esprima as its parser from the beginning. While that was fine when the JavaScript language was evolving slowly, the pace of development increased dramatically and Esprima had fallen behind. 
ESLint, like many other tools reliant on Esprima, has been stuck in using new JavaScript language features until Esprima updates, and that caused our users frustration. We decided the only way for us to move forward was to create our own parser, bringing us inline with JSHint and JSLint, and allowing us to keep implementing new features as we need them. We chose to fork Esprima instead of starting from scratch in order to move as quickly as possible with a compatible API. With Espree 2.0.0, we are no longer a fork of Esprima but rather a translation layer between Acorn and Esprima syntax. This allows us to put work back into a community-supported parser (Acorn) that is continuing to grow and evolve while maintaining an Esprima-compatible parser for those utilities still built on Esprima. ### Have you tried working with Esprima? Yes. Since the start of ESLint, we've regularly filed bugs and feature requests with Esprima and will continue to do so. However, there are some different philosophies around how the projects work that need to be worked through. The initial goal was to have Espree track Esprima and eventually merge the two back together, but we ultimately decided that building on top of Acorn was a better choice due to Acorn's plugin support. ### Why don't you just use Acorn? Acorn is a great JavaScript parser that produces an AST that is compatible with Esprima. Unfortunately, ESLint relies on more than just the AST to do its job. It relies on Esprima's tokens and comment attachment features to get a complete picture of the source code. We investigated switching to Acorn, but the inconsistencies between Esprima and Acorn created too much work for a project like ESLint. We are building on top of Acorn, however, so that we can contribute back and help make Acorn even better. ### What ECMAScript features do you support? Espree supports all ECMAScript 2020 features and partially supports ECMAScript 2021 features. Because ECMAScript 2021 is still under development, we are implementing features as they are finalized. Currently, Espree supports: * [Logical Assignment Operators](https://github.com/tc39/proposal-logical-assignment) * [Numeric Separators](https://github.com/tc39/proposal-numeric-separator) See [finished-proposals.md](https://github.com/tc39/proposals/blob/master/finished-proposals.md) to know what features are finalized. ### How do you determine which experimental features to support? In general, we do not support experimental JavaScript features. We may make exceptions from time to time depending on the maturity of the features. # cliui [![Build Status](https://travis-ci.org/yargs/cliui.svg)](https://travis-ci.org/yargs/cliui) [![Coverage Status](https://coveralls.io/repos/yargs/cliui/badge.svg?branch=)](https://coveralls.io/r/yargs/cliui?branch=) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) easily create complex multi-column command-line-interfaces. ## Example ```js var ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 2, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." 
+ chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. # ts-mixer [version-badge]: https://badgen.net/npm/v/ts-mixer [version-link]: https://npmjs.com/package/ts-mixer [build-badge]: https://img.shields.io/github/workflow/status/tannerntannern/ts-mixer/ts-mixer%20CI [build-link]: https://github.com/tannerntannern/ts-mixer/actions [ts-versions]: https://badgen.net/badge/icon/3.8,3.9,4.0,4.1,4.2?icon=typescript&label&list=| [node-versions]: https://badgen.net/badge/node/10%2C12%2C14/blue/?list=| [![npm version][version-badge]][version-link] [![github actions][build-badge]][build-link] [![TS Versions][ts-versions]][build-link] [![Node.js Versions][node-versions]][build-link] [![Minified Size](https://badgen.net/bundlephobia/min/ts-mixer)](https://bundlephobia.com/result?p=ts-mixer) [![Conventional Commits](https://badgen.net/badge/conventional%20commits/1.0.0/yellow)](https://conventionalcommits.org) ## Overview `ts-mixer` brings mixins to TypeScript. "Mixins" to `ts-mixer` are just classes, so you already know how to write them, and you can probably mix classes from your favorite library without trouble. The mixin problem is more nuanced than it appears. I've seen countless code snippets that work for certain situations, but fail in others. `ts-mixer` tries to take the best from all these solutions while accounting for the situations you might not have considered. 
[Quick start guide](#quick-start) ### Features * mixes plain classes * mixes classes that extend other classes * mixes classes that were mixed with `ts-mixer` * supports static properties * supports protected/private properties (the popular function-that-returns-a-class solution does not) * mixes abstract classes (with caveats [[1](#caveats)]) * mixes generic classes (with caveats [[2](#caveats)]) * supports class, method, and property decorators (with caveats [[3, 6](#caveats)]) * mostly supports the complexity presented by constructor functions (with caveats [[4](#caveats)]) * comes with an `instanceof`-like replacement (with caveats [[5, 6](#caveats)]) * [multiple mixing strategies](#settings) (ES6 proxies vs hard copy) ### Caveats 1. Mixing abstract classes requires a bit of a hack that may break in future versions of TypeScript. See [mixing abstract classes](#mixing-abstract-classes) below. 2. Mixing generic classes requires a more cumbersome notation, but it's still possible. See [mixing generic classes](#mixing-generic-classes) below. 3. Using decorators in mixed classes also requires a more cumbersome notation. See [mixing with decorators](#mixing-with-decorators) below. 4. ES6 made it impossible to use `.apply(...)` on class constructors (or any means of calling them without `new`), which makes it impossible for `ts-mixer` to pass the proper `this` to your constructors. This may or may not be an issue for your code, but there are options to work around it. See [dealing with constructors](#dealing-with-constructors) below. 5. `ts-mixer` does not support `instanceof` for mixins, but it does offer a replacement. See the [hasMixin function](#hasmixin) for more details. 6. Certain features (specifically, `@decorator` and `hasMixin`) make use of ES6 `Map`s, which means you must either use ES6+ or polyfill `Map` to use them. If you don't need these features, you should be fine without. ## Quick Start ### Installation ``` $ npm install ts-mixer ``` or if you prefer [Yarn](https://yarnpkg.com): ``` $ yarn add ts-mixer ``` ### Basic Example ```typescript import { Mixin } from 'ts-mixer'; class Foo { protected makeFoo() { return 'foo'; } } class Bar { protected makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { public makeFooBar() { return this.makeFoo() + this.makeBar(); } } const fooBar = new FooBar(); console.log(fooBar.makeFooBar()); // "foobar" ``` ## Special Cases ### Mixing Abstract Classes Abstract classes, by definition, cannot be constructed, which means they cannot take on the type, `new(...args) => any`, and by extension, are incompatible with `ts-mixer`. BUT, you can "trick" TypeScript into giving you all the benefits of an abstract class without making it technically abstract. The trick is just some strategic `// @ts-ignore`'s: ```typescript import { Mixin } from 'ts-mixer'; // note that Foo is not marked as an abstract class class Foo { // @ts-ignore: "Abstract methods can only appear within an abstract class" public abstract makeFoo(): string; } class Bar { public makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { // we still get all the benefits of abstract classes here, because TypeScript // will still complain if this method isn't implemented public makeFoo() { return 'foo'; } } ``` Do note that while this does work quite well, it is a bit of a hack and I can't promise that it will continue to work in future TypeScript versions. 
### Mixing Generic Classes Frustratingly, it is _impossible_ for generic parameters to be referenced in base class expressions. No matter what, you will eventually run into `Base class expressions cannot reference class type parameters.` The way to get around this is to leverage [declaration merging](https://www.typescriptlang.org/docs/handbook/declaration-merging.html), and a slightly different mixing function from ts-mixer: `mix`. It works exactly like `Mixin`, except it's a decorator, which means it doesn't affect the type information of the class being decorated. See it in action below: ```typescript import { mix } from 'ts-mixer'; class Foo<T> { public fooMethod(input: T): T { return input; } } class Bar<T> { public barMethod(input: T): T { return input; } } interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { } @mix(Foo, Bar) class FooBar<T1, T2> { public fooBarMethod(input1: T1, input2: T2) { return [this.fooMethod(input1), this.barMethod(input2)]; } } ``` Key takeaways from this example: * `interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { }` makes sure `FooBar` has the typing we want, thanks to declaration merging * `@mix(Foo, Bar)` wires things up "on the JavaScript side", since the interface declaration has nothing to do with runtime behavior. * The reason we have to use the `mix` decorator is that the typing produced by `Mixin(Foo, Bar)` would conflict with the typing of the interface. `mix` has no effect "on the TypeScript side," thus avoiding type conflicts. ### Mixing with Decorators Popular libraries such as [class-validator](https://github.com/typestack/class-validator) and [TypeORM](https://github.com/typeorm/typeorm) use decorators to add functionality. Unfortunately, `ts-mixer` has no way of knowing what these libraries do with the decorators behind the scenes. So if you want these decorators to be "inherited" with classes you plan to mix, you first have to wrap them with a special `decorate` function exported by `ts-mixer`. Here's an example using `class-validator`: ```typescript import { IsBoolean, IsIn, validate } from 'class-validator'; import { Mixin, decorate } from 'ts-mixer'; class Disposable { @decorate(IsBoolean()) // instead of @IsBoolean() isDisposed: boolean = false; } class Statusable { @decorate(IsIn(['red', 'green'])) // instead of @IsIn(['red', 'green']) status: string = 'green'; } class ExtendedObject extends Mixin(Disposable, Statusable) {} const extendedObject = new ExtendedObject(); extendedObject.status = 'blue'; validate(extendedObject).then(errors => { console.log(errors); }); ``` ### Dealing with Constructors As mentioned in the [caveats section](#caveats), ES6 disallowed calling constructor functions without `new`. This means that the only way for `ts-mixer` to mix instance properties is to instantiate each base class separately, then copy the instance properties into a common object. The consequence of this is that constructors mixed by `ts-mixer` will _not_ receive the proper `this`. **This very well may not be an issue for you!** It only means that your constructors need to be "mostly pure" in terms of how they handle `this`. Specifically, your constructors cannot produce [side effects](https://en.wikipedia.org/wiki/Side_effect_%28computer_science%29) involving `this`, _other than adding properties to `this`_ (the most common side effect in JavaScript constructors). 
If you simply cannot eliminate `this` side effects from your constructor, there is a workaround available: `ts-mixer` will automatically forward constructor parameters to a predesignated init function (`settings.initFunction`) if it's present on the class. Unlike constructors, functions can be called with an arbitrary `this`, so this predesignated init function _will_ have the proper `this`. Here's a basic example: ```typescript import { Mixin, settings } from 'ts-mixer'; settings.initFunction = 'init'; class Person { public static allPeople: Set<Person> = new Set(); protected init() { Person.allPeople.add(this); } } type PartyAffiliation = 'democrat' | 'republican'; class PoliticalParticipant { public static democrats: Set<PoliticalParticipant> = new Set(); public static republicans: Set<PoliticalParticipant> = new Set(); public party: PartyAffiliation; // note that these same args will also be passed to init function public constructor(party: PartyAffiliation) { this.party = party; } protected init(party: PartyAffiliation) { if (party === 'democrat') PoliticalParticipant.democrats.add(this); else PoliticalParticipant.republicans.add(this); } } class Voter extends Mixin(Person, PoliticalParticipant) {} const v1 = new Voter('democrat'); const v2 = new Voter('democrat'); const v3 = new Voter('republican'); const v4 = new Voter('republican'); ``` Note the above `.add(this)` statements. These would not work as expected if they were placed in the constructor instead, since `this` is not the same between the constructor and `init`, as explained above. ## Other Features ### hasMixin As mentioned above, `ts-mixer` does not support `instanceof` for mixins. While it is possible to implement [custom `instanceof` behavior](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/hasInstance), this library does not do so because it would require modifying the source classes, which is deliberately avoided. You can fill this missing functionality with `hasMixin(instance, mixinClass)` instead. See the below example: ```typescript import { Mixin, hasMixin } from 'ts-mixer'; class Foo {} class Bar {} class FooBar extends Mixin(Foo, Bar) {} const instance = new FooBar(); // doesn't work with instanceof... console.log(instance instanceof FooBar) // true console.log(instance instanceof Foo) // false console.log(instance instanceof Bar) // false // but everything works nicely with hasMixin! console.log(hasMixin(instance, FooBar)) // true console.log(hasMixin(instance, Foo)) // true console.log(hasMixin(instance, Bar)) // true ``` `hasMixin(instance, mixinClass)` will work anywhere that `instance instanceof mixinClass` works. Additionally, like `instanceof`, you get the same [type narrowing benefits](https://www.typescriptlang.org/docs/handbook/advanced-types.html#instanceof-type-guards): ```typescript if (hasMixin(instance, Foo)) { // inferred type of instance is "Foo" } if (hasMixin(instance, Bar)) { // inferred type of instance of "Bar" } ``` ## Settings ts-mixer has multiple strategies for mixing classes which can be configured by modifying `settings` from ts-mixer. For example: ```typescript import { settings, Mixin } from 'ts-mixer'; settings.prototypeStrategy = 'proxy'; // then use `Mixin` as normal... ``` ### `settings.prototypeStrategy` * Determines how ts-mixer will mix class prototypes together * Possible values: - `'copy'` (default) - Copies all methods from the classes being mixed into a new prototype object. 
(This will include all methods up the prototype chains as well.) This is the default for ES5 compatibility, but it has the downside of stale references. For example, if you mix `Foo` and `Bar` to make `FooBar`, then redefine a method on `Foo`, `FooBar` will not have the latest methods from `Foo`. If this is not a concern for you, `'copy'` is the best value for this setting. - `'proxy'` - Uses an ES6 Proxy to "soft mix" prototypes. Unlike `'copy'`, updates to the base classes _will_ be reflected in the mixed class, which may be desirable. The downside is that method access is not as performant, nor is it ES5 compatible. ### `settings.staticsStrategy` * Determines how static properties are inherited * Possible values: - `'copy'` (default) - Simply copies all properties (minus `prototype`) from the base classes/constructor functions onto the mixed class. Like `settings.prototypeStrategy = 'copy'`, this strategy also suffers from stale references, but shouldn't be a concern if you don't redefine static methods after mixing. - `'proxy'` - Similar to `settings.prototypeStrategy`, proxy's static method access to base classes. Has the same benefits/downsides. ### `settings.initFunction` * If set, `ts-mixer` will automatically call the function with this name upon construction * Possible values: - `null` (default) - disables the behavior - a string - function name to call upon construction * Read more about why you would want this in [dealing with constructors](#dealing-with-constructors) ### `settings.decoratorInheritance` * Determines how decorators are inherited from classes passed to `Mixin(...)` * Possible values: - `'deep'` (default) - Deeply inherits decorators from all given classes and their ancestors - `'direct'` - Only inherits decorators defined directly on the given classes - `'none'` - Skips decorator inheritance # Author Tanner Nielsen <[email protected]> * Website - [tannernielsen.com](http://tannernielsen.com) * Github - [tannerntannern](https://github.com/tannerntannern) # y18n [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js const __ = require('y18n')().__; console.log(__('my awesome string %s', 'foo')); ``` output: `my awesome string foo` _using tagged template literals_ ```js const __ = require('y18n')().__; const str = 'foo'; console.log(__`my awesome string ${str}`); ``` output: `my awesome string foo` _pluralization support:_ ```js const __n = require('y18n')().__n; console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')); ``` output: `2 fishes foo` ## Deno Example As of `v5` `y18n` supports [Deno](https://github.com/denoland/deno): ```typescript import y18n from "https://deno.land/x/y18n/deno.ts"; const __ = y18n({ locale: 'pirate', directory: './test/locales' }).__ console.info(__`Hi, ${'Ben'} ${'Coe'}!`) ``` You will need to run with `--allow-read` to load alternative locales. ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. 
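A minimal sketch of that behavior, assuming a made-up `pirate` locale and translation text (neither ships with y18n):

```javascript
// Minimal sketch of the locale files in action. The 'pirate' locale and
// the translation mentioned below are illustrative only.
const y18n = require('y18n')({
  locale: 'pirate',
  directory: './locales' // where the JSON language files live
});
const __ = y18n.__;

// On first run, "Hello %s" is observed and added to ./locales/pirate.json
// automatically (updateFiles defaults to true).
console.log(__('Hello %s', 'Captain'));

// After editing ./locales/pirate.json to map "Hello %s" to "Ahoy %s",
// the same call prints "Ahoy Captain" instead.
```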
## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## License ISC [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard # flat-cache > A stupidly simple key/value storage using files to persist the data [![NPM Version](http://img.shields.io/npm/v/flat-cache.svg?style=flat)](https://npmjs.org/package/flat-cache) [![Build Status](https://api.travis-ci.org/royriojas/flat-cache.svg?branch=master)](https://travis-ci.org/royriojas/flat-cache) ## install ```bash npm i --save flat-cache ``` ## Usage ```js var flatCache = require('flat-cache') // loads the cache, if one does not exists for the given // Id a new one will be prepared to be created var cache = flatCache.load('cacheId'); // sets a key on the cache cache.setKey('key', { foo: 'var' }); // get a key from the cache cache.getKey('key') // { foo: 'var' } // fetch the entire persisted object cache.all() // { 'key': { foo: 'var' } } // remove a key cache.removeKey('key'); // removes a key from the cache // save it to disk cache.save(); // very important, if you don't save no changes will be persisted. // cache.save( true /* noPrune */) // can be used to prevent the removal of non visited keys // loads the cache from a given directory, if one does // not exists for the given Id a new one will be prepared to be created var cache = flatCache.load('cacheId', path.resolve('./path/to/folder')); // The following methods are useful to clear the cache // delete a given cache flatCache.clearCacheById('cacheId') // removes the cacheId document if one exists. // delete all cache flatCache.clearAll(); // remove the cache directory ``` ## Motivation for this module I needed a super simple and dumb **in-memory cache** with optional disk persistance in order to make a script that will beutify files with `esformatter` only execute on the files that were changed since the last run. To make that possible we need to store the `fileSize` and `modificationTime` of the files. 
So a simple `key/value` storage was needed and Bam! this module was born.

## Important notes

- If no directory is specified when the `load` method is called, a folder named `.cache` will be created inside the module directory when `cache.save` is called. If you're committing your `node_modules` to any vcs, you might want to ignore the default `.cache` folder, or specify a custom directory.
- The values set on the keys of the cache should be `stringify-able` ones, meaning no circular references.
- All the changes to the cache state are done in memory.
- I could have used a timer or `Object.observe` to deliver the changes to disk, but I wanted to keep this module intentionally dumb and simple.
- Non-visited keys are removed when `cache.save()` is called. If this is not desired, you can pass `true` to the save call like: `cache.save( true /* noPrune */ )`.

## License

MIT

## Changelog

[changelog](./changelog.md)

# hasurl

[![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url]

> Determine whether Node.js' native [WHATWG `URL`](https://nodejs.org/api/url.html#url_the_whatwg_url_api) implementation is available.

## Installation

[Node.js](http://nodejs.org/) `>= 4` is required. To install, type this at the command line:

```shell
npm install hasurl
```

## Usage

```js
const hasURL = require('hasurl');

if (hasURL()) {
  // supported
} else {
  // fallback
}
```

[npm-image]: https://img.shields.io/npm/v/hasurl.svg
[npm-url]: https://npmjs.org/package/hasurl
[travis-image]: https://img.shields.io/travis/stevenvachon/hasurl.svg
[travis-url]: https://travis-ci.org/stevenvachon/hasurl

[Build]: http://img.shields.io/travis/litejs/natural-compare-lite.png
[Coverage]: http://img.shields.io/coveralls/litejs/natural-compare-lite.png
[1]: https://travis-ci.org/litejs/natural-compare-lite
[2]: https://coveralls.io/r/litejs/natural-compare-lite
[npm package]: https://npmjs.org/package/natural-compare-lite
[GitHub repo]: https://github.com/litejs/natural-compare-lite

@version    1.4.0
@date       2015-10-26
@stability  3 - Stable

Natural Compare &ndash; [![Build][]][1] [![Coverage][]][2]
===============

Compare strings containing a mix of letters and numbers in the way a human being would in sort order. This is described as a "natural ordering".

```text
Standard sorting:   Natural order sorting:
  img1.png            img1.png
  img10.png           img2.png
  img12.png           img10.png
  img2.png            img12.png
```

`String.naturalCompare` returns a number indicating whether a reference string comes before, after, or is the same as the given string in sort order. Use it with the built-in `sort()` function.
### Installation - In browser ```html <script src=min.natural-compare.js></script> ``` - In node.js: `npm install natural-compare-lite` ```javascript require("natural-compare-lite") ``` ### Usage ```javascript // Simple case sensitive example var a = ["z1.doc", "z10.doc", "z17.doc", "z2.doc", "z23.doc", "z3.doc"]; a.sort(String.naturalCompare); // ["z1.doc", "z2.doc", "z3.doc", "z10.doc", "z17.doc", "z23.doc"] // Use wrapper function for case insensitivity a.sort(function(a, b){ return String.naturalCompare(a.toLowerCase(), b.toLowerCase()); }) // In most cases we want to sort an array of objects var a = [ {"street":"350 5th Ave", "room":"A-1021"} , {"street":"350 5th Ave", "room":"A-21046-b"} ]; // sort by street, then by room a.sort(function(a, b){ return String.naturalCompare(a.street, b.street) || String.naturalCompare(a.room, b.room); }) // When text transformation is needed (eg toLowerCase()), // it is best for performance to keep // transformed key in that object. // There are no need to do text transformation // on each comparision when sorting. var a = [ {"make":"Audi", "model":"A6"} , {"make":"Kia", "model":"Rio"} ]; // sort by make, then by model a.map(function(car){ car.sort_key = (car.make + " " + car.model).toLowerCase(); }) a.sort(function(a, b){ return String.naturalCompare(a.sort_key, b.sort_key); }) ``` - Works well with dates in ISO format eg "Rev 2012-07-26.doc". ### Custom alphabet It is possible to configure a custom alphabet to achieve a desired order. ```javascript // Estonian alphabet String.alphabet = "ABDEFGHIJKLMNOPRSŠZŽTUVÕÄÖÜXYabdefghijklmnoprsšzžtuvõäöüxy" ["t", "z", "x", "õ"].sort(String.naturalCompare) // ["z", "t", "õ", "x"] // Russian alphabet String.alphabet = "АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдеёжзийклмнопрстуфхцчшщъыьэюя" ["Ё", "А", "Б"].sort(String.naturalCompare) // ["А", "Б", "Ё"] ``` External links -------------- - [GitHub repo][https://github.com/litejs/natural-compare-lite] - [jsperf test](http://jsperf.com/natural-sort-2/12) Licence ------- Copyright (c) 2012-2015 Lauri Rooden &lt;[email protected]&gt; [The MIT License](http://lauri.rooden.ee/mit-license.txt) # y18n [![Build Status][travis-image]][travis-url] [![Coverage Status][coveralls-image]][coveralls-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js var __ = require('y18n').__ console.log(__('my awesome string %s', 'foo')) ``` output: `my awesome string foo` _using tagged template literals_ ```js var __ = require('y18n').__ var str = 'foo' console.log(__`my awesome string ${str}`) ``` output: `my awesome string foo` _pluralization support:_ ```js var __n = require('y18n').__n console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')) ``` output: `2 fishes foo` ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. 
* `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## License ISC [travis-url]: https://travis-ci.org/yargs/y18n [travis-image]: https://img.shields.io/travis/yargs/y18n.svg [coveralls-url]: https://coveralls.io/github/yargs/y18n [coveralls-image]: https://img.shields.io/coveralls/yargs/y18n.svg [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard <a name="table"></a> # Table > Produces a string that represents array data in a text table. [![Travis build status](http://img.shields.io/travis/gajus/table/master.svg?style=flat-square)](https://travis-ci.org/gajus/table) [![Coveralls](https://img.shields.io/coveralls/gajus/table.svg?style=flat-square)](https://coveralls.io/github/gajus/table) [![NPM version](http://img.shields.io/npm/v/table.svg?style=flat-square)](https://www.npmjs.org/package/table) [![Canonical Code Style](https://img.shields.io/badge/code%20style-canonical-blue.svg?style=flat-square)](https://github.com/gajus/canonical) [![Twitter Follow](https://img.shields.io/twitter/follow/kuizinas.svg?style=social&label=Follow)](https://twitter.com/kuizinas) * [Table](#table) * [Features](#table-features) * [Install](#table-install) * [Usage](#table-usage) * [API](#table-api) * [table](#table-api-table-1) * [createStream](#table-api-createstream) * [getBorderCharacters](#table-api-getbordercharacters) ![Demo of table displaying a list of missions to the Moon.](./.README/demo.png) <a name="table-features"></a> ## Features * Works with strings containing [fullwidth](https://en.wikipedia.org/wiki/Halfwidth_and_fullwidth_forms) characters. * Works with strings containing [ANSI escape codes](https://en.wikipedia.org/wiki/ANSI_escape_code). * Configurable border characters. * Configurable content alignment per column. * Configurable content padding per column. * Configurable column width. * Text wrapping. <a name="table-install"></a> ## Install ```bash npm install table ``` [![Buy Me A Coffee](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/gajus) [![Become a Patron](https://c5.patreon.com/external/logo/become_a_patron_button.png)](https://www.patreon.com/gajus) <a name="table-usage"></a> ## Usage ```js import { table } from 'table'; // Using commonjs? 
// const { table } = require('table'); const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; console.log(table(data)); ``` ``` ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════╧════╝ ``` <a name="table-api"></a> ## API <a name="table-api-table-1"></a> ### table Returns the string in the table format **Parameters:** - **_data_:** The data to display - Type: `any[][]` - Required: `true` - **_config_:** Table configuration - Type: `object` - Required: `false` <a name="table-api-table-1-config-border"></a> ##### config.border Type: `{ [type: string]: string }`\ Default: `honeywell` [template](#getbordercharacters) Custom borders. The keys are any of: - `topLeft`, `topRight`, `topBody`,`topJoin` - `bottomLeft`, `bottomRight`, `bottomBody`, `bottomJoin` - `joinLeft`, `joinRight`, `joinBody`, `joinJoin` - `bodyLeft`, `bodyRight`, `bodyJoin` - `headerJoin` ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { border: { topBody: `─`, topJoin: `┬`, topLeft: `┌`, topRight: `┐`, bottomBody: `─`, bottomJoin: `┴`, bottomLeft: `└`, bottomRight: `┘`, bodyLeft: `│`, bodyRight: `│`, bodyJoin: `│`, joinBody: `─`, joinLeft: `├`, joinRight: `┤`, joinJoin: `┼` } }; console.log(table(data, config)); ``` ``` ┌────┬────┬────┐ │ 0A │ 0B │ 0C │ ├────┼────┼────┤ │ 1A │ 1B │ 1C │ ├────┼────┼────┤ │ 2A │ 2B │ 2C │ └────┴────┴────┘ ``` <a name="table-api-table-1-config-drawverticalline"></a> ##### config.drawVerticalLine Type: `(lineIndex: number, columnCount: number) => boolean`\ Default: `() => true` It is used to tell whether to draw a vertical line. This callback is called for each vertical border of the table. If the table has `n` columns, then the `index` parameter is alternatively received all numbers in range `[0, n]` inclusively. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ['3A', '3B', '3C'], ['4A', '4B', '4C'] ]; const config = { drawVerticalLine: (lineIndex, columnCount) => { return lineIndex === 0 || lineIndex === columnCount; } }; console.log(table(data, config)); ``` ``` ╔════════════╗ ║ 0A 0B 0C ║ ╟────────────╢ ║ 1A 1B 1C ║ ╟────────────╢ ║ 2A 2B 2C ║ ╟────────────╢ ║ 3A 3B 3C ║ ╟────────────╢ ║ 4A 4B 4C ║ ╚════════════╝ ``` <a name="table-api-table-1-config-drawhorizontalline"></a> ##### config.drawHorizontalLine Type: `(lineIndex: number, rowCount: number) => boolean`\ Default: `() => true` It is used to tell whether to draw a horizontal line. This callback is called for each horizontal border of the table. If the table has `n` rows, then the `index` parameter is alternatively received all numbers in range `[0, n]` inclusively. If the table has `n` rows and contains the header, then the range will be `[0, n+1]` inclusively. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ['3A', '3B', '3C'], ['4A', '4B', '4C'] ]; const config = { drawHorizontalLine: (lineIndex, rowCount) => { return lineIndex === 0 || lineIndex === 1 || lineIndex === rowCount - 1 || lineIndex === rowCount; } }; console.log(table(data, config)); ``` ``` ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ║ 2A │ 2B │ 2C ║ ║ 3A │ 3B │ 3C ║ ╟────┼────┼────╢ ║ 4A │ 4B │ 4C ║ ╚════╧════╧════╝ ``` <a name="table-api-table-1-config-singleline"></a> ##### config.singleLine Type: `boolean`\ Default: `false` If `true`, horizontal lines inside the table are not drawn. 
This option also overrides the `config.drawHorizontalLine` if specified. ```js const data = [ ['-rw-r--r--', '1', 'pandorym', 'staff', '1529', 'May 23 11:25', 'LICENSE'], ['-rw-r--r--', '1', 'pandorym', 'staff', '16327', 'May 23 11:58', 'README.md'], ['drwxr-xr-x', '76', 'pandorym', 'staff', '2432', 'May 23 12:02', 'dist'], ['drwxr-xr-x', '634', 'pandorym', 'staff', '20288', 'May 23 11:54', 'node_modules'], ['-rw-r--r--', '1,', 'pandorym', 'staff', '525688', 'May 23 11:52', 'package-lock.json'], ['-rw-r--r--@', '1', 'pandorym', 'staff', '2440', 'May 23 11:25', 'package.json'], ['drwxr-xr-x', '27', 'pandorym', 'staff', '864', 'May 23 11:25', 'src'], ['drwxr-xr-x', '20', 'pandorym', 'staff', '640', 'May 23 11:25', 'test'], ]; const config = { singleLine: true }; console.log(table(data, config)); ``` ``` ╔═════════════╤═════╤══════════╤═══════╤════════╤══════════════╤═══════════════════╗ ║ -rw-r--r-- │ 1 │ pandorym │ staff │ 1529 │ May 23 11:25 │ LICENSE ║ ║ -rw-r--r-- │ 1 │ pandorym │ staff │ 16327 │ May 23 11:58 │ README.md ║ ║ drwxr-xr-x │ 76 │ pandorym │ staff │ 2432 │ May 23 12:02 │ dist ║ ║ drwxr-xr-x │ 634 │ pandorym │ staff │ 20288 │ May 23 11:54 │ node_modules ║ ║ -rw-r--r-- │ 1, │ pandorym │ staff │ 525688 │ May 23 11:52 │ package-lock.json ║ ║ -rw-r--r--@ │ 1 │ pandorym │ staff │ 2440 │ May 23 11:25 │ package.json ║ ║ drwxr-xr-x │ 27 │ pandorym │ staff │ 864 │ May 23 11:25 │ src ║ ║ drwxr-xr-x │ 20 │ pandorym │ staff │ 640 │ May 23 11:25 │ test ║ ╚═════════════╧═════╧══════════╧═══════╧════════╧══════════════╧═══════════════════╝ ``` <a name="table-api-table-1-config-columns"></a> ##### config.columns Type: `Column[] | { [columnIndex: number]: Column }` Column specific configurations. <a name="table-api-table-1-config-columns-config-columns-width"></a> ###### config.columns[*].width Type: `number`\ Default: the maximum cell widths of the column Column width (excluding the paddings). 
```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { columns: { 1: { width: 10 } } }; console.log(table(data, config)); ``` ``` ╔════╤════════════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────────────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────────────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════════════╧════╝ ``` <a name="table-api-table-1-config-columns-config-columns-alignment"></a> ###### config.columns[*].alignment Type: `'center' | 'justify' | 'left' | 'right'`\ Default: `'left'` Cell content horizontal alignment ```js const data = [ ['0A', '0B', '0C', '0D 0E 0F'], ['1A', '1B', '1C', '1D 1E 1F'], ['2A', '2B', '2C', '2D 2E 2F'], ]; const config = { columnDefault: { width: 10, }, columns: [ { alignment: 'left' }, { alignment: 'center' }, { alignment: 'right' }, { alignment: 'justify' } ], }; console.log(table(data, config)); ``` ``` ╔════════════╤════════════╤════════════╤════════════╗ ║ 0A │ 0B │ 0C │ 0D 0E 0F ║ ╟────────────┼────────────┼────────────┼────────────╢ ║ 1A │ 1B │ 1C │ 1D 1E 1F ║ ╟────────────┼────────────┼────────────┼────────────╢ ║ 2A │ 2B │ 2C │ 2D 2E 2F ║ ╚════════════╧════════════╧════════════╧════════════╝ ``` <a name="table-api-table-1-config-columns-config-columns-verticalalignment"></a> ###### config.columns[*].verticalAlignment Type: `'top' | 'middle' | 'bottom'`\ Default: `'top'` Cell content vertical alignment ```js const data = [ ['A', 'B', 'C', 'DEF'], ]; const config = { columnDefault: { width: 1, }, columns: [ { verticalAlignment: 'top' }, { verticalAlignment: 'middle' }, { verticalAlignment: 'bottom' }, ], }; console.log(table(data, config)); ``` ``` ╔═══╤═══╤═══╤═══╗ ║ A │ │ │ D ║ ║ │ B │ │ E ║ ║ │ │ C │ F ║ ╚═══╧═══╧═══╧═══╝ ``` <a name="table-api-table-1-config-columns-config-columns-paddingleft"></a> ###### config.columns[*].paddingLeft Type: `number`\ Default: `1` The number of whitespaces used to pad the content on the left. <a name="table-api-table-1-config-columns-config-columns-paddingright"></a> ###### config.columns[*].paddingRight Type: `number`\ Default: `1` The number of whitespaces used to pad the content on the right. The `paddingLeft` and `paddingRight` options do not count on the column width. So the column has `width = 5`, `paddingLeft = 2` and `paddingRight = 2` will have the total width is `9`. ```js const data = [ ['0A', 'AABBCC', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { columns: [ { paddingLeft: 3 }, { width: 2, paddingRight: 3 } ] }; console.log(table(data, config)); ``` ``` ╔══════╤══════╤════╗ ║ 0A │ AA │ 0C ║ ║ │ BB │ ║ ║ │ CC │ ║ ╟──────┼──────┼────╢ ║ 1A │ 1B │ 1C ║ ╟──────┼──────┼────╢ ║ 2A │ 2B │ 2C ║ ╚══════╧══════╧════╝ ``` <a name="table-api-table-1-config-columns-config-columns-truncate"></a> ###### config.columns[*].truncate Type: `number`\ Default: `Infinity` The number of characters is which the content will be truncated. To handle a content that overflows the container width, `table` package implements [text wrapping](#config.columns[*].wrapWord). However, sometimes you may want to truncate content that is too long to be displayed in the table. ```js const data = [ ['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.'] ]; const config = { columns: [ { width: 20, truncate: 100 } ] }; console.log(table(data, config)); ``` ``` ╔══════════════════════╗ ║ Lorem ipsum dolor si ║ ║ t amet, consectetur ║ ║ adipiscing elit. 
Pha ║ ║ sellus pulvinar nibh ║ ║ sed mauris convall… ║ ╚══════════════════════╝ ``` <a name="table-api-table-1-config-columns-config-columns-wrapword"></a> ###### config.columns[*].wrapWord Type: `boolean`\ Default: `false` The `table` package implements auto text wrapping, i.e., text that has the width greater than the container width will be separated into multiple lines at the nearest space or one of the special characters: `\|/_.,;-`. When `wrapWord` is `false`: ```js const data = [ ['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.'] ]; const config = { columns: [ { width: 20 } ] }; console.log(table(data, config)); ``` ``` ╔══════════════════════╗ ║ Lorem ipsum dolor si ║ ║ t amet, consectetur ║ ║ adipiscing elit. Pha ║ ║ sellus pulvinar nibh ║ ║ sed mauris convallis ║ ║ dapibus. Nunc venena ║ ║ tis tempus nulla sit ║ ║ amet viverra. ║ ╚══════════════════════╝ ``` When `wrapWord` is `true`: ``` ╔══════════════════════╗ ║ Lorem ipsum dolor ║ ║ sit amet, ║ ║ consectetur ║ ║ adipiscing elit. ║ ║ Phasellus pulvinar ║ ║ nibh sed mauris ║ ║ convallis dapibus. ║ ║ Nunc venenatis ║ ║ tempus nulla sit ║ ║ amet viverra. ║ ╚══════════════════════╝ ``` <a name="table-api-table-1-config-columndefault"></a> ##### config.columnDefault Type: `Column`\ Default: `{}` The default configuration for all columns. Column-specific settings will overwrite the default values. <a name="table-api-table-1-config-header"></a> ##### config.header Type: `object` Header configuration. The header configuration inherits the most of the column's, except: - `content` **{string}**: the header content. - `width:` calculate based on the content width automatically. - `alignment:` `center` be default. - `verticalAlignment:` is not supported. - `config.border.topJoin` will be `config.border.topBody` for prettier. ```js const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'], ]; const config = { columnDefault: { width: 10, }, header: { alignment: 'center', content: 'THE HEADER\nThis is the table about something', }, } console.log(table(data, config)); ``` ``` ╔══════════════════════════════════════╗ ║ THE HEADER ║ ║ This is the table about something ║ ╟────────────┬────────────┬────────────╢ ║ 0A │ 0B │ 0C ║ ╟────────────┼────────────┼────────────╢ ║ 1A │ 1B │ 1C ║ ╟────────────┼────────────┼────────────╢ ║ 2A │ 2B │ 2C ║ ╚════════════╧════════════╧════════════╝ ``` <a name="table-api-createstream"></a> ### createStream `table` package exports `createStream` function used to draw a table and append rows. **Parameter:** - _**config:**_ the same as `table`'s, except `config.columnDefault.width` and `config.columnCount` must be provided. ```js import { createStream } from 'table'; const config = { columnDefault: { width: 50 }, columnCount: 1 }; const stream = createStream(config); setInterval(() => { stream.write([new Date()]); }, 500); ``` ![Streaming current date.](./.README/api/stream/streaming.gif) `table` package uses ANSI escape codes to overwrite the output of the last line when a new row is printed. The underlying implementation is explained in this [Stack Overflow answer](http://stackoverflow.com/a/32938658/368691). Streaming supports all of the configuration properties and functionality of a static table (such as auto text wrapping, alignment and padding), e.g. 
```js import { createStream } from 'table'; import _ from 'lodash'; const config = { columnDefault: { width: 50 }, columnCount: 3, columns: [ { width: 10, alignment: 'right' }, { alignment: 'center' }, { width: 10 } ] }; const stream = createStream(config); let i = 0; setInterval(() => { let random; random = _.sample('abcdefghijklmnopqrstuvwxyz', _.random(1, 30)).join(''); stream.write([i++, new Date(), random]); }, 500); ``` ![Streaming random data.](./.README/api/stream/streaming-random.gif) <a name="table-api-getbordercharacters"></a> ### getBorderCharacters **Parameter:** - **_template_** - Type: `'honeywell' | 'norc' | 'ramac' | 'void'` - Required: `true` You can load one of the predefined border templates using `getBorderCharacters` function. ```js import { table, getBorderCharacters } from 'table'; const data = [ ['0A', '0B', '0C'], ['1A', '1B', '1C'], ['2A', '2B', '2C'] ]; const config = { border: getBorderCharacters(`name of the template`) }; console.log(table(data, config)); ``` ``` # honeywell ╔════╤════╤════╗ ║ 0A │ 0B │ 0C ║ ╟────┼────┼────╢ ║ 1A │ 1B │ 1C ║ ╟────┼────┼────╢ ║ 2A │ 2B │ 2C ║ ╚════╧════╧════╝ # norc ┌────┬────┬────┐ │ 0A │ 0B │ 0C │ ├────┼────┼────┤ │ 1A │ 1B │ 1C │ ├────┼────┼────┤ │ 2A │ 2B │ 2C │ └────┴────┴────┘ # ramac (ASCII; for use in terminals that do not support Unicode characters) +----+----+----+ | 0A | 0B | 0C | |----|----|----| | 1A | 1B | 1C | |----|----|----| | 2A | 2B | 2C | +----+----+----+ # void (no borders; see "borderless table" section of the documentation) 0A 0B 0C 1A 1B 1C 2A 2B 2C ``` Raise [an issue](https://github.com/gajus/table/issues) if you'd like to contribute a new border template. <a name="table-api-getbordercharacters-borderless-table"></a> #### Borderless Table Simply using `void` border character template creates a table with a lot of unnecessary spacing. To create a more pleasant to the eye table, reset the padding and remove the joining rows, e.g. ```js const output = table(data, { border: getBorderCharacters('void'), columnDefault: { paddingLeft: 0, paddingRight: 1 }, drawHorizontalLine: () => false } ); console.log(output); ``` ``` 0A 0B 0C 1A 1B 1C 2A 2B 2C ``` # cross-spawn [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Build status][appveyor-image]][appveyor-url] [![Coverage Status][codecov-image]][codecov-url] [![Dependency status][david-dm-image]][david-dm-url] [![Dev Dependency status][david-dm-dev-image]][david-dm-dev-url] [npm-url]:https://npmjs.org/package/cross-spawn [downloads-image]:https://img.shields.io/npm/dm/cross-spawn.svg [npm-image]:https://img.shields.io/npm/v/cross-spawn.svg [travis-url]:https://travis-ci.org/moxystudio/node-cross-spawn [travis-image]:https://img.shields.io/travis/moxystudio/node-cross-spawn/master.svg [appveyor-url]:https://ci.appveyor.com/project/satazor/node-cross-spawn [appveyor-image]:https://img.shields.io/appveyor/ci/satazor/node-cross-spawn/master.svg [codecov-url]:https://codecov.io/gh/moxystudio/node-cross-spawn [codecov-image]:https://img.shields.io/codecov/c/github/moxystudio/node-cross-spawn/master.svg [david-dm-url]:https://david-dm.org/moxystudio/node-cross-spawn [david-dm-image]:https://img.shields.io/david/moxystudio/node-cross-spawn.svg [david-dm-dev-url]:https://david-dm.org/moxystudio/node-cross-spawn?type=dev [david-dm-dev-image]:https://img.shields.io/david/dev/moxystudio/node-cross-spawn.svg A cross platform solution to node's spawn and spawnSync. 
## Installation Node.js version 8 and up: `$ npm install cross-spawn` Node.js version 7 and under: `$ npm install cross-spawn@6` ## Why Node has issues when using spawn on Windows: - It ignores [PATHEXT](https://github.com/joyent/node/issues/2318) - It does not support [shebangs](https://en.wikipedia.org/wiki/Shebang_(Unix)) - Has problems running commands with [spaces](https://github.com/nodejs/node/issues/7367) - Has problems running commands with posix relative paths (e.g.: `./my-folder/my-executable`) - Has an [issue](https://github.com/moxystudio/node-cross-spawn/issues/82) with command shims (files in `node_modules/.bin/`), where arguments with quotes and parenthesis would result in [invalid syntax error](https://github.com/moxystudio/node-cross-spawn/blob/e77b8f22a416db46b6196767bcd35601d7e11d54/test/index.test.js#L149) - No `options.shell` support on node `<v4.8` All these issues are handled correctly by `cross-spawn`. There are some known modules, such as [win-spawn](https://github.com/ForbesLindesay/win-spawn), that try to solve this but they are either broken or provide faulty escaping of shell arguments. ## Usage Exactly the same way as node's [`spawn`](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options) or [`spawnSync`](https://nodejs.org/api/child_process.html#child_process_child_process_spawnsync_command_args_options), so it's a drop in replacement. ```js const spawn = require('cross-spawn'); // Spawn NPM asynchronously const child = spawn('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' }); // Spawn NPM synchronously const result = spawn.sync('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' }); ``` ## Caveats ### Using `options.shell` as an alternative to `cross-spawn` Starting from node `v4.8`, `spawn` has a `shell` option that allows you run commands from within a shell. This new option solves the [PATHEXT](https://github.com/joyent/node/issues/2318) issue but: - It's not supported in node `<v4.8` - You must manually escape the command and arguments which is very error prone, specially when passing user input - There are a lot of other unresolved issues from the [Why](#why) section that you must take into account If you are using the `shell` option to spawn a command in a cross platform way, consider using `cross-spawn` instead. You have been warned. ### `options.shell` support While `cross-spawn` adds support for `options.shell` in node `<v4.8`, all of its enhancements are disabled. This mimics the Node.js behavior. More specifically, the command and its arguments will not be automatically escaped nor shebang support will be offered. This is by design because if you are using `options.shell` you are probably targeting a specific platform anyway and you don't want things to get into your way. ### Shebangs support While `cross-spawn` handles shebangs on Windows, its support is limited. More specifically, it just supports `#!/usr/bin/env <program>` where `<program>` must not contain any arguments. If you would like to have the shebang support improved, feel free to contribute via a pull-request. Remember to always test your code on Windows! ## Tests `$ npm test` `$ npm test -- --watch` during development ## License Released under the [MIT License](https://www.opensource.org/licenses/mit-license.php). 
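As a closing illustration of the escaping caveats above, here is a small comparative sketch; the script path is hypothetical and exists only to show arguments containing spaces and parentheses:

```js
const { spawn } = require('child_process');
const crossSpawn = require('cross-spawn');

// Hypothetical path with spaces and parentheses, purely for illustration.
const script = './my folder/my-script (v2).js';

// With plain child_process + options.shell you must quote/escape by hand,
// and the correct escaping differs between cmd.exe and POSIX shells:
spawn(`node "${script}"`, { shell: true, stdio: 'inherit' });

// cross-spawn takes the command and its arguments separately and applies the
// platform-specific quoting, shebang and PATHEXT handling for you:
crossSpawn('node', [script], { stdio: 'inherit' });
```

Keeping the command and its arguments separate, as in the second call, is what allows `cross-spawn` to do the escaping reliably on each platform.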
# universal-url [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Dependency Monitor][greenkeeper-image]][greenkeeper-url]

> WHATWG [`URL`](https://developer.mozilla.org/en/docs/Web/API/URL) for Node & Browser.

* For Node.js versions `>= 8`, the native implementation will be used.
* For Node.js versions `< 8`, a [shim](https://npmjs.com/whatwg-url) will be used.
* For web browsers without a native implementation, the same shim will be used.

## Installation

[Node.js](http://nodejs.org/) `>= 6` is required. To install, type this at the command line:
```shell
npm install universal-url
```

## Usage

```js
const {URL, URLSearchParams} = require('universal-url');

const url = new URL('http://domain/');
const params = new URLSearchParams('?param=value');
```

Global shim:
```js
require('universal-url').shim();

const url = new URL('http://domain/');
const params = new URLSearchParams('?param=value');
```

## Browserify/etc

The bundled file size of this library can be large for a web browser. If this is a problem, try using [universal-url-lite](https://npmjs.com/universal-url-lite) in your build as an alias for this module.

[npm-image]: https://img.shields.io/npm/v/universal-url.svg
[npm-url]: https://npmjs.org/package/universal-url
[travis-image]: https://img.shields.io/travis/stevenvachon/universal-url.svg
[travis-url]: https://travis-ci.org/stevenvachon/universal-url
[greenkeeper-image]: https://badges.greenkeeper.io/stevenvachon/universal-url.svg
[greenkeeper-url]: https://greenkeeper.io/

# Regular Expression Tokenizer

Tokenizes strings that represent regular expressions.

[![Build Status](https://secure.travis-ci.org/fent/ret.js.svg)](http://travis-ci.org/fent/ret.js) [![Dependency Status](https://david-dm.org/fent/ret.js.svg)](https://david-dm.org/fent/ret.js) [![codecov](https://codecov.io/gh/fent/ret.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/ret.js)

# Usage

```js
var ret = require('ret');

var tokens = ret(/foo|bar/.source);
```

`tokens` will contain the following object

```js
{
  "type": ret.types.ROOT,
  "options": [
    [
      { "type": ret.types.CHAR, "value": 102 },
      { "type": ret.types.CHAR, "value": 111 },
      { "type": ret.types.CHAR, "value": 111 }
    ],
    [
      { "type": ret.types.CHAR, "value": 98 },
      { "type": ret.types.CHAR, "value": 97 },
      { "type": ret.types.CHAR, "value": 114 }
    ]
  ]
}
```

# Token Types

`ret.types` is a collection of the various token types exported by ret.

### ROOT

Only used in the root of the regexp. This is needed due to the possibility of the root containing a pipe `|` character. In that case, the token will have an `options` key that will be an array of arrays of tokens. If not, it will contain a `stack` key that is an array of tokens.

```js
{
  "type": ret.types.ROOT,
  "stack": [token1, token2...],
}
```

```js
{
  "type": ret.types.ROOT,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ],
}
```

### GROUP

Groups contain tokens that are inside of a parenthesis. If the group begins with `?` followed by another character, it's a special type of group. A ':' tells the group not to be remembered when `exec` is used. '=' means the previous token matches only if followed by this group, and '!' means the previous token matches only if NOT followed.

Like root, it can contain an `options` key instead of `stack` if there is a pipe.
```js
{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "stack": [token1, token2...],
}
```

```js
{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ],
}
```

### POSITION

`\b`, `\B`, `^`, and `$` specify positions in the regexp.

```js
{
  "type": ret.types.POSITION,
  "value": "^",
}
```

### SET

Contains a key `set` specifying what tokens are allowed and a key `not` specifying if the set should be negated. A set can contain other sets, ranges, and characters.

```js
{
  "type": ret.types.SET,
  "set": [token1, token2...],
  "not": false,
}
```

### RANGE

Used in set tokens to specify a character range. `from` and `to` are character codes.

```js
{
  "type": ret.types.RANGE,
  "from": 97,
  "to": 122,
}
```

### REPETITION

```js
{
  "type": ret.types.REPETITION,
  "min": 0,
  "max": Infinity,
  "value": token,
}
```

### REFERENCE

References a group token. `value` is 1-9.

```js
{
  "type": ret.types.REFERENCE,
  "value": 1,
}
```

### CHAR

Represents a single character token. `value` is the character code. This might seem a bit cluttered compared to concatenating characters together, but since repetition tokens only repeat the last token (not the last clause, the way the pipe does), it's simpler to do it this way.

```js
{
  "type": ret.types.CHAR,
  "value": 123,
}
```

## Errors

ret.js will throw errors if given a string with an invalid regular expression. All possible errors are

* Invalid group. When a group with an immediate `?` character is followed by an invalid character. It can only be followed by `!`, `=`, or `:`. Example: `/(?_abc)/`
* Nothing to repeat. Thrown when a repetition token is used as the first token in the current clause, as in right in the beginning of the regexp or group, or right after a pipe. Example: `/foo|?bar/`, `/{1,3}foo|bar/`, `/foo(+bar)/`
* Unmatched ). A group was not opened, but was closed. Example: `/hello)2u/`
* Unterminated group. A group was not closed. Example: `/(1(23)4/`
* Unterminated character class. A custom character set was not closed. Example: `/[abc/`

# Install

    npm install ret

# Tests

Tests are written with [vows](http://vowsjs.org/)

```bash
npm test
```

# License

MIT

# isexe

Minimal module to check if a file is executable, and a normal file.

Uses `fs.stat` and tests against the `PATHEXT` environment variable on Windows.

## USAGE

```javascript
var isexe = require('isexe')
isexe('some-file-name', function (err, isExe) {
  if (err) {
    console.error('probably file does not exist or something', err)
  } else if (isExe) {
    console.error('this thing can be run')
  } else {
    console.error('cannot be run')
  }
})

// same thing but synchronous, throws errors
var isExe = isexe.sync('some-file-name')

// treat errors as just "not executable"
isexe('maybe-missing-file', { ignoreErrors: true }, callback)
var isExe = isexe.sync('maybe-missing-file', { ignoreErrors: true })
```

## API

### `isexe(path, [options], [callback])`

Check if the path is executable. If no callback is provided, and a global `Promise` object is available, then a Promise will be returned.

Will raise whatever errors may be raised by `fs.stat`, unless `options.ignoreErrors` is set to true.

### `isexe.sync(path, [options])`

Same as `isexe` but returns the value and throws any errors raised.

### Options

* `ignoreErrors` Treat all errors as "no, this is not executable", but don't raise them.
* `uid` Number to use as the user id * `gid` Number to use as the group id * `pathExt` List of path extensions to use instead of `PATHEXT` environment variable on Windows. # inflight Add callbacks to requests in flight to avoid async duplication ## USAGE ```javascript var inflight = require('inflight') // some request that does some stuff function req(key, callback) { // key is any random string. like a url or filename or whatever. // // will return either a falsey value, indicating that the // request for this key is already in flight, or a new callback // which when called will call all callbacks passed to inflightk // with the same key callback = inflight(key, callback) // If we got a falsey value back, then there's already a req going if (!callback) return // this is where you'd fetch the url or whatever // callback is also once()-ified, so it can safely be assigned // to multiple events etc. First call wins. setTimeout(function() { callback(null, key) }, 100) } // only assigns a single setTimeout // when it dings, all cbs get called req('foo', cb1) req('foo', cb2) req('foo', cb3) req('foo', cb4) ``` # axios // core The modules found in `core/` should be modules that are specific to the domain logic of axios. These modules would most likely not make sense to be consumed outside of the axios module, as their logic is too specific. Some examples of core modules are: - Dispatching requests - Managing interceptors - Handling config Compiler frontend for node.js ============================= Usage ----- For an up to date list of available command line options, see: ``` $> asc --help ``` API --- The API accepts the same options as the CLI but also lets you override stdout and stderr and/or provide a callback. Example: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { asc.main([ "myModule.ts", "--binaryFile", "myModule.wasm", "--optimize", "--sourceMap", "--measure" ], { stdout: process.stdout, stderr: process.stderr }, function(err) { if (err) throw err; ... }); }); ``` Available command line options can also be obtained programmatically: ```js const options = require("assemblyscript/cli/asc.json"); ... ``` You can also compile a source string directly, for example in a browser environment: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { const { binary, text, stdout, stderr } = asc.compileString(`...`, { optimize: 2 }); }); ... ``` # Punycode.js [![Build status](https://travis-ci.org/bestiejs/punycode.js.svg?branch=master)](https://travis-ci.org/bestiejs/punycode.js) [![Code coverage status](http://img.shields.io/codecov/c/github/bestiejs/punycode.js.svg)](https://codecov.io/gh/bestiejs/punycode.js) [![Dependency status](https://gemnasium.com/bestiejs/punycode.js.svg)](https://gemnasium.com/bestiejs/punycode.js) Punycode.js is a robust Punycode converter that fully complies to [RFC 3492](https://tools.ietf.org/html/rfc3492) and [RFC 5891](https://tools.ietf.org/html/rfc5891). This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm: * [The C example code from RFC 3492](https://tools.ietf.org/html/rfc3492#appendix-C) * [`punycode.c` by _Markus W. 
Scherer_ (IBM)](http://opensource.apple.com/source/ICU/ICU-400.42/icuSources/common/punycode.c) * [`punycode.c` by _Ben Noordhuis_](https://github.com/bnoordhuis/punycode/blob/master/punycode.c) * [JavaScript implementation by _some_](http://stackoverflow.com/questions/183485/can-anyone-recommend-a-good-free-javascript-for-punycode-to-unicode-conversion/301287#301287) * [`punycode.js` by _Ben Noordhuis_](https://github.com/joyent/node/blob/426298c8c1c0d5b5224ac3658c41e7c2a3fe9377/lib/punycode.js) (note: [not fully compliant](https://github.com/joyent/node/issues/2072)) This project was [bundled](https://github.com/joyent/node/blob/master/lib/punycode.js) with Node.js from [v0.6.2+](https://github.com/joyent/node/compare/975f1930b1...61e796decc) until [v7](https://github.com/nodejs/node/pull/7941) (soft-deprecated). The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see [v1.4.1](https://github.com/bestiejs/punycode.js/releases/tag/v1.4.1). ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install punycode --save ``` In [Node.js](https://nodejs.org/): ```js const punycode = require('punycode'); ``` ## API ### `punycode.decode(string)` Converts a Punycode string of ASCII symbols to a string of Unicode symbols. ```js // decode domain name parts punycode.decode('maana-pta'); // 'mañana' punycode.decode('--dqo34k'); // '☃-⌘' ``` ### `punycode.encode(string)` Converts a string of Unicode symbols to a Punycode string of ASCII symbols. ```js // encode domain name parts punycode.encode('mañana'); // 'maana-pta' punycode.encode('☃-⌘'); // '--dqo34k' ``` ### `punycode.toUnicode(input)` Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode. ```js // decode domain names punycode.toUnicode('xn--maana-pta.com'); // → 'mañana.com' punycode.toUnicode('xn----dqo34k.com'); // → '☃-⌘.com' // decode email addresses punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'); // → 'джумла@джpумлатест.bрфa' ``` ### `punycode.toASCII(input)` Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII. ```js // encode domain names punycode.toASCII('mañana.com'); // → 'xn--maana-pta.com' punycode.toASCII('☃-⌘.com'); // → 'xn----dqo34k.com' // encode email addresses punycode.toASCII('джумла@джpумлатест.bрфa'); // → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq' ``` ### `punycode.ucs2` #### `punycode.ucs2.decode(string)` Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16. ```js punycode.ucs2.decode('abc'); // → [0x61, 0x62, 0x63] // surrogate pair for U+1D306 TETRAGRAM FOR CENTRE: punycode.ucs2.decode('\uD834\uDF06'); // → [0x1D306] ``` #### `punycode.ucs2.encode(codePoints)` Creates a string based on an array of numeric code point values. 
```js punycode.ucs2.encode([0x61, 0x62, 0x63]); // → 'abc' punycode.ucs2.encode([0x1D306]); // → '\uD834\uDF06' ``` ### `punycode.version` A string representing the current Punycode.js version number. ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License Punycode.js is available under the [MIT](https://mths.be/mit) license. # jsdiff [![Build Status](https://secure.travis-ci.org/kpdecker/jsdiff.svg)](http://travis-ci.org/kpdecker/jsdiff) [![Sauce Test Status](https://saucelabs.com/buildstatus/jsdiff)](https://saucelabs.com/u/jsdiff) A javascript text differencing implementation. Based on the algorithm proposed in ["An O(ND) Difference Algorithm and its Variations" (Myers, 1986)](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6927). ## Installation ```bash npm install diff --save ``` ## API * `Diff.diffChars(oldStr, newStr[, options])` - diffs two blocks of text, comparing character by character. Returns a list of change objects (See below). Options * `ignoreCase`: `true` to ignore casing difference. Defaults to `false`. * `Diff.diffWords(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, ignoring whitespace. Returns a list of change objects (See below). Options * `ignoreCase`: Same as in `diffChars`. * `Diff.diffWordsWithSpace(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, treating whitespace as significant. Returns a list of change objects (See below). * `Diff.diffLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line. Options * `ignoreWhitespace`: `true` to ignore leading and trailing whitespace. This is the same as `diffTrimmedLines` * `newlineIsToken`: `true` to treat newline characters as separate tokens. This allows for changes to the newline structure to occur independently of the line content and to be treated as such. In general this is the more human friendly form of `diffLines` and `diffLines` is better suited for patches and other computer friendly output. Returns a list of change objects (See below). * `Diff.diffTrimmedLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line, ignoring leading and trailing whitespace. Returns a list of change objects (See below). * `Diff.diffSentences(oldStr, newStr[, options])` - diffs two blocks of text, comparing sentence by sentence. Returns a list of change objects (See below). * `Diff.diffCss(oldStr, newStr[, options])` - diffs two blocks of text, comparing CSS tokens. Returns a list of change objects (See below). * `Diff.diffJson(oldObj, newObj[, options])` - diffs two JSON objects, comparing the fields defined on each. The order of fields, etc does not matter in this comparison. Returns a list of change objects (See below). * `Diff.diffArrays(oldArr, newArr[, options])` - diffs two arrays, comparing each item for strict equality (===). Options * `comparator`: `function(left, right)` for custom equality checks Returns a list of change objects (See below). * `Diff.createTwoFilesPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. 
Parameters: * `oldFileName` : String to be output in the filename section of the patch for the removals * `newFileName` : String to be output in the filename section of the patch for the additions * `oldStr` : Original string value * `newStr` : New string value * `oldHeader` : Additional information to include in the old file header * `newHeader` : Additional information to include in the new file header * `options` : An object with options. Currently, only `context` is supported and describes how many lines of context should be included. * `Diff.createPatch(fileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Just like Diff.createTwoFilesPatch, but with oldFileName being equal to newFileName. * `Diff.structuredPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options)` - returns an object with an array of hunk objects. This method is similar to createTwoFilesPatch, but returns a data structure suitable for further processing. Parameters are the same as createTwoFilesPatch. The data structure returned may look like this: ```js { oldFileName: 'oldfile', newFileName: 'newfile', oldHeader: 'header1', newHeader: 'header2', hunks: [{ oldStart: 1, oldLines: 3, newStart: 1, newLines: 3, lines: [' line2', ' line3', '-line4', '+line5', '\\ No newline at end of file'], }] } ``` * `Diff.applyPatch(source, patch[, options])` - applies a unified diff patch. Return a string containing new version of provided data. `patch` may be a string diff or the output from the `parsePatch` or `structuredPatch` methods. The optional `options` object may have the following keys: - `fuzzFactor`: Number of lines that are allowed to differ before rejecting a patch. Defaults to 0. - `compareLine(lineNumber, line, operation, patchContent)`: Callback used to compare to given lines to determine if they should be considered equal when patching. Defaults to strict equality but may be overridden to provide fuzzier comparison. Should return false if the lines should be rejected. * `Diff.applyPatches(patch, options)` - applies one or more patches. This method will iterate over the contents of the patch and apply to data provided through callbacks. The general flow for each patch index is: - `options.loadFile(index, callback)` is called. The caller should then load the contents of the file and then pass that to the `callback(err, data)` callback. Passing an `err` will terminate further patch execution. - `options.patched(index, content, callback)` is called once the patch has been applied. `content` will be the return value from `applyPatch`. When it's ready, the caller should call `callback(err)` callback. Passing an `err` will terminate further patch execution. Once all patches have been applied or an error occurs, the `options.complete(err)` callback is made. * `Diff.parsePatch(diffStr)` - Parses a patch into structured data Return a JSON object representation of the a patch, suitable for use with the `applyPatch` method. This parses to the same structure returned by `Diff.structuredPatch`. * `convertChangesToXML(changes)` - converts a list of changes to a serialized XML format All methods above which accept the optional `callback` method will run in sync mode when that parameter is omitted and in async mode when supplied. This allows for larger diffs without blocking the event loop. This may be passed either directly as the final parameter or as the `callback` field in the `options` object. ### Change Objects Many of the methods above return change objects. 
These objects consist of the following fields: * `value`: Text content * `added`: True if the value was inserted into the new string * `removed`: True if the value was removed from the old string Note that some cases may omit a particular flag field. Comparison on the flag fields should always be done in a truthy or falsy manner. ## Examples Basic example in Node ```js require('colors'); const Diff = require('diff'); const one = 'beep boop'; const other = 'beep boob blah'; const diff = Diff.diffChars(one, other); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; process.stderr.write(part.value[color]); }); console.log(); ``` Running the above program should yield <img src="images/node_example.png" alt="Node Example"> Basic example in a web page ```html <pre id="display"></pre> <script src="diff.js"></script> <script> const one = 'beep boop', other = 'beep boob blah', color = ''; let span = null; const diff = Diff.diffChars(one, other), display = document.getElementById('display'), fragment = document.createDocumentFragment(); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; span = document.createElement('span'); span.style.color = color; span.appendChild(document .createTextNode(part.value)); fragment.appendChild(span); }); display.appendChild(fragment); </script> ``` Open the above .html file in a browser and you should see <img src="images/web_example.png" alt="Node Example"> **[Full online demo](http://kpdecker.github.com/jsdiff)** ## Compatibility [![Sauce Test Status](https://saucelabs.com/browser-matrix/jsdiff.svg)](https://saucelabs.com/u/jsdiff) jsdiff supports all ES3 environments with some known issues on IE8 and below. Under these browsers some diff algorithms such as word diff and others may fail due to lack of support for capturing groups in the `split` operation. ## License See [LICENSE](https://github.com/kpdecker/jsdiff/blob/master/LICENSE). # line-column [![Build Status](https://travis-ci.org/io-monad/line-column.svg?branch=master)](https://travis-ci.org/io-monad/line-column) [![Coverage Status](https://coveralls.io/repos/github/io-monad/line-column/badge.svg?branch=master)](https://coveralls.io/github/io-monad/line-column?branch=master) [![npm version](https://badge.fury.io/js/line-column.svg)](https://badge.fury.io/js/line-column) Node module to convert efficiently index to/from line-column in a string. ## Install npm install line-column ## Usage ### lineColumn(str, options = {}) Returns a `LineColumnFinder` instance for given string `str`. #### Options | Key | Description | Default | | ------- | ----------- | ------- | | `origin` | The origin value of line number and column number | `1` | ### lineColumn(str, index) This is just a shorthand for `lineColumn(str).fromIndex(index)`. ### LineColumnFinder#fromIndex(index) Find line and column from index in the string. Parameters: - `index` - `number` Index in the string. (0-origin) Returns: - `{ line: x, col: y }` Found line number and column number. - `null` if the given index is out of range. ### LineColumnFinder#toIndex(line, column) Find index from line and column in the string. Parameters: - `line` - `number` Line number in the string. - `column` - `number` Column number in the string. 
or

- `{ line: x, col: y }` - `Object` line and column numbers in the string.<br>A key name `column` can be used instead of `col`.

or

- `[ line, col ]` - `Array` line and column numbers in the string.

Returns:

- `number` Found index in the string.
- `-1` if the given line or column is out of range.

## Example

```js
var lineColumn = require("line-column");

var testString = [
  "ABCDEFG\n",         // line:0, index:0
  "HIJKLMNOPQRSTU\n",  // line:1, index:8
  "VWXYZ\n",           // line:2, index:23
  "日本語の文字\n",      // line:3, index:29
  "English words"      // line:4, index:36
].join("");            // length:49

lineColumn(testString).fromIndex(3)   // { line: 1, col: 4 }
lineColumn(testString).fromIndex(33)  // { line: 4, col: 5 }
lineColumn(testString).toIndex(1, 4)  // 3
lineColumn(testString).toIndex(4, 5)  // 33

// Shorthand of .fromIndex (compatible with find-line-column)
lineColumn(testString, 33)  // { line: 4, col: 5 }

// Object or Array is also acceptable
lineColumn(testString).toIndex({ line: 4, col: 5 })     // 33
lineColumn(testString).toIndex({ line: 4, column: 5 })  // 33
lineColumn(testString).toIndex([4, 5])                  // 33

// You can cache the finder for the same string. It is very efficient. (See benchmark)
var finder = lineColumn(testString);
finder.fromIndex(33)  // { line: 4, col: 5 }
finder.toIndex(4, 5)  // 33

// For 0-origin line and column numbers
var zeroOrigin = lineColumn(testString, { origin: 0 });
zeroOrigin.fromIndex(33)  // { line: 3, col: 4 }
zeroOrigin.toIndex(3, 4)  // 33
```

## Testing

    npm test

## Benchmark

The popular package [find-line-column](https://www.npmjs.com/package/find-line-column) provides the same "index to line-column" feature. Here is some benchmarking on `line-column` vs `find-line-column`.

You can run this benchmark by `npm run benchmark`. See [benchmark/](benchmark/) for the source code.

```
long text + line-column (not cached) x 72,989 ops/sec ±0.83% (89 runs sampled)
long text + line-column (cached) x 13,074,242 ops/sec ±0.32% (89 runs sampled)
long text + find-line-column x 33,887 ops/sec ±0.54% (84 runs sampled)
short text + line-column (not cached) x 1,636,766 ops/sec ±0.77% (82 runs sampled)
short text + line-column (cached) x 21,699,686 ops/sec ±1.04% (82 runs sampled)
short text + find-line-column x 382,145 ops/sec ±1.04% (85 runs sampled)
```

As you might have noticed, even the non-cached version of `line-column` is 2x-4x faster than `find-line-column`, and the cached version is a remarkable 50x-380x faster.

## Contributing

1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -am 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5. Submit a pull request :D

## License

MIT (See LICENSE)

# `BlockLAB`

📄 Description
==================

BlockLAB is a smart contract in which you can create a user and add clinical analysis (lab test) information to that user.

The following are the main features of this smart contract:

1. Create a user.
2. Query a user by id.
3. Add an analysis.
4. Query analyses.

📦 Installation
================

To run this project locally, follow these steps:

Step 1: Prerequisites
----

1. Make sure you have installed [Node.js] ≥ 12 (we recommend using [nvm])
2. Make sure you have installed yarn: `npm install -g yarn`
3. Install dependencies: `yarn install`
4. Create a NEAR test account [https://wallet.testnet.near.org/]
5. 
Install NEAR CLI globally: [near-cli] is a command-line interface (CLI) for interacting with the NEAR blockchain

    yarn install --global near-cli

Step 2: NEAR CLI configuration
----

Configure your near-cli to authorize the test account you just created:

    near login

Step 3: Build and deploy a development version of the smart contract
----

Build the NEARLancers smart contract code and deploy it to the local development environment with `yarn build` (see `package.json` for a full list of the `scripts` you can run with `yarn`).

This script gives you back a deployed development smart contract account (save it for later use).

To deploy the contract generated with `yarn build` to testnet [https://wallet.testnet.near.org/], run `yarn deploy`, which returns the id of the deployed contract; that id is used to call each of the methods the contract exposes.

📑 Exploring the NEARLancers smart contract methods
==================

The following commands let you interact with the smart contract methods using the NEAR CLI (for this, you need a development smart contract deployed).

Command to create a user:
----

    near call $CONTRACT registrarUsuario '{ "idCuenta":"string", "nombre":"string", "telefono":"string", "correo":"string", "password":"string"}' --account-id <your test account>

Command to query all users:
----

    near view $CONTRACT consultarUsuarios

Command to save an analysis:
----

    near call $CONTRACT registrarServicio '{ "descripción":"string", "idUsuario":"string"}' --account-id <your test account>

Command to query all of a user's analyses:
----

    near view $CONTRACT consultarAnaliticasUsuario '{"idUsuario":"string"}'

🖥️ Graphical user interface
----

https://www.figma.com/files/project/39502420/Team-project?fuid=1025893206186787859

# whatwg-url

whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom).

## Specification conformance

whatwg-url is currently up to date with the URL spec up to commit [7ae1c69](https://github.com/whatwg/url/commit/7ae1c691c96f0d82fafa24c33aa1e8df9ffbf2bc).

For `file:` URLs, whose [origin is left unspecified](https://url.spec.whatwg.org/#concept-url-origin), whatwg-url chooses to use a new opaque origin (which serializes to `"null"`).

## API

### The `URL` and `URLSearchParams` classes

The main API is provided by the [`URL`](https://url.spec.whatwg.org/#url-class) and [`URLSearchParams`](https://url.spec.whatwg.org/#interface-urlsearchparams) exports, which follow the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use these.

### Low-level URL Standard API

The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They mostly operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type.
- [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/origin.html#ascii-serialisation-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` - [Percent decode](https://url.spec.whatwg.org/#percent-decode): `percentDecode(buffer)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - [`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - [`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an 
inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by `null`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ `null`. ## Development instructions First, install [Node.js](https://nodejs.org/). Then, fetch the dependencies of whatwg-url, by running from this directory: npm install To run tests: npm test To generate a coverage report: npm run coverage To build and run the live viewer: npm run build npm run build-live-viewer Serve the contents of the `live-viewer` directory using any web server. ## Supporting whatwg-url The jsdom project (including whatwg-url) is a community-driven project maintained by a team of [volunteers](https://github.com/orgs/jsdom/people). You could support us by: - [Getting professional support for whatwg-url](https://tidelift.com/subscription/pkg/npm-whatwg-url?utm_source=npm-whatwg-url&utm_medium=referral&utm_campaign=readme) as part of a Tidelift subscription. Tidelift helps making open source sustainable for us while giving teams assurances for maintenance, licensing, and security. - Contributing directly to the project. # json-schema-traverse Traverse JSON Schema passing each schema object to callback [![Build Status](https://travis-ci.org/epoberezkin/json-schema-traverse.svg?branch=master)](https://travis-ci.org/epoberezkin/json-schema-traverse) [![npm version](https://badge.fury.io/js/json-schema-traverse.svg)](https://www.npmjs.com/package/json-schema-traverse) [![Coverage Status](https://coveralls.io/repos/github/epoberezkin/json-schema-traverse/badge.svg?branch=master)](https://coveralls.io/github/epoberezkin/json-schema-traverse?branch=master) ## Install ``` npm install json-schema-traverse ``` ## Usage ```javascript const traverse = require('json-schema-traverse'); const schema = { properties: { foo: {type: 'string'}, bar: {type: 'integer'} } }; traverse(schema, {cb}); // cb is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // Or: traverse(schema, {cb: {pre, post}}); // pre is called 3 times with: // 1. root schema // 2. {type: 'string'} // 3. {type: 'integer'} // // post is called 3 times with: // 1. {type: 'string'} // 2. {type: 'integer'} // 3. root schema ``` Callback function `cb` is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved, they are passed as is. Alternatively, you can pass a `{pre, post}` object as `cb`, and then `pre` will be called before traversing child elements, and `post` will be called after all child elements have been traversed. Callback is passed these parameters: - _schema_: the current schema object - _JSON pointer_: from the root schema to the current schema object - _root schema_: the schema passed to `traverse` object - _parent JSON pointer_: from the root schema to the parent schema object (see below) - _parent keyword_: the keyword inside which this schema appears (e.g. `properties`, `anyOf`, etc.) 
- _parent schema_: not necessarily parent object/array; in the example above the parent schema for `{type: 'string'}` is the root schema - _index/property_: index or property name in the array/object containing multiple schemas; in the example above for `{type: 'string'}` the property name is `'foo'` ## Traverse objects in all unknown keywords ```javascript const traverse = require('json-schema-traverse'); const schema = { mySchema: { minimum: 1, maximum: 2 } }; traverse(schema, {allKeys: true, cb}); // cb is called 2 times with: // 1. root schema // 2. mySchema ``` Without option `allKeys: true` callback will be called only with root schema. ## License [MIT](https://github.com/epoberezkin/json-schema-traverse/blob/master/LICENSE) # color-convert [![Build Status](https://travis-ci.org/Qix-/color-convert.svg?branch=master)](https://travis-ci.org/Qix-/color-convert) Color-convert is a color conversion library for JavaScript and node. It converts all ways between `rgb`, `hsl`, `hsv`, `hwb`, `cmyk`, `ansi`, `ansi16`, `hex` strings, and CSS `keyword`s (will round to closest): ```js var convert = require('color-convert'); convert.rgb.hsl(140, 200, 100); // [96, 48, 59] convert.keyword.rgb('blue'); // [0, 0, 255] var rgbChannels = convert.rgb.channels; // 3 var cmykChannels = convert.cmyk.channels; // 4 var ansiChannels = convert.ansi16.channels; // 1 ``` # Install ```console $ npm install color-convert ``` # API Simply get the property of the _from_ and _to_ conversion that you're looking for. All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on `.raw` to the function. All 'from' functions have a hidden property called `.channels` that indicates the number of channels the function expects (not including alpha). ```js var convert = require('color-convert'); // Hex to LAB convert.hex.lab('DEADBF'); // [ 76, 21, -2 ] convert.hex.lab.raw('DEADBF'); // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ] // RGB to CMYK convert.rgb.cmyk(167, 255, 4); // [ 35, 0, 98, 0 ] convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ] ``` ### Arrays All functions that accept multiple arguments also support passing an array. Note that this does **not** apply to functions that convert from a color that only requires one value (e.g. `keyword`, `ansi256`, `hex`, etc.) ```js var convert = require('color-convert'); convert.rgb.hex(123, 45, 67); // '7B2D43' convert.rgb.hex([123, 45, 67]); // '7B2D43' ``` ## Routing Conversions that don't have an _explicitly_ defined conversion (in [conversions.js](conversions.js)), but can be converted by means of sub-conversions (e.g. XYZ -> **RGB** -> CMYK), are automatically routed together. This allows just about any color model supported by `color-convert` to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> **LAB** -> **XYZ** -> **RGB** -> Hex). Keep in mind that extensive conversions _may_ result in a loss of precision, and exist only to be complete. For a list of "direct" (single-step) conversions, see [conversions.js](conversions.js). # Contribute If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request. # License Copyright &copy; 2011-2016, Heather Arthur and Josh Junon. Licensed under the [MIT License](LICENSE). 
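As a quick illustration of the routing behavior described above, here is a sketch; the printed values are approximate, rounded results:

```js
var convert = require('color-convert');

// There is no hand-written hsl -> cmyk conversion in conversions.js, but a
// route exists (hsl -> rgb -> cmyk), so the function is generated automatically.
convert.hsl.cmyk(96, 48, 59);      // ≈ [ 30, 0, 50, 22 ]

// Routed conversions get the unrounded variant as well.
convert.hsl.cmyk.raw(96, 48, 59);  // unrounded channel values
```

As noted above, multi-step routes can lose some precision along the way, so prefer a direct conversion from conversions.js when one exists.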
# safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. 
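Tying together the `Buffer.from(arrayBuffer)` and `Buffer.from(buffer)` forms documented above, here is a small sketch (not part of the original docs) contrasting their sharing and copying behaviour:

```js
const { Buffer } = require('safe-buffer')

const arr = new Uint8Array([1, 2, 3])
const shared = Buffer.from(arr.buffer)               // views the same memory as arr
const copied = Buffer.from(Buffer.from(arr.buffer))  // Buffer.from(buffer) makes a copy

arr[0] = 9
console.log(shared[0]) // 9 - the shared view reflects the change
console.log(copied[0]) // 1 - the copy is unaffected
```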
### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. 
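As a rough sketch of the pooling threshold described above (assuming a default, unmodified `Buffer.poolSize` as exposed by Node.js core):

```js
const { Buffer } = require('safe-buffer')

const threshold = Buffer.poolSize >>> 1   // 8192 >> 1 === 4096 by default
console.log(Buffer.poolSize, threshold)

const pooled = Buffer.allocUnsafe(16)     // 16 <= 4096, may be sliced from the pool
const unpooled = Buffer.allocUnsafe(8192) // larger than the threshold, never pooled

// Either way, the memory is uninitialized, so overwrite it before use:
pooled.fill(0)
unpooled.fill(0)
```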
Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides.

### Class Method: Buffer.allocUnsafeSlow(size)

<!-- YAML added: v5.10.0 -->

* `size` {Number}

Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified.

The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes.

When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and clean up as many `Persistent` objects.

However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits.

```js
// need to keep around a few small chunks of memory
const store = [];

socket.on('readable', () => {
  const data = socket.read();
  // allocate for retained data
  const sb = Buffer.allocUnsafeSlow(10);
  // copy the data into the new allocation
  data.copy(sb, 0, 0, 10);
  store.push(sb);
});
```

`Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications.

A `TypeError` will be thrown if `size` is not a number.

### All the Rest

The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html).

## Related links

- [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660)
- [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4)

## Why is `Buffer` unsafe?

Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`.

The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this:

```js
// Convert UTF-8 strings to hex
function toHex (str) {
  return new Buffer(str).toString('hex')
}
```

***But what happens if `toHex` is called with a `Number` argument?***

### Remote Memory Disclosure

If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process.
This could potentially disclose TLS private keys, user data, or database passwords.

When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size):

> `new Buffer(size)`
>
> - `size` Number
>
> The underlying memory for `Buffer` instances created in this way is not initialized.
> **The contents of a newly created `Buffer` are unknown and could contain sensitive
> data.** Use `buf.fill(0)` to initialize a Buffer to zeroes.

(Emphasis our own.)

When the programmer intends to create an uninitialized `Buffer`, you often see code like this:

```js
var buf = new Buffer(16)

// Immediately overwrite the uninitialized buffer with data from another buffer
for (var i = 0; i < buf.length; i++) {
  buf[i] = otherBuf[i]
}
```

### Would this ever be a problem in real code?

Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript.

Usually the consequence of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic.

Here's an example of a vulnerable service that takes a JSON payload and converts it to hex:

```js
// Take a JSON payload {str: "some string"} and convert it to hex
var server = http.createServer(function (req, res) {
  var data = ''
  req.setEncoding('utf8')
  req.on('data', function (chunk) {
    data += chunk
  })
  req.on('end', function () {
    var body = JSON.parse(data)
    res.end(new Buffer(body.str).toString('hex'))
  })
})

server.listen(8080)
```

In this example, an http client just has to send:

```json
{
  "str": 1000
}
```

and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers.

### Which real-world packages were vulnerable?

#### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht)

[Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.

Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.

#### [`ws`](https://www.npmjs.com/package/ws)

That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js.

If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer.
These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. 
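To tie this back to the earlier `toHex` example, here is a minimal sketch of a hardened version using the new APIs; the explicit type check and error message are illustrative additions, not part of the original advisory:

```js
var Buffer = require('safe-buffer').Buffer

// Convert UTF-8 strings to hex, refusing anything that is not a string
function toHex (str) {
  if (typeof str !== 'string') throw new TypeError('toHex expects a string')
  return Buffer.from(str, 'utf8').toString('hex')
}

console.log(toHex('some string')) // '736f6d6520737472696e67'
// toHex(1000) now throws instead of returning uninitialized process memory
```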
Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) [![NPM version][npm-image]][npm-url] [![build status][travis-image]][travis-url] [![Test coverage][coveralls-image]][coveralls-url] [![Downloads][downloads-image]][downloads-url] [![Join the chat at https://gitter.im/eslint/doctrine](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/eslint/doctrine?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) # Doctrine Doctrine is a [JSDoc](http://usejsdoc.org) parser that parses documentation comments from JavaScript (you need to pass in the comment, not a whole JavaScript file). ## Installation You can install Doctrine using [npm](https://npmjs.com): ``` $ npm install doctrine --save-dev ``` Doctrine can also be used in web browsers using [Browserify](http://browserify.org). ## Usage Require doctrine inside of your JavaScript: ```js var doctrine = require("doctrine"); ``` ### parse() The primary method is `parse()`, which accepts two arguments: the JSDoc comment to parse and an optional options object. The available options are: * `unwrap` - set to `true` to delete the leading `/**`, any `*` that begins a line, and the trailing `*/` from the source text. Default: `false`. * `tags` - an array of tags to return. When specified, Doctrine returns only tags in this array. For example, if `tags` is `["param"]`, then only `@param` tags will be returned. Default: `null`. * `recoverable` - set to `true` to keep parsing even when syntax errors occur. Default: `false`. * `sloppy` - set to `true` to allow optional parameters to be specified in brackets (`@param {string} [foo]`). Default: `false`. 
* `lineNumbers` - set to `true` to add `lineNumber` to each node, specifying the line on which the node is found in the source. Default: `false`.
* `range` - set to `true` to add `range` to each node, specifying the start and end index of the node in the original comment. Default: `false`.

Here's a simple example:

```js
var ast = doctrine.parse(
    [
        "/**",
        " * This function comment is parsed by doctrine",
        " * @param {{ok:String}} userName",
        "*/"
    ].join('\n'), { unwrap: true });
```

This example returns the following AST:

    {
        "description": "This function comment is parsed by doctrine",
        "tags": [
            {
                "title": "param",
                "description": null,
                "type": {
                    "type": "RecordType",
                    "fields": [
                        {
                            "type": "FieldType",
                            "key": "ok",
                            "value": {
                                "type": "NameExpression",
                                "name": "String"
                            }
                        }
                    ]
                },
                "name": "userName"
            }
        ]
    }

See the [demo page](http://eslint.org/doctrine/demo/) for more detail.

## Team

These folks keep the project moving and are resources for help:

* Nicholas C. Zakas ([@nzakas](https://github.com/nzakas)) - project lead
* Yusuke Suzuki ([@constellation](https://github.com/constellation)) - reviewer

## Contributing

Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/doctrine/issues).

## Frequently Asked Questions

### Can I pass a whole JavaScript file to Doctrine?

No. Doctrine can only parse JSDoc comments, so you'll need to pass just the JSDoc comment to Doctrine in order for it to work.

### License

#### doctrine

Copyright JS Foundation and other contributors, https://js.foundation

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

#### esprima

Some functions are derived from esprima

Copyright (C) 2012, 2011 [Ariya Hidayat](http://ariya.ofilabs.com/about) (twitter: [@ariyahidayat](http://twitter.com/ariyahidayat)) and other contributors.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

#### closure-compiler

Some extensions are derived from closure-compiler

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

### Where to ask for help?

Join our [Chatroom](https://gitter.im/eslint/doctrine)

[npm-image]: https://img.shields.io/npm/v/doctrine.svg?style=flat-square
[npm-url]: https://www.npmjs.com/package/doctrine
[travis-image]: https://img.shields.io/travis/eslint/doctrine/master.svg?style=flat-square
[travis-url]: https://travis-ci.org/eslint/doctrine
[coveralls-image]: https://img.shields.io/coveralls/eslint/doctrine/master.svg?style=flat-square
[coveralls-url]: https://coveralls.io/r/eslint/doctrine?branch=master
[downloads-image]: http://img.shields.io/npm/dm/doctrine.svg?style=flat-square
[downloads-url]: https://www.npmjs.com/package/doctrine

[![Build Status](https://api.travis-ci.org/adaltas/node-csv-stringify.svg)](https://travis-ci.org/#!/adaltas/node-csv-stringify) [![NPM](https://img.shields.io/npm/dm/csv-stringify)](https://www.npmjs.com/package/csv-stringify) [![NPM](https://img.shields.io/npm/v/csv-stringify)](https://www.npmjs.com/package/csv-stringify)

This package is a stringifier converting records into CSV text and implementing the Node.js [`stream.Transform` API](https://nodejs.org/api/stream.html). It also provides simpler synchronous and callback-based APIs for convenience. It is both extremely easy to use and powerful. It was first released in 2010 and is tested against big data sets by a large community.

## Documentation

* [Project homepage](http://csv.js.org/stringify/)
* [API](http://csv.js.org/stringify/api/)
* [Options](http://csv.js.org/stringify/options/)
* [Examples](http://csv.js.org/stringify/examples/)

## Main features

* Follow the Node.js streaming API
* Simplicity with the optional callback API
* Support for custom formatters, delimiters, quotes, escape characters and header
* Support big datasets
* Complete test coverage and samples for inspiration
* Only 1 external dependency
* Can be used in conjunction with `csv-generate`, `csv-parse` and `stream-transform`
* MIT License

## Usage

The module is built on the Node.js Stream API. For the sake of simplicity, a simple callback API is also provided. To give you a quick look, here's an example of the callback API:

```javascript
const stringify = require('csv-stringify')
const assert = require('assert')
// import stringify from 'csv-stringify'
// import assert from 'assert/strict'

const input = [ [ '1', '2', '3', '4' ], [ 'a', 'b', 'c', 'd' ] ]
stringify(input, function(err, output) {
  const expected = '1,2,3,4\na,b,c,d\n'
  assert.strictEqual(output, expected, `output.should.eql ${expected}`)
  console.log("Passed.", output)
})
```

## Development

Tests are executed with mocha. To install it, run `npm install` followed by `npm test`. It will install mocha and its dependencies in your project "node_modules" directory and run the test suite. The tests run against the CoffeeScript source files.

To generate the JavaScript files, run `npm run build`.
The test suite is run online with [Travis](https://travis-ci.org/#!/adaltas/node-csv-stringify). See the [Travis definition file](https://github.com/adaltas/node-csv-stringify/blob/master/.travis.yml) to view the tested Node.js version.

## Contributors

* David Worms: <https://github.com/wdavidw>

[csv_home]: https://github.com/adaltas/node-csv
[stream_transform]: http://nodejs.org/api/stream.html#stream_class_stream_transform
[examples]: http://csv.js.org/stringify/examples/
[csv]: https://github.com/adaltas/node-csv

# AssemblyScript Rtrace

A tiny utility to sanitize the AssemblyScript runtime. Records allocations and frees performed by the runtime and emits an error if something is off. Also checks for leaks.

Instructions
------------

Compile your module that uses the full or half runtime with `--use ASC_RTRACE=1 --explicitStart` and include an instance of this module as the import named `rtrace`.

```js
const rtrace = new Rtrace({
  onerror(err, info) {
    // handle error
  },
  oninfo(msg) {
    // print message, optional
  },
  getMemory() {
    // obtain the module's memory,
    // e.g. with --explicitStart:
    return instance.exports.memory;
  }
});

const { module, instance } = await WebAssembly.instantiate(...,
  rtrace.install({ ...imports... })
);
instance.exports._start();
...
if (rtrace.active) {
  let leakCount = rtrace.check();
  if (leakCount) {
    // handle error
  }
}
```

Note that references in globals which are not cleared before collection is performed appear as leaks, including their inner members. A TypedArray would leak itself and its backing ArrayBuffer in this case, for example. This is perfectly normal, and clearing all globals avoids this.

<p align="center">
  <a href="https://assemblyscript.org" target="_blank" rel="noopener"><img width="100" src="https://avatars1.githubusercontent.com/u/28916798?s=200&v=4" alt="AssemblyScript logo"></a>
</p>

<p align="center">
  <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3ATest"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Test/master?label=test&logo=github" alt="Test status" /></a>
  <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3APublish"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Publish/master?label=publish&logo=github" alt="Publish status" /></a>
  <a href="https://www.npmjs.com/package/assemblyscript"><img src="https://img.shields.io/npm/v/assemblyscript.svg?label=compiler&color=007acc&logo=npm" alt="npm compiler version" /></a>
  <a href="https://www.npmjs.com/package/@assemblyscript/loader"><img src="https://img.shields.io/npm/v/@assemblyscript/loader.svg?label=loader&color=007acc&logo=npm" alt="npm loader version" /></a>
  <a href="https://discord.gg/assemblyscript"><img src="https://img.shields.io/discord/721472913886281818.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2" alt="Discord online" /></a>
</p>

<p align="justify"><strong>AssemblyScript</strong> compiles a strict variant of <a href="http://www.typescriptlang.org">TypeScript</a> (basically JavaScript with types) to <a href="http://webassembly.org">WebAssembly</a> using <a href="https://github.com/WebAssembly/binaryen">Binaryen</a>.
It generates lean and mean WebAssembly modules while being just an <code>npm install</code> away.</p> <h3 align="center"> <a href="https://assemblyscript.org">About</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/introduction.html">Introduction</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/quick-start.html">Quick&nbsp;start</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/examples.html">Examples</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/development.html">Development&nbsp;instructions</a> </h3> <br> <h2 align="center">Contributors</h2> <p align="center"> <a href="https://assemblyscript.org/#contributors"><img src="https://assemblyscript.org/contributors.svg" alt="Contributor logos" width="720" /></a> </p> <h2 align="center">Thanks to our sponsors!</h2> <p align="justify">Most of the core team members and most contributors do this open source work in their free time. If you use AssemblyScript for a serious task or plan to do so, and you'd like us to invest more time on it, <a href="https://opencollective.com/assemblyscript/donate" target="_blank" rel="noopener">please donate</a> to our <a href="https://opencollective.com/assemblyscript" target="_blank" rel="noopener">OpenCollective</a>. By sponsoring this project, your logo will show up below. Thank you so much for your support!</p> <p align="center"> <a href="https://assemblyscript.org/#sponsors"><img src="https://assemblyscript.org/sponsors.svg" alt="Sponsor logos" width="720" /></a> </p> ESQuery is a library for querying the AST output by Esprima for patterns of syntax using a CSS style selector system. Check out the demo: [demo](https://estools.github.io/esquery/) The following selectors are supported: * AST node type: `ForStatement` * [wildcard](http://dev.w3.org/csswg/selectors4/#universal-selector): `*` * [attribute existence](http://dev.w3.org/csswg/selectors4/#attribute-selectors): `[attr]` * [attribute value](http://dev.w3.org/csswg/selectors4/#attribute-selectors): `[attr="foo"]` or `[attr=123]` * attribute regex: `[attr=/foo.*/]` or (with flags) `[attr=/foo.*/is]` * attribute conditions: `[attr!="foo"]`, `[attr>2]`, `[attr<3]`, `[attr>=2]`, or `[attr<=3]` * nested attribute: `[attr.level2="foo"]` * field: `FunctionDeclaration > Identifier.id` * [First](http://dev.w3.org/csswg/selectors4/#the-first-child-pseudo) or [last](http://dev.w3.org/csswg/selectors4/#the-last-child-pseudo) child: `:first-child` or `:last-child` * [nth-child](http://dev.w3.org/csswg/selectors4/#the-nth-child-pseudo) (no ax+b support): `:nth-child(2)` * [nth-last-child](http://dev.w3.org/csswg/selectors4/#the-nth-last-child-pseudo) (no ax+b support): `:nth-last-child(1)` * [descendant](http://dev.w3.org/csswg/selectors4/#descendant-combinators): `ancestor descendant` * [child](http://dev.w3.org/csswg/selectors4/#child-combinators): `parent > child` * [following sibling](http://dev.w3.org/csswg/selectors4/#general-sibling-combinators): `node ~ sibling` * [adjacent sibling](http://dev.w3.org/csswg/selectors4/#adjacent-sibling-combinators): `node + adjacent` * [negation](http://dev.w3.org/csswg/selectors4/#negation-pseudo): `:not(ForStatement)` * [has](https://drafts.csswg.org/selectors-4/#has-pseudo): `:has(ForStatement)` * [matches-any](http://dev.w3.org/csswg/selectors4/#matches): `:matches([attr] > :first-child, :last-child)` * [subject indicator](http://dev.w3.org/csswg/selectors4/#subject): `!IfStatement > [name="foo"]` * class of AST node: `:statement`, `:expression`, `:declaration`, `:function`, or `:pattern` [![Build 
Status](https://travis-ci.org/estools/esquery.png?branch=master)](https://travis-ci.org/estools/esquery) ### Estraverse [![Build Status](https://secure.travis-ci.org/estools/estraverse.svg)](http://travis-ci.org/estools/estraverse) Estraverse ([estraverse](http://github.com/estools/estraverse)) is [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) traversal functions from [esmangle project](http://github.com/estools/esmangle). ### Documentation You can find usage docs at [wiki page](https://github.com/estools/estraverse/wiki/Usage). ### Example Usage The following code will output all variables declared at the root of a file. ```javascript estraverse.traverse(ast, { enter: function (node, parent) { if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration') return estraverse.VisitorOption.Skip; }, leave: function (node, parent) { if (node.type == 'VariableDeclarator') console.log(node.id.name); } }); ``` We can use `this.skip`, `this.remove` and `this.break` functions instead of using Skip, Remove and Break. ```javascript estraverse.traverse(ast, { enter: function (node) { this.break(); } }); ``` And estraverse provides `estraverse.replace` function. When returning node from `enter`/`leave`, current node is replaced with it. ```javascript result = estraverse.replace(tree, { enter: function (node) { // Replace it with replaced. if (node.type === 'Literal') return replaced; } }); ``` By passing `visitor.keys` mapping, we can extend estraverse traversing functionality. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Extending the existing traversing rules. keys: { // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ] TestExpression: ['argument'] } }); ``` By passing `visitor.fallback` option, we can control the behavior when encountering unknown nodes. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Iterating the child **nodes** of unknown nodes. fallback: 'iteration' }); ``` When `visitor.fallback` is a function, we can determine which keys to visit on each node. ```javascript // This tree contains a user-defined `TestExpression` node. var tree = { type: 'TestExpression', // This 'argument' is the property containing the other **node**. argument: { type: 'Literal', value: 20 }, // This 'extended' is the property not containing the other **node**. extended: true }; estraverse.traverse(tree, { enter: function (node) { }, // Skip the `argument` property of each node fallback: function(node) { return Object.keys(node).filter(function(key) { return key !== 'argument'; }); } }); ``` ### License Copyright (C) 2012-2016 [Yusuke Suzuki](http://github.com/Constellation) (twitter: [@Constellation](http://twitter.com/Constellation)) and other contributors. 
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. <img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/img/ajv.svg"> &nbsp; # Ajv JSON schema validator The fastest JSON validator for Node.js and browser. Supports JSON Schema draft-04/06/07/2019-09/2020-12 ([draft-04 support](https://ajv.js.org/json-schema.html#draft-04) requires ajv-draft-04 package) and JSON Type Definition [RFC8927](https://datatracker.ietf.org/doc/rfc8927/). [![build](https://github.com/ajv-validator/ajv/workflows/build/badge.svg)](https://github.com/ajv-validator/ajv/actions?query=workflow%3Abuild) [![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv) [![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv) [![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master) [![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv) [![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin) ## Platinum sponsors [<img src="https://ajv.js.org/img/mozilla.svg" width="45%">](https://www.mozilla.org)<img src="https://ajv.js.org/img/gap.svg" width="8%">[<img src="https://ajv.js.org/img/reserved.svg" width="45%">](https://opencollective.com/ajv) ## Ajv online event - May 20, 10am PT / 6pm UK We will talk about: - new features of Ajv version 8. - the improvements sponsored by Mozilla's MOSS grant. - how Ajv is used in JavaScript applications. Speakers: - [Evgeny Poberezkin](https://github.com/epoberezkin), the creator of Ajv. - [Mehan Jayasuriya](https://github.com/mehan), Program Officer at Mozilla Foundation, leading the [MOSS](https://www.mozilla.org/en-US/moss/) and other programs investing in the open source and community ecosystems. - [Matteo Collina](https://github.com/mcollina), Technical Director at NearForm and Node.js Technical Steering Committee member, creator of Fastify web framework. - [Kin Lane](https://github.com/kinlane), Chief Evangelist at Postman. Studying the tech, business & politics of APIs since 2010. Presidential Innovation Fellow during the Obama administration. - [Ulysse Carion](https://github.com/ucarion), the creator of JSON Type Definition specification. 
[Gajus Kuizinas](https://github.com/gajus) will host the event. Please [register here](https://us02web.zoom.us/webinar/register/2716192553618/WN_erJ_t4ICTHOnGC1SOybNnw). ## Contributing More than 100 people contributed to Ajv, and we would love to have you join the development. We welcome implementing new features that will benefit many users and ideas to improve our documentation. Please review [Contributing guidelines](./CONTRIBUTING.md) and [Code components](https://ajv.js.org/components.html). ## Documentation All documentation is available on the [Ajv website](https://ajv.js.org). Some useful site links: - [Getting started](https://ajv.js.org/guide/getting-started.html) - [JSON Schema vs JSON Type Definition](https://ajv.js.org/guide/schema-language.html) - [API reference](https://ajv.js.org/api.html) - [Strict mode](https://ajv.js.org/strict-mode.html) - [Standalone validation code](https://ajv.js.org/standalone.html) - [Security considerations](https://ajv.js.org/security.html) - [Command line interface](https://ajv.js.org/packages/ajv-cli.html) - [Frequently Asked Questions](https://ajv.js.org/faq.html) ## <a name="sponsors"></a>Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin) Since I asked to support Ajv development 40 people and 6 organizations contributed via GitHub and OpenCollective - this support helped receiving the MOSS grant! Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released. Please sponsor Ajv via: - [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it) - [Ajv Open Collective️](https://opencollective.com/ajv) Thank you. #### Open Collective sponsors <a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a> <a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a> ## Performance Ajv generates code to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization. 
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=62,4,1&chs=600x416&chxl=-1:|ajv|@exodus&#x2F;schemasafe|is-my-json-valid|djv|@cfworker&#x2F;json-schema|jsonschema&chd=t:100,69.2,51.5,13.1,5.1,1.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements JSON Schema [draft-06/07/2019-09/2020-12](http://json-schema.org/) standards (draft-04 is supported in v6): - all validation keywords (see [JSON Schema validation keywords](https://ajv.js.org/json-schema.html)) - [OpenAPI](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.3.md) extensions: - NEW: keyword [discriminator](https://ajv.js.org/json-schema.html#discriminator). - keyword [nullable](https://ajv.js.org/json-schema.html#nullable). - full support of remote references (remote schemas have to be added with `addSchema` or compiled to be available) - support of recursive references between schemas - correct string lengths for strings with unicode pairs - JSON Schema [formats](https://ajv.js.org/guide/formats.html) (with [ajv-formats](https://github.com/ajv-validator/ajv-formats) plugin). - [validates schemas against meta-schema](https://ajv.js.org/api.html#api-validateschema) - NEW: supports [JSON Type Definition](https://datatracker.ietf.org/doc/rfc8927/): - all keywords (see [JSON Type Definition schema forms](https://ajv.js.org/json-type-definition.html)) - meta-schema for JTD schemas - "union" keyword and user-defined keywords (can be used inside "metadata" member of the schema) - supports [browsers](https://ajv.js.org/guide/environments.html#browsers) and Node.js 10.x - current - [asynchronous loading](https://ajv.js.org/guide/managing-schemas.html#asynchronous-schema-loading) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](https://ajv.js.org/options.html#allerrors) - [error messages with parameters](https://ajv.js.org/api.html#validation-errors) describing error reasons to allow error message generation - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [removing-additional-properties](https://ajv.js.org/guide/modifying-data.html#removing-additional-properties) - [assigning defaults](https://ajv.js.org/guide/modifying-data.html#assigning-defaults) to missing properties and items - [coercing data](https://ajv.js.org/guide/modifying-data.html#coercing-data-types) to the types specified in `type` keywords - [user-defined keywords](https://ajv.js.org/guide/user-keywords.html) - additional extension keywords with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [\$data reference](https://ajv.js.org/guide/combining-schemas.html#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](https://ajv.js.org/guide/async-validation.html) of user-defined formats and 
keywords ## Install To install version 8: ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://runkit.com/npm/ajv In JavaScript: ```javascript // or ESM/TypeScript import import Ajv from "ajv" // Node.js require: const Ajv = require("ajv") const ajv = new Ajv() // options can be passed, e.g. {allErrors: true} const schema = { type: "object", properties: { foo: {type: "integer"}, bar: {type: "string"} }, required: ["foo"], additionalProperties: false, } const data = { foo: 1, bar: "abc" } const validate = ajv.compile(schema) const valid = validate(data) if (!valid) console.log(validate.errors) ``` Learn how to use Ajv and see more examples in the [Guide: getting started](https://ajv.js.org/guide/getting-started.html) ## Changes history See [https://github.com/ajv-validator/ajv/releases](https://github.com/ajv-validator/ajv/releases) **Please note**: [Changes in version 8.0.0](https://github.com/ajv-validator/ajv/releases/tag/v8.0.0) [Version 7.0.0](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0) [Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0). ## Code of conduct Please review and follow the [Code of conduct](./CODE_OF_CONDUCT.md). Please report any unacceptable behaviour to [email protected] - it will be reviewed by the project team. ## Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ## Open-source software support Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides a centralised support to open-source software users, in addition to the support provided by software maintainers. ## License [MIT](./LICENSE) # which-module > Find the module object for something that was require()d [![Build Status](https://travis-ci.org/nexdrew/which-module.svg?branch=master)](https://travis-ci.org/nexdrew/which-module) [![Coverage Status](https://coveralls.io/repos/github/nexdrew/which-module/badge.svg?branch=master)](https://coveralls.io/github/nexdrew/which-module?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) Find the `module` object in `require.cache` for something that was `require()`d or `import`ed - essentially a reverse `require()` lookup. Useful for libs that want to e.g. lookup a filename for a module or submodule that it did not `require()` itself. ## Install and Usage ``` npm install --save which-module ``` ```js const whichModule = require('which-module') console.log(whichModule(require('something'))) // Module { // id: '/path/to/project/node_modules/something/index.js', // exports: [Function], // parent: ..., // filename: '/path/to/project/node_modules/something/index.js', // loaded: true, // children: [], // paths: [ '/path/to/project/node_modules/something/node_modules', // '/path/to/project/node_modules', // '/path/to/node_modules', // '/path/node_modules', // '/node_modules' ] } ``` ## API ### `whichModule(exported)` Return the [`module` object](https://nodejs.org/api/modules.html#modules_the_module_object), if any, that represents the given argument in the `require.cache`. 
`exported` can be anything that was previously `require()`d or `import`ed as a module, submodule, or dependency - which means `exported` is identical to the `module.exports` returned by this method. If `exported` did not come from the `exports` of a `module` in `require.cache`, then this method returns `null`. ## License ISC © Contributors # yallist Yet Another Linked List There are many doubly-linked list implementations like it, but this one is mine. For when an array would be too big, and a Map can't be iterated in reverse order. [![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist) ## basic usage ```javascript var yallist = require('yallist') var myList = yallist.create([1, 2, 3]) myList.push('foo') myList.unshift('bar') // of course pop() and shift() are there, too console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo'] myList.forEach(function (k) { // walk the list head to tail }) myList.forEachReverse(function (k, index, list) { // walk the list tail to head }) var myDoubledList = myList.map(function (k) { return k + k }) // now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo'] // mapReverse is also a thing var myDoubledListReverse = myList.mapReverse(function (k) { return k + k }) // ['foofoo', 6, 4, 2, 'barbar'] var reduced = myList.reduce(function (set, entry) { set += entry return set }, 'start') console.log(reduced) // 'startfoo123bar' ``` ## api The whole API is considered "public". Functions with the same name as an Array method work more or less the same way. There's reverse versions of most things because that's the point. ### Yallist Default export, the class that holds and manages a list. Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list. The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots. ### Yallist.create(..) Alias for Yallist function. Some people like factories. #### yallist.head The first node in the list #### yallist.tail The last node in the list #### yallist.length The number of nodes in the list. (Change this at your peril. It is not magic like Array length.) #### yallist.toArray() Convert the list to an array. #### yallist.forEach(fn, [thisp]) Call a function on each item in the list. #### yallist.forEachReverse(fn, [thisp]) Call a function on each item in the list, in reverse order. #### yallist.get(n) Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array. #### yallist.getReverse(n) Get the data at position `n`, counting from the tail. #### yallist.map(fn, thisp) Create a new Yallist with the result of calling the function on each item. #### yallist.mapReverse(fn, thisp) Same as `map`, but in reverse. #### yallist.pop() Get the data from the list tail, and remove the tail from the list. #### yallist.push(item, ...) Insert one or more items to the tail of the list. #### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. 
#### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. #### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) # cliui ![ci](https://github.com/yargs/cliui/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/cliui) easily create complex multi-column command-line-interfaces. ## Example ```js const ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` ## Deno/ESM Support As of `v7` `cliui` supports [Deno](https://github.com/denoland/deno) and [ESM](https://nodejs.org/api/esm.html#esm_ecmascript_modules): ```typescript import cliui from "https://deno.land/x/cliui/deno.ts"; const ui = cliui({}) ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div({ text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. 
If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. <img align="right" alt="Ajv logo" width="160" src="https://ajv.js.org/images/ajv_logo.png"> # Ajv: Another JSON Schema Validator The fastest JSON Schema validator for Node.js and browser. Supports draft-04/06/07. [![Build Status](https://travis-ci.org/ajv-validator/ajv.svg?branch=master)](https://travis-ci.org/ajv-validator/ajv) [![npm](https://img.shields.io/npm/v/ajv.svg)](https://www.npmjs.com/package/ajv) [![npm (beta)](https://img.shields.io/npm/v/ajv/beta)](https://www.npmjs.com/package/ajv/v/7.0.0-beta.0) [![npm downloads](https://img.shields.io/npm/dm/ajv.svg)](https://www.npmjs.com/package/ajv) [![Coverage Status](https://coveralls.io/repos/github/ajv-validator/ajv/badge.svg?branch=master)](https://coveralls.io/github/ajv-validator/ajv?branch=master) [![Gitter](https://img.shields.io/gitter/room/ajv-validator/ajv.svg)](https://gitter.im/ajv-validator/ajv) [![GitHub Sponsors](https://img.shields.io/badge/$-sponsors-brightgreen)](https://github.com/sponsors/epoberezkin) ## Ajv v7 beta is released [Ajv version 7.0.0-beta.0](https://github.com/ajv-validator/ajv/tree/v7-beta) is released with these changes: - to reduce the mistakes in JSON schemas and unexpected validation results, [strict mode](./docs/strict-mode.md) is added - it prohibits ignored or ambiguous JSON Schema elements. - to make code injection from untrusted schemas impossible, [code generation](./docs/codegen.md) is fully re-written to be safe. - to simplify Ajv extensions, the new keyword API that is used by pre-defined keywords is available to user-defined keywords - it is much easier to define any keywords now, especially with subschemas. - schemas are compiled to ES6 code (ES5 code generation is supported with an option). - to improve reliability and maintainability the code is migrated to TypeScript. **Please note**: - the support for JSON-Schema draft-04 is removed - if you have schemas using "id" attributes you have to replace them with "\$id" (or continue using version 6 that will be supported until 02/28/2021). - all formats are separated to ajv-formats package - they have to be explicitely added if you use them. See [release notes](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0) for the details. To install the new version: ```bash npm install ajv@beta ``` See [Getting started with v7](https://github.com/ajv-validator/ajv/tree/v7-beta#usage) for code example. 
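For orientation, here is a hedged sketch of what a v7 setup might look like, assuming the `compile`/`validate` flow stays as in v6 and that formats come from the separate `ajv-formats` package noted above; the schema itself is illustrative:

```javascript
// Hedged sketch: assumes ajv@beta (v7) and the separate ajv-formats
// package mentioned above are both installed.
const Ajv = require("ajv");
const addFormats = require("ajv-formats"); // formats are no longer bundled in v7

const ajv = new Ajv();
addFormats(ajv);

const schema = {
  type: "object",
  properties: { email: { type: "string", format: "email" } },
  required: ["email"],
  additionalProperties: false
};

const validate = ajv.compile(schema);
console.log(validate({ email: "user@example.com" })); // true
console.log(validate({ email: 42 }));                 // false
console.log(validate.errors);                         // details of the failure
```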
## Mozilla MOSS grant and OpenJS Foundation [<img src="https://www.poberezkin.com/images/mozilla.png" width="240" height="68">](https://www.mozilla.org/en-US/moss/) &nbsp;&nbsp;&nbsp; [<img src="https://www.poberezkin.com/images/openjs.png" width="220" height="68">](https://openjsf.org/blog/2020/08/14/ajv-joins-openjs-foundation-as-an-incubation-project/) Ajv has been awarded a grant from Mozilla’s [Open Source Support (MOSS) program](https://www.mozilla.org/en-US/moss/) in the “Foundational Technology” track! It will sponsor the development of Ajv support of [JSON Schema version 2019-09](https://tools.ietf.org/html/draft-handrews-json-schema-02) and of [JSON Type Definition](https://tools.ietf.org/html/draft-ucarion-json-type-definition-04). Ajv also joined [OpenJS Foundation](https://openjsf.org/) – having this support will help ensure the longevity and stability of Ajv for all its users. This [blog post](https://www.poberezkin.com/posts/2020-08-14-ajv-json-validator-mozilla-open-source-grant-openjs-foundation.html) has more details. I am looking for the long term maintainers of Ajv – working with [ReadySet](https://www.thereadyset.co/), also sponsored by Mozilla, to establish clear guidelines for the role of a "maintainer" and the contribution standards, and to encourage a wider, more inclusive, contribution from the community. ## Please [sponsor Ajv development](https://github.com/sponsors/epoberezkin) Since I asked to support Ajv development 40 people and 6 organizations contributed via GitHub and OpenCollective - this support helped receiving the MOSS grant! Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released. Please sponsor Ajv via: - [GitHub sponsors page](https://github.com/sponsors/epoberezkin) (GitHub will match it) - [Ajv Open Collective️](https://opencollective.com/ajv) Thank you. #### Open Collective sponsors <a href="https://opencollective.com/ajv"><img src="https://opencollective.com/ajv/individuals.svg?width=890"></a> <a href="https://opencollective.com/ajv/organization/0/website"><img src="https://opencollective.com/ajv/organization/0/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/1/website"><img src="https://opencollective.com/ajv/organization/1/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/2/website"><img src="https://opencollective.com/ajv/organization/2/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/3/website"><img src="https://opencollective.com/ajv/organization/3/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/4/website"><img src="https://opencollective.com/ajv/organization/4/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/5/website"><img src="https://opencollective.com/ajv/organization/5/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/6/website"><img src="https://opencollective.com/ajv/organization/6/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/7/website"><img src="https://opencollective.com/ajv/organization/7/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/8/website"><img src="https://opencollective.com/ajv/organization/8/avatar.svg"></a> <a href="https://opencollective.com/ajv/organization/9/website"><img src="https://opencollective.com/ajv/organization/9/avatar.svg"></a> ## Using version 6 [JSON Schema draft-07](http://json-schema.org/latest/json-schema-validation.html) is published. 
[Ajv version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0) that supports draft-07 is released. It may require either migrating your schemas or updating your code (to continue using draft-04 and v5 schemas, draft-06 schemas will be supported without changes). __Please note__: To use Ajv with draft-06 schemas you need to explicitly add the meta-schema to the validator instance: ```javascript ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-06.json')); ``` To use Ajv with draft-04 schemas in addition to explicitly adding meta-schema you also need to use option schemaId: ```javascript var ajv = new Ajv({schemaId: 'id'}); // If you want to use both draft-04 and draft-06/07 schemas: // var ajv = new Ajv({schemaId: 'auto'}); ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-04.json')); ``` ## Contents - [Performance](#performance) - [Features](#features) - [Getting started](#getting-started) - [Frequently Asked Questions](https://github.com/ajv-validator/ajv/blob/master/FAQ.md) - [Using in browser](#using-in-browser) - [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp) - [Command line interface](#command-line-interface) - Validation - [Keywords](#validation-keywords) - [Annotation keywords](#annotation-keywords) - [Formats](#formats) - [Combining schemas with $ref](#ref) - [$data reference](#data-reference) - NEW: [$merge and $patch keywords](#merge-and-patch-keywords) - [Defining custom keywords](#defining-custom-keywords) - [Asynchronous schema compilation](#asynchronous-schema-compilation) - [Asynchronous validation](#asynchronous-validation) - [Security considerations](#security-considerations) - [Security contact](#security-contact) - [Untrusted schemas](#untrusted-schemas) - [Circular references in objects](#circular-references-in-javascript-objects) - [Trusted schemas](#security-risks-of-trusted-schemas) - [ReDoS attack](#redos-attack) - Modifying data during validation - [Filtering data](#filtering-data) - [Assigning defaults](#assigning-defaults) - [Coercing data types](#coercing-data-types) - API - [Methods](#api) - [Options](#options) - [Validation errors](#validation-errors) - [Plugins](#plugins) - [Related packages](#related-packages) - [Some packages using Ajv](#some-packages-using-ajv) - [Tests, Contributing, Changes history](#tests) - [Support, Code of conduct, License](#open-source-software-support) ## Performance Ajv generates code using [doT templates](https://github.com/olado/doT) to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization. 
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks: - [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) - 50% faster than the second place - [jsck benchmark](https://github.com/pandastrike/jsck#benchmarks) - 20-190% faster - [z-schema benchmark](https://rawgit.com/zaggino/z-schema/master/benchmark/results.html) - [themis benchmark](https://cdn.rawgit.com/playlyfe/themis/master/benchmark/results.html) Performance of different validators by [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark): [![performance](https://chart.googleapis.com/chart?chxt=x,y&cht=bhs&chco=76A4FB&chls=2.0&chbh=32,4,1&chs=600x416&chxl=-1:|djv|ajv|json-schema-validator-generator|jsen|is-my-json-valid|themis|z-schema|jsck|skeemas|json-schema-library|tv4&chd=t:100,98,72.1,66.8,50.1,15.1,6.1,3.8,1.2,0.7,0.2)](https://github.com/ebdrup/json-schema-benchmark/blob/master/README.md#performance) ## Features - Ajv implements full JSON Schema [draft-06/07](http://json-schema.org/) and draft-04 standards: - all validation keywords (see [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md)) - full support of remote refs (remote schemas have to be added with `addSchema` or compiled to be available) - support of circular references between schemas - correct string lengths for strings with unicode pairs (can be turned off) - [formats](#formats) defined by JSON Schema draft-07 standard and custom formats (can be turned off) - [validates schemas against meta-schema](#api-validateschema) - supports [browsers](#using-in-browser) and Node.js 0.10-14.x - [asynchronous loading](#asynchronous-schema-compilation) of referenced schemas during compilation - "All errors" validation mode with [option allErrors](#options) - [error messages with parameters](#validation-errors) describing error reasons to allow creating custom error messages - i18n error messages support with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package - [filtering data](#filtering-data) from additional properties - [assigning defaults](#assigning-defaults) to missing properties and items - [coercing data](#coercing-data-types) to the types specified in `type` keywords - [custom keywords](#defining-custom-keywords) - draft-06/07 keywords `const`, `contains`, `propertyNames` and `if/then/else` - draft-06 boolean schemas (`true`/`false` as a schema to always pass/fail). - keywords `switch`, `patternRequired`, `formatMaximum` / `formatMinimum` and `formatExclusiveMaximum` / `formatExclusiveMinimum` from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) with [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - [$data reference](#data-reference) to use values from the validated data as values for the schema keywords - [asynchronous validation](#asynchronous-validation) of custom formats and keywords ## Install ``` npm install ajv ``` ## <a name="usage"></a>Getting started Try it in the Node.js REPL: https://tonicdev.com/npm/ajv The fastest validation call: ```javascript // Node.js require: var Ajv = require('ajv'); // or ESM/TypeScript import import Ajv from 'ajv'; var ajv = new Ajv(); // options can be passed, e.g. {allErrors: true} var validate = ajv.compile(schema); var valid = validate(data); if (!valid) console.log(validate.errors); ``` or with less code ```javascript // ... var valid = ajv.validate(schema, data); if (!valid) console.log(ajv.errors); // ... 
``` or ```javascript // ... var valid = ajv.addSchema(schema, 'mySchema') .validate('mySchema', data); if (!valid) console.log(ajv.errorsText()); // ... ``` See [API](#api) and [Options](#options) for more details. Ajv compiles schemas to functions and caches them in all cases (using schema serialized with [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) or a custom function as a key), so that the next time the same schema is used (not necessarily the same object instance) it won't be compiled again. The best performance is achieved when using compiled functions returned by `compile` or `getSchema` methods (there is no additional function call). __Please note__: every time a validation function or `ajv.validate` are called `errors` property is overwritten. You need to copy `errors` array reference to another variable if you want to use it later (e.g., in the callback). See [Validation errors](#validation-errors) __Note for TypeScript users__: `ajv` provides its own TypeScript declarations out of the box, so you don't need to install the deprecated `@types/ajv` module. ## Using in browser You can require Ajv directly from the code you browserify - in this case Ajv will be a part of your bundle. If you need to use Ajv in several bundles you can create a separate UMD bundle using `npm run bundle` script (thanks to [siddo420](https://github.com/siddo420)). Then you need to load Ajv in the browser: ```html <script src="ajv.min.js"></script> ``` This bundle can be used with different module systems; it creates global `Ajv` if no module system is found. The browser bundle is available on [cdnjs](https://cdnjs.com/libraries/ajv). Ajv is tested with these browsers: [![Sauce Test Status](https://saucelabs.com/browser-matrix/epoberezkin.svg)](https://saucelabs.com/u/epoberezkin) __Please note__: some frameworks, e.g. Dojo, may redefine global require in such way that is not compatible with CommonJS module format. In such case Ajv bundle has to be loaded before the framework and then you can use global Ajv (see issue [#234](https://github.com/ajv-validator/ajv/issues/234)). ### Ajv and Content Security Policies (CSP) If you're using Ajv to compile a schema (the typical use) in a browser document that is loaded with a Content Security Policy (CSP), that policy will require a `script-src` directive that includes the value `'unsafe-eval'`. :warning: NOTE, however, that `unsafe-eval` is NOT recommended in a secure CSP[[1]](https://developer.chrome.com/extensions/contentSecurityPolicy#relaxing-eval), as it has the potential to open the document to cross-site scripting (XSS) attacks. In order to make use of Ajv without easing your CSP, you can [pre-compile a schema using the CLI](https://github.com/ajv-validator/ajv-cli#compile-schemas). This will transpile the schema JSON into a JavaScript file that exports a `validate` function that works simlarly to a schema compiled at runtime. Note that pre-compilation of schemas is performed using [ajv-pack](https://github.com/ajv-validator/ajv-pack) and there are [some limitations to the schema features it can compile](https://github.com/ajv-validator/ajv-pack#limitations). A successfully pre-compiled schema is equivalent to the same schema compiled at runtime. ## Command line interface CLI is available as a separate npm package [ajv-cli](https://github.com/ajv-validator/ajv-cli). 
It supports: - compiling JSON Schemas to test their validity - BETA: generating standalone module exporting a validation function to be used without Ajv (using [ajv-pack](https://github.com/ajv-validator/ajv-pack)) - migrate schemas to draft-07 (using [json-schema-migrate](https://github.com/epoberezkin/json-schema-migrate)) - validating data file(s) against JSON Schema - testing expected validity of data against JSON Schema - referenced schemas - custom meta-schemas - files in JSON, JSON5, YAML, and JavaScript format - all Ajv options - reporting changes in data after validation in [JSON-patch](https://tools.ietf.org/html/rfc6902) format ## Validation keywords Ajv supports all validation keywords from draft-07 of JSON Schema standard: - [type](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#type) - [for numbers](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-numbers) - maximum, minimum, exclusiveMaximum, exclusiveMinimum, multipleOf - [for strings](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-strings) - maxLength, minLength, pattern, format - [for arrays](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-arrays) - maxItems, minItems, uniqueItems, items, additionalItems, [contains](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#contains) - [for objects](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-objects) - maxProperties, minProperties, required, properties, patternProperties, additionalProperties, dependencies, [propertyNames](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#propertynames) - [for all types](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#keywords-for-all-types) - enum, [const](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#const) - [compound keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#compound-keywords) - not, oneOf, anyOf, allOf, [if/then/else](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#ifthenelse) With [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package Ajv also supports validation keywords from [JSON Schema extension proposals](https://github.com/json-schema/json-schema/wiki/v5-Proposals) for JSON Schema standard: - [patternRequired](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#patternrequired-proposed) - like `required` but with patterns that some property should match. - [formatMaximum, formatMinimum, formatExclusiveMaximum, formatExclusiveMinimum](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md#formatmaximum--formatminimum-and-exclusiveformatmaximum--exclusiveformatminimum-proposed) - setting limits for date, time, etc. See [JSON Schema validation keywords](https://github.com/ajv-validator/ajv/blob/master/KEYWORDS.md) for more details. ## Annotation keywords JSON Schema specification defines several annotation keywords that describe schema itself but do not perform any validation. - `title` and `description`: information about the data represented by that schema - `$comment` (NEW in draft-07): information for developers. With option `$comment` Ajv logs or passes the comment string to the user-supplied function. See [Options](#options). - `default`: a default value of the data instance, see [Assigning defaults](#assigning-defaults). - `examples` (NEW in draft-06): an array of data instances. Ajv does not check the validity of these instances against the schema. 
- `readOnly` and `writeOnly` (NEW in draft-07): marks data-instance as read-only or write-only in relation to the source of the data (database, api, etc.). - `contentEncoding`: [RFC 2045](https://tools.ietf.org/html/rfc2045#section-6.1 ), e.g., "base64". - `contentMediaType`: [RFC 2046](https://tools.ietf.org/html/rfc2046), e.g., "image/png". __Please note__: Ajv does not implement validation of the keywords `examples`, `contentEncoding` and `contentMediaType` but it reserves them. If you want to create a plugin that implements some of them, it should remove these keywords from the instance. ## Formats Ajv implements formats defined by JSON Schema specification and several other formats. It is recommended NOT to use "format" keyword implementations with untrusted data, as they use potentially unsafe regular expressions - see [ReDoS attack](#redos-attack). __Please note__: if you need to use "format" keyword to validate untrusted data, you MUST assess their suitability and safety for your validation scenarios. The following formats are implemented for string validation with "format" keyword: - _date_: full-date according to [RFC3339](http://tools.ietf.org/html/rfc3339#section-5.6). - _time_: time with optional time-zone. - _date-time_: date-time from the same source (time-zone is mandatory). `date`, `time` and `date-time` validate ranges in `full` mode and only regexp in `fast` mode (see [options](#options)). - _uri_: full URI. - _uri-reference_: URI reference, including full and relative URIs. - _uri-template_: URI template according to [RFC6570](https://tools.ietf.org/html/rfc6570) - _url_ (deprecated): [URL record](https://url.spec.whatwg.org/#concept-url). - _email_: email address. - _hostname_: host name according to [RFC1034](http://tools.ietf.org/html/rfc1034#section-3.5). - _ipv4_: IP address v4. - _ipv6_: IP address v6. - _regex_: tests whether a string is a valid regular expression by passing it to RegExp constructor. - _uuid_: Universally Unique IDentifier according to [RFC4122](http://tools.ietf.org/html/rfc4122). - _json-pointer_: JSON-pointer according to [RFC6901](https://tools.ietf.org/html/rfc6901). - _relative-json-pointer_: relative JSON-pointer according to [this draft](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00). __Please note__: JSON Schema draft-07 also defines formats `iri`, `iri-reference`, `idn-hostname` and `idn-email` for URLs, hostnames and emails with international characters. Ajv does not implement these formats. If you create Ajv plugin that implements them please make a PR to mention this plugin here. There are two modes of format validation: `fast` and `full`. This mode affects formats `date`, `time`, `date-time`, `uri`, `uri-reference`, and `email`. See [Options](#options) for details. You can add additional formats and replace any of the formats above using [addFormat](#api-addformat) method. The option `unknownFormats` allows changing the default behaviour when an unknown format is encountered. In this case Ajv can either fail schema compilation (default) or ignore it (default in versions before 5.0.0). You also can allow specific format(s) that will be ignored. See [Options](#options) for details. You can find regular expressions used for format validation and the sources that were used in [formats.js](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js). 
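As a quick illustration of the format modes and of adding a format with `addFormat`, here is a hedged sketch; the `semver` format name and its regular expression are made up for this example and are not something Ajv ships:

```javascript
var Ajv = require('ajv');

// 'full' mode (see Options) checks value ranges for date/time,
// not just the regexp used in the default 'fast' mode
var ajv = new Ajv({ format: 'full' });

// A custom format added as a RegExp; the name 'semver' is illustrative only
ajv.addFormat('semver', /^\d+\.\d+\.\d+$/);

var validate = ajv.compile({
  type: 'object',
  properties: {
    released: { type: 'string', format: 'date' },
    version:  { type: 'string', format: 'semver' }
  }
});

console.log(validate({ released: '2020-02-28', version: '1.2.3' })); // true
console.log(validate({ released: '2020-02-30', version: '1.2.3' })); // false in 'full' mode (no such date)
```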
## <a name="ref"></a>Combining schemas with $ref You can structure your validation logic across multiple schema files and have schemas reference each other using `$ref` keyword. Example: ```javascript var schema = { "$id": "http://example.com/schemas/schema.json", "type": "object", "properties": { "foo": { "$ref": "defs.json#/definitions/int" }, "bar": { "$ref": "defs.json#/definitions/str" } } }; var defsSchema = { "$id": "http://example.com/schemas/defs.json", "definitions": { "int": { "type": "integer" }, "str": { "type": "string" } } }; ``` Now to compile your schema you can either pass all schemas to Ajv instance: ```javascript var ajv = new Ajv({schemas: [schema, defsSchema]}); var validate = ajv.getSchema('http://example.com/schemas/schema.json'); ``` or use `addSchema` method: ```javascript var ajv = new Ajv; var validate = ajv.addSchema(defsSchema) .compile(schema); ``` See [Options](#options) and [addSchema](#api) method. __Please note__: - `$ref` is resolved as the uri-reference using schema $id as the base URI (see the example). - References can be recursive (and mutually recursive) to implement the schemas for different data structures (such as linked lists, trees, graphs, etc.). - You don't have to host your schema files at the URIs that you use as schema $id. These URIs are only used to identify the schemas, and according to JSON Schema specification validators should not expect to be able to download the schemas from these URIs. - The actual location of the schema file in the file system is not used. - You can pass the identifier of the schema as the second parameter of `addSchema` method or as a property name in `schemas` option. This identifier can be used instead of (or in addition to) schema $id. - You cannot have the same $id (or the schema identifier) used for more than one schema - the exception will be thrown. - You can implement dynamic resolution of the referenced schemas using `compileAsync` method. In this way you can store schemas in any system (files, web, database, etc.) and reference them without explicitly adding to Ajv instance. See [Asynchronous schema compilation](#asynchronous-schema-compilation). ## $data reference With `$data` option you can use values from the validated data as the values for the schema keywords. See [proposal](https://github.com/json-schema-org/json-schema-spec/issues/51) for more information about how it works. `$data` reference is supported in the keywords: const, enum, format, maximum/minimum, exclusiveMaximum / exclusiveMinimum, maxLength / minLength, maxItems / minItems, maxProperties / minProperties, formatMaximum / formatMinimum, formatExclusiveMaximum / formatExclusiveMinimum, multipleOf, pattern, required, uniqueItems. The value of "$data" should be a [JSON-pointer](https://tools.ietf.org/html/rfc6901) to the data (the root is always the top level data object, even if the $data reference is inside a referenced subschema) or a [relative JSON-pointer](http://tools.ietf.org/html/draft-luff-relative-json-pointer-00) (it is relative to the current point in data; if the $data reference is inside a referenced subschema it cannot point to the data outside of the root level for this subschema). Examples. 
This schema requires that the value in property `smaller` is less or equal than the value in the property larger: ```javascript var ajv = new Ajv({$data: true}); var schema = { "properties": { "smaller": { "type": "number", "maximum": { "$data": "1/larger" } }, "larger": { "type": "number" } } }; var validData = { smaller: 5, larger: 7 }; ajv.validate(schema, validData); // true ``` This schema requires that the properties have the same format as their field names: ```javascript var schema = { "additionalProperties": { "type": "string", "format": { "$data": "0#" } } }; var validData = { 'date-time': '1963-06-19T08:30:06.283185Z', email: '[email protected]' } ``` `$data` reference is resolved safely - it won't throw even if some property is undefined. If `$data` resolves to `undefined` the validation succeeds (with the exclusion of `const` keyword). If `$data` resolves to incorrect type (e.g. not "number" for maximum keyword) the validation fails. ## $merge and $patch keywords With the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) you can use the keywords `$merge` and `$patch` that allow extending JSON Schemas with patches using formats [JSON Merge Patch (RFC 7396)](https://tools.ietf.org/html/rfc7396) and [JSON Patch (RFC 6902)](https://tools.ietf.org/html/rfc6902). To add keywords `$merge` and `$patch` to Ajv instance use this code: ```javascript require('ajv-merge-patch')(ajv); ``` Examples. Using `$merge`: ```json { "$merge": { "source": { "type": "object", "properties": { "p": { "type": "string" } }, "additionalProperties": false }, "with": { "properties": { "q": { "type": "number" } } } } } ``` Using `$patch`: ```json { "$patch": { "source": { "type": "object", "properties": { "p": { "type": "string" } }, "additionalProperties": false }, "with": [ { "op": "add", "path": "/properties/q", "value": { "type": "number" } } ] } } ``` The schemas above are equivalent to this schema: ```json { "type": "object", "properties": { "p": { "type": "string" }, "q": { "type": "number" } }, "additionalProperties": false } ``` The properties `source` and `with` in the keywords `$merge` and `$patch` can use absolute or relative `$ref` to point to other schemas previously added to the Ajv instance or to the fragments of the current schema. See the package [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) for more information. ## Defining custom keywords The advantages of using custom keywords are: - allow creating validation scenarios that cannot be expressed using JSON Schema - simplify your schemas - help bringing a bigger part of the validation logic to your schemas - make your schemas more expressive, less verbose and closer to your application domain - implement custom data processors that modify your data (`modifying` option MUST be used in keyword definition) and/or create side effects while the data is being validated If a keyword is used only for side-effects and its validation result is pre-defined, use option `valid: true/false` in keyword definition to simplify both generated code (no error handling in case of `valid: true`) and your keyword functions (no need to return any validation result). The concerns you have to be aware of when extending JSON Schema standard with custom keywords are the portability and understanding of your schemas. You will have to support these custom keywords on other platforms and to properly document these keywords so that everybody can understand them in your schemas. 
You can define custom keywords with [addKeyword](#api-addkeyword) method. Keywords are defined on the `ajv` instance level - new instances will not have previously defined keywords. Ajv allows defining keywords with: - validation function - compilation function - macro function - inline compilation function that should return code (as string) that will be inlined in the currently compiled schema. Example. `range` and `exclusiveRange` keywords using compiled schema: ```javascript ajv.addKeyword('range', { type: 'number', compile: function (sch, parentSchema) { var min = sch[0]; var max = sch[1]; return parentSchema.exclusiveRange === true ? function (data) { return data > min && data < max; } : function (data) { return data >= min && data <= max; } } }); var schema = { "range": [2, 4], "exclusiveRange": true }; var validate = ajv.compile(schema); console.log(validate(2.01)); // true console.log(validate(3.99)); // true console.log(validate(2)); // false console.log(validate(4)); // false ``` Several custom keywords (typeof, instanceof, range and propertyNames) are defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package - they can be used for your schemas and as a starting point for your own custom keywords. See [Defining custom keywords](https://github.com/ajv-validator/ajv/blob/master/CUSTOM.md) for more details. ## Asynchronous schema compilation During asynchronous compilation remote references are loaded using supplied function. See `compileAsync` [method](#api-compileAsync) and `loadSchema` [option](#options). Example: ```javascript var ajv = new Ajv({ loadSchema: loadSchema }); ajv.compileAsync(schema).then(function (validate) { var valid = validate(data); // ... }); function loadSchema(uri) { return request.json(uri).then(function (res) { if (res.statusCode >= 400) throw new Error('Loading error: ' + res.statusCode); return res.body; }); } ``` __Please note__: [Option](#options) `missingRefs` should NOT be set to `"ignore"` or `"fail"` for asynchronous compilation to work. ## Asynchronous validation Example in Node.js REPL: https://tonicdev.com/esp/ajv-asynchronous-validation You can define custom formats and keywords that perform validation asynchronously by accessing database or some other service. You should add `async: true` in the keyword or format definition (see [addFormat](#api-addformat), [addKeyword](#api-addkeyword) and [Defining custom keywords](#defining-custom-keywords)). If your schema uses asynchronous formats/keywords or refers to some schema that contains them it should have `"$async": true` keyword so that Ajv can compile it correctly. If asynchronous format/keyword or reference to asynchronous schema is used in the schema without `$async` keyword Ajv will throw an exception during schema compilation. __Please note__: all asynchronous subschemas that are referenced from the current or other schemas should have `"$async": true` keyword as well, otherwise the schema compilation will fail. Validation function for an asynchronous custom format/keyword should return a promise that resolves with `true` or `false` (or rejects with `new Ajv.ValidationError(errors)` if you want to return custom errors from the keyword function). Ajv compiles asynchronous schemas to [es7 async functions](http://tc39.github.io/ecmascript-asyncawait/) that can optionally be transpiled with [nodent](https://github.com/MatAtBread/nodent). Async functions are supported in Node.js 7+ and all modern browsers. 
You can also supply any other transpiler as a function via `processCode` option. See [Options](#options). The compiled validation function has `$async: true` property (if the schema is asynchronous), so you can differentiate these functions if you are using both synchronous and asynchronous schemas. Validation result will be a promise that resolves with validated data or rejects with an exception `Ajv.ValidationError` that contains the array of validation errors in `errors` property. Example: ```javascript var ajv = new Ajv; // require('ajv-async')(ajv); ajv.addKeyword('idExists', { async: true, type: 'number', validate: checkIdExists }); function checkIdExists(schema, data) { return knex(schema.table) .select('id') .where('id', data) .then(function (rows) { return !!rows.length; // true if record is found }); } var schema = { "$async": true, "properties": { "userId": { "type": "integer", "idExists": { "table": "users" } }, "postId": { "type": "integer", "idExists": { "table": "posts" } } } }; var validate = ajv.compile(schema); validate({ userId: 1, postId: 19 }) .then(function (data) { console.log('Data is valid', data); // { userId: 1, postId: 19 } }) .catch(function (err) { if (!(err instanceof Ajv.ValidationError)) throw err; // data is invalid console.log('Validation errors:', err.errors); }); ``` ### Using transpilers with asynchronous validation functions. [ajv-async](https://github.com/ajv-validator/ajv-async) uses [nodent](https://github.com/MatAtBread/nodent) to transpile async functions. To use another transpiler you should separately install it (or load its bundle in the browser). #### Using nodent ```javascript var ajv = new Ajv; require('ajv-async')(ajv); // in the browser if you want to load ajv-async bundle separately you can: // window.ajvAsync(ajv); var validate = ajv.compile(schema); // transpiled es7 async function validate(data).then(successFunc).catch(errorFunc); ``` #### Using other transpilers ```javascript var ajv = new Ajv({ processCode: transpileFunc }); var validate = ajv.compile(schema); // transpiled es7 async function validate(data).then(successFunc).catch(errorFunc); ``` See [Options](#options). ## Security considerations JSON Schema, if properly used, can replace data sanitisation. It doesn't replace other API security considerations. It also introduces additional security aspects to consider. ##### Security contact To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues. ##### Untrusted schemas Ajv treats JSON schemas as trusted as your application code. This security model is based on the most common use case, when the schemas are static and bundled together with the application. If your schemas are received from untrusted sources (or generated from untrusted data) there are several scenarios you need to prevent: - compiling schemas can cause stack overflow (if they are too deep) - compiling schemas can be slow (e.g. [#557](https://github.com/ajv-validator/ajv/issues/557)) - validating certain data can be slow It is difficult to predict all the scenarios, but at the very least it may help to limit the size of untrusted schemas (e.g. limit JSON string length) and also the maximum schema object depth (that can be high for relatively small JSON strings). You also may want to mitigate slow regular expressions in `pattern` and `patternProperties` keywords. 
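A rough sketch of such a guard is shown below; the size and depth limits and the helper functions are hypothetical, not something Ajv provides:

```javascript
var Ajv = require('ajv');
var ajv = new Ajv();

// Hypothetical guard: reject untrusted schemas that are too large or too
// deeply nested before handing them to ajv.compile.
function compileUntrusted(schemaJson, maxLength, maxDepth) {
  if (schemaJson.length > maxLength) throw new Error('schema too large');
  var schema = JSON.parse(schemaJson);
  if (depth(schema) > maxDepth) throw new Error('schema too deep');
  return ajv.compile(schema);
}

// Depth of a parsed JSON value (objects/arrays count as one level each)
function depth(value) {
  if (value === null || typeof value !== 'object') return 0;
  var max = 0;
  for (var key in value) max = Math.max(max, depth(value[key]));
  return max + 1;
}

var validate = compileUntrusted('{"type":"string","maxLength":10}', 10000, 20);
console.log(validate('hello')); // true
```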
Regardless the measures you take, using untrusted schemas increases security risks. ##### Circular references in JavaScript objects Ajv does not support schemas and validated data that have circular references in objects. See [issue #802](https://github.com/ajv-validator/ajv/issues/802). An attempt to compile such schemas or validate such data would cause stack overflow (or will not complete in case of asynchronous validation). Depending on the parser you use, untrusted data can lead to circular references. ##### Security risks of trusted schemas Some keywords in JSON Schemas can lead to very slow validation for certain data. These keywords include (but may be not limited to): - `pattern` and `format` for large strings - in some cases using `maxLength` can help mitigate it, but certain regular expressions can lead to exponential validation time even with relatively short strings (see [ReDoS attack](#redos-attack)). - `patternProperties` for large property names - use `propertyNames` to mitigate, but some regular expressions can have exponential evaluation time as well. - `uniqueItems` for large non-scalar arrays - use `maxItems` to mitigate __Please note__: The suggestions above to prevent slow validation would only work if you do NOT use `allErrors: true` in production code (using it would continue validation after validation errors). You can validate your JSON schemas against [this meta-schema](https://github.com/ajv-validator/ajv/blob/master/lib/refs/json-schema-secure.json) to check that these recommendations are followed: ```javascript const isSchemaSecure = ajv.compile(require('ajv/lib/refs/json-schema-secure.json')); const schema1 = {format: 'email'}; isSchemaSecure(schema1); // false const schema2 = {format: 'email', maxLength: MAX_LENGTH}; isSchemaSecure(schema2); // true ``` __Please note__: following all these recommendation is not a guarantee that validation of untrusted data is safe - it can still lead to some undesirable results. ##### Content Security Policies (CSP) See [Ajv and Content Security Policies (CSP)](#ajv-and-content-security-policies-csp) ## ReDoS attack Certain regular expressions can lead to the exponential evaluation time even with relatively short strings. Please assess the regular expressions you use in the schemas on their vulnerability to this attack - see [safe-regex](https://github.com/substack/safe-regex), for example. __Please note__: some formats that Ajv implements use [regular expressions](https://github.com/ajv-validator/ajv/blob/master/lib/compile/formats.js) that can be vulnerable to ReDoS attack, so if you use Ajv to validate data from untrusted sources __it is strongly recommended__ to consider the following: - making assessment of "format" implementations in Ajv. - using `format: 'fast'` option that simplifies some of the regular expressions (although it does not guarantee that they are safe). - replacing format implementations provided by Ajv with your own implementations of "format" keyword that either uses different regular expressions or another approach to format validation. Please see [addFormat](#api-addformat) method. - disabling format validation by ignoring "format" keyword with option `format: false` Whatever mitigation you choose, please assume all formats provided by Ajv as potentially unsafe and make your own assessment of their suitability for your validation scenarios. ## Filtering data With [option `removeAdditional`](#options) (added by [andyscott](https://github.com/andyscott)) you can filter data during the validation. 
This option modifies original data. Example: ```javascript var ajv = new Ajv({ removeAdditional: true }); var schema = { "additionalProperties": false, "properties": { "foo": { "type": "number" }, "bar": { "additionalProperties": { "type": "number" }, "properties": { "baz": { "type": "string" } } } } } var data = { "foo": 0, "additional1": 1, // will be removed; `additionalProperties` == false "bar": { "baz": "abc", "additional2": 2 // will NOT be removed; `additionalProperties` != false }, } var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": 0, "bar": { "baz": "abc", "additional2": 2 } ``` If `removeAdditional` option in the example above were `"all"` then both `additional1` and `additional2` properties would have been removed. If the option were `"failing"` then property `additional1` would have been removed regardless of its value and property `additional2` would have been removed only if its value were failing the schema in the inner `additionalProperties` (so in the example above it would have stayed because it passes the schema, but any non-number would have been removed). __Please note__: If you use `removeAdditional` option with `additionalProperties` keyword inside `anyOf`/`oneOf` keywords your validation can fail with this schema, for example: ```json { "type": "object", "oneOf": [ { "properties": { "foo": { "type": "string" } }, "required": [ "foo" ], "additionalProperties": false }, { "properties": { "bar": { "type": "integer" } }, "required": [ "bar" ], "additionalProperties": false } ] } ``` The intention of the schema above is to allow objects with either the string property "foo" or the integer property "bar", but not with both and not with any other properties. With the option `removeAdditional: true` the validation will pass for the object `{ "foo": "abc"}` but will fail for the object `{"bar": 1}`. It happens because while the first subschema in `oneOf` is validated, the property `bar` is removed because it is an additional property according to the standard (because it is not included in `properties` keyword in the same schema). While this behaviour is unexpected (issues [#129](https://github.com/ajv-validator/ajv/issues/129), [#134](https://github.com/ajv-validator/ajv/issues/134)), it is correct. To have the expected behaviour (both objects are allowed and additional properties are removed) the schema has to be refactored in this way: ```json { "type": "object", "properties": { "foo": { "type": "string" }, "bar": { "type": "integer" } }, "additionalProperties": false, "oneOf": [ { "required": [ "foo" ] }, { "required": [ "bar" ] } ] } ``` The schema above is also more efficient - it will compile into a faster function. ## Assigning defaults With [option `useDefaults`](#options) Ajv will assign values from `default` keyword in the schemas of `properties` and `items` (when it is the array of schemas) to the missing properties and items. With the option value `"empty"` properties and items equal to `null` or `""` (empty string) will be considered missing and assigned defaults. This option modifies original data. __Please note__: the default value is inserted in the generated validation code as a literal, so the value inserted in the data will be the deep clone of the default in the schema. 
Example 1 (`default` in `properties`): ```javascript var ajv = new Ajv({ useDefaults: true }); var schema = { "type": "object", "properties": { "foo": { "type": "number" }, "bar": { "type": "string", "default": "baz" } }, "required": [ "foo", "bar" ] }; var data = { "foo": 1 }; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": 1, "bar": "baz" } ``` Example 2 (`default` in `items`): ```javascript var schema = { "type": "array", "items": [ { "type": "number" }, { "type": "string", "default": "foo" } ] } var data = [ 1 ]; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // [ 1, "foo" ] ``` `default` keywords in other cases are ignored: - not in `properties` or `items` subschemas - in schemas inside `anyOf`, `oneOf` and `not` (see [#42](https://github.com/ajv-validator/ajv/issues/42)) - in `if` subschema of `switch` keyword - in schemas generated by custom macro keywords The [`strictDefaults` option](#options) customizes Ajv's behavior for the defaults that Ajv ignores (`true` raises an error, and `"log"` outputs a warning). ## Coercing data types When you are validating user inputs all your data properties are usually strings. The option `coerceTypes` allows you to have your data types coerced to the types specified in your schema `type` keywords, both to pass the validation and to use the correctly typed data afterwards. This option modifies original data. __Please note__: if you pass a scalar value to the validating function its type will be coerced and it will pass the validation, but the value of the variable you pass won't be updated because scalars are passed by value. Example 1: ```javascript var ajv = new Ajv({ coerceTypes: true }); var schema = { "type": "object", "properties": { "foo": { "type": "number" }, "bar": { "type": "boolean" } }, "required": [ "foo", "bar" ] }; var data = { "foo": "1", "bar": "false" }; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": 1, "bar": false } ``` Example 2 (array coercions): ```javascript var ajv = new Ajv({ coerceTypes: 'array' }); var schema = { "properties": { "foo": { "type": "array", "items": { "type": "number" } }, "bar": { "type": "boolean" } } }; var data = { "foo": "1", "bar": ["false"] }; var validate = ajv.compile(schema); console.log(validate(data)); // true console.log(data); // { "foo": [1], "bar": false } ``` The coercion rules, as you can see from the example, are different from JavaScript both to validate user input as expected and to have the coercion reversible (to correctly validate cases where different types are defined in subschemas of "anyOf" and other compound keywords). See [Coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md) for details. ## API ##### new Ajv(Object options) -&gt; Object Create Ajv instance. ##### .compile(Object schema) -&gt; Function&lt;Object data&gt; Generate validating function and cache the compiled schema for future use. Validating function returns a boolean value. This function has properties `errors` and `schema`. Errors encountered during the last validation are assigned to `errors` property (it is assigned `null` if there was no errors). `schema` property contains the reference to the original schema. The schema passed to this method will be validated against meta-schema unless `validateSchema` option is false. If schema is invalid, an error will be thrown. See [options](#options). 
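A small sketch of how the `errors` and `schema` properties behave, following the description above:

```javascript
var Ajv = require('ajv');
var ajv = new Ajv();

var schema = { type: 'number', minimum: 0 };
var validate = ajv.compile(schema);

console.log(validate(5));                 // true
console.log(validate.errors);             // null - the last validation passed

console.log(validate(-1));                // false
console.log(validate.errors);             // array describing the `minimum` violation
console.log(validate.schema === schema);  // true - reference to the original schema
```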
##### <a name="api-compileAsync"></a>.compileAsync(Object schema [, Boolean meta] [, Function callback]) -&gt; Promise Asynchronous version of `compile` method that loads missing remote schemas using asynchronous function in `options.loadSchema`. This function returns a Promise that resolves to a validation function. An optional callback passed to `compileAsync` will be called with 2 parameters: error (or null) and validating function. The returned promise will reject (and the callback will be called with an error) when: - missing schema can't be loaded (`loadSchema` returns a Promise that rejects). - a schema containing a missing reference is loaded, but the reference cannot be resolved. - schema (or some loaded/referenced schema) is invalid. The function compiles schema and loads the first missing schema (or meta-schema) until all missing schemas are loaded. You can asynchronously compile meta-schema by passing `true` as the second parameter. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### .validate(Object schema|String key|String ref, data) -&gt; Boolean Validate data using passed schema (it will be compiled and cached). Instead of the schema you can use the key that was previously passed to `addSchema`, the schema id if it was present in the schema or any previously resolved reference. Validation errors will be available in the `errors` property of Ajv instance (`null` if there were no errors). __Please note__: every time this method is called the errors are overwritten so you need to copy them to another variable if you want to use them later. If the schema is asynchronous (has `$async` keyword on the top level) this method returns a Promise. See [Asynchronous validation](#asynchronous-validation). ##### .addSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Add schema(s) to validator instance. This method does not compile schemas (but it still validates them). Because of that dependencies can be added in any order and circular dependencies are supported. It also prevents unnecessary compilation of schemas that are containers for other schemas but not used as a whole. Array of schemas can be passed (schemas should have ids), the second parameter will be ignored. Key can be passed that can be used to reference the schema and will be used as the schema id if there is no id inside the schema. If the key is not passed, the schema id will be used as the key. Once the schema is added, it (and all the references inside it) can be referenced in other schemas and used to validate data. Although `addSchema` does not compile schemas, explicit compilation is not required - the schema will be compiled when it is used first time. By default the schema is validated against meta-schema before it is added, and if the schema does not pass validation the exception is thrown. This behaviour is controlled by `validateSchema` option. __Please note__: Ajv uses the [method chaining syntax](https://en.wikipedia.org/wiki/Method_chaining) for all methods with the prefix `add*` and `remove*`. This allows you to do nice things like the following. ```javascript var validate = new Ajv().addSchema(schema).addFormat(name, regex).getSchema(uri); ``` ##### .addMetaSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv Adds meta schema(s) that can be used to validate other schemas. 
That function should be used instead of `addSchema` because there may be instance options that would compile a meta schema incorrectly (at the moment it is `removeAdditional` option). There is no need to explicitly add draft-07 meta schema (http://json-schema.org/draft-07/schema) - it is added by default, unless option `meta` is set to `false`. You only need to use it if you have a changed meta-schema that you want to use to validate your schemas. See `validateSchema`. ##### <a name="api-validateschema"></a>.validateSchema(Object schema) -&gt; Boolean Validates schema. This method should be used to validate schemas rather than `validate` due to the inconsistency of `uri` format in JSON Schema standard. By default this method is called automatically when the schema is added, so you rarely need to use it directly. If schema doesn't have `$schema` property, it is validated against draft 6 meta-schema (option `meta` should not be false). If schema has `$schema` property, then the schema with this id (that should be previously added) is used to validate passed schema. Errors will be available at `ajv.errors`. ##### .getSchema(String key) -&gt; Function&lt;Object data&gt; Retrieve compiled schema previously added with `addSchema` by the key passed to `addSchema` or by its full reference (id). The returned validating function has `schema` property with the reference to the original schema. ##### .removeSchema([Object schema|String key|String ref|RegExp pattern]) -&gt; Ajv Remove added/cached schema. Even if schema is referenced by other schemas it can be safely removed as dependent schemas have local references. Schema can be removed using: - key passed to `addSchema` - it's full reference (id) - RegExp that should match schema id or key (meta-schemas won't be removed) - actual schema object that will be stable-stringified to remove schema from cache If no parameter is passed all schemas but meta-schemas will be removed and the cache will be cleared. ##### <a name="api-addformat"></a>.addFormat(String name, String|RegExp|Function|Object format) -&gt; Ajv Add custom format to validate strings or numbers. It can also be used to replace pre-defined formats for Ajv instance. Strings are converted to RegExp. Function should return validation result as `true` or `false`. If object is passed it should have properties `validate`, `compare` and `async`: - _validate_: a string, RegExp or a function as described above. - _compare_: an optional comparison function that accepts two strings and compares them according to the format meaning. This function is used with keywords `formatMaximum`/`formatMinimum` (defined in [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) package). It should return `1` if the first value is bigger than the second value, `-1` if it is smaller and `0` if it is equal. - _async_: an optional `true` value if `validate` is an asynchronous function; in this case it should return a promise that resolves with a value `true` or `false`. - _type_: an optional type of data that the format applies to. It can be `"string"` (default) or `"number"` (see https://github.com/ajv-validator/ajv/issues/291#issuecomment-259923858). If the type of data is different, the validation will pass. Custom formats can be also added via `formats` option. ##### <a name="api-addkeyword"></a>.addKeyword(String keyword, Object definition) -&gt; Ajv Add custom validation keyword to Ajv instance. 
Keyword should be different from all standard JSON Schema keywords and different from previously defined keywords. There is no way to redefine keywords or to remove keyword definition from the instance. Keyword must start with a letter, `_` or `$`, and may continue with letters, numbers, `_`, `$`, or `-`. It is recommended to use an application-specific prefix for keywords to avoid current and future name collisions. Example Keywords: - `"xyz-example"`: valid, and uses prefix for the xyz project to avoid name collisions. - `"example"`: valid, but not recommended as it could collide with future versions of JSON Schema etc. - `"3-example"`: invalid as numbers are not allowed to be the first character in a keyword Keyword definition is an object with the following properties: - _type_: optional string or array of strings with data type(s) that the keyword applies to. If not present, the keyword will apply to all types. - _validate_: validating function - _compile_: compiling function - _macro_: macro function - _inline_: compiling function that returns code (as string) - _schema_: an optional `false` value used with "validate" keyword to not pass schema - _metaSchema_: an optional meta-schema for keyword schema - _dependencies_: an optional list of properties that must be present in the parent schema - it will be checked during schema compilation - _modifying_: `true` MUST be passed if keyword modifies data - _statements_: `true` can be passed in case inline keyword generates statements (as opposed to expression) - _valid_: pass `true`/`false` to pre-define validation result, the result returned from validation function will be ignored. This option cannot be used with macro keywords. - _$data_: an optional `true` value to support [$data reference](#data-reference) as the value of custom keyword. The reference will be resolved at validation time. If the keyword has meta-schema it would be extended to allow $data and it will be used to validate the resolved value. Supporting $data reference requires that keyword has validating function (as the only option or in addition to compile, macro or inline function). - _async_: an optional `true` value if the validation function is asynchronous (whether it is compiled or passed in _validate_ property); in this case it should return a promise that resolves with a value `true` or `false`. This option is ignored in case of "macro" and "inline" keywords. - _errors_: an optional boolean or string `"full"` indicating whether keyword returns errors. If this property is not set Ajv will determine if the errors were set in case of failed validation. _compile_, _macro_ and _inline_ are mutually exclusive, only one should be used at a time. _validate_ can be used separately or in addition to them to support $data reference. __Please note__: If the keyword is validating data type that is different from the type(s) in its definition, the validation function will not be called (and expanded macro will not be used), so there is no need to check for data type inside validation function or inside schema returned by macro function (unless you want to enforce a specific type and for some reason do not want to use a separate `type` keyword for that). In the same way as standard keywords work, if the keyword does not apply to the data type being validated, the validation of this keyword will succeed. See [Defining custom keywords](#defining-custom-keywords) for more details. 
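To tie the definition properties together, here is a hedged sketch of a keyword defined with a `validate` function and a `metaSchema`; the keyword name and its meaning are invented for illustration:

```javascript
var Ajv = require('ajv');
var ajv = new Ajv();

// Invented keyword with an application-specific prefix, applied only to arrays.
// metaSchema restricts the keyword value itself to a number.
ajv.addKeyword('xyz-sumMultipleOf', {
  type: 'array',
  metaSchema: { type: 'number' },
  validate: function (keywordValue, data) {
    var sum = data.reduce(function (acc, x) { return acc + x; }, 0);
    return sum % keywordValue === 0;
  }
});

var validate = ajv.compile({
  type: 'array',
  items: { type: 'number' },
  'xyz-sumMultipleOf': 5
});

console.log(validate([2, 3])); // true  (sum is 5)
console.log(validate([2, 2])); // false (sum is 4)
```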
##### .getKeyword(String keyword) -&gt; Object|Boolean Returns custom keyword definition, `true` for pre-defined keywords and `false` if the keyword is unknown. ##### .removeKeyword(String keyword) -&gt; Ajv Removes custom or pre-defined keyword so you can redefine them. While this method can be used to extend pre-defined keywords, it can also be used to completely change their meaning - it may lead to unexpected results. __Please note__: schemas compiled before the keyword is removed will continue to work without changes. To recompile schemas use `removeSchema` method and compile them again. ##### .errorsText([Array&lt;Object&gt; errors [, Object options]]) -&gt; String Returns the text with all errors in a String. Options can have properties `separator` (string used to separate errors, ", " by default) and `dataVar` (the variable name that dataPaths are prefixed with, "data" by default). ## Options Defaults: ```javascript { // validation and reporting options: $data: false, allErrors: false, verbose: false, $comment: false, // NEW in Ajv version 6.0 jsonPointers: false, uniqueItems: true, unicode: true, nullable: false, format: 'fast', formats: {}, unknownFormats: true, schemas: {}, logger: undefined, // referenced schema options: schemaId: '$id', missingRefs: true, extendRefs: 'ignore', // recommended 'fail' loadSchema: undefined, // function(uri: string): Promise {} // options to modify validated data: removeAdditional: false, useDefaults: false, coerceTypes: false, // strict mode options strictDefaults: false, strictKeywords: false, strictNumbers: false, // asynchronous validation options: transpile: undefined, // requires ajv-async package // advanced options: meta: true, validateSchema: true, addUsedSchema: true, inlineRefs: true, passContext: false, loopRequired: Infinity, ownProperties: false, multipleOfPrecision: false, errorDataPath: 'object', // deprecated messages: true, sourceCode: false, processCode: undefined, // function (str: string, schema: object): string {} cache: new Cache, serialize: undefined } ``` ##### Validation and reporting options - _$data_: support [$data references](#data-reference). Draft 6 meta-schema that is added by default will be extended to allow them. If you want to use another meta-schema you need to use $dataMetaSchema method to add support for $data reference. See [API](#api). - _allErrors_: check all rules collecting all errors. Default is to return after the first error. - _verbose_: include the reference to the part of the schema (`schema` and `parentSchema`) and validated data in errors (false by default). - _$comment_ (NEW in Ajv version 6.0): log or pass the value of `$comment` keyword to a function. Option values: - `false` (default): ignore $comment keyword. - `true`: log the keyword value to console. - function: pass the keyword value, its schema path and root schema to the specified function - _jsonPointers_: set `dataPath` property of errors using [JSON Pointers](https://tools.ietf.org/html/rfc6901) instead of JavaScript property access notation. - _uniqueItems_: validate `uniqueItems` keyword (true by default). - _unicode_: calculate correct length of strings with unicode pairs (true by default). Pass `false` to use `.length` of strings that is faster, but gives "incorrect" lengths of strings with unicode pairs - each unicode pair is counted as two characters. - _nullable_: support keyword "nullable" from [Open API 3 specification](https://swagger.io/docs/specification/data-models/data-types/). - _format_: formats validation mode. 
Option values: - `"fast"` (default) - simplified and fast validation (see [Formats](#formats) for details of which formats are available and affected by this option). - `"full"` - more restrictive and slow validation. E.g., 25:00:00 and 2015/14/33 will be invalid time and date in 'full' mode but it will be valid in 'fast' mode. - `false` - ignore all format keywords. - _formats_: an object with custom formats. Keys and values will be passed to `addFormat` method. - _keywords_: an object with custom keywords. Keys and values will be passed to `addKeyword` method. - _unknownFormats_: handling of unknown formats. Option values: - `true` (default) - if an unknown format is encountered the exception is thrown during schema compilation. If `format` keyword value is [$data reference](#data-reference) and it is unknown the validation will fail. - `[String]` - an array of unknown format names that will be ignored. This option can be used to allow usage of third party schemas with format(s) for which you don't have definitions, but still fail if another unknown format is used. If `format` keyword value is [$data reference](#data-reference) and it is not in this array the validation will fail. - `"ignore"` - to log warning during schema compilation and always pass validation (the default behaviour in versions before 5.0.0). This option is not recommended, as it allows to mistype format name and it won't be validated without any error message. This behaviour is required by JSON Schema specification. - _schemas_: an array or object of schemas that will be added to the instance. In case you pass the array the schemas must have IDs in them. When the object is passed the method `addSchema(value, key)` will be called for each schema in this object. - _logger_: sets the logging method. Default is the global `console` object that should have methods `log`, `warn` and `error`. See [Error logging](#error-logging). Option values: - custom logger - it should have methods `log`, `warn` and `error`. If any of these methods is missing an exception will be thrown. - `false` - logging is disabled. ##### Referenced schema options - _schemaId_: this option defines which keywords are used as schema URI. Option value: - `"$id"` (default) - only use `$id` keyword as schema URI (as specified in JSON Schema draft-06/07), ignore `id` keyword (if it is present a warning will be logged). - `"id"` - only use `id` keyword as schema URI (as specified in JSON Schema draft-04), ignore `$id` keyword (if it is present a warning will be logged). - `"auto"` - use both `$id` and `id` keywords as schema URI. If both are present (in the same schema object) and different the exception will be thrown during schema compilation. - _missingRefs_: handling of missing referenced schemas. Option values: - `true` (default) - if the reference cannot be resolved during compilation the exception is thrown. The thrown error has properties `missingRef` (with hash fragment) and `missingSchema` (without it). Both properties are resolved relative to the current base id (usually schema id, unless it was substituted). - `"ignore"` - to log error during compilation and always pass validation. - `"fail"` - to log error and successfully compile schema but fail validation if this rule is checked. - _extendRefs_: validation of other keywords when `$ref` is present in the schema. 
Option values: - `"ignore"` (default) - when `$ref` is used other keywords are ignored (as per [JSON Reference](https://tools.ietf.org/html/draft-pbryan-zyp-json-ref-03#section-3) standard). A warning will be logged during the schema compilation. - `"fail"` (recommended) - if other validation keywords are used together with `$ref` the exception will be thrown when the schema is compiled. This option is recommended to make sure schema has no keywords that are ignored, which can be confusing. - `true` - validate all keywords in the schemas with `$ref` (the default behaviour in versions before 5.0.0). - _loadSchema_: asynchronous function that will be used to load remote schemas when `compileAsync` [method](#api-compileAsync) is used and some reference is missing (option `missingRefs` should NOT be 'fail' or 'ignore'). This function should accept remote schema uri as a parameter and return a Promise that resolves to a schema. See example in [Asynchronous compilation](#asynchronous-schema-compilation). ##### Options to modify validated data - _removeAdditional_: remove additional properties - see example in [Filtering data](#filtering-data). This option is not used if schema is added with `addMetaSchema` method. Option values: - `false` (default) - not to remove additional properties - `"all"` - all additional properties are removed, regardless of `additionalProperties` keyword in schema (and no validation is made for them). - `true` - only additional properties with `additionalProperties` keyword equal to `false` are removed. - `"failing"` - additional properties that fail schema validation will be removed (where `additionalProperties` keyword is `false` or schema). - _useDefaults_: replace missing or undefined properties and items with the values from corresponding `default` keywords. Default behaviour is to ignore `default` keywords. This option is not used if schema is added with `addMetaSchema` method. See examples in [Assigning defaults](#assigning-defaults). Option values: - `false` (default) - do not use defaults - `true` - insert defaults by value (object literal is used). - `"empty"` - in addition to missing or undefined, use defaults for properties and items that are equal to `null` or `""` (an empty string). - `"shared"` (deprecated) - insert defaults by reference. If the default is an object, it will be shared by all instances of validated data. If you modify the inserted default in the validated data, it will be modified in the schema as well. - _coerceTypes_: change data type of data to match `type` keyword. See the example in [Coercing data types](#coercing-data-types) and [coercion rules](https://github.com/ajv-validator/ajv/blob/master/COERCION.md). Option values: - `false` (default) - no type coercion. - `true` - coerce scalar data types. - `"array"` - in addition to coercions between scalar types, coerce scalar data to an array with one element and vice versa (as required by the schema). ##### Strict mode options - _strictDefaults_: report ignored `default` keywords in schemas. Option values: - `false` (default) - ignored defaults are not reported - `true` - if an ignored default is present, throw an error - `"log"` - if an ignored default is present, log warning - _strictKeywords_: report unknown keywords in schemas. 
Option values: - `false` (default) - unknown keywords are not reported - `true` - if an unknown keyword is present, throw an error - `"log"` - if an unknown keyword is present, log warning - _strictNumbers_: validate numbers strictly, failing validation for NaN and Infinity. Option values: - `false` (default) - NaN or Infinity will pass validation for numeric types - `true` - NaN or Infinity will not pass validation for numeric types ##### Asynchronous validation options - _transpile_: Requires [ajv-async](https://github.com/ajv-validator/ajv-async) package. It determines whether Ajv transpiles compiled asynchronous validation function. Option values: - `undefined` (default) - transpile with [nodent](https://github.com/MatAtBread/nodent) if async functions are not supported. - `true` - always transpile with nodent. - `false` - do not transpile; if async functions are not supported an exception will be thrown. ##### Advanced options - _meta_: add [meta-schema](http://json-schema.org/documentation.html) so it can be used by other schemas (true by default). If an object is passed, it will be used as the default meta-schema for schemas that have no `$schema` keyword. This default meta-schema MUST have `$schema` keyword. - _validateSchema_: validate added/compiled schemas against meta-schema (true by default). `$schema` property in the schema can be http://json-schema.org/draft-07/schema or absent (draft-07 meta-schema will be used) or can be a reference to the schema previously added with `addMetaSchema` method. Option values: - `true` (default) - if the validation fails, throw the exception. - `"log"` - if the validation fails, log error. - `false` - skip schema validation. - _addUsedSchema_: by default methods `compile` and `validate` add schemas to the instance if they have `$id` (or `id`) property that doesn't start with "#". If `$id` is present and it is not unique the exception will be thrown. Set this option to `false` to skip adding schemas to the instance and the `$id` uniqueness check when these methods are used. This option does not affect `addSchema` method. - _inlineRefs_: Affects compilation of referenced schemas. Option values: - `true` (default) - the referenced schemas that don't have refs in them are inlined, regardless of their size - that substantially improves performance at the cost of the bigger size of compiled schema functions. - `false` - to not inline referenced schemas (they will be compiled as separate functions). - integer number - to limit the maximum number of keywords of the schema that will be inlined. - _passContext_: pass validation context to custom keyword functions. If this option is `true` and you pass some context to the compiled validation function with `validate.call(context, data)`, the `context` will be available as `this` in your custom keywords. By default `this` is Ajv instance. - _loopRequired_: by default `required` keyword is compiled into a single expression (or a sequence of statements in `allErrors` mode). In case of a very large number of properties in this keyword it may result in a very big validation function. Pass integer to set the number of properties above which `required` keyword will be validated in a loop - smaller validation function size but also worse performance. - _ownProperties_: by default Ajv iterates over all enumerable object properties; when this option is `true` only own enumerable object properties (i.e. found directly on the object rather than on its prototype) are iterated. Contributed by @mbroadst. 
- _multipleOfPrecision_: by default `multipleOf` keyword is validated by comparing the result of division with parseInt() of that result. It works for dividers that are bigger than 1. For small dividers such as 0.01 the result of the division is usually not integer (even when it should be integer, see issue [#84](https://github.com/ajv-validator/ajv/issues/84)). If you need to use fractional dividers set this option to some positive integer N to have `multipleOf` validated using this formula: `Math.abs(Math.round(division) - division) < 1e-N` (it is slower but allows for float arithmetics deviations). - _errorDataPath_ (deprecated): set `dataPath` to point to 'object' (default) or to 'property' when validating keywords `required`, `additionalProperties` and `dependencies`. - _messages_: Include human-readable messages in errors. `true` by default. `false` can be passed when custom messages are used (e.g. with [ajv-i18n](https://github.com/ajv-validator/ajv-i18n)). - _sourceCode_: add `sourceCode` property to validating function (for debugging; this code can be different from the result of toString call). - _processCode_: an optional function to process generated code before it is passed to Function constructor. It can be used to either beautify (the validating function is generated without line-breaks) or to transpile code. Starting from version 5.0.0 this option replaced options: - `beautify` that formatted the generated function using [js-beautify](https://github.com/beautify-web/js-beautify). If you want to beautify the generated code pass a function calling `require('js-beautify').js_beautify` as `processCode: code => js_beautify(code)`. - `transpile` that transpiled asynchronous validation function. You can still use `transpile` option with [ajv-async](https://github.com/ajv-validator/ajv-async) package. See [Asynchronous validation](#asynchronous-validation) for more information. - _cache_: an optional instance of cache to store compiled schemas using stable-stringified schema as a key. For example, set-associative cache [sacjs](https://github.com/epoberezkin/sacjs) can be used. If not passed then a simple hash is used which is good enough for the common use case (a limited number of statically defined schemas). Cache should have methods `put(key, value)`, `get(key)`, `del(key)` and `clear()`. - _serialize_: an optional function to serialize schema to cache key. Pass `false` to use schema itself as a key (e.g., if WeakMap used as a cache). By default [fast-json-stable-stringify](https://github.com/epoberezkin/fast-json-stable-stringify) is used. ## Validation errors In case of validation failure, Ajv assigns the array of errors to `errors` property of validation function (or to `errors` property of Ajv instance when `validate` or `validateSchema` methods were called). In case of [asynchronous validation](#asynchronous-validation), the returned promise is rejected with exception `Ajv.ValidationError` that has `errors` property. ### Error objects Each error is an object with the following properties: - _keyword_: validation keyword. - _dataPath_: the path to the part of the data that was validated. By default `dataPath` uses JavaScript property access notation (e.g., `".prop[1].subProp"`). When the option `jsonPointers` is true (see [Options](#options)) `dataPath` will be set using JSON pointer standard (e.g., `"/prop/1/subProp"`). - _schemaPath_: the path (JSON-pointer as a URI fragment) to the schema of the keyword that failed validation. 
- _params_: the object with the additional information about error that can be used to create custom error messages (e.g., using [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) package). See below for parameters set by all keywords.
- _message_: the standard error message (can be excluded with option `messages` set to false).
- _schema_: the schema of the keyword (added with `verbose` option).
- _parentSchema_: the schema containing the keyword (added with `verbose` option).
- _data_: the data validated by the keyword (added with `verbose` option).

__Please note__: `propertyNames` keyword schema validation errors have an additional property `propertyName`; `dataPath` points to the object. After schema validation for each property name, if it is invalid an additional error is added with the property `keyword` equal to `"propertyNames"`.


### Error parameters

Properties of `params` object in errors depend on the keyword that failed validation.

- `maxItems`, `minItems`, `maxLength`, `minLength`, `maxProperties`, `minProperties` - property `limit` (number, the schema of the keyword).
- `additionalItems` - property `limit` (the maximum number of allowed items in case when `items` keyword is an array of schemas and `additionalItems` is false).
- `additionalProperties` - property `additionalProperty` (the property not used in `properties` and `patternProperties` keywords).
- `dependencies` - properties:
  - `property` (dependent property),
  - `missingProperty` (required missing dependency - only the first one is reported currently),
  - `deps` (required dependencies, comma separated list as a string),
  - `depsCount` (the number of required dependencies).
- `format` - property `format` (the schema of the keyword).
- `maximum`, `minimum` - properties:
  - `limit` (number, the schema of the keyword),
  - `exclusive` (boolean, the schema of `exclusiveMaximum` or `exclusiveMinimum`),
  - `comparison` (string, comparison operation to compare the data to the limit, with the data on the left and the limit on the right; can be "<", "<=", ">", ">=")
- `multipleOf` - property `multipleOf` (the schema of the keyword)
- `pattern` - property `pattern` (the schema of the keyword)
- `required` - property `missingProperty` (required property that is missing).
- `propertyNames` - property `propertyName` (an invalid property name).
- `patternRequired` (in ajv-keywords) - property `missingPattern` (required pattern that did not match any property).
- `type` - property `type` (required type(s), a string, can be a comma-separated list)
- `uniqueItems` - properties `i` and `j` (indices of duplicate items).
- `const` - property `allowedValue` pointing to the value (the schema of the keyword).
- `enum` - property `allowedValues` pointing to the array of values (the schema of the keyword).
- `$ref` - property `ref` with the referenced schema URI.
- `oneOf` - property `passingSchemas` (array of indices of passing schemas, null if no schema passes).
- custom keywords (in case keyword definition doesn't create errors) - property `keyword` (the keyword name).


### Error logging

Using the `logger` option when initializing Ajv will allow you to define custom logging. Here you can build upon the existing logging. The use of other logging packages is supported as long as the package or its associated wrapper exposes the required methods. If any of the required methods are missing an exception will be thrown.
- **Required Methods**: `log`, `warn`, `error` ```javascript var otherLogger = new OtherLogger(); var ajv = new Ajv({ logger: { log: console.log.bind(console), warn: function warn() { otherLogger.logWarn.apply(otherLogger, arguments); }, error: function error() { otherLogger.logError.apply(otherLogger, arguments); console.error.apply(console, arguments); } } }); ``` ## Plugins Ajv can be extended with plugins that add custom keywords, formats or functions to process generated code. When such plugin is published as npm package it is recommended that it follows these conventions: - it exports a function - this function accepts ajv instance as the first parameter and returns the same instance to allow chaining - this function can accept an optional configuration as the second parameter If you have published a useful plugin please submit a PR to add it to the next section. ## Related packages - [ajv-async](https://github.com/ajv-validator/ajv-async) - plugin to configure async validation mode - [ajv-bsontype](https://github.com/BoLaMN/ajv-bsontype) - plugin to validate mongodb's bsonType formats - [ajv-cli](https://github.com/jessedc/ajv-cli) - command line interface - [ajv-errors](https://github.com/ajv-validator/ajv-errors) - plugin for custom error messages - [ajv-i18n](https://github.com/ajv-validator/ajv-i18n) - internationalised error messages - [ajv-istanbul](https://github.com/ajv-validator/ajv-istanbul) - plugin to instrument generated validation code to measure test coverage of your schemas - [ajv-keywords](https://github.com/ajv-validator/ajv-keywords) - plugin with custom validation keywords (select, typeof, etc.) - [ajv-merge-patch](https://github.com/ajv-validator/ajv-merge-patch) - plugin with keywords $merge and $patch - [ajv-pack](https://github.com/ajv-validator/ajv-pack) - produces a compact module exporting validation functions - [ajv-formats-draft2019](https://github.com/luzlab/ajv-formats-draft2019) - format validators for draft2019 that aren't already included in ajv (ie. `idn-hostname`, `idn-email`, `iri`, `iri-reference` and `duration`). ## Some packages using Ajv - [webpack](https://github.com/webpack/webpack) - a module bundler. 
Its main purpose is to bundle JavaScript files for usage in a browser
- [jsonscript-js](https://github.com/JSONScript/jsonscript-js) - the interpreter for [JSONScript](http://www.jsonscript.org) - scripted processing of existing endpoints and services
- [osprey-method-handler](https://github.com/mulesoft-labs/osprey-method-handler) - Express middleware for validating requests and responses based on a RAML method object, used in [osprey](https://github.com/mulesoft/osprey) - validating API proxy generated from a RAML definition
- [har-validator](https://github.com/ahmadnassri/har-validator) - HTTP Archive (HAR) validator
- [jsoneditor](https://github.com/josdejong/jsoneditor) - a web-based tool to view, edit, format, and validate JSON http://jsoneditoronline.org
- [JSON Schema Lint](https://github.com/nickcmaynard/jsonschemalint) - a web tool to validate a JSON/YAML document against a single JSON Schema http://jsonschemalint.com
- [objection](https://github.com/vincit/objection.js) - SQL-friendly ORM for Node.js
- [table](https://github.com/gajus/table) - formats data into a string table
- [ripple-lib](https://github.com/ripple/ripple-lib) - a JavaScript API for interacting with [Ripple](https://ripple.com) in Node.js and the browser
- [restbase](https://github.com/wikimedia/restbase) - distributed storage with REST API & dispatcher for backend services built to provide a low-latency & high-throughput API for Wikipedia / Wikimedia content
- [hippie-swagger](https://github.com/CacheControl/hippie-swagger) - [Hippie](https://github.com/vesln/hippie) wrapper that provides end to end API testing with swagger validation
- [react-form-controlled](https://github.com/seeden/react-form-controlled) - React controlled form components with validation
- [rabbitmq-schema](https://github.com/tjmehta/rabbitmq-schema) - a schema definition module for RabbitMQ graphs and messages
- [@query/schema](https://www.npmjs.com/package/@query/schema) - stream filtering with a URI-safe query syntax parsing to JSON Schema
- [chai-ajv-json-schema](https://github.com/peon374/chai-ajv-json-schema) - chai plugin to use JSON Schema with expect in mocha tests
- [grunt-jsonschema-ajv](https://github.com/SignpostMarv/grunt-jsonschema-ajv) - Grunt plugin for validating files against JSON Schema
- [extract-text-webpack-plugin](https://github.com/webpack-contrib/extract-text-webpack-plugin) - extract text from bundle into a file
- [electron-builder](https://github.com/electron-userland/electron-builder) - a solution to package and build a ready for distribution Electron app
- [addons-linter](https://github.com/mozilla/addons-linter) - Mozilla Add-ons Linter
- [gh-pages-generator](https://github.com/epoberezkin/gh-pages-generator) - multi-page site generator converting markdown files to GitHub pages
- [ESLint](https://github.com/eslint/eslint) - the pluggable linting utility for JavaScript and JSX


## Tests

```
npm install
git submodule update --init
npm test
```

## Contributing

All validation functions are generated using doT templates in the [dot](https://github.com/ajv-validator/ajv/tree/master/lib/dot) folder. Templates are precompiled so doT is not a run-time dependency.

`npm run build` - compiles templates to the [dotjs](https://github.com/ajv-validator/ajv/tree/master/lib/dotjs) folder.
`npm run watch` - automatically compiles templates when files in the dot folder change

Please see [Contributing guidelines](https://github.com/ajv-validator/ajv/blob/master/CONTRIBUTING.md)


## Changes history

See https://github.com/ajv-validator/ajv/releases

__Please note__: [Changes in version 7.0.0-beta](https://github.com/ajv-validator/ajv/releases/tag/v7.0.0-beta.0)

[Version 6.0.0](https://github.com/ajv-validator/ajv/releases/tag/v6.0.0).


## Code of conduct

Please review and follow the [Code of conduct](https://github.com/ajv-validator/ajv/blob/master/CODE_OF_CONDUCT.md).

Please report any unacceptable behaviour to [email protected] - it will be reviewed by the project team.


## Open-source software support

Ajv is a part of [Tidelift subscription](https://tidelift.com/subscription/pkg/npm-ajv?utm_source=npm-ajv&utm_medium=referral&utm_campaign=readme) - it provides centralised support to open-source software users, in addition to the support provided by software maintainers.


## License

[MIT](https://github.com/ajv-validator/ajv/blob/master/LICENSE)


JS-YAML - YAML 1.2 parser / writer for JavaScript
=================================================

[![Build Status](https://travis-ci.org/nodeca/js-yaml.svg?branch=master)](https://travis-ci.org/nodeca/js-yaml)
[![NPM version](https://img.shields.io/npm/v/js-yaml.svg)](https://www.npmjs.org/package/js-yaml)

__[Online Demo](http://nodeca.github.com/js-yaml/)__

This is an implementation of [YAML](http://yaml.org/), a human-friendly data serialization language. Started as a [PyYAML](http://pyyaml.org/) port, it was completely rewritten from scratch. Now it's very fast and supports the YAML 1.2 spec.


Installation
------------

### YAML module for node.js

```
npm install js-yaml
```


### CLI executable

If you want to inspect your YAML files from the CLI, install js-yaml globally:

```
npm install -g js-yaml
```

#### Usage

```
usage: js-yaml [-h] [-v] [-c] [-t] file

Positional arguments:
  file           File with YAML document(s)

Optional arguments:
  -h, --help     Show this help message and exit.
  -v, --version  Show program's version number and exit.
  -c, --compact  Display errors in compact mode
  -t, --trace    Show stack trace on error
```


### Bundled YAML library for browsers

``` html
<!-- esprima required only for !!js/function -->
<script src="esprima.js"></script>
<script src="js-yaml.min.js"></script>
<script type="text/javascript">
var doc = jsyaml.load('greeting: hello\nname: world');
</script>
```

Browser support was done mostly for the online demo. If you find any errors - feel free to send pull requests with fixes. Also note that IE and other old browsers need [es5-shims](https://github.com/kriskowal/es5-shim) to operate.

Notes:

1. We have no resources to support the browserified version. Don't expect it to be well tested. Don't expect fast fixes if something goes wrong there.
2. `!!js/function` in the browser bundle will not work by default. If you really need it - load the `esprima` parser first (via AMD or directly).
3. `!!bin` in the browser will return `Array`, because browsers do not support node.js `Buffer` and adding Buffer shims is completely useless in practice.


API
---

Here we cover the most 'useful' methods. If you need advanced details (creating your own tags), see [wiki](https://github.com/nodeca/js-yaml/wiki) and [examples](https://github.com/nodeca/js-yaml/tree/master/examples) for more info.
``` javascript const yaml = require('js-yaml'); const fs = require('fs'); // Get document, or throw exception on error try { const doc = yaml.safeLoad(fs.readFileSync('/home/ixti/example.yml', 'utf8')); console.log(doc); } catch (e) { console.log(e); } ``` ### safeLoad (string [ , options ]) **Recommended loading way.** Parses `string` as single YAML document. Returns either a plain object, a string or `undefined`, or throws `YAMLException` on error. By default, does not support regexps, functions and undefined. This method is safe for untrusted data. options: - `filename` _(default: null)_ - string to be used as a file path in error/warning messages. - `onWarning` _(default: null)_ - function to call on warning messages. Loader will call this function with an instance of `YAMLException` for each warning. - `schema` _(default: `DEFAULT_SAFE_SCHEMA`)_ - specifies a schema to use. - `FAILSAFE_SCHEMA` - only strings, arrays and plain objects: http://www.yaml.org/spec/1.2/spec.html#id2802346 - `JSON_SCHEMA` - all JSON-supported types: http://www.yaml.org/spec/1.2/spec.html#id2803231 - `CORE_SCHEMA` - same as `JSON_SCHEMA`: http://www.yaml.org/spec/1.2/spec.html#id2804923 - `DEFAULT_SAFE_SCHEMA` - all supported YAML types, without unsafe ones (`!!js/undefined`, `!!js/regexp` and `!!js/function`): http://yaml.org/type/ - `DEFAULT_FULL_SCHEMA` - all supported YAML types. - `json` _(default: false)_ - compatibility with JSON.parse behaviour. If true, then duplicate keys in a mapping will override values rather than throwing an error. NOTE: This function **does not** understand multi-document sources, it throws exception on those. NOTE: JS-YAML **does not** support schema-specific tag resolution restrictions. So, the JSON schema is not as strictly defined in the YAML specification. It allows numbers in any notation, use `Null` and `NULL` as `null`, etc. The core schema also has no such restrictions. It allows binary notation for integers. ### load (string [ , options ]) **Use with care with untrusted sources**. The same as `safeLoad()` but uses `DEFAULT_FULL_SCHEMA` by default - adds some JavaScript-specific types: `!!js/function`, `!!js/regexp` and `!!js/undefined`. For untrusted sources, you must additionally validate object structure to avoid injections: ``` javascript const untrusted_code = '"toString": !<tag:yaml.org,2002:js/function> "function (){very_evil_thing();}"'; // I'm just converting that string, what could possibly go wrong? require('js-yaml').load(untrusted_code) + '' ``` ### safeLoadAll (string [, iterator] [, options ]) Same as `safeLoad()`, but understands multi-document sources. Applies `iterator` to each document if specified, or returns array of documents. ``` javascript const yaml = require('js-yaml'); yaml.safeLoadAll(data, function (doc) { console.log(doc); }); ``` ### loadAll (string [, iterator] [ , options ]) Same as `safeLoadAll()` but uses `DEFAULT_FULL_SCHEMA` by default. ### safeDump (object [ , options ]) Serializes `object` as a YAML document. Uses `DEFAULT_SAFE_SCHEMA`, so it will throw an exception if you try to dump regexps or functions. However, you can disable exceptions by setting the `skipInvalid` option to `true`. options: - `indent` _(default: 2)_ - indentation width to use (in spaces). - `noArrayIndent` _(default: false)_ - when true, will not add an indentation level to array elements - `skipInvalid` _(default: false)_ - do not throw on invalid types (like function in the safe schema) and skip pairs and single values with such types. 
- `flowLevel` (default: -1) - specifies level of nesting, when to switch from block to flow style for collections. -1 means block style everywhere
- `styles` - "tag" => "style" map. Each tag may have its own set of styles.
- `schema` _(default: `DEFAULT_SAFE_SCHEMA`)_ - specifies a schema to use.
- `sortKeys` _(default: `false`)_ - if `true`, sort keys when dumping YAML. If a function, use the function to sort the keys.
- `lineWidth` _(default: `80`)_ - set max line width.
- `noRefs` _(default: `false`)_ - if `true`, don't convert duplicate objects into references
- `noCompatMode` _(default: `false`)_ - if `true` don't try to be compatible with older yaml versions. Currently: don't quote "yes", "no" and so on, as required for YAML 1.1
- `condenseFlow` _(default: `false`)_ - if `true` flow sequences will be condensed, omitting the space between `a, b`, e.g. `'[a,b]'`, and omitting the space between `key: value` and quoting the key, e.g. `'{"a":b}'`. Can be useful when using yaml for pretty URL query params as spaces are %-encoded.

The following table shows the available styles (e.g. "canonical", "binary"...) for each tag (e.g. `!!null`, `!!int`, ...). The YAML output is shown on the right side after `=>` (default setting) or `->`:

``` none
!!null
  "canonical"   -> "~"
  "lowercase"   => "null"
  "uppercase"   -> "NULL"
  "camelcase"   -> "Null"

!!int
  "binary"      -> "0b1", "0b101010", "0b1110001111010"
  "octal"       -> "01", "052", "016172"
  "decimal"     => "1", "42", "7290"
  "hexadecimal" -> "0x1", "0x2A", "0x1C7A"

!!bool
  "lowercase"   => "true", "false"
  "uppercase"   -> "TRUE", "FALSE"
  "camelcase"   -> "True", "False"

!!float
  "lowercase"   => ".nan", '.inf'
  "uppercase"   -> ".NAN", '.INF'
  "camelcase"   -> ".NaN", '.Inf'
```

Example:

``` javascript
safeDump (object, {
  'styles': {
    '!!null': 'canonical' // dump null as ~
  },
  'sortKeys': true        // sort object keys
});
```

### dump (object [ , options ])

Same as `safeDump()` but without limits (uses `DEFAULT_FULL_SCHEMA` by default).


Supported YAML types
--------------------

The list of standard YAML tags and corresponding JavaScript types. See also [YAML tag discussion](http://pyyaml.org/wiki/YAMLTagDiscussion) and [YAML types repository](http://yaml.org/type/).

```
!!null ''                   # null
!!bool 'yes'                # bool
!!int '3...'                # number
!!float '3.14...'           # number
!!binary '...base64...'     # buffer
!!timestamp 'YYYY-...'      # date
!!omap [ ... ]              # array of key-value pairs
!!pairs [ ... ]             # array of array pairs
!!set { ... }               # array of objects with given keys and null values
!!str '...'                 # string
!!seq [ ... ]               # array
!!map { ... }               # object
```

**JavaScript-specific tags**

```
!!js/regexp /pattern/gim            # RegExp
!!js/undefined ''                   # Undefined
!!js/function 'function () {...}'   # Function
```

Caveats
-------

Note that if you use arrays or objects as keys in JS-YAML, JavaScript does not allow objects or arrays as keys and stringifies them (by calling the `toString()` method) at the moment they are added.

``` yaml
---
? [ foo, bar ]
: - baz
? { foo: bar }
: - baz
  - baz
```

``` javascript
{ "foo,bar": ["baz"], "[object Object]": ["baz", "baz"] }
```

Also, reading of properties on implicit block mapping keys is not supported yet. So, the following YAML document cannot be loaded.
``` yaml
&anchor foo:
  foo: bar
  *anchor: duplicate key
  baz: bat
  *anchor: duplicate key
```


js-yaml for enterprise
----------------------

Available as part of the Tidelift Subscription

The maintainers of js-yaml and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. [Learn more.](https://tidelift.com/subscription/pkg/npm-js-yaml?utm_source=npm-js-yaml&utm_medium=referral&utm_campaign=enterprise&utm_term=repo)


# get-caller-file

[![Build Status](https://travis-ci.org/stefanpenner/get-caller-file.svg?branch=master)](https://travis-ci.org/stefanpenner/get-caller-file)
[![Build status](https://ci.appveyor.com/api/projects/status/ol2q94g1932cy14a/branch/master?svg=true)](https://ci.appveyor.com/project/embercli/get-caller-file/branch/master)

This is a utility that allows a function to figure out from which file it was invoked. It does so by inspecting v8's stack trace at the time it is invoked.

Inspired by http://stackoverflow.com/questions/13227489

*note: this relies on Node/V8 specific APIs, as such other runtimes may not work*

## Installation

```bash
yarn add get-caller-file
```

## Usage

Given:

```js
// ./foo.js
const getCallerFile = require('get-caller-file');

module.exports = function() {
  return getCallerFile(); // figures out who called it
};
```

```js
// index.js
const foo = require('./foo');

foo() // => /full/path/to/this/file/index.js
```

## Options:

* `getCallerFile(position = 2)`: where position is the stack frame whose fileName we want.


Standard library
================

Standard library components for use with `tsc` (portable) and `asc` (assembly).

Base configurations (.json) and definition files (.d.ts) are relevant to `tsc` only and not used by `asc`.

# binary-install

Install .tar.gz binary applications via npm

## Usage

This library provides a single class `Binary` that takes a download url and some optional arguments. You **must** provide either `name` or `installDirectory` when creating your `Binary`.

| option           | description                                   |
| ---------------- | --------------------------------------------- |
| name             | The name of your binary                       |
| installDirectory | A path to the directory to install the binary |

If an `installDirectory` is not provided, the binary will be installed at your OS-specific config directory. On MacOS it defaults to `~/Library/Preferences/${name}-nodejs`

After your `Binary` has been created, you can run `.install()` to install the binary, and `.run()` to run it.

### Example

This is meant to be used as a library - create your `Binary` with your desired options, then call `.install()` in the `postinstall` of your `package.json`, `.run()` in the `bin` section of your `package.json`, and `.uninstall()` in the `preuninstall` section of your `package.json`. See [this example project](/example) to see how to create an npm package that installs and runs a binary using the GitHub releases API.

# prelude.ls [![Build Status](https://travis-ci.org/gkz/prelude-ls.png?branch=master)](https://travis-ci.org/gkz/prelude-ls)

is a functionally oriented utility library. It is powerful and flexible. Almost all of its functions are curried. It is written in, and is the recommended base library for, <a href="http://livescript.net">LiveScript</a>.

See **[the prelude.ls site](http://preludels.com)** for examples, a reference, and more.
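As a quick illustration of the curried style, here is a minimal sketch of calling it from plain JavaScript (assuming the package's CommonJS build, with `map` and `filter` exported at the top level):

```js
// a minimal sketch of using prelude-ls from plain JavaScript
var prelude = require('prelude-ls');

// functions can be called with all arguments at once...
prelude.map(function (x) { return x * 2; }, [1, 2, 3]);    //=> [2, 4, 6]

// ...or partially applied, since almost all functions are curried
var evens = prelude.filter(function (x) { return x % 2 === 0; });
evens([1, 2, 3, 4]);                                       //=> [2, 4]
```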
You can install via npm: `npm install prelude-ls`

### Development

`make test` to test

`make build` to build `lib` from `src`

`make build-browser` to build browser versions


functional-red-black-tree
=========================

A [fully persistent](http://en.wikipedia.org/wiki/Persistent_data_structure) [red-black tree](http://en.wikipedia.org/wiki/Red%E2%80%93black_tree) written 100% in JavaScript. Works both in node.js and in the browser via [browserify](http://browserify.org/).

Functional (or fully persistent) data structures allow for non-destructive updates. So if you insert an element into the tree, it returns a new tree with the inserted element rather than destructively updating the existing tree in place. Doing this requires using extra memory, and if one were naive it could cost as much as reallocating the entire tree. Instead, this data structure saves some memory by recycling references to previously allocated subtrees. This requires using only O(log(n)) additional memory per update instead of a full O(n) copy.

One advantage of this is that it is possible to apply insertions and removals to the tree while still iterating over previous versions of the tree. Functional and persistent data structures can also be useful in many geometric algorithms like point location within triangulations or ray queries, and can be used to analyze the history of executing various algorithms. This added power though comes at a cost, since it is generally a bit slower to use a functional data structure than an imperative version. However, if your application needs this behavior then you may consider using this module.

# Install

```
npm install functional-red-black-tree
```

# Example

Here is an example of some basic usage:

```javascript
//Load the library
var createTree = require("functional-red-black-tree")

//Create a tree
var t1 = createTree()

//Insert some items into the tree
var t2 = t1.insert(1, "foo")
var t3 = t2.insert(2, "bar")

//Remove something
var t4 = t3.remove(1)
```

# API

```javascript
var createTree = require("functional-red-black-tree")
```

## Overview

- [Tree methods](#tree-methods)
  - [`var tree = createTree([compare])`](#var-tree-=-createtreecompare)
  - [`tree.keys`](#treekeys)
  - [`tree.values`](#treevalues)
  - [`tree.length`](#treelength)
  - [`tree.get(key)`](#treegetkey)
  - [`tree.insert(key, value)`](#treeinsertkey-value)
  - [`tree.remove(key)`](#treeremovekey)
  - [`tree.find(key)`](#treefindkey)
  - [`tree.ge(key)`](#treegekey)
  - [`tree.gt(key)`](#treegtkey)
  - [`tree.lt(key)`](#treeltkey)
  - [`tree.le(key)`](#treelekey)
  - [`tree.at(position)`](#treeatposition)
  - [`tree.begin`](#treebegin)
  - [`tree.end`](#treeend)
  - [`tree.forEach(visitor(key,value)[, lo[, hi]])`](#treeforEachvisitorkeyvalue-lo-hi)
  - [`tree.root`](#treeroot)
- [Node properties](#node-properties)
  - [`node.key`](#nodekey)
  - [`node.value`](#nodevalue)
  - [`node.left`](#nodeleft)
  - [`node.right`](#noderight)
- [Iterator methods](#iterator-methods)
  - [`iter.key`](#iterkey)
  - [`iter.value`](#itervalue)
  - [`iter.node`](#iternode)
  - [`iter.tree`](#itertree)
  - [`iter.index`](#iterindex)
  - [`iter.valid`](#itervalid)
  - [`iter.clone()`](#iterclone)
  - [`iter.remove()`](#iterremove)
  - [`iter.update(value)`](#iterupdatevalue)
  - [`iter.next()`](#iternext)
  - [`iter.prev()`](#iterprev)
  - [`iter.hasNext`](#iterhasnext)
  - [`iter.hasPrev`](#iterhasprev)

## Tree methods

### `var tree = createTree([compare])`

Creates an empty functional tree

* `compare` is an optional comparison function, same semantics as array.sort()

**Returns** An empty tree ordered by `compare`
### `tree.keys`

A sorted array of all the keys in the tree

### `tree.values`

An array of all the values in the tree

### `tree.length`

The number of items in the tree

### `tree.get(key)`

Retrieves the value associated to the given key

* `key` is the key of the item to look up

**Returns** The value of the first node associated to `key`

### `tree.insert(key, value)`

Creates a new tree with the new pair inserted.

* `key` is the key of the item to insert
* `value` is the value of the item to insert

**Returns** A new tree with `key` and `value` inserted

### `tree.remove(key)`

Removes the first item with `key` in the tree

* `key` is the key of the item to remove

**Returns** A new tree with the given item removed if it exists

### `tree.find(key)`

Returns an iterator pointing to the first item in the tree with `key`, otherwise `null`.

### `tree.ge(key)`

Finds the first item in the tree whose key is `>= key`

* `key` is the key to search for

**Returns** An iterator at the given element.

### `tree.gt(key)`

Finds the first item in the tree whose key is `> key`

* `key` is the key to search for

**Returns** An iterator at the given element

### `tree.lt(key)`

Finds the last item in the tree whose key is `< key`

* `key` is the key to search for

**Returns** An iterator at the given element

### `tree.le(key)`

Finds the last item in the tree whose key is `<= key`

* `key` is the key to search for

**Returns** An iterator at the given element

### `tree.at(position)`

Finds an iterator starting at the given element

* `position` is the index at which the iterator gets created

**Returns** An iterator starting at position

### `tree.begin`

An iterator pointing to the first element in the tree

### `tree.end`

An iterator pointing to the last element in the tree

### `tree.forEach(visitor(key,value)[, lo[, hi]])`

Walks a visitor function over the nodes of the tree in order.

* `visitor(key,value)` is a callback that gets executed on each node. If a truthy value is returned from the visitor, then iteration is stopped.
* `lo` is an optional start of the range to visit (inclusive)
* `hi` is an optional end of the range to visit (non-inclusive)

**Returns** The last value returned by the callback

### `tree.root`

Returns the root node of the tree

## Node properties

Each node of the tree has the following properties:

### `node.key`

The key associated to the node

### `node.value`

The value associated to the node

### `node.left`

The left subtree of the node

### `node.right`

The right subtree of the node

## Iterator methods

### `iter.key`

The key of the item referenced by the iterator

### `iter.value`

The value of the item referenced by the iterator

### `iter.node`

The value of the node at the iterator's current position. `null` if the iterator is not valid.

### `iter.tree`

The tree associated to the iterator

### `iter.index`

Returns the position of this iterator in the sequence.
### `iter.valid` Checks if the iterator is valid ### `iter.clone()` Makes a copy of the iterator ### `iter.remove()` Removes the item at the position of the iterator **Returns** A new binary search tree with `iter`'s item removed ### `iter.update(value)` Updates the value of the node in the tree at this iterator **Returns** A new binary search tree with the corresponding node updated ### `iter.next()` Advances the iterator to the next position ### `iter.prev()` Moves the iterator backward one element ### `iter.hasNext` If true, then the iterator is not at the end of the sequence ### `iter.hasPrev` If true, then the iterator is not at the beginning of the sequence # Credits (c) 2013 Mikola Lysenko. MIT License # ESLint Scope ESLint Scope is the [ECMAScript](http://www.ecma-international.org/publications/standards/Ecma-262.htm) scope analyzer used in ESLint. It is a fork of [escope](http://github.com/estools/escope). ## Usage Install: ``` npm i eslint-scope --save ``` Example: ```js var eslintScope = require('eslint-scope'); var espree = require('espree'); var estraverse = require('estraverse'); var ast = espree.parse(code); var scopeManager = eslintScope.analyze(ast); var currentScope = scopeManager.acquire(ast); // global scope estraverse.traverse(ast, { enter: function(node, parent) { // do stuff if (/Function/.test(node.type)) { currentScope = scopeManager.acquire(node); // get current function scope } }, leave: function(node, parent) { if (/Function/.test(node.type)) { currentScope = currentScope.upper; // set to parent scope } // do stuff } }); ``` ## Contributing Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the [ESLint Contributor Guidelines](http://eslint.org/docs/developer-guide/contributing), so please be sure to read them before contributing. If you're not sure where to dig in, check out the [issues](https://github.com/eslint/eslint-scope/issues). ## Build Commands * `npm test` - run all linting and tests * `npm run lint` - run all linting ## License ESLint Scope is licensed under a permissive BSD 2-clause license. 
# Visitor utilities for AssemblyScript Compiler transformers ## Example ### List Fields The transformer: ```ts import { ClassDeclaration, FieldDeclaration, MethodDeclaration, } from "../../as"; import { ClassDecorator, registerDecorator } from "../decorator"; import { toString } from "../utils"; class ListMembers extends ClassDecorator { visitFieldDeclaration(node: FieldDeclaration): void { if (!node.name) console.log(toString(node) + "\n"); const name = toString(node.name); const _type = toString(node.type!); this.stdout.write(name + ": " + _type + "\n"); } visitMethodDeclaration(node: MethodDeclaration): void { const name = toString(node.name); if (name == "constructor") { return; } const sig = toString(node.signature); this.stdout.write(name + ": " + sig + "\n"); } visitClassDeclaration(node: ClassDeclaration): void { this.visit(node.members); } get name(): string { return "list"; } } export = registerDecorator(new ListMembers()); ``` assembly/foo.ts: ```ts @list class Foo { a: u8; b: bool; i: i32; } ``` And then compile with `--transform` flag: ``` asc assembly/foo.ts --transform ./dist/examples/list --noEmit ``` Which prints the following to the console: ``` a: u8 b: bool i: i32 ``` # is-glob [![NPM version](https://img.shields.io/npm/v/is-glob.svg?style=flat)](https://www.npmjs.com/package/is-glob) [![NPM monthly downloads](https://img.shields.io/npm/dm/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![NPM total downloads](https://img.shields.io/npm/dt/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![Linux Build Status](https://img.shields.io/travis/micromatch/is-glob.svg?style=flat&label=Travis)](https://travis-ci.org/micromatch/is-glob) [![Windows Build Status](https://img.shields.io/appveyor/ci/micromatch/is-glob.svg?style=flat&label=AppVeyor)](https://ci.appveyor.com/project/micromatch/is-glob) > Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a better user experience. Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-glob ``` You might also be interested in [is-valid-glob](https://github.com/jonschlinkert/is-valid-glob) and [has-glob](https://github.com/jonschlinkert/has-glob). 
## Usage ```js var isGlob = require('is-glob'); ``` ### Default behavior **True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js'); isGlob('*.js'); isGlob('**/abc.js'); isGlob('abc/*.js'); isGlob('abc/(aaa|bbb).js'); isGlob('abc/[a-z].js'); isGlob('abc/{a,b}.js'); //=> true ``` Extglobs ```js isGlob('abc/@(a).js'); isGlob('abc/!(a).js'); isGlob('abc/+(a).js'); isGlob('abc/*(a).js'); isGlob('abc/?(a).js'); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('abc/\\@(a).js'); isGlob('abc/\\!(a).js'); isGlob('abc/\\+(a).js'); isGlob('abc/\\*(a).js'); isGlob('abc/\\?(a).js'); isGlob('\\!foo.js'); isGlob('\\*.js'); isGlob('\\*\\*/abc.js'); isGlob('abc/\\*.js'); isGlob('abc/\\(aaa|bbb).js'); isGlob('abc/\\[a-z].js'); isGlob('abc/\\{a,b}.js'); //=> false ``` Patterns that do not have glob patterns return `false`: ```js isGlob('abc.js'); isGlob('abc/def/ghi.js'); isGlob('foo.js'); isGlob('abc/@.js'); isGlob('abc/+.js'); isGlob('abc/?.js'); isGlob(); isGlob(null); //=> false ``` Arrays are also `false` (If you want to check if an array has a glob pattern, use [has-glob](https://github.com/jonschlinkert/has-glob)): ```js isGlob(['**/*.js']); isGlob(['foo.js']); //=> false ``` ### Option strict When `options.strict === false` the behavior is less strict in determining if a pattern is a glob. Meaning that some patterns that would return `false` may return `true`. This is done so that matching libraries like [micromatch](https://github.com/micromatch/micromatch) have a chance at determining if the pattern is a glob or not. **True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js', {strict: false}); isGlob('*.js', {strict: false}); isGlob('**/abc.js', {strict: false}); isGlob('abc/*.js', {strict: false}); isGlob('abc/(aaa|bbb).js', {strict: false}); isGlob('abc/[a-z].js', {strict: false}); isGlob('abc/{a,b}.js', {strict: false}); //=> true ``` Extglobs ```js isGlob('abc/@(a).js', {strict: false}); isGlob('abc/!(a).js', {strict: false}); isGlob('abc/+(a).js', {strict: false}); isGlob('abc/*(a).js', {strict: false}); isGlob('abc/?(a).js', {strict: false}); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('\\!foo.js', {strict: false}); isGlob('\\*.js', {strict: false}); isGlob('\\*\\*/abc.js', {strict: false}); isGlob('abc/\\*.js', {strict: false}); isGlob('abc/\\(aaa|bbb).js', {strict: false}); isGlob('abc/\\[a-z].js', {strict: false}); isGlob('abc/\\{a,b}.js', {strict: false}); //=> false ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. 
Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [assemble](https://www.npmjs.com/package/assemble): Get the rocks out of your socks! Assemble makes you fast at creating web projects… [more](https://github.com/assemble/assemble) | [homepage](https://github.com/assemble/assemble "Get the rocks out of your socks! Assemble makes you fast at creating web projects. Assemble is used by thousands of projects for rapid prototyping, creating themes, scaffolds, boilerplates, e-books, UI components, API documentation, blogs, building websit") * [base](https://www.npmjs.com/package/base): Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks | [homepage](https://github.com/node-base/base "Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks") * [update](https://www.npmjs.com/package/update): Be scalable! Update is a new, open source developer framework and CLI for automating updates… [more](https://github.com/update/update) | [homepage](https://github.com/update/update "Be scalable! Update is a new, open source developer framework and CLI for automating updates of any kind in code projects.") * [verb](https://www.npmjs.com/package/verb): Documentation generator for GitHub projects. Verb is extremely powerful, easy to use, and is used… [more](https://github.com/verbose/verb) | [homepage](https://github.com/verbose/verb "Documentation generator for GitHub projects. Verb is extremely powerful, easy to use, and is used on hundreds of projects of all sizes to generate everything from API docs to readmes.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 47 | [jonschlinkert](https://github.com/jonschlinkert) | | 5 | [doowb](https://github.com/doowb) | | 1 | [phated](https://github.com/phated) | | 1 | [danhper](https://github.com/danhper) | | 1 | [paulmillr](https://github.com/paulmillr) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on March 27, 2019._ A JSON with color names and its values. Based on http://dev.w3.org/csswg/css-color/#named-colors. 
[![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/) ```js var colors = require('color-name'); colors.red //[255,0,0] ``` <a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a> <p align="center"> <img width="250" src="/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> [![Build Status][travis-image]][travis-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description : Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments. > <img width="400" src="/screen.png"> * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage : ### Simple Example ```javascript #!/usr/bin/env node const {argv} = require('yargs') if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node require('yargs') // eslint-disable-line .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ## Webpack See usage examples of yargs with webpack in [docs](/docs/webpack.md). ## Community : Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). 
## Documentation : ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Contributing](/contributing.md) [travis-url]: https://travis-ci.org/yargs/yargs [travis-image]: https://img.shields.io/travis/yargs/yargs/master.svg [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # lodash.merge v4.6.2 The [Lodash](https://lodash.com/) method `_.merge` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.merge ``` In Node.js: ```js var merge = require('lodash.merge'); ``` See the [documentation](https://lodash.com/docs#merge) or [package source](https://github.com/lodash/lodash/blob/4.6.2-npm-packages/lodash.merge) for more details. ![](cow.png) Moo! ==== Moo is a highly-optimised tokenizer/lexer generator. Use it to tokenize your strings, before parsing 'em with a parser like [nearley](https://github.com/hardmath123/nearley) or whatever else you're into. * [Fast](#is-it-fast) * [Convenient](#usage) * uses [Regular Expressions](#on-regular-expressions) * tracks [Line Numbers](#line-numbers) * handles [Keywords](#keywords) * supports [States](#states) * custom [Errors](#errors) * is even [Iterable](#iteration) * has no dependencies * 4KB minified + gzipped * Moo! Is it fast? ----------- Yup! Flying-cows-and-singed-steak fast. Moo is the fastest JS tokenizer around. It's **~2–10x** faster than most other tokenizers; it's a **couple orders of magnitude** faster than some of the slower ones. Define your tokens **using regular expressions**. Moo will compile 'em down to a **single RegExp for performance**. It uses the new ES6 **sticky flag** where possible to make things faster; otherwise it falls back to an almost-as-efficient workaround. (For more than you ever wanted to know about this, read [adventures in the land of substrings and RegExps](http://mrale.ph/blog/2016/11/23/making-less-dart-faster.html).) You _might_ be able to go faster still by writing your lexer by hand rather than using RegExps, but that's icky. Oh, and it [avoids parsing RegExps by itself](https://hackernoon.com/the-madness-of-parsing-real-world-javascript-regexps-d9ee336df983#.2l8qu3l76). Because that would be horrible. Usage ----- First, you need to do the needful: `$ npm install moo`, or whatever will ship this code to your computer. 
Alternatively, grab the `moo.js` file by itself and slap it into your web page via a `<script>` tag; moo is completely standalone. Then you can start roasting your very own lexer/tokenizer: ```js const moo = require('moo') let lexer = moo.compile({ WS: /[ \t]+/, comment: /\/\/.*?$/, number: /0|[1-9][0-9]*/, string: /"(?:\\["\\]|[^\n"\\])*"/, lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], NL: { match: /\n/, lineBreaks: true }, }) ``` And now throw some text at it: ```js lexer.reset('while (10) cows\nmoo') lexer.next() // -> { type: 'keyword', value: 'while' } lexer.next() // -> { type: 'WS', value: ' ' } lexer.next() // -> { type: 'lparen', value: '(' } lexer.next() // -> { type: 'number', value: '10' } // ... ``` When you reach the end of Moo's internal buffer, next() will return `undefined`. You can always `reset()` it and feed it more data when that happens. On Regular Expressions ---------------------- RegExps are nifty for making tokenizers, but they can be a bit of a pain. Here are some things to be aware of: * You often want to use **non-greedy quantifiers**: e.g. `*?` instead of `*`. Otherwise your tokens will be longer than you expect: ```js let lexer = moo.compile({ string: /".*"/, // greedy quantifier * // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo" "bar' } ``` Better: ```js let lexer = moo.compile({ string: /".*?"/, // non-greedy quantifier *? // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo' } lexer.next() // -> { type: 'space', value: ' ' } lexer.next() // -> { type: 'string', value: 'bar' } ``` * The **order of your rules** matters. Earlier ones will take precedence. ```js moo.compile({ identifier: /[a-z0-9]+/, number: /[0-9]+/, }).reset('42').next() // -> { type: 'identifier', value: '42' } moo.compile({ number: /[0-9]+/, identifier: /[a-z0-9]+/, }).reset('42').next() // -> { type: 'number', value: '42' } ``` * Moo uses **multiline RegExps**. This has a few quirks: for example, the **dot `/./` doesn't include newlines**. Use `[^]` instead if you want to match newlines too. * Since an excluding character ranges like `/[^ ]/` (which matches anything but a space) _will_ include newlines, you have to be careful not to include them by accident! In particular, the whitespace metacharacter `\s` includes newlines. Line Numbers ------------ Moo tracks detailed information about the input for you. It will track line numbers, as long as you **apply the `lineBreaks: true` option to any rules which might contain newlines**. Moo will try to warn you if you forget to do this. Note that this is `false` by default, for performance reasons: counting the number of lines in a matched token has a small cost. For optimal performance, only match newlines inside a dedicated token: ```js newline: {match: '\n', lineBreaks: true}, ``` ### Token Info ### Token objects (returned from `next()`) have the following attributes: * **`type`**: the name of the group, as passed to compile. * **`text`**: the string that was matched. * **`value`**: the string that was matched, transformed by your `value` function (if any). * **`offset`**: the number of bytes from the start of the buffer where the match starts. * **`lineBreaks`**: the number of line breaks found in the match. (Always zero if this rule has `lineBreaks: false`.) * **`line`**: the line number of the beginning of the match, starting from 1. * **`col`**: the column where the match begins, starting from 1. ### Value vs. 
Text ### The `value` is the same as the `text`, unless you provide a [value transform](#transform). ```js const moo = require('moo') const lexer = moo.compile({ ws: /[ \t]+/, string: {match: /"(?:\\["\\]|[^\n"\\])*"/, value: s => s.slice(1, -1)}, }) lexer.reset('"test"') lexer.next() /* { value: 'test', text: '"test"', ... } */ ``` ### Reset ### Calling `reset()` on your lexer will empty its internal buffer, and set the line, column, and offset counts back to their initial value. If you don't want this, you can `save()` the state, and later pass it as the second argument to `reset()` to explicitly control the internal state of the lexer. ```js    lexer.reset('some line\n') let info = lexer.save() // -> { line: 10 } lexer.next() // -> { line: 10 } lexer.next() // -> { line: 11 } // ... lexer.reset('a different line\n', info) lexer.next() // -> { line: 10 } ``` Keywords -------- Moo makes it convenient to define literals. ```js moo.compile({ lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], }) ``` It'll automatically compile them into regular expressions, escaping them where necessary. **Keywords** should be written using the `keywords` transform. ```js moo.compile({ IDEN: {match: /[a-zA-Z]+/, type: moo.keywords({ KW: ['while', 'if', 'else', 'moo', 'cows'], })}, SPACE: {match: /\s+/, lineBreaks: true}, }) ``` ### Why? ### You need to do this to ensure the **longest match** principle applies, even in edge cases. Imagine trying to parse the input `className` with the following rules: ```js keyword: ['class'], identifier: /[a-zA-Z]+/, ``` You'll get _two_ tokens — `['class', 'Name']` -- which is _not_ what you want! If you swap the order of the rules, you'll fix this example; but now you'll lex `class` wrong (as an `identifier`). The keywords helper checks matches against the list of keywords; if any of them match, it uses the type `'keyword'` instead of `'identifier'` (for this example). ### Keyword Types ### Keywords can also have **individual types**. ```js let lexer = moo.compile({ name: {match: /[a-zA-Z]+/, type: moo.keywords({ 'kw-class': 'class', 'kw-def': 'def', 'kw-if': 'if', })}, // ... }) lexer.reset('def foo') lexer.next() // -> { type: 'kw-def', value: 'def' } lexer.next() // space lexer.next() // -> { type: 'name', value: 'foo' } ``` You can use [itt](https://github.com/nathan/itt)'s iterator adapters to make constructing keyword objects easier: ```js itt(['class', 'def', 'if']) .map(k => ['kw-' + k, k]) .toObject() ``` States ------ Moo allows you to define multiple lexer **states**. Each state defines its own separate set of token rules. Your lexer will start off in the first state given to `moo.states({})`. Rules can be annotated with `next`, `push`, and `pop`, to change the current state after that token is matched. A "stack" of past states is kept, which is used by `push` and `pop`. * **`next: 'bar'`** moves to the state named `bar`. (The stack is not changed.) * **`push: 'bar'`** moves to the state named `bar`, and pushes the old state onto the stack. * **`pop: 1`** removes one state from the top of the stack, and moves to that state. (Only `1` is supported.) Only rules from the current state can be matched. You need to copy your rule into all the states you want it to be matched in. 
For example, to tokenize JS-style string interpolation such as `a${{c: d}}e`, you might use: ```js let lexer = moo.states({ main: { strstart: {match: '`', push: 'lit'}, ident: /\w+/, lbrace: {match: '{', push: 'main'}, rbrace: {match: '}', pop: true}, colon: ':', space: {match: /\s+/, lineBreaks: true}, }, lit: { interp: {match: '${', push: 'main'}, escape: /\\./, strend: {match: '`', pop: true}, const: {match: /(?:[^$`]|\$(?!\{))+/, lineBreaks: true}, }, }) // <= `a${{c: d}}e` // => strstart const interp lbrace ident colon space ident rbrace rbrace const strend ``` The `rbrace` rule is annotated with `pop`, so it moves from the `main` state into either `lit` or `main`, depending on the stack. Errors ------ If none of your rules match, Moo will throw an Error; since it doesn't know what else to do. If you prefer, you can have moo return an error token instead of throwing an exception. The error token will contain the whole of the rest of the buffer. ```js moo.compile({ // ... myError: moo.error, }) moo.reset('invalid') moo.next() // -> { type: 'myError', value: 'invalid', text: 'invalid', offset: 0, lineBreaks: 0, line: 1, col: 1 } moo.next() // -> undefined ``` You can have a token type that both matches tokens _and_ contains error values. ```js moo.compile({ // ... myError: {match: /[\$?`]/, error: true}, }) ``` ### Formatting errors ### If you want to throw an error from your parser, you might find `formatError` helpful. Call it with the offending token: ```js throw new Error(lexer.formatError(token, "invalid syntax")) ``` It returns a string with a pretty error message. ``` Error: invalid syntax at line 2 col 15: totally valid `syntax` ^ ``` Iteration --------- Iterators: we got 'em. ```js for (let here of lexer) { // here = { type: 'number', value: '123', ... } } ``` Create an array of tokens. ```js let tokens = Array.from(lexer); ``` Use [itt](https://github.com/nathan/itt)'s iteration tools with Moo. ```js for (let [here, next] = itt(lexer).lookahead()) { // pass a number if you need more tokens // enjoy! } ``` Transform --------- Moo doesn't allow capturing groups, but you can supply a transform function, `value()`, which will be called on the value before storing it in the Token object. ```js moo.compile({ STRING: [ {match: /"""[^]*?"""/, lineBreaks: true, value: x => x.slice(3, -3)}, {match: /"(?:\\["\\rn]|[^"\\])*?"/, lineBreaks: true, value: x => x.slice(1, -1)}, {match: /'(?:\\['\\rn]|[^'\\])*?'/, lineBreaks: true, value: x => x.slice(1, -1)}, ], // ... }) ``` Contributing ------------ Do check the [FAQ](https://github.com/tjvr/moo/issues?q=label%3Aquestion). Before submitting an issue, [remember...](https://github.com/tjvr/moo/blob/master/.github/CONTRIBUTING.md) # Optionator <a name="optionator" /> Optionator is a JavaScript/Node.js option parsing and help generation library used by [eslint](http://eslint.org), [Grasp](http://graspjs.com), [LiveScript](http://livescript.net), [esmangle](https://github.com/estools/esmangle), [escodegen](https://github.com/estools/escodegen), and [many more](https://www.npmjs.com/browse/depended/optionator). For an online demo, check out the [Grasp online demo](http://www.graspjs.com/#demo). [About](#about) &middot; [Usage](#usage) &middot; [Settings Format](#settings-format) &middot; [Argument Format](#argument-format) ## Why? The problem with other option parsers, such as `yargs` or `minimist`, is they just accept all input, valid or not. 
With Optionator, if you mistype an option, it will give you an error (with a suggestion for what you meant). If you give the wrong type of argument for an option, it will give you an error rather than supplying the wrong input to your application.

    $ cmd --halp
    Invalid option '--halp' - perhaps you meant '--help'?

    $ cmd --count str
    Invalid value for option 'count' - expected type Int, received value: str.

Other helpful features include reformatting the help text based on the size of the console, so that it fits even if the console is narrow, and accepting not just an array (eg. process.argv), but a string or object as well, making things like testing much easier.

## About

Optionator uses [type-check](https://github.com/gkz/type-check) and [levn](https://github.com/gkz/levn) behind the scenes to cast and verify input according to the specified types.

MIT license. Version 0.9.1

    npm install optionator

For updates on Optionator, [follow me on twitter](https://twitter.com/gkzahariev).

Optionator is a Node.js module, but can be used in the browser as well if packed with webpack/browserify.

## Usage

`require('optionator');` returns a function. It has one property, `VERSION`, the current version of the library as a string. This function is called with an object specifying your options and other information, see the [settings format section](#settings-format).

This in turn returns an object with four properties, `parse`, `parseArgv`, `generateHelp`, and `generateHelpForOption`, which are all functions.

```js
var optionator = require('optionator')({
    prepend: 'Usage: cmd [options]',
    append: 'Version 1.0.0',
    options: [{
        option: 'help',
        alias: 'h',
        type: 'Boolean',
        description: 'displays help'
    }, {
        option: 'count',
        alias: 'c',
        type: 'Int',
        description: 'number of things',
        example: 'cmd --count 2'
    }]
});

var options = optionator.parseArgv(process.argv);
if (options.help) {
    console.log(optionator.generateHelp());
}
...
```

### parse(input, parseOptions)

`parse` processes the `input` according to your settings, and returns an object with the results.

##### arguments

* input - `[String] | Object | String` - the input you wish to parse
* parseOptions - `{slice: Int}` - all options optional
  - `slice` specifies how much to slice away from the beginning if the input is an array or string - by default `0` for string, `2` for array (works with `process.argv`)

##### returns

`Object` - the parsed options, each key is a camelCase version of the option name (specified in dash-case), and each value is the processed value for that option. Positional values are in an array under the `_` key.

##### example

```js
parse(['node', 't.js', '--count', '2', 'positional']); // {count: 2, _: ['positional']}
parse('--count 2 positional');                         // {count: 2, _: ['positional']}
parse({count: 2, _:['positional']});                   // {count: 2, _: ['positional']}
```

### parseArgv(input)

`parseArgv` works exactly like `parse`, but only for array input and it slices off the first two elements.

##### arguments

* input - `[String]` - the input you wish to parse

##### returns

See "returns" section in "parse"

##### example

```js
parseArgv(process.argv);
```

### generateHelp(helpOptions)

`generateHelp` produces help text based on your settings.

##### arguments

* helpOptions - `{showHidden: Boolean, interpolate: Object}` - all options optional
  - `showHidden` specifies whether to show options with `hidden: true` specified, by default it is `false`
  - `interpolate` specifies data to be interpolated in `prepend` and `append` text, `{{key}}` is the format - eg.
`generateHelp({interpolate:{version: '0.4.2'}})`, will change this `append` text: `Version {{version}}` to `Version 0.4.2`

##### returns

`String` - the generated help text

##### example

```js
generateHelp(); /*
"Usage: cmd [options] positional

  -h, --help       displays help
  -c, --count Int  number of things

Version 1.0.0
"*/
```

### generateHelpForOption(optionName)

`generateHelpForOption` produces expanded help text for the option specified with `optionName`. If an `example` was specified for the option, it will be displayed, and if a `longDescription` was specified, it will display that instead of the `description`.

##### arguments

* optionName - `String` - the name of the option to display

##### returns

`String` - the generated help text for the option

##### example

```js
generateHelpForOption('count'); /*
"-c, --count Int
description: number of things
example: cmd --count 2
"*/
```

## Settings Format

When you `require('optionator')`, you get a function that takes in a settings object. This object has the type:

    {
      prepend: String,
      append: String,
      options: [{heading: String} | {
        option: String,
        alias: [String] | String,
        type: String,
        enum: [String],
        default: String,
        restPositional: Boolean,
        required: Boolean,
        overrideRequired: Boolean,
        dependsOn: [String] | String,
        concatRepeatedArrays: Boolean | (Boolean, Object),
        mergeRepeatedObjects: Boolean,
        description: String,
        longDescription: String,
        example: [String] | String
      }],
      helpStyle: {
        aliasSeparator: String,
        typeSeparator: String,
        descriptionSeparator: String,
        initialIndent: Int,
        secondaryIndent: Int,
        maxPadFactor: Number
      },
      mutuallyExclusive: [[String | [String]]],
      concatRepeatedArrays: Boolean | (Boolean, Object), // deprecated, set in defaults object
      mergeRepeatedObjects: Boolean, // deprecated, set in defaults object
      positionalAnywhere: Boolean,
      typeAliases: Object,
      defaults: Object
    }

All of the properties are optional (the `Maybe` has been excluded for brevity's sake), except for having either `heading: String` or `option: String` in each object in the `options` array.

### Top Level Properties

* `prepend` is an optional string to be placed before the options in the help text
* `append` is an optional string to be placed after the options in the help text
* `options` is a required array specifying your options and headings, the options and headings will be displayed in the order specified
* `helpStyle` is an optional object which enables you to change the default appearance of some aspects of the help text
* `mutuallyExclusive` is an optional array of arrays of either strings or arrays of strings. The top level array is a list of rules, each rule is a list of elements - each element can be either a string (the name of an option), or a list of strings (a group of option names) - there will be an error if more than one element is present
* `concatRepeatedArrays` see description under the "Option Properties" heading - use at the top level is deprecated, if you want to set this for all options, use the `defaults` property
* `mergeRepeatedObjects` see description under the "Option Properties" heading - use at the top level is deprecated, if you want to set this for all options, use the `defaults` property
* `positionalAnywhere` is an optional boolean (defaults to `true`) - when `true` it allows positional arguments anywhere, when `false`, all arguments after the first positional one are taken to be positional as well, even if they look like a flag.
For example, with `positionalAnywhere: false`, the arguments `--flag --boom 12 --crack` would have two positional arguments: `12` and `--crack`
* `typeAliases` is an optional object, it allows you to set aliases for types, eg. `{Path: 'String'}` would allow you to use the type `Path` as an alias for the type `String`
* `defaults` is an optional object following the option properties format, which specifies default values for all options. A default will be overridden if manually set. For example, you can do `default: { type: "String" }` to set the default type of all options to `String`, and then override that default in an individual option by setting the `type` property

#### Heading Properties

* `heading` a required string, the name of the heading

#### Option Properties

* `option` the required name of the option - use dash-case, without the leading dashes
* `alias` is an optional string or array of strings which specify any aliases for the option
* `type` is a required string in the [type check](https://github.com/gkz/type-check) [format](https://github.com/gkz/type-check#type-format), this will be used to cast the inputted value and validate it
* `enum` is an optional array of strings, each string will be parsed by [levn](https://github.com/gkz/levn) - the argument value must be one of the resulting values - each potential value must validate against the specified `type`
* `default` is an optional string, which will be parsed by [levn](https://github.com/gkz/levn) and used as the default value if none is set - the value must validate against the specified `type`
* `restPositional` is an optional boolean - if set to `true`, everything after the option will be taken to be a positional argument, even if it looks like a named argument
* `required` is an optional boolean - if set to `true`, the option parsing will fail if the option is not defined
* `overrideRequired` is an optional boolean - if set to `true` and the option is used, and there is another option which is required but not set, it will override the need for the required option and there will be no error - this is useful if you have required options and want to use `--help` or `--version` flags
* `concatRepeatedArrays` is an optional boolean or tuple with boolean and options object (defaults to `false`) - when set to `true` and an option contains an array value and is repeated, the subsequent values for the flag will be appended rather than overwriting the original value - eg. option `g` of type `[String]`: `-g a -g b -g c,d` will result in `['a','b','c','d']`

  You can supply an options object by giving the following value: `[true, options]`. The one currently supported option is `oneValuePerFlag`, this only allows one array value per flag. This is useful if your potential values contain a comma.
* `mergeRepeatedObjects` is an optional boolean (defaults to `false`) - when set to `true` and an option contains an object value and is repeated, the subsequent values for the flag will be merged rather than overwriting the original value - eg.
option `g` of type `Object`: `-g a:1 -g b:2 -g c:3,d:4` will result in `{a: 1, b: 2, c: 3, d: 4}` * `dependsOn` is an optional string or array of strings - if simply a string (the name of another option), it will make sure that that other option is set, if an array of strings, depending on whether `'and'` or `'or'` is first, it will either check whether all (`['and', 'option-a', 'option-b']`), or at least one (`['or', 'option-a', 'option-b']`) other options are set * `description` is an optional string, which will be displayed next to the option in the help text * `longDescription` is an optional string, it will be displayed instead of the `description` when `generateHelpForOption` is used * `example` is an optional string or array of strings with example(s) for the option - these will be displayed when `generateHelpForOption` is used #### Help Style Properties * `aliasSeparator` is an optional string, separates multiple names from each other - default: ' ,' * `typeSeparator` is an optional string, separates the type from the names - default: ' ' * `descriptionSeparator` is an optional string , separates the description from the padded name and type - default: ' ' * `initialIndent` is an optional int - the amount of indent for options - default: 2 * `secondaryIndent` is an optional int - the amount of indent if wrapped fully (in addition to the initial indent) - default: 4 * `maxPadFactor` is an optional number - affects the default level of padding for the names/type, it is multiplied by the average of the length of the names/type - default: 1.5 ## Argument Format At the highest level there are two types of arguments: named, and positional. Name arguments of any length are prefixed with `--` (eg. `--go`), and those of one character may be prefixed with either `--` or `-` (eg. `-g`). There are two types of named arguments: boolean flags (eg. `--problemo`, `-p`) which take no value and result in a `true` if they are present, the falsey `undefined` if they are not present, or `false` if present and explicitly prefixed with `no` (eg. `--no-problemo`). Named arguments with values (eg. `--tseries 800`, `-t 800`) are the other type. If the option has a type `Boolean` it will automatically be made into a boolean flag. Any other type results in a named argument that takes a value. For more information about how to properly set types to get the value you want, take a look at the [type check](https://github.com/gkz/type-check) and [levn](https://github.com/gkz/levn) pages. You can group single character arguments that use a single `-`, however all except the last must be boolean flags (which take no value). The last may be a boolean flag, or an argument which takes a value - eg. `-ba 2` is equivalent to `-b -a 2`. Positional arguments are all those values which do not fall under the above - they can be anywhere, not just at the end. For example, in `cmd -b one -a 2 two` where `b` is a boolean flag, and `a` has the type `Number`, there are two positional arguments, `one` and `two`. Everything after an `--` is positional, even if it looks like a named argument. You may optionally use `=` to separate option names from values, for example: `--count=2`. If you specify the option `NUM`, then any argument using a single `-` followed by a number will be valid and will set the value of `NUM`. Eg. `-2` will be parsed into `NUM: 2`. If duplicate named arguments are present, the last one will be taken. 
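
For a concrete feel of these rules, here is a minimal sketch using the `parse` function described above; the `boom` and `amount` options are hypothetical, and the commented results are approximate:

```js
var optionator = require('optionator')({
    options: [{
        option: 'boom',
        alias: 'b',
        type: 'Boolean',
        description: 'a boolean flag'
    }, {
        option: 'amount',
        alias: 'a',
        type: 'Number',
        description: 'an option that takes a value'
    }]
});

// Grouped short flags: every flag except the last must be boolean.
optionator.parse('-ba 2 one two');
// roughly {boom: true, amount: 2, _: ['one', 'two']}

// '=' may separate an option name from its value; everything after '--' is positional.
optionator.parse('--amount=2 -- --not-a-flag');
// roughly {amount: 2, _: ['--not-a-flag']}
```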
## Technical About `optionator` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It uses [levn](https://github.com/gkz/levn) to cast arguments to their specified type, and uses [type-check](https://github.com/gkz/type-check) to validate values. It also uses the [prelude.ls](http://preludels.com/) library. Shims used when bundling asc for browser usage. # axios // helpers The modules found in `helpers/` should be generic modules that are _not_ specific to the domain logic of axios. These modules could theoretically be published to npm on their own and consumed by other modules or apps. Some examples of generic modules are things like: - Browser polyfills - Managing cookies - Parsing HTTP headers # node-tar [![Build Status](https://travis-ci.org/npm/node-tar.svg?branch=master)](https://travis-ci.org/npm/node-tar) [Fast](./benchmarks) and full-featured Tar for Node.js The API is designed to mimic the behavior of `tar(1)` on unix systems. If you are familiar with how tar works, most of this will hopefully be straightforward for you. If not, then hopefully this module can teach you useful unix skills that may come in handy someday :) ## Background A "tar file" or "tarball" is an archive of file system entries (directories, files, links, etc.) The name comes from "tape archive". If you run `man tar` on almost any Unix command line, you'll learn quite a bit about what it can do, and its history. Tar has 5 main top-level commands: * `c` Create an archive * `r` Replace entries within an archive * `u` Update entries within an archive (ie, replace if they're newer) * `t` List out the contents of an archive * `x` Extract an archive to disk The other flags and options modify how this top level function works. ## High-Level API These 5 functions are the high-level API. All of them have a single-character name (for unix nerds familiar with `tar(1)`) as well as a long name (for everyone else). All the high-level functions take the following arguments, all three of which are optional and may be omitted. 1. `options` - An optional object specifying various options 2. `paths` - An array of paths to add or extract 3. `callback` - Called when the command is completed, if async. (If sync or no file specified, providing a callback throws a `TypeError`.) If the command is sync (ie, if `options.sync=true`), then the callback is not allowed, since the action will be completed immediately. If a `file` argument is specified, and the command is async, then a `Promise` is returned. In this case, if async, a callback may be provided which is called when the command is completed. If a `file` option is not specified, then a stream is returned. For `create`, this is a readable stream of the generated archive. For `list` and `extract` this is a writable stream that an archive should be written into. If a file is not specified, then a callback is not allowed, because you're already getting a stream to work with. `replace` and `update` only work on existing archives, and so require a `file` argument. Sync commands without a file argument return a stream that acts on its input immediately in the same tick. For readable streams, this means that all of the data is immediately available by calling `stream.read()`. For writable streams, it will be acted upon as soon as it is provided, but this can be at any time. ### Warnings and Errors Tar emits warnings and errors for recoverable and unrecoverable situations, respectively. 
In many cases, a warning only affects a single entry in an archive, or is simply informing you that it's modifying an entry to comply with the settings provided.

Unrecoverable warnings will always raise an error (ie, emit `'error'` on streaming actions, throw for non-streaming sync actions, reject the returned Promise for non-streaming async operations, or call a provided callback with an `Error` as the first argument). Recoverable errors will raise an error only if `strict: true` is set in the options.

Respond to (recoverable) warnings by listening to the `warn` event. Handlers receive 3 arguments:

- `code` String. One of the error codes below. This may not match `data.code`, which preserves the original error code from fs and zlib.
- `message` String. More details about the error.
- `data` Metadata about the error. An `Error` object for errors raised by fs and zlib. All fields are attached to errors raised by tar. Typically contains the following fields, as relevant:
  - `tarCode` The tar error code.
  - `code` Either the tar error code, or the error code set by the underlying system.
  - `file` The archive file being read or written.
  - `cwd` Working directory for creation and extraction operations.
  - `entry` The entry object (if it could be created) for `TAR_ENTRY_INFO`, `TAR_ENTRY_INVALID`, and `TAR_ENTRY_ERROR` warnings.
  - `header` The header object (if it could be created, and the entry could not be created) for `TAR_ENTRY_INFO` and `TAR_ENTRY_INVALID` warnings.
  - `recoverable` Boolean. If `false`, then the warning will emit an `error`, even in non-strict mode.

#### Error Codes

* `TAR_ENTRY_INFO` An informative error indicating that an entry is being modified, but otherwise processed normally. For example, removing `/` or `C:\` from absolute paths if `preservePaths` is not set.
* `TAR_ENTRY_INVALID` An indication that a given entry is not a valid tar archive entry, and will be skipped. This occurs when:
  - a checksum fails,
  - a `linkpath` is missing for a link type, or
  - a `linkpath` is provided for a non-link type.

  If every entry in a parsed archive raises a `TAR_ENTRY_INVALID` error, then the archive is presumed to be unrecoverably broken, and `TAR_BAD_ARCHIVE` will be raised.
* `TAR_ENTRY_ERROR` The entry appears to be a valid tar archive entry, but encountered an error which prevented it from being unpacked. This occurs when:
  - an unrecoverable fs error happens during unpacking,
  - an entry has `..` in the path and `preservePaths` is not set, or
  - an entry is extracting through a symbolic link, when `preservePaths` is not set.
* `TAR_ENTRY_UNSUPPORTED` An indication that a given entry is a valid archive entry, but of a type that is unsupported, and so will be skipped in archive creation or extracting.
* `TAR_ABORT` When parsing gzipped-encoded archives, the parser will abort the parse process and raise a warning for any zlib errors encountered. Aborts are considered unrecoverable for both parsing and unpacking.
* `TAR_BAD_ARCHIVE` The archive file is totally hosed. This can happen for a number of reasons, and always occurs at the end of a parse or extract:
  - An entry body was truncated before seeing the full number of bytes.
  - The archive contained only invalid entries, indicating that it is likely not an archive, or at least, not an archive this library can parse.

  `TAR_BAD_ARCHIVE` is considered informative for parse operations, but unrecoverable for extraction.
Note that, if encountered at the end of an extraction, tar WILL still have extracted as much it could from the archive, so there may be some garbage files to clean up. Errors that occur deeper in the system (ie, either the filesystem or zlib) will have their error codes left intact, and a `tarCode` matching one of the above will be added to the warning metadata or the raised error object. Errors generated by tar will have one of the above codes set as the `error.code` field as well, but since errors originating in zlib or fs will have their original codes, it's better to read `error.tarCode` if you wish to see how tar is handling the issue. ### Examples The API mimics the `tar(1)` command line functionality, with aliases for more human-readable option and function names. The goal is that if you know how to use `tar(1)` in Unix, then you know how to use `require('tar')` in JavaScript. To replicate `tar czf my-tarball.tgz files and folders`, you'd do: ```js tar.c( { gzip: <true|gzip options>, file: 'my-tarball.tgz' }, ['some', 'files', 'and', 'folders'] ).then(_ => { .. tarball has been created .. }) ``` To replicate `tar cz files and folders > my-tarball.tgz`, you'd do: ```js tar.c( // or tar.create { gzip: <true|gzip options> }, ['some', 'files', 'and', 'folders'] ).pipe(fs.createWriteStream('my-tarball.tgz')) ``` To replicate `tar xf my-tarball.tgz` you'd do: ```js tar.x( // or tar.extract( { file: 'my-tarball.tgz' } ).then(_=> { .. tarball has been dumped in cwd .. }) ``` To replicate `cat my-tarball.tgz | tar x -C some-dir --strip=1`: ```js fs.createReadStream('my-tarball.tgz').pipe( tar.x({ strip: 1, C: 'some-dir' // alias for cwd:'some-dir', also ok }) ) ``` To replicate `tar tf my-tarball.tgz`, do this: ```js tar.t({ file: 'my-tarball.tgz', onentry: entry => { .. do whatever with it .. } }) ``` To replicate `cat my-tarball.tgz | tar t` do: ```js fs.createReadStream('my-tarball.tgz') .pipe(tar.t()) .on('entry', entry => { .. do whatever with it .. }) ``` To do anything synchronous, add `sync: true` to the options. Note that sync functions don't take a callback and don't return a promise. When the function returns, it's already done. Sync methods without a file argument return a sync stream, which flushes immediately. But, of course, it still won't be done until you `.end()` it. To filter entries, add `filter: <function>` to the options. Tar-creating methods call the filter with `filter(path, stat)`. Tar-reading methods (including extraction) call the filter with `filter(path, entry)`. The filter is called in the `this`-context of the `Pack` or `Unpack` stream object. The arguments list to `tar t` and `tar x` specify a list of filenames to extract or list, so they're equivalent to a filter that tests if the file is in the list. For those who _aren't_ fans of tar's single-character command names: ``` tar.c === tar.create tar.r === tar.replace (appends to archive, file is required) tar.u === tar.update (appends if newer, file is required) tar.x === tar.extract tar.t === tar.list ``` Keep reading for all the command descriptions and options, as well as the low-level API that they are built on. ### tar.c(options, fileList, callback) [alias: tar.create] Create a tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. 
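
As a small illustration of the `@` behavior described above (a sketch only; the archive and file names are hypothetical):

```js
const tar = require('tar')

// Copy every entry from old.tar into the new archive, and also add a literal
// file whose name begins with '@' by prefixing it with './'.
tar.c(
  { file: 'combined.tar' },
  ['@old.tar', './@notes.txt']
).then(_ => { /* combined.tar has been written */ })
```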
The following options are supported: - `file` Write the tarball archive to the specified filename. If this is specified, then the callback will be fired when the file has been written, and a promise will be returned that resolves when the file is written. If a filename is not specified, then a Readable Stream will be returned which will emit the file data. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. If this is set, and a file is not provided, then the resulting stream will already have the data ready to `read` or `emit('data')` as soon as you request it. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `mode` The mode to set on the created file archive - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. ### tar.x(options, fileList, callback) [alias: tar.extract] Extract a tarball archive. The `fileList` is an array of paths to extract from the tarball. If no paths are provided, then all the entries are extracted. If the archive is gzipped, then tar will detect this and unzip it. 
Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. Most extraction errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then the extraction will fail completely. The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. [Alias: `C`] - `file` The archive file to extract. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Create files and directories synchronously. - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. [Alias: `keep-newer`, `keep-newer-files`] - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. [Alias: `k`, `keep-existing`] - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. [Alias: `P`] - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. [Alias: `U`] - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. [Alias: `strip-components`, `stripComponents`] - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. [Alias: `p`] - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. [Alias: `m`, `no-mtime`] - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. 
If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync extractions. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### tar.t(options, fileList, callback) [alias: tar.list] List the contents of a tarball archive. The `fileList` is an array of paths to list from the tarball. If no paths are provided, then all the entries are listed. If the archive is gzipped, then tar will detect this and unzip it. Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. However, they don't emit `'data'` or `'end'` events. (If you want to get actual readable entries, use the `tar.Parse` class instead.) The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. [Alias: `C`] - `file` The archive file to list. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Read the specified file synchronously. (This has no effect when a file option isn't specified, because entries are emitted as fast as they are parsed from the stream anyway.) - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. This is important for when both `file` and `sync` are set, because it will be called synchronously. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noResume` By default, `entry` streams are resumed immediately after the call to `onentry`. Set `noResume: true` to suppress this behavior. Note that by opting into this, the stream will never complete until the entry data is consumed. ### tar.u(options, fileList, callback) [alias: tar.update] Add files to an archive if they are newer than the entry already in the tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. 
(See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ### tar.r(options, fileList, callback) [alias: tar.replace] Add files to an existing archive. Because later entries override earlier entries, this effectively replaces any existing entries. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. 
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ## Low-Level API ### class tar.Pack A readable tar stream. Has all the standard readable stream interface stuff. `'data'` and `'end'` events, `read()` method, `pause()` and `resume()`, etc. #### constructor(options) The following options are supported: - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. #### add(path) Adds an entry to the archive. Returns the Pack stream. #### write(path) Adds an entry to the archive. Returns true if flushed. #### end() Finishes the archive. 
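
A minimal sketch of driving `tar.Pack` directly, based on the methods above (file names are hypothetical):

```js
const fs = require('fs')
const tar = require('tar')

// Pack is a readable stream of tar data; pipe it wherever you like.
const pack = new tar.Pack({ gzip: true })
pack.pipe(fs.createWriteStream('out.tgz'))

pack.add('README.md')   // add an entry (returns the Pack stream)
pack.add('lib')         // directories are archived with their contents
pack.end()              // finish the archive
```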
### class tar.Pack.Sync Synchronous version of `tar.Pack`. ### class tar.Unpack A writable stream that unpacks a tar archive onto the file system. All the normal writable stream stuff is supported. `write()` and `end()` methods, `'drain'` events, etc. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. `'close'` is emitted when it's done writing stuff to the file system. Most unpack errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then an error will be emitted. #### constructor(options) - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. - `win32` True if on a windows platform. Causes behavior where filenames containing `<|>?` chars are converted to windows-compatible values while being unpacked. - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. 
Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `strict` Treat warnings as crash-worthy errors. Default false. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") ### class tar.Unpack.Sync Synchronous version of `tar.Unpack`. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync unpack streams. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### class tar.Parse A writable stream that parses a tar archive stream. All the standard writable stream stuff is supported. If the archive is gzipped, then tar will detect this and unzip it. Emits `'entry'` events with `tar.ReadEntry` objects, which are themselves readable streams that you can pipe wherever. Each `entry` will not emit until the one before it is flushed through, so make sure to either consume the data (with `on('data', ...)` or `.pipe(...)`) or throw it away with `.resume()` to keep the stream flowing. #### constructor(options) Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. The following options are supported: - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") #### abort(error) Stop all parsing activities. This is called when there are zlib errors. It also emits an unrecoverable warning with the error provided. ### class tar.ReadEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being read out of a tar archive. It has the following fields: - `extended` The extended metadata object provided to the constructor. - `globalExtended` The global extended metadata object provided to the constructor. - `remain` The number of bytes remaining to be written into the stream. - `blockRemain` The number of 512-byte blocks remaining to be written into the stream. - `ignore` Whether this entry should be ignored. - `meta` True if this represents metadata about the next entry, false if it represents a filesystem object. - All the fields from the header, extended header, and global extended header are added to the ReadEntry object. So it has `path`, `type`, `size, `mode`, and so on. #### constructor(header, extended, globalExtended) Create a new ReadEntry object with the specified header, extended header, and global extended header values. ### class tar.WriteEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being written from the file system into a tar archive. Emits data for the Header, and for the Pax Extended Header if one is required, as well as any body data. 
Creating a WriteEntry for a directory does not also create WriteEntry objects for all of the directory contents. It has the following fields: - `path` The path field that will be written to the archive. By default, this is also the path from the cwd to the file system object. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `myuid` If supported, the uid of the user running the current process. - `myuser` The `env.USER` string if set, or `''`. Set as the entry `uname` field if the file's `uid` matches `this.myuid`. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly. - `strict` Treat warnings as crash-worthy errors. Default false. - `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/` and filenames containing the windows-compatible forms of `<|>?:` characters are converted to actual `<|>?:` characters in the archive. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. #### constructor(path, options) `path` is the path of the entry as it is written in the archive. The following options are supported: - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly. - `strict` Treat warnings as crash-worthy errors. Default false. - `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `noMtime` Set to true to omit writing `mtime` values for entries. 
Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.
- `umask` Set to restrict the modes on the entries in the archive, somewhat like how umask works on file creation. Defaults to `process.umask()` on unix systems, or `0o22` on Windows.

#### warn(message, data)

If strict, emit an error with the provided message. Otherwise, emit a `'warn'` event with the provided message and data.

### class tar.WriteEntry.Sync

Synchronous version of `tar.WriteEntry`.

### class tar.WriteEntry.Tar

A version of `tar.WriteEntry` that gets its data from a `tar.ReadEntry` instead of from the filesystem.

#### constructor(readEntry, options)

`readEntry` is the entry being read out of another archive.

The following options are supported:

- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors")
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.

### class tar.Header

A class for reading and writing header blocks. It has the following fields:

- `nullBlock` True if decoding a block which is entirely composed of `0x00` null bytes. (Useful because tar files are terminated by at least 2 null blocks.)
- `cksumValid` True if the checksum in the header is valid, false otherwise.
- `needPax` True if the values, as encoded, will require a Pax extended header.
- `path` The path of the entry.
- `mode` The 4 lowest-order octal digits of the file mode. That is, read/write/execute permissions for world, group, and owner, and the setuid, setgid, and sticky bits.
- `uid` Numeric user id of the file owner
- `gid` Numeric group id of the file owner
- `size` Size of the file in bytes
- `mtime` Modified time of the file
- `cksum` The checksum of the header. This is generated by adding all the bytes of the header block, treating the checksum field itself as all ascii space characters (that is, `0x20`).
- `type` The human-readable name of the type of entry this represents, or the alphanumeric key if unknown.
- `typeKey` The alphanumeric key for the type of entry this header represents.
- `linkpath` The target of Link and SymbolicLink entries.
- `uname` Human-readable user name of the file owner
- `gname` Human-readable group name of the file owner
- `devmaj` The major portion of the device number. Always `0` for files, directories, and links.
- `devmin` The minor portion of the device number. Always `0` for files, directories, and links.
- `atime` File access time.
- `ctime` File change time.

#### constructor(data, [offset=0])

`data` is optional. It is either a Buffer that should be interpreted as a tar Header starting at the specified offset and continuing for 512 bytes, or a data object of keys and values to set on the header object, and eventually encode as a tar Header.

#### decode(block, offset)

Decode the provided buffer starting at the specified offset.
Buffer length must be greater than 512 bytes. #### set(data) Set the fields in the data object. #### encode(buffer, offset) Encode the header fields into the buffer at the specified offset. Returns `this.needPax` to indicate whether a Pax Extended Header is required to properly encode the specified data. ### class tar.Pax An object representing a set of key-value pairs in an Pax extended header entry. It has the following fields. Where the same name is used, they have the same semantics as the tar.Header field of the same name. - `global` True if this represents a global extended header, or false if it is for a single entry. - `atime` - `charset` - `comment` - `ctime` - `gid` - `gname` - `linkpath` - `mtime` - `path` - `size` - `uid` - `uname` - `dev` - `ino` - `nlink` #### constructor(object, global) Set the fields set in the object. `global` is a boolean that defaults to false. #### encode() Return a Buffer containing the header and body for the Pax extended header entry, or `null` if there is nothing to encode. #### encodeBody() Return a string representing the body of the pax extended header entry. #### encodeField(fieldName) Return a string representing the key/value encoding for the specified fieldName, or `''` if the field is unset. ### tar.Pax.parse(string, extended, global) Return a new Pax object created by parsing the contents of the string provided. If the `extended` object is set, then also add the fields from that object. (This is necessary because multiple metadata entries can occur in sequence.) ### tar.types A translation table for the `type` field in tar headers. #### tar.types.name.get(code) Get the human-readable name for a given alphanumeric code. #### tar.types.code.get(name) Get the alphanumeric code for a given human-readable name. <p align="center"> <a href="https://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # glob-parent [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Azure Pipelines Build Status][azure-pipelines-image]][azure-pipelines-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] Extract the non-magic parent path from a glob string. ## Usage ```js var globParent = require('glob-parent'); globParent('path/to/*.js'); // 'path/to' globParent('/root/path/to/*.js'); // '/root/path/to' globParent('/*.js'); // '/' globParent('*.js'); // '.' globParent('**/*.js'); // '.' globParent('path/{to,from}'); // 'path' globParent('path/!(to|from)'); // 'path' globParent('path/?(to|from)'); // 'path' globParent('path/+(to|from)'); // 'path' globParent('path/*(to|from)'); // 'path' globParent('path/@(to|from)'); // 'path' globParent('path/**/*'); // 'path' // if provided a non-glob path, returns the nearest dir globParent('path/foo/bar.js'); // 'path/foo' globParent('path/foo/'); // 'path/foo' globParent('path/foo'); // 'path' (see issue #3 for details) ``` ## API ### `globParent(maybeGlobString, [options])` Takes a string and returns the part of the path before the glob begins. Be aware of Escaping rules and Limitations below. 
#### options ```js { // Disables the automatic conversion of slashes for Windows flipBackslashes: true } ``` ## Escaping The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters: - `?` (question mark) unless used as a path segment alone - `*` (asterisk) - `|` (pipe) - `(` (opening parenthesis) - `)` (closing parenthesis) - `{` (opening curly brace) - `}` (closing curly brace) - `[` (opening bracket) - `]` (closing bracket) **Example** ```js globParent('foo/[bar]/') // 'foo' globParent('foo/\\[bar]/') // 'foo/[bar]' ``` ## Limitations ### Braces & Brackets This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with [expand-braces] and/or [expand-brackets]. ### Windows Backslashes are not valid path separators for globs. If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators). This cannot be used in conjunction with escape characters. ```js // BAD globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)' // GOOD globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)' ``` If you are using escape characters for a pattern without path parts (i.e. relative to `cwd`), prefix with `./` to avoid confusing glob-parent. ```js // BAD globParent('foo \\[bar]') // 'foo ' globParent('foo \\[bar]*') // 'foo ' // GOOD globParent('./foo \\[bar]') // 'foo [bar]' globParent('./foo \\[bar]*') // '.' ``` ## License ISC [expand-braces]: https://github.com/jonschlinkert/expand-braces [expand-brackets]: https://github.com/jonschlinkert/expand-brackets [downloads-image]: https://img.shields.io/npm/dm/glob-parent.svg [npm-url]: https://www.npmjs.com/package/glob-parent [npm-image]: https://img.shields.io/npm/v/glob-parent.svg [azure-pipelines-url]: https://dev.azure.com/gulpjs/gulp/_build/latest?definitionId=2&branchName=master [azure-pipelines-image]: https://dev.azure.com/gulpjs/gulp/_apis/build/status/glob-parent?branchName=master [travis-url]: https://travis-ci.org/gulpjs/glob-parent [travis-image]: https://img.shields.io/travis/gulpjs/glob-parent.svg?label=travis-ci [appveyor-url]: https://ci.appveyor.com/project/gulpjs/glob-parent [appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/glob-parent.svg?label=appveyor [coveralls-url]: https://coveralls.io/r/gulpjs/glob-parent [coveralls-image]: https://img.shields.io/coveralls/gulpjs/glob-parent/master.svg [gitter-url]: https://gitter.im/gulpjs/gulp [gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg Like `chown -R`. Takes the same arguments as `fs.chown()` # type-check [![Build Status](https://travis-ci.org/gkz/type-check.png?branch=master)](https://travis-ci.org/gkz/type-check) <a name="type-check" /> `type-check` is a library which allows you to check the types of JavaScript values at runtime with a Haskell like type syntax. 
It is great for checking external input, for testing, or even for adding a bit of safety to your internal code. It is a major component of [levn](https://github.com/gkz/levn). MIT license. Version 0.4.0. Check out the [demo](http://gkz.github.io/type-check/). For updates on `type-check`, [follow me on twitter](https://twitter.com/gkzahariev). npm install type-check ## Quick Examples ```js // Basic types: var typeCheck = require('type-check').typeCheck; typeCheck('Number', 1); // true typeCheck('Number', 'str'); // false typeCheck('Error', new Error); // true typeCheck('Undefined', undefined); // true // Comment typeCheck('count::Number', 1); // true // One type OR another type: typeCheck('Number | String', 2); // true typeCheck('Number | String', 'str'); // true // Wildcard, matches all types: typeCheck('*', 2) // true // Array, all elements of a single type: typeCheck('[Number]', [1, 2, 3]); // true typeCheck('[Number]', [1, 'str', 3]); // false // Tuples, or fixed length arrays with elements of different types: typeCheck('(String, Number)', ['str', 2]); // true typeCheck('(String, Number)', ['str']); // false typeCheck('(String, Number)', ['str', 2, 5]); // false // Object properties: typeCheck('{x: Number, y: Boolean}', {x: 2, y: false}); // true typeCheck('{x: Number, y: Boolean}', {x: 2}); // false typeCheck('{x: Number, y: Maybe Boolean}', {x: 2}); // true typeCheck('{x: Number, y: Boolean}', {x: 2, y: false, z: 3}); // false typeCheck('{x: Number, y: Boolean, ...}', {x: 2, y: false, z: 3}); // true // A particular type AND object properties: typeCheck('RegExp{source: String, ...}', /re/i); // true typeCheck('RegExp{source: String, ...}', {source: 're'}); // false // Custom types: var opt = {customTypes: {Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; }}}}; typeCheck('Even', 2, opt); // true // Nested: var type = '{a: (String, [Number], {y: Array, ...}), b: Error{message: String, ...}}' typeCheck(type, {a: ['hi', [1, 2, 3], {y: [1, 'ms']}], b: new Error('oh no')}); // true ``` Check out the [type syntax format](#syntax) and [guide](#guide). ## Usage `require('type-check');` returns an object that exposes four properties. `VERSION` is the current version of the library as a string. `typeCheck`, `parseType`, and `parsedTypeCheck` are functions. ```js // typeCheck(type, input, options); typeCheck('Number', 2); // true // parseType(type); var parsedType = parseType('Number'); // object // parsedTypeCheck(parsedType, input, options); parsedTypeCheck(parsedType, 2); // true ``` ### typeCheck(type, input, options) `typeCheck` checks a JavaScript value `input` against `type` written in the [type format](#type-format) (and taking account the optional `options`) and returns whether the `input` matches the `type`. ##### arguments * type - `String` - the type written in the [type format](#type-format) which to check against * input - `*` - any JavaScript value, which is to be checked against the type * options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types) ##### returns `Boolean` - whether the input matches the type ##### example ```js typeCheck('Number', 2); // true ``` ### parseType(type) `parseType` parses string `type` written in the [type format](#type-format) into an object representing the parsed type. 
##### arguments * type - `String` - the type written in the [type format](#type-format) which to parse ##### returns `Object` - an object in the parsed type format representing the parsed type ##### example ```js parseType('Number'); // [{type: 'Number'}] ``` ### parsedTypeCheck(parsedType, input, options) `parsedTypeCheck` checks a JavaScript value `input` against parsed `type` in the parsed type format (and taking account the optional `options`) and returns whether the `input` matches the `type`. Use this in conjunction with `parseType` if you are going to use a type more than once. ##### arguments * type - `Object` - the type in the parsed type format which to check against * input - `*` - any JavaScript value, which is to be checked against the type * options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types) ##### returns `Boolean` - whether the input matches the type ##### example ```js parsedTypeCheck([{type: 'Number'}], 2); // true var parsedType = parseType('String'); parsedTypeCheck(parsedType, 'str'); // true ``` <a name="type-format" /> ## Type Format ### Syntax White space is ignored. The root node is a __Types__. * __Identifier__ = `[\$\w]+` - a group of any lower or upper case letters, numbers, underscores, or dollar signs - eg. `String` * __Type__ = an `Identifier`, an `Identifier` followed by a `Structure`, just a `Structure`, or a wildcard `*` - eg. `String`, `Object{x: Number}`, `{x: Number}`, `Array{0: String, 1: Boolean, length: Number}`, `*` * __Types__ = optionally a comment (an `Identifier` followed by a `::`), optionally the identifier `Maybe`, one or more `Type`, separated by `|` - eg. `Number`, `String | Date`, `Maybe Number`, `Maybe Boolean | String` * __Structure__ = `Fields`, or a `Tuple`, or an `Array` - eg. `{x: Number}`, `(String, Number)`, `[Date]` * __Fields__ = a `{`, followed one or more `Field` separated by a comma `,` (trailing comma `,` is permitted), optionally an `...` (always preceded by a comma `,`), followed by a `}` - eg. `{x: Number, y: String}`, `{k: Function, ...}` * __Field__ = an `Identifier`, followed by a colon `:`, followed by `Types` - eg. `x: Date | String`, `y: Boolean` * __Tuple__ = a `(`, followed by one or more `Types` separated by a comma `,` (trailing comma `,` is permitted), followed by a `)` - eg `(Date)`, `(Number, Date)` * __Array__ = a `[` followed by exactly one `Types` followed by a `]` - eg. `[Boolean]`, `[Boolean | Null]` ### Guide `type-check` uses `Object.toString` to find out the basic type of a value. Specifically, ```js {}.toString.call(VALUE).slice(8, -1) {}.toString.call(true).slice(8, -1) // 'Boolean' ``` A basic type, eg. `Number`, uses this check. This is much more versatile than using `typeof` - for example, with `document`, `typeof` produces `'object'` which isn't that useful, and our technique produces `'HTMLDocument'`. You may check for multiple types by separating types with a `|`. The checker proceeds from left to right, and passes if the value is any of the types - eg. `String | Boolean` first checks if the value is a string, and then if it is a boolean. If it is none of those, then it returns false. Adding a `Maybe` in front of a list of multiple types is the same as also checking for `Null` and `Undefined` - eg. `Maybe String` is equivalent to `Undefined | Null | String`. 
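For instance (a small sketch, assuming `type-check` is installed), the union and `Maybe` behaviour described above works like this:

```js
var typeCheck = require('type-check').typeCheck;

// Checked left to right: passes if the value matches any of the types.
typeCheck('String | Boolean', 'hi');  // true
typeCheck('String | Boolean', false); // true
typeCheck('String | Boolean', 42);    // false

// `Maybe String` is equivalent to `Undefined | Null | String`.
typeCheck('Maybe String', undefined); // true
typeCheck('Maybe String', null);      // true
typeCheck('Maybe String', 'ok');      // true
```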
You may add a comment to remind you of what the type is for by following an identifier with a `::` before a type (or multiple types). The comment is simply thrown out. The wildcard `*` matches all types. There are three types of structures for checking the contents of a value: 'fields', 'tuple', and 'array'. If used by itself, a 'fields' structure will pass with any type of object as long as it is an instance of `Object` and the properties pass - this allows for duck typing - eg. `{x: Boolean}`. To check if the properties pass, and the value is of a certain type, you can specify the type - eg. `Error{message: String}`. If you want to make a field optional, you can simply use `Maybe` - eg. `{x: Boolean, y: Maybe String}` will still pass if `y` is undefined (or null). If you don't care if the value has properties beyond what you have specified, you can use the 'etc' operator `...` - eg. `{x: Boolean, ...}` will match an object with an `x` property that is a boolean, and with zero or more other properties. For an array, you must specify one or more types (separated by `|`) - it will pass for something of any length as long as each element passes the types provided - eg. `[Number]`, `[Number | String]`. A tuple checks for a fixed number of elements, each of a potentially different type. Each element is separated by a comma - eg. `(String, Number)`. An array and tuple structure check that the value is of type `Array` by default, but if another type is specified, they will check for that instead - eg. `Int32Array[Number]`. You can use the wildcard `*` to search for any type at all. Check out the [type precedence](https://github.com/zaboco/type-precedence) library for type-check. ## Options Options is an object. It is an optional parameter to the `typeCheck` and `parsedTypeCheck` functions. The only current option is `customTypes`. <a name="custom-types" /> ### Custom Types __Example:__ ```js var options = { customTypes: { Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; } } } }; typeCheck('Even', 2, options); // true typeCheck('Even', 3, options); // false ``` `customTypes` allows you to set up custom types for validation. The value of this is an object. The keys of the object are the types you will be matching. Each value of the object will be an object having a `typeOf` property - a string, and `validate` property - a function. The `typeOf` property is the type the value should be (optional - if not set only `validate` will be used), and `validate` is a function which should return true if the value is of that type. `validate` receives one parameter, which is the value that we are checking. ## Technical About `type-check` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It also uses the [prelude.ls](http://preludels.com/) library. 
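To round out the guide above, here are a few structure checks (again a sketch, assuming `type-check` is installed) covering optional fields, extra properties, tuples, arrays of union types, and typed containers:

```js
var typeCheck = require('type-check').typeCheck;

typeCheck('{x: Number, y: Maybe String}', {x: 1});          // true  - y is optional
typeCheck('{x: Number, ...}', {x: 1, extra: 'ok'});         // true  - extra props allowed
typeCheck('(String, Number)', ['id', 7]);                   // true  - fixed-length tuple
typeCheck('[Number | String]', [1, 'two', 3]);              // true  - union element type
typeCheck('Int32Array[Number]', new Int32Array([1, 2, 3])); // true  - typed container
```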
# axios [![npm version](https://img.shields.io/npm/v/axios.svg?style=flat-square)](https://www.npmjs.org/package/axios) [![build status](https://img.shields.io/travis/axios/axios/master.svg?style=flat-square)](https://travis-ci.org/axios/axios) [![code coverage](https://img.shields.io/coveralls/mzabriskie/axios.svg?style=flat-square)](https://coveralls.io/r/mzabriskie/axios) [![install size](https://packagephobia.now.sh/badge?p=axios)](https://packagephobia.now.sh/result?p=axios) [![npm downloads](https://img.shields.io/npm/dm/axios.svg?style=flat-square)](http://npm-stat.com/charts.html?package=axios) [![gitter chat](https://img.shields.io/gitter/room/mzabriskie/axios.svg?style=flat-square)](https://gitter.im/mzabriskie/axios) [![code helpers](https://www.codetriage.com/axios/axios/badges/users.svg)](https://www.codetriage.com/axios/axios) Promise based HTTP client for the browser and node.js ## Features - Make [XMLHttpRequests](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) from the browser - Make [http](http://nodejs.org/api/http.html) requests from node.js - Supports the [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) API - Intercept request and response - Transform request and response data - Cancel requests - Automatic transforms for JSON data - Client side support for protecting against [XSRF](http://en.wikipedia.org/wiki/Cross-site_request_forgery) ## Browser Support ![Chrome](https://raw.github.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.github.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![Safari](https://raw.github.com/alrra/browser-logos/master/src/safari/safari_48x48.png) | ![Opera](https://raw.github.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Edge](https://raw.github.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![IE](https://raw.github.com/alrra/browser-logos/master/src/archive/internet-explorer_9-11/internet-explorer_9-11_48x48.png) | --- | --- | --- | --- | --- | --- | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | 11 ✔ | [![Browser Matrix](https://saucelabs.com/open_sauce/build_matrix/axios.svg)](https://saucelabs.com/u/axios) ## Installing Using npm: ```bash $ npm install axios ``` Using bower: ```bash $ bower install axios ``` Using yarn: ```bash $ yarn add axios ``` Using cdn: ```html <script src="https://unpkg.com/axios/dist/axios.min.js"></script> ``` ## Example ### note: CommonJS usage In order to gain the TypeScript typings (for intellisense / autocomplete) while using CommonJS imports with `require()` use the following approach: ```js const axios = require('axios').default; // axios.<method> will now provide autocomplete and parameter typings ``` Performing a `GET` request ```js const axios = require('axios'); // Make a request for a user with a given ID axios.get('/user?ID=12345') .then(function (response) { // handle success console.log(response); }) .catch(function (error) { // handle error console.log(error); }) .finally(function () { // always executed }); // Optionally the request above could also be done as axios.get('/user', { params: { ID: 12345 } }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }) .finally(function () { // always executed }); // Want to use async/await? Add the `async` keyword to your outer function/method. 
async function getUser() { try { const response = await axios.get('/user?ID=12345'); console.log(response); } catch (error) { console.error(error); } } ``` > **NOTE:** `async/await` is part of ECMAScript 2017 and is not supported in Internet > Explorer and older browsers, so use with caution. Performing a `POST` request ```js axios.post('/user', { firstName: 'Fred', lastName: 'Flintstone' }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }); ``` Performing multiple concurrent requests ```js function getUserAccount() { return axios.get('/user/12345'); } function getUserPermissions() { return axios.get('/user/12345/permissions'); } axios.all([getUserAccount(), getUserPermissions()]) .then(axios.spread(function (acct, perms) { // Both requests are now complete })); ``` ## axios API Requests can be made by passing the relevant config to `axios`. ##### axios(config) ```js // Send a POST request axios({ method: 'post', url: '/user/12345', data: { firstName: 'Fred', lastName: 'Flintstone' } }); ``` ```js // GET request for remote image axios({ method: 'get', url: 'http://bit.ly/2mTM3nY', responseType: 'stream' }) .then(function (response) { response.data.pipe(fs.createWriteStream('ada_lovelace.jpg')) }); ``` ##### axios(url[, config]) ```js // Send a GET request (default method) axios('/user/12345'); ``` ### Request method aliases For convenience aliases have been provided for all supported request methods. ##### axios.request(config) ##### axios.get(url[, config]) ##### axios.delete(url[, config]) ##### axios.head(url[, config]) ##### axios.options(url[, config]) ##### axios.post(url[, data[, config]]) ##### axios.put(url[, data[, config]]) ##### axios.patch(url[, data[, config]]) ###### NOTE When using the alias methods `url`, `method`, and `data` properties don't need to be specified in config. ### Concurrency Helper functions for dealing with concurrent requests. ##### axios.all(iterable) ##### axios.spread(callback) ### Creating an instance You can create a new instance of axios with a custom config. ##### axios.create([config]) ```js const instance = axios.create({ baseURL: 'https://some-domain.com/api/', timeout: 1000, headers: {'X-Custom-Header': 'foobar'} }); ``` ### Instance methods The available instance methods are listed below. The specified config will be merged with the instance config. ##### axios#request(config) ##### axios#get(url[, config]) ##### axios#delete(url[, config]) ##### axios#head(url[, config]) ##### axios#options(url[, config]) ##### axios#post(url[, data[, config]]) ##### axios#put(url[, data[, config]]) ##### axios#patch(url[, data[, config]]) ##### axios#getUri([config]) ## Request Config These are the available config options for making requests. Only the `url` is required. Requests will default to `GET` if `method` is not specified. ```js { // `url` is the server URL that will be used for the request url: '/user', // `method` is the request method to be used when making the request method: 'get', // default // `baseURL` will be prepended to `url` unless `url` is absolute. // It can be convenient to set `baseURL` for an instance of axios to pass relative URLs // to methods of that instance. 
baseURL: 'https://some-domain.com/api/', // `transformRequest` allows changes to the request data before it is sent to the server // This is only applicable for request methods 'PUT', 'POST', 'PATCH' and 'DELETE' // The last function in the array must return a string or an instance of Buffer, ArrayBuffer, // FormData or Stream // You may modify the headers object. transformRequest: [function (data, headers) { // Do whatever you want to transform the data return data; }], // `transformResponse` allows changes to the response data to be made before // it is passed to then/catch transformResponse: [function (data) { // Do whatever you want to transform the data return data; }], // `headers` are custom headers to be sent headers: {'X-Requested-With': 'XMLHttpRequest'}, // `params` are the URL parameters to be sent with the request // Must be a plain object or a URLSearchParams object params: { ID: 12345 }, // `paramsSerializer` is an optional function in charge of serializing `params` // (e.g. https://www.npmjs.com/package/qs, http://api.jquery.com/jquery.param/) paramsSerializer: function (params) { return Qs.stringify(params, {arrayFormat: 'brackets'}) }, // `data` is the data to be sent as the request body // Only applicable for request methods 'PUT', 'POST', and 'PATCH' // When no `transformRequest` is set, must be of one of the following types: // - string, plain object, ArrayBuffer, ArrayBufferView, URLSearchParams // - Browser only: FormData, File, Blob // - Node only: Stream, Buffer data: { firstName: 'Fred' }, // syntax alternative to send data into the body // method post // only the value is sent, not the key data: 'Country=Brasil&City=Belo Horizonte', // `timeout` specifies the number of milliseconds before the request times out. // If the request takes longer than `timeout`, the request will be aborted. timeout: 1000, // default is `0` (no timeout) // `withCredentials` indicates whether or not cross-site Access-Control requests // should be made using credentials withCredentials: false, // default // `adapter` allows custom handling of requests which makes testing easier. // Return a promise and supply a valid response (see lib/adapters/README.md). adapter: function (config) { /* ... */ }, // `auth` indicates that HTTP Basic auth should be used, and supplies credentials. // This will set an `Authorization` header, overwriting any existing // `Authorization` custom headers you have set using `headers`. // Please note that only HTTP Basic auth is configurable through this parameter. // For Bearer tokens and such, use `Authorization` custom headers instead. 
auth: { username: 'janedoe', password: 's00pers3cret' }, // `responseType` indicates the type of data that the server will respond with // options are: 'arraybuffer', 'document', 'json', 'text', 'stream' // browser only: 'blob' responseType: 'json', // default // `responseEncoding` indicates encoding to use for decoding responses // Note: Ignored for `responseType` of 'stream' or client-side requests responseEncoding: 'utf8', // default // `xsrfCookieName` is the name of the cookie to use as a value for xsrf token xsrfCookieName: 'XSRF-TOKEN', // default // `xsrfHeaderName` is the name of the http header that carries the xsrf token value xsrfHeaderName: 'X-XSRF-TOKEN', // default // `onUploadProgress` allows handling of progress events for uploads onUploadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `onDownloadProgress` allows handling of progress events for downloads onDownloadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `maxContentLength` defines the max size of the http response content in bytes allowed maxContentLength: 2000, // `validateStatus` defines whether to resolve or reject the promise for a given // HTTP response status code. If `validateStatus` returns `true` (or is set to `null` // or `undefined`), the promise will be resolved; otherwise, the promise will be // rejected. validateStatus: function (status) { return status >= 200 && status < 300; // default }, // `maxRedirects` defines the maximum number of redirects to follow in node.js. // If set to 0, no redirects will be followed. maxRedirects: 5, // default // `socketPath` defines a UNIX Socket to be used in node.js. // e.g. '/var/run/docker.sock' to send requests to the docker daemon. // Only either `socketPath` or `proxy` can be specified. // If both are specified, `socketPath` is used. socketPath: null, // default // `httpAgent` and `httpsAgent` define a custom agent to be used when performing http // and https requests, respectively, in node.js. This allows options to be added like // `keepAlive` that are not enabled by default. httpAgent: new http.Agent({ keepAlive: true }), httpsAgent: new https.Agent({ keepAlive: true }), // 'proxy' defines the hostname and port of the proxy server. // You can also define your proxy using the conventional `http_proxy` and // `https_proxy` environment variables. If you are using environment variables // for your proxy configuration, you can also define a `no_proxy` environment // variable as a comma-separated list of domains that should not be proxied. // Use `false` to disable proxies, ignoring environment variables. // `auth` indicates that HTTP Basic auth should be used to connect to the proxy, and // supplies credentials. // This will set an `Proxy-Authorization` header, overwriting any existing // `Proxy-Authorization` custom headers you have set using `headers`. proxy: { host: '127.0.0.1', port: 9000, auth: { username: 'mikeymike', password: 'rapunz3l' } }, // `cancelToken` specifies a cancel token that can be used to cancel the request // (see Cancellation section below for details) cancelToken: new CancelToken(function (cancel) { }) } ``` ## Response Schema The response for a request contains the following information. 
```js { // `data` is the response that was provided by the server data: {}, // `status` is the HTTP status code from the server response status: 200, // `statusText` is the HTTP status message from the server response statusText: 'OK', // `headers` the headers that the server responded with // All header names are lower cased headers: {}, // `config` is the config that was provided to `axios` for the request config: {}, // `request` is the request that generated this response // It is the last ClientRequest instance in node.js (in redirects) // and an XMLHttpRequest instance in the browser request: {} } ``` When using `then`, you will receive the response as follows: ```js axios.get('/user/12345') .then(function (response) { console.log(response.data); console.log(response.status); console.log(response.statusText); console.log(response.headers); console.log(response.config); }); ``` When using `catch`, or passing a [rejection callback](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/then) as second parameter of `then`, the response will be available through the `error` object as explained in the [Handling Errors](#handling-errors) section. ## Config Defaults You can specify config defaults that will be applied to every request. ### Global axios defaults ```js axios.defaults.baseURL = 'https://api.example.com'; axios.defaults.headers.common['Authorization'] = AUTH_TOKEN; axios.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded'; ``` ### Custom instance defaults ```js // Set config defaults when creating the instance const instance = axios.create({ baseURL: 'https://api.example.com' }); // Alter defaults after instance has been created instance.defaults.headers.common['Authorization'] = AUTH_TOKEN; ``` ### Config order of precedence Config will be merged with an order of precedence. The order is library defaults found in [lib/defaults.js](https://github.com/axios/axios/blob/master/lib/defaults.js#L28), then `defaults` property of the instance, and finally `config` argument for the request. The latter will take precedence over the former. Here's an example. ```js // Create an instance using the config defaults provided by the library // At this point the timeout config value is `0` as is the default for the library const instance = axios.create(); // Override timeout default for the library // Now all requests using this instance will wait 2.5 seconds before timing out instance.defaults.timeout = 2500; // Override timeout for this request as it's known to take a long time instance.get('/longRequest', { timeout: 5000 }); ``` ## Interceptors You can intercept requests or responses before they are handled by `then` or `catch`. ```js // Add a request interceptor axios.interceptors.request.use(function (config) { // Do something before request is sent return config; }, function (error) { // Do something with request error return Promise.reject(error); }); // Add a response interceptor axios.interceptors.response.use(function (response) { // Any status code that lie within the range of 2xx cause this function to trigger // Do something with response data return response; }, function (error) { // Any status codes that falls outside the range of 2xx cause this function to trigger // Do something with response error return Promise.reject(error); }); ``` If you need to remove an interceptor later you can. 
```js const myInterceptor = axios.interceptors.request.use(function () {/*...*/}); axios.interceptors.request.eject(myInterceptor); ``` You can add interceptors to a custom instance of axios. ```js const instance = axios.create(); instance.interceptors.request.use(function () {/*...*/}); ``` ## Handling Errors ```js axios.get('/user/12345') .catch(function (error) { if (error.response) { // The request was made and the server responded with a status code // that falls out of the range of 2xx console.log(error.response.data); console.log(error.response.status); console.log(error.response.headers); } else if (error.request) { // The request was made but no response was received // `error.request` is an instance of XMLHttpRequest in the browser and an instance of // http.ClientRequest in node.js console.log(error.request); } else { // Something happened in setting up the request that triggered an Error console.log('Error', error.message); } console.log(error.config); }); ``` Using the `validateStatus` config option, you can define HTTP code(s) that should throw an error. ```js axios.get('/user/12345', { validateStatus: function (status) { return status < 500; // Reject only if the status code is greater than or equal to 500 } }) ``` Using `toJSON` you get an object with more information about the HTTP error. ```js axios.get('/user/12345') .catch(function (error) { console.log(error.toJSON()); }); ``` ## Cancellation You can cancel a request using a *cancel token*. > The axios cancel token API is based on the withdrawn [cancelable promises proposal](https://github.com/tc39/proposal-cancelable-promises). You can create a cancel token using the `CancelToken.source` factory as shown below: ```js const CancelToken = axios.CancelToken; const source = CancelToken.source(); axios.get('/user/12345', { cancelToken: source.token }).catch(function (thrown) { if (axios.isCancel(thrown)) { console.log('Request canceled', thrown.message); } else { // handle error } }); axios.post('/user/12345', { name: 'new name' }, { cancelToken: source.token }) // cancel the request (the message parameter is optional) source.cancel('Operation canceled by the user.'); ``` You can also create a cancel token by passing an executor function to the `CancelToken` constructor: ```js const CancelToken = axios.CancelToken; let cancel; axios.get('/user/12345', { cancelToken: new CancelToken(function executor(c) { // An executor function receives a cancel function as a parameter cancel = c; }) }); // cancel the request cancel(); ``` > Note: you can cancel several requests with the same cancel token. ## Using application/x-www-form-urlencoded format By default, axios serializes JavaScript objects to `JSON`. To send data in the `application/x-www-form-urlencoded` format instead, you can use one of the following options. ### Browser In a browser, you can use the [`URLSearchParams`](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams) API as follows: ```js const params = new URLSearchParams(); params.append('param1', 'value1'); params.append('param2', 'value2'); axios.post('/foo', params); ``` > Note that `URLSearchParams` is not supported by all browsers (see [caniuse.com](http://www.caniuse.com/#feat=urlsearchparams)), but there is a [polyfill](https://github.com/WebReflection/url-search-params) available (make sure to polyfill the global environment). 
Alternatively, you can encode data using the [`qs`](https://github.com/ljharb/qs) library: ```js const qs = require('qs'); axios.post('/foo', qs.stringify({ 'bar': 123 })); ``` Or in another way (ES6), ```js import qs from 'qs'; const data = { 'bar': 123 }; const options = { method: 'POST', headers: { 'content-type': 'application/x-www-form-urlencoded' }, data: qs.stringify(data), url, }; axios(options); ``` ### Node.js In node.js, you can use the [`querystring`](https://nodejs.org/api/querystring.html) module as follows: ```js const querystring = require('querystring'); axios.post('http://something.com/', querystring.stringify({ foo: 'bar' })); ``` You can also use the [`qs`](https://github.com/ljharb/qs) library. ###### NOTE The `qs` library is preferable if you need to stringify nested objects, as the `querystring` method has known issues with that use case (https://github.com/nodejs/node-v0.x-archive/issues/1665). ## Semver Until axios reaches a `1.0` release, breaking changes will be released with a new minor version. For example `0.5.1`, and `0.5.4` will have the same API, but `0.6.0` will have breaking changes. ## Promises axios depends on a native ES6 Promise implementation to be [supported](http://caniuse.com/promises). If your environment doesn't support ES6 Promises, you can [polyfill](https://github.com/jakearchibald/es6-promise). ## TypeScript axios includes [TypeScript](http://typescriptlang.org) definitions. ```typescript import axios from 'axios'; axios.get('/user?ID=12345'); ``` ## Resources * [Changelog](https://github.com/axios/axios/blob/master/CHANGELOG.md) * [Upgrade Guide](https://github.com/axios/axios/blob/master/UPGRADE_GUIDE.md) * [Ecosystem](https://github.com/axios/axios/blob/master/ECOSYSTEM.md) * [Contributing Guide](https://github.com/axios/axios/blob/master/CONTRIBUTING.md) * [Code of Conduct](https://github.com/axios/axios/blob/master/CODE_OF_CONDUCT.md) ## Credits axios is heavily inspired by the [$http service](https://docs.angularjs.org/api/ng/service/$http) provided in [Angular](https://angularjs.org/). Ultimately axios is an effort to provide a standalone `$http`-like service for use outside of Angular. ## License [MIT](LICENSE) binaryen.js =========== **binaryen.js** is a port of [Binaryen](https://github.com/WebAssembly/binaryen) to the Web, allowing you to generate [WebAssembly](https://webassembly.org) using a JavaScript API. 
<a href="https://github.com/AssemblyScript/binaryen.js/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/binaryen.js/Build/master?label=build&logo=github" alt="Build status" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen.svg?label=latest&color=007acc&logo=npm" alt="npm version" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen/nightly.svg?label=nightly&color=007acc&logo=npm" alt="npm nightly version" /></a> Usage ----- ``` $> npm install binaryen ``` ```js var binaryen = require("binaryen"); // Create a module with a single function var myModule = new binaryen.Module(); myModule.addFunction("add", binaryen.createType([ binaryen.i32, binaryen.i32 ]), binaryen.i32, [ binaryen.i32 ], myModule.block(null, [ myModule.local.set(2, myModule.i32.add( myModule.local.get(0, binaryen.i32), myModule.local.get(1, binaryen.i32) ) ), myModule.return( myModule.local.get(2, binaryen.i32) ) ]) ); myModule.addFunctionExport("add", "add"); // Optimize the module using default passes and levels myModule.optimize(); // Validate the module if (!myModule.validate()) throw new Error("validation error"); // Generate text format and binary var textData = myModule.emitText(); var wasmData = myModule.emitBinary(); // Example usage with the WebAssembly API var compiled = new WebAssembly.Module(wasmData); var instance = new WebAssembly.Instance(compiled, {}); console.log(instance.exports.add(41, 1)); ``` The buildbot also publishes nightly versions once a day if there have been changes. The latest nightly can be installed through ``` $> npm install binaryen@nightly ``` or you can use one of the [previous versions](https://github.com/AssemblyScript/binaryen.js/tags) instead if necessary. ### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/AssemblyScript/binaryen.js@VERSION/index.js` * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/binaryen@VERSION/index.js` * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/binaryen@VERSION/index.js` Replace `VERSION` with a [specific version](https://github.com/AssemblyScript/binaryen.js/releases) or omit it (not recommended in production) to use master/latest. API --- **Please note** that the Binaryen API is evolving fast and that definitions and documentation provided by the package tend to get out of sync despite our best efforts. It's a bot after all. If you rely on binaryen.js and spot an issue, please consider sending a PR our way by updating [index.d.ts](./index.d.ts) and [README.md](./README.md) to reflect the [current API](https://github.com/WebAssembly/binaryen/blob/master/src/js/binaryen.js-post.js). 
<!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ### Contents - [Types](#types) - [Module construction](#module-construction) - [Module manipulation](#module-manipulation) - [Module validation](#module-validation) - [Module optimization](#module-optimization) - [Module creation](#module-creation) - [Expression construction](#expression-construction) - [Control flow](#control-flow) - [Variable accesses](#variable-accesses) - [Integer operations](#integer-operations) - [Floating point operations](#floating-point-operations) - [Datatype conversions](#datatype-conversions) - [Function calls](#function-calls) - [Linear memory accesses](#linear-memory-accesses) - [Host operations](#host-operations) - [Vector operations 🦄](#vector-operations-) - [Atomic memory accesses 🦄](#atomic-memory-accesses-) - [Atomic read-modify-write operations 🦄](#atomic-read-modify-write-operations-) - [Atomic wait and notify operations 🦄](#atomic-wait-and-notify-operations-) - [Sign extension operations 🦄](#sign-extension-operations-) - [Multi-value operations 🦄](#multi-value-operations-) - [Exception handling operations 🦄](#exception-handling-operations-) - [Reference types operations 🦄](#reference-types-operations-) - [Expression manipulation](#expression-manipulation) - [Relooper](#relooper) - [Source maps](#source-maps) - [Debugging](#debugging) <!-- END doctoc generated TOC please keep comment here to allow auto update --> [Future features](http://webassembly.org/docs/future-features/) 🦄 might not be supported by all runtimes. ### Types * **none**: `Type`<br /> The none type, e.g., `void`. * **i32**: `Type`<br /> 32-bit integer type. * **i64**: `Type`<br /> 64-bit integer type. * **f32**: `Type`<br /> 32-bit float type. * **f64**: `Type`<br /> 64-bit float (double) type. * **v128**: `Type`<br /> 128-bit vector type. 🦄 * **funcref**: `Type`<br /> A function reference. 🦄 * **anyref**: `Type`<br /> Any host reference. 🦄 * **nullref**: `Type`<br /> A null reference. 🦄 * **exnref**: `Type`<br /> An exception reference. 🦄 * **unreachable**: `Type`<br /> Special type indicating unreachable code when obtaining information about an expression. * **auto**: `Type`<br /> Special type used in **Module#block** exclusively. Lets the API figure out a block's result type automatically. * **createType**(types: `Type[]`): `Type`<br /> Creates a multi-value type from an array of types. * **expandType**(type: `Type`): `Type[]`<br /> Expands a multi-value type to an array of types. ### Module construction * new **Module**()<br /> Constructs a new module. * **parseText**(text: `string`): `Module`<br /> Creates a module from Binaryen's s-expression text format (not official stack-style text format). * **readBinary**(data: `Uint8Array`): `Module`<br /> Creates a module from binary data. ### Module manipulation * Module#**addFunction**(name: `string`, params: `Type`, results: `Type`, vars: `Type[]`, body: `ExpressionRef`): `FunctionRef`<br /> Adds a function. `vars` indicate additional locals, in the given order. * Module#**getFunction**(name: `string`): `FunctionRef`<br /> Gets a function, by name, * Module#**removeFunction**(name: `string`): `void`<br /> Removes a function, by name. * Module#**getNumFunctions**(): `number`<br /> Gets the number of functions within the module. * Module#**getFunctionByIndex**(index: `number`): `FunctionRef`<br /> Gets the function at the specified index. 
* Module#**addFunctionImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, params: `Type`, results: `Type`): `void`<br /> Adds a function import. * Module#**addTableImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a table import. There's just one table for now, using name `"0"`. * Module#**addMemoryImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a memory import. There's just one memory for now, using name `"0"`. * Module#**addGlobalImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, globalType: `Type`): `void`<br /> Adds a global variable import. Imported globals must be immutable. * Module#**addFunctionExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a function export. * Module#**addTableExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a table export. There's just one table for now, using name `"0"`. * Module#**addMemoryExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a memory export. There's just one memory for now, using name `"0"`. * Module#**addGlobalExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a global variable export. Exported globals must be immutable. * Module#**getNumExports**(): `number`<br /> Gets the number of exports witin the module. * Module#**getExportByIndex**(index: `number`): `ExportRef`<br /> Gets the export at the specified index. * Module#**removeExport**(externalName: `string`): `void`<br /> Removes an export, by external name. * Module#**addGlobal**(name: `string`, type: `Type`, mutable: `number`, value: `ExpressionRef`): `GlobalRef`<br /> Adds a global instance variable. * Module#**getGlobal**(name: `string`): `GlobalRef`<br /> Gets a global, by name, * Module#**removeGlobal**(name: `string`): `void`<br /> Removes a global, by name. * Module#**setFunctionTable**(initial: `number`, maximum: `number`, funcs: `string[]`, offset?: `ExpressionRef`): `void`<br /> Sets the contents of the function table. There's just one table for now, using name `"0"`. * Module#**getFunctionTable**(): `{ imported: boolean, segments: TableElement[] }`<br /> Gets the contents of the function table. * TableElement#**offset**: `ExpressionRef` * TableElement#**names**: `string[]` * Module#**setMemory**(initial: `number`, maximum: `number`, exportName: `string | null`, segments: `MemorySegment[]`, flags?: `number[]`, shared?: `boolean`): `void`<br /> Sets the memory. There's just one memory for now, using name `"0"`. Providing `exportName` also creates a memory export. * MemorySegment#**offset**: `ExpressionRef` * MemorySegment#**data**: `Uint8Array` * MemorySegment#**passive**: `boolean` * Module#**getNumMemorySegments**(): `number`<br /> Gets the number of memory segments within the module. * Module#**getMemorySegmentInfoByIndex**(index: `number`): `MemorySegmentInfo`<br /> Gets information about the memory segment at the specified index. * MemorySegmentInfo#**offset**: `number` * MemorySegmentInfo#**data**: `Uint8Array` * MemorySegmentInfo#**passive**: `boolean` * Module#**setStart**(start: `FunctionRef`): `void`<br /> Sets the start function. * Module#**getFeatures**(): `Features`<br /> Gets the WebAssembly features enabled for this module. Note that the return value may be a bitmask indicating multiple features. 
Possible feature flags are:

  * Features.**MVP**: `Features`
  * Features.**Atomics**: `Features`
  * Features.**BulkMemory**: `Features`
  * Features.**MutableGlobals**: `Features`
  * Features.**NontrappingFPToInt**: `Features`
  * Features.**SignExt**: `Features`
  * Features.**SIMD128**: `Features`
  * Features.**ExceptionHandling**: `Features`
  * Features.**TailCall**: `Features`
  * Features.**ReferenceTypes**: `Features`
  * Features.**Multivalue**: `Features`
  * Features.**All**: `Features`

* Module#**setFeatures**(features: `Features`): `void`<br />
  Sets the WebAssembly features enabled for this module.
* Module#**addCustomSection**(name: `string`, contents: `Uint8Array`): `void`<br />
  Adds a custom section to the binary.
* Module#**autoDrop**(): `void`<br />
  Enables automatic insertion of `drop` operations where needed. Lets you not worry about dropping when creating your code.
* **getFunctionInfo**(ftype: `FunctionRef`): `FunctionInfo`<br />
  Obtains information about a function.

  * FunctionInfo#**name**: `string`
  * FunctionInfo#**module**: `string | null` (if imported)
  * FunctionInfo#**base**: `string | null` (if imported)
  * FunctionInfo#**params**: `Type`
  * FunctionInfo#**results**: `Type`
  * FunctionInfo#**vars**: `Type`
  * FunctionInfo#**body**: `ExpressionRef`

* **getGlobalInfo**(global: `GlobalRef`): `GlobalInfo`<br />
  Obtains information about a global.

  * GlobalInfo#**name**: `string`
  * GlobalInfo#**module**: `string | null` (if imported)
  * GlobalInfo#**base**: `string | null` (if imported)
  * GlobalInfo#**type**: `Type`
  * GlobalInfo#**mutable**: `boolean`
  * GlobalInfo#**init**: `ExpressionRef`

* **getExportInfo**(export_: `ExportRef`): `ExportInfo`<br />
  Obtains information about an export.

  * ExportInfo#**kind**: `ExternalKind`
  * ExportInfo#**name**: `string`
  * ExportInfo#**value**: `string`

  Possible `ExternalKind` values are:

  * **ExternalFunction**: `ExternalKind`
  * **ExternalTable**: `ExternalKind`
  * **ExternalMemory**: `ExternalKind`
  * **ExternalGlobal**: `ExternalKind`
  * **ExternalEvent**: `ExternalKind`

* **getEventInfo**(event: `EventRef`): `EventInfo`<br />
  Obtains information about an event.

  * EventInfo#**name**: `string`
  * EventInfo#**module**: `string | null` (if imported)
  * EventInfo#**base**: `string | null` (if imported)
  * EventInfo#**attribute**: `number`
  * EventInfo#**params**: `Type`
  * EventInfo#**results**: `Type`

* **getSideEffects**(expr: `ExpressionRef`, features: `FeatureFlags`): `SideEffects`<br />
  Gets the side effects of the specified expression.

  * SideEffects.**None**: `SideEffects`
  * SideEffects.**Branches**: `SideEffects`
  * SideEffects.**Calls**: `SideEffects`
  * SideEffects.**ReadsLocal**: `SideEffects`
  * SideEffects.**WritesLocal**: `SideEffects`
  * SideEffects.**ReadsGlobal**: `SideEffects`
  * SideEffects.**WritesGlobal**: `SideEffects`
  * SideEffects.**ReadsMemory**: `SideEffects`
  * SideEffects.**WritesMemory**: `SideEffects`
  * SideEffects.**ImplicitTrap**: `SideEffects`
  * SideEffects.**IsAtomic**: `SideEffects`
  * SideEffects.**Throws**: `SideEffects`
  * SideEffects.**Any**: `SideEffects`

### Module validation

* Module#**validate**(): `boolean`<br />
  Validates the module. Returns `true` if valid, otherwise prints validation errors and returns `false`.

### Module optimization

* Module#**optimize**(): `void`<br />
  Optimizes the module using the default optimization passes.
* Module#**optimizeFunction**(func: `FunctionRef | string`): `void`<br />
  Optimizes a single function using the default optimization passes.
* Module#**runPasses**(passes: `string[]`): `void`<br /> Runs the specified passes on the module. * Module#**runPassesOnFunction**(func: `FunctionRef | string`, passes: `string[]`): `void`<br /> Runs the specified passes on a single function. * **getOptimizeLevel**(): `number`<br /> Gets the currently set optimize level. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **setOptimizeLevel**(level: `number`): `void`<br /> Sets the optimization level to use. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **getShrinkLevel**(): `number`<br /> Gets the currently set shrink level. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **setShrinkLevel**(level: `number`): `void`<br /> Sets the shrink level to use. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **getDebugInfo**(): `boolean`<br /> Gets whether generating debug information is currently enabled or not. * **setDebugInfo**(on: `boolean`): `void`<br /> Enables or disables debug information in emitted binaries. * **getLowMemoryUnused**(): `boolean`<br /> Gets whether the low 1K of memory can be considered unused when optimizing. * **setLowMemoryUnused**(on: `boolean`): `void`<br /> Enables or disables whether the low 1K of memory can be considered unused when optimizing. * **getPassArgument**(key: `string`): `string | null`<br /> Gets the value of the specified arbitrary pass argument. * **setPassArgument**(key: `string`, value: `string | null`): `void`<br /> Sets the value of the specified arbitrary pass argument. Removes the respective argument if `value` is `null`. * **clearPassArguments**(): `void`<br /> Clears all arbitrary pass arguments. * **getAlwaysInlineMaxSize**(): `number`<br /> Gets the function size at which we always inline. * **setAlwaysInlineMaxSize**(size: `number`): `void`<br /> Sets the function size at which we always inline. * **getFlexibleInlineMaxSize**(): `number`<br /> Gets the function size which we inline when functions are lightweight. * **setFlexibleInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when functions are lightweight. * **getOneCallerInlineMaxSize**(): `number`<br /> Gets the function size which we inline when there is only one caller. * **setOneCallerInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when there is only one caller. ### Module creation * Module#**emitBinary**(): `Uint8Array`<br /> Returns the module in binary format. * Module#**emitBinary**(sourceMapUrl: `string | null`): `BinaryWithSourceMap`<br /> Returns the module in binary format with its source map. If `sourceMapUrl` is `null`, source map generation is skipped. * BinaryWithSourceMap#**binary**: `Uint8Array` * BinaryWithSourceMap#**sourceMap**: `string | null` * Module#**emitText**(): `string`<br /> Returns the module in Binaryen's s-expression text format (not official stack-style text format). * Module#**emitAsmjs**(): `string`<br /> Returns the [asm.js](http://asmjs.org/) representation of the module. * Module#**dispose**(): `void`<br /> Releases the resources held by the module once it isn't needed anymore. ### Expression construction #### [Control flow](http://webassembly.org/docs/semantics/#control-constructs-and-instructions) * Module#**block**(label: `string | null`, children: `ExpressionRef[]`, resultType?: `Type`): `ExpressionRef`<br /> Creates a block. `resultType` defaults to `none`. 
* Module#**if**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse?: `ExpressionRef`): `ExpressionRef`<br /> Creates an if or if/else combination. * Module#**loop**(label: `string | null`, body: `ExpressionRef`): `ExpressionRef`<br /> Creates a loop. * Module#**br**(label: `string`, condition?: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a branch (br) to a label. * Module#**switch**(labels: `string[]`, defaultLabel: `string`, condition: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a switch (br_table). * Module#**nop**(): `ExpressionRef`<br /> Creates a no-operation (nop) instruction. * Module#**return**(value?: `ExpressionRef`): `ExpressionRef` Creates a return. * Module#**unreachable**(): `ExpressionRef`<br /> Creates an [unreachable](http://webassembly.org/docs/semantics/#unreachable) instruction that will always trap. * Module#**drop**(value: `ExpressionRef`): `ExpressionRef`<br /> Creates a [drop](http://webassembly.org/docs/semantics/#type-parametric-operators) of a value. * Module#**select**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse: `ExpressionRef`, type?: `Type`): `ExpressionRef`<br /> Creates a [select](http://webassembly.org/docs/semantics/#type-parametric-operators) of one of two values. #### [Variable accesses](http://webassembly.org/docs/semantics/#local-variables) * Module#**local.get**(index: `number`, type: `Type`): `ExpressionRef`<br /> Creates a local.get for the local at the specified index. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**local.set**(index: `number`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a local.set for the local at the specified index. * Module#**local.tee**(index: `number`, value: `ExpressionRef`, type: `Type`): `ExpressionRef`<br /> Creates a local.tee for the local at the specified index. A tee differs from a set in that the value remains on the stack. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**global.get**(name: `string`, type: `Type`): `ExpressionRef`<br /> Creates a global.get for the global with the specified name. Note that we must specify the type here as we may not have created the global being accessed yet. * Module#**global.set**(name: `string`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a global.set for the global with the specified name. 
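To make the control-flow and variable-access builders above concrete, here is a minimal sketch of a small factorial function built from `block`, `loop`, a conditional `br`, and `local.get`/`local.set`. The function name `fact` and the loop label `continue` are arbitrary choices for this illustration, and the do-while shape assumes an argument of at least 1.

```js
var binaryen = require("binaryen");

var m = new binaryen.Module();

// Local 0 is the i32 parameter `n`; local 1 is an extra i32 accumulator (declared via `vars`).
var body = m.block(null, [
  // acc = 1
  m.local.set(1, m.i32.const(1)),
  // do { acc *= n; n -= 1; } while (n != 0)   -- assumes n >= 1
  m.loop("continue", m.block(null, [
    m.local.set(1, m.i32.mul(
      m.local.get(1, binaryen.i32),
      m.local.get(0, binaryen.i32)
    )),
    m.local.set(0, m.i32.sub(
      m.local.get(0, binaryen.i32),
      m.i32.const(1)
    )),
    // conditional branch back to the top of the loop
    m.br("continue", m.i32.ne(m.local.get(0, binaryen.i32), m.i32.const(0)))
  ])),
  // the block's value is the accumulator
  m.local.get(1, binaryen.i32)
], binaryen.i32);

m.addFunction("fact", binaryen.i32, binaryen.i32, [ binaryen.i32 ], body);
m.addFunctionExport("fact", "fact");

if (!m.validate()) throw new Error("validation error");
console.log(m.emitText());
```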
#### [Integer operations](http://webassembly.org/docs/semantics/#32-bit-integer-operators) * Module#i32.**const**(value: `number`): `ExpressionRef` * Module#i32.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i64.**const**(low: `number`, high: `number`): `ExpressionRef` * Module#i64.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` 
* Module#i64.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Floating point operations](http://webassembly.org/docs/semantics/#floating-point-operators) * Module#f32.**const**(value: `number`): `ExpressionRef` * Module#f32.**const_bits**(value: `number`): `ExpressionRef` * Module#f32.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#f64.**const**(value: `number`): `ExpressionRef` * Module#f64.**const_bits**(value: `number`): `ExpressionRef` * Module#f64.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**ceil**(value: `ExpressionRef`): `ExpressionRef` * 
Module#f64.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Datatype conversions](http://webassembly.org/docs/semantics/#datatype-conversions-truncations-reinterpretations-promotions-and-demotions) * Module#i32.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**wrap**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**demote**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**promote**(value: `ExpressionRef`): `ExpressionRef` #### [Function calls](http://webassembly.org/docs/semantics/#calls) * Module#**call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef` Creates a call to a function. 
Note that we must specify the return type here as we may not have created the function being called yet. * Module#**return_call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef`<br /> Like **call**, but creates a tail-call. 🦄 * Module#**call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Similar to **call**, but calls indirectly, i.e., via a function pointer, so an expression replaces the name as the called value. * Module#**return_call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Like **call_indirect**, but creates a tail-call. 🦄 #### [Linear memory accesses](http://webassembly.org/docs/semantics/#linear-memory-accesses) * Module#i32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> > * Module#i64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store32**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Host operations](http://webassembly.org/docs/semantics/#resizing) * Module#**memory.size**(): `ExpressionRef` * Module#**memory.grow**(value: `number`): `ExpressionRef` #### [Vector 
operations](https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md) 🦄 * Module#v128.**const**(bytes: `Uint8Array`): `ExpressionRef` * Module#v128.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#v128.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#v128.**not**(value: `ExpressionRef`): `ExpressionRef` * Module#v128.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**andnot**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**bitselect**(left: `ExpressionRef`, right: `ExpressionRef`, cond: `ExpressionRef`): `ExpressionRef` > * Module#i8x16.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_u**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i16x8.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_u**(value: 
`ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**dot_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): 
`ExpressionRef` * Module#i64x2.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#f32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**lt**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#v8x16.**shuffle**(left: `ExpressionRef`, right: `ExpressionRef`, mask: `Uint8Array`): `ExpressionRef` * Module#v8x16.**swizzle**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v8x16.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v16x8.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v32x4.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v64x2.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` #### [Atomic memory accesses](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#atomic-memory-accesses) 🦄 * Module#i32.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load32_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store32**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` 
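As a rough sketch of how the atomic accessors above fit together, one could enable the Atomics feature, declare a shared memory, and emit an atomic store/load pair. Argument order follows the signatures listed in this section; details such as the exact `setMemory` parameters may differ between binaryen versions, so treat this as illustrative rather than definitive.

```js
var binaryen = require("binaryen");

var m = new binaryen.Module();
// Atomic instructions require the Atomics feature and a shared memory
m.setFeatures(binaryen.Features.Atomics);
// initial = maximum = 1 page, no export name, no segments, shared = true
m.setMemory(1, 1, null, [], null, true);

var body = m.block(null, [
  // atomically store 42 at address 0...
  m.i32.atomic.store(0, m.i32.const(0), m.i32.const(42)),
  // ...then read it back as the function result
  m.i32.atomic.load(0, m.i32.const(0))
], binaryen.i32);

m.addFunction("roundtrip", binaryen.none, binaryen.i32, [], body);
m.addFunctionExport("roundtrip", "roundtrip");

if (!m.validate()) throw new Error("validation error");
```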
#### [Atomic read-modify-write operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#read-modify-write) 🦄 * Module#i32.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.add**(offset: 
`number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` #### [Atomic wait and notify operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#wait-and-notify-operators) 🦄 * Module#i32.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#**atomic.notify**(ptr: `ExpressionRef`, notifyCount: `ExpressionRef`): `ExpressionRef` * Module#**atomic.fence**(): `ExpressionRef` #### [Sign extension operations](https://github.com/WebAssembly/sign-extension-ops/blob/master/proposals/sign-extension-ops/Overview.md) 🦄 * Module#i32.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend32_s**(value: `ExpressionRef`): `ExpressionRef` #### [Multi-value 
operations](https://github.com/WebAssembly/multi-value/blob/master/proposals/multi-value/Overview.md) 🦄 Note that these are pseudo instructions enabling Binaryen to reason about multiple values on the stack.

* Module#**push**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32.**pop**(): `ExpressionRef`
* Module#i64.**pop**(): `ExpressionRef`
* Module#f32.**pop**(): `ExpressionRef`
* Module#f64.**pop**(): `ExpressionRef`
* Module#v128.**pop**(): `ExpressionRef`
* Module#funcref.**pop**(): `ExpressionRef`
* Module#anyref.**pop**(): `ExpressionRef`
* Module#nullref.**pop**(): `ExpressionRef`
* Module#exnref.**pop**(): `ExpressionRef`
* Module#tuple.**make**(elements: `ExpressionRef[]`): `ExpressionRef`
* Module#tuple.**extract**(tuple: `ExpressionRef`, index: `number`): `ExpressionRef`

#### [Exception handling operations](https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md) 🦄

* Module#**try**(body: `ExpressionRef`, catchBody: `ExpressionRef`): `ExpressionRef`
* Module#**throw**(event: `string`, operands: `ExpressionRef[]`): `ExpressionRef`
* Module#**rethrow**(exnref: `ExpressionRef`): `ExpressionRef`
* Module#**br_on_exn**(label: `string`, event: `string`, exnref: `ExpressionRef`): `ExpressionRef`

>

* Module#**addEvent**(name: `string`, attribute: `number`, params: `Type`, results: `Type`): `Event`
* Module#**getEvent**(name: `string`): `Event`
* Module#**removeEvent**(name: `string`): `void`
* Module#**addEventImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, attribute: `number`, params: `Type`, results: `Type`): `void`
* Module#**addEventExport**(internalName: `string`, externalName: `string`): `ExportRef`

#### [Reference types operations](https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md) 🦄

* Module#ref.**null**(): `ExpressionRef`
* Module#ref.**is_null**(value: `ExpressionRef`): `ExpressionRef`
* Module#ref.**func**(name: `string`): `ExpressionRef`

### Expression manipulation

* **getExpressionId**(expr: `ExpressionRef`): `ExpressionId`<br />
  Gets the id (kind) of the specified expression.
Possible values are: * **InvalidId**: `ExpressionId` * **BlockId**: `ExpressionId` * **IfId**: `ExpressionId` * **LoopId**: `ExpressionId` * **BreakId**: `ExpressionId` * **SwitchId**: `ExpressionId` * **CallId**: `ExpressionId` * **CallIndirectId**: `ExpressionId` * **LocalGetId**: `ExpressionId` * **LocalSetId**: `ExpressionId` * **GlobalGetId**: `ExpressionId` * **GlobalSetId**: `ExpressionId` * **LoadId**: `ExpressionId` * **StoreId**: `ExpressionId` * **ConstId**: `ExpressionId` * **UnaryId**: `ExpressionId` * **BinaryId**: `ExpressionId` * **SelectId**: `ExpressionId` * **DropId**: `ExpressionId` * **ReturnId**: `ExpressionId` * **HostId**: `ExpressionId` * **NopId**: `ExpressionId` * **UnreachableId**: `ExpressionId` * **AtomicCmpxchgId**: `ExpressionId` * **AtomicRMWId**: `ExpressionId` * **AtomicWaitId**: `ExpressionId` * **AtomicNotifyId**: `ExpressionId` * **AtomicFenceId**: `ExpressionId` * **SIMDExtractId**: `ExpressionId` * **SIMDReplaceId**: `ExpressionId` * **SIMDShuffleId**: `ExpressionId` * **SIMDTernaryId**: `ExpressionId` * **SIMDShiftId**: `ExpressionId` * **SIMDLoadId**: `ExpressionId` * **MemoryInitId**: `ExpressionId` * **DataDropId**: `ExpressionId` * **MemoryCopyId**: `ExpressionId` * **MemoryFillId**: `ExpressionId` * **RefNullId**: `ExpressionId` * **RefIsNullId**: `ExpressionId` * **RefFuncId**: `ExpressionId` * **TryId**: `ExpressionId` * **ThrowId**: `ExpressionId` * **RethrowId**: `ExpressionId` * **BrOnExnId**: `ExpressionId` * **PushId**: `ExpressionId` * **PopId**: `ExpressionId` * **getExpressionType**(expr: `ExpressionRef`): `Type`<br /> Gets the type of the specified expression. * **getExpressionInfo**(expr: `ExpressionRef`): `ExpressionInfo`<br /> Obtains information about an expression, always including: * Info#**id**: `ExpressionId` * Info#**type**: `Type` Additional properties depend on the expression's `id` and are usually equivalent to the respective parameters when creating such an expression: * BlockInfo#**name**: `string` * BlockInfo#**children**: `ExpressionRef[]` > * IfInfo#**condition**: `ExpressionRef` * IfInfo#**ifTrue**: `ExpressionRef` * IfInfo#**ifFalse**: `ExpressionRef | null` > * LoopInfo#**name**: `string` * LoopInfo#**body**: `ExpressionRef` > * BreakInfo#**name**: `string` * BreakInfo#**condition**: `ExpressionRef | null` * BreakInfo#**value**: `ExpressionRef | null` > * SwitchInfo#**names**: `string[]` * SwitchInfo#**defaultName**: `string | null` * SwitchInfo#**condition**: `ExpressionRef` * SwitchInfo#**value**: `ExpressionRef | null` > * CallInfo#**target**: `string` * CallInfo#**operands**: `ExpressionRef[]` > * CallImportInfo#**target**: `string` * CallImportInfo#**operands**: `ExpressionRef[]` > * CallIndirectInfo#**target**: `ExpressionRef` * CallIndirectInfo#**operands**: `ExpressionRef[]` > * LocalGetInfo#**index**: `number` > * LocalSetInfo#**isTee**: `boolean` * LocalSetInfo#**index**: `number` * LocalSetInfo#**value**: `ExpressionRef` > * GlobalGetInfo#**name**: `string` > * GlobalSetInfo#**name**: `string` * GlobalSetInfo#**value**: `ExpressionRef` > * LoadInfo#**isAtomic**: `boolean` * LoadInfo#**isSigned**: `boolean` * LoadInfo#**offset**: `number` * LoadInfo#**bytes**: `number` * LoadInfo#**align**: `number` * LoadInfo#**ptr**: `ExpressionRef` > * StoreInfo#**isAtomic**: `boolean` * StoreInfo#**offset**: `number` * StoreInfo#**bytes**: `number` * StoreInfo#**align**: `number` * StoreInfo#**ptr**: `ExpressionRef` * StoreInfo#**value**: `ExpressionRef` > * ConstInfo#**value**: `number | { low: number, high: number 
}` > * UnaryInfo#**op**: `number` * UnaryInfo#**value**: `ExpressionRef` > * BinaryInfo#**op**: `number` * BinaryInfo#**left**: `ExpressionRef` * BinaryInfo#**right**: `ExpressionRef` > * SelectInfo#**ifTrue**: `ExpressionRef` * SelectInfo#**ifFalse**: `ExpressionRef` * SelectInfo#**condition**: `ExpressionRef` > * DropInfo#**value**: `ExpressionRef` > * ReturnInfo#**value**: `ExpressionRef | null` > * NopInfo > * UnreachableInfo > * HostInfo#**op**: `number` * HostInfo#**nameOperand**: `string | null` * HostInfo#**operands**: `ExpressionRef[]` > * AtomicRMWInfo#**op**: `number` * AtomicRMWInfo#**bytes**: `number` * AtomicRMWInfo#**offset**: `number` * AtomicRMWInfo#**ptr**: `ExpressionRef` * AtomicRMWInfo#**value**: `ExpressionRef` > * AtomicCmpxchgInfo#**bytes**: `number` * AtomicCmpxchgInfo#**offset**: `number` * AtomicCmpxchgInfo#**ptr**: `ExpressionRef` * AtomicCmpxchgInfo#**expected**: `ExpressionRef` * AtomicCmpxchgInfo#**replacement**: `ExpressionRef` > * AtomicWaitInfo#**ptr**: `ExpressionRef` * AtomicWaitInfo#**expected**: `ExpressionRef` * AtomicWaitInfo#**timeout**: `ExpressionRef` * AtomicWaitInfo#**expectedType**: `Type` > * AtomicNotifyInfo#**ptr**: `ExpressionRef` * AtomicNotifyInfo#**notifyCount**: `ExpressionRef` > * AtomicFenceInfo > * SIMDExtractInfo#**op**: `Op` * SIMDExtractInfo#**vec**: `ExpressionRef` * SIMDExtractInfo#**index**: `ExpressionRef` > * SIMDReplaceInfo#**op**: `Op` * SIMDReplaceInfo#**vec**: `ExpressionRef` * SIMDReplaceInfo#**index**: `ExpressionRef` * SIMDReplaceInfo#**value**: `ExpressionRef` > * SIMDShuffleInfo#**left**: `ExpressionRef` * SIMDShuffleInfo#**right**: `ExpressionRef` * SIMDShuffleInfo#**mask**: `Uint8Array` > * SIMDTernaryInfo#**op**: `Op` * SIMDTernaryInfo#**a**: `ExpressionRef` * SIMDTernaryInfo#**b**: `ExpressionRef` * SIMDTernaryInfo#**c**: `ExpressionRef` > * SIMDShiftInfo#**op**: `Op` * SIMDShiftInfo#**vec**: `ExpressionRef` * SIMDShiftInfo#**shift**: `ExpressionRef` > * SIMDLoadInfo#**op**: `Op` * SIMDLoadInfo#**offset**: `number` * SIMDLoadInfo#**align**: `number` * SIMDLoadInfo#**ptr**: `ExpressionRef` > * MemoryInitInfo#**segment**: `number` * MemoryInitInfo#**dest**: `ExpressionRef` * MemoryInitInfo#**offset**: `ExpressionRef` * MemoryInitInfo#**size**: `ExpressionRef` > * MemoryDropInfo#**segment**: `number` > * MemoryCopyInfo#**dest**: `ExpressionRef` * MemoryCopyInfo#**source**: `ExpressionRef` * MemoryCopyInfo#**size**: `ExpressionRef` > * MemoryFillInfo#**dest**: `ExpressionRef` * MemoryFillInfo#**value**: `ExpressionRef` * MemoryFillInfo#**size**: `ExpressionRef` > * TryInfo#**body**: `ExpressionRef` * TryInfo#**catchBody**: `ExpressionRef` > * RefNullInfo > * RefIsNullInfo#**value**: `ExpressionRef` > * RefFuncInfo#**func**: `string` > * ThrowInfo#**event**: `string` * ThrowInfo#**operands**: `ExpressionRef[]` > * RethrowInfo#**exnref**: `ExpressionRef` > * BrOnExnInfo#**name**: `string` * BrOnExnInfo#**event**: `string` * BrOnExnInfo#**exnref**: `ExpressionRef` > * PopInfo > * PushInfo#**value**: `ExpressionRef` * **emitText**(expression: `ExpressionRef`): `string`<br /> Emits the expression in Binaryen's s-expression text format (not official stack-style text format). * **copyExpression**(expression: `ExpressionRef`): `ExpressionRef`<br /> Creates a deep copy of an expression. ### Relooper * new **Relooper**()<br /> Constructs a relooper instance. This lets you provide an arbitrary CFG, and the relooper will structure it for WebAssembly. 
* Relooper#**addBlock**(code: `ExpressionRef`): `RelooperBlockRef`<br />
  Adds a new block to the CFG, containing the provided code as its body.
* Relooper#**addBranch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, condition: `ExpressionRef`, code: `ExpressionRef`): `void`<br />
  Adds a branch from a block to another block, with a condition (or nothing, if this is the default branch to take from the origin - each block must have one such branch), and optional code to execute on the branch (useful for phis).
* Relooper#**addBlockWithSwitch**(code: `ExpressionRef`, condition: `ExpressionRef`): `RelooperBlockRef`<br />
  Adds a new block, which ends with a switch/br_table, with provided code and condition (that determines where we go in the switch).
* Relooper#**addBranchForSwitch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, indexes: `number[]`, code: `ExpressionRef`): `void`<br />
  Adds a branch from a block ending in a switch, to another block, using an array of indexes that determine where to go, and optional code to execute on the branch.
* Relooper#**renderAndDispose**(entry: `RelooperBlockRef`, labelHelper: `number`, module: `Module`): `ExpressionRef`<br />
  Renders and cleans up the Relooper instance. Call this after you have created all the blocks and branches, giving it the entry block (where control flow begins), a label helper variable (an index of a local we can use, necessary for irreducible control flow), and the module. This returns an expression - normal WebAssembly code - that you can use normally anywhere.

### Source maps

* Module#**addDebugInfoFileName**(filename: `string`): `number`<br />
  Adds a debug info file name to the module and returns its index.
* Module#**getDebugInfoFileName**(index: `number`): `string | null`<br />
  Gets the name of the debug info file at the specified index.
* Module#**setDebugLocation**(func: `FunctionRef`, expr: `ExpressionRef`, fileIndex: `number`, lineNumber: `number`, columnNumber: `number`): `void`<br />
  Sets the debug location of the specified `ExpressionRef` within the specified `FunctionRef`.

### Debugging

* Module#**interpret**(): `void`<br />
  Runs the module in the interpreter, calling the start function.

# once

Only call a function once.

## usage

```javascript
var once = require('once')

function load (file, cb) {
  cb = once(cb)
  loader.load('file')
  loader.once('load', cb)
  loader.once('error', cb)
}
```

Or add to the Function.prototype in a responsible way:

```javascript
// only has to be done once
require('once').proto()

function load (file, cb) {
  cb = cb.once()
  loader.load('file')
  loader.once('load', cb)
  loader.once('error', cb)
}
```

Ironically, the prototype feature makes this module twice as complicated as necessary.

To check whether your function has been called, use `fn.called`. Once the function is called for the first time, the return value of the original function is saved in `fn.value`, and subsequent calls will continue to return this value.

```javascript
var once = require('once')

function load (cb) {
  cb = once(cb)
  var stream = createStream()
  stream.once('data', cb)
  stream.once('end', function () {
    if (!cb.called) cb(new Error('not found'))
  })
}
```

## `once.strict(func)`

Throw an error if the function is called twice.

Some functions are expected to be called only once. Using `once` for them would potentially hide logical errors.
In the example below, the `greet` function has to call the callback only once:

```javascript
function greet (name, cb) {
  // `return` is missing from the if statement,
  // so when no name is passed the callback is called twice
  if (!name) cb('Hello anonymous')
  cb('Hello ' + name)
}

function log (msg) {
  console.log(msg)
}

// this will print 'Hello anonymous' but the logical error will be missed
greet(null, once(log))

// once.strict will print 'Hello anonymous' and throw an error when the callback is called a second time
greet(null, once.strict(log))
```

# file-entry-cache

> Super simple cache for file metadata, useful for processes that work on a given series of files
> and that only need to repeat the job on the files that changed since the previous run of the process

[![NPM Version](http://img.shields.io/npm/v/file-entry-cache.svg?style=flat)](https://npmjs.org/package/file-entry-cache) [![Build Status](http://img.shields.io/travis/royriojas/file-entry-cache.svg?style=flat)](https://travis-ci.org/royriojas/file-entry-cache)

## install

```bash
npm i --save file-entry-cache
```

## Usage

The module exposes two functions, `create` and `createFromFile`.

## `create(cacheName, [directory, useCheckSum])`

- **cacheName**: the name of the cache to be created
- **directory**: Optional. The directory to load the cache from
- **useCheckSum**: Whether to use an md5 checksum to verify whether a file changed. If false, the file's mtime and size are used instead.

## `createFromFile(pathToCache, [useCheckSum])`

- **pathToCache**: the path to the cache file (this combines the cache name and directory)
- **useCheckSum**: Whether to use an md5 checksum to verify whether a file changed. If false, the file's mtime and size are used instead.

```js
// loads the cache; if one does not exist for the given
// id, a new one will be prepared to be created
var fileEntryCache = require('file-entry-cache');

var cache = fileEntryCache.create('testCache');

var files = expand('../fixtures/*.txt');

// the first time this method is called, it will return all the files
var oFiles = cache.getUpdatedFiles(files);

// this will persist the cache to disk, checking each file's stats and
// updating the meta attributes `size` and `mtime`.
// custom fields could also be added to the meta object and will be persisted
// in order to retrieve them later
cache.reconcile();

// use this if you want the non-visited file entries to be kept in the cache
// for more than one execution
//
// cache.reconcile( true /* noPrune */)

// on a second run
var cache2 = fileEntryCache.create('testCache');

// will now return only the files that were modified, or none
// if no files were modified prior to the execution of this function
var oFiles = cache.getUpdatedFiles(files);

// if you want to prevent a file from being considered non-modified
// (useful if a file failed some sort of validation)
// you can remove its entry from the cache by doing
cache.removeEntry('path/to/file'); // the path should be the same path of the file received on `getUpdatedFiles`
// that will effectively make the file appear as modified again until the validation is passed.
// In that case you should not remove it from the cache.

// if you need all the files, so you can determine what to do with the changed ones,
// you can call
var oFiles = cache.normalizeEntries(files);

// oFiles will be an array of objects like the following
entry = {
  key: 'some/name/file',   // the path to the file
  changed: true,           // whether the file was changed since the previous run
  meta: {
    size: 3242,            // the size of the file
    mtime: 231231231,      // the modification time of the file
    data: {}               // some extra field stored for this file (useful to save
                           // the result of a transformation on the file)
  }
}
```

## Motivation for this module

I needed a super simple and dumb **in-memory cache** with optional disk persistence (write-back cache) in order to make a script that beautifies files with `esformatter` execute only on the files that changed since the last run. In doing so, the process of beautifying files was reduced from several seconds to a small fraction of a second.

This module uses [flat-cache](https://www.npmjs.com/package/flat-cache), a super simple `key/value` cache storage with optional file persistence.

The main idea is to read the files when the task begins, apply the transforms required, and, if the process succeeds, store the new state of the files. The next time `getUpdatedFiles` is called it will return only the files that were modified, making the process finish faster.

This module could also be used by processes that modify the files by applying a transform; in that case the result of the transform could be stored in the `meta` field of the entries. Anything added to the `meta` field will be persisted. Those processes won't need to call `getUpdatedFiles`; they will instead call `normalizeEntries`, which returns the entries with a `changed` field that can be used to determine whether the file was changed. If it was not changed, the stored transformed data can be used instead of applying the transformation again, saving time when only a few files have changed.

In the worst-case scenario all the files will be processed. In the best-case scenario only a few of them will be.

## Important notes

- The values set on the meta attribute of the entries should be `stringify-able` ones if possible; flat-cache uses `circular-json` to try to persist circular structures, but this should be considered experimental. The best results are always obtained with non-circular values.
- All the changes to the cache state are done in memory first and only persisted after reconcile.

## License

MIT

# lodash.truncate v4.4.2

The [lodash](https://lodash.com/) method `_.truncate` exported as a [Node.js](https://nodejs.org/) module.

## Installation

Using npm:

```bash
$ {sudo -H} npm i -g npm
$ npm i --save lodash.truncate
```

In Node.js:

```js
var truncate = require('lodash.truncate');
```

See the [documentation](https://lodash.com/docs#truncate) or [package source](https://github.com/lodash/lodash/blob/4.4.2-npm-packages/lodash.truncate) for more details.

Overview [![Build Status](https://travis-ci.org/lydell/js-tokens.svg?branch=master)](https://travis-ci.org/lydell/js-tokens)
========

A regex that tokenizes JavaScript.

```js
var jsTokens = require("js-tokens").default

var jsString = "var foo=opts.foo;\n..."

jsString.match(jsTokens)
// ["var", " ", "foo", "=", "opts", ".", "foo", ";", "\n", ...]
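// The classification step below is a hedged extension of the example above, not
// part of the original snippet: it assumes the `matchToToken` export documented
// in the Usage section further down, and re-runs the same `g`-flagged regex with
// exec() to attach a type to each raw token.
var matchToToken = require("js-tokens").matchToToken

var match
jsTokens.lastIndex = 0
while ((match = jsTokens.exec(jsString))) {
  var token = matchToToken(match)
  // e.g. { type: "name", value: "foo" }, { type: "punctuator", value: "=" }, ...
  console.log(token.type, JSON.stringify(token.value))
}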
```

Installation
============

`npm install js-tokens`

```js
import jsTokens from "js-tokens"
// or:
var jsTokens = require("js-tokens").default
```

Usage
=====

### `jsTokens` ###

A regex with the `g` flag that matches JavaScript tokens.

The regex _always_ matches, even invalid JavaScript and the empty string.

The next match is always directly after the previous.

### `var token = matchToToken(match)` ###

```js
import {matchToToken} from "js-tokens"
// or:
var matchToToken = require("js-tokens").matchToToken
```

Takes a `match` returned by `jsTokens.exec(string)`, and returns a `{type: String, value: String}` object. The following types are available:

- string
- comment
- regex
- number
- name
- punctuator
- whitespace
- invalid

Multi-line comments and strings also have a `closed` property indicating if the token was closed or not (see below).

Comments and strings both come in several flavors. To distinguish them, check if the token starts with `//`, `/*`, `'`, `"` or `` ` ``.

Names are ECMAScript IdentifierNames, that is, including both identifiers and keywords. You may use [is-keyword-js] to tell them apart.

Whitespace includes both line terminators and other whitespace.

[is-keyword-js]: https://github.com/crissdev/is-keyword-js

ECMAScript support
==================

The intention is to always support the latest ECMAScript version whose feature set has been finalized.

If adding support for a newer version requires changes, a new version with a major version bump will be released.

Currently, ECMAScript 2018 is supported.

Invalid code handling
=====================

Unterminated strings are still matched as strings. JavaScript strings cannot contain (unescaped) newlines, so unterminated strings simply end at the end of the line. Unterminated template strings can contain unescaped newlines, though, so they go on to the end of input.

Unterminated multi-line comments are also still matched as comments. They simply go on to the end of the input.

Unterminated regex literals are likely matched as division and whatever is inside the regex.

Invalid ASCII characters have their own capturing group. Invalid non-ASCII characters are treated as names, to simplify the matching of names (except unicode spaces which are treated as whitespace). Note: See also the [ES2018](#es2018) section.

Regex literals may contain invalid regex syntax. They are still matched as regex literals. They may also contain repeated regex flags, to keep the regex simple.

Strings may contain invalid escape sequences.

Limitations
===========

Tokenizing JavaScript using regexes—in fact, _one single regex_—won’t be perfect. But that’s not the point either.

You may compare jsTokens with [esprima] by using `esprima-compare.js`. See `npm run esprima-compare`!

[esprima]: http://esprima.org/

### Template string interpolation ###

Template strings are matched as single tokens, from the starting `` ` `` to the ending `` ` ``, including interpolations (whose tokens are not matched individually).

Matching template string interpolations requires recursive balancing of `{` and `}`—something that JavaScript regexes cannot do. Only one level of nesting is supported.

### Division and regex literals collision ###

Consider this example:

```js
var g = 9.82
var number = bar / 2/g

var regex = / 2/g
```

A human can easily understand that in the `number` line we’re dealing with division, and in the `regex` line we’re dealing with a regex literal. How come? Because humans can look at the whole code to put the `/` characters in context.
A JavaScript regex cannot. It only sees forwards. (Well, ES2018 regexes can also look backwards. See the [ES2018](#es2018) section). When the `jsTokens` regex scans through the above, it will see the following at the end of both the `number` and `regex` rows:

```js
/ 2/g
```

It is then impossible to know if that is a regex literal, or part of an expression dealing with division.

Here is a similar case:

```js
foo /= 2/g
foo(/= 2/g)
```

The first line divides the `foo` variable by `2/g`. The second line calls the `foo` function with the regex literal `/= 2/g`. Again, since `jsTokens` only sees forwards, it cannot tell the two cases apart.

There are some cases where we _can_ tell division and regex literals apart, though.

First off, we have the simple cases where there’s only one slash in the line:

```js
var foo = 2/g
foo /= 2
```

Regex literals cannot contain newlines, so the above cases are correctly identified as division. Things only get problematic when there is more than one non-comment slash in a single line.

Secondly, not every character is a valid regex flag.

```js
var number = bar / 2/e
```

The above example is also correctly identified as division, because `e` is not a valid regex flag. I initially wanted to future-proof by allowing `[a-zA-Z]*` (any letter) as flags, but it is not worth it since it increases the number of ambiguous cases. So only the standard `g`, `m`, `i`, `y` and `u` flags are allowed. This means that the above example will be identified as division as long as you don’t rename the `e` variable to some permutation of `gmiyus` that is 1 to 6 characters long.

Lastly, we can look _forward_ for information.

- If the token following what looks like a regex literal is not valid after a regex literal, but is valid in a division expression, then the regex literal is treated as division instead. For example, a flagless regex cannot be followed by a string, number or name, but all of those three can be the denominator of a division.
- Generally, if what looks like a regex literal is followed by an operator, the regex literal is treated as division instead. This is because regexes are seldom used with operators (such as `+`, `*`, `&&` and `==`), but division could likely be part of such an expression.

Please consult the regex source and the test cases for precise information on when regex or division is matched (should you need to know). In short, you could sum it up as: If the end of a statement looks like a regex literal (even if it isn’t), it will be treated as one. Otherwise it should work as expected (if you write sane code).

### ES2018 ###

ES2018 added some nice regex improvements to the language.

- [Unicode property escapes] should allow telling names and invalid non-ASCII characters apart without blowing up the regex size.
- [Lookbehind assertions] should allow telling division and regex literals apart in more cases.
- [Named capture groups] might simplify some things.

These things would be nice to do, but are not critical. They probably have to wait until the oldest maintained Node.js LTS release supports those features.

[Unicode property escapes]: http://2ality.com/2017/07/regexp-unicode-property-escapes.html
[Lookbehind assertions]: http://2ality.com/2017/05/regexp-lookbehind-assertions.html
[Named capture groups]: http://2ality.com/2017/05/regexp-named-capture-groups.html

License
=======

[MIT](LICENSE).
# v8-compile-cache

[![Build Status](https://travis-ci.org/zertosh/v8-compile-cache.svg?branch=master)](https://travis-ci.org/zertosh/v8-compile-cache)

`v8-compile-cache` attaches a `require` hook to use [V8's code cache](https://v8project.blogspot.com/2015/07/code-caching.html) to speed up instantiation time. The "code cache" is the work of parsing and compiling done by V8.

The ability to tap into V8 to produce/consume this cache was introduced in [Node v5.7.0](https://nodejs.org/en/blog/release/v5.7.0/).

## Usage

1. Add the dependency:

  ```sh
  $ npm install --save v8-compile-cache
  ```

2. Then, in your entry module add:

  ```js
  require('v8-compile-cache');
  ```

**Requiring `v8-compile-cache` in Node <5.7.0 is a noop – but you need at least Node 4.0.0 to support the ES2015 syntax used by `v8-compile-cache`.**

## Options

Set the environment variable `DISABLE_V8_COMPILE_CACHE=1` to disable the cache.

The cache directory is defined by the environment variable `V8_COMPILE_CACHE_CACHE_DIR`, or defaults to `<os.tmpdir()>/v8-compile-cache-<V8_VERSION>`.

## Internals

Cache files are suffixed `.BLOB` and `.MAP` corresponding to the entry module that required `v8-compile-cache`. The cache is _entry module specific_ because it is faster to load the entire code cache into memory at once, than it is to read it from disk on a file-by-file basis.

## Benchmarks

See https://github.com/zertosh/v8-compile-cache/tree/master/bench.

**Load Times:**

| Module           | Without Cache | With Cache |
| ---------------- | -------------:| ----------:|
| `babel-core`     | `218ms`       | `185ms`    |
| `yarn`           | `153ms`       | `113ms`    |
| `yarn` (bundled) | `228ms`       | `105ms`    |

_^ Includes the overhead of loading the cache itself._

## Acknowledgements

* `FileSystemBlobStore` and `NativeCompileCache` are based on Atom's implementation of their v8 compile cache:
  - https://github.com/atom/atom/blob/b0d7a8a/src/file-system-blob-store.js
  - https://github.com/atom/atom/blob/b0d7a8a/src/native-compile-cache.js
* `mkdirpSync` is based on:
  - https://github.com/substack/node-mkdirp/blob/f2003bb/index.js#L55-L98

# Near Bindings Generator

Transforms the AssemblyScript AST to serialize exported functions and add `encode` and `decode` functions for generating and parsing JSON strings.

## Using via CLI

After installing it with `npm install nearprotocol/near-bindgen-as`, add it to the CLI arguments of the AssemblyScript compiler as a transform:

```bash
asc <file> --transform near-bindgen-as ...
```

This module also adds a binary `near-asc`, which adds the default arguments required to build NEAR contracts as well as the transformer.

```bash
near-asc <input file> <output file>
```

## Using a script to compile

Another way is to add a file such as `asconfig.js` such as:

```js
const compile = require("near-bindgen-as/compiler").compile;

compile("assembly/index.ts", // input file
        "out/index.wasm",    // output file
        [
          // "-O1",          // Optional arguments
          "--debug",
          "--measure"
        ],
        // Prints out the final cli arguments passed to compiler.
        {verbose: true}
);
```

It can then be built with `node asconfig.js`. There is an example of this in the test directory.

Railroad-diagram Generator
==========================

This is a small js library for generating railroad diagrams (like what [JSON.org](http://json.org) uses) using SVG.

Railroad diagrams are a way of visually representing a grammar in a form that is more readable than using regular expressions or BNF.
I think (though I haven't given it a lot of thought yet) that if it's easy to write a context-free grammar for the language, the corresponding railroad diagram will be easy as well. There are several railroad-diagram generators out there, but none of them had the visual appeal I wanted. [Here's an example of how they look!](http://www.xanthir.com/etc/railroad-diagrams/example.html) And [here's an online generator for you to play with and get SVG code from!](http://www.xanthir.com/etc/railroad-diagrams/generator.html) The library now exists in a Python port as well! See the information further down. Details ------- To use the library, just include the js and css files, and then call the Diagram() function. Its arguments are the components of the diagram (Diagram is a special form of Sequence). An alternative to Diagram() is ComplexDiagram() which is used to describe a complex type diagram. Components are either leaves or containers. The leaves: * Terminal(text) or a bare string - represents literal text * NonTerminal(text) - represents an instruction or another production * Comment(text) - a comment * Skip() - an empty line The containers: * Sequence(children) - like simple concatenation in a regex * Choice(index, children) - like | in a regex. The index argument specifies which child is the "normal" choice and should go in the middle * Optional(child, skip) - like ? in a regex. A shorthand for `Choice(1, [Skip(), child])`. If the optional `skip` parameter has the value `"skip"`, it instead puts the Skip() in the straight-line path, for when the "normal" behavior is to omit the item. * OneOrMore(child, repeat) - like + in a regex. The 'repeat' argument is optional, and specifies something that must go between the repetitions. * ZeroOrMore(child, repeat, skip) - like * in a regex. A shorthand for `Optional(OneOrMore(child, repeat))`. The optional `skip` parameter is identical to Optional(). For convenience, each component can be called with or without `new`. If called without `new`, the container components become n-ary; that is, you can say either `new Sequence([A, B])` or just `Sequence(A,B)`. After constructing a Diagram, call `.format(...padding)` on it, specifying 0-4 padding values (just like CSS) for some additional "breathing space" around the diagram (the paddings default to 20px). The result can either be `.toString()`'d for the markup, or `.toSVG()`'d for an `<svg>` element, which can then be immediately inserted to the document. As a convenience, Diagram also has an `.addTo(element)` method, which immediately converts it to SVG and appends it to the referenced element with default paddings. `element` defaults to `document.body`. Options ------- There are a few options you can tweak, at the bottom of the file. Just tweak either until the diagram looks like what you want. You can also change the CSS file - feel free to tweak to your heart's content. Note, though, that if you change the text sizes in the CSS, you'll have to go adjust the metrics for the leaf nodes as well. * VERTICAL_SEPARATION - sets the minimum amount of vertical separation between two items. Note that the stroke width isn't counted when computing the separation; this shouldn't be relevant unless you have a very small separation or very large stroke width. * ARC_RADIUS - the radius of the arcs used in the branching containers like Choice. This has a relatively large effect on the size of non-trivial diagrams. Both tight and loose values look good, depending on what you're going for. 
* DIAGRAM_CLASS - the class set on the root `<svg>` element of each diagram, for use in the CSS stylesheet. * STROKE_ODD_PIXEL_LENGTH - the default stylesheet uses odd pixel lengths for 'stroke'. Due to rasterization artifacts, they look best when the item has been translated half a pixel in both directions. If you change the styling to use a stroke with even pixel lengths, you'll want to set this variable to `false`. * INTERNAL_ALIGNMENT - when some branches of a container are narrower than others, this determines how they're aligned in the extra space. Defaults to "center", but can be set to "left" or "right". Caveats ------- At this early stage, the generator is feature-complete and works as intended, but still has several TODOs: * The font-sizes are hard-coded right now, and the font handling in general is very dumb - I'm just guessing at some metrics that are probably "good enough" rather than measuring things properly. Python Port ----------- In addition to the canonical JS version, the library now exists as a Python library as well. Using it is basically identical. The config variables are globals in the file, and so may be adjusted either manually or via tweaking from inside your program. The main difference from the JS port is how you extract the string from the Diagram. You'll find a `writeSvg(writerFunc)` method on `Diagram`, which takes a callback of one argument and passes it the string form of the diagram. For example, it can be used like `Diagram(...).writeSvg(sys.stdout.write)` to write to stdout. **Note**: the callback will be called multiple times as it builds up the string, not just once with the whole thing. If you need it all at once, consider something like a `StringIO` as an easy way to collect it into a single string. License ------- This document and all associated files in the github project are licensed under [CC0](http://creativecommons.org/publicdomain/zero/1.0/) ![](http://i.creativecommons.org/p/zero/1.0/80x15.png). This means you can reuse, remix, or otherwise appropriate this project for your own use **without restriction**. (The actual legal meaning can be found at the above link.) Don't ask me for permission to use any part of this project, **just use it**. I would appreciate attribution, but that is not required by the license. # near-sdk-as Collection of packages used in developing NEAR smart contracts in AssemblyScript including: - [`runtime library`](https://github.com/near/near-sdk-as/tree/master/sdk-core) - AssemblyScript near runtime library - [`bindgen`](https://github.com/near/near-sdk-as/tree/master/bindgen) - AssemblyScript transformer that adds the bindings needed to (de)serialize input and outputs. - [`near-mock-vm`](https://github.com/near/near-sdk-as/tree/master/near-mock-vm) - Core of the NEAR VM compiled to WebAssembly used for running unit tests. - [`@as-pect/cli`](https://github.com/jtenner/as-pect) - AssemblyScript testing framework similar to jest. 
## To Install

```sh
yarn add -D near-sdk-as
```

## Project Setup

To set up an AS project to compile with the sdk, add the following `asconfig.json` file to the root:

```json
{
  "extends": "near-sdk-as/asconfig.json"
}
```

Then, if your main file is `assembly/index.ts`, the project can be built with [`asbuild`](https://github.com/willemneal/asbuild):

```sh
yarn asb
```

will create a release build and place it in `./build/release/<name-in-package.json>.wasm`

```sh
yarn asb --target debug
```

will create a debug build and place it in `./build/debug/..`

## Testing

### Unit Testing

See the [sdk's as-pect tests for an example](./sdk/assembly/__tests__) of creating unit tests. Test files must end in `.spec.ts` and live in an `assembly/__tests__` directory.

## License

`near-sdk-as` is distributed under the terms of both the MIT license and the Apache License (Version 2.0).

See [LICENSE-MIT](LICENSE-MIT) and [LICENSE-APACHE](LICENSE-APACHE) for details.

# Glob

Match files using the patterns the shell uses, like stars and stuff.

[![Build Status](https://travis-ci.org/isaacs/node-glob.svg?branch=master)](https://travis-ci.org/isaacs/node-glob/) [![Build Status](https://ci.appveyor.com/api/projects/status/kd7f3yftf7unxlsx?svg=true)](https://ci.appveyor.com/project/isaacs/node-glob) [![Coverage Status](https://coveralls.io/repos/isaacs/node-glob/badge.svg?branch=master&service=github)](https://coveralls.io/github/isaacs/node-glob?branch=master)

This is a glob implementation in JavaScript. It uses the `minimatch` library to do its matching.

![a fun cartoon logo made of glob characters](logo/glob.png)

## Usage

Install with npm

```
npm i glob
```

```javascript
var glob = require("glob")

// options is optional
glob("**/*.js", options, function (er, files) {
  // files is an array of filenames.
  // If the `nonull` option is set, and nothing
  // was found, then files is ["**/*.js"]
  // er is an error object or null.
})
```

## Glob Primer

"Globs" are the patterns you type when you do stuff like `ls *.js` on the command line, or put `build/*` in a `.gitignore` file.

Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with `{` and end with `}`, with any number of comma-delimited sections within. Braced sections may contain slash characters, so `a{/b/c,bcd}` would expand into `a/b/c` and `abcd`.

The following characters have special magic meaning when used in a path portion:

* `*` Matches 0 or more characters in a single path portion
* `?` Matches 1 character
* `[...]` Matches a range of characters, similar to a RegExp range. If the first character of the range is `!` or `^` then it matches any character not in the range.
* `!(pattern|pattern|pattern)` Matches anything that does not match any of the patterns provided.
* `?(pattern|pattern|pattern)` Matches zero or one occurrence of the patterns provided.
* `+(pattern|pattern|pattern)` Matches one or more occurrences of the patterns provided.
* `*(a|b|c)` Matches zero or more occurrences of the patterns provided
* `@(pattern|pat*|pat?erN)` Matches exactly one of the patterns provided
* `**` If a "globstar" is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories.

### Dots

If a file or directory path portion has a `.` as the first character, then it will not match any glob pattern unless that pattern's corresponding path part also has a `.` as its first character.

For example, the pattern `a/.*/c` would match the file at `a/.b/c`.
However the pattern `a/*/c` would not, because `*` does not start with a dot character. You can make glob treat dots as normal characters by setting `dot:true` in the options. ### Basename Matching If you set `matchBase:true` in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, `*.js` would match `test/simple/basic.js`. ### Empty Sets If no matching files are found, then an empty array is returned. This differs from the shell, where the pattern itself is returned. For example: $ echo a*s*d*f a*s*d*f To get the bash-style behavior, set the `nonull:true` in the options. ### See Also: * `man sh` * `man bash` (Search for "Pattern Matching") * `man 3 fnmatch` * `man 5 gitignore` * [minimatch documentation](https://github.com/isaacs/minimatch) ## glob.hasMagic(pattern, [options]) Returns `true` if there are any special characters in the pattern, and `false` otherwise. Note that the options affect the results. If `noext:true` is set in the options object, then `+(a|b)` will not be considered a magic pattern. If the pattern has a brace expansion, like `a/{b/c,x/y}` then that is considered magical, unless `nobrace:true` is set in the options. ## glob(pattern, [options], cb) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * `cb` `{Function}` * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Perform an asynchronous glob search. ## glob.sync(pattern, [options]) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * return: `{Array<String>}` filenames found matching the pattern Perform a synchronous glob search. ## Class: glob.Glob Create a Glob object by instantiating the `glob.Glob` class. ```javascript var Glob = require("glob").Glob var mg = new Glob(pattern, options, cb) ``` It's an EventEmitter, and starts walking the filesystem to find matches immediately. ### new glob.Glob(pattern, [options], [cb]) * `pattern` `{String}` pattern to search for * `options` `{Object}` * `cb` `{Function}` Called when an error occurs, or matches are found * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Note that if the `sync` flag is set in the options, then matches will be immediately available on the `g.found` member. ### Properties * `minimatch` The minimatch object that the glob uses. * `options` The options object passed in. * `aborted` Boolean which is set to true when calling `abort()`. There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls. * `cache` Convenience object. Each field has the following possible values: * `false` - Path does not exist * `true` - Path exists * `'FILE'` - Path exists, and is not a directory * `'DIR'` - Path exists, and is a directory * `[file, entries, ...]` - Path exists, is a directory, and the array value is the results of `fs.readdir` * `statCache` Cache of `fs.stat` results, to prevent statting the same path multiple times. * `symlinks` A record of which paths are symbolic links, which is relevant in resolving `**` patterns. * `realpathCache` An optional object which is passed to `fs.realpath` to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used. ### Events * `end` When the matching is finished, this is emitted with all the matches found. 
If the `nonull` option is set, and no match was found, then the `matches` list contains the original pattern. The matches are sorted, unless the `nosort` flag is set. * `match` Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath. * `error` Emitted when an unexpected error is encountered, or whenever any fs error occurs if `options.strict` is set. * `abort` When `abort()` is called, this event is raised. ### Methods * `pause` Temporarily stop the search * `resume` Resume the search * `abort` Stop the search forever ### Options All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications. All options are false by default, unless otherwise noted. All options are added to the Glob object, as well. If you are running many `glob` operations, you can pass a Glob object as the `options` argument to a subsequent operation to shortcut some `stat` and `readdir` calls. At the very least, you may pass in shared `symlinks`, `statCache`, `realpathCache`, and `cache` options, so that parallel glob operations will be sped up by sharing information about the filesystem. * `cwd` The current working directory in which to search. Defaults to `process.cwd()`. * `root` The place where patterns starting with `/` will be mounted onto. Defaults to `path.resolve(options.cwd, "/")` (`/` on Unix systems, and `C:\` or some such on Windows.) * `dot` Include `.dot` files in normal matches and `globstar` matches. Note that an explicit dot in a portion of the pattern will always match dot files. * `nomount` By default, a pattern starting with a forward-slash will be "mounted" onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior. * `mark` Add a `/` character to directory matches. Note that this requires additional stat calls. * `nosort` Don't sort the results. * `stat` Set to true to stat *all* results. This reduces performance somewhat, and is completely unnecessary, unless `readdir` is presumed to be an untrustworthy indicator of file existence. * `silent` When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the `silent` option to true to suppress these warnings. * `strict` When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the `strict` option to raise an error in these cases. * `cache` See `cache` property above. Pass in a previously generated cache object to save some fs calls. * `statCache` A cache of results of filesystem information, to prevent unnecessary stat calls. While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See "Race Conditions" below.) * `symlinks` A cache of known symbolic links. You may pass in a previously generated `symlinks` object to save `lstat` calls when resolving `**` matches. * `sync` DEPRECATED: use `glob.sync(pattern, opts)` instead. * `nounique` In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior. 
* `nonull` Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3). * `debug` Set to enable debug logging in minimatch and glob. * `nobrace` Do not expand `{a,b}` and `{1..3}` brace sets. * `noglobstar` Do not match `**` against multiple filenames. (Ie, treat it as a normal `*` instead.) * `noext` Do not match `+(a|b)` "extglob" patterns. * `nocase` Perform a case-insensitive match. Note: on case-insensitive filesystems, non-magic patterns will match by default, since `stat` and `readdir` will not raise errors. * `matchBase` Perform a basename-only match if the pattern does not contain any slash characters. That is, `*.js` would be treated as equivalent to `**/*.js`, matching all js files in all directories. * `nodir` Do not match directories, only files. (Note: to match *only* directories, simply put a `/` at the end of the pattern.) * `ignore` Add a pattern or an array of glob patterns to exclude matches. Note: `ignore` patterns are *always* in `dot:true` mode, regardless of any other settings. * `follow` Follow symlinked directories when expanding `**` patterns. Note that this can result in a lot of duplicate references in the presence of cyclic links. * `realpath` Set to true to call `fs.realpath` on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink) * `absolute` Set to true to always receive absolute paths for matched files. Unlike `realpath`, this also affects the values returned in the `match` event. ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.3, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. Note that symlinked directories are not crawled as part of a `**`, though their contents may match against subsequent portions of the pattern. This prevents infinite loops and duplicates and the like. If an escaped pattern has no matches, and the `nonull` flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, `glob.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. ### Comments and Negation Previously, this module let you mark a pattern as a "comment" if it started with a `#` character, or a "negated" pattern if it started with a `!` character. These options were deprecated in version 5, and removed in version 6. To specify things that should not match, use the `ignore` option. ## Windows **Please only use forward-slashes in glob expressions.** Though windows uses either `/` or `\` as its path separator, only `/` characters are used by this glob implementation. 
You must use forward-slashes **only** in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators. Results from absolute patterns such as `/foo/*` are mounted onto the root setting using `path.join`. On windows, this will by default result in `/foo/*` matching `C:\foo\bar.txt`. ## Race Conditions Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such. As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result. As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls. Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem. ## Glob Logo Glob's logo was created by [Tanya Brassie](http://tanyabrassie.com/). Logo files can be found [here](https://github.com/isaacs/node-glob/tree/master/logo). The logo is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). ## Contributing Any change to behavior (including bugfixes) must come with a test. Patches that fail tests or reduce performance will be rejected. ``` # to run tests npm test # to re-generate test fixtures npm run test-regen # to benchmark against bash/zsh npm run bench # to profile javascript npm run prof ``` ![](oh-my-glob.gif)
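To tie the API reference above together, here is a small sketch of the async call, the sync call, and the cache-sharing trick described in the options section; the patterns and the `ignore` value are made up for illustration:

```javascript
var glob = require("glob")

// asynchronous search: the callback receives (error, matches)
glob("src/**/*.js", { ignore: ["**/node_modules/**"], nodir: true }, function (er, files) {
  if (er) throw er
  console.log("async matches:", files.length)
})

// synchronous variant of the same search
var filesSync = glob.sync("src/**/*.js", { nodir: true })

// a Glob object carries its stat/readdir caches; passing it as the options of a
// later call lets that call reuse the filesystem information already gathered
var g = new glob.Glob("src/**/*.js", { nodir: true }, function (er, files) {
  if (er) throw er
  var testFiles = glob.sync("test/**/*.js", g) // reuses g.cache, g.statCache, g.symlinks
  console.log(files.length, "source files,", testFiles.length, "test files")
})
```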
jeromtom_verifold
README.md frontend admin.html index.html main.js nearScr.js node_modules .package-lock.json @nodelib fs.scandir README.md out adapters fs.d.ts fs.js constants.d.ts constants.js index.d.ts index.js providers async.d.ts async.js common.d.ts common.js sync.d.ts sync.js settings.d.ts settings.js types index.d.ts index.js utils fs.d.ts fs.js index.d.ts index.js package.json fs.stat README.md out adapters fs.d.ts fs.js index.d.ts index.js providers async.d.ts async.js sync.d.ts sync.js settings.d.ts settings.js types index.d.ts index.js package.json fs.walk README.md out index.d.ts index.js providers async.d.ts async.js index.d.ts index.js stream.d.ts stream.js sync.d.ts sync.js readers async.d.ts async.js common.d.ts common.js reader.d.ts reader.js sync.d.ts sync.js settings.d.ts settings.js types index.d.ts index.js package.json acorn-node .travis.yml CHANGELOG.md LICENSE.md README.md build.js index.js lib bigint index.js class-fields index.js dynamic-import index.js export-ns-from index.js import-meta index.js numeric-separator index.js private-class-elements index.js static-class-features index.js package.json test index.js walk.js acorn-walk CHANGELOG.md README.md dist walk.d.ts walk.js package.json acorn CHANGELOG.md README.md dist acorn.d.ts acorn.js acorn.mjs.d.ts bin.js package.json anymatch README.md index.d.ts index.js package.json arg LICENSE.md README.md index.d.ts index.js package.json binary-extensions binary-extensions.json binary-extensions.json.d.ts index.d.ts index.js package.json readme.md braces CHANGELOG.md README.md index.js lib compile.js constants.js expand.js parse.js stringify.js utils.js package.json camelcase-css README.md index-es5.js index.js package.json chokidar README.md index.js lib constants.js fsevents-handler.js nodefs-handler.js node_modules glob-parent CHANGELOG.md README.md index.js package.json package.json types index.d.ts color-name README.md index.js package.json cssesc LICENSE-MIT.txt README.md cssesc.js package.json defined .travis.yml example defined.js index.js package.json test def.js falsy.js detective .travis.yml CHANGELOG.md bench detect.js esprima_v_acorn.txt bin detective.js example strings.js strings_src.js index.js package.json test both.js chained.js complicated.js es2019.js es6-module.js files both.js chained.js es6-module.js for-await.js generators.js isrequire.js nested.js optional-catch.js rest-spread.js set-in-object-pattern.js shebang.js sparse-array.js strings.js word.js yield.js generators.js isrequire.js nested.js noargs.js parseopts.js rest-spread.js return.js set-in-object-pattern.js shebang.js sparse-array.js strings.js word.js yield.js didyoumean README.md didYouMean-1.2.1.js didYouMean-1.2.1.min.js package.json dlv README.md dist dlv.es.js dlv.js dlv.umd.js index.js package.json fast-glob README.md node_modules glob-parent CHANGELOG.md README.md index.js package.json out index.d.ts index.js managers patterns.d.ts patterns.js tasks.d.ts tasks.js providers async.d.ts async.js filters deep.d.ts deep.js entry.d.ts entry.js error.d.ts error.js matchers matcher.d.ts matcher.js partial.d.ts partial.js provider.d.ts provider.js stream.d.ts stream.js sync.d.ts sync.js transformers entry.d.ts entry.js readers reader.d.ts reader.js stream.d.ts stream.js sync.d.ts sync.js settings.d.ts settings.js types index.d.ts index.js utils array.d.ts array.js errno.d.ts errno.js fs.d.ts fs.js index.d.ts index.js path.d.ts path.js pattern.d.ts pattern.js stream.d.ts stream.js string.d.ts string.js package.json fastq .github dependabot.yml workflows 
ci.yml README.md bench.js example.js index.d.ts package.json queue.js test example.ts promise.js test.js tsconfig.json fill-range README.md index.js package.json function-bind .jscs.json .travis.yml README.md implementation.js index.js package.json test index.js glob-parent README.md index.js package.json has README.md package.json src index.js test index.js is-binary-path index.d.ts index.js package.json readme.md is-core-module CHANGELOG.md README.md core.json index.js package.json test index.js is-extglob README.md index.js package.json is-glob README.md index.js package.json is-number README.md index.js package.json lilconfig dist index.d.ts index.js package.json readme.md merge2 README.md index.js package.json micromatch README.md index.js package.json minimist .travis.yml example parse.js index.js package.json test all_bool.js bool.js dash.js default_bool.js dotted.js kv_short.js long.js num.js parse.js parse_modified.js proto.js short.js stop_early.js unknown.js whitespace.js nanoid README.md async index.browser.js index.d.ts index.js index.native.js package.json index.browser.js index.d.ts index.js nanoid.js non-secure index.d.ts index.js package.json package.json url-alphabet index.js package.json normalize-path README.md index.js package.json object-hash dist object_hash.js index.js package.json path-parse README.md index.js package.json picocolors README.md package.json picocolors.browser.js picocolors.d.ts picocolors.js types.ts picomatch CHANGELOG.md README.md index.js lib constants.js parse.js picomatch.js scan.js utils.js package.json pify index.js package.json readme.md postcss-import README.md index.js lib join-layer.js join-media.js load-content.js parse-statements.js process-content.js resolve-id.js package.json postcss-js README.md async.js index.js objectifier.js package.json parser.js process-result.js sync.js postcss-load-config README.md package.json src index.d.ts index.js options.js plugins.js req.js postcss-nested README.md index.d.ts index.js package.json postcss-selector-parser API.md CHANGELOG.md README.md dist index.js parser.js processor.js selectors attribute.js className.js combinator.js comment.js constructors.js container.js guards.js id.js index.js namespace.js nesting.js node.js pseudo.js root.js selector.js string.js tag.js types.js universal.js sortAscending.js tokenTypes.js tokenize.js util ensureObject.js getProp.js index.js stripComments.js unesc.js package.json postcss-selector-parser.d.ts postcss-value-parser README.md lib index.d.ts index.js parse.js stringify.js unit.js walk.js package.json postcss README.md lib at-rule.d.ts at-rule.js comment.d.ts comment.js container.d.ts container.js css-syntax-error.d.ts css-syntax-error.js declaration.d.ts declaration.js document.d.ts document.js fromJSON.d.ts fromJSON.js input.d.ts input.js lazy-result.d.ts lazy-result.js list.d.ts list.js map-generator.js no-work-result.d.ts no-work-result.js node.d.ts node.js parse.d.ts parse.js parser.js postcss.d.ts postcss.js previous-map.d.ts previous-map.js processor.d.ts processor.js result.d.ts result.js root.d.ts root.js rule.d.ts rule.js stringifier.d.ts stringifier.js stringify.d.ts stringify.js symbols.js terminal-highlight.js tokenize.js warn-once.js warning.d.ts warning.js package.json queue-microtask README.md index.d.ts index.js package.json quick-lru index.d.ts index.js package.json readme.md read-cache README.md index.js package.json readdirp README.md index.d.ts index.js package.json resolve .github FUNDING.yml SECURITY.md async.js example async.js 
sync.js index.js lib async.js caller.js core.js core.json homedir.js is-core.js node-modules-paths.js normalize-options.js sync.js package.json sync.js test core.js dotdot.js dotdot abc index.js index.js faulty_basedir.js filter.js filter_sync.js home_paths.js home_paths_sync.js mock.js mock_sync.js module_dir.js module_dir xmodules aaa index.js ymodules aaa index.js zmodules bbb main.js package.json node-modules-paths.js node_path.js node_path x aaa index.js ccc index.js y bbb index.js ccc index.js nonstring.js pathfilter.js pathfilter deep_ref main.js precedence.js precedence aaa.js aaa index.js main.js bbb.js bbb main.js resolver.js resolver baz doom.js package.json quux.js browser_field a.js b.js package.json cup.coffee dot_main index.js package.json dot_slash_main index.js package.json false_main index.js package.json foo.js incorrect_main index.js package.json invalid_main package.json malformed_package_json index.js package.json mug.coffee mug.js multirepo lerna.json package.json packages package-a index.js package.json package-b index.js package.json nested_symlinks mylib async.js package.json sync.js other_path lib other-lib.js root.js quux foo index.js same_names foo.js foo index.js symlinked _ node_modules foo.js package bar.js package.json without_basedir main.js resolver_sync.js shadowed_core.js shadowed_core node_modules util index.js subdirs.js symlinks.js reusify .coveralls.yml .travis.yml README.md benchmarks createNoCodeFunction.js fib.js reuseNoCodeFunction.js package.json reusify.js test.js run-parallel README.md index.js package.json source-map-js CHANGELOG.md README.md lib array-set.js base64-vlq.js base64.js binary-search.js mapping-list.js quick-sort.js source-map-consumer.js source-map-generator.js source-node.js util.js package.json source-map.d.ts source-map.js supports-preserve-symlinks-flag .github FUNDING.yml CHANGELOG.md README.md browser.js index.js package.json test index.js tailwindcss CHANGELOG.md README.md base.css colors.d.ts colors.js components.css defaultConfig.d.ts defaultConfig.js defaultTheme.d.ts defaultTheme.js lib cli-peer-dependencies.js cli.js constants.js corePluginList.js corePlugins.js css preflight.css featureFlags.js index.js lib cacheInvalidation.js collapseAdjacentRules.js collapseDuplicateDeclarations.js defaultExtractor.js detectNesting.js evaluateTailwindFunctions.js expandApplyAtRules.js expandTailwindAtRules.js generateRules.js getModuleDependencies.js normalizeTailwindDirectives.js partitionApplyAtRules.js regex.js resolveDefaultsAtRules.js setupContextUtils.js setupTrackingContext.js sharedState.js substituteScreenAtRules.js postcss-plugins nesting README.md index.js plugin.js processTailwindFeatures.js public colors.js create-plugin.js default-config.js default-theme.js resolve-config.js util bigSign.js buildMediaQuery.js cloneDeep.js cloneNodes.js color.js configurePlugins.js createPlugin.js createUtilityPlugin.js dataTypes.js defaults.js escapeClassName.js escapeCommas.js flattenColorPalette.js formatVariantSelector.js getAllConfigs.js hashConfig.js isKeyframeRule.js isPlainObject.js isValidArbitraryValue.js log.js nameClass.js negateValue.js normalizeConfig.js normalizeScreens.js parseAnimationValue.js parseBoxShadowValue.js parseDependency.js parseObjectStyles.js pluginUtils.js prefixSelector.js removeAlphaVariables.js resolveConfig.js resolveConfigPath.js responsive.js splitAtTopLevelOnly.js tap.js toColorValue.js toPath.js transformThemeValue.js validateConfig.js withAlphaVariable.js nesting index.js package.json 
plugin.d.ts plugin.js prettier.config.js resolveConfig.d.ts resolveConfig.js screens.css scripts create-plugin-list.js generate-types.js install-integrations.js rebuildFixtures.js type-utils.js src cli-peer-dependencies.js cli.js constants.js corePluginList.js corePlugins.js css preflight.css featureFlags.js index.js lib cacheInvalidation.js collapseAdjacentRules.js collapseDuplicateDeclarations.js defaultExtractor.js detectNesting.js evaluateTailwindFunctions.js expandApplyAtRules.js expandTailwindAtRules.js generateRules.js getModuleDependencies.js normalizeTailwindDirectives.js partitionApplyAtRules.js regex.js resolveDefaultsAtRules.js setupContextUtils.js setupTrackingContext.js sharedState.js substituteScreenAtRules.js postcss-plugins nesting README.md index.js plugin.js processTailwindFeatures.js public colors.js create-plugin.js default-config.js default-theme.js resolve-config.js util bigSign.js buildMediaQuery.js cloneDeep.js cloneNodes.js color.js configurePlugins.js createPlugin.js createUtilityPlugin.js dataTypes.js defaults.js escapeClassName.js escapeCommas.js flattenColorPalette.js formatVariantSelector.js getAllConfigs.js hashConfig.js isKeyframeRule.js isPlainObject.js isValidArbitraryValue.js log.js nameClass.js negateValue.js normalizeConfig.js normalizeScreens.js parseAnimationValue.js parseBoxShadowValue.js parseDependency.js parseObjectStyles.js pluginUtils.js prefixSelector.js removeAlphaVariables.js resolveConfig.js resolveConfigPath.js responsive.js splitAtTopLevelOnly.js tap.js toColorValue.js toPath.js transformThemeValue.js validateConfig.js withAlphaVariable.js stubs defaultConfig.stub.js defaultPostCssConfig.stub.js simpleConfig.stub.js tailwind.css types config.d.ts generated colors.d.ts corePluginList.d.ts default-theme.d.ts index.d.ts utilities.css variants.css to-regex-range README.md index.js package.json util-deprecate History.md README.md browser.js node.js package.json xtend README.md immutable.js mutable.js package.json test.js yaml README.md browser dist PlainValue-b8036b75.js Schema-e94716c8.js index.js legacy-exports.js package.json parse-cst.js resolveSeq-492ab440.js types.js util.js warnings-df54cb69.js index.js map.js pair.js parse-cst.js scalar.js schema.js seq.js types.js types binary.js omap.js pairs.js set.js timestamp.js util.js dist Document-9b4560a1.js PlainValue-ec8e588e.js Schema-88e323a7.js index.js legacy-exports.js parse-cst.js resolveSeq-d03cb037.js test-events.js types.js util.js warnings-1000a372.js index.d.ts index.js map.js package.json pair.js parse-cst.d.ts parse-cst.js scalar.js schema.js seq.js types.d.ts types.js types binary.js omap.js pairs.js set.js timestamp.js util.d.ts util.js package-lock.json package.json styles.css tailwind.config.js | | | neardev 3cccb4a7b3ca4fbeac3901dac4e17f978e1b9b485e335b3af621cf493925d789 package.json node_modules .bin mustache.cmd mustache.ps1 .package-lock.json base-x LICENSE.md README.md package.json src index.d.ts index.js bn.js README.md lib bn.js package.json borsh LICENSE-MIT.txt README.md lib index.d.ts index.js package.json bs58 CHANGELOG.md README.md index.js package.json capability Array.prototype.forEach.js Array.prototype.map.js Error.captureStackTrace.js Error.prototype.stack.js Function.prototype.bind.js Object.create.js Object.defineProperties.js Object.defineProperty.js Object.prototype.hasOwnProperty.js README.md arguments.callee.caller.js es5.js index.js lib CapabilityDetector.js definitions.js index.js package.json strict mode.js depd History.md Readme.md index.js lib 
browser index.js package.json error-polyfill README.md index.js lib index.js non-v8 Frame.js FrameStringParser.js FrameStringSource.js index.js prepareStackTrace.js unsupported.js v8.js package.json http-errors HISTORY.md README.md index.js node_modules depd History.md Readme.md index.js lib browser index.js compat callsite-tostring.js event-listener-count.js index.js package.json package.json inherits README.md inherits.js inherits_browser.js package.json js-sha256 CHANGELOG.md LICENSE.txt README.md build sha256.min.js index.d.ts package.json src sha256.js mustache CHANGELOG.md README.md mustache.js mustache.min.js package.json near-api-js README.md browser-exports.js dist near-api-js.js near-api-js.min.js lib account.d.ts account.js account_creator.d.ts account_creator.js account_multisig.d.ts account_multisig.js browser-connect.d.ts browser-connect.js browser-index.d.ts browser-index.js common-index.d.ts common-index.js connect.d.ts connect.js connection.d.ts connection.js constants.d.ts constants.js contract.d.ts contract.js generated rpc_error_schema.json index.d.ts index.js key_stores browser-index.d.ts browser-index.js browser_local_storage_key_store.d.ts browser_local_storage_key_store.js in_memory_key_store.d.ts in_memory_key_store.js index.d.ts index.js keystore.d.ts keystore.js merge_key_store.d.ts merge_key_store.js unencrypted_file_system_keystore.d.ts unencrypted_file_system_keystore.js near.d.ts near.js providers index.d.ts index.js json-rpc-provider.d.ts json-rpc-provider.js provider.d.ts provider.js res error_messages.json signer.d.ts signer.js transaction.d.ts transaction.js utils enums.d.ts enums.js errors.d.ts errors.js exponential-backoff.d.ts exponential-backoff.js format.d.ts format.js index.d.ts index.js key_pair.d.ts key_pair.js rpc_errors.d.ts rpc_errors.js serialize.d.ts serialize.js setup-node-fetch.d.ts setup-node-fetch.js web.d.ts web.js validators.d.ts validators.js wallet-account.d.ts wallet-account.js package.json node-fetch LICENSE.md README.md browser.js lib index.es.js index.js package.json o3 README.md index.js lib Class.js abstractMethod.js index.js package.json safe-buffer README.md index.d.ts index.js package.json setprototypeof README.md index.d.ts index.js package.json test index.js statuses HISTORY.md README.md codes.json index.js package.json text-encoding-utf-8 LICENSE.md README.md lib encoding.js encoding.lib.js package.json src encoding.js polyfill.js toidentifier HISTORY.md README.md index.js package.json tr46 index.js lib mappingTable.json package.json tweetnacl AUTHORS.md CHANGELOG.md PULL_REQUEST_TEMPLATE.md README.md nacl-fast.js nacl-fast.min.js nacl.d.ts nacl.js nacl.min.js package.json u3 README.md index.js lib cache.js eachCombination.js index.js package.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js public-api.js url-state-machine.js utils.js package.json package-lock.json package.json test factor circuit.r1cs.json
# is-core-module <sup>[![Version Badge][2]][1]</sup> [![github actions][actions-image]][actions-url] [![coverage][codecov-image]][codecov-url] [![dependency status][5]][6] [![dev dependency status][7]][8] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][11]][1] Is this specifier a node.js core module? Optionally provide a node version to check; defaults to the current node version. ## Example ```js var isCore = require('is-core-module'); var assert = require('assert'); assert(isCore('fs')); assert(!isCore('butts')); ``` ## Tests Clone the repo, `npm install`, and run `npm test` [1]: https://npmjs.org/package/is-core-module [2]: https://versionbadg.es/inspect-js/is-core-module.svg [5]: https://david-dm.org/inspect-js/is-core-module.svg [6]: https://david-dm.org/inspect-js/is-core-module [7]: https://david-dm.org/inspect-js/is-core-module/dev-status.svg [8]: https://david-dm.org/inspect-js/is-core-module#info=devDependencies [11]: https://nodei.co/npm/is-core-module.png?downloads=true&stars=true [license-image]: https://img.shields.io/npm/l/is-core-module.svg [license-url]: LICENSE [downloads-image]: https://img.shields.io/npm/dm/is-core-module.svg [downloads-url]: https://npm-stat.com/charts.html?package=is-core-module [codecov-image]: https://codecov.io/gh/inspect-js/is-core-module/branch/main/graphs/badge.svg [codecov-url]: https://app.codecov.io/gh/inspect-js/is-core-module/ [actions-image]: https://img.shields.io/endpoint?url=https://github-actions-badge-u3jn4tfpocch.runkit.sh/inspect-js/is-core-module [actions-url]: https://github.com/inspect-js/is-core-module/actions # PostCSS [![Gitter][chat-img]][chat] <img align="right" width="95" height="95" alt="Philosopher’s stone, logo of PostCSS" src="https://postcss.org/logo.svg"> [chat-img]: https://img.shields.io/badge/Gitter-Join_the_PostCSS_chat-brightgreen.svg [chat]: https://gitter.im/postcss/postcss PostCSS is a tool for transforming styles with JS plugins. These plugins can lint your CSS, support variables and mixins, transpile future CSS syntax, inline images, and more. PostCSS is used by industry leaders including Wikipedia, Twitter, Alibaba, and JetBrains. The [Autoprefixer] PostCSS plugin is one of the most popular CSS processors. PostCSS takes a CSS file and provides an API to analyze and modify its rules (by transforming them into an [Abstract Syntax Tree]). This API can then be used by [plugins] to do a lot of useful things, e.g., to find errors automatically, or to insert vendor prefixes. **Support / Discussion:** [Gitter](https://gitter.im/postcss/postcss)<br> **Twitter account:** [@postcss](https://twitter.com/postcss)<br> **VK.com page:** [postcss](https://vk.com/postcss)<br> **中文翻译**: [`docs/README-cn.md`](./docs/README-cn.md) For PostCSS commercial support (consulting, improving the front-end culture of your company, PostCSS plugins), contact [Evil Martians] at <[email protected]>. [Abstract Syntax Tree]: https://en.wikipedia.org/wiki/Abstract_syntax_tree [Evil Martians]: https://evilmartians.com/?utm_source=postcss [Autoprefixer]: https://github.com/postcss/autoprefixer [plugins]: https://github.com/postcss/postcss#plugins <a href="https://evilmartians.com/?utm_source=postcss"> <img src="https://evilmartians.com/badges/sponsored-by-evil-martians.svg" alt="Sponsored by Evil Martians" width="236" height="54"> </a> ## Docs Read **[full docs](https://github.com/postcss/postcss#readme)** on GitHub. 
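The paragraphs above describe the plugin API at a high level. As a minimal, illustrative sketch (not taken from the PostCSS docs — the plugin name `list-color-decls` and its behavior are made up for this example), a PostCSS 8 plugin is just an object with a `postcssPlugin` name and visitor methods that receive AST nodes:

```js
const postcss = require('postcss');

// Illustrative plugin object: PostCSS calls the `Declaration` visitor
// once for every declaration node in the parsed stylesheet.
const listColors = {
  postcssPlugin: 'list-color-decls',
  Declaration(decl) {
    if (decl.prop === 'color') {
      console.log(`${decl.prop}: ${decl.value}`);
    }
  }
};

postcss([listColors])
  .process('a { color: black; background: white }', { from: undefined })
  .then(result => {
    // `result.css` is the (possibly transformed) stylesheet as a string.
    console.log(result.css);
  });
```

Real plugins typically mutate the nodes they visit (the AST described above), and PostCSS stringifies the modified tree back to CSS.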
bs58
====

[![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58)

JavaScript component to compute base 58 encoding. This encoding is typically used for cryptocurrencies such as Bitcoin.

**Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library.

Install
-------

    npm i --save bs58

API
---

### encode(input)

`input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`.

**example**:

```js
const bs58 = require('bs58')

const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex')
const address = bs58.encode(bytes)
console.log(address)
// => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS
```

### decode(input)

`input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html).

**example**:

```js
const bs58 = require('bs58')

const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS'
const bytes = bs58.decode(address)
console.log(bytes.toString('hex'))
// => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187
```

Hack / Test
-----------

Uses JavaScript standard style. Read more:

[![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard)

Credits
-------

- [Mike Hearn](https://github.com/mikehearn) for the original Java implementation
- [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript
- [Stephan Pair](https://github.com/gasteve) for buffer improvements
- [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib
- [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency

License
-------

MIT

TweetNaCl.js
============

Port of [TweetNaCl](http://tweetnacl.cr.yp.to) / [NaCl](http://nacl.cr.yp.to/) to JavaScript for modern browsers and Node.js. Public domain.

[![Build Status](https://travis-ci.org/dchest/tweetnacl-js.svg?branch=master)](https://travis-ci.org/dchest/tweetnacl-js)

Demo: <https://dchest.github.io/tweetnacl-js/>

Documentation
=============

* [Overview](#overview)
* [Audits](#audits)
* [Installation](#installation)
* [Examples](#examples)
* [Usage](#usage)
* [Public-key authenticated encryption (box)](#public-key-authenticated-encryption-box)
* [Secret-key authenticated encryption (secretbox)](#secret-key-authenticated-encryption-secretbox)
* [Scalar multiplication](#scalar-multiplication)
* [Signatures](#signatures)
* [Hashing](#hashing)
* [Random bytes generation](#random-bytes-generation)
* [Constant-time comparison](#constant-time-comparison)
* [System requirements](#system-requirements)
* [Development and testing](#development-and-testing)
* [Benchmarks](#benchmarks)
* [Contributors](#contributors)
* [Who uses it](#who-uses-it)

Overview
--------

The primary goal of this project is to produce a translation of TweetNaCl to JavaScript which is as close as possible to the original C implementation, plus a thin layer of idiomatic high-level API on top of it.

There are two versions, you can use either of them:

* `nacl.js` is the port of TweetNaCl with minimum differences from the original + high-level API.
* `nacl-fast.js` is like `nacl.js`, but with some functions replaced with faster versions. (Used by default when importing NPM package.)
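As a quick taste of that high-level API before the detailed reference below, here is a minimal secret-key encryption sketch. It is an illustration assembled from the `secretbox` and `randomBytes` functions documented later in this README, not an official example:

```js
const nacl = require('tweetnacl');

// 32-byte key and 24-byte nonce from the library's CSPRNG
// (the lengths are exposed as constants, documented below).
const key = nacl.randomBytes(nacl.secretbox.keyLength);
const nonce = nacl.randomBytes(nacl.secretbox.nonceLength);

// All inputs and outputs are Uint8Arrays; Node.js Buffers work too (see Usage below).
const message = Buffer.from('hello, nacl');

const box = nacl.secretbox(message, nonce, key);      // ciphertext with authenticator attached
const opened = nacl.secretbox.open(box, nonce, key);  // Uint8Array, or null if authentication fails

console.log(Buffer.from(opened).toString());          // 'hello, nacl'
```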
Audits ------ TweetNaCl.js has been audited by [Cure53](https://cure53.de/) in January-February 2017 (audit was sponsored by [Deletype](https://deletype.com)): > The overall outcome of this audit signals a particularly positive assessment > for TweetNaCl-js, as the testing team was unable to find any security > problems in the library. It has to be noted that this is an exceptionally > rare result of a source code audit for any project and must be seen as a true > testament to a development proceeding with security at its core. > > To reiterate, the TweetNaCl-js project, the source code was found to be > bug-free at this point. > > [...] > > In sum, the testing team is happy to recommend the TweetNaCl-js project as > likely one of the safer and more secure cryptographic tools among its > competition. [Read full audit report](https://cure53.de/tweetnacl.pdf) Installation ------------ You can install TweetNaCl.js via a package manager: [Yarn](https://yarnpkg.com/): $ yarn add tweetnacl [NPM](https://www.npmjs.org/): $ npm install tweetnacl or [download source code](https://github.com/dchest/tweetnacl-js/releases). Examples -------- You can find usage examples in our [wiki](https://github.com/dchest/tweetnacl-js/wiki/Examples). Usage ----- All API functions accept and return bytes as `Uint8Array`s. If you need to encode or decode strings, use functions from <https://github.com/dchest/tweetnacl-util-js> or one of the more robust codec packages. In Node.js v4 and later `Buffer` objects are backed by `Uint8Array`s, so you can freely pass them to TweetNaCl.js functions as arguments. The returned objects are still `Uint8Array`s, so if you need `Buffer`s, you'll have to convert them manually; make sure to convert using copying: `Buffer.from(array)` (or `new Buffer(array)` in Node.js v4 or earlier), instead of sharing: `Buffer.from(array.buffer)` (or `new Buffer(array.buffer)` Node 4 or earlier), because some functions return subarrays of their buffers. ### Public-key authenticated encryption (box) Implements *x25519-xsalsa20-poly1305*. #### nacl.box.keyPair() Generates a new random key pair for box and returns it as an object with `publicKey` and `secretKey` members: { publicKey: ..., // Uint8Array with 32-byte public key secretKey: ... // Uint8Array with 32-byte secret key } #### nacl.box.keyPair.fromSecretKey(secretKey) Returns a key pair for box with public key corresponding to the given secret key. #### nacl.box(message, nonce, theirPublicKey, mySecretKey) Encrypts and authenticates message using peer's public key, our secret key, and the given nonce, which must be unique for each distinct message for a key pair. Returns an encrypted and authenticated message, which is `nacl.box.overheadLength` longer than the original message. #### nacl.box.open(box, nonce, theirPublicKey, mySecretKey) Authenticates and decrypts the given box with peer's public key, our secret key, and the given nonce. Returns the original message, or `null` if authentication fails. #### nacl.box.before(theirPublicKey, mySecretKey) Returns a precomputed shared key which can be used in `nacl.box.after` and `nacl.box.open.after`. #### nacl.box.after(message, nonce, sharedKey) Same as `nacl.box`, but uses a shared key precomputed with `nacl.box.before`. #### nacl.box.open.after(box, nonce, sharedKey) Same as `nacl.box.open`, but uses a shared key precomputed with `nacl.box.before`. #### Constants ##### nacl.box.publicKeyLength = 32 Length of public key in bytes. ##### nacl.box.secretKeyLength = 32 Length of secret key in bytes. 
##### nacl.box.sharedKeyLength = 32 Length of precomputed shared key in bytes. ##### nacl.box.nonceLength = 24 Length of nonce in bytes. ##### nacl.box.overheadLength = 16 Length of overhead added to box compared to original message. ### Secret-key authenticated encryption (secretbox) Implements *xsalsa20-poly1305*. #### nacl.secretbox(message, nonce, key) Encrypts and authenticates message using the key and the nonce. The nonce must be unique for each distinct message for this key. Returns an encrypted and authenticated message, which is `nacl.secretbox.overheadLength` longer than the original message. #### nacl.secretbox.open(box, nonce, key) Authenticates and decrypts the given secret box using the key and the nonce. Returns the original message, or `null` if authentication fails. #### Constants ##### nacl.secretbox.keyLength = 32 Length of key in bytes. ##### nacl.secretbox.nonceLength = 24 Length of nonce in bytes. ##### nacl.secretbox.overheadLength = 16 Length of overhead added to secret box compared to original message. ### Scalar multiplication Implements *x25519*. #### nacl.scalarMult(n, p) Multiplies an integer `n` by a group element `p` and returns the resulting group element. #### nacl.scalarMult.base(n) Multiplies an integer `n` by a standard group element and returns the resulting group element. #### Constants ##### nacl.scalarMult.scalarLength = 32 Length of scalar in bytes. ##### nacl.scalarMult.groupElementLength = 32 Length of group element in bytes. ### Signatures Implements [ed25519](http://ed25519.cr.yp.to). #### nacl.sign.keyPair() Generates new random key pair for signing and returns it as an object with `publicKey` and `secretKey` members: { publicKey: ..., // Uint8Array with 32-byte public key secretKey: ... // Uint8Array with 64-byte secret key } #### nacl.sign.keyPair.fromSecretKey(secretKey) Returns a signing key pair with public key corresponding to the given 64-byte secret key. The secret key must have been generated by `nacl.sign.keyPair` or `nacl.sign.keyPair.fromSeed`. #### nacl.sign.keyPair.fromSeed(seed) Returns a new signing key pair generated deterministically from a 32-byte seed. The seed must contain enough entropy to be secure. This method is not recommended for general use: instead, use `nacl.sign.keyPair` to generate a new key pair from a random seed. #### nacl.sign(message, secretKey) Signs the message using the secret key and returns a signed message. #### nacl.sign.open(signedMessage, publicKey) Verifies the signed message and returns the message without signature. Returns `null` if verification failed. #### nacl.sign.detached(message, secretKey) Signs the message using the secret key and returns a signature. #### nacl.sign.detached.verify(message, signature, publicKey) Verifies the signature for the message and returns `true` if verification succeeded or `false` if it failed. #### Constants ##### nacl.sign.publicKeyLength = 32 Length of signing public key in bytes. ##### nacl.sign.secretKeyLength = 64 Length of signing secret key in bytes. ##### nacl.sign.seedLength = 32 Length of seed for `nacl.sign.keyPair.fromSeed` in bytes. ##### nacl.sign.signatureLength = 64 Length of signature in bytes. ### Hashing Implements *SHA-512*. #### nacl.hash(message) Returns SHA-512 hash of the message. #### Constants ##### nacl.hash.hashLength = 64 Length of hash in bytes. ### Random bytes generation #### nacl.randomBytes(length) Returns a `Uint8Array` of the given length containing random bytes of cryptographic quality. 
**Implementation note** TweetNaCl.js uses the following methods to generate random bytes, depending on the platform it runs on: * `window.crypto.getRandomValues` (WebCrypto standard) * `window.msCrypto.getRandomValues` (Internet Explorer 11) * `crypto.randomBytes` (Node.js) If the platform doesn't provide a suitable PRNG, the following functions, which require random numbers, will throw exception: * `nacl.randomBytes` * `nacl.box.keyPair` * `nacl.sign.keyPair` Other functions are deterministic and will continue working. If a platform you are targeting doesn't implement secure random number generator, but you somehow have a cryptographically-strong source of entropy (not `Math.random`!), and you know what you are doing, you can plug it into TweetNaCl.js like this: nacl.setPRNG(function(x, n) { // ... copy n random bytes into x ... }); Note that `nacl.setPRNG` *completely replaces* internal random byte generator with the one provided. ### Constant-time comparison #### nacl.verify(x, y) Compares `x` and `y` in constant time and returns `true` if their lengths are non-zero and equal, and their contents are equal. Returns `false` if either of the arguments has zero length, or arguments have different lengths, or their contents differ. System requirements ------------------- TweetNaCl.js supports modern browsers that have a cryptographically secure pseudorandom number generator and typed arrays, including the latest versions of: * Chrome * Firefox * Safari (Mac, iOS) * Internet Explorer 11 Other systems: * Node.js Development and testing ------------------------ Install NPM modules needed for development: $ npm install To build minified versions: $ npm run build Tests use minified version, so make sure to rebuild it every time you change `nacl.js` or `nacl-fast.js`. ### Testing To run tests in Node.js: $ npm run test-node By default all tests described here work on `nacl.min.js`. To test other versions, set environment variable `NACL_SRC` to the file name you want to test. For example, the following command will test fast minified version: $ NACL_SRC=nacl-fast.min.js npm run test-node To run full suite of tests in Node.js, including comparing outputs of JavaScript port to outputs of the original C version: $ npm run test-node-all To prepare tests for browsers: $ npm run build-test-browser and then open `test/browser/test.html` (or `test/browser/test-fast.html`) to run them. To run tests in both Node and Electron: $ npm test ### Benchmarking To run benchmarks in Node.js: $ npm run bench $ NACL_SRC=nacl-fast.min.js npm run bench To run benchmarks in a browser, open `test/benchmark/bench.html` (or `test/benchmark/bench-fast.html`). 
Benchmarks ---------- For reference, here are benchmarks from MacBook Pro (Retina, 13-inch, Mid 2014) laptop with 2.6 GHz Intel Core i5 CPU (Intel) in Chrome 53/OS X and Xiaomi Redmi Note 3 smartphone with 1.8 GHz Qualcomm Snapdragon 650 64-bit CPU (ARM) in Chrome 52/Android: | | nacl.js Intel | nacl-fast.js Intel | nacl.js ARM | nacl-fast.js ARM | | ------------- |:-------------:|:-------------------:|:-------------:|:-----------------:| | salsa20 | 1.3 MB/s | 128 MB/s | 0.4 MB/s | 43 MB/s | | poly1305 | 13 MB/s | 171 MB/s | 4 MB/s | 52 MB/s | | hash | 4 MB/s | 34 MB/s | 0.9 MB/s | 12 MB/s | | secretbox 1K | 1113 op/s | 57583 op/s | 334 op/s | 14227 op/s | | box 1K | 145 op/s | 718 op/s | 37 op/s | 368 op/s | | scalarMult | 171 op/s | 733 op/s | 56 op/s | 380 op/s | | sign | 77 op/s | 200 op/s | 20 op/s | 61 op/s | | sign.open | 39 op/s | 102 op/s | 11 op/s | 31 op/s | (You can run benchmarks on your devices by clicking on the links at the bottom of the [home page](https://tweetnacl.js.org)). In short, with *nacl-fast.js* and 1024-byte messages you can expect to encrypt and authenticate more than 57000 messages per second on a typical laptop or more than 14000 messages per second on a $170 smartphone, sign about 200 and verify 100 messages per second on a laptop or 60 and 30 messages per second on a smartphone, per CPU core (with Web Workers you can do these operations in parallel), which is good enough for most applications. Contributors ------------ See AUTHORS.md file. Third-party libraries based on TweetNaCl.js ------------------------------------------- * [forward-secrecy](https://github.com/alax/forward-secrecy) — Axolotl ratchet implementation * [nacl-stream](https://github.com/dchest/nacl-stream-js) - streaming encryption * [tweetnacl-auth-js](https://github.com/dchest/tweetnacl-auth-js) — implementation of [`crypto_auth`](http://nacl.cr.yp.to/auth.html) * [tweetnacl-sealed-box](https://github.com/whs/tweetnacl-sealed-box) — implementation of [`sealed boxes`](https://download.libsodium.org/doc/public-key_cryptography/sealed_boxes.html) * [chloride](https://github.com/dominictarr/chloride) - unified API for various NaCl modules Who uses it ----------- Some notable users of TweetNaCl.js: * [GitHub](https://github.com) * [MEGA](https://github.com/meganz/webclient) * [Stellar](https://www.stellar.org/) * [miniLock](https://github.com/kaepora/miniLock) # camelcase-css [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] > Convert a kebab-cased CSS property into a camelCased DOM property. ## Installation [Node.js](http://nodejs.org/) `>= 6` is required. 
Type this at the command line: ```shell npm install camelcase-css ``` ## Usage ```js const camelCaseCSS = require('camelcase-css'); camelCaseCSS('-webkit-border-radius'); //-> WebkitBorderRadius camelCaseCSS('-moz-border-radius'); //-> MozBorderRadius camelCaseCSS('-ms-border-radius'); //-> msBorderRadius camelCaseCSS('border-radius'); //-> borderRadius ``` [npm-image]: https://img.shields.io/npm/v/camelcase-css.svg [npm-url]: https://npmjs.org/package/camelcase-css [travis-image]: https://img.shields.io/travis/stevenvachon/camelcase-css.svg [travis-url]: https://travis-ci.org/stevenvachon/camelcase-css # <img src="./logo.png" alt="bn.js" width="160" height="160" /> > BigNum in pure javascript [![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js) ## Install `npm install --save bn.js` ## Usage ```js const BN = require('bn.js'); var a = new BN('dead', 16); var b = new BN('101010', 2); var res = a.add(b); console.log(res.toString(10)); // 57047 ``` **Note**: decimals are not supported in this library. ## Sponsors [![Scout APM](./sponsors/scout-apm.png)](https://scoutapm.com/) My Open Source work is supported by [Scout APM](https://scoutapm.com/) and [other sponsors](https://github.com/sponsors/indutny). ## Notation ### Prefixes There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name: * `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs * `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like `mod()`. In such cases if the result will be negative - modulo will be added to the result to make it positive ### Postfixes * `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported. * `rn` - both argument and return value of the function are plain JavaScript Numbers. Decimals are not supported. ### Examples * `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a` * `a.umod(b)` - reduce `a` modulo `b`, returning positive value * `a.iushln(13)` - shift bits of `a` left by 13 ## Instructions Prefixes/postfixes are put in parens at the end of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian). ### Utilities * `a.clone()` - clone number * `a.toString(base, length)` - convert to base-string and pad with zeroes * `a.toNumber()` - convert to Javascript Number (limited to 53 bits) * `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`) * `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding * `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array` * `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available). 
For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)` * `a.bitLength()` - get number of bits occupied * `a.zeroBits()` - return number of less-significant consequent zero bits (example: `1010000` has 4 zero bits) * `a.byteLength()` - return number of bytes occupied * `a.isNeg()` - true if the number is negative * `a.isEven()` - no comments * `a.isOdd()` - no comments * `a.isZero()` - no comments * `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`) * `a.lt(b)` - `a` less than `b` (`n`) * `a.lte(b)` - `a` less than or equals `b` (`n`) * `a.gt(b)` - `a` greater than `b` (`n`) * `a.gte(b)` - `a` greater than or equals `b` (`n`) * `a.eq(b)` - `a` equals `b` (`n`) * `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width * `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width * `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance * `BN.max(a, b)` - return `a` if `a` bigger than `b` * `BN.min(a, b)` - return `a` if `a` less than `b` ### Arithmetics * `a.neg()` - negate sign (`i`) * `a.abs()` - absolute value (`i`) * `a.add(b)` - addition (`i`, `n`, `in`) * `a.sub(b)` - subtraction (`i`, `n`, `in`) * `a.mul(b)` - multiply (`i`, `n`, `in`) * `a.sqr()` - square (`i`) * `a.pow(b)` - raise `a` to the power of `b` * `a.div(b)` - divide (`divn`, `idivn`) * `a.mod(b)` - reduct (`u`, `n`) (but no `umodn`) * `a.divmod(b)` - quotient and modulus obtained by dividing * `a.divRound(b)` - rounded division ### Bit operations * `a.or(b)` - or (`i`, `u`, `iu`) * `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future) * `a.xor(b)` - xor (`i`, `u`, `iu`) * `a.setn(b, value)` - set specified bit to `value` * `a.shln(b)` - shift left (`i`, `u`, `iu`) * `a.shrn(b)` - shift right (`i`, `u`, `iu`) * `a.testn(b)` - test if specified bit is set * `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`) * `a.bincn(b)` - add `1 << b` to the number * `a.notn(w)` - not (for the width specified by `w`) (`i`) ### Reduction * `a.gcd(b)` - GCD * `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., gcd: ... }`) * `a.invm(b)` - inverse `a` modulo `b` ## Fast reduction When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using special algorithm for [Mersenne Prime][1]. ### Reduction context To enable this trick one should create a reduction context: ```js var red = BN.red(num); ``` where `num` is just a BN instance. Or: ```js var red = BN.red(primeName); ``` Where `primeName` is either of these [Mersenne Primes][1]: * `'k256'` * `'p224'` * `'p192'` * `'p25519'` Or: ```js var red = BN.mont(num); ``` To reduce numbers with [Montgomery trick][0]. `.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`. ### Converting numbers Before performing anything in reduction context - numbers should be converted to it. 
Usually, this means that one should:

* Convert inputs to reduced ones
* Operate on them in reduction context
* Convert outputs back from the reduction context

Here is how one may convert numbers to `red`:

```js
var redA = a.toRed(red);
```

Where `red` is a reduction context created using the instructions above.

Here is how to convert them back:

```js
var a = redA.fromRed();
```

### Red instructions

Most of the instructions from the very start of this readme have their counterparts in red context:

* `a.redAdd(b)`, `a.redIAdd(b)`
* `a.redSub(b)`, `a.redISub(b)`
* `a.redShl(num)`
* `a.redMul(b)`, `a.redIMul(b)`
* `a.redSqr()`, `a.redISqr()`
* `a.redSqrt()` - square root modulo reduction context's prime
* `a.redInvm()` - modular inverse of the number
* `a.redNeg()`
* `a.redPow(b)` - modular exponentiation

### Number Size

Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers.

## LICENSE

This software is licensed under the MIT License.

[0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication
[1]: https://en.wikipedia.org/wiki/Mersenne_prime

# Polyfill for `Object.setPrototypeOf`

[![NPM Version](https://img.shields.io/npm/v/setprototypeof.svg)](https://npmjs.org/package/setprototypeof) [![NPM Downloads](https://img.shields.io/npm/dm/setprototypeof.svg)](https://npmjs.org/package/setprototypeof) [![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg)](https://github.com/standard/standard)

A simple cross-platform implementation to set the prototype of an instantiated object. Supports all modern browsers and at least back to IE8.

## Usage:

```
$ npm install --save setprototypeof
```

```javascript
var setPrototypeOf = require('setprototypeof')

var obj = {}
setPrototypeOf(obj, {
  foo: function () {
    return 'bar'
  }
})
obj.foo() // bar
```

TypeScript is also supported:

```typescript
import setPrototypeOf from 'setprototypeof'
```

# u3 - Utility Functions

This lib contains utility functions for e3, dataflower and other projects.

## Documentation

### Installation

```bash
npm install u3
```

```bash
bower install u3
```

#### Usage

In this documentation I used the lib as follows:

```js
var u3 = require("u3"),
    cache = u3.cache,
    eachCombination = u3.eachCombination;
```

### Function wrappers

#### cache

The `cache(fn)` function caches the fn results, so on subsequent calls it will return the result of the first call. You can use different arguments, but they won't affect the return value.

```js
var a = cache(function fn(x, y, z){
    return x + y + z;
});
console.log(a(1, 2, 3)); // 6
console.log(a()); // 6
console.log(a()); // 6
```

It is possible to cache a value too.

```js
var a = cache(1 + 2 + 3);
console.log(a()); // 6
console.log(a()); // 6
console.log(a()); // 6
```

### Math

#### eachCombination

The `eachCombination(alternativesByDimension, callback)` calls the `callback(a,b,c,...)` on each combination of the `alternatives[a[],b[],c[],...]`.

```js
eachCombination([
    [1, 2, 3],
    ["a", "b"]
], console.log);
/*
    1, "a"
    1, "b"
    2, "a"
    2, "b"
    3, "a"
    3, "b"
*/
```

You can use any dimension and number of alternatives. In the current example we used 2 dimensions. By the first dimension we used 3 alternatives: `[1, 2, 3]` and by the second dimension we used 2 alternatives: `["a", "b"]`.
## License MIT - 2016 Jánszky László Lajos # queue-microtask [![ci][ci-image]][ci-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [ci-image]: https://img.shields.io/github/workflow/status/feross/queue-microtask/ci/master [ci-url]: https://github.com/feross/queue-microtask/actions [npm-image]: https://img.shields.io/npm/v/queue-microtask.svg [npm-url]: https://npmjs.org/package/queue-microtask [downloads-image]: https://img.shields.io/npm/dm/queue-microtask.svg [downloads-url]: https://npmjs.org/package/queue-microtask [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com ### fast, tiny [`queueMicrotask`](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/queueMicrotask) shim for modern engines - Use [`queueMicrotask`](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/queueMicrotask) in all modern JS engines. - No dependencies. Less than 10 lines. No shims or complicated fallbacks. - Optimal performance in all modern environments - Uses `queueMicrotask` in modern environments - Fallback to `Promise.resolve().then(fn)` in Node.js 10 and earlier, and old browsers (same performance as `queueMicrotask`) ## install ``` npm install queue-microtask ``` ## usage ```js const queueMicrotask = require('queue-microtask') queueMicrotask(() => { /* this will run soon */ }) ``` ## What is `queueMicrotask` and why would one use it? The `queueMicrotask` function is a WHATWG standard. It queues a microtask to be executed prior to control returning to the event loop. A microtask is a short function which will run after the current task has completed its work and when there is no other code waiting to be run before control of the execution context is returned to the event loop. The code `queueMicrotask(fn)` is equivalent to the code `Promise.resolve().then(fn)`. It is also very similar to [`process.nextTick(fn)`](https://nodejs.org/api/process.html#process_process_nexttick_callback_args) in Node. Using microtasks lets code run without interfering with any other, potentially higher priority, code that is pending, but before the JS engine regains control over the execution context. See the [spec](https://html.spec.whatwg.org/multipage/timers-and-user-prompts.html#microtask-queuing) or [Node documentation](https://nodejs.org/api/globals.html#globals_queuemicrotask_callback) for more information. ## Who is this package for? This package allows you to use `queueMicrotask` safely in all modern JS engines. Use it if you prioritize small JS bundle size over support for old browsers. If you just need to support Node 12 and later, use `queueMicrotask` directly. If you need to support all versions of Node, use this package. ## Why not use `process.nextTick`? In Node, `queueMicrotask` and `process.nextTick` are [essentially equivalent](https://nodejs.org/api/globals.html#globals_queuemicrotask_callback), though there are [subtle differences](https://github.com/YuzuJS/setImmediate#macrotasks-and-microtasks) that don't matter in most situations. You can think of `queueMicrotask` as a standardized version of `process.nextTick` that works in the browser. No need to rely on your browser bundler to shim `process` for the browser environment. ## Why not use `setTimeout(fn, 0)`? This approach is the most compatible, but it has problems. 
Modern browsers throttle timers severely, so `setTimeout(…, 0)` usually takes at least 4ms to run. Furthermore, the throttling gets even worse if the page is backgrounded. If you have many `setTimeout` calls, then this can severely limit the performance of your program. ## Why not use a microtask library like [`immediate`](https://www.npmjs.com/package/immediate) or [`asap`](https://www.npmjs.com/package/asap)? These packages are great! However, if you prioritize small JS bundle size over optimal performance in old browsers then you may want to consider this package. This package (`queue-microtask`) is four times smaller than `immediate`, twice as small as `asap`, and twice as small as using `process.nextTick` and letting the browser bundler shim it automatically. Note: This package throws an exception in JS environments which lack `Promise` support -- which are usually very old browsers and Node.js versions. Since the `queueMicrotask` API is supported in Node.js, Chrome, Firefox, Safari, Opera, and Edge, **the vast majority of users will get optimal performance**. Any JS environment with `Promise`, which is almost all of them, also get optimal performance. If you need support for JS environments which lack `Promise` support, use one of the alternative packages. ## What is a shim? > In computer programming, a shim is a library that transparently intercepts API calls and changes the arguments passed, handles the operation itself or redirects the operation elsewhere. – [Wikipedia](https://en.wikipedia.org/wiki/Shim_(computing)) This package could also be described as a "ponyfill". > A ponyfill is almost the same as a polyfill, but not quite. Instead of patching functionality for older browsers, a ponyfill provides that functionality as a standalone module you can use. – [PonyFoo](https://ponyfoo.com/articles/polyfills-or-ponyfills) ## API ### `queueMicrotask(fn)` The `queueMicrotask()` method queues a microtask. The `fn` argument is a function to be executed after all pending tasks have completed but before yielding control to the browser's event loop. ## license MIT. Copyright (c) [Feross Aboukhadijeh](https://feross.org). # WebIDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [WebIDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). The goal is that you should be able to write code like ```js const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a WebIDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different WebIDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the WebIDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the WebIDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). ## Status All of the numeric types are implemented (float being implemented as double) and some others are as well - check the source for all of them. 
This list will grow over time in service of the [HTML as Custom Elements](https://github.com/dglazkov/html-as-custom-elements) project, but in the meantime, pull requests welcome! I'm not sure yet what the strategy will be for modifiers, e.g. [`[Clamp]`](http://heycam.github.io/webidl/#Clamp). Maybe something like `conversions["unsigned long"](x, { clamp: true })`? We'll see. We might also want to extend the API to give better error messages, e.g. "Argument 1 of HTMLMediaElement.fastSeek is not a finite floating-point value" instead of "Argument is not a finite floating-point value." This would require passing in more information to the conversion functions than we currently do. ## Background What's actually going on here, conceptually, is pretty weird. Let's try to explain. WebIDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on WebIDL values, i.e. instances of WebIDL types. For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a WebIDL value of [WebIDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules. Separately from its type system, WebIDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given WebIDL operation, how does that get converted into a WebIDL value? For example, a JavaScript `true` passed in the position of a WebIDL `boolean` argument becomes a WebIDL `true`. But, a JavaScript `true` passed in the position of a [WebIDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a WebIDL `1`. And so on. Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the WebIDL algorithms, they don't actually use WebIDL values, since those aren't "real" outside of specs. Instead, implementations apply the WebIDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting WebIDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is payoff of WebIDL, in a nutshell. And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given WebIDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ WebIDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ WebIDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a WebIDL `1` in an unsigned long context, which then becomes a JavaScript `1`. 
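To make the pipeline described above concrete, here is a small illustrative sketch (not from the package docs) showing a few of those conversions from the JavaScript side; the exact set of supported type names is in the package source:

```js
const conversions = require("webidl-conversions");

// JavaScript value -> WebIDL value -> JavaScript value, as described above.
conversions["boolean"](0);            // false
conversions["unsigned long"](true);   // 1
conversions["double"]("3.5");         // 3.5
conversions["DOMString"](42);         // "42"

// Some conversions are specified to throw, as noted earlier:
// conversions["float"](NaN);         // TypeError: not a finite floating-point value
```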
## Don't Use This Seriously, why would you ever use this? You really shouldn't. WebIDL is … not great, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from WebIDL. In general, your JavaScript should not be trying to become more like WebIDL; if anything, we should fix WebIDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in WebIDL. anymatch [![Build Status](https://travis-ci.org/micromatch/anymatch.svg?branch=master)](https://travis-ci.org/micromatch/anymatch) [![Coverage Status](https://img.shields.io/coveralls/micromatch/anymatch.svg?branch=master)](https://coveralls.io/r/micromatch/anymatch?branch=master) ====== Javascript module to match a string against a regular expression, glob, string, or function that takes the string as an argument and returns a truthy or falsy value. The matcher can also be an array of any or all of these. Useful for allowing a very flexible user-defined config to define things like file paths. __Note: This module has Bash-parity, please be aware that Windows-style backslashes are not supported as separators. See https://github.com/micromatch/micromatch#backslashes for more information.__ Usage ----- ```sh npm install anymatch ``` #### anymatch(matchers, testString, [returnIndex], [options]) * __matchers__: (_Array|String|RegExp|Function_) String to be directly matched, string with glob patterns, regular expression test, function that takes the testString as an argument and returns a truthy value if it should be matched, or an array of any number and mix of these types. * __testString__: (_String|Array_) The string to test against the matchers. If passed as an array, the first element of the array will be used as the `testString` for non-function matchers, while the entire array will be applied as the arguments for function matchers. * __options__: (_Object_ [optional]_) Any of the [picomatch](https://github.com/micromatch/picomatch#options) options. * __returnIndex__: (_Boolean [optional]_) If true, return the array index of the first matcher that that testString matched, or -1 if no match, instead of a boolean result. 
```js
const anymatch = require('anymatch');

const matchers = [
  'path/to/file.js',
  'path/anyjs/**/*.js',
  /foo.js$/,
  string => string.includes('bar') && string.length > 10
];

anymatch(matchers, 'path/to/file.js'); // true
anymatch(matchers, 'path/anyjs/baz.js'); // true
anymatch(matchers, 'path/to/foo.js'); // true
anymatch(matchers, 'path/to/bar.js'); // true
anymatch(matchers, 'bar.js'); // false

// returnIndex = true
anymatch(matchers, 'foo.js', {returnIndex: true}); // 2
anymatch(matchers, 'path/anyjs/foo.js', {returnIndex: true}); // 1

// using globs to match directories and their children
anymatch('node_modules', 'node_modules'); // true
anymatch('node_modules', 'node_modules/somelib/index.js'); // false
anymatch('node_modules/**', 'node_modules/somelib/index.js'); // true
anymatch('node_modules/**', '/absolute/path/to/node_modules/somelib/index.js'); // false
anymatch('**/node_modules/**', '/absolute/path/to/node_modules/somelib/index.js'); // true

const matcher = anymatch(matchers);
['foo.js', 'bar.js'].filter(matcher); // [ 'foo.js' ]
```

#### anymatch(matchers)

You can also pass in only your matcher(s) to get a curried function that has already been bound to the provided matching criteria. This can be used as an `Array#filter` callback.

```js
var matcher = anymatch(matchers);

matcher('path/to/file.js'); // true
matcher('path/anyjs/baz.js', true); // 1

['foo.js', 'bar.js'].filter(matcher); // ['foo.js']
```

Changelog
----------

[See release notes page on GitHub](https://github.com/micromatch/anymatch/releases)

- **v3.0:** Removed `startIndex` and `endIndex` arguments. Node 8.x-only.
- **v2.0:** [micromatch](https://github.com/jonschlinkert/micromatch) moves away from minimatch-parity and in line with Bash. This includes handling backslashes differently (see https://github.com/micromatch/micromatch#backslashes for more information).
- **v1.2:** anymatch uses [micromatch](https://github.com/jonschlinkert/micromatch) for glob pattern matching. Issues with glob pattern matching should be reported directly to the [micromatch issue tracker](https://github.com/jonschlinkert/micromatch/issues).

License
-------

[ISC](https://raw.github.com/micromatch/anymatch/master/LICENSE)

# is-glob

[![NPM version](https://img.shields.io/npm/v/is-glob.svg?style=flat)](https://www.npmjs.com/package/is-glob) [![NPM monthly downloads](https://img.shields.io/npm/dm/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![NPM total downloads](https://img.shields.io/npm/dt/is-glob.svg?style=flat)](https://npmjs.org/package/is-glob) [![Build Status](https://img.shields.io/github/workflow/status/micromatch/is-glob/dev)](https://github.com/micromatch/is-glob/actions)

> Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a better user experience.

Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support.

## Install

Install with [npm](https://www.npmjs.com/):

```sh
$ npm install --save is-glob
```

You might also be interested in [is-valid-glob](https://github.com/jonschlinkert/is-valid-glob) and [has-glob](https://github.com/jonschlinkert/has-glob).
## Usage ```js var isGlob = require('is-glob'); ``` ### Default behavior **True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js'); isGlob('*.js'); isGlob('**/abc.js'); isGlob('abc/*.js'); isGlob('abc/(aaa|bbb).js'); isGlob('abc/[a-z].js'); isGlob('abc/{a,b}.js'); //=> true ``` Extglobs ```js isGlob('abc/@(a).js'); isGlob('abc/!(a).js'); isGlob('abc/+(a).js'); isGlob('abc/*(a).js'); isGlob('abc/?(a).js'); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('abc/\\@(a).js'); isGlob('abc/\\!(a).js'); isGlob('abc/\\+(a).js'); isGlob('abc/\\*(a).js'); isGlob('abc/\\?(a).js'); isGlob('\\!foo.js'); isGlob('\\*.js'); isGlob('\\*\\*/abc.js'); isGlob('abc/\\*.js'); isGlob('abc/\\(aaa|bbb).js'); isGlob('abc/\\[a-z].js'); isGlob('abc/\\{a,b}.js'); //=> false ``` Patterns that do not have glob patterns return `false`: ```js isGlob('abc.js'); isGlob('abc/def/ghi.js'); isGlob('foo.js'); isGlob('abc/@.js'); isGlob('abc/+.js'); isGlob('abc/?.js'); isGlob(); isGlob(null); //=> false ``` Arrays are also `false` (If you want to check if an array has a glob pattern, use [has-glob](https://github.com/jonschlinkert/has-glob)): ```js isGlob(['**/*.js']); isGlob(['foo.js']); //=> false ``` ### Option strict When `options.strict === false` the behavior is less strict in determining if a pattern is a glob. Meaning that some patterns that would return `false` may return `true`. This is done so that matching libraries like [micromatch](https://github.com/micromatch/micromatch) have a chance at determining if the pattern is a glob or not. **True** Patterns that have glob characters or regex patterns will return `true`: ```js isGlob('!foo.js', {strict: false}); isGlob('*.js', {strict: false}); isGlob('**/abc.js', {strict: false}); isGlob('abc/*.js', {strict: false}); isGlob('abc/(aaa|bbb).js', {strict: false}); isGlob('abc/[a-z].js', {strict: false}); isGlob('abc/{a,b}.js', {strict: false}); //=> true ``` Extglobs ```js isGlob('abc/@(a).js', {strict: false}); isGlob('abc/!(a).js', {strict: false}); isGlob('abc/+(a).js', {strict: false}); isGlob('abc/*(a).js', {strict: false}); isGlob('abc/?(a).js', {strict: false}); //=> true ``` **False** Escaped globs or extglobs return `false`: ```js isGlob('\\!foo.js', {strict: false}); isGlob('\\*.js', {strict: false}); isGlob('\\*\\*/abc.js', {strict: false}); isGlob('abc/\\*.js', {strict: false}); isGlob('abc/\\(aaa|bbb).js', {strict: false}); isGlob('abc/\\[a-z].js', {strict: false}); isGlob('abc/\\{a,b}.js', {strict: false}); //=> false ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. 
Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [assemble](https://www.npmjs.com/package/assemble): Get the rocks out of your socks! Assemble makes you fast at creating web projects… [more](https://github.com/assemble/assemble) | [homepage](https://github.com/assemble/assemble "Get the rocks out of your socks! Assemble makes you fast at creating web projects. Assemble is used by thousands of projects for rapid prototyping, creating themes, scaffolds, boilerplates, e-books, UI components, API documentation, blogs, building websit") * [base](https://www.npmjs.com/package/base): Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks | [homepage](https://github.com/node-base/base "Framework for rapidly creating high quality, server-side node.js applications, using plugins like building blocks") * [update](https://www.npmjs.com/package/update): Be scalable! Update is a new, open source developer framework and CLI for automating updates… [more](https://github.com/update/update) | [homepage](https://github.com/update/update "Be scalable! Update is a new, open source developer framework and CLI for automating updates of any kind in code projects.") * [verb](https://www.npmjs.com/package/verb): Documentation generator for GitHub projects. Verb is extremely powerful, easy to use, and is used… [more](https://github.com/verbose/verb) | [homepage](https://github.com/verbose/verb "Documentation generator for GitHub projects. Verb is extremely powerful, easy to use, and is used on hundreds of projects of all sizes to generate everything from API docs to readmes.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 47 | [jonschlinkert](https://github.com/jonschlinkert) | | 5 | [doowb](https://github.com/doowb) | | 1 | [phated](https://github.com/phated) | | 1 | [danhper](https://github.com/danhper) | | 1 | [paulmillr](https://github.com/paulmillr) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on March 27, 2019._ # readdirp [![Weekly downloads](https://img.shields.io/npm/dw/readdirp.svg)](https://github.com/paulmillr/readdirp) Recursive version of [fs.readdir](https://nodejs.org/api/fs.html#fs_fs_readdir_path_options_callback). Exposes a **stream API** and a **promise API**. ```sh npm install readdirp ``` ```javascript const readdirp = require('readdirp'); // Use streams to achieve small RAM & CPU footprint. // 1) Streams example with for-await. for await (const entry of readdirp('.')) { const {path} = entry; console.log(`${JSON.stringify({path})}`); } // 2) Streams example, non for-await. // Print out all JS files along with their size within the current folder & subfolders. 
readdirp('.', {fileFilter: '*.js', alwaysStat: true})
  .on('data', (entry) => {
    const {path, stats: {size}} = entry;
    console.log(`${JSON.stringify({path, size})}`);
  })
  // Optionally call stream.destroy() in `warn()` in order to abort and cause 'close' to be emitted
  .on('warn', error => console.error('non-fatal error', error))
  .on('error', error => console.error('fatal error', error))
  .on('end', () => console.log('done'));

// 3) Promise example. More RAM and CPU than streams / for-await.
const files = await readdirp.promise('.');
console.log(files.map(file => file.path));

// Other options.
readdirp('test', {
  fileFilter: '*.js',
  directoryFilter: ['!.git', '!*modules'],
  // directoryFilter: (di) => di.basename.length === 9
  type: 'files_directories',
  depth: 1
});
```

For more examples, check out the `examples` directory.

## API

`const stream = readdirp(root[, options])` — **Stream API**

- Reads the given root recursively and returns a `stream` of [entry infos](#entryinfo)
- Optionally can be used like `for await (const entry of stream)` with node.js 10+ (`asyncIterator`).
- `on('data', (entry) => {})` [entry info](#entryinfo) for every file / dir.
- `on('warn', (error) => {})` non-fatal `Error` that prevents a file / dir from being processed. Example: inaccessible to the user.
- `on('error', (error) => {})` fatal `Error` which also ends the stream. Example: illegal options were passed.
- `on('end')` — we are done. Called when all entries were found and no more will be emitted.
- `on('close')` — stream is destroyed via `stream.destroy()`. Could be useful if you want to manually abort even on a non-fatal error. At that point the stream is no longer `readable` and no more entries, warnings or errors are emitted.
- To learn more about streams, consult the very detailed [nodejs streams documentation](https://nodejs.org/api/stream.html) or the [stream-handbook](https://github.com/substack/stream-handbook)

`const entries = await readdirp.promise(root[, options])` — **Promise API**. Returns a list of [entry infos](#entryinfo).

The first argument is always `root`, the path in which to start reading and recursing into subdirectories.

### options

- `fileFilter: ["*.js"]`: filter to include or exclude files. A `Function`, Glob string or Array of glob strings.
    - **Function**: a function that takes an entry info as a parameter and returns true to include or false to exclude the entry
    - **Glob string**: a string (e.g., `*.js`) which is matched using [picomatch](https://github.com/micromatch/picomatch), so go there for more information. Globstars (`**`) are not supported since specifying a recursive pattern for an already recursive function doesn't make sense. Negated globs (as explained in the minimatch documentation) are allowed, e.g., `!*.txt` matches everything but text files.
    - **Array of glob strings**: either need to be all inclusive or all exclusive (negated) patterns otherwise an error is thrown. `['*.json', '*.js']` includes all JavaScript and JSON files. `['!.git', '!node_modules']` includes all directories except the '.git' and 'node_modules'.
    - Directories that do not pass a filter will not be recursed into.
- `directoryFilter: ['!.git']`: filter to include/exclude directories found and to recurse into. Directories that do not pass a filter will not be recursed into.
- `depth: 5`: depth at which to stop recursing even if more subdirectories are found
- `type: 'files'`: determines if data events on the stream should be emitted for `'files'` (default), `'directories'`, `'files_directories'`, or `'all'`.
- `alwaysStat: false`: always return `stats` property for every file. Default is `false`; readdirp will return `Dirent` entries. Setting it to `true` can double readdir execution time - use it only when you need file `size`, `mtime`, etc. Cannot be enabled on node <10.10.0.
- `lstat: false`: include symlink entries in the stream along with files. When `true`, `fs.lstat` will be used instead of `fs.stat`.

### `EntryInfo`

Has the following properties:

- `path: 'assets/javascripts/react.js'`: path to the file/directory (relative to given root)
- `fullPath: '/Users/dev/projects/app/assets/javascripts/react.js'`: full path to the file/directory found
- `basename: 'react.js'`: name of the file/directory
- `dirent: fs.Dirent`: built-in [dir entry object](https://nodejs.org/api/fs.html#fs_class_fs_dirent) - only with `alwaysStat: false`
- `stats: fs.Stats`: built-in [stat object](https://nodejs.org/api/fs.html#fs_class_fs_stats) - only with `alwaysStat: true`

## Changelog

- 3.5 (Oct 13, 2020) disallows recursive directory-based symlinks. Before, it could have entered an infinite loop.
- 3.4 (Mar 19, 2020) adds support for directory-based symlinks.
- 3.3 (Dec 6, 2019) stabilizes RAM consumption and enables perf management with `highWaterMark` option. Fixes race conditions related to `for-await` looping.
- 3.2 (Oct 14, 2019) improves performance by 250% and makes streams implementation more idiomatic.
- 3.1 (Jul 7, 2019) brings `bigint` support to `stat` output on Windows. This is backwards-incompatible for some cases. Be careful. If you use it incorrectly, you'll see "TypeError: Cannot mix BigInt and other types, use explicit conversions".
- 3.0 brings huge performance improvements and stream backpressure support.
- Upgrading 2.x to 3.x:
    - Signature changed from `readdirp(options)` to `readdirp(root, options)`
    - Replaced callback API with promise API.
    - Renamed `entryType` option to `type`
    - Renamed `entryType: 'both'` to `'files_directories'`
    - `EntryInfo`
        - Renamed `stat` to `stats`
            - Emitted only when `alwaysStat: true`
            - `dirent` is emitted instead of `stats` by default with `alwaysStat: false`
        - Renamed `name` to `basename`
        - Removed `parentDir` and `fullParentDir` properties
- Supported node.js versions:
    - 3.x: node 8+
    - 2.x: node 0.6+

## License

Copyright (c) 2012-2019 Thorsten Lorenz, Paul Miller (<https://paulmillr.com>)

MIT License, see [LICENSE](LICENSE) file.

# Chokidar [![Weekly downloads](https://img.shields.io/npm/dw/chokidar.svg)](https://github.com/paulmillr/chokidar) [![Yearly downloads](https://img.shields.io/npm/dy/chokidar.svg)](https://github.com/paulmillr/chokidar)

> Minimal and efficient cross-platform file watching library

[![NPM](https://nodei.co/npm/chokidar.png)](https://www.npmjs.com/package/chokidar)

## Why?

Node.js `fs.watch`:

* Doesn't report filenames on MacOS.
* Doesn't report events at all when using editors like Sublime on MacOS.
* Often reports events twice.
* Emits most changes as `rename`.
* Does not provide an easy way to recursively watch file trees.
* Does not support recursive watching on Linux.

Node.js `fs.watchFile`:

* Almost as bad at event handling.
* Also does not provide any recursive watching.
* Results in high CPU utilization.

Chokidar resolves these problems.
Initially made for **[Brunch](https://brunch.io/)** (an ultra-swift web app build tool), it is now used in [Microsoft's Visual Studio Code](https://github.com/microsoft/vscode), [gulp](https://github.com/gulpjs/gulp/), [karma](https://karma-runner.github.io/), [PM2](https://github.com/Unitech/PM2), [browserify](http://browserify.org/), [webpack](https://webpack.github.io/), [BrowserSync](https://www.browsersync.io/), and [many others](https://www.npmjs.com/browse/depended/chokidar). It has proven itself in production environments. Version 3 is out! Check out our blog post about it: [Chokidar 3: How to save 32TB of traffic every week](https://paulmillr.com/posts/chokidar-3-save-32tb-of-traffic/) ## How? Chokidar does still rely on the Node.js core `fs` module, but when using `fs.watch` and `fs.watchFile` for watching, it normalizes the events it receives, often checking for truth by getting file stats and/or dir contents. On MacOS, chokidar by default uses a native extension exposing the Darwin `FSEvents` API. This provides very efficient recursive watching compared with implementations like `kqueue` available on most \*nix platforms. Chokidar still does have to do some work to normalize the events received that way as well. On most other platforms, the `fs.watch`-based implementation is the default, which avoids polling and keeps CPU usage down. Be advised that chokidar will initiate watchers recursively for everything within scope of the paths that have been specified, so be judicious about not wasting system resources by watching much more than needed. ## Getting started Install with npm: ```sh npm install chokidar ``` Then `require` and use it in your code: ```javascript const chokidar = require('chokidar'); // One-liner for current directory chokidar.watch('.').on('all', (event, path) => { console.log(event, path); }); ``` ## API ```javascript // Example of a more typical implementation structure // Initialize watcher. const watcher = chokidar.watch('file, dir, glob, or array', { ignored: /(^|[\/\\])\../, // ignore dotfiles persistent: true }); // Something to use when events are received. const log = console.log.bind(console); // Add event listeners. watcher .on('add', path => log(`File ${path} has been added`)) .on('change', path => log(`File ${path} has been changed`)) .on('unlink', path => log(`File ${path} has been removed`)); // More possible events. watcher .on('addDir', path => log(`Directory ${path} has been added`)) .on('unlinkDir', path => log(`Directory ${path} has been removed`)) .on('error', error => log(`Watcher error: ${error}`)) .on('ready', () => log('Initial scan complete. Ready for changes')) .on('raw', (event, path, details) => { // internal log('Raw event info:', event, path, details); }); // 'add', 'addDir' and 'change' events also receive stat() results as second // argument when available: https://nodejs.org/api/fs.html#fs_class_fs_stats watcher.on('change', (path, stats) => { if (stats) console.log(`File ${path} changed size to ${stats.size}`); }); // Watch new files. watcher.add('new-file'); watcher.add(['new-file-2', 'new-file-3', '**/other-file*']); // Get list of actual paths being watched on the filesystem var watchedPaths = watcher.getWatched(); // Un-watch some files. await watcher.unwatch('new-file*'); // Stop watching. // The method is async! watcher.close().then(() => console.log('closed')); // Full list of options. See below for descriptions. // Do not use this example! 
chokidar.watch('file', { persistent: true, ignored: '*.txt', ignoreInitial: false, followSymlinks: true, cwd: '.', disableGlobbing: false, usePolling: false, interval: 100, binaryInterval: 300, alwaysStat: false, depth: 99, awaitWriteFinish: { stabilityThreshold: 2000, pollInterval: 100 }, ignorePermissionErrors: false, atomic: true // or a custom 'atomicity delay', in milliseconds (default 100) }); ``` `chokidar.watch(paths, [options])` * `paths` (string or array of strings). Paths to files, dirs to be watched recursively, or glob patterns. - Note: globs must not contain windows separators (`\`), because that's how they work by the standard — you'll need to replace them with forward slashes (`/`). - Note 2: for additional glob documentation, check out low-level library: [picomatch](https://github.com/micromatch/picomatch). * `options` (object) Options object as defined below: #### Persistence * `persistent` (default: `true`). Indicates whether the process should continue to run as long as files are being watched. If set to `false` when using `fsevents` to watch, no more events will be emitted after `ready`, even if the process continues to run. #### Path filtering * `ignored` ([anymatch](https://github.com/es128/anymatch)-compatible definition) Defines files/paths to be ignored. The whole relative or absolute path is tested, not just filename. If a function with two arguments is provided, it gets called twice per path - once with a single argument (the path), second time with two arguments (the path and the [`fs.Stats`](https://nodejs.org/api/fs.html#fs_class_fs_stats) object of that path). * `ignoreInitial` (default: `false`). If set to `false` then `add`/`addDir` events are also emitted for matching paths while instantiating the watching as chokidar discovers these file paths (before the `ready` event). * `followSymlinks` (default: `true`). When `false`, only the symlinks themselves will be watched for changes instead of following the link references and bubbling events through the link's path. * `cwd` (no default). The base directory from which watch `paths` are to be derived. Paths emitted with events will be relative to this. * `disableGlobbing` (default: `false`). If set to `true` then the strings passed to `.watch()` and `.add()` are treated as literal path names, even if they look like globs. #### Performance * `usePolling` (default: `false`). Whether to use fs.watchFile (backed by polling), or fs.watch. If polling leads to high CPU utilization, consider setting this to `false`. It is typically necessary to **set this to `true` to successfully watch files over a network**, and it may be necessary to successfully watch files in other non-standard situations. Setting to `true` explicitly on MacOS overrides the `useFsEvents` default. You may also set the CHOKIDAR_USEPOLLING env variable to true (1) or false (0) in order to override this option. * _Polling-specific settings_ (effective when `usePolling: true`) * `interval` (default: `100`). Interval of file system polling, in milliseconds. You may also set the CHOKIDAR_INTERVAL env variable to override this option. * `binaryInterval` (default: `300`). Interval of file system polling for binary files. ([see list of binary extensions](https://github.com/sindresorhus/binary-extensions/blob/master/binary-extensions.json)) * `useFsEvents` (default: `true` on MacOS). Whether to use the `fsevents` watching interface if available. When set to `true` explicitly and `fsevents` is available this supercedes the `usePolling` setting. 
When set to `false` on MacOS, `usePolling: true` becomes the default. * `alwaysStat` (default: `false`). If relying upon the [`fs.Stats`](https://nodejs.org/api/fs.html#fs_class_fs_stats) object that may get passed with `add`, `addDir`, and `change` events, set this to `true` to ensure it is provided even in cases where it wasn't already available from the underlying watch events. * `depth` (default: `undefined`). If set, limits how many levels of subdirectories will be traversed. * `awaitWriteFinish` (default: `false`). By default, the `add` event will fire when a file first appears on disk, before the entire file has been written. Furthermore, in some cases some `change` events will be emitted while the file is being written. In some cases, especially when watching for large files there will be a need to wait for the write operation to finish before responding to a file creation or modification. Setting `awaitWriteFinish` to `true` (or a truthy value) will poll file size, holding its `add` and `change` events until the size does not change for a configurable amount of time. The appropriate duration setting is heavily dependent on the OS and hardware. For accurate detection this parameter should be relatively high, making file watching much less responsive. Use with caution. * *`options.awaitWriteFinish` can be set to an object in order to adjust timing params:* * `awaitWriteFinish.stabilityThreshold` (default: 2000). Amount of time in milliseconds for a file size to remain constant before emitting its event. * `awaitWriteFinish.pollInterval` (default: 100). File size polling interval, in milliseconds. #### Errors * `ignorePermissionErrors` (default: `false`). Indicates whether to watch files that don't have read permissions if possible. If watching fails due to `EPERM` or `EACCES` with this set to `true`, the errors will be suppressed silently. * `atomic` (default: `true` if `useFsEvents` and `usePolling` are `false`). Automatically filters out artifacts that occur when using editors that use "atomic writes" instead of writing directly to the source file. If a file is re-added within 100 ms of being deleted, Chokidar emits a `change` event rather than `unlink` then `add`. If the default of 100 ms does not work well for you, you can override it by setting `atomic` to a custom value, in milliseconds. ### Methods & Events `chokidar.watch()` produces an instance of `FSWatcher`. Methods of `FSWatcher`: * `.add(path / paths)`: Add files, directories, or glob patterns for tracking. Takes an array of strings or just one string. * `.on(event, callback)`: Listen for an FS event. Available events: `add`, `addDir`, `change`, `unlink`, `unlinkDir`, `ready`, `raw`, `error`. Additionally `all` is available which gets emitted with the underlying event name and path for every event other than `ready`, `raw`, and `error`. `raw` is internal, use it carefully. * `.unwatch(path / paths)`: Stop watching files, directories, or glob patterns. Takes an array of strings or just one string. * `.close()`: **async** Removes all listeners from watched files. Asynchronous, returns Promise. Use with `await` to ensure bugs don't happen. * `.getWatched()`: Returns an object representing all the paths on the file system being watched by this `FSWatcher` instance. The object's keys are all the directories (using absolute paths unless the `cwd` option was used), and the values are arrays of the names of the items contained in each directory. 
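To see how the `FSWatcher` methods above fit together, here is a minimal sketch that exercises them in sequence; the `src` directory, the ignore pattern, and the extra paths are placeholders for illustration and are not part of the chokidar docs:

```javascript
const chokidar = require('chokidar');

async function watchSources() {
  // Create a watcher over a hypothetical `src` tree.
  const watcher = chokidar.watch('src', { ignored: /node_modules/ });

  watcher.on('ready', () => {
    // getWatched() maps each watched directory to the entry names it contains.
    console.log(watcher.getWatched());
  });

  // Track additional paths after the watcher was created.
  watcher.add(['docs', '*.md']);

  // Stop watching a subset of those paths again.
  await watcher.unwatch('docs');

  // close() is async; await it so all listeners are really removed.
  await watcher.close();
}

watchSources().catch(console.error);
```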
## CLI

If you need a CLI interface for your file watching, check out [chokidar-cli](https://github.com/open-cli-tools/chokidar-cli), allowing you to execute a command on each change, or get a stdio stream of change events.

## Install Troubleshooting

* `npm WARN optional dep failed, continuing fsevents@n.n.n`
  * This message is a normal part of how `npm` handles optional dependencies and is not indicative of a problem. Even if accompanied by other related error messages, Chokidar should function properly.

* `TypeError: fsevents is not a constructor`
  * Update chokidar by doing `rm -rf node_modules package-lock.json yarn.lock && npm install`, or update your dependency that uses chokidar.

* Chokidar is producing an `ENOSPC` error on Linux, like this:
  * `bash: cannot set terminal process group (-1): Inappropriate ioctl for device bash: no job control in this shell`
    `Error: watch /home/ ENOSPC`
  * This means Chokidar ran out of file handles and you'll need to increase their count by executing the following command in Terminal:
    `echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p`

## Changelog

For more detailed changelog, see [`full_changelog.md`](.github/full_changelog.md).
- **v3.5 (Jan 6, 2021):** Support for ARM Macs with Apple Silicon. Fixes for deleted symlinks.
- **v3.4 (Apr 26, 2020):** Support for directory-based symlinks. Fixes for macos file replacement.
- **v3.3 (Nov 2, 2019):** `FSWatcher#close()` method became async. That fixes IO race conditions related to the close method.
- **v3.2 (Oct 1, 2019):** Improve Linux RAM usage by 50%. Race condition fixes. Windows glob fixes. Improve stability by using tight range of dependency versions.
- **v3.1 (Sep 16, 2019):** dotfiles are no longer filtered out by default. Use `ignored` option if needed. Improve initial Linux scan time by 50%.
- **v3 (Apr 30, 2019):** massive CPU & RAM consumption improvements; reduces deps / package size by a factor of 17x and bumps Node.js requirement to v8.16 and higher.
- **v2 (Dec 29, 2017):** Globs are now posix-style-only; without windows support. Tons of bugfixes.
- **v1 (Apr 7, 2015):** Glob support, symlink support, tons of bugfixes. Node 0.8+ is supported.
- **v0.1 (Apr 20, 2012):** Initial release, extracted from [Brunch](https://github.com/brunch/brunch/blob/9847a065aea300da99bd0753f90354cde9de1261/src/helpers.coffee#L66).

## Also

Why was chokidar named this way? What's the meaning behind it?

> Chowkidar is a transliteration of a Hindi word meaning 'watchman, gatekeeper', चौकीदार. This ultimately comes from Sanskrit _चतुष्क_ (crossway, quadrangle, consisting-of-four).

## License

MIT (c) Paul Miller (<https://paulmillr.com>), see [LICENSE](LICENSE) file.

# reusify

[![npm version][npm-badge]][npm-url]
[![Build Status][travis-badge]][travis-url]
[![Coverage Status][coveralls-badge]][coveralls-url]

Reuse your objects and functions for maximum speed. This technique will make any function run ~10% faster. You call your functions a lot, and it adds up quickly in hot code paths.

```
$ node benchmarks/createNoCodeFunction.js
Total time 53133
Total iterations 100000000
Iteration/s 1882069.5236482036

$ node benchmarks/reuseNoCodeFunction.js
Total time 50617
Total iterations 100000000
Iteration/s 1975620.838848608
```

The above benchmark uses fibonacci to simulate a real high-cpu load. The actual numbers might differ for your use case, but the difference should not.

The benchmark was taken using Node v6.10.0.

This library was extracted from [fastparallel](http://npm.im/fastparallel).
## Example

```js
var reusify = require('reusify')
var fib = require('reusify/benchmarks/fib')
var instance = reusify(MyObject)

// get an object from the cache,
// or create a new one when the cache is empty
var obj = instance.get()

// set the state
obj.num = 100
obj.func()

// reset the state.
// if the state contains any external object
// do not use the delete operator (it is slow),
// prefer setting them to null
obj.num = 0

// store an object in the cache
instance.release(obj)

function MyObject () {
  // you need to define this property
  // so V8 can compile MyObject into a
  // hidden class
  this.next = null
  this.num = 0

  var that = this

  // this function is never reallocated,
  // so it can be optimized by V8
  this.func = function () {
    if (null) {
      // do nothing
    } else {
      // calculates fibonacci
      fib(that.num)
    }
  }
}
```

The above example was intended for synchronous code; let's see an async one:

```js
var reusify = require('reusify')

var instance = reusify(MyObject)

for (var i = 0; i < 100; i++) {
  getData(i, console.log)
}

function getData (value, cb) {
  var obj = instance.get()

  obj.value = value
  obj.cb = cb

  obj.run()
}

function MyObject () {
  this.next = null
  this.value = null

  var that = this

  this.run = function () {
    asyncOperation(that.value, that.handle)
  }

  this.handle = function (err, result) {
    that.cb(err, result)
    that.value = null
    that.cb = null
    instance.release(that)
  }
}
```

Also note how, in the above examples, the code that consumes an instance of `MyObject` resets the state to its initial condition just before storing it in the cache. That's needed so that every subsequent request for an instance from the cache gets a clean instance.

## Why

It is faster because V8 doesn't have to collect all the functions you create. On a short-lived benchmark, it is as fast as creating the nested function, but on a longer time frame it creates less pressure on the garbage collector.

## Other examples

If you want to see a more complex example, check out [middie](https://github.com/fastify/middie) and [steed](https://github.com/mcollina/steed).

## Acknowledgements

Thanks to [Trevor Norris](https://github.com/trevnorris) for getting me down the rabbit hole of performance, and thanks to [Mathias Buss](http://github.com/mafintosh) for suggesting that I share this trick.

## License

MIT

[npm-badge]: https://badge.fury.io/js/reusify.svg
[npm-url]: https://badge.fury.io/js/reusify
[travis-badge]: https://api.travis-ci.org/mcollina/reusify.svg
[travis-url]: https://travis-ci.org/mcollina/reusify
[coveralls-badge]: https://coveralls.io/repos/mcollina/reusify/badge.svg?branch=master&service=github
[coveralls-url]: https://coveralls.io/github/mcollina/reusify?branch=master

A JSON with color names and their values. Based on http://dev.w3.org/csswg/css-color/#named-colors.

[![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/)

```js
var colors = require('color-name');
colors.red //[255,0,0]
```

<a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a>

# PostCSS Nested

<img align="right" width="135" height="95" title="Philosopher’s stone, logo of PostCSS" src="https://postcss.org/logo-leftp.svg">

[PostCSS] plugin to unwrap nested rules like how Sass does it.
```css .phone { &_title { width: 500px; @media (max-width: 500px) { width: auto; } body.is_dark & { color: white; } } img { display: block; } } .title { font-size: var(--font); @at-root html { --font: 16px } } ``` will be processed to: ```css .phone_title { width: 500px; } @media (max-width: 500px) { .phone_title { width: auto; } } body.is_dark .phone_title { color: white; } .phone img { display: block; } .title { font-size: var(--font); } html { --font: 16px } ``` Related plugins: * Use [`postcss-atroot`] for `@at-root` at-rule to move nested child to the CSS root. * Use [`postcss-current-selector`] **after** this plugin if you want to use current selector in properties or variables values. * Use [`postcss-nested-ancestors`] **before** this plugin if you want to reference any ancestor element directly in your selectors with `^&`. Alternatives: * See also [`postcss-nesting`], which implements [CSSWG draft] (requires the `&` and introduces `@nest`). * [`postcss-nested-props`] for nested properties like `font-size`. <a href="https://evilmartians.com/?utm_source=postcss-nested"> <img src="https://evilmartians.com/badges/sponsored-by-evil-martians.svg" alt="Sponsored by Evil Martians" width="236" height="54"> </a> [`postcss-atroot`]: https://github.com/OEvgeny/postcss-atroot [`postcss-current-selector`]: https://github.com/komlev/postcss-current-selector [`postcss-nested-ancestors`]: https://github.com/toomuchdesign/postcss-nested-ancestors [`postcss-nested-props`]: https://github.com/jedmao/postcss-nested-props [`postcss-nesting`]: https://github.com/jonathantneal/postcss-nesting [CSSWG draft]: https://drafts.csswg.org/css-nesting-1/ [PostCSS]: https://github.com/postcss/postcss ## Usage **Step 1:** Install plugin: ```sh npm install --save-dev postcss postcss-nested ``` **Step 2:** Check your project for existing PostCSS config: `postcss.config.js` in the project root, `"postcss"` section in `package.json` or `postcss` in bundle config. If you do not use PostCSS, add it according to [official docs] and set this plugin in settings. **Step 3:** Add the plugin to plugins list: ```diff module.exports = { plugins: [ + require('postcss-nested'), require('autoprefixer') ] } ``` [official docs]: https://github.com/postcss/postcss#usage ## Options ### `bubble` By default, plugin will bubble only `@media` and `@supports` at-rules. You can add your custom at-rules to this list by `bubble` option: ```js postcss([ require('postcss-nested')({ bubble: ['phone'] }) ]) ``` ```css /* input */ a { color: white; @phone { color: black; } } /* output */ a { color: white; } @phone { a { color: black; } } ``` ### `unwrap` By default, plugin will unwrap only `@font-face`, `@keyframes` and `@document` at-rules. You can add your custom at-rules to this list by `unwrap` option: ```js postcss([ require('postcss-nested')({ unwrap: ['phone'] }) ]) ``` ```css /* input */ a { color: white; @phone { color: black; } } /* output */ a { color: white; } @phone { color: black; } ``` ### `preserveEmpty` By default, plugin will strip out any empty selector generated by intermediate nesting levels. You can set `preserveEmpty` to `true` to preserve them. ```css .a { .b { color: black; } } ``` Will be compiled to: ```css .a { } .a .b { color: black; } ``` This is especially useful if you want to export the empty classes with `postcss-modules`. 
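As a rough end-to-end sketch of the options above, the plugin can also be run programmatically. The input CSS string below is made up for illustration, and the snippet assumes `postcss` and `postcss-nested` are installed:

```js
const postcss = require('postcss');
const nested = require('postcss-nested');

const input = '.a { .b { color: black } }';

postcss([nested({ preserveEmpty: true })])
  .process(input, { from: undefined })
  .then(result => {
    // With preserveEmpty: true the intermediate `.a { }` rule is kept,
    // in addition to the unwrapped `.a .b { color: black }` rule.
    console.log(result.css);
  });
```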
# micromatch [![NPM version](https://img.shields.io/npm/v/micromatch.svg?style=flat)](https://www.npmjs.com/package/micromatch) [![NPM monthly downloads](https://img.shields.io/npm/dm/micromatch.svg?style=flat)](https://npmjs.org/package/micromatch) [![NPM total downloads](https://img.shields.io/npm/dt/micromatch.svg?style=flat)](https://npmjs.org/package/micromatch) [![Tests](https://github.com/micromatch/micromatch/actions/workflows/test.yml/badge.svg)](https://github.com/micromatch/micromatch/actions/workflows/test.yml) > Glob matching for javascript/node.js. A replacement and faster alternative to minimatch and multimatch. Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Table of Contents <details> <summary><strong>Details</strong></summary> - [Install](#install) - [Quickstart](#quickstart) - [Why use micromatch?](#why-use-micromatch) * [Matching features](#matching-features) - [Switching to micromatch](#switching-to-micromatch) * [From minimatch](#from-minimatch) * [From multimatch](#from-multimatch) - [API](#api) - [Options](#options) - [Options Examples](#options-examples) * [options.basename](#optionsbasename) * [options.bash](#optionsbash) * [options.expandRange](#optionsexpandrange) * [options.format](#optionsformat) * [options.ignore](#optionsignore) * [options.matchBase](#optionsmatchbase) * [options.noextglob](#optionsnoextglob) * [options.nonegate](#optionsnonegate) * [options.noglobstar](#optionsnoglobstar) * [options.nonull](#optionsnonull) * [options.nullglob](#optionsnullglob) * [options.onIgnore](#optionsonignore) * [options.onMatch](#optionsonmatch) * [options.onResult](#optionsonresult) * [options.posixSlashes](#optionsposixslashes) * [options.unescape](#optionsunescape) - [Extended globbing](#extended-globbing) * [Extglobs](#extglobs) * [Braces](#braces) * [Regex character classes](#regex-character-classes) * [Regex groups](#regex-groups) * [POSIX bracket expressions](#posix-bracket-expressions) - [Notes](#notes) * [Bash 4.3 parity](#bash-43-parity) * [Backslashes](#backslashes) - [Benchmarks](#benchmarks) * [Running benchmarks](#running-benchmarks) * [Latest results](#latest-results) - [Contributing](#contributing) - [About](#about) </details> ## Install Install with [npm](https://www.npmjs.com/) (requires [Node.js](https://nodejs.org/en/) >=8.6): ```sh $ npm install --save micromatch ``` ## Quickstart ```js const micromatch = require('micromatch'); // micromatch(list, patterns[, options]); ``` The [main export](#micromatch) takes a list of strings and one or more glob patterns: ```js console.log(micromatch(['foo', 'bar', 'baz', 'qux'], ['f*', 'b*'])) //=> ['foo', 'bar', 'baz'] console.log(micromatch(['foo', 'bar', 'baz', 'qux'], ['*', '!b*'])) //=> ['foo', 'qux'] ``` Use [.isMatch()](#ismatch) to for boolean matching: ```js console.log(micromatch.isMatch('foo', 'f*')) //=> true console.log(micromatch.isMatch('foo', ['b*', 'f*'])) //=> true ``` [Switching](#switching-to-micromatch) from minimatch and multimatch is easy! <br> ## Why use micromatch? > micromatch is a [replacement](#switching-to-micromatch) for minimatch and multimatch * Supports all of the same matching features as [minimatch](https://github.com/isaacs/minimatch) and [multimatch](https://github.com/sindresorhus/multimatch) * More complete support for the Bash 4.3 specification than minimatch and multimatch. 
Micromatch passes _all of the spec tests_ from bash, including some that bash still fails. * **Fast & Performant** - Loads in about 5ms and performs [fast matches](#benchmarks). * **Glob matching** - Using wildcards (`*` and `?`), globstars (`**`) for nested directories * **[Advanced globbing](#extended-globbing)** - Supports [extglobs](#extglobs), [braces](#braces-1), and [POSIX brackets](#posix-bracket-expressions), and support for escaping special characters with `\` or quotes. * **Accurate** - Covers more scenarios [than minimatch](https://github.com/yarnpkg/yarn/pull/3339) * **Well tested** - More than 5,000 [test assertions](./test) * **Windows support** - More reliable windows support than minimatch and multimatch. * **[Safe](https://github.com/micromatch/braces#braces-is-safe)** - Micromatch is not subject to DoS with brace patterns like minimatch and multimatch. ### Matching features * Support for multiple glob patterns (no need for wrappers like multimatch) * Wildcards (`**`, `*.js`) * Negation (`'!a/*.js'`, `'*!(b).js'`) * [extglobs](#extglobs) (`+(x|y)`, `!(a|b)`) * [POSIX character classes](#posix-bracket-expressions) (`[[:alpha:][:digit:]]`) * [brace expansion](https://github.com/micromatch/braces) (`foo/{1..5}.md`, `bar/{a,b,c}.js`) * regex character classes (`foo-[1-5].js`) * regex logical "or" (`foo/(abc|xyz).js`) You can mix and match these features to create whatever patterns you need! ## Switching to micromatch _(There is one notable difference between micromatch and minimatch in regards to how backslashes are handled. See [the notes about backslashes](#backslashes) for more information.)_ ### From minimatch Use [micromatch.isMatch()](#ismatch) instead of `minimatch()`: ```js console.log(micromatch.isMatch('foo', 'b*')); //=> false ``` Use [micromatch.match()](#match) instead of `minimatch.match()`: ```js console.log(micromatch.match(['foo', 'bar'], 'b*')); //=> 'bar' ``` ### From multimatch Same signature: ```js console.log(micromatch(['foo', 'bar', 'baz'], ['f*', '*z'])); //=> ['foo', 'baz'] ``` ## API **Params** * `list` **{String|Array<string>}**: List of strings to match. * `patterns` **{String|Array<string>}**: One or more glob patterns to use for matching. * `options` **{Object}**: See available [options](#options) * `returns` **{Array}**: Returns an array of matches **Example** ```js const mm = require('micromatch'); // mm(list, patterns[, options]); console.log(mm(['a.js', 'a.txt'], ['*.js'])); //=> [ 'a.js' ] ``` ### [.matcher](index.js#L104) Returns a matcher function from the given glob `pattern` and `options`. The returned function takes a string to match as its only argument and returns true if the string is a match. **Params** * `pattern` **{String}**: Glob pattern * `options` **{Object}** * `returns` **{Function}**: Returns a matcher function. **Example** ```js const mm = require('micromatch'); // mm.matcher(pattern[, options]); const isMatch = mm.matcher('*.!(*a)'); console.log(isMatch('a.a')); //=> false console.log(isMatch('a.b')); //=> true ``` ### [.isMatch](index.js#L123) Returns true if **any** of the given glob `patterns` match the specified `string`. **Params** * `str` **{String}**: The string to test. * `patterns` **{String|Array}**: One or more glob patterns to use for matching. * `[options]` **{Object}**: See available [options](#options). 
* `returns` **{Boolean}**: Returns true if any patterns match `str` **Example** ```js const mm = require('micromatch'); // mm.isMatch(string, patterns[, options]); console.log(mm.isMatch('a.a', ['b.*', '*.a'])); //=> true console.log(mm.isMatch('a.a', 'b.*')); //=> false ``` ### [.not](index.js#L148) Returns a list of strings that _**do not match any**_ of the given `patterns`. **Params** * `list` **{Array}**: Array of strings to match. * `patterns` **{String|Array}**: One or more glob pattern to use for matching. * `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Array}**: Returns an array of strings that **do not match** the given patterns. **Example** ```js const mm = require('micromatch'); // mm.not(list, patterns[, options]); console.log(mm.not(['a.a', 'b.b', 'c.c'], '*.a')); //=> ['b.b', 'c.c'] ``` ### [.contains](index.js#L188) Returns true if the given `string` contains the given pattern. Similar to [.isMatch](#isMatch) but the pattern can match any part of the string. **Params** * `str` **{String}**: The string to match. * `patterns` **{String|Array}**: Glob pattern to use for matching. * `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Boolean}**: Returns true if any of the patterns matches any part of `str`. **Example** ```js var mm = require('micromatch'); // mm.contains(string, pattern[, options]); console.log(mm.contains('aa/bb/cc', '*b')); //=> true console.log(mm.contains('aa/bb/cc', '*d')); //=> false ``` ### [.matchKeys](index.js#L230) Filter the keys of the given object with the given `glob` pattern and `options`. Does not attempt to match nested keys. If you need this feature, use [glob-object](https://github.com/jonschlinkert/glob-object) instead. **Params** * `object` **{Object}**: The object with keys to filter. * `patterns` **{String|Array}**: One or more glob patterns to use for matching. * `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Object}**: Returns an object with only keys that match the given patterns. **Example** ```js const mm = require('micromatch'); // mm.matchKeys(object, patterns[, options]); const obj = { aa: 'a', ab: 'b', ac: 'c' }; console.log(mm.matchKeys(obj, '*b')); //=> { ab: 'b' } ``` ### [.some](index.js#L259) Returns true if some of the strings in the given `list` match any of the given glob `patterns`. **Params** * `list` **{String|Array}**: The string or array of strings to test. Returns as soon as the first match is found. * `patterns` **{String|Array}**: One or more glob patterns to use for matching. * `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Boolean}**: Returns true if any `patterns` matches any of the strings in `list` **Example** ```js const mm = require('micromatch'); // mm.some(list, patterns[, options]); console.log(mm.some(['foo.js', 'bar.js'], ['*.js', '!foo.js'])); // true console.log(mm.some(['foo.js'], ['*.js', '!foo.js'])); // false ``` ### [.every](index.js#L295) Returns true if every string in the given `list` matches any of the given glob `patterns`. **Params** * `list` **{String|Array}**: The string or array of strings to test. * `patterns` **{String|Array}**: One or more glob patterns to use for matching. 
* `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Boolean}**: Returns true if all `patterns` matches all of the strings in `list` **Example** ```js const mm = require('micromatch'); // mm.every(list, patterns[, options]); console.log(mm.every('foo.js', ['foo.js'])); // true console.log(mm.every(['foo.js', 'bar.js'], ['*.js'])); // true console.log(mm.every(['foo.js', 'bar.js'], ['*.js', '!foo.js'])); // false console.log(mm.every(['foo.js'], ['*.js', '!foo.js'])); // false ``` ### [.all](index.js#L334) Returns true if **all** of the given `patterns` match the specified string. **Params** * `str` **{String|Array}**: The string to test. * `patterns` **{String|Array}**: One or more glob patterns to use for matching. * `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Boolean}**: Returns true if any patterns match `str` **Example** ```js const mm = require('micromatch'); // mm.all(string, patterns[, options]); console.log(mm.all('foo.js', ['foo.js'])); // true console.log(mm.all('foo.js', ['*.js', '!foo.js'])); // false console.log(mm.all('foo.js', ['*.js', 'foo.js'])); // true console.log(mm.all('foo.js', ['*.js', 'f*', '*o*', '*o.js'])); // true ``` ### [.capture](index.js#L361) Returns an array of matches captured by `pattern` in `string, or`null` if the pattern did not match. **Params** * `glob` **{String}**: Glob pattern to use for matching. * `input` **{String}**: String to match * `options` **{Object}**: See available [options](#options) for changing how matches are performed * `returns` **{Array|null}**: Returns an array of captures if the input matches the glob pattern, otherwise `null`. **Example** ```js const mm = require('micromatch'); // mm.capture(pattern, string[, options]); console.log(mm.capture('test/*.js', 'test/foo.js')); //=> ['foo'] console.log(mm.capture('test/*.js', 'foo/bar.css')); //=> null ``` ### [.makeRe](index.js#L387) Create a regular expression from the given glob `pattern`. **Params** * `pattern` **{String}**: A glob pattern to convert to regex. * `options` **{Object}** * `returns` **{RegExp}**: Returns a regex created from the given pattern. **Example** ```js const mm = require('micromatch'); // mm.makeRe(pattern[, options]); console.log(mm.makeRe('*.js')); //=> /^(?:(\.[\\\/])?(?!\.)(?=.)[^\/]*?\.js)$/ ``` ### [.scan](index.js#L403) Scan a glob pattern to separate the pattern into segments. Used by the [split](#split) method. **Params** * `pattern` **{String}** * `options` **{Object}** * `returns` **{Object}**: Returns an object with **Example** ```js const mm = require('micromatch'); const state = mm.scan(pattern[, options]); ``` ### [.parse](index.js#L419) Parse a glob pattern to create the source string for a regular expression. **Params** * `glob` **{String}** * `options` **{Object}** * `returns` **{Object}**: Returns an object with useful properties and output to be used as regex source string. **Example** ```js const mm = require('micromatch'); const state = mm.parse(pattern[, options]); ``` ### [.braces](index.js#L446) Process the given brace `pattern`. **Params** * `pattern` **{String}**: String with brace pattern to process. * `options` **{Object}**: Any [options](#options) to change how expansion is performed. See the [braces](https://github.com/micromatch/braces) library for all available options. 
* `returns` **{Array}** **Example** ```js const { braces } = require('micromatch'); console.log(braces('foo/{a,b,c}/bar')); //=> [ 'foo/(a|b|c)/bar' ] console.log(braces('foo/{a,b,c}/bar', { expand: true })); //=> [ 'foo/a/bar', 'foo/b/bar', 'foo/c/bar' ] ``` ## Options | **Option** | **Type** | **Default value** | **Description** | | --- | --- | --- | --- | | `basename` | `boolean` | `false` | If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. | | `bash` | `boolean` | `false` | Follow bash matching rules more strictly - disallows backslashes as escape characters, and treats single stars as globstars (`**`). | | `capture` | `boolean` | `undefined` | Return regex matches in supporting methods. | | `contains` | `boolean` | `undefined` | Allows glob to match any part of the given string(s). | | `cwd` | `string` | `process.cwd()` | Current working directory. Used by `picomatch.split()` | | `debug` | `boolean` | `undefined` | Debug regular expressions when an error is thrown. | | `dot` | `boolean` | `false` | Match dotfiles. Otherwise dotfiles are ignored unless a `.` is explicitly defined in the pattern. | | `expandRange` | `function` | `undefined` | Custom function for expanding ranges in brace patterns, such as `{a..z}`. The function receives the range values as two arguments, and it must return a string to be used in the generated regex. It's recommended that returned strings be wrapped in parentheses. This option is overridden by the `expandBrace` option. | | `failglob` | `boolean` | `false` | Similar to the `failglob` behavior in Bash, throws an error when no matches are found. Based on the bash option of the same name. | | `fastpaths` | `boolean` | `true` | To speed up processing, full parsing is skipped for a handful common glob patterns. Disable this behavior by setting this option to `false`. | | `flags` | `boolean` | `undefined` | Regex flags to use in the generated regex. If defined, the `nocase` option will be overridden. | | [format](#optionsformat) | `function` | `undefined` | Custom function for formatting the returned string. This is useful for removing leading slashes, converting Windows paths to Posix paths, etc. | | `ignore` | `array\|string` | `undefined` | One or more glob patterns for excluding strings that should not be matched from the result. | | `keepQuotes` | `boolean` | `false` | Retain quotes in the generated regex, since quotes may also be used as an alternative to backslashes. | | `literalBrackets` | `boolean` | `undefined` | When `true`, brackets in the glob pattern will be escaped so that only literal brackets will be matched. | | `lookbehinds` | `boolean` | `true` | Support regex positive and negative lookbehinds. Note that you must be using Node 8.1.10 or higher to enable regex lookbehinds. | | `matchBase` | `boolean` | `false` | Alias for `basename` | | `maxLength` | `boolean` | `65536` | Limit the max length of the input string. An error is thrown if the input string is longer than this value. | | `nobrace` | `boolean` | `false` | Disable brace matching, so that `{a,b}` and `{1..3}` would be treated as literal characters. | | `nobracket` | `boolean` | `undefined` | Disable matching with regex brackets. | | `nocase` | `boolean` | `false` | Perform case-insensitive matching. Equivalent to the regex `i` flag. Note that this option is ignored when the `flags` option is defined. 
| | `nodupes` | `boolean` | `true` | Deprecated, use `nounique` instead. This option will be removed in a future major release. By default duplicates are removed. Disable uniquification by setting this option to false. | | `noext` | `boolean` | `false` | Alias for `noextglob` | | `noextglob` | `boolean` | `false` | Disable support for matching with [extglobs](#extglobs) (like `+(a\|b)`) | | `noglobstar` | `boolean` | `false` | Disable support for matching nested directories with globstars (`**`) | | `nonegate` | `boolean` | `false` | Disable support for negating with leading `!` | | `noquantifiers` | `boolean` | `false` | Disable support for regex quantifiers (like `a{1,2}`) and treat them as brace patterns to be expanded. | | [onIgnore](#optionsonIgnore) | `function` | `undefined` | Function to be called on ignored items. | | [onMatch](#optionsonMatch) | `function` | `undefined` | Function to be called on matched items. | | [onResult](#optionsonResult) | `function` | `undefined` | Function to be called on all items, regardless of whether or not they are matched or ignored. | | `posix` | `boolean` | `false` | Support [POSIX character classes](#posix-bracket-expressions) ("posix brackets"). | | `posixSlashes` | `boolean` | `undefined` | Convert all slashes in file paths to forward slashes. This does not convert slashes in the glob pattern itself | | `prepend` | `string` | `undefined` | String to prepend to the generated regex used for matching. | | `regex` | `boolean` | `false` | Use regular expression rules for `+` (instead of matching literal `+`), and for stars that follow closing parentheses or brackets (as in `)*` and `]*`). | | `strictBrackets` | `boolean` | `undefined` | Throw an error if brackets, braces, or parens are imbalanced. | | `strictSlashes` | `boolean` | `undefined` | When true, picomatch won't match trailing slashes with single stars. | | `unescape` | `boolean` | `undefined` | Remove preceding backslashes from escaped glob characters before creating the regular expression to perform matches. | | `unixify` | `boolean` | `undefined` | Alias for `posixSlashes`, for backwards compatitibility. | ## Options Examples ### options.basename Allow glob patterns without slashes to match a file path based on its basename. Same behavior as [minimatch](https://github.com/isaacs/minimatch) option `matchBase`. **Type**: `Boolean` **Default**: `false` **Example** ```js micromatch(['a/b.js', 'a/c.md'], '*.js'); //=> [] micromatch(['a/b.js', 'a/c.md'], '*.js', { basename: true }); //=> ['a/b.js'] ``` ### options.bash Enabled by default, this option enforces bash-like behavior with stars immediately following a bracket expression. Bash bracket expressions are similar to regex character classes, but unlike regex, a star following a bracket expression **does not repeat the bracketed characters**. Instead, the star is treated the same as any other star. **Type**: `Boolean` **Default**: `true` **Example** ```js const files = ['abc', 'ajz']; console.log(micromatch(files, '[a-c]*')); //=> ['abc', 'ajz'] console.log(micromatch(files, '[a-c]*', { bash: false })); ``` ### options.expandRange **Type**: `function` **Default**: `undefined` Custom function for expanding ranges in brace patterns. The [fill-range](https://github.com/jonschlinkert/fill-range) library is ideal for this purpose, or you can use custom code to do whatever you need. **Example** The following example shows how to create a glob that matches a numeric folder name between `01` and `25`, with leading zeros. 
```js const fill = require('fill-range'); const regex = micromatch.makeRe('foo/{01..25}/bar', { expandRange(a, b) { return `(${fill(a, b, { toRegex: true })})`; } }); console.log(regex) //=> /^(?:foo\/((?:0[1-9]|1[0-9]|2[0-5]))\/bar)$/ console.log(regex.test('foo/00/bar')) // false console.log(regex.test('foo/01/bar')) // true console.log(regex.test('foo/10/bar')) // true console.log(regex.test('foo/22/bar')) // true console.log(regex.test('foo/25/bar')) // true console.log(regex.test('foo/26/bar')) // false ``` ### options.format **Type**: `function` **Default**: `undefined` Custom function for formatting strings before they're matched. **Example** ```js // strip leading './' from strings const format = str => str.replace(/^\.\//, ''); const isMatch = picomatch('foo/*.js', { format }); console.log(isMatch('./foo/bar.js')) //=> true ``` ### options.ignore String or array of glob patterns to match files to ignore. **Type**: `String|Array` **Default**: `undefined` ```js const isMatch = micromatch.matcher('*', { ignore: 'f*' }); console.log(isMatch('foo')) //=> false console.log(isMatch('bar')) //=> true console.log(isMatch('baz')) //=> true ``` ### options.matchBase Alias for [options.basename](#options-basename). ### options.noextglob Disable extglob support, so that [extglobs](#extglobs) are regarded as literal characters. **Type**: `Boolean` **Default**: `undefined` **Examples** ```js console.log(micromatch(['a/z', 'a/b', 'a/!(z)'], 'a/!(z)')); //=> ['a/b', 'a/!(z)'] console.log(micromatch(['a/z', 'a/b', 'a/!(z)'], 'a/!(z)', { noextglob: true })); //=> ['a/!(z)'] (matches only as literal characters) ``` ### options.nonegate Disallow negation (`!`) patterns, and treat leading `!` as a literal character to match. **Type**: `Boolean` **Default**: `undefined` ### options.noglobstar Disable matching with globstars (`**`). **Type**: `Boolean` **Default**: `undefined` ```js micromatch(['a/b', 'a/b/c', 'a/b/c/d'], 'a/**'); //=> ['a/b', 'a/b/c', 'a/b/c/d'] micromatch(['a/b', 'a/b/c', 'a/b/c/d'], 'a/**', {noglobstar: true}); //=> ['a/b'] ``` ### options.nonull Alias for [options.nullglob](#options-nullglob). ### options.nullglob If `true`, when no matches are found the actual (arrayified) glob pattern is returned instead of an empty array. Same behavior as [minimatch](https://github.com/isaacs/minimatch) option `nonull`. **Type**: `Boolean` **Default**: `undefined` ### options.onIgnore ```js const onIgnore = ({ glob, regex, input, output }) => { console.log({ glob, regex, input, output }); // { glob: '*', regex: /^(?:(?!\.)(?=.)[^\/]*?\/?)$/, input: 'foo', output: 'foo' } }; const isMatch = micromatch.matcher('*', { onIgnore, ignore: 'f*' }); isMatch('foo'); isMatch('bar'); isMatch('baz'); ``` ### options.onMatch ```js const onMatch = ({ glob, regex, input, output }) => { console.log({ input, output }); // { input: 'some\\path', output: 'some/path' } // { input: 'some\\path', output: 'some/path' } // { input: 'some\\path', output: 'some/path' } }; const isMatch = micromatch.matcher('**', { onMatch, posixSlashes: true }); isMatch('some\\path'); isMatch('some\\path'); isMatch('some\\path'); ``` ### options.onResult ```js const onResult = ({ glob, regex, input, output }) => { console.log({ glob, regex, input, output }); }; const isMatch = micromatch('*', { onResult, ignore: 'f*' }); isMatch('foo'); isMatch('bar'); isMatch('baz'); ``` ### options.posixSlashes Convert path separators on returned files to posix/unix-style forward slashes. Aliased as `unixify` for backwards compatibility. 
**Type**: `Boolean`

**Default**: `true` on windows, `false` everywhere else.

**Example**

```js
console.log(micromatch.match(['a\\b\\c'], 'a/**'));
//=> ['a/b/c']

console.log(micromatch.match(['a\\b\\c'], 'a/**', { posixSlashes: false }));
//=> ['a\\b\\c']
```

### options.unescape

Remove backslashes from escaped glob characters before creating the regular expression to perform matches.

**Type**: `Boolean`

**Default**: `undefined`

**Example**

In this example we want to match a literal `*`:

```js
console.log(micromatch.match(['abc', 'a\\*c'], 'a\\*c'));
//=> ['a\\*c']

console.log(micromatch.match(['abc', 'a\\*c'], 'a\\*c', { unescape: true }));
//=> ['a*c']
```

<br>
<br>

## Extended globbing

Micromatch supports the following extended globbing features.

### Extglobs

Extended globbing, as described by the bash man page:

| **pattern** | **regex equivalent** | **description** |
| --- | --- | --- |
| `?(pattern)` | `(pattern)?` | Matches zero or one occurrence of the given patterns |
| `*(pattern)` | `(pattern)*` | Matches zero or more occurrences of the given patterns |
| `+(pattern)` | `(pattern)+` | Matches one or more occurrences of the given patterns |
| `@(pattern)` | `(pattern)` <sup>*</sup> | Matches one of the given patterns |
| `!(pattern)` | N/A (equivalent regex is much more complicated) | Matches anything except one of the given patterns |

<sup><strong>*</strong></sup> Note that `@` isn't a regex character.

### Braces

Brace patterns can be used to match specific ranges or sets of characters.

**Example**

The pattern `{f,b}*/{1..3}/{b,q}*` would match any of the following strings:

```
foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux
```

Visit [braces](https://github.com/micromatch/braces) to see the full range of features and options related to brace expansion, or to create brace matching or expansion related issues.

### Regex character classes

Given the list: `['a.js', 'b.js', 'c.js', 'd.js', 'E.js']`:

* `[ac].js`: matches both `a` and `c`, returning `['a.js', 'c.js']`
* `[b-d].js`: matches from `b` to `d`, returning `['b.js', 'c.js', 'd.js']`
* `[A-Z].js`: matches an uppercase letter, returning `['E.js']`

Learn about [regex character classes](http://www.regular-expressions.info/charclass.html).

### Regex groups

Given `['a.js', 'b.js', 'c.js', 'd.js', 'E.js']`:

* `(a|c).js`: would match either `a` or `c`, returning `['a.js', 'c.js']`
* `(b|d).js`: would match either `b` or `d`, returning `['b.js', 'd.js']`
* `(b|[A-Z]).js`: would match either `b` or an uppercase letter, returning `['b.js', 'E.js']`

As with regex, parens can be nested, so patterns like `((a|b)|c)/b` will work, although brace expansion might be friendlier to use, depending on preference.

### POSIX bracket expressions

POSIX brackets are intended to be more user-friendly than regex character classes. This of course is in the eye of the beholder.

**Example**

```js
console.log(micromatch.isMatch('a1', '[[:alpha:][:digit:]]')) //=> true
console.log(micromatch.isMatch('a1', '[[:alpha:][:alpha:]]')) //=> false
```

***

## Notes

### Bash 4.3 parity

Whenever possible matching behavior is based on the behavior of Bash 4.3, which is mostly consistent with minimatch. However, it's surprising how many edge cases and rabbit holes there are with glob matching, and since there is no real glob specification, and micromatch is more accurate than both Bash and minimatch, there are cases where best-guesses were made for behavior. In a few cases where Bash had no answers, we used wildmatch (used by git) as a fallback.
### Backslashes There is an important, notable difference between minimatch and micromatch _in regards to how backslashes are handled_ in glob patterns. * Micromatch exclusively and explicitly reserves backslashes for escaping characters in a glob pattern, even on windows, which is consistent with bash behavior. _More importantly, unescaping globs can result in unsafe regular expressions_. * Minimatch converts all backslashes to forward slashes, which means you can't use backslashes to escape any characters in your glob patterns. We made this decision for micromatch for a couple of reasons: * Consistency with bash conventions. * Glob patterns are not filepaths. They are a type of [regular language](https://en.wikipedia.org/wiki/Regular_language) that is converted to a JavaScript regular expression. Thus, when forward slashes are defined in a glob pattern, the resulting regular expression will match windows or POSIX path separators just fine. **A note about joining paths to globs** Note that when you pass something like `path.join('foo', '*')` to micromatch, you are creating a filepath and expecting it to still work as a glob pattern. This causes problems on windows, since the `path.sep` is `\\`. In other words, since `\\` is reserved as an escape character in globs, on windows `path.join('foo', '*')` would result in `foo\\*`, which tells micromatch to match `*` as a literal character. This is the same behavior as bash. To solve this, you might be inspired to do something like `'foo\\*'.replace(/\\/g, '/')`, but this causes another, potentially much more serious, problem. ## Benchmarks ### Running benchmarks Install dependencies for running benchmarks: ```sh $ cd bench && npm install ``` Run the benchmarks: ```sh $ npm run bench ``` ### Latest results As of March 24, 2022 (longer bars are better): ```sh # .makeRe star micromatch x 2,232,802 ops/sec ±2.34% (89 runs sampled)) minimatch x 781,018 ops/sec ±6.74% (92 runs sampled)) # .makeRe star; dot=true micromatch x 1,863,453 ops/sec ±0.74% (93 runs sampled) minimatch x 723,105 ops/sec ±0.75% (93 runs sampled) # .makeRe globstar micromatch x 1,624,179 ops/sec ±2.22% (91 runs sampled) minimatch x 1,117,230 ops/sec ±2.78% (86 runs sampled)) # .makeRe globstars micromatch x 1,658,642 ops/sec ±0.86% (92 runs sampled) minimatch x 741,224 ops/sec ±1.24% (89 runs sampled)) # .makeRe with leading star micromatch x 1,525,014 ops/sec ±1.63% (90 runs sampled) minimatch x 561,074 ops/sec ±3.07% (89 runs sampled) # .makeRe - braces micromatch x 172,478 ops/sec ±2.37% (78 runs sampled) minimatch x 96,087 ops/sec ±2.34% (88 runs sampled))) # .makeRe braces - range (expanded) micromatch x 26,973 ops/sec ±0.84% (89 runs sampled) minimatch x 3,023 ops/sec ±0.99% (90 runs sampled)) # .makeRe braces - range (compiled) micromatch x 152,892 ops/sec ±1.67% (83 runs sampled) minimatch x 992 ops/sec ±3.50% (89 runs sampled)d)) # .makeRe braces - nested ranges (expanded) micromatch x 15,816 ops/sec ±13.05% (80 runs sampled) minimatch x 2,953 ops/sec ±1.64% (91 runs sampled) # .makeRe braces - nested ranges (compiled) micromatch x 110,881 ops/sec ±1.85% (82 runs sampled) minimatch x 1,008 ops/sec ±1.51% (91 runs sampled) # .makeRe braces - set (compiled) micromatch x 134,930 ops/sec ±3.54% (63 runs sampled)) minimatch x 43,242 ops/sec ±0.60% (93 runs sampled) # .makeRe braces - nested sets (compiled) micromatch x 94,455 ops/sec ±1.74% (69 runs sampled)) minimatch x 27,720 ops/sec ±1.84% (93 runs sampled)) ``` ## Contributing All contributions are welcome! 
Please read [the contributing guide](.github/contributing.md) to get started. **Bug reports** Please create an issue if you encounter a bug or matching behavior that doesn't seem correct. If you find a matching-related issue, please: * [research existing issues first](../../issues) (open and closed) * visit the [GNU Bash documentation](https://www.gnu.org/software/bash/manual/) to see how Bash deals with the pattern * visit the [minimatch](https://github.com/isaacs/minimatch) documentation to cross-check expected behavior in node.js * if all else fails, since there is no real specification for globs we will probably need to discuss expected behavior and decide how to resolve it. which means any detail you can provide to help with this discussion would be greatly appreciated. **Platform issues** It's important to us that micromatch work consistently on all platforms. If you encounter any platform-specific matching or path related issues, please let us know (pull requests are also greatly appreciated). ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). Please read the [contributing guide](.github/contributing.md) for advice on opening issues, pull requests, and coding standards. </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [braces](https://www.npmjs.com/package/braces): Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support… [more](https://github.com/micromatch/braces) | [homepage](https://github.com/micromatch/braces "Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support for the Bash 4.3 braces specification, without sacrificing speed.") * [expand-brackets](https://www.npmjs.com/package/expand-brackets): Expand POSIX bracket expressions (character classes) in glob patterns. | [homepage](https://github.com/micromatch/expand-brackets "Expand POSIX bracket expressions (character classes) in glob patterns.") * [extglob](https://www.npmjs.com/package/extglob): Extended glob support for JavaScript. Adds (almost) the expressive power of regular expressions to glob… [more](https://github.com/micromatch/extglob) | [homepage](https://github.com/micromatch/extglob "Extended glob support for JavaScript. 
Adds (almost) the expressive power of regular expressions to glob patterns.") * [fill-range](https://www.npmjs.com/package/fill-range): Fill in a range of numbers or letters, optionally passing an increment or `step` to… [more](https://github.com/jonschlinkert/fill-range) | [homepage](https://github.com/jonschlinkert/fill-range "Fill in a range of numbers or letters, optionally passing an increment or `step` to use, or create a regex-compatible range with `options.toRegex`") * [nanomatch](https://www.npmjs.com/package/nanomatch): Fast, minimal glob matcher for node.js. Similar to micromatch, minimatch and multimatch, but complete Bash… [more](https://github.com/micromatch/nanomatch) | [homepage](https://github.com/micromatch/nanomatch "Fast, minimal glob matcher for node.js. Similar to micromatch, minimatch and multimatch, but complete Bash 4.3 wildcard support only (no support for exglobs, posix brackets or braces)") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 512 | [jonschlinkert](https://github.com/jonschlinkert) | | 12 | [es128](https://github.com/es128) | | 9 | [danez](https://github.com/danez) | | 8 | [doowb](https://github.com/doowb) | | 6 | [paulmillr](https://github.com/paulmillr) | | 5 | [mrmlnc](https://github.com/mrmlnc) | | 3 | [DrPizza](https://github.com/DrPizza) | | 2 | [TrySound](https://github.com/TrySound) | | 2 | [mceIdo](https://github.com/mceIdo) | | 2 | [Glazy](https://github.com/Glazy) | | 2 | [MartinKolarik](https://github.com/MartinKolarik) | | 2 | [antonyk](https://github.com/antonyk) | | 2 | [Tvrqvoise](https://github.com/Tvrqvoise) | | 1 | [amilajack](https://github.com/amilajack) | | 1 | [Cslove](https://github.com/Cslove) | | 1 | [devongovett](https://github.com/devongovett) | | 1 | [DianeLooney](https://github.com/DianeLooney) | | 1 | [UltCombo](https://github.com/UltCombo) | | 1 | [frangio](https://github.com/frangio) | | 1 | [joyceerhl](https://github.com/joyceerhl) | | 1 | [juszczykjakub](https://github.com/juszczykjakub) | | 1 | [muescha](https://github.com/muescha) | | 1 | [sebdeckers](https://github.com/sebdeckers) | | 1 | [tomByrer](https://github.com/tomByrer) | | 1 | [fidian](https://github.com/fidian) | | 1 | [curbengh](https://github.com/curbengh) | | 1 | [simlu](https://github.com/simlu) | | 1 | [wtgtybhertgeghgtwtg](https://github.com/wtgtybhertgeghgtwtg) | | 1 | [yvele](https://github.com/yvele) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2022, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). 
*** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on March 24, 2022._ node-fetch ========== [![npm version][npm-image]][npm-url] [![build status][travis-image]][travis-url] [![coverage status][codecov-image]][codecov-url] [![install size][install-size-image]][install-size-url] [![Discord][discord-image]][discord-url] A light-weight module that brings `window.fetch` to Node.js (We are looking for [v2 maintainers and collaborators](https://github.com/bitinn/node-fetch/issues/567)) [![Backers][opencollective-image]][opencollective-url] <!-- TOC --> - [Motivation](#motivation) - [Features](#features) - [Difference from client-side fetch](#difference-from-client-side-fetch) - [Installation](#installation) - [Loading and configuring the module](#loading-and-configuring-the-module) - [Common Usage](#common-usage) - [Plain text or HTML](#plain-text-or-html) - [JSON](#json) - [Simple Post](#simple-post) - [Post with JSON](#post-with-json) - [Post with form parameters](#post-with-form-parameters) - [Handling exceptions](#handling-exceptions) - [Handling client and server errors](#handling-client-and-server-errors) - [Advanced Usage](#advanced-usage) - [Streams](#streams) - [Buffer](#buffer) - [Accessing Headers and other Meta data](#accessing-headers-and-other-meta-data) - [Extract Set-Cookie Header](#extract-set-cookie-header) - [Post data using a file stream](#post-data-using-a-file-stream) - [Post with form-data (detect multipart)](#post-with-form-data-detect-multipart) - [Request cancellation with AbortSignal](#request-cancellation-with-abortsignal) - [API](#api) - [fetch(url[, options])](#fetchurl-options) - [Options](#options) - [Class: Request](#class-request) - [Class: Response](#class-response) - [Class: Headers](#class-headers) - [Interface: Body](#interface-body) - [Class: FetchError](#class-fetcherror) - [License](#license) - [Acknowledgement](#acknowledgement) <!-- /TOC --> ## Motivation Instead of implementing `XMLHttpRequest` in Node.js to run browser-specific [Fetch polyfill](https://github.com/github/fetch), why not go from native `http` to `fetch` API directly? Hence, `node-fetch`, minimal code for a `window.fetch` compatible API on Node.js runtime. See Matt Andrews' [isomorphic-fetch](https://github.com/matthew-andrews/isomorphic-fetch) or Leonardo Quixada's [cross-fetch](https://github.com/lquixada/cross-fetch) for isomorphic usage (exports `node-fetch` for server-side, `whatwg-fetch` for client-side). ## Features - Stay consistent with `window.fetch` API. - Make conscious trade-off when following [WHATWG fetch spec][whatwg-fetch] and [stream spec](https://streams.spec.whatwg.org/) implementation details, document known differences. - Use native promise but allow substituting it with [insert your favorite promise library]. - Use native Node streams for body on both request and response. - Decode content encoding (gzip/deflate) properly and convert string output (such as `res.text()` and `res.json()`) to UTF-8 automatically. - Useful extensions such as timeout, redirect limit, response size limit, [explicit errors](ERROR-HANDLING.md) for troubleshooting. ## Difference from client-side fetch - See [Known Differences](LIMITS.md) for details. - If you happen to use a missing feature that `window.fetch` offers, feel free to open an issue. - Pull requests are welcomed too! 
## Installation Current stable release (`2.x`) ```sh $ npm install node-fetch ``` ## Loading and configuring the module We suggest you load the module via `require` until the stabilization of ES modules in node: ```js const fetch = require('node-fetch'); ``` If you are using a Promise library other than native, set it through `fetch.Promise`: ```js const Bluebird = require('bluebird'); fetch.Promise = Bluebird; ``` ## Common Usage NOTE: The documentation below is up-to-date with `2.x` releases; see the [`1.x` readme](https://github.com/bitinn/node-fetch/blob/1.x/README.md), [changelog](https://github.com/bitinn/node-fetch/blob/1.x/CHANGELOG.md) and [2.x upgrade guide](UPGRADE-GUIDE.md) for the differences. #### Plain text or HTML ```js fetch('https://github.com/') .then(res => res.text()) .then(body => console.log(body)); ``` #### JSON ```js fetch('https://api.github.com/users/github') .then(res => res.json()) .then(json => console.log(json)); ``` #### Simple Post ```js fetch('https://httpbin.org/post', { method: 'POST', body: 'a=1' }) .then(res => res.json()) // expecting a json response .then(json => console.log(json)); ``` #### Post with JSON ```js const body = { a: 1 }; fetch('https://httpbin.org/post', { method: 'post', body: JSON.stringify(body), headers: { 'Content-Type': 'application/json' }, }) .then(res => res.json()) .then(json => console.log(json)); ``` #### Post with form parameters `URLSearchParams` is available in Node.js as of v7.5.0. See [official documentation](https://nodejs.org/api/url.html#url_class_urlsearchparams) for more usage methods. NOTE: The `Content-Type` header is only set automatically to `x-www-form-urlencoded` when an instance of `URLSearchParams` is given as such: ```js const { URLSearchParams } = require('url'); const params = new URLSearchParams(); params.append('a', 1); fetch('https://httpbin.org/post', { method: 'POST', body: params }) .then(res => res.json()) .then(json => console.log(json)); ``` #### Handling exceptions NOTE: 3xx-5xx responses are *NOT* exceptions and should be handled in `then()`; see the next section for more information. Adding a catch to the fetch promise chain will catch *all* exceptions, such as errors originating from node core libraries, network errors and operational errors, which are instances of FetchError. See the [error handling document](ERROR-HANDLING.md) for more details. ```js fetch('https://domain.invalid/') .catch(err => console.error(err)); ``` #### Handling client and server errors It is common to create a helper function to check that the response contains no client (4xx) or server (5xx) error responses: ```js function checkStatus(res) { if (res.ok) { // res.status >= 200 && res.status < 300 return res; } else { throw MyCustomError(res.statusText); } } fetch('https://httpbin.org/status/400') .then(checkStatus) .then(res => console.log('will not get here...')) ``` ## Advanced Usage #### Streams The "Node.js way" is to use streams when possible: ```js fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png') .then(res => { const dest = fs.createWriteStream('./octocat.png'); res.body.pipe(dest); }); ``` #### Buffer If you prefer to cache binary data in full, use buffer(). (NOTE: `buffer()` is a `node-fetch`-only API) ```js const fileType = require('file-type'); fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png') .then(res => res.buffer()) .then(buffer => fileType(buffer)) .then(type => { /* ... 
*/ }); ``` #### Accessing Headers and other Meta data ```js fetch('https://github.com/') .then(res => { console.log(res.ok); console.log(res.status); console.log(res.statusText); console.log(res.headers.raw()); console.log(res.headers.get('content-type')); }); ``` #### Extract Set-Cookie Header Unlike browsers, you can access raw `Set-Cookie` headers manually using `Headers.raw()`. This is a `node-fetch` only API. ```js fetch(url).then(res => { // returns an array of values, instead of a string of comma-separated values console.log(res.headers.raw()['set-cookie']); }); ``` #### Post data using a file stream ```js const { createReadStream } = require('fs'); const stream = createReadStream('input.txt'); fetch('https://httpbin.org/post', { method: 'POST', body: stream }) .then(res => res.json()) .then(json => console.log(json)); ``` #### Post with form-data (detect multipart) ```js const FormData = require('form-data'); const form = new FormData(); form.append('a', 1); fetch('https://httpbin.org/post', { method: 'POST', body: form }) .then(res => res.json()) .then(json => console.log(json)); // OR, using custom headers // NOTE: getHeaders() is non-standard API const form = new FormData(); form.append('a', 1); const options = { method: 'POST', body: form, headers: form.getHeaders() } fetch('https://httpbin.org/post', options) .then(res => res.json()) .then(json => console.log(json)); ``` #### Request cancellation with AbortSignal > NOTE: You may cancel streamed requests only on Node >= v8.0.0 You may cancel requests with `AbortController`. A suggested implementation is [`abort-controller`](https://www.npmjs.com/package/abort-controller). An example of timing out a request after 150ms could be achieved as the following: ```js import AbortController from 'abort-controller'; const controller = new AbortController(); const timeout = setTimeout( () => { controller.abort(); }, 150, ); fetch(url, { signal: controller.signal }) .then(res => res.json()) .then( data => { useData(data) }, err => { if (err.name === 'AbortError') { // request was aborted } }, ) .finally(() => { clearTimeout(timeout); }); ``` See [test cases](https://github.com/bitinn/node-fetch/blob/master/test/test.js) for more examples. ## API ### fetch(url[, options]) - `url` A string representing the URL for fetching - `options` [Options](#fetch-options) for the HTTP(S) request - Returns: <code>Promise&lt;[Response](#class-response)&gt;</code> Perform an HTTP(S) fetch. `url` should be an absolute url, such as `https://example.com/`. A path-relative URL (`/file/under/root`) or protocol-relative URL (`//can-be-http-or-https.com/`) will result in a rejected `Promise`. <a id="fetch-options"></a> ### Options The default values are shown after each option key. ```js { // These properties are part of the Fetch Standard method: 'GET', headers: {}, // request headers. format is the identical to that accepted by the Headers constructor (see below) body: null, // request body. can be null, a string, a Buffer, a Blob, or a Node.js Readable stream redirect: 'follow', // set to `manual` to extract redirect headers, `error` to reject redirect signal: null, // pass an instance of AbortSignal to optionally abort requests // The following properties are node-fetch extensions follow: 20, // maximum redirect count. 0 to not follow redirect timeout: 0, // req/res timeout in ms, it resets on redirect. 0 to disable (OS limit applies). Signal is recommended instead. compress: true, // support gzip/deflate content encoding. 
false to disable size: 0, // maximum response body size in bytes. 0 to disable agent: null // http(s).Agent instance or function that returns an instance (see below) } ``` ##### Default Headers If no values are set, the following request headers will be sent automatically: Header | Value ------------------- | -------------------------------------------------------- `Accept-Encoding` | `gzip,deflate` _(when `options.compress === true`)_ `Accept` | `*/*` `Connection` | `close` _(when no `options.agent` is present)_ `Content-Length` | _(automatically calculated, if possible)_ `Transfer-Encoding` | `chunked` _(when `req.body` is a stream)_ `User-Agent` | `node-fetch/1.0 (+https://github.com/bitinn/node-fetch)` Note: when `body` is a `Stream`, `Content-Length` is not set automatically. ##### Custom Agent The `agent` option allows you to specify networking related options which are out of the scope of Fetch, including and not limited to the following: - Support self-signed certificate - Use only IPv4 or IPv6 - Custom DNS Lookup See [`http.Agent`](https://nodejs.org/api/http.html#http_new_agent_options) for more information. In addition, the `agent` option accepts a function that returns `http`(s)`.Agent` instance given current [URL](https://nodejs.org/api/url.html), this is useful during a redirection chain across HTTP and HTTPS protocol. ```js const httpAgent = new http.Agent({ keepAlive: true }); const httpsAgent = new https.Agent({ keepAlive: true }); const options = { agent: function (_parsedURL) { if (_parsedURL.protocol == 'http:') { return httpAgent; } else { return httpsAgent; } } } ``` <a id="class-request"></a> ### Class: Request An HTTP(S) request containing information about URL, method, headers, and the body. This class implements the [Body](#iface-body) interface. Due to the nature of Node.js, the following properties are not implemented at this moment: - `type` - `destination` - `referrer` - `referrerPolicy` - `mode` - `credentials` - `cache` - `integrity` - `keepalive` The following node-fetch extension properties are provided: - `follow` - `compress` - `counter` - `agent` See [options](#fetch-options) for exact meaning of these extensions. #### new Request(input[, options]) <small>*(spec-compliant)*</small> - `input` A string representing a URL, or another `Request` (which will be cloned) - `options` [Options][#fetch-options] for the HTTP(S) request Constructs a new `Request` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Request/Request). In most cases, directly `fetch(url, options)` is simpler than creating a `Request` object. <a id="class-response"></a> ### Class: Response An HTTP(S) response. This class implements the [Body](#iface-body) interface. The following properties are not implemented in node-fetch at this moment: - `Response.error()` - `Response.redirect()` - `type` - `trailer` #### new Response([body[, options]]) <small>*(spec-compliant)*</small> - `body` A `String` or [`Readable` stream][node-readable] - `options` A [`ResponseInit`][response-init] options dictionary Constructs a new `Response` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Response/Response). Because Node.js does not implement service workers (for which this class was designed), one rarely has to construct a `Response` directly. #### response.ok <small>*(spec-compliant)*</small> Convenience property representing if the request ended normally. 
Will evaluate to true if the response status was greater than or equal to 200 but smaller than 300. #### response.redirected <small>*(spec-compliant)*</small> Convenience property representing if the request has been redirected at least once. Will evaluate to true if the internal redirect counter is greater than 0. <a id="class-headers"></a> ### Class: Headers This class allows manipulating and iterating over a set of HTTP headers. All methods specified in the [Fetch Standard][whatwg-fetch] are implemented. #### new Headers([init]) <small>*(spec-compliant)*</small> - `init` Optional argument to pre-fill the `Headers` object Construct a new `Headers` object. `init` can be either `null`, a `Headers` object, an key-value map object or any iterable object. ```js // Example adapted from https://fetch.spec.whatwg.org/#example-headers-class const meta = { 'Content-Type': 'text/xml', 'Breaking-Bad': '<3' }; const headers = new Headers(meta); // The above is equivalent to const meta = [ [ 'Content-Type', 'text/xml' ], [ 'Breaking-Bad', '<3' ] ]; const headers = new Headers(meta); // You can in fact use any iterable objects, like a Map or even another Headers const meta = new Map(); meta.set('Content-Type', 'text/xml'); meta.set('Breaking-Bad', '<3'); const headers = new Headers(meta); const copyOfHeaders = new Headers(headers); ``` <a id="iface-body"></a> ### Interface: Body `Body` is an abstract interface with methods that are applicable to both `Request` and `Response` classes. The following methods are not yet implemented in node-fetch at this moment: - `formData()` #### body.body <small>*(deviation from spec)*</small> * Node.js [`Readable` stream][node-readable] Data are encapsulated in the `Body` object. Note that while the [Fetch Standard][whatwg-fetch] requires the property to always be a WHATWG `ReadableStream`, in node-fetch it is a Node.js [`Readable` stream][node-readable]. #### body.bodyUsed <small>*(spec-compliant)*</small> * `Boolean` A boolean property for if this body has been consumed. Per the specs, a consumed body cannot be used again. #### body.arrayBuffer() #### body.blob() #### body.json() #### body.text() <small>*(spec-compliant)*</small> * Returns: <code>Promise</code> Consume the body and return a promise that will resolve to one of these formats. #### body.buffer() <small>*(node-fetch extension)*</small> * Returns: <code>Promise&lt;Buffer&gt;</code> Consume the body and return a promise that will resolve to a Buffer. #### body.textConverted() <small>*(node-fetch extension)*</small> * Returns: <code>Promise&lt;String&gt;</code> Identical to `body.text()`, except instead of always converting to UTF-8, encoding sniffing will be performed and text converted to UTF-8 if possible. (This API requires an optional dependency of the npm package [encoding](https://www.npmjs.com/package/encoding), which you need to install manually. `webpack` users may see [a warning message](https://github.com/bitinn/node-fetch/issues/412#issuecomment-379007792) due to this optional dependency.) <a id="class-fetcherror"></a> ### Class: FetchError <small>*(node-fetch extension)*</small> An operational error in the fetching process. See [ERROR-HANDLING.md][] for more info. <a id="class-aborterror"></a> ### Class: AbortError <small>*(node-fetch extension)*</small> An Error thrown when the request is aborted in response to an `AbortSignal`'s `abort` event. It has a `name` property of `AbortError`. See [ERROR-HANDLING.MD][] for more info. 
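Putting the two error classes above together, here is a minimal sketch (not taken from the upstream docs; the URL is a placeholder and the `err.type` values mentioned in the comments are only examples) of distinguishing an aborted request from an operational `FetchError`:

```js
const fetch = require('node-fetch');
const AbortController = require('abort-controller');

const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 150);

fetch('https://example.invalid/data.json', { signal: controller.signal })
	.then(res => res.json())
	.catch(err => {
		if (err.name === 'AbortError') {
			// cancelled via the AbortSignal — see Class: AbortError above
			console.log('request was aborted');
		} else if (err instanceof fetch.FetchError) {
			// operational failure (e.g. DNS, socket, timeout, invalid JSON body) — see Class: FetchError above
			console.log('fetch failed:', err.type, err.message);
		} else {
			throw err; // anything else is likely a programmer error, so rethrow
		}
	})
	.finally(() => clearTimeout(timer));
```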
## Acknowledgement Thanks to [github/fetch](https://github.com/github/fetch) for providing a solid implementation reference. `node-fetch` v1 was maintained by [@bitinn](https://github.com/bitinn); v2 was maintained by [@TimothyGu](https://github.com/timothygu), [@bitinn](https://github.com/bitinn) and [@jimmywarting](https://github.com/jimmywarting); v2 readme is written by [@jkantr](https://github.com/jkantr). ## License MIT [npm-image]: https://flat.badgen.net/npm/v/node-fetch [npm-url]: https://www.npmjs.com/package/node-fetch [travis-image]: https://flat.badgen.net/travis/bitinn/node-fetch [travis-url]: https://travis-ci.org/bitinn/node-fetch [codecov-image]: https://flat.badgen.net/codecov/c/github/bitinn/node-fetch/master [codecov-url]: https://codecov.io/gh/bitinn/node-fetch [install-size-image]: https://flat.badgen.net/packagephobia/install/node-fetch [install-size-url]: https://packagephobia.now.sh/result?p=node-fetch [discord-image]: https://img.shields.io/discord/619915844268326952?color=%237289DA&label=Discord&style=flat-square [discord-url]: https://discord.gg/Zxbndcm [opencollective-image]: https://opencollective.com/node-fetch/backers.svg [opencollective-url]: https://opencollective.com/node-fetch [whatwg-fetch]: https://fetch.spec.whatwg.org/ [response-init]: https://fetch.spec.whatwg.org/#responseinit [node-readable]: https://nodejs.org/api/stream.html#stream_readable_streams [mdn-headers]: https://developer.mozilla.org/en-US/docs/Web/API/Headers [LIMITS.md]: https://github.com/bitinn/node-fetch/blob/master/LIMITS.md [ERROR-HANDLING.md]: https://github.com/bitinn/node-fetch/blob/master/ERROR-HANDLING.md [UPGRADE-GUIDE.md]: https://github.com/bitinn/node-fetch/blob/master/UPGRADE-GUIDE.md # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Current Status whatwg-url is currently up to date with the URL spec up to commit [a62223](https://github.com/whatwg/url/commit/a622235308342c9adc7fc2fd1659ff059f7d5e2a). ## API ### The `URL` Constructor The main API is the [`URL`](https://url.spec.whatwg.org/#url) export, which follows the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use this. ### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. 
- [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/browsers.html#serialization-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - [`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - [`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. 
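As a rough sketch of that fix-up pattern (illustrative only; the input URL is arbitrary), a component can be cleared and then repaired in place by re-parsing with a state override:

```js
const { parseURL, basicURLParse, serializeURL } = require('whatwg-url');

// Parse into a URL record, then follow the spec's "set, then basic-URL-parse" pattern.
const urlRecord = parseURL('https://example.com/path?x=1#old');

urlRecord.fragment = '';           // the record is momentarily inconsistent here...
basicURLParse('new-section', {     // ...so re-parse just the fragment to repair it
  url: urlRecord,
  stateOverride: 'fragment'
});

console.log(serializeURL(urlRecord)); // → 'https://example.com/path?x=1#new-section'
```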
You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by the string `"failure"`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ the string `"failure"`. # fastq ![ci][ci-url] [![npm version][npm-badge]][npm-url] [![Dependency Status][david-badge]][david-url] Fast, in memory work queue. Benchmarks (1 million tasks): * setImmediate: 812ms * fastq: 854ms * async.queue: 1298ms * neoAsync.queue: 1249ms Obtained on node 12.16.1, on a dedicated server. If you need zero-overhead series function call, check out [fastseries](http://npm.im/fastseries). For zero-overhead parallel function call, check out [fastparallel](http://npm.im/fastparallel). [![js-standard-style](https://raw.githubusercontent.com/feross/standard/master/badge.png)](https://github.com/feross/standard) * <a href="#install">Installation</a> * <a href="#usage">Usage</a> * <a href="#api">API</a> * <a href="#license">Licence &amp; copyright</a> ## Install `npm i fastq --save` ## Usage (callback API) ```js 'use strict' const queue = require('fastq')(worker, 1) queue.push(42, function (err, result) { if (err) { throw err } console.log('the result is', result) }) function worker (arg, cb) { cb(null, arg * 2) } ``` ## Usage (promise API) ```js const queue = require('fastq').promise(worker, 1) async function worker (arg) { return arg * 2 } async function run () { const result = await queue.push(42) console.log('the result is', result) } run() ``` ### Setting "this" ```js 'use strict' const that = { hello: 'world' } const queue = require('fastq')(that, worker, 1) queue.push(42, function (err, result) { if (err) { throw err } console.log(this) console.log('the result is', result) }) function worker (arg, cb) { console.log(this) cb(null, arg * 2) } ``` ### Using with TypeScript (callback API) ```ts 'use strict' import * as fastq from "fastq"; import type { queue, done } from "fastq"; type Task = { id: number } const q: queue<Task> = fastq(worker, 1) q.push({ id: 42}) function worker (arg: Task, cb: done) { console.log(arg.id) cb(null) } ``` ### Using with TypeScript (promise API) ```ts 'use strict' import * as fastq from "fastq"; import type { queueAsPromised } from "fastq"; type Task = { id: number } const q: queueAsPromised<Task> = fastq.promise(asyncWorker, 1) q.push({ id: 42}).catch((err) => console.error(err)) async function asyncWorker (arg: Task): Promise<void> { // No need for a try-catch block, fastq handles errors automatically console.log(arg.id) } ``` ## API * <a href="#fastqueue"><code>fastqueue()</code></a> * <a href="#push"><code>queue#<b>push()</b></code></a> * <a href="#unshift"><code>queue#<b>unshift()</b></code></a> * <a href="#pause"><code>queue#<b>pause()</b></code></a> * <a href="#resume"><code>queue#<b>resume()</b></code></a> * <a href="#idle"><code>queue#<b>idle()</b></code></a> * <a href="#length"><code>queue#<b>length()</b></code></a> * <a href="#getQueue"><code>queue#<b>getQueue()</b></code></a> * <a href="#kill"><code>queue#<b>kill()</b></code></a> * <a href="#killAndDrain"><code>queue#<b>killAndDrain()</b></code></a> * <a href="#error"><code>queue#<b>error()</b></code></a> * <a href="#concurrency"><code>queue#<b>concurrency</b></code></a> * <a 
href="#drain"><code>queue#<b>drain</b></code></a> * <a href="#empty"><code>queue#<b>empty</b></code></a> * <a href="#saturated"><code>queue#<b>saturated</b></code></a> * <a href="#promise"><code>fastqueue.promise()</code></a> ------------------------------------------------------- <a name="fastqueue"></a> ### fastqueue([that], worker, concurrency) Creates a new queue. Arguments: * `that`, optional context of the `worker` function. * `worker`, worker function, it would be called with `that` as `this`, if that is specified. * `concurrency`, number of concurrent tasks that could be executed in parallel. ------------------------------------------------------- <a name="push"></a> ### queue.push(task, done) Add a task at the end of the queue. `done(err, result)` will be called when the task was processed. ------------------------------------------------------- <a name="unshift"></a> ### queue.unshift(task, done) Add a task at the beginning of the queue. `done(err, result)` will be called when the task was processed. ------------------------------------------------------- <a name="pause"></a> ### queue.pause() Pause the processing of tasks. Currently worked tasks are not stopped. ------------------------------------------------------- <a name="resume"></a> ### queue.resume() Resume the processing of tasks. ------------------------------------------------------- <a name="idle"></a> ### queue.idle() Returns `false` if there are tasks being processed or waiting to be processed. `true` otherwise. ------------------------------------------------------- <a name="length"></a> ### queue.length() Returns the number of tasks waiting to be processed (in the queue). ------------------------------------------------------- <a name="getQueue"></a> ### queue.getQueue() Returns all the tasks be processed (in the queue). Returns empty array when there are no tasks ------------------------------------------------------- <a name="kill"></a> ### queue.kill() Removes all tasks waiting to be processed, and reset `drain` to an empty function. ------------------------------------------------------- <a name="killAndDrain"></a> ### queue.killAndDrain() Same than `kill` but the `drain` function will be called before reset to empty. ------------------------------------------------------- <a name="error"></a> ### queue.error(handler) Set a global error handler. `handler(err, task)` will be called when any of the tasks return an error. ------------------------------------------------------- <a name="concurrency"></a> ### queue.concurrency Property that returns the number of concurrent tasks that could be executed in parallel. It can be altered at runtime. ------------------------------------------------------- <a name="drain"></a> ### queue.drain Function that will be called when the last item from the queue has been processed by a worker. It can be altered at runtime. ------------------------------------------------------- <a name="empty"></a> ### queue.empty Function that will be called when the last item from the queue has been assigned to a worker. It can be altered at runtime. ------------------------------------------------------- <a name="saturated"></a> ### queue.saturated Function that will be called when the queue hits the concurrency limit. It can be altered at runtime. ------------------------------------------------------- <a name="promise"></a> ### fastqueue.promise([that], worker(arg), concurrency) Creates a new queue with `Promise` apis. 
It also offers all the methods and properties of the object returned by [`fastqueue`](#fastqueue) with the modified [`push`](#pushPromise) and [`unshift`](#unshiftPromise) methods. Node v10+ is required to use the promisified version. Arguments: * `that`, optional context of the `worker` function. * `worker`, worker function, it would be called with `that` as `this`, if that is specified. It MUST return a `Promise`. * `concurrency`, number of concurrent tasks that could be executed in parallel. <a name="pushPromise"></a> #### queue.push(task) => Promise Add a task at the end of the queue. The returned `Promise` will be fulfilled (rejected) when the task is completed successfully (unsuccessfully). This promise could be ignored as it will not lead to a `'unhandledRejection'`. <a name="unshiftPromise"></a> #### queue.unshift(task) => Promise Add a task at the beginning of the queue. The returned `Promise` will be fulfilled (rejected) when the task is completed successfully (unsuccessfully). This promise could be ignored as it will not lead to a `'unhandledRejection'`. <a name="drained"></a> #### queue.drained() => Promise Wait for the queue to be drained. The returned `Promise` will be resolved when all tasks in the queue have been processed by a worker. This promise could be ignored as it will not lead to a `'unhandledRejection'`. ## License ISC [ci-url]: https://github.com/mcollina/fastq/workflows/ci/badge.svg [npm-badge]: https://badge.fury.io/js/fastq.svg [npm-url]: https://badge.fury.io/js/fastq [david-badge]: https://david-dm.org/mcollina/fastq.svg [david-url]: https://david-dm.org/mcollina/fastq # near-api-js [![Build Status](https://travis-ci.com/near/near-api-js.svg?branch=master)](https://travis-ci.com/near/near-api-js) [![Gitpod Ready-to-Code](https://img.shields.io/badge/Gitpod-Ready--to--Code-blue?logo=gitpod)](https://gitpod.io/#https://github.com/near/near-api-js) A JavaScript/TypeScript library for development of DApps on the NEAR platform # Documentation [Read the TypeDoc API documentation](https://near.github.io/near-api-js/) --- # Examples ## [Quick Reference](https://github.com/near/near-api-js/blob/master/examples/quick-reference.md) _(Cheat sheet / quick reference)_ ## [Cookbook](https://github.com/near/near-api-js/blob/master/examples/cookbook/README.md) _(Common use cases / more complex examples)_ --- # Contribute to this library 1. Install dependencies yarn 2. Run continuous build with: yarn build -- -w # Publish Prepare `dist` version by running: yarn dist When publishing to npm use [np](https://github.com/sindresorhus/np). --- # Integration Test Start the node by following instructions from [nearcore](https://github.com/nearprotocol/nearcore), then yarn test Tests use sample contract from `near-hello` npm package, see https://github.com/nearprotocol/near-hello # Update error schema Follow next steps: 1. [Change hash for the commit with errors in the nearcore](https://github.com/near/near-api-js/blob/master/fetch_error_schema.js#L8-L9) 2. Fetch new schema: `node fetch_error_schema.js` 3. `yarn build` to update `lib/**.js` files # License This repository is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See [LICENSE](LICENSE) and [LICENSE-APACHE](LICENSE-APACHE) for details. 
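For orientation, a minimal usage sketch of the library itself follows (this is not part of the build instructions above; the network settings, account name, contract name and `get_donations` method are all placeholders, and the positional `viewFunction` signature is assumed):

```js
const { connect, keyStores } = require('near-api-js');

async function main() {
  // Connection settings for the public testnet RPC (values are illustrative).
  const near = await connect({
    networkId: 'testnet',
    nodeUrl: 'https://rpc.testnet.near.org',
    keyStore: new keyStores.InMemoryKeyStore(), // no keys are needed for view-only calls
  });

  // 'example.testnet', 'donation.example.testnet' and 'get_donations' are hypothetical names.
  const account = await near.account('example.testnet');
  const result = await account.viewFunction('donation.example.testnet', 'get_donations', {});
  console.log(result);
}

main().catch(console.error);
```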
# cssesc [![Build status](https://travis-ci.org/mathiasbynens/cssesc.svg?branch=master)](https://travis-ci.org/mathiasbynens/cssesc) [![Code coverage status](https://img.shields.io/codecov/c/github/mathiasbynens/cssesc.svg)](https://codecov.io/gh/mathiasbynens/cssesc) A JavaScript library for escaping CSS strings and identifiers while generating the shortest possible ASCII-only output. This is a JavaScript library for [escaping text for use in CSS strings or identifiers](https://mathiasbynens.be/notes/css-escapes) while generating the shortest possible valid ASCII-only output. [Here’s an online demo.](https://mothereff.in/css-escapes) [A polyfill for the CSSOM `CSS.escape()` method is available in a separate repository.](https://mths.be/cssescape) (In comparison, _cssesc_ is much more powerful.) Feel free to fork if you see possible improvements! ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install cssesc ``` In a browser: ```html <script src="cssesc.js"></script> ``` In [Node.js](https://nodejs.org/): ```js const cssesc = require('cssesc'); ``` In Ruby using [the `ruby-cssesc` wrapper gem](https://github.com/borodean/ruby-cssesc): ```bash gem install ruby-cssesc ``` ```ruby require 'ruby-cssesc' CSSEsc.escape('I ♥ Ruby', is_identifier: true) ``` In Sass using [`sassy-escape`](https://github.com/borodean/sassy-escape): ```bash gem install sassy-escape ``` ```scss body { content: escape('I ♥ Sass', $is-identifier: true); } ``` ## API ### `cssesc(value, options)` This function takes a value and returns an escaped version of the value where any characters that are not printable ASCII symbols are escaped using the shortest possible (but valid) [escape sequences for use in CSS strings or identifiers](https://mathiasbynens.be/notes/css-escapes). ```js cssesc('Ich ♥ Bücher'); // → 'Ich \\2665 B\\FC cher' cssesc('foo 𝌆 bar'); // → 'foo \\1D306 bar' ``` By default, `cssesc` returns a string that can be used as part of a CSS string. If the target is a CSS identifier rather than a CSS string, use the `isIdentifier: true` setting (see below). The optional `options` argument accepts an object with the following options: #### `isIdentifier` The default value for the `isIdentifier` option is `false`. This means that the input text will be escaped for use in a CSS string literal. If you want to use the result as a CSS identifier instead (in a selector, for example), set this option to `true`. ```js cssesc('123a2b'); // → '123a2b' cssesc('123a2b', { 'isIdentifier': true }); // → '\\31 23a2b' ``` #### `quotes` The default value for the `quotes` option is `'single'`. This means that any occurences of `'` in the input text will be escaped as `\'`, so that the output can be used in a CSS string literal wrapped in single quotes. ```js cssesc('Lorem ipsum "dolor" sit \'amet\' etc.'); // → 'Lorem ipsum "dolor" sit \\\'amet\\\' etc.' // → "Lorem ipsum \"dolor\" sit \\'amet\\' etc." cssesc('Lorem ipsum "dolor" sit \'amet\' etc.', { 'quotes': 'single' }); // → 'Lorem ipsum "dolor" sit \\\'amet\\\' etc.' // → "Lorem ipsum \"dolor\" sit \\'amet\\' etc." ``` If you want to use the output as part of a CSS string literal wrapped in double quotes, set the `quotes` option to `'double'`. ```js cssesc('Lorem ipsum "dolor" sit \'amet\' etc.', { 'quotes': 'double' }); // → 'Lorem ipsum \\"dolor\\" sit \'amet\' etc.' // → "Lorem ipsum \\\"dolor\\\" sit 'amet' etc." ``` #### `wrap` The `wrap` option takes a boolean value (`true` or `false`), and defaults to `false` (disabled). 
When enabled, the output will be a valid CSS string literal wrapped in quotes. The type of quotes can be specified through the `quotes` setting. ```js cssesc('Lorem ipsum "dolor" sit \'amet\' etc.', { 'quotes': 'single', 'wrap': true }); // → '\'Lorem ipsum "dolor" sit \\\'amet\\\' etc.\'' // → "\'Lorem ipsum \"dolor\" sit \\\'amet\\\' etc.\'" cssesc('Lorem ipsum "dolor" sit \'amet\' etc.', { 'quotes': 'double', 'wrap': true }); // → '"Lorem ipsum \\"dolor\\" sit \'amet\' etc."' // → "\"Lorem ipsum \\\"dolor\\\" sit \'amet\' etc.\"" ``` #### `escapeEverything` The `escapeEverything` option takes a boolean value (`true` or `false`), and defaults to `false` (disabled). When enabled, all the symbols in the output will be escaped, even printable ASCII symbols. ```js cssesc('lolwat"foo\'bar', { 'escapeEverything': true }); // → '\\6C\\6F\\6C\\77\\61\\74\\"\\66\\6F\\6F\\\'\\62\\61\\72' // → "\\6C\\6F\\6C\\77\\61\\74\\\"\\66\\6F\\6F\\'\\62\\61\\72" ``` #### Overriding the default options globally The global default settings can be overridden by modifying the `css.options` object. This saves you from passing in an `options` object for every call to `encode` if you want to use the non-default setting. ```js // Read the global default setting for `escapeEverything`: cssesc.options.escapeEverything; // → `false` by default // Override the global default setting for `escapeEverything`: cssesc.options.escapeEverything = true; // Using the global default setting for `escapeEverything`, which is now `true`: cssesc('foo © bar ≠ baz 𝌆 qux'); // → '\\66\\6F\\6F\\ \\A9\\ \\62\\61\\72\\ \\2260\\ \\62\\61\\7A\\ \\1D306\\ \\71\\75\\78' ``` ### `cssesc.version` A string representing the semantic version number. ### Using the `cssesc` binary To use the `cssesc` binary in your shell, simply install cssesc globally using npm: ```bash npm install -g cssesc ``` After that you will be able to escape text for use in CSS strings or identifiers from the command line: ```bash $ cssesc 'föo ♥ bår 𝌆 baz' f\F6o \2665 b\E5r \1D306 baz ``` If the output needs to be a CSS identifier rather than part of a string literal, use the `-i`/`--identifier` option: ```bash $ cssesc --identifier 'föo ♥ bår 𝌆 baz' f\F6o\ \2665\ b\E5r\ \1D306\ baz ``` See `cssesc --help` for the full list of options. ## Support This library supports the Node.js and browser versions mentioned in [`.babelrc`](https://github.com/mathiasbynens/cssesc/blob/master/.babelrc). For a version that supports a wider variety of legacy browsers and environments out-of-the-box, [see v0.1.0](https://github.com/mathiasbynens/cssesc/releases/tag/v0.1.0). ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License This library is available under the [MIT](https://mths.be/mit) license. # Ozone - Javascript Class Framework [![Build Status](https://travis-ci.org/inf3rno/o3.png?branch=master)](https://travis-ci.org/inf3rno/o3) The Ozone class framework contains enhanced class support to ease the development of object-oriented javascript applications in an ES5 environment. Another alternative to get a better class support to use ES6 classes and compilers like Babel, Traceur or TypeScript until native ES6 support arrives. 
## Documentation ### Installation ```bash npm install o3 ``` ```bash bower install o3 ``` #### Environment compatibility The framework succeeded the tests on - node v4.2 and v5.x - chrome 51.0 - firefox 47.0 and 48.0 - internet explorer 11.0 - phantomjs 2.1 by the usage of npm scripts under win7 x64. I wasn't able to test the framework by Opera since the Karma launcher is buggy, so I decided not to support Opera. I used [Yadda](https://github.com/acuminous/yadda) to write BDD tests. I used [Karma](https://github.com/karma-runner/karma) with [Browserify](https://github.com/substack/node-browserify) to test the framework in browsers. On pre-ES5 environments there will be bugs in the Class module due to pre-ES5 enumeration and the lack of some ES5 methods, so pre-ES5 environments are not supported. #### Requirements An ES5 capable environment is required with - `Object.create` - ES5 compatible property enumeration: `Object.defineProperty`, `Object.getOwnPropertyDescriptor`, `Object.prototype.hasOwnProperty`, etc. - `Array.prototype.forEach` #### Usage In this documentation I used the framework as follows: ```js var o3 = require("o3"), Class = o3.Class; ``` ### Inheritance #### Inheriting from native classes (from the Error class in these examples) You can extend native classes by calling the Class() function. ```js var UserError = Class(Error, { prototype: { message: "blah", constructor: function UserError() { Error.captureStackTrace(this, this.constructor); } } }); ``` An alternative to call Class.extend() with the Ancestor as the context. The Class() function uses this in the background. ```js var UserError = Class.extend.call(Error, { prototype: { message: "blah", constructor: function UserError() { Error.captureStackTrace(this, this.constructor); } } }); ``` #### Inheriting from custom classes You can use Class.extend() by any other class, not just by native classes. ```js var Ancestor = Class(Object, { prototype: { a: 1, b: 2 } }); var Descendant = Class.extend.call(Ancestor, { prototype: { c: 3 } }); ``` Or you can simply add it as a static method, so you don't have to pass context any time you want to use it. The only drawback, that this static method will be inherited as well. ```js var Ancestor = Class(Object, { extend: Class.extend, prototype: { a: 1, b: 2 } }); var Descendant = Ancestor.extend({ prototype: { c: 3 } }); ``` #### Inheriting from the Class class You can inherit the extend() method and other utility methods from the Class class. Probably this is the simplest solution if you need the Class API and you don't need to inherit from special native classes like Error. ```js var Ancestor = Class.extend({ prototype: { a: 1, b: 2 } }); var Descendant = Ancestor.extend({ prototype: { c: 3 } }); ``` #### Inheritance with clone and merge The static extend() method uses the clone() and merge() utility methods to inherit from the ancestor and add properties from the config. ```js var MyClass = Class.clone.call(Object, function MyClass(){ // ... }); Class.merge.call(MyClass, { prototype: { x: 1, y: 2 } }); ``` Or with utility methods. ```js var MyClass = Class.clone(function MyClass() { // ... }).merge({ prototype: { x: 1, y: 2 } }); ``` #### Inheritance with clone and absorb You can fill in missing properties with the usage of absorb. ```js var MyClass = Class(SomeAncestor, {...}); Class.absorb.call(MyClass, Class); MyClass.merge({...}); ``` For example if you don't have Class methods and your class already has an ancestor, then you can use absorb() to add Class methods. 
#### Abstract classes Using abstract classes with instantiation verification won't be implemented in this lib, however we provide an `abstractMethod`, which you can put to not implemented parts of your abstract class. ```js var AbstractA = Class({ prototype: { doA: function (){ // ... var b = this.getB(); // ... // do something with b // ... }, getB: abstractMethod } }); var AB1 = Class(AbstractA, { prototype: { getB: function (){ return new B1(); } } }); var ab1 = new AB1(); ``` I strongly support the composition over inheritance principle and I think you should use dependency injection instead of abstract classes. ```js var A = Class({ prototype: { init: function (b){ this.b = b; }, doA: function (){ // ... // do something with this.b // ... } } }); var b = new B1(); var ab1 = new A(b); ``` ### Constructors #### Using a custom constructor You can pass your custom constructor as a config option by creating the class. ```js var MyClass = Class(Object, { prototype: { constructor: function () { // ... } } }); ``` #### Using a custom factory to create the constructor Or you can pass a static factory method to create your custom constructor. ```js var MyClass = Class(Object, { factory: function () { return function () { // ... } } }); ``` #### Using an inherited factory to create the constructor By inheritance the constructors of the descendant classes will be automatically created as well. ```js var Ancestor = Class(Object, { factory: function () { return function () { // ... } } }); var Descendant = Class(Ancestor, {}); ``` #### Using the default factory to create the constructor You don't need to pass anything if you need a noop function as constructor. The Class.factory() will create a noop constructor by default. ```js var MyClass = Class(Object, {}); ``` In fact you don't need to pass any arguments to the Class function if you need an empty class inheriting from the Object native class. ```js var MyClass = Class(); ``` The default factory calls the build() and init() methods if they are given. ```js var MyClass = Class({ prototype: { build: function (options) { console.log("build", options); }, init: function (options) { console.log("init", options); } } }); var my = new MyClass({a: 1, b: 2}); // build {a: 1, b: 2} // init {a: 1, b: 2} var my2 = my.clone({c: 3}); // build {c: 3} var MyClass2 = MyClass.extend({}, [{d: 4}]); // build {d: 4} ``` ### Instantiation #### Creating new instance with the new operator Ofc. you can create a new instance in the javascript way. ```js var MyClass = Class(); var my = new MyClass(); ``` #### Creating a new instance with the static newInstance method If you want to pass an array of arguments then you can do it the following way. ```js var MyClass = Class.extend({ prototype: { constructor: function () { for (var i in arguments) console.log(arguments[i]); } } }); var my = MyClass.newInstance.apply(MyClass, ["a", "b", "c"]); // a // b // c ``` #### Creating new instance with clone You can create a new instance by cloning the prototype of the class. ```js var MyClass = Class(); var my = Class.prototype.clone.call(MyClass.prototype); ``` Or you can inherit the utility methods to make this easier. ```js var MyClass = Class.extend(); var my = MyClass.prototype.clone(); ``` Just be aware that by default cloning calls only the `build()` method, so the `init()` method won't be called by the new instance. #### Cloning instances You can clone an existing instance with the clone method. 
```js var MyClass = Class.extend(); var my = MyClass.prototype.clone(); var my2 = my.clone(); ``` Be aware that this is prototypal inheritance with Object.create(), so the inherited properties won't be enumerable. The clone() method calls the build() method on the new instance if it is given. #### Using clone in the constructor You can use the same behavior both by cloning and by creating a new instance using the constructor ```js var MyClass = Class.extend({ lastIndex: 0, prototype: { index: undefined, constructor: function MyClass() { return MyClass.prototype.clone(); }, clone: function () { var instance = Class.prototype.clone.call(this); instance.index = ++MyClass.lastIndex; return instance; } } }); var my1 = new MyClass(); var my2 = MyClass.prototype.clone(); var my3 = my1.clone(); var my4 = my2.clone(); ``` Be aware that this way the constructor will drop the instance created with the `new` operator. Be aware that the clone() method is used by inheritance, so creating the prototype of a descendant class will use the clone() method as well. ```js var Descendant = MyClass.clone(function Descendant() { return Descendant.prototype.clone(); }); var my5 = Descendant.prototype; var my6 = new Descendant(); // ... ``` #### Using absorb(), merge() or inheritance to set the defaults values on properties You can use absorb() to set default values after configuration. ```js var MyClass = Class.extend({ prototype: { constructor: function (config) { var theDefaults = { // ... }; this.merge(config); this.absorb(theDefaults); } } }); ``` You can use merge() to set default values before configuration. ```js var MyClass = Class.extend({ prototype: { constructor: function (config) { var theDefaults = { // ... }; this.merge(theDefaults); this.merge(config); } } }); ``` You can use inheritance to set default values on class level. ```js var MyClass = Class.extend({ prototype: { aProperty: defaultValue, // ... constructor: function (config) { this.merge(config); } } }); ``` ## License MIT - 2015 Jánszky László Lajos # http-errors [![NPM Version][npm-version-image]][npm-url] [![NPM Downloads][npm-downloads-image]][node-url] [![Node.js Version][node-image]][node-url] [![Build Status][ci-image]][ci-url] [![Test Coverage][coveralls-image]][coveralls-url] Create HTTP errors for Express, Koa, Connect, etc. with ease. ## Install This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): ```bash $ npm install http-errors ``` ## Example ```js var createError = require('http-errors') var express = require('express') var app = express() app.use(function (req, res, next) { if (!req.user) return next(createError(401, 'Please login to view this page.')) next() }) ``` ## API This is the current API, currently extracted from Koa and subject to change. ### Error Properties - `expose` - can be used to signal if `message` should be sent to the client, defaulting to `false` when `status` >= 500 - `headers` - can be an object of header names to values to be sent to the client, defaulting to `undefined`. 
When defined, the key names should all be lower-cased
- `message` - the traditional error message, which should be kept short and on a single line
- `status` - the status code of the error, mirroring `statusCode` for general compatibility
- `statusCode` - the status code of the error, defaulting to `500`

### createError([status], [message], [properties])

Create a new error object with the given message `msg`. The error object inherits from `createError.HttpError`.

```js
var err = createError(404, 'This video does not exist!')
```

- `status: 500` - the status code as a number
- `message` - the message of the error, defaulting to node's text for that status code.
- `properties` - custom properties to attach to the object

### createError([status], [error], [properties])

Extend the given `error` object with `createError.HttpError` properties. This will not alter the inheritance of the given `error` object, and the modified `error` object is the return value.

<!-- eslint-disable no-redeclare -->

```js
fs.readFile('foo.txt', function (err, buf) {
  if (err) {
    if (err.code === 'ENOENT') {
      var httpError = createError(404, err, { expose: false })
    } else {
      var httpError = createError(500, err)
    }
  }
})
```

- `status` - the status code as a number
- `error` - the error object to extend
- `properties` - custom properties to attach to the object

### createError.isHttpError(val)

Determine if the provided `val` is an `HttpError`. This will return `true` if the error inherits from the `HttpError` constructor of this module or matches the "duck type" for an error this module creates. All outputs from the `createError` factory will return `true` for this function, including when a non-`HttpError` was passed into the factory.

### new createError\[code || name\](\[msg\])

Create a new error object with the given message `msg`. The error object inherits from `createError.HttpError`.

```js
var err = new createError.NotFound()
```

- `code` - the status code as a number
- `name` - the name of the error as a "bumpy case", i.e. `NotFound` or `InternalServerError`.
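Tying the previous sections together, here is a small usage sketch (assumed usage, not from the upstream examples) combining a named constructor with the `isHttpError` check:

```js
var createError = require('http-errors')

// Named constructors accept an optional message, just like createError(status, msg).
var err = new createError.UnprocessableEntity('validation failed')

console.log(err.status)     // 422
console.log(err.statusCode) // 422
console.log(err.expose)     // true (4xx messages may be sent to the client)

console.log(createError.isHttpError(err))               // true
console.log(createError.isHttpError(new Error('nope'))) // false
```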
#### List of all constructors |Status Code|Constructor Name | |-----------|-----------------------------| |400 |BadRequest | |401 |Unauthorized | |402 |PaymentRequired | |403 |Forbidden | |404 |NotFound | |405 |MethodNotAllowed | |406 |NotAcceptable | |407 |ProxyAuthenticationRequired | |408 |RequestTimeout | |409 |Conflict | |410 |Gone | |411 |LengthRequired | |412 |PreconditionFailed | |413 |PayloadTooLarge | |414 |URITooLong | |415 |UnsupportedMediaType | |416 |RangeNotSatisfiable | |417 |ExpectationFailed | |418 |ImATeapot | |421 |MisdirectedRequest | |422 |UnprocessableEntity | |423 |Locked | |424 |FailedDependency | |425 |UnorderedCollection | |426 |UpgradeRequired | |428 |PreconditionRequired | |429 |TooManyRequests | |431 |RequestHeaderFieldsTooLarge | |451 |UnavailableForLegalReasons | |500 |InternalServerError | |501 |NotImplemented | |502 |BadGateway | |503 |ServiceUnavailable | |504 |GatewayTimeout | |505 |HTTPVersionNotSupported | |506 |VariantAlsoNegotiates | |507 |InsufficientStorage | |508 |LoopDetected | |509 |BandwidthLimitExceeded | |510 |NotExtended | |511 |NetworkAuthenticationRequired| ## License [MIT](LICENSE) [ci-image]: https://badgen.net/github/checks/jshttp/http-errors/master?label=ci [ci-url]: https://github.com/jshttp/http-errors/actions?query=workflow%3Aci [coveralls-image]: https://badgen.net/coveralls/c/github/jshttp/http-errors/master [coveralls-url]: https://coveralls.io/r/jshttp/http-errors?branch=master [node-image]: https://badgen.net/npm/node/http-errors [node-url]: https://nodejs.org/en/download [npm-downloads-image]: https://badgen.net/npm/dm/http-errors [npm-url]: https://npmjs.org/package/http-errors [npm-version-image]: https://badgen.net/npm/v/http-errors [travis-image]: https://badgen.net/travis/jshttp/http-errors/master [travis-url]: https://travis-ci.org/jshttp/http-errors # is-number [![NPM version](https://img.shields.io/npm/v/is-number.svg?style=flat)](https://www.npmjs.com/package/is-number) [![NPM monthly downloads](https://img.shields.io/npm/dm/is-number.svg?style=flat)](https://npmjs.org/package/is-number) [![NPM total downloads](https://img.shields.io/npm/dt/is-number.svg?style=flat)](https://npmjs.org/package/is-number) [![Linux Build Status](https://img.shields.io/travis/jonschlinkert/is-number.svg?style=flat&label=Travis)](https://travis-ci.org/jonschlinkert/is-number) > Returns true if the value is a finite number. Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-number ``` ## Why is this needed? In JavaScript, it's not always as straightforward as it should be to reliably check if a value is a number. It's common for devs to use `+`, `-`, or `Number()` to cast a string value to a number (for example, when values are returned from user input, regex matches, parsers, etc). But there are many non-intuitive edge cases that yield unexpected results: ```js console.log(+[]); //=> 0 console.log(+''); //=> 0 console.log(+' '); //=> 0 console.log(typeof NaN); //=> 'number' ``` This library offers a performant way to smooth out edge cases like these. ## Usage ```js const isNumber = require('is-number'); ``` See the [tests](./test.js) for more examples. 
### true ```js isNumber(5e3); // true isNumber(0xff); // true isNumber(-1.1); // true isNumber(0); // true isNumber(1); // true isNumber(1.1); // true isNumber(10); // true isNumber(10.10); // true isNumber(100); // true isNumber('-1.1'); // true isNumber('0'); // true isNumber('012'); // true isNumber('0xff'); // true isNumber('1'); // true isNumber('1.1'); // true isNumber('10'); // true isNumber('10.10'); // true isNumber('100'); // true isNumber('5e3'); // true isNumber(parseInt('012')); // true isNumber(parseFloat('012')); // true ``` ### False Everything else is false, as you would expect: ```js isNumber(Infinity); // false isNumber(NaN); // false isNumber(null); // false isNumber(undefined); // false isNumber(''); // false isNumber(' '); // false isNumber('foo'); // false isNumber([1]); // false isNumber([]); // false isNumber(function () {}); // false isNumber({}); // false ``` ## Release history ### 7.0.0 * Refactor. Now uses `.isFinite` if it exists. * Performance is about the same as v6.0 when the value is a string or number. But it's now 3x-4x faster when the value is not a string or number. ### 6.0.0 * Optimizations, thanks to @benaadams. ### 5.0.0 **Breaking changes** * removed support for `instanceof Number` and `instanceof String` ## Benchmarks As with all benchmarks, take these with a grain of salt. See the [benchmarks](./benchmark/index.js) for more detail. ``` # all v7.0 x 413,222 ops/sec ±2.02% (86 runs sampled) v6.0 x 111,061 ops/sec ±1.29% (85 runs sampled) parseFloat x 317,596 ops/sec ±1.36% (86 runs sampled) fastest is 'v7.0' # string v7.0 x 3,054,496 ops/sec ±1.05% (89 runs sampled) v6.0 x 2,957,781 ops/sec ±0.98% (88 runs sampled) parseFloat x 3,071,060 ops/sec ±1.13% (88 runs sampled) fastest is 'parseFloat,v7.0' # number v7.0 x 3,146,895 ops/sec ±0.89% (89 runs sampled) v6.0 x 3,214,038 ops/sec ±1.07% (89 runs sampled) parseFloat x 3,077,588 ops/sec ±1.07% (87 runs sampled) fastest is 'v6.0' ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [is-plain-object](https://www.npmjs.com/package/is-plain-object): Returns true if an object was created by the `Object` constructor. | [homepage](https://github.com/jonschlinkert/is-plain-object "Returns true if an object was created by the `Object` constructor.") * [is-primitive](https://www.npmjs.com/package/is-primitive): Returns `true` if the value is a primitive. | [homepage](https://github.com/jonschlinkert/is-primitive "Returns `true` if the value is a primitive. ") * [isobject](https://www.npmjs.com/package/isobject): Returns true if the value is an object and not an array or null. 
| [homepage](https://github.com/jonschlinkert/isobject "Returns true if the value is an object and not an array or null.") * [kind-of](https://www.npmjs.com/package/kind-of): Get the native type of a value. | [homepage](https://github.com/jonschlinkert/kind-of "Get the native type of a value.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 49 | [jonschlinkert](https://github.com/jonschlinkert) | | 5 | [charlike-old](https://github.com/charlike-old) | | 1 | [benaadams](https://github.com/benaadams) | | 1 | [realityking](https://github.com/realityking) | ### Author **Jon Schlinkert** * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) ### License Copyright © 2018, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.6.0, on June 15, 2018._ # postcss-selector-parser [![Build Status](https://travis-ci.org/postcss/postcss-selector-parser.svg?branch=master)](https://travis-ci.org/postcss/postcss-selector-parser) > Selector parser with built in methods for working with selector strings. ## Install With [npm](https://npmjs.com/package/postcss-selector-parser) do: ``` npm install postcss-selector-parser ``` ## Quick Start ```js const parser = require('postcss-selector-parser'); const transform = selectors => { selectors.walk(selector => { // do something with the selector console.log(String(selector)) }); }; const transformed = parser(transform).processSync('h1, h2, h3'); ``` To normalize selector whitespace: ```js const parser = require('postcss-selector-parser'); const normalized = parser().processSync('h1, h2, h3', {lossless: false}); // -> h1,h2,h3 ``` Async support is provided through `parser.process` and will resolve a Promise with the resulting selector string. ## API Please see [API.md](API.md). ## Credits * Huge thanks to Andrey Sitnik (@ai) for work on PostCSS which helped accelerate this module's development. ## License MIT # tailwindcss/nesting This is a PostCSS plugin that wraps [postcss-nested](https://github.com/postcss/postcss-nested) or [postcss-nesting](https://github.com/csstools/postcss-plugins/tree/main/plugins/postcss-nesting) and acts as a compatibility layer to make sure your nesting plugin of choice properly understands Tailwind's custom syntax like `@apply` and `@screen`. Add it to your PostCSS configuration, somewhere before Tailwind itself: ```js // postcss.config.js module.exports = { plugins: [ require('postcss-import'), require('tailwindcss/nesting'), require('tailwindcss'), require('autoprefixer'), ] } ``` By default, it uses the [postcss-nested](https://github.com/postcss/postcss-nested) plugin under the hood, which uses a Sass-like syntax and is the plugin that powers nesting support in the [Tailwind CSS plugin API](https://tailwindcss.com/docs/plugins#css-in-js-syntax). 
If you'd rather use [postcss-nesting](https://github.com/csstools/postcss-plugins/tree/main/plugins/postcss-nesting) (which is based on the work-in-progress [CSS Nesting](https://drafts.csswg.org/css-nesting-1/) specification), first install the plugin alongside: ```shell npm install postcss-nesting ``` Then pass the plugin itself as an argument to `tailwindcss/nesting` in your PostCSS configuration: ```js // postcss.config.js module.exports = { plugins: [ require('postcss-import'), require('tailwindcss/nesting')(require('postcss-nesting')), require('tailwindcss'), require('autoprefixer'), ] } ``` This can also be helpful if for whatever reason you need to use a very specific version of `postcss-nested` and want to override the version we bundle with `tailwindcss/nesting` itself. # fast-glob > It's a very fast and efficient [glob][glob_definition] library for [Node.js][node_js]. This package provides methods for traversing the file system and returning pathnames that matched a defined set of a specified pattern according to the rules used by the Unix Bash shell with some simplifications, meanwhile results are returned in **arbitrary order**. Quick, simple, effective. ## Table of Contents <details> <summary><strong>Details</strong></summary> * [Highlights](#highlights) * [Donation](#donation) * [Old and modern mode](#old-and-modern-mode) * [Pattern syntax](#pattern-syntax) * [Basic syntax](#basic-syntax) * [Advanced syntax](#advanced-syntax) * [Installation](#installation) * [API](#api) * [Asynchronous](#asynchronous) * [Synchronous](#synchronous) * [Stream](#stream) * [patterns](#patterns) * [[options]](#options) * [Helpers](#helpers) * [generateTasks](#generatetaskspatterns-options) * [isDynamicPattern](#isdynamicpatternpattern-options) * [escapePath](#escapepathpattern) * [Options](#options-3) * [Common](#common) * [concurrency](#concurrency) * [cwd](#cwd) * [deep](#deep) * [followSymbolicLinks](#followsymboliclinks) * [fs](#fs) * [ignore](#ignore) * [suppressErrors](#suppresserrors) * [throwErrorOnBrokenSymbolicLink](#throwerroronbrokensymboliclink) * [Output control](#output-control) * [absolute](#absolute) * [markDirectories](#markdirectories) * [objectMode](#objectmode) * [onlyDirectories](#onlydirectories) * [onlyFiles](#onlyfiles) * [stats](#stats) * [unique](#unique) * [Matching control](#matching-control) * [braceExpansion](#braceexpansion) * [caseSensitiveMatch](#casesensitivematch) * [dot](#dot) * [extglob](#extglob) * [globstar](#globstar) * [baseNameMatch](#basenamematch) * [FAQ](#faq) * [What is a static or dynamic pattern?](#what-is-a-static-or-dynamic-pattern) * [How to write patterns on Windows?](#how-to-write-patterns-on-windows) * [Why are parentheses match wrong?](#why-are-parentheses-match-wrong) * [How to exclude directory from reading?](#how-to-exclude-directory-from-reading) * [How to use UNC path?](#how-to-use-unc-path) * [Compatible with `node-glob`?](#compatible-with-node-glob) * [Benchmarks](#benchmarks) * [Server](#server) * [Nettop](#nettop) * [Changelog](#changelog) * [License](#license) </details> ## Highlights * Fast. Probably the fastest. * Supports multiple and negative patterns. * Synchronous, Promise and Stream API. * Object mode. Can return more than just strings. * Error-tolerant. ## Donation Do you like this project? Support it by donating, creating an issue or pull request. 
[![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)][paypal_mrmlnc] ## Old and modern mode This package works in two modes, depending on the environment in which it is used. * **Old mode**. Node.js below 10.10 or when the [`stats`](#stats) option is *enabled*. * **Modern mode**. Node.js 10.10+ and the [`stats`](#stats) option is *disabled*. The modern mode is faster. Learn more about the [internal mechanism][nodelib_fs_scandir_old_and_modern_modern]. ## Pattern syntax > :warning: Always use forward-slashes in glob expressions (patterns and [`ignore`](#ignore) option). Use backslashes for escaping characters. There is more than one form of syntax: basic and advanced. Below is a brief overview of the supported features. Also pay attention to our [FAQ](#faq). > :book: This package uses a [`micromatch`][micromatch] as a library for pattern matching. ### Basic syntax * An asterisk (`*`) — matches everything except slashes (path separators), hidden files (names starting with `.`). * A double star or globstar (`**`) — matches zero or more directories. * Question mark (`?`) – matches any single character except slashes (path separators). * Sequence (`[seq]`) — matches any character in sequence. > :book: A few additional words about the [basic matching behavior][picomatch_matching_behavior]. Some examples: * `src/**/*.js` — matches all files in the `src` directory (any level of nesting) that have the `.js` extension. * `src/*.??` — matches all files in the `src` directory (only first level of nesting) that have a two-character extension. * `file-[01].js` — matches files: `file-0.js`, `file-1.js`. ### Advanced syntax * [Escapes characters][micromatch_backslashes] (`\\`) — matching special characters (`$^*+?()[]`) as literals. * [POSIX character classes][picomatch_posix_brackets] (`[[:digit:]]`). * [Extended globs][micromatch_extglobs] (`?(pattern-list)`). * [Bash style brace expansions][micromatch_braces] (`{}`). * [Regexp character classes][micromatch_regex_character_classes] (`[1-5]`). * [Regex groups][regular_expressions_brackets] (`(a|b)`). > :book: A few additional words about the [advanced matching behavior][micromatch_extended_globbing]. Some examples: * `src/**/*.{css,scss}` — matches all files in the `src` directory (any level of nesting) that have the `.css` or `.scss` extension. * `file-[[:digit:]].js` — matches files: `file-0.js`, `file-1.js`, …, `file-9.js`. * `file-{1..3}.js` — matches files: `file-1.js`, `file-2.js`, `file-3.js`. * `file-(1|2)` — matches files: `file-1.js`, `file-2.js`. ## Installation ```console npm install fast-glob ``` ## API ### Asynchronous ```js fg(patterns, [options]) ``` Returns a `Promise` with an array of matching entries. ```js const fg = require('fast-glob'); const entries = await fg(['.editorconfig', '**/index.js'], { dot: true }); // ['.editorconfig', 'services/index.js'] ``` ### Synchronous ```js fg.sync(patterns, [options]) ``` Returns an array of matching entries. ```js const fg = require('fast-glob'); const entries = fg.sync(['.editorconfig', '**/index.js'], { dot: true }); // ['.editorconfig', 'services/index.js'] ``` ### Stream ```js fg.stream(patterns, [options]) ``` Returns a [`ReadableStream`][node_js_stream_readable_streams] when the `data` event will be emitted with matching entry. 
```js const fg = require('fast-glob'); const stream = fg.stream(['.editorconfig', '**/index.js'], { dot: true }); for await (const entry of stream) { // .editorconfig // services/index.js } ``` #### patterns * Required: `true` * Type: `string | string[]` Any correct pattern(s). > :1234: [Pattern syntax](#pattern-syntax) > > :warning: This package does not respect the order of patterns. First, all the negative patterns are applied, and only then the positive patterns. If you want to get a certain order of records, use sorting or split calls. #### [options] * Required: `false` * Type: [`Options`](#options-3) See [Options](#options-3) section. ### Helpers #### `generateTasks(patterns, [options])` Returns the internal representation of patterns ([`Task`](./src/managers/tasks.ts) is a combining patterns by base directory). ```js fg.generateTasks('*'); [{ base: '.', // Parent directory for all patterns inside this task dynamic: true, // Dynamic or static patterns are in this task patterns: ['*'], positive: ['*'], negative: [] }] ``` ##### patterns * Required: `true` * Type: `string | string[]` Any correct pattern(s). ##### [options] * Required: `false` * Type: [`Options`](#options-3) See [Options](#options-3) section. #### `isDynamicPattern(pattern, [options])` Returns `true` if the passed pattern is a dynamic pattern. > :1234: [What is a static or dynamic pattern?](#what-is-a-static-or-dynamic-pattern) ```js fg.isDynamicPattern('*'); // true fg.isDynamicPattern('abc'); // false ``` ##### pattern * Required: `true` * Type: `string` Any correct pattern. ##### [options] * Required: `false` * Type: [`Options`](#options-3) See [Options](#options-3) section. #### `escapePath(pattern)` Returns a path with escaped special characters (`*?|(){}[]`, `!` at the beginning of line, `@+!` before the opening parenthesis). ```js fg.escapePath('!abc'); // \\!abc fg.escapePath('C:/Program Files (x86)'); // C:/Program Files \\(x86\\) ``` ##### pattern * Required: `true` * Type: `string` Any string, for example, a path to a file. ## Options ### Common options #### concurrency * Type: `number` * Default: `os.cpus().length` Specifies the maximum number of concurrent requests from a reader to read directories. > :book: The higher the number, the higher the performance and load on the file system. If you want to read in quiet mode, set the value to a comfortable number or `1`. #### cwd * Type: `string` * Default: `process.cwd()` The current working directory in which to search. #### deep * Type: `number` * Default: `Infinity` Specifies the maximum depth of a read directory relative to the start directory. For example, you have the following tree: ```js dir/ └── one/ // 1 └── two/ // 2 └── file.js // 3 ``` ```js // With base directory fg.sync('dir/**', { onlyFiles: false, deep: 1 }); // ['dir/one'] fg.sync('dir/**', { onlyFiles: false, deep: 2 }); // ['dir/one', 'dir/one/two'] // With cwd option fg.sync('**', { onlyFiles: false, cwd: 'dir', deep: 1 }); // ['one'] fg.sync('**', { onlyFiles: false, cwd: 'dir', deep: 2 }); // ['one', 'one/two'] ``` > :book: If you specify a pattern with some base directory, this directory will not participate in the calculation of the depth of the found directories. Think of it as a [`cwd`](#cwd) option. #### followSymbolicLinks * Type: `boolean` * Default: `true` Indicates whether to traverse descendants of symbolic link directories when expanding `**` patterns. > :book: Note that this option does not affect the base directory of the pattern. 
For example, if `./a` is a symlink to directory `./b` and you specified `['./a**', './b/**']` patterns, then directory `./a` will still be read. > :book: If the [`stats`](#stats) option is specified, the information about the symbolic link (`fs.lstat`) will be replaced with information about the entry (`fs.stat`) behind it. #### fs * Type: `FileSystemAdapter` * Default: `fs.*` Custom implementation of methods for working with the file system. ```ts export interface FileSystemAdapter { lstat?: typeof fs.lstat; stat?: typeof fs.stat; lstatSync?: typeof fs.lstatSync; statSync?: typeof fs.statSync; readdir?: typeof fs.readdir; readdirSync?: typeof fs.readdirSync; } ``` #### ignore * Type: `string[]` * Default: `[]` An array of glob patterns to exclude matches. This is an alternative way to use negative patterns. ```js dir/ ├── package-lock.json └── package.json ``` ```js fg.sync(['*.json', '!package-lock.json']); // ['package.json'] fg.sync('*.json', { ignore: ['package-lock.json'] }); // ['package.json'] ``` #### suppressErrors * Type: `boolean` * Default: `false` By default this package suppress only `ENOENT` errors. Set to `true` to suppress any error. > :book: Can be useful when the directory has entries with a special level of access. #### throwErrorOnBrokenSymbolicLink * Type: `boolean` * Default: `false` Throw an error when symbolic link is broken if `true` or safely return `lstat` call if `false`. > :book: This option has no effect on errors when reading the symbolic link directory. ### Output control #### absolute * Type: `boolean` * Default: `false` Return the absolute path for entries. ```js fg.sync('*.js', { absolute: false }); // ['index.js'] fg.sync('*.js', { absolute: true }); // ['/home/user/index.js'] ``` > :book: This option is required if you want to use negative patterns with absolute path, for example, `!${__dirname}/*.js`. #### markDirectories * Type: `boolean` * Default: `false` Mark the directory path with the final slash. ```js fg.sync('*', { onlyFiles: false, markDirectories: false }); // ['index.js', 'controllers'] fg.sync('*', { onlyFiles: false, markDirectories: true }); // ['index.js', 'controllers/'] ``` #### objectMode * Type: `boolean` * Default: `false` Returns objects (instead of strings) describing entries. ```js fg.sync('*', { objectMode: false }); // ['src/index.js'] fg.sync('*', { objectMode: true }); // [{ name: 'index.js', path: 'src/index.js', dirent: <fs.Dirent> }] ``` The object has the following fields: * name (`string`) — the last part of the path (basename) * path (`string`) — full path relative to the pattern base directory * dirent ([`fs.Dirent`][node_js_fs_class_fs_dirent]) — instance of `fs.Dirent` > :book: An object is an internal representation of entry, so getting it does not affect performance. #### onlyDirectories * Type: `boolean` * Default: `false` Return only directories. ```js fg.sync('*', { onlyDirectories: false }); // ['index.js', 'src'] fg.sync('*', { onlyDirectories: true }); // ['src'] ``` > :book: If `true`, the [`onlyFiles`](#onlyfiles) option is automatically `false`. #### onlyFiles * Type: `boolean` * Default: `true` Return only files. 
```js fg.sync('*', { onlyFiles: false }); // ['index.js', 'src'] fg.sync('*', { onlyFiles: true }); // ['index.js'] ``` #### stats * Type: `boolean` * Default: `false` Enables an [object mode](#objectmode) with an additional field: * stats ([`fs.Stats`][node_js_fs_class_fs_stats]) — instance of `fs.Stats` ```js fg.sync('*', { stats: false }); // ['src/index.js'] fg.sync('*', { stats: true }); // [{ name: 'index.js', path: 'src/index.js', dirent: <fs.Dirent>, stats: <fs.Stats> }] ``` > :book: Returns `fs.stat` instead of `fs.lstat` for symbolic links when the [`followSymbolicLinks`](#followsymboliclinks) option is specified. > > :warning: Unlike [object mode](#objectmode) this mode requires additional calls to the file system. On average, this mode is slower at least twice. See [old and modern mode](#old-and-modern-mode) for more details. #### unique * Type: `boolean` * Default: `true` Ensures that the returned entries are unique. ```js fg.sync(['*.json', 'package.json'], { unique: false }); // ['package.json', 'package.json'] fg.sync(['*.json', 'package.json'], { unique: true }); // ['package.json'] ``` If `true` and similar entries are found, the result is the first found. ### Matching control #### braceExpansion * Type: `boolean` * Default: `true` Enables Bash-like brace expansion. > :1234: [Syntax description][bash_hackers_syntax_expansion_brace] or more [detailed description][micromatch_braces]. ```js dir/ ├── abd ├── acd └── a{b,c}d ``` ```js fg.sync('a{b,c}d', { braceExpansion: false }); // ['a{b,c}d'] fg.sync('a{b,c}d', { braceExpansion: true }); // ['abd', 'acd'] ``` #### caseSensitiveMatch * Type: `boolean` * Default: `true` Enables a [case-sensitive][wikipedia_case_sensitivity] mode for matching files. ```js dir/ ├── file.txt └── File.txt ``` ```js fg.sync('file.txt', { caseSensitiveMatch: false }); // ['file.txt', 'File.txt'] fg.sync('file.txt', { caseSensitiveMatch: true }); // ['file.txt'] ``` #### dot * Type: `boolean` * Default: `false` Allow patterns to match entries that begin with a period (`.`). > :book: Note that an explicit dot in a portion of the pattern will always match dot files. ```js dir/ ├── .editorconfig └── package.json ``` ```js fg.sync('*', { dot: false }); // ['package.json'] fg.sync('*', { dot: true }); // ['.editorconfig', 'package.json'] ``` #### extglob * Type: `boolean` * Default: `true` Enables Bash-like `extglob` functionality. > :1234: [Syntax description][micromatch_extglobs]. ```js dir/ ├── README.md └── package.json ``` ```js fg.sync('*.+(json|md)', { extglob: false }); // [] fg.sync('*.+(json|md)', { extglob: true }); // ['README.md', 'package.json'] ``` #### globstar * Type: `boolean` * Default: `true` Enables recursively repeats a pattern containing `**`. If `false`, `**` behaves exactly like `*`. ```js dir/ └── a └── b ``` ```js fg.sync('**', { onlyFiles: false, globstar: false }); // ['a'] fg.sync('**', { onlyFiles: false, globstar: true }); // ['a', 'a/b'] ``` #### baseNameMatch * Type: `boolean` * Default: `false` If set to `true`, then patterns without slashes will be matched against the basename of the path if it contains slashes. ```js dir/ └── one/ └── file.md ``` ```js fg.sync('*.md', { baseNameMatch: false }); // [] fg.sync('*.md', { baseNameMatch: true }); // ['one/file.md'] ``` ## FAQ ## What is a static or dynamic pattern? All patterns can be divided into two types: * **static**. A pattern is considered static if it can be used to get an entry on the file system without using matching mechanisms. 
For example, the `file.js` pattern is a static pattern because we can just verify that it exists on the file system. * **dynamic**. A pattern is considered dynamic if it cannot be used directly to find occurrences without using a matching mechanisms. For example, the `*` pattern is a dynamic pattern because we cannot use this pattern directly. A pattern is considered dynamic if it contains the following characters (`…` — any characters or their absence) or options: * The [`caseSensitiveMatch`](#casesensitivematch) option is disabled * `\\` (the escape character) * `*`, `?`, `!` (at the beginning of line) * `[…]` * `(…|…)` * `@(…)`, `!(…)`, `*(…)`, `?(…)`, `+(…)` (respects the [`extglob`](#extglob) option) * `{…,…}`, `{…..…}` (respects the [`braceExpansion`](#braceexpansion) option) ## How to write patterns on Windows? Always use forward-slashes in glob expressions (patterns and [`ignore`](#ignore) option). Use backslashes for escaping characters. With the [`cwd`](#cwd) option use a convenient format. **Bad** ```ts [ 'directory\\*', path.join(process.cwd(), '**') ] ``` **Good** ```ts [ 'directory/*', path.join(process.cwd(), '**').replace(/\\/g, '/') ] ``` > :book: Use the [`normalize-path`][npm_normalize_path] or the [`unixify`][npm_unixify] package to convert Windows-style path to a Unix-style path. Read more about [matching with backslashes][micromatch_backslashes]. ## Why are parentheses match wrong? ```js dir/ └── (special-*file).txt ``` ```js fg.sync(['(special-*file).txt']) // [] ``` Refers to Bash. You need to escape special characters: ```js fg.sync(['\\(special-*file\\).txt']) // ['(special-*file).txt'] ``` Read more about [matching special characters as literals][picomatch_matching_special_characters_as_literals]. ## How to exclude directory from reading? You can use a negative pattern like this: `!**/node_modules` or `!**/node_modules/**`. Also you can use [`ignore`](#ignore) option. Just look at the example below. ```js first/ ├── file.md └── second/ └── file.txt ``` If you don't want to read the `second` directory, you must write the following pattern: `!**/second` or `!**/second/**`. ```js fg.sync(['**/*.md', '!**/second']); // ['first/file.md'] fg.sync(['**/*.md'], { ignore: ['**/second/**'] }); // ['first/file.md'] ``` > :warning: When you write `!**/second/**/*` it means that the directory will be **read**, but all the entries will not be included in the results. You have to understand that if you write the pattern to exclude directories, then the directory will not be read under any circumstances. ## How to use UNC path? You cannot use [Uniform Naming Convention (UNC)][unc_path] paths as patterns (due to syntax), but you can use them as [`cwd`](#cwd) directory. ```ts fg.sync('*', { cwd: '\\\\?\\C:\\Python27' /* or //?/C:/Python27 */ }); fg.sync('Python27/*', { cwd: '\\\\?\\C:\\' /* or //?/C:/ */ }); ``` ## Compatible with `node-glob`? 
| node-glob | fast-glob | | :----------: | :-------: | | `cwd` | [`cwd`](#cwd) | | `root` | – | | `dot` | [`dot`](#dot) | | `nomount` | – | | `mark` | [`markDirectories`](#markdirectories) | | `nosort` | – | | `nounique` | [`unique`](#unique) | | `nobrace` | [`braceExpansion`](#braceexpansion) | | `noglobstar` | [`globstar`](#globstar) | | `noext` | [`extglob`](#extglob) | | `nocase` | [`caseSensitiveMatch`](#casesensitivematch) | | `matchBase` | [`baseNameMatch`](#basenamematch) | | `nodir` | [`onlyFiles`](#onlyfiles) | | `ignore` | [`ignore`](#ignore) | | `follow` | [`followSymbolicLinks`](#followsymboliclinks) | | `realpath` | – | | `absolute` | [`absolute`](#absolute) | ## Benchmarks ### Server Link: [Vultr Bare Metal][vultr_pricing_baremetal] * Processor: E3-1270v6 (8 CPU) * RAM: 32GB * Disk: SSD ([Intel DC S3520 SSDSC2BB240G7][intel_ssd]) You can see results [here][github_gist_benchmark_server] for latest release. ### Nettop Link: [Zotac bi323][zotac_bi323] * Processor: Intel N3150 (4 CPU) * RAM: 8GB * Disk: SSD ([Silicon Power SP060GBSS3S55S25][silicon_power_ssd]) You can see results [here][github_gist_benchmark_nettop] for latest release. ## Changelog See the [Releases section of our GitHub project][github_releases] for changelog for each release version. ## License This software is released under the terms of the MIT license. [bash_hackers_syntax_expansion_brace]: https://wiki.bash-hackers.org/syntax/expansion/brace [github_gist_benchmark_nettop]: https://gist.github.com/mrmlnc/f06246b197f53c356895fa35355a367c#file-fg-benchmark-nettop-product-txt [github_gist_benchmark_server]: https://gist.github.com/mrmlnc/f06246b197f53c356895fa35355a367c#file-fg-benchmark-server-product-txt [github_releases]: https://github.com/mrmlnc/fast-glob/releases [glob_definition]: https://en.wikipedia.org/wiki/Glob_(programming) [glob_linux_man]: http://man7.org/linux/man-pages/man3/glob.3.html [intel_ssd]: https://ark.intel.com/content/www/us/en/ark/products/93012/intel-ssd-dc-s3520-series-240gb-2-5in-sata-6gb-s-3d1-mlc.html [micromatch_backslashes]: https://github.com/micromatch/micromatch#backslashes [micromatch_braces]: https://github.com/micromatch/braces [micromatch_extended_globbing]: https://github.com/micromatch/micromatch#extended-globbing [micromatch_extglobs]: https://github.com/micromatch/micromatch#extglobs [micromatch_regex_character_classes]: https://github.com/micromatch/micromatch#regex-character-classes [micromatch]: https://github.com/micromatch/micromatch [node_js_fs_class_fs_dirent]: https://nodejs.org/api/fs.html#fs_class_fs_dirent [node_js_fs_class_fs_stats]: https://nodejs.org/api/fs.html#fs_class_fs_stats [node_js_stream_readable_streams]: https://nodejs.org/api/stream.html#stream_readable_streams [node_js]: https://nodejs.org/en [nodelib_fs_scandir_old_and_modern_modern]: https://github.com/nodelib/nodelib/blob/master/packages/fs/fs.scandir/README.md#old-and-modern-mode [npm_normalize_path]: https://www.npmjs.com/package/normalize-path [npm_unixify]: https://www.npmjs.com/package/unixify [paypal_mrmlnc]:https://paypal.me/mrmlnc [picomatch_matching_behavior]: https://github.com/micromatch/picomatch#matching-behavior-vs-bash [picomatch_matching_special_characters_as_literals]: https://github.com/micromatch/picomatch#matching-special-characters-as-literals [picomatch_posix_brackets]: https://github.com/micromatch/picomatch#posix-brackets [regular_expressions_brackets]: https://www.regular-expressions.info/brackets.html [silicon_power_ssd]: 
https://www.silicon-power.com/web/product-1 [unc_path]: https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-dtyp/62e862f4-2a51-452e-8eeb-dc4ff5ee33cc [vultr_pricing_baremetal]: https://www.vultr.com/pricing/baremetal [wikipedia_case_sensitivity]: https://en.wikipedia.org/wiki/Case_sensitivity [zotac_bi323]: https://www.zotac.com/ee/product/mini_pcs/zbox-bi323 # picocolors The tiniest and the fastest library for terminal output formatting with ANSI colors. ```javascript import pc from "picocolors" console.log( pc.green(`How are ${pc.italic(`you`)} doing?`) ) ``` - **No dependencies.** - **14 times** smaller and **2 times** faster than chalk. - Used by popular tools like PostCSS, SVGO, Stylelint, and Browserslist. - Node.js v6+ & browsers support. Support for both CJS and ESM projects. - TypeScript type declarations included. - [`NO_COLOR`](https://no-color.org/) friendly. ## Docs Read **[full docs](https://github.com/alexeyraspopov/picocolors#readme)** on GitHub. # Statuses [![NPM Version][npm-image]][npm-url] [![NPM Downloads][downloads-image]][downloads-url] [![Node.js Version][node-version-image]][node-version-url] [![Build Status][travis-image]][travis-url] [![Test Coverage][coveralls-image]][coveralls-url] HTTP status utility for node. This module provides a list of status codes and messages sourced from a few different projects: * The [IANA Status Code Registry](https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml) * The [Node.js project](https://nodejs.org/) * The [NGINX project](https://www.nginx.com/) * The [Apache HTTP Server project](https://httpd.apache.org/) ## Installation This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): ```sh $ npm install statuses ``` ## API <!-- eslint-disable no-unused-vars --> ```js var status = require('statuses') ``` ### var code = status(Integer || String) If `Integer` or `String` is a valid HTTP code or status message, then the appropriate `code` will be returned. Otherwise, an error will be thrown. <!-- eslint-disable no-undef --> ```js status(403) // => 403 status('403') // => 403 status('forbidden') // => 403 status('Forbidden') // => 403 status(306) // throws, as it's not supported by node.js ``` ### status.STATUS_CODES Returns an object which maps status codes to status messages, in the same format as the [Node.js http module](https://nodejs.org/dist/latest/docs/api/http.html#http_http_status_codes). ### status.codes Returns an array of all the status codes as `Integer`s. ### var msg = status[code] Map of `code` to `status message`. `undefined` for invalid `code`s. <!-- eslint-disable no-undef, no-unused-expressions --> ```js status[404] // => 'Not Found' ``` ### var code = status[msg] Map of `status message` to `code`. `msg` can either be title-cased or lower-cased. `undefined` for invalid `status message`s. <!-- eslint-disable no-undef, no-unused-expressions --> ```js status['not found'] // => 404 status['Not Found'] // => 404 ``` ### status.redirect[code] Returns `true` if a status code is a valid redirect status. <!-- eslint-disable no-undef, no-unused-expressions --> ```js status.redirect[200] // => undefined status.redirect[301] // => true ``` ### status.empty[code] Returns `true` if a status code expects an empty body. 
<!-- eslint-disable no-undef, no-unused-expressions --> ```js status.empty[200] // => undefined status.empty[204] // => true status.empty[304] // => true ``` ### status.retry[code] Returns `true` if you should retry the rest. <!-- eslint-disable no-undef, no-unused-expressions --> ```js status.retry[501] // => undefined status.retry[503] // => true ``` [npm-image]: https://img.shields.io/npm/v/statuses.svg [npm-url]: https://npmjs.org/package/statuses [node-version-image]: https://img.shields.io/node/v/statuses.svg [node-version-url]: https://nodejs.org/en/download [travis-image]: https://img.shields.io/travis/jshttp/statuses.svg [travis-url]: https://travis-ci.org/jshttp/statuses [coveralls-image]: https://img.shields.io/coveralls/jshttp/statuses.svg [coveralls-url]: https://coveralls.io/r/jshttp/statuses?branch=master [downloads-image]: https://img.shields.io/npm/dm/statuses.svg [downloads-url]: https://npmjs.org/package/statuses # has > Object.prototype.hasOwnProperty.call shortcut ## Installation ```sh npm install --save has ``` ## Usage ```js var has = require('has'); has({}, 'hasOwnProperty'); // false has(Object.prototype, 'hasOwnProperty'); // true ``` [![npm][npm]][npm-url] [![node][node]][node-url] [![deps][deps]][deps-url] [![test][test]][test-url] [![coverage][cover]][cover-url] [![code style][style]][style-url] [![chat][chat]][chat-url] <div align="center"> <img width="100" height="100" title="Load Options" src="http://michael-ciniawsky.github.io/postcss-load-options/logo.svg"> <a href="https://github.com/postcss/postcss"> <img width="110" height="110" title="PostCSS" src="http://postcss.github.io/postcss/logo.svg" hspace="10"> </a> <img width="100" height="100" title="Load Plugins" src="http://michael-ciniawsky.github.io/postcss-load-plugins/logo.svg"> <h1>Load Config</h1> </div> <h2 align="center">Install</h2> ```bash npm i -D postcss-load-config ``` <h2 align="center">Usage</h2> ```bash npm i -S|-D postcss-plugin ``` Install all required PostCSS plugins and save them to your **package.json** `dependencies`/`devDependencies` Then create a PostCSS config file by choosing one of the following formats ### `package.json` Create a **`postcss`** section in your project's **`package.json`** ``` Project (Root) |– client |– public | |- package.json ``` ```json { "postcss": { "parser": "sugarss", "map": false, "plugins": { "postcss-plugin": {} } } } ``` ### `.postcssrc` Create a **`.postcssrc`** file in JSON or YAML format > ℹ️ It's recommended to use an extension (e.g **`.postcssrc.json`** or **`.postcssrc.yml`**) instead of `.postcssrc` ``` Project (Root) |– client |– public | |- (.postcssrc|.postcssrc.json|.postcssrc.yml) |- package.json ``` **`.postcssrc.json`** ```json { "parser": "sugarss", "map": false, "plugins": { "postcss-plugin": {} } } ``` **`.postcssrc.yml`** ```yaml parser: sugarss map: false plugins: postcss-plugin: {} ``` ### `.postcssrc.js` or `postcss.config.js` You may need some logic within your config. In this case create JS file named **`.postcssrc.js`** or **`postcss.config.js`** ``` Project (Root) |– client |– public | |- (.postcssrc.js|postcss.config.js) |- package.json ``` You can export the config as an `{Object}` **.postcssrc.js** ```js module.exports = { parser: 'sugarss', map: false, plugins: { 'postcss-plugin': {} } } ``` Or export a `{Function}` that returns the config (more about the `ctx` param below) **.postcssrc.js** ```js module.exports = (ctx) => ({ parser: ctx.parser ? 'sugarss' : false, map: ctx.env === 'development' ? 
ctx.map : false, plugins: { 'postcss-plugin': ctx.options.plugin } }) ``` Plugins can be loaded either using an `{Object}` or an `{Array}` #### `{Object}` **.postcssrc.js** ```js module.exports = ({ env }) => ({ ...options, plugins: { 'postcss-plugin': env === 'production' ? {} : false } }) ``` > ℹ️ When using an `{Object}`, the key can be a Node.js module name, a path to a JavaScript file that is relative to the directory of the PostCSS config file, or an absolute path to a JavaScript file. #### `{Array}` **.postcssrc.js** ```js module.exports = ({ env }) => ({ ...options, plugins: [ env === 'production' ? require('postcss-plugin')() : false ] }) ``` > :warning: When using an `{Array}`, make sure to `require()` each plugin <h2 align="center">Options</h2> |Name|Type|Default|Description| |:--:|:--:|:-----:|:----------| |[**`to`**](#to)|`{String}`|`undefined`|Destination File Path| |[**`map`**](#map)|`{String\|Object}`|`false`|Enable/Disable Source Maps| |[**`from`**](#from)|`{String}`|`undefined`|Source File Path| |[**`parser`**](#parser)|`{String\|Function}`|`false`|Custom PostCSS Parser| |[**`syntax`**](#syntax)|`{String\|Function}`|`false`|Custom PostCSS Syntax| |[**`stringifier`**](#stringifier)|`{String\|Function}`|`false`|Custom PostCSS Stringifier| ### `parser` **.postcssrc.js** ```js module.exports = { parser: 'sugarss' } ``` ### `syntax` **.postcssrc.js** ```js module.exports = { syntax: 'postcss-scss' } ``` ### `stringifier` **.postcssrc.js** ```js module.exports = { stringifier: 'midas' } ``` ### [**`map`**](https://github.com/postcss/postcss/blob/master/docs/source-maps.md) **.postcssrc.js** ```js module.exports = { map: 'inline' } ``` > :warning: In most cases `options.from` && `options.to` are set by the third-party which integrates this package (CLI, gulp, webpack). It's unlikely one needs to set/use `options.from` && `options.to` within a config file. Unless you're a third-party plugin author using this module and its Node API directly **dont't set `options.from` && `options.to` yourself** ### `to` ```js module.exports = { to: 'path/to/dest.css' } ``` ### `from` ```js module.exports = { from: 'path/to/src.css' } ``` <h2 align="center">Plugins</h2> ### `{} || null` The plugin will be loaded with defaults ```js 'postcss-plugin': {} || null ``` **.postcssrc.js** ```js module.exports = { plugins: { 'postcss-plugin': {} || null } } ``` > :warning: `{}` must be an **empty** `{Object}` literal ### `{Object}` The plugin will be loaded with given options ```js 'postcss-plugin': { option: '', option: '' } ``` **.postcssrc.js** ```js module.exports = { plugins: { 'postcss-plugin': { option: '', option: '' } } } ``` ### `false` The plugin will not be loaded ```js 'postcss-plugin': false ``` **.postcssrc.js** ```js module.exports = { plugins: { 'postcss-plugin': false } } ``` ### `Ordering` Plugin **execution order** is determined by declaration in the plugins section (**top-down**) ```js { plugins: { 'postcss-plugin': {}, // [0] 'postcss-plugin': {}, // [1] 'postcss-plugin': {} // [2] } } ``` <h2 align="center">Context</h2> When using a `{Function}` (`postcss.config.js` or `.postcssrc.js`), it's possible to pass context to `postcss-load-config`, which will be evaluated while loading your config. By default `ctx.env (process.env.NODE_ENV)` and `ctx.cwd (process.cwd())` are available on the `ctx` `{Object}` > ℹ️ Most third-party integrations add additional properties to the `ctx` (e.g `postcss-loader`). 
Check the specific module's README for more information about what is available on the respective `ctx` <h2 align="center">Examples</h2> **postcss.config.js** ```js module.exports = (ctx) => ({ parser: ctx.parser ? 'sugarss' : false, map: ctx.env === 'development' ? ctx.map : false, plugins: { 'postcss-import': {}, 'postcss-nested': {}, cssnano: ctx.env === 'production' ? {} : false } }) ``` <div align="center"> <img width="80" height="80" src="https://worldvectorlogo.com/logos/nodejs-icon.svg"> </div> ```json "scripts": { "build": "NODE_ENV=production node postcss", "start": "NODE_ENV=development node postcss" } ``` ### `Async` ```js const { readFileSync } = require('fs') const postcss = require('postcss') const postcssrc = require('postcss-load-config') const css = readFileSync('index.sss', 'utf8') const ctx = { parser: true, map: 'inline' } postcssrc(ctx).then(({ plugins, options }) => { postcss(plugins) .process(css, options) .then((result) => console.log(result.css)) }) ``` ### `Sync` ```js const { readFileSync } = require('fs') const postcss = require('postcss') const postcssrc = require('postcss-load-config') const css = readFileSync('index.sss', 'utf8') const ctx = { parser: true, map: 'inline' } const { plugins, options } = postcssrc.sync(ctx) ``` <div align="center"> <img width="80" height="80" halign="10" src="https://worldvectorlogo.com/logos/gulp.svg"> </div> ```json "scripts": { "build": "NODE_ENV=production gulp", "start": "NODE_ENV=development gulp" } ``` ```js const { task, src, dest, series, watch } = require('gulp') const postcss = require('gulp-postcssrc') const css = () => { src('src/*.css') .pipe(postcss()) .pipe(dest('dest')) }) task('watch', () => { watch(['src/*.css', 'postcss.config.js'], css) }) task('default', series(css, 'watch')) ``` <div align="center"> <img width="80" height="80" src="https://cdn.rawgit.com/webpack/media/e7485eb2/logo/icon.svg"> </div> ```json "scripts": { "build": "NODE_ENV=production webpack", "start": "NODE_ENV=development webpack-dev-server" } ``` **webpack.config.js** ```js module.exports = (env) => ({ module: { rules: [ { test: /\.css$/, use: [ 'style-loader', 'css-loader', 'postcss-loader' ] } ] } }) ``` <h2 align="center">Maintainers</h2> <table> <tbody> <tr> <td align="center"> <img width="150" height="150" src="https://github.com/michael-ciniawsky.png?v=3&s=150"> <br /> <a href="https://github.com/michael-ciniawsky">Michael Ciniawsky</a> </td> <td align="center"> <img width="150" height="150" src="https://github.com/ertrzyiks.png?v=3&s=150"> <br /> <a href="https://github.com/ertrzyiks">Mateusz Derks</a> </td> </tr> <tbody> </table> <h2 align="center">Contributors</h2> <table> <tbody> <tr> <td align="center"> <img width="150" height="150" src="https://github.com/sparty02.png?v=3&s=150"> <br /> <a href="https://github.com/sparty02">Ryan Dunckel</a> </td> <td align="center"> <img width="150" height="150" src="https://github.com/pcgilday.png?v=3&s=150"> <br /> <a href="https://github.com/pcgilday">Patrick Gilday</a> </td> <td align="center"> <img width="150" height="150" src="https://github.com/daltones.png?v=3&s=150"> <br /> <a href="https://github.com/daltones">Dalton Santos</a> </td> <td align="center"> <img width="150" height="150" src="https://github.com/fwouts.png?v=3&s=150"> <br /> <a href="https://github.com/fwouts">François Wouts</a> </td> </tr> <tbody> </table> [npm]: https://img.shields.io/npm/v/postcss-load-config.svg [npm-url]: https://npmjs.com/package/postcss-load-config [node]: 
https://img.shields.io/node/v/postcss-load-plugins.svg [node-url]: https://nodejs.org/ [deps]: https://david-dm.org/michael-ciniawsky/postcss-load-config.svg [deps-url]: https://david-dm.org/michael-ciniawsky/postcss-load-config [test]: http://img.shields.io/travis/michael-ciniawsky/postcss-load-config.svg [test-url]: https://travis-ci.org/michael-ciniawsky/postcss-load-config [cover]: https://coveralls.io/repos/github/michael-ciniawsky/postcss-load-config/badge.svg [cover-url]: https://coveralls.io/github/michael-ciniawsky/postcss-load-config [style]: https://img.shields.io/badge/code%20style-standard-yellow.svg [style-url]: http://standardjs.com/ [chat]: https://img.shields.io/gitter/room/postcss/postcss.svg [chat-url]: https://gitter.im/postcss/postcss ## Security Contact To report a security vulnerability, please use the [Tidelift security contact]. Tidelift will coordinate the fix and disclosure. [Tidelift security contact]: https://tidelift.com/security <p align="center"> <a href="https://tailwindcss.com/#gh-light-mode-only" target="_blank"> <img src="./.github/logo-light.svg" alt="Tailwind CSS" width="350" height="70"> </a> <a href="https://tailwindcss.com/#gh-dark-mode-only" target="_blank"> <img src="./.github/logo-dark.svg" alt="Tailwind CSS" width="350" height="70"> </a> </p> <p align="center"> A utility-first CSS framework for rapidly building custom user interfaces. </p> <p align="center"> <a href="https://github.com/tailwindlabs/tailwindcss/actions"><img src="https://img.shields.io/github/workflow/status/tailwindlabs/tailwindcss/Node.js%20CI" alt="Build Status"></a> <a href="https://www.npmjs.com/package/tailwindcss"><img src="https://img.shields.io/npm/dt/tailwindcss.svg" alt="Total Downloads"></a> <a href="https://github.com/tailwindcss/tailwindcss/releases"><img src="https://img.shields.io/npm/v/tailwindcss.svg" alt="Latest Release"></a> <a href="https://github.com/tailwindcss/tailwindcss/blob/master/LICENSE"><img src="https://img.shields.io/npm/l/tailwindcss.svg" alt="License"></a> </p> ------ ## Documentation For full documentation, visit [tailwindcss.com](https://tailwindcss.com/). ## Community For help, discussion about best practices, or any other conversation that would benefit from being searchable: [Discuss Tailwind CSS on GitHub](https://github.com/tailwindcss/tailwindcss/discussions) For casual chit-chat with others using the framework: [Join the Tailwind CSS Discord Server](https://discord.gg/7NF8GNe) ## Contributing If you're interested in contributing to Tailwind CSS, please read our [contributing docs](https://github.com/tailwindcss/tailwindcss/blob/master/.github/CONTRIBUTING.md) **before submitting a pull request**. didYouMean.js - A simple JavaScript matching engine =================================================== [Available on GitHub](https://github.com/dcporter/didyoumean.js). A super-simple, highly optimized JS library for matching human-quality input to a list of potential matches. You can use it to suggest a misspelled command-line utility option to a user, or to offer links to nearby valid URLs on your 404 page. (The examples below are taken from a personal project, my [HTML5 business card](http://dcporter.aws.af.cm/me), which uses didYouMean.js to suggest correct URLs from misspelled ones, such as [dcporter.aws.af.cm/me/instagarm](http://dcporter.aws.af.cm/me/instagarm).) Uses the [Levenshtein distance algorithm](https://en.wikipedia.org/wiki/Levenshtein_distance). didYouMean.js works in the browser as well as in node.js. 
To install it for use in node: ``` npm install didyoumean ``` Examples -------- Matching against a list of strings: ``` var input = 'insargrm' var list = ['facebook', 'twitter', 'instagram', 'linkedin']; console.log(didYouMean(input, list)); > 'instagram' // The method matches 'insargrm' to 'instagram'. input = 'google plus'; console.log(didYouMean(input, list)); > null // The method was unable to find 'google plus' in the list of options. ``` Matching against a list of objects: ``` var input = 'insargrm'; var list = [ { id: 'facebook' }, { id: 'twitter' }, { id: 'instagram' }, { id: 'linkedin' } ]; var key = 'id'; console.log(didYouMean(input, list, key)); > 'instagram' // The method returns the matching value. didYouMean.returnWinningObject = true; console.log(didYouMean(input, list, key)); > { id: 'instagram' } // The method returns the matching object. ``` didYouMean(str, list, [key]) ---------------------------- - str: The string input to match. - list: An array of strings or objects to match against. - key (OPTIONAL): If your list array contains objects, you must specify the key which contains the string to match against. Returns: the closest matching string, or null if no strings exceed the threshold. Options ------- Options are set on the didYouMean function object. You may change them at any time. ### threshold By default, the method will only return strings whose edit distance is less than 40% (0.4x) of their length. For example, if a ten-letter string is five edits away from its nearest match, the method will return null. You can control this by setting the "threshold" value on the didYouMean function. For example, to set the edit distance threshold to 50% of the input string's length: ``` didYouMean.threshold = 0.5; ``` To return the nearest match no matter the threshold, set this value to null. ### thresholdAbsolute This option behaves the same as threshold, but instead takes an integer number of edit steps. For example, if thresholdAbsolute is set to 20 (the default), then the method will only return strings whose edit distance is less than 20. Both options apply. ### caseSensitive By default, the method will perform case-insensitive comparisons. If you wish to force case sensitivity, set the "caseSensitive" value to true: ``` didYouMean.caseSensitive = true; ``` ### nullResultValue By default, the method will return null if there is no sufficiently close match. You can change this value here. ### returnWinningObject By default, the method will return the winning string value (if any). If your list contains objects rather than strings, you may set returnWinningObject to true. ``` didYouMean.returnWinningObject = true; ``` This option has no effect on lists of strings. ### returnFirstMatch By default, the method will search all values and return the closest match. If you're simply looking for a "good- enough" match, you can set your thresholds appropriately and set returnFirstMatch to true to substantially speed things up. License ------- didYouMean copyright (c) 2013-2014 Dave Porter. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License [here](http://www.apache.org/licenses/LICENSE-2.0). Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. # base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. **WARNING:** This module is **NOT RFC3548** compliant, it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. ## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base. Base | Alphabet ------------- | ------------- 2 | `01` 8 | `01234567` 11 | `0123456789a` 16 | `0123456789abcdef` 32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ` 32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32) 36 | `0123456789abcdefghijklmnopqrstuvwxyz` 58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz` 62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ` 64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/` 67 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~` ## How it works It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. Then for every leading zero in the input (not significant as a number) it will encode as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base. For example, a base58 alphabet packs roughly 5.858 bits per character. This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character. While unusual, this does mean that no padding is required and it works for bases like 43. ## LICENSE [MIT](LICENSE) A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets. # braces [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/braces.svg?style=flat)](https://www.npmjs.com/package/braces) [![NPM monthly downloads](https://img.shields.io/npm/dm/braces.svg?style=flat)](https://npmjs.org/package/braces) [![NPM total downloads](https://img.shields.io/npm/dt/braces.svg?style=flat)](https://npmjs.org/package/braces) [![Linux Build Status](https://img.shields.io/travis/micromatch/braces.svg?style=flat&label=Travis)](https://travis-ci.org/micromatch/braces) > Bash-like brace expansion, implemented in JavaScript. 
Safer than other brace expansion libs, with complete support for the Bash 4.3 braces specification, without sacrificing speed. Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save braces ``` ## v3.0.0 Released!! See the [changelog](CHANGELOG.md) for details. ## Why use braces? Brace patterns make globs more powerful by adding the ability to match specific ranges and sequences of characters. * **Accurate** - complete support for the [Bash 4.3 Brace Expansion](www.gnu.org/software/bash/) specification (passes all of the Bash braces tests) * **[fast and performant](#benchmarks)** - Starts fast, runs fast and [scales well](#performance) as patterns increase in complexity. * **Organized code base** - The parser and compiler are easy to maintain and update when edge cases crop up. * **Well-tested** - Thousands of test assertions, and passes all of the Bash, minimatch, and [brace-expansion](https://github.com/juliangruber/brace-expansion) unit tests (as of the date this was written). * **Safer** - You shouldn't have to worry about users defining aggressive or malicious brace patterns that can break your application. Braces takes measures to prevent malicious regex that can be used for DDoS attacks (see [catastrophic backtracking](https://www.regular-expressions.info/catastrophic.html)). * [Supports lists](#lists) - (aka "sets") `a/{b,c}/d` => `['a/b/d', 'a/c/d']` * [Supports sequences](#sequences) - (aka "ranges") `{01..03}` => `['01', '02', '03']` * [Supports steps](#steps) - (aka "increments") `{2..10..2}` => `['2', '4', '6', '8', '10']` * [Supports escaping](#escaping) - To prevent evaluation of special characters. ## Usage The main export is a function that takes one or more brace `patterns` and `options`. ```js const braces = require('braces'); // braces(patterns[, options]); console.log(braces(['{01..05}', '{a..e}'])); //=> ['(0[1-5])', '([a-e])'] console.log(braces(['{01..05}', '{a..e}'], { expand: true })); //=> ['01', '02', '03', '04', '05', 'a', 'b', 'c', 'd', 'e'] ``` ### Brace Expansion vs. Compilation By default, brace patterns are compiled into strings that are optimized for creating regular expressions and matching. 
**Compiled** ```js console.log(braces('a/{x,y,z}/b')); //=> ['a/(x|y|z)/b'] console.log(braces(['a/{01..20}/b', 'a/{1..5}/b'])); //=> [ 'a/(0[1-9]|1[0-9]|20)/b', 'a/([1-5])/b' ] ``` **Expanded** Enable brace expansion by setting the `expand` option to true, or by using [braces.expand()](#expand) (returns an array similar to what you'd expect from Bash, or `echo {1..5}`, or [minimatch](https://github.com/isaacs/minimatch)): ```js console.log(braces('a/{x,y,z}/b', { expand: true })); //=> ['a/x/b', 'a/y/b', 'a/z/b'] console.log(braces.expand('{01..10}')); //=> ['01','02','03','04','05','06','07','08','09','10'] ``` ### Lists Expand lists (like Bash "sets"): ```js console.log(braces('a/{foo,bar,baz}/*.js')); //=> ['a/(foo|bar|baz)/*.js'] console.log(braces.expand('a/{foo,bar,baz}/*.js')); //=> ['a/foo/*.js', 'a/bar/*.js', 'a/baz/*.js'] ``` ### Sequences Expand ranges of characters (like Bash "sequences"): ```js console.log(braces.expand('{1..3}')); // ['1', '2', '3'] console.log(braces.expand('a/{1..3}/b')); // ['a/1/b', 'a/2/b', 'a/3/b'] console.log(braces('{a..c}', { expand: true })); // ['a', 'b', 'c'] console.log(braces('foo/{a..c}', { expand: true })); // ['foo/a', 'foo/b', 'foo/c'] // supports zero-padded ranges console.log(braces('a/{01..03}/b')); //=> ['a/(0[1-3])/b'] console.log(braces('a/{001..300}/b')); //=> ['a/(0{2}[1-9]|0[1-9][0-9]|[12][0-9]{2}|300)/b'] ``` See [fill-range](https://github.com/jonschlinkert/fill-range) for all available range-expansion options. ### Steppped ranges Steps, or increments, may be used with ranges: ```js console.log(braces.expand('{2..10..2}')); //=> ['2', '4', '6', '8', '10'] console.log(braces('{2..10..2}')); //=> ['(2|4|6|8|10)'] ``` When the [.optimize](#optimize) method is used, or [options.optimize](#optionsoptimize) is set to true, sequences are passed to [to-regex-range](https://github.com/jonschlinkert/to-regex-range) for expansion. ### Nesting Brace patterns may be nested. The results of each expanded string are not sorted, and left to right order is preserved. **"Expanded" braces** ```js console.log(braces.expand('a{b,c,/{x,y}}/e')); //=> ['ab/e', 'ac/e', 'a/x/e', 'a/y/e'] console.log(braces.expand('a/{x,{1..5},y}/c')); //=> ['a/x/c', 'a/1/c', 'a/2/c', 'a/3/c', 'a/4/c', 'a/5/c', 'a/y/c'] ``` **"Optimized" braces** ```js console.log(braces('a{b,c,/{x,y}}/e')); //=> ['a(b|c|/(x|y))/e'] console.log(braces('a/{x,{1..5},y}/c')); //=> ['a/(x|([1-5])|y)/c'] ``` ### Escaping **Escaping braces** A brace pattern will not be expanded or evaluted if _either the opening or closing brace is escaped_: ```js console.log(braces.expand('a\\{d,c,b}e')); //=> ['a{d,c,b}e'] console.log(braces.expand('a{d,c,b\\}e')); //=> ['a{d,c,b}e'] ``` **Escaping commas** Commas inside braces may also be escaped: ```js console.log(braces.expand('a{b\\,c}d')); //=> ['a{b,c}d'] console.log(braces.expand('a{d\\,c,b}e')); //=> ['ad,ce', 'abe'] ``` **Single items** Following bash conventions, a brace pattern is also not expanded when it contains a single character: ```js console.log(braces.expand('a{b}c')); //=> ['a{b}c'] ``` ## Options ### options.maxLength **Type**: `Number` **Default**: `65,536` **Description**: Limit the length of the input string. Useful when the input string is generated or your application allows users to pass a string, et cetera. 
```js
console.log(braces('a/{b,c}/d', { maxLength: 3 })); //=> throws an error
```

### options.expand

**Type**: `Boolean`

**Default**: `undefined`

**Description**: Generate an "expanded" brace pattern (alternatively you can use the `braces.expand()` method, which does the same thing).

```js
console.log(braces('a/{b,c}/d', { expand: true })); //=> [ 'a/b/d', 'a/c/d' ]
```

### options.nodupes

**Type**: `Boolean`

**Default**: `undefined`

**Description**: Remove duplicates from the returned array.

### options.rangeLimit

**Type**: `Number`

**Default**: `1000`

**Description**: To prevent malicious patterns from being passed by users, an error is thrown when `braces.expand()` is used or `options.expand` is true and the generated range will exceed the `rangeLimit`. You can customize `options.rangeLimit` or set it to `Infinity` to disable this altogether.

**Examples**

```js
// pattern exceeds the "rangeLimit", so it's optimized automatically
console.log(braces.expand('{1..1000}')); //=> ['([1-9]|[1-9][0-9]{1,2}|1000)']

// pattern does not exceed "rangeLimit", so it's NOT optimized
console.log(braces.expand('{1..100}'));
//=> ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100']
```

### options.transform

**Type**: `Function`

**Default**: `undefined`

**Description**: Customize range expansion.

**Example: Transforming non-numeric values**

```js
const alpha = braces.expand('x/{a..e}/y', {
  transform(value, index) {
    // When non-numeric values are passed, "value" is a character code.
    return 'foo/' + String.fromCharCode(value) + '-' + index;
  }
});
console.log(alpha);
//=> [ 'x/foo/a-0/y', 'x/foo/b-1/y', 'x/foo/c-2/y', 'x/foo/d-3/y', 'x/foo/e-4/y' ]
```

**Example: Transforming numeric values**

```js
const numeric = braces.expand('{1..5}', {
  transform(value) {
    // when numeric values are passed, "value" is a number
    return 'foo/' + value * 2;
  }
});
console.log(numeric);
//=> [ 'foo/2', 'foo/4', 'foo/6', 'foo/8', 'foo/10' ]
```

### options.quantifiers

**Type**: `Boolean`

**Default**: `undefined`

**Description**: In regular expressions, quantifiers can be used to specify how many times a token can be repeated. For example, `a{1,3}` will match the letter `a` one to three times.

Unfortunately, regex quantifiers happen to share the same syntax as [Bash lists](#lists).

The `quantifiers` option tells braces to detect when [regex quantifiers](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp#quantifiers) are defined in the given pattern, and not to try to expand them as lists.
**Examples**

```js
const braces = require('braces');
console.log(braces('a/b{1,3}/{x,y,z}'));
//=> [ 'a/b(1|3)/(x|y|z)' ]
console.log(braces('a/b{1,3}/{x,y,z}', {quantifiers: true}));
//=> [ 'a/b{1,3}/(x|y|z)' ]
console.log(braces('a/b{1,3}/{x,y,z}', {quantifiers: true, expand: true}));
//=> [ 'a/b{1,3}/x', 'a/b{1,3}/y', 'a/b{1,3}/z' ]
```

### options.unescape

**Type**: `Boolean`

**Default**: `undefined`

**Description**: Strip backslashes that were used for escaping from the result.

## What is "brace expansion"?

Brace expansion is a type of parameter expansion that was made popular by unix shells for generating lists of strings, as well as regex-like matching when used alongside wildcards (globs).

In addition to "expansion", braces are also used for matching. In other words:

* [brace expansion](#brace-expansion) is for generating new lists
* [brace matching](#brace-matching) is for filtering existing lists

<details>
<summary><strong>More about brace expansion</strong> (click to expand)</summary>

There are two main types of brace expansion:

1. **lists**: which are defined using comma-separated values inside curly braces: `{a,b,c}`
2. **sequences**: which are defined using a starting value and an ending value, separated by two dots: `a{1..3}b`. Optionally, a third argument may be passed to define a "step" or increment to use: `a{1..100..10}b`. These are also sometimes referred to as "ranges".

Here are some example brace patterns to illustrate how they work:

**Sets**

```
{a,b,c}       => a b c
{a,b,c}{1,2}  => a1 a2 b1 b2 c1 c2
```

**Sequences**

```
{1..9}        => 1 2 3 4 5 6 7 8 9
{4..-4}       => 4 3 2 1 0 -1 -2 -3 -4
{1..20..3}    => 1 4 7 10 13 16 19
{a..j}        => a b c d e f g h i j
{j..a}        => j i h g f e d c b a
{a..z..3}     => a d g j m p s v y
```

**Combination**

Sets and sequences can be mixed together or used along with any other strings.

```
{a,b,c}{1..3}     => a1 a2 a3 b1 b2 b3 c1 c2 c3
foo/{a,b,c}/bar   => foo/a/bar foo/b/bar foo/c/bar
```

The fact that braces can be "expanded" from relatively simple patterns makes them ideal for quickly generating test fixtures, file paths, and similar use cases.

## Brace matching

In addition to _expansion_, brace patterns are also useful for performing regular-expression-like matching.

For example, the pattern `foo/{1..3}/bar` would match any of the following strings:

```
foo/1/bar
foo/2/bar
foo/3/bar
```

But not:

```
baz/1/qux
baz/2/qux
baz/3/qux
```

Braces can also be combined with [glob patterns](https://github.com/jonschlinkert/micromatch) to perform more advanced wildcard matching. For example, the pattern `*/{1..3}/*` would match any of the following strings:

```
foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux
```

## Brace matching pitfalls

Although brace patterns offer a user-friendly way of matching ranges or sets of strings, there are also some major disadvantages and potential risks you should be aware of.

### tldr

**"brace bombs"**

* brace expansion can eat up a huge amount of processing resources
* as brace patterns increase _linearly in size_, the system resources required to expand the pattern increase exponentially
* users can accidentally (or intentionally) exhaust your system's resources resulting in the equivalent of a DoS attack (bonus: no programming knowledge is required!)

For a more detailed explanation with examples, see the [geometric complexity](#geometric-complexity) section.

### The solution

Jump to the [performance section](#performance) to see how Braces solves this problem in comparison to other libraries.
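To make the idea concrete, here is a brief sketch of that approach (the exact compiled output varies by braces version, so the results shown are indicative rather than exact):

```js
const braces = require('braces');

// Compiling returns a single, compact regex-style pattern,
// no matter how large the range is.
const compiled = braces('{1..1000000}');
console.log(compiled.length); //=> 1

// Expanding the same range would produce a million strings, which is
// why expansion of user-supplied patterns is guarded by options.rangeLimit.
console.log(braces('{1..5}', { expand: true })); //=> ['1', '2', '3', '4', '5']
```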
### Geometric complexity

At minimum, brace patterns with sets limited to two elements have quadratic or `O(n^2)` complexity. But the complexity of the algorithm increases exponentially as the number of sets, _and elements per set_, increases, which is `O(n^c)`.

For example, the following sets demonstrate quadratic (`O(n^2)`) complexity:

```
{1,2}{3,4} => (2X2) => 13 14 23 24
{1,2}{3,4}{5,6} => (2X2X2) => 135 136 145 146 235 236 245 246
```

But add an element to a set, and we get an n-fold Cartesian product with `O(n^c)` complexity:

```
{1,2,3}{4,5,6}{7,8,9} => (3X3X3) => 147 148 149 157 158 159 167 168 169 247 248 249 257 258 259 267 268 269 347 348 349 357 358 359 367 368 369
```

Now, imagine how this complexity grows given that each element is an n-tuple:

```
{1..100}{1..100} => (100X100) => 10,000 elements (38.4 kB)
{1..100}{1..100}{1..100} => (100X100X100) => 1,000,000 elements (5.76 MB)
```

Although these examples are clearly contrived, they demonstrate how brace patterns can quickly grow out of control.

**More information**

Interested in learning more about brace expansion?

* [linuxjournal/bash-brace-expansion](http://www.linuxjournal.com/content/bash-brace-expansion)
* [rosettacode/Brace_expansion](https://rosettacode.org/wiki/Brace_expansion)
* [cartesian product](https://en.wikipedia.org/wiki/Cartesian_product)

</details>

## Performance

Braces is not only screaming fast, it's also more accurate than other brace expansion libraries.

### Better algorithms

Fortunately there is a solution to the ["brace bomb" problem](#brace-matching-pitfalls): _don't expand brace patterns into an array when they're used for matching_.

Instead, convert the pattern into an optimized regular expression. This is easier said than done, and braces is the only library that does this currently.

**The proof is in the numbers**

Minimatch gets exponentially slower as patterns increase in complexity; braces does not. The following results were generated using `braces()` and `minimatch.braceExpand()`, respectively.

| **Pattern** | **braces** | **[minimatch][]** |
| --- | --- | --- |
| `{1..9007199254740991}`[^1] | `298 B` (5ms 459μs) | N/A (freezes) |
| `{1..1000000000000000}` | `41 B` (1ms 15μs) | N/A (freezes) |
| `{1..100000000000000}` | `40 B` (890μs) | N/A (freezes) |
| `{1..10000000000000}` | `39 B` (2ms 49μs) | N/A (freezes) |
| `{1..1000000000000}` | `38 B` (608μs) | N/A (freezes) |
| `{1..100000000000}` | `37 B` (397μs) | N/A (freezes) |
| `{1..10000000000}` | `35 B` (983μs) | N/A (freezes) |
| `{1..1000000000}` | `34 B` (798μs) | N/A (freezes) |
| `{1..100000000}` | `33 B` (733μs) | N/A (freezes) |
| `{1..10000000}` | `32 B` (5ms 632μs) | `78.89 MB` (16s 388ms 569μs) |
| `{1..1000000}` | `31 B` (1ms 381μs) | `6.89 MB` (1s 496ms 887μs) |
| `{1..100000}` | `30 B` (950μs) | `588.89 kB` (146ms 921μs) |
| `{1..10000}` | `29 B` (1ms 114μs) | `48.89 kB` (14ms 187μs) |
| `{1..1000}` | `28 B` (760μs) | `3.89 kB` (1ms 453μs) |
| `{1..100}` | `22 B` (345μs) | `291 B` (196μs) |
| `{1..10}` | `10 B` (533μs) | `20 B` (37μs) |
| `{1..3}` | `7 B` (190μs) | `5 B` (27μs) |

### Faster algorithms

When you need expansion, braces is still much faster.
_(the following results were generated using `braces.expand()` and `minimatch.braceExpand()`, respectively)_ | **Pattern** | **braces** | **[minimatch][]** | | --- | --- | --- | | `{1..10000000}` | `78.89 MB` (2s 698ms 642μs) | `78.89 MB` (18s 601ms 974μs) | | `{1..1000000}` | `6.89 MB` (458ms 576μs) | `6.89 MB` (1s 491ms 621μs) | | `{1..100000}` | `588.89 kB` (20ms 728μs) | `588.89 kB` (156ms 919μs) | | `{1..10000}` | `48.89 kB` (2ms 202μs) | `48.89 kB` (13ms 641μs) | | `{1..1000}` | `3.89 kB` (1ms 796μs) | `3.89 kB` (1ms 958μs) | | `{1..100}` | `291 B` (424μs) | `291 B` (211μs) | | `{1..10}` | `20 B` (487μs) | `20 B` (72μs) | | `{1..3}` | `5 B` (166μs) | `5 B` (27μs) | If you'd like to run these comparisons yourself, see [test/support/generate.js](test/support/generate.js). ## Benchmarks ### Running benchmarks Install dev dependencies: ```bash npm i -d && npm benchmark ``` ### Latest results Braces is more accurate, without sacrificing performance. ```bash # range (expanded) braces x 29,040 ops/sec ±3.69% (91 runs sampled)) minimatch x 4,735 ops/sec ±1.28% (90 runs sampled) # range (optimized for regex) braces x 382,878 ops/sec ±0.56% (94 runs sampled) minimatch x 1,040 ops/sec ±0.44% (93 runs sampled) # nested ranges (expanded) braces x 19,744 ops/sec ±2.27% (92 runs sampled)) minimatch x 4,579 ops/sec ±0.50% (93 runs sampled) # nested ranges (optimized for regex) braces x 246,019 ops/sec ±2.02% (93 runs sampled) minimatch x 1,028 ops/sec ±0.39% (94 runs sampled) # set (expanded) braces x 138,641 ops/sec ±0.53% (95 runs sampled) minimatch x 219,582 ops/sec ±0.98% (94 runs sampled) # set (optimized for regex) braces x 388,408 ops/sec ±0.41% (95 runs sampled) minimatch x 44,724 ops/sec ±0.91% (89 runs sampled) # nested sets (expanded) braces x 84,966 ops/sec ±0.48% (94 runs sampled) minimatch x 140,720 ops/sec ±0.37% (95 runs sampled) # nested sets (optimized for regex) braces x 263,340 ops/sec ±2.06% (92 runs sampled) minimatch x 28,714 ops/sec ±0.40% (90 runs sampled) ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Contributors | **Commits** | **Contributor** | | --- | --- | | 197 | [jonschlinkert](https://github.com/jonschlinkert) | | 4 | [doowb](https://github.com/doowb) | | 1 | [es128](https://github.com/es128) | | 1 | [eush77](https://github.com/eush77) | | 1 | [hemanth](https://github.com/hemanth) | | 1 | [wtgtybhertgeghgtwtg](https://github.com/wtgtybhertgeghgtwtg) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). 
Released under the [MIT License](LICENSE).

***

_This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on April 08, 2019._

# merge2

Merge multiple streams into one stream in sequence or parallel.

[![NPM version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Downloads][downloads-image]][downloads-url]

## Install

Install with [npm](https://npmjs.org/package/merge2)

```sh
npm install merge2
```

## Usage

```js
const gulp = require('gulp')
const merge2 = require('merge2')
const concat = require('gulp-concat')
const minifyHtml = require('gulp-minify-html')
const ngtemplate = require('gulp-ngtemplate')

gulp.task('app-js', function () {
  return merge2(
    gulp.src('static/src/tpl/*.html')
      .pipe(minifyHtml({empty: true}))
      .pipe(ngtemplate({
        module: 'genTemplates',
        standalone: true
      })
    ),
    gulp.src([
      'static/src/js/app.js',
      'static/src/js/locale_zh-cn.js',
      'static/src/js/router.js',
      'static/src/js/tools.js',
      'static/src/js/services.js',
      'static/src/js/filters.js',
      'static/src/js/directives.js',
      'static/src/js/controllers.js'
    ])
  )
    .pipe(concat('app.js'))
    .pipe(gulp.dest('static/dist/js/'))
})
```

```js
const stream = merge2([stream1, stream2], stream3, {end: false})
//...
stream.add(stream4, stream5)
//..
stream.end()
```

```js
// equal to merge2([stream1, stream2], stream3)
const stream = merge2()
stream.add([stream1, stream2])
stream.add(stream3)
```

```js
// merge order:
//   1. merge `stream1`;
//   2. merge `stream2` and `stream3` in parallel after `stream1` merged;
//   3. merge 'stream4' after `stream2` and `stream3` merged;
const stream = merge2(stream1, [stream2, stream3], stream4)

// merge order:
//   1. merge `stream5` and `stream6` in parallel after `stream4` merged;
//   2. merge 'stream7' after `stream5` and `stream6` merged;
stream.add([stream5, stream6], stream7)
```

```js
// nest merge
// equal to merge2(stream1, stream2, stream6, stream3, [stream4, stream5]);
const streamA = merge2(stream1, stream2)
const streamB = merge2(stream3, [stream4, stream5])
const stream = merge2(streamA, streamB)
streamA.add(stream6)
```

## API

```js
const merge2 = require('merge2')
```

### merge2()

### merge2(options)

### merge2(stream1, stream2, ..., streamN)

### merge2(stream1, stream2, ..., streamN, options)

### merge2(stream1, [stream2, stream3, ...], streamN, options)

Returns a duplex stream (mergedStream). Streams in an array will be merged in parallel.

### mergedStream.add(stream)

### mergedStream.add(stream1, [stream2, stream3, ...], ...)

Returns the mergedStream.

### mergedStream.on('queueDrain', function() {})

It will emit 'queueDrain' when all streams have been merged. If you set `end === false` in the options, this event gives you a notice that you should add more streams to merge or end the mergedStream.

#### stream

*option* Type: `Readable` or `Duplex` or `Transform` stream.

#### options

*option* Type: `Object`.

* **end** - `Boolean` - if `end === false` then mergedStream will not be auto ended, you should end it yourself. **Default:** `undefined`
* **pipeError** - `Boolean` - if `pipeError === true` then mergedStream will emit `error` event from source streams. **Default:** `undefined`
* **objectMode** - `Boolean` . **Default:** `true`

`objectMode` and other options (`highWaterMark`, `defaultEncoding` ...) are the same as for Node.js `Stream`.
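The `end: false` option pairs naturally with the `queueDrain` event. Below is a minimal, self-contained sketch (the stream contents are made up for illustration) of keeping the merged stream open, adding sources later, and ending it once everything queued so far has been merged:

```js
const { Readable } = require('stream')
const merge2 = require('merge2')

// Keep the merged stream open so more sources can be added later.
const merged = merge2({ end: false })

merged.add(Readable.from(['a', 'b']))
merged.add(Readable.from(['c']))

merged.on('data', chunk => console.log('chunk:', String(chunk)))

// 'queueDrain' fires when every queued stream has been merged;
// at that point we either add more sources or end the stream ourselves.
merged.on('queueDrain', () => merged.end())
```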
## License MIT © [Teambition](https://www.teambition.com) [npm-url]: https://npmjs.org/package/merge2 [npm-image]: http://img.shields.io/npm/v/merge2.svg [travis-url]: https://travis-ci.org/teambition/merge2 [travis-image]: http://img.shields.io/travis/teambition/merge2.svg [downloads-url]: https://npmjs.org/package/merge2 [downloads-image]: http://img.shields.io/npm/dm/merge2.svg?style=flat-square # Javascript Error Polyfill [![Build Status](https://travis-ci.org/inf3rno/error-polyfill.png?branch=master)](https://travis-ci.org/inf3rno/error-polyfill) Implementing the [V8 Stack Trace API](https://github.com/v8/v8/wiki/Stack-Trace-API) in non-V8 environments as much as possible ## Installation ```bash npm install error-polyfill ``` ```bash bower install error-polyfill ``` ### Environment compatibility Tested on the following environments: Windows 7 - **Node.js** 9.6 - **Chrome** 64.0 - **Firefox** 58.0 - **Internet Explorer** 10.0, 11.0 - **PhantomJS** 2.1 - **Opera** 51.0 Travis - **Node.js** 8, 9 - **Chrome** - **Firefox** - **PhantomJS** The polyfill might work on other environments too due to its adaptive design. I use [Karma](https://github.com/karma-runner/karma) with [Browserify](https://github.com/substack/node-browserify) to test the framework in browsers. ### Requirements ES5 support is required, without that the lib throws an Error and stops working. The ES5 features are tested by the [capability](https://github.com/inf3rno/capability) lib run time. Classes are created by the [o3](https://github.com/inf3rno/o3) lib. Utility functions are implemented in the [u3](https://github.com/inf3rno/u3) lib. ## API documentation ### Usage In this documentation I used the framework as follows: ```js require("error-polyfill"); // <- your code here ``` It is recommended to require the polyfill in your main script. ### Getting a past stack trace with `Error.getStackTrace` This static method is not part of the V8 Stack Trace API, but it is recommended to **use `Error.getStackTrace(throwable)` instead of `throwable.stack`** to get the stack trace of Error instances! Explanation: By non-V8 environments we cannot replace the default stack generation algorithm, so we need a workaround to generate the stack when somebody tries to access it. So the original stack string will be parsed and the result will be properly formatted by accessing the stack using the `Error.getStackTrace` method. Arguments and return values: - The `throwable` argument should be an `Error` (descendant) instance, but it can be an `Object` instance as well. - The return value is the generated `stack` of the `throwable` argument. Example: ```js try { theNotDefinedFunction(); } catch (error) { console.log(Error.getStackTrace(error)); // ReferenceError: theNotDefinedFunction is not defined // at ... // ... } ``` ### Capturing the present stack trace with `Error.captureStackTrace` The `Error.captureStackTrace(throwable [, terminator])` sets the present stack above the `terminator` on the `throwable`. Arguments and return values: - The `throwable` argument should be an instance of an `Error` descendant, but it can be an `Object` instance as well. It is recommended to use `Error` descendant instances instead of inline objects, because we can recognize them by type e.g. `error instanceof UserError`. - The optional `terminator` argument should be a `Function`. Only the calls before this function will be reported in the stack, so without a `terminator` argument, the last call in the stack will be the call of the `Error.captureStackTrace`. 
- There is no return value, the `stack` will be set on the `throwable` so you will be able to access it using `Error.getStackTrace`. The format of the stack depends on the `Error.prepareStackTrace` implementation.

Example:

```js
var UserError = function (message){
    this.name = "UserError";
    this.message = message;
    Error.captureStackTrace(this, this.constructor);
};
UserError.prototype = Object.create(Error.prototype);

function codeSmells(){
    throw new UserError("What's going on?!");
}

codeSmells();

// UserError: What's going on?!
//   at codeSmells (myModule.js:23:1)
//   ...
```

Limitations: In the current implementation the `terminator` can only be the `Error.captureStackTrace` caller function. This will change soon, but in certain conditions, e.g. by using strict mode (`"use strict";`), it is not possible to access the information necessary to implement this feature. You will get an empty `frames` array and a `warning` in the `Error.prepareStackTrace` when the stack parser encounters such conditions.

### Formatting the stack trace with `Error.prepareStackTrace`

The `Error.prepareStackTrace(throwable, frames [, warnings])` formats the stack `frames` and returns the `stack` value for `Error.captureStackTrace` or `Error.getStackTrace`. The native implementation returns a stack string, but you can override that by setting a new function value.

Arguments and return values:

- The `throwable` argument is an `Error` or `Object` instance coming from the `Error.captureStackTrace` or from the creation of a new `Error` instance. Be aware that in some environments you need to throw that instance to get a parsable stack. Without that you will get only a `warning` by trying to access the stack with `Error.getStackTrace`.
- The `frames` argument is an array of `Frame` instances. Each `frame` represents a function call in the stack. You can use these frames to build a stack string. To access information about individual frames you can use the following methods.
  - `frame.toString()` - Returns the string representation of the frame, e.g. `codeSmells (myModule.js:23:1)`.
  - `frame.getThis()` - **Cannot be supported.** Returns the context of the call, only V8 environments support this natively.
  - `frame.getTypeName()` - **Not implemented yet.** Returns the type name of the context; for the global namespace it is `Window` in Chrome.
  - `frame.getFunction()` - Returns the called function, or `undefined` in strict mode.
  - `frame.getFunctionName()` - **Not implemented yet.** Returns the name of the called function.
  - `frame.getMethodName()` - **Not implemented yet.** Returns the method name if the called function is a method of an object.
  - `frame.getFileName()` - **Not implemented yet.** Returns the file name where the function was called.
  - `frame.getLineNumber()` - **Not implemented yet.** Returns at which line the function was called in the file.
  - `frame.getColumnNumber()` - **Not implemented yet.** Returns at which column the function was called in the file. This information is not always available.
  - `frame.getEvalOrigin()` - **Not implemented yet.** Returns the origin of an `eval` call.
  - `frame.isTopLevel()` - **Not implemented yet.** Returns whether the function was called from the top level.
  - `frame.isEval()` - **Not implemented yet.** Returns whether the called function was `eval`.
  - `frame.isNative()` - **Not implemented yet.** Returns whether the called function was native.
  - `frame.isConstructor()` - **Not implemented yet.** Returns whether the called function was a constructor.
- The optional `warnings` argument contains warning messages coming from the stack parser. It is not part of the V8 Stack Trace API.
- The return value will be the stack you can access with `Error.getStackTrace(throwable)`. If it is an object, it is recommended to add a `toString` method, so you will be able to read it in the console.

Example:

```js
Error.prepareStackTrace = function (throwable, frames, warnings) {
    var string = "";
    string += throwable.name || "Error";
    string += ": " + (throwable.message || "");
    if (warnings instanceof Array)
        for (var warningIndex in warnings) {
            var warning = warnings[warningIndex];
            string += "\n # " + warning;
        }
    for (var frameIndex in frames) {
        var frame = frames[frameIndex];
        string += "\n at " + frame.toString();
    }
    return string;
};
```

### Stack trace size limits with `Error.stackTraceLimit`

**Not implemented yet.**

You can set size limits on the stack trace, so you won't have any problems because of too long stack traces.

Example:

```js
Error.stackTraceLimit = 10;
```

### Handling uncaught errors and rejections

**Not implemented yet.**

## Differences between environments and modes

Since there is no Stack Trace API standard, every browser solves this problem differently. I try to document what I've found about these differences in as much detail as possible, so it will be easier to follow the code.

Overriding the `error.stack` property with custom Stack instances

- by Node.js and Chrome the `Error.prepareStackTrace()` can override every `error.stack` automatically right at creation
- by Firefox, Internet Explorer and Opera you cannot automatically override every `error.stack` by native errors
- by PhantomJS you cannot override the `error.stack` property of native errors, it is not configurable

Capturing the current stack trace

- by Node.js, Chrome, Firefox and Opera the stack property is added by instantiating a native error
- by Node.js and Chrome the stack creation is lazy loaded and cached, so the `Error.prepareStackTrace()` is called only on the first access
- by Node.js and Chrome the current stack can be added to any object with `Error.captureStackTrace()`
- by Internet Explorer the stack is created by throwing a native error
- by PhantomJS the stack is created by throwing any object, but not a primitive

Accessing the stack

- by Node.js, Chrome, Firefox, Internet Explorer, Opera and PhantomJS you can use the `error.stack` property
- by old Opera you have to use the `error.stacktrace` property to get the stack

Prefixes and postfixes on the stack string

- by Node.js, Chrome, Internet Explorer and Opera you have the `error.name` and the `error.message` in a `{name}: {message}` format at the beginning of the stack string
- by Firefox and PhantomJS the stack string does not contain the `error.name` and the `error.message`
- by Firefox you have an empty line at the end of the stack string

Accessing the stack frames array

- by Node.js and Chrome you can access the frame objects directly by overriding the `Error.prepareStackTrace()`
- by Firefox, Internet Explorer, PhantomJS, and Opera you need to parse the stack string in order to get the frames

The structure of the frame string

- by Node.js and Chrome
  - the frame string of calling a function from a module: `thirdFn (http://localhost/myModule.js:45:29)`
  - the frame strings contain an ` at ` prefix, which is not present in the `frame.toString()` output, so it is added by the `stack.toString()`
- by Firefox
  - the frame string of calling a function from a module: `thirdFn@http://localhost/myModule.js:45:29`
- by Internet Explorer
  - the frame string of calling a function from a module: ` at thirdFn (http://localhost/myModule.js:45:29)`
- by PhantomJS
  - the frame string of calling a function from a module: `thirdFn@http://localhost/myModule.js:45:29`
- by Opera
  - the frame string of calling a function from a module: ` at thirdFn (http://localhost/myModule.js:45)`

Accessing information by individual frames

- by Node.js and Chrome the `frame.getThis()` and the `frame.getFunction()` return `undefined` for frames that originate from [strict mode](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode) code
- by Firefox, Internet Explorer, PhantomJS, and Opera the context of the function calls is not accessible, so the `frame.getThis()` cannot be implemented
- by Firefox, Internet Explorer, PhantomJS, and Opera functions are not accessible with `arguments.callee.caller` for frames that originate from strict mode, so for these frames `frame.getFunction()` can return only `undefined` (this is consistent with V8 behavior)

## License

MIT - 2016 Jánszky László Lajos

# normalize-path

[![NPM version](https://img.shields.io/npm/v/normalize-path.svg?style=flat)](https://www.npmjs.com/package/normalize-path) [![NPM monthly downloads](https://img.shields.io/npm/dm/normalize-path.svg?style=flat)](https://npmjs.org/package/normalize-path) [![NPM total downloads](https://img.shields.io/npm/dt/normalize-path.svg?style=flat)](https://npmjs.org/package/normalize-path) [![Linux Build Status](https://img.shields.io/travis/jonschlinkert/normalize-path.svg?style=flat&label=Travis)](https://travis-ci.org/jonschlinkert/normalize-path)

> Normalize slashes in a file path to be posix/unix-like forward slashes. Also condenses repeat slashes to a single slash and removes any trailing slashes, unless disabled.

Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support.

## Install

Install with [npm](https://www.npmjs.com/):

```sh
$ npm install --save normalize-path
```

## Usage

```js
const normalize = require('normalize-path');

console.log(normalize('\\foo\\bar\\baz\\'));
//=> '/foo/bar/baz'
```

**win32 namespaces**

```js
console.log(normalize('\\\\?\\UNC\\Server01\\user\\docs\\Letter.txt'));
//=> '//?/UNC/Server01/user/docs/Letter.txt'

console.log(normalize('\\\\.\\CdRomX'));
//=> '//./CdRomX'
```

**Consecutive slashes**

Condenses multiple consecutive forward slashes (except for leading slashes in win32 namespaces) to a single slash.

```js
console.log(normalize('.//foo//bar///////baz/'));
//=> './foo/bar/baz'
```

### Trailing slashes

By default trailing slashes are removed. Pass `false` as the last argument to disable this behavior and _**keep** trailing slashes_:

```js
console.log(normalize('foo\\bar\\baz\\', false)); //=> 'foo/bar/baz/'
console.log(normalize('./foo/bar/baz/', false)); //=> './foo/bar/baz/'
```

## Release history

### v3.0

No breaking changes in this release.

* a check was added to ensure that [win32 namespaces](https://msdn.microsoft.com/library/windows/desktop/aa365247(v=vs.85).aspx#namespaces) are handled properly by win32 `path.parse()` after a path has been normalized by this library.
* a minor optimization was made to simplify how the trailing separator was handled

## About

<details>
<summary><strong>Contributing</strong></summary>

Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new).
</details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects Other useful path-related libraries: * [contains-path](https://www.npmjs.com/package/contains-path): Return true if a file path contains the given path. | [homepage](https://github.com/jonschlinkert/contains-path "Return true if a file path contains the given path.") * [is-absolute](https://www.npmjs.com/package/is-absolute): Returns true if a file path is absolute. Does not rely on the path module… [more](https://github.com/jonschlinkert/is-absolute) | [homepage](https://github.com/jonschlinkert/is-absolute "Returns true if a file path is absolute. Does not rely on the path module and can be used as a polyfill for node.js native `path.isAbolute`.") * [is-relative](https://www.npmjs.com/package/is-relative): Returns `true` if the path appears to be relative. | [homepage](https://github.com/jonschlinkert/is-relative "Returns `true` if the path appears to be relative.") * [parse-filepath](https://www.npmjs.com/package/parse-filepath): Pollyfill for node.js `path.parse`, parses a filepath into an object. | [homepage](https://github.com/jonschlinkert/parse-filepath "Pollyfill for node.js `path.parse`, parses a filepath into an object.") * [path-ends-with](https://www.npmjs.com/package/path-ends-with): Return `true` if a file path ends with the given string/suffix. | [homepage](https://github.com/jonschlinkert/path-ends-with "Return `true` if a file path ends with the given string/suffix.") * [unixify](https://www.npmjs.com/package/unixify): Convert Windows file paths to unix paths. | [homepage](https://github.com/jonschlinkert/unixify "Convert Windows file paths to unix paths.") ### Contributors | **Commits** | **Contributor** | | --- | --- | | 35 | [jonschlinkert](https://github.com/jonschlinkert) | | 1 | [phated](https://github.com/phated) | ### Author **Jon Schlinkert** * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) ### License Copyright © 2018, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). 
*** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.6.0, on April 19, 2018._ # fill-range [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/fill-range.svg?style=flat)](https://www.npmjs.com/package/fill-range) [![NPM monthly downloads](https://img.shields.io/npm/dm/fill-range.svg?style=flat)](https://npmjs.org/package/fill-range) [![NPM total downloads](https://img.shields.io/npm/dt/fill-range.svg?style=flat)](https://npmjs.org/package/fill-range) [![Linux Build Status](https://img.shields.io/travis/jonschlinkert/fill-range.svg?style=flat&label=Travis)](https://travis-ci.org/jonschlinkert/fill-range) > Fill in a range of numbers or letters, optionally passing an increment or `step` to use, or create a regex-compatible range with `options.toRegex` Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save fill-range ``` ## Usage Expands numbers and letters, optionally using a `step` as the last argument. _(Numbers may be defined as JavaScript numbers or strings)_. ```js const fill = require('fill-range'); // fill(from, to[, step, options]); console.log(fill('1', '10')); //=> ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10'] console.log(fill('1', '10', { toRegex: true })); //=> [1-9]|10 ``` **Params** * `from`: **{String|Number}** the number or letter to start with * `to`: **{String|Number}** the number or letter to end with * `step`: **{String|Number|Object|Function}** Optionally pass a [step](#optionsstep) to use. * `options`: **{Object|Function}**: See all available [options](#options) ## Examples By default, an array of values is returned. **Alphabetical ranges** ```js console.log(fill('a', 'e')); //=> ['a', 'b', 'c', 'd', 'e'] console.log(fill('A', 'E')); //=> [ 'A', 'B', 'C', 'D', 'E' ] ``` **Numerical ranges** Numbers can be defined as actual numbers or strings. ```js console.log(fill(1, 5)); //=> [ 1, 2, 3, 4, 5 ] console.log(fill('1', '5')); //=> [ 1, 2, 3, 4, 5 ] ``` **Negative ranges** Numbers can be defined as actual numbers or strings. ```js console.log(fill('-5', '-1')); //=> [ '-5', '-4', '-3', '-2', '-1' ] console.log(fill('-5', '5')); //=> [ '-5', '-4', '-3', '-2', '-1', '0', '1', '2', '3', '4', '5' ] ``` **Steps (increments)** ```js // numerical ranges with increments console.log(fill('0', '25', 4)); //=> [ '0', '4', '8', '12', '16', '20', '24' ] console.log(fill('0', '25', 5)); //=> [ '0', '5', '10', '15', '20', '25' ] console.log(fill('0', '25', 6)); //=> [ '0', '6', '12', '18', '24' ] // alphabetical ranges with increments console.log(fill('a', 'z', 4)); //=> [ 'a', 'e', 'i', 'm', 'q', 'u', 'y' ] console.log(fill('a', 'z', 5)); //=> [ 'a', 'f', 'k', 'p', 'u', 'z' ] console.log(fill('a', 'z', 6)); //=> [ 'a', 'g', 'm', 's', 'y' ] ``` ## Options ### options.step **Type**: `number` (formatted as a string or number) **Default**: `undefined` **Description**: The increment to use for the range. Can be used with letters or numbers. 
**Example(s)** ```js // numbers console.log(fill('1', '10', 2)); //=> [ '1', '3', '5', '7', '9' ] console.log(fill('1', '10', 3)); //=> [ '1', '4', '7', '10' ] console.log(fill('1', '10', 4)); //=> [ '1', '5', '9' ] // letters console.log(fill('a', 'z', 5)); //=> [ 'a', 'f', 'k', 'p', 'u', 'z' ] console.log(fill('a', 'z', 7)); //=> [ 'a', 'h', 'o', 'v' ] console.log(fill('a', 'z', 9)); //=> [ 'a', 'j', 's' ] ``` ### options.strictRanges **Type**: `boolean` **Default**: `false` **Description**: By default, `null` is returned when an invalid range is passed. Enable this option to throw a `RangeError` on invalid ranges. **Example(s)** The following are all invalid: ```js fill('1.1', '2'); // decimals not supported in ranges fill('a', '2'); // incompatible range values fill(1, 10, 'foo'); // invalid "step" argument ``` ### options.stringify **Type**: `boolean` **Default**: `undefined` **Description**: Cast all returned values to strings. By default, integers are returned as numbers. **Example(s)** ```js console.log(fill(1, 5)); //=> [ 1, 2, 3, 4, 5 ] console.log(fill(1, 5, { stringify: true })); //=> [ '1', '2', '3', '4', '5' ] ``` ### options.toRegex **Type**: `boolean` **Default**: `undefined` **Description**: Create a regex-compatible source string, instead of expanding values to an array. **Example(s)** ```js // alphabetical range console.log(fill('a', 'e', { toRegex: true })); //=> '[a-e]' // alphabetical with step console.log(fill('a', 'z', 3, { toRegex: true })); //=> 'a|d|g|j|m|p|s|v|y' // numerical range console.log(fill('1', '100', { toRegex: true })); //=> '[1-9]|[1-9][0-9]|100' // numerical range with zero padding console.log(fill('000001', '100000', { toRegex: true })); //=> '0{5}[1-9]|0{4}[1-9][0-9]|0{3}[1-9][0-9]{2}|0{2}[1-9][0-9]{3}|0[1-9][0-9]{4}|100000' ``` ### options.transform **Type**: `function` **Default**: `undefined` **Description**: Customize each value in the returned array (or [string](#optionstoRegex)). _(you can also pass this function as the last argument to `fill()`)_. **Example(s)** ```js // add zero padding console.log(fill(1, 5, value => String(value).padStart(4, '0'))); //=> ['0001', '0002', '0003', '0004', '0005'] ``` ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. 
Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Contributors | **Commits** | **Contributor** | | --- | --- | | 116 | [jonschlinkert](https://github.com/jonschlinkert) | | 4 | [paulmillr](https://github.com/paulmillr) | | 2 | [realityking](https://github.com/realityking) | | 2 | [bluelovers](https://github.com/bluelovers) | | 1 | [edorivai](https://github.com/edorivai) | | 1 | [wtgtybhertgeghgtwtg](https://github.com/wtgtybhertgeghgtwtg) | ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) Please consider supporting me on Patreon, or [start your own Patreon page](https://patreon.com/invite/bxpbvm)! <a href="https://www.patreon.com/jonschlinkert"> <img src="https://c5.patreon.com/external/logo/[email protected]" height="50"> </a> ### License Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on April 08, 2019._ # `dlv(obj, keypath)` [![NPM](https://img.shields.io/npm/v/dlv.svg)](https://npmjs.com/package/dlv) [![Build](https://travis-ci.org/developit/dlv.svg?branch=master)](https://travis-ci.org/developit/dlv) > Safely get a dot-notated path within a nested object, with ability to return a default if the full key path does not exist or the value is undefined ### Why? Smallest possible implementation: only **130 bytes.** You could write this yourself, but then you'd have to write [tests]. Supports ES Modules, CommonJS and globals. ### Installation `npm install --save dlv` ### Usage `delve(object, keypath, [default])` ```js import delve from 'dlv'; let obj = { a: { b: { c: 1, d: undefined, e: null } } }; //use string dot notation for keys delve(obj, 'a.b.c') === 1; //or use an array key delve(obj, ['a', 'b', 'c']) === 1; delve(obj, 'a.b') === obj.a.b; //returns undefined if the full key path does not exist and no default is specified delve(obj, 'a.b.f') === undefined; //optional third parameter for default if the full key in path is missing delve(obj, 'a.b.f', 'foo') === 'foo'; //or if the key exists but the value is undefined delve(obj, 'a.b.d', 'foo') === 'foo'; //Non-truthy defined values are still returned if they exist at the full keypath delve(obj, 'a.b.e', 'foo') === null; //undefined obj or key returns undefined, unless a default is supplied delve(undefined, 'a.b.c') === undefined; delve(undefined, 'a.b.c', 'foo') === 'foo'; delve(obj, undefined, 'foo') === 'foo'; ``` ### Setter Counterparts - [dset](https://github.com/lukeed/dset) by [@lukeed](https://github.com/lukeed) is the spiritual "set" counterpart of `dlv` and very fast. - [bury](https://github.com/kalmbach/bury) by [@kalmbach](https://github.com/kalmbach) does the opposite of `dlv` and is implemented in a very similar manner. 
### License

[MIT](https://oss.ninja/mit/developit/)

[preact]: https://github.com/developit/preact
[tests]: https://github.com/developit/dlv/blob/master/test.js

# run-parallel

[![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url]

[travis-image]: https://img.shields.io/travis/feross/run-parallel/master.svg
[travis-url]: https://travis-ci.org/feross/run-parallel
[npm-image]: https://img.shields.io/npm/v/run-parallel.svg
[npm-url]: https://npmjs.org/package/run-parallel
[downloads-image]: https://img.shields.io/npm/dm/run-parallel.svg
[downloads-url]: https://npmjs.org/package/run-parallel
[standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg
[standard-url]: https://standardjs.com

### Run an array of functions in parallel

![parallel](https://raw.githubusercontent.com/feross/run-parallel/master/img.png)

[![Sauce Test Status](https://saucelabs.com/browser-matrix/run-parallel.svg)](https://saucelabs.com/u/run-parallel)

### install

```
npm install run-parallel
```

### usage

#### parallel(tasks, [callback])

Run the `tasks` array of functions in parallel, without waiting until the previous function has completed. If any of the functions passes an error to its callback, the main `callback` is immediately called with the value of the error. Once the `tasks` have completed, the results are passed to the final `callback` as an array.

It is also possible to use an object instead of an array. Each property will be run as a function and the results will be passed to the final `callback` as an object instead of an array. This can be a more readable way of handling the results.

##### arguments

- `tasks` - An array or object containing functions to run. Each function is passed a `callback(err, result)` which it must call on completion with an error `err` (which can be `null`) and an optional `result` value.
- `callback(err, results)` - An optional callback to run once all the functions have completed. This function gets a results array (or object) containing all the result arguments passed to the task callbacks.

##### example

```js
var parallel = require('run-parallel')

parallel([
  function (callback) {
    setTimeout(function () {
      callback(null, 'one')
    }, 200)
  },
  function (callback) {
    setTimeout(function () {
      callback(null, 'two')
    }, 100)
  }
],
// optional callback
function (err, results) {
  // the results array will equal ['one','two'] even though
  // the second function had a shorter timeout.
})
```

This module is basically equivalent to [`async.parallel`](https://github.com/caolan/async#paralleltasks-callback), but it's handy to just have the one function you need instead of the kitchen sink. Modularity! Especially handy if you're serving to the browser and need to reduce your javascript bundle size.

Works great in the browser with [browserify](http://browserify.org/)!

### see also

- [run-auto](https://github.com/feross/run-auto)
- [run-parallel-limit](https://github.com/feross/run-parallel-limit)
- [run-series](https://github.com/feross/run-series)
- [run-waterfall](https://github.com/feross/run-waterfall)

### license

MIT. Copyright (c) [Feross Aboukhadijeh](http://feross.org).
<h1 align="center">Picomatch</h1> <p align="center"> <a href="https://npmjs.org/package/picomatch"> <img src="https://img.shields.io/npm/v/picomatch.svg" alt="version"> </a> <a href="https://github.com/micromatch/picomatch/actions?workflow=Tests"> <img src="https://github.com/micromatch/picomatch/workflows/Tests/badge.svg" alt="test status"> </a> <a href="https://coveralls.io/github/micromatch/picomatch"> <img src="https://img.shields.io/coveralls/github/micromatch/picomatch/master.svg" alt="coverage status"> </a> <a href="https://npmjs.org/package/picomatch"> <img src="https://img.shields.io/npm/dm/picomatch.svg" alt="downloads"> </a> </p> <br> <br> <p align="center"> <strong>Blazing fast and accurate glob matcher written in JavaScript.</strong></br> <em>No dependencies and full support for standard and extended Bash glob features, including braces, extglobs, POSIX brackets, and regular expressions.</em> </p> <br> <br> ## Why picomatch? * **Lightweight** - No dependencies * **Minimal** - Tiny API surface. Main export is a function that takes a glob pattern and returns a matcher function. * **Fast** - Loads in about 2ms (that's several times faster than a [single frame of a HD movie](http://www.endmemo.com/sconvert/framespersecondframespermillisecond.php) at 60fps) * **Performant** - Use the returned matcher function to speed up repeat matching (like when watching files) * **Accurate matching** - Using wildcards (`*` and `?`), globstars (`**`) for nested directories, [advanced globbing](#advanced-globbing) with extglobs, braces, and POSIX brackets, and support for escaping special characters with `\` or quotes. * **Well tested** - Thousands of unit tests See the [library comparison](#library-comparisons) to other libraries. <br> <br> ## Table of Contents <details><summary> Click to expand </summary> - [Install](#install) - [Usage](#usage) - [API](#api) * [picomatch](#picomatch) * [.test](#test) * [.matchBase](#matchbase) * [.isMatch](#ismatch) * [.parse](#parse) * [.scan](#scan) * [.compileRe](#compilere) * [.makeRe](#makere) * [.toRegex](#toregex) - [Options](#options) * [Picomatch options](#picomatch-options) * [Scan Options](#scan-options) * [Options Examples](#options-examples) - [Globbing features](#globbing-features) * [Basic globbing](#basic-globbing) * [Advanced globbing](#advanced-globbing) * [Braces](#braces) * [Matching special characters as literals](#matching-special-characters-as-literals) - [Library Comparisons](#library-comparisons) - [Benchmarks](#benchmarks) - [Philosophies](#philosophies) - [About](#about) * [Author](#author) * [License](#license) _(TOC generated by [verb](https://github.com/verbose/verb) using [markdown-toc](https://github.com/jonschlinkert/markdown-toc))_ </details> <br> <br> ## Install Install with [npm](https://www.npmjs.com/): ```sh npm install --save picomatch ``` <br> ## Usage The main export is a function that takes a glob pattern and an options object and returns a function for matching strings. ```js const pm = require('picomatch'); const isMatch = pm('*.js'); console.log(isMatch('abcd')); //=> false console.log(isMatch('a.js')); //=> true console.log(isMatch('a.md')); //=> false console.log(isMatch('a/b.js')); //=> false ``` <br> ## API ### [picomatch](lib/picomatch.js#L32) Creates a matcher function from one or more glob patterns. The returned function takes a string to match as its first argument, and returns true if the string is a match. 
The returned matcher function also takes a boolean as the second argument that, when true, returns an object with additional information.

**Params**

* `globs` **{String|Array}**: One or more glob patterns.
* `options` **{Object=}**
* `returns` **{Function=}**: Returns a matcher function.

**Example**

```js
const picomatch = require('picomatch');
// picomatch(glob[, options]);

const isMatch = picomatch('*.!(*a)');
console.log(isMatch('a.a')); //=> false
console.log(isMatch('a.b')); //=> true
```

### [.test](lib/picomatch.js#L117)

Test `input` with the given `regex`. This is used by the main `picomatch()` function to test the input string.

**Params**

* `input` **{String}**: String to test.
* `regex` **{RegExp}**
* `returns` **{Object}**: Returns an object with matching info.

**Example**

```js
const picomatch = require('picomatch');
// picomatch.test(input, regex[, options]);

console.log(picomatch.test('foo/bar', /^(?:([^/]*?)\/([^/]*?))$/));
// { isMatch: true, match: [ 'foo/', 'foo', 'bar' ], output: 'foo/bar' }
```

### [.matchBase](lib/picomatch.js#L161)

Match the basename of a filepath.

**Params**

* `input` **{String}**: String to test.
* `glob` **{RegExp|String}**: Glob pattern or regex created by [.makeRe](#makeRe).
* `returns` **{Boolean}**

**Example**

```js
const picomatch = require('picomatch');
// picomatch.matchBase(input, glob[, options]);
console.log(picomatch.matchBase('foo/bar.js', '*.js')); // true
```

### [.isMatch](lib/picomatch.js#L183)

Returns true if **any** of the given glob `patterns` match the specified `string`.

**Params**

* **{String|Array}**: str The string to test.
* **{String|Array}**: patterns One or more glob patterns to use for matching.
* **{Object}**: See available [options](#options).
* `returns` **{Boolean}**: Returns true if any patterns match `str`

**Example**

```js
const picomatch = require('picomatch');
// picomatch.isMatch(string, patterns[, options]);

console.log(picomatch.isMatch('a.a', ['b.*', '*.a'])); //=> true
console.log(picomatch.isMatch('a.a', 'b.*')); //=> false
```

### [.parse](lib/picomatch.js#L199)

Parse a glob pattern to create the source string for a regular expression.

**Params**

* `pattern` **{String}**
* `options` **{Object}**
* `returns` **{Object}**: Returns an object with useful properties and output to be used as a regex source string.

**Example**

```js
const picomatch = require('picomatch');
const result = picomatch.parse(pattern[, options]);
```

### [.scan](lib/picomatch.js#L231)

Scan a glob pattern to separate the pattern into segments.

**Params**

* `input` **{String}**: Glob pattern to scan.
* `options` **{Object}**
* `returns` **{Object}**: Returns an object with details about the scanned pattern.

**Example**

```js
const picomatch = require('picomatch');
// picomatch.scan(input[, options]);

const result = picomatch.scan('!./foo/*.js');
console.log(result);
{ prefix: '!./',
  input: '!./foo/*.js',
  start: 3,
  base: 'foo',
  glob: '*.js',
  isBrace: false,
  isBracket: false,
  isGlob: true,
  isExtglob: false,
  isGlobstar: false,
  negated: true }
```

### [.compileRe](lib/picomatch.js#L245)

Compile a regular expression from the `state` object returned by the [parse()](#parse) method.

**Params**

* `state` **{Object}**
* `options` **{Object}**
* `returnOutput` **{Boolean}**: Intended for implementors, this argument allows you to return the raw output from the parser.
* `returnState` **{Boolean}**: Adds the state to a `state` property on the returned regex. Useful for implementors and debugging.
* `returns` **{RegExp}** ### [.makeRe](lib/picomatch.js#L286) Create a regular expression from a parsed glob pattern. **Params** * `state` **{String}**: The object returned from the `.parse` method. * `options` **{Object}** * `returnOutput` **{Boolean}**: Implementors may use this argument to return the compiled output, instead of a regular expression. This is not exposed on the options to prevent end-users from mutating the result. * `returnState` **{Boolean}**: Implementors may use this argument to return the state from the parsed glob with the returned regular expression. * `returns` **{RegExp}**: Returns a regex created from the given pattern. **Example** ```js const picomatch = require('picomatch'); const state = picomatch.parse('*.js'); // picomatch.compileRe(state[, options]); console.log(picomatch.compileRe(state)); //=> /^(?:(?!\.)(?=.)[^/]*?\.js)$/ ``` ### [.toRegex](lib/picomatch.js#L321) Create a regular expression from the given regex source string. **Params** * `source` **{String}**: Regular expression source string. * `options` **{Object}** * `returns` **{RegExp}** **Example** ```js const picomatch = require('picomatch'); // picomatch.toRegex(source[, options]); const { output } = picomatch.parse('*.js'); console.log(picomatch.toRegex(output)); //=> /^(?:(?!\.)(?=.)[^/]*?\.js)$/ ``` <br> ## Options ### Picomatch options The following options may be used with the main `picomatch()` function or any of the methods on the picomatch API. | **Option** | **Type** | **Default value** | **Description** | | --- | --- | --- | --- | | `basename` | `boolean` | `false` | If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. | | `bash` | `boolean` | `false` | Follow bash matching rules more strictly - disallows backslashes as escape characters, and treats single stars as globstars (`**`). | | `capture` | `boolean` | `undefined` | Return regex matches in supporting methods. | | `contains` | `boolean` | `undefined` | Allows glob to match any part of the given string(s). | | `cwd` | `string` | `process.cwd()` | Current working directory. Used by `picomatch.split()` | | `debug` | `boolean` | `undefined` | Debug regular expressions when an error is thrown. | | `dot` | `boolean` | `false` | Enable dotfile matching. By default, dotfiles are ignored unless a `.` is explicitly defined in the pattern, or `options.dot` is true | | `expandRange` | `function` | `undefined` | Custom function for expanding ranges in brace patterns, such as `{a..z}`. The function receives the range values as two arguments, and it must return a string to be used in the generated regex. It's recommended that returned strings be wrapped in parentheses. | | `failglob` | `boolean` | `false` | Throws an error if no matches are found. Based on the bash option of the same name. | | `fastpaths` | `boolean` | `true` | To speed up processing, full parsing is skipped for a handful common glob patterns. Disable this behavior by setting this option to `false`. | | `flags` | `string` | `undefined` | Regex flags to use in the generated regex. If defined, the `nocase` option will be overridden. | | [format](#optionsformat) | `function` | `undefined` | Custom function for formatting the returned string. This is useful for removing leading slashes, converting Windows paths to Posix paths, etc. 
| | `ignore` | `array\|string` | `undefined` | One or more glob patterns for excluding strings that should not be matched from the result. | | `keepQuotes` | `boolean` | `false` | Retain quotes in the generated regex, since quotes may also be used as an alternative to backslashes. | | `literalBrackets` | `boolean` | `undefined` | When `true`, brackets in the glob pattern will be escaped so that only literal brackets will be matched. | | `matchBase` | `boolean` | `false` | Alias for `basename` | | `maxLength` | `boolean` | `65536` | Limit the max length of the input string. An error is thrown if the input string is longer than this value. | | `nobrace` | `boolean` | `false` | Disable brace matching, so that `{a,b}` and `{1..3}` would be treated as literal characters. | | `nobracket` | `boolean` | `undefined` | Disable matching with regex brackets. | | `nocase` | `boolean` | `false` | Make matching case-insensitive. Equivalent to the regex `i` flag. Note that this option is overridden by the `flags` option. | | `nodupes` | `boolean` | `true` | Deprecated, use `nounique` instead. This option will be removed in a future major release. By default duplicates are removed. Disable uniquification by setting this option to false. | | `noext` | `boolean` | `false` | Alias for `noextglob` | | `noextglob` | `boolean` | `false` | Disable support for matching with extglobs (like `+(a\|b)`) | | `noglobstar` | `boolean` | `false` | Disable support for matching nested directories with globstars (`**`) | | `nonegate` | `boolean` | `false` | Disable support for negating with leading `!` | | `noquantifiers` | `boolean` | `false` | Disable support for regex quantifiers (like `a{1,2}`) and treat them as brace patterns to be expanded. | | [onIgnore](#optionsonIgnore) | `function` | `undefined` | Function to be called on ignored items. | | [onMatch](#optionsonMatch) | `function` | `undefined` | Function to be called on matched items. | | [onResult](#optionsonResult) | `function` | `undefined` | Function to be called on all items, regardless of whether or not they are matched or ignored. | | `posix` | `boolean` | `false` | Support POSIX character classes ("posix brackets"). | | `posixSlashes` | `boolean` | `undefined` | Convert all slashes in file paths to forward slashes. This does not convert slashes in the glob pattern itself | | `prepend` | `boolean` | `undefined` | String to prepend to the generated regex used for matching. | | `regex` | `boolean` | `false` | Use regular expression rules for `+` (instead of matching literal `+`), and for stars that follow closing parentheses or brackets (as in `)*` and `]*`). | | `strictBrackets` | `boolean` | `undefined` | Throw an error if brackets, braces, or parens are imbalanced. | | `strictSlashes` | `boolean` | `undefined` | When true, picomatch won't match trailing slashes with single stars. | | `unescape` | `boolean` | `undefined` | Remove backslashes preceding escaped characters in the glob pattern. By default, backslashes are retained. | | `unixify` | `boolean` | `undefined` | Alias for `posixSlashes`, for backwards compatibility. | picomatch has automatic detection for regex positive and negative lookbehinds. If the pattern contains a negative lookbehind, you must be using Node.js >= 8.10 or else picomatch will throw an error. ### Scan Options In addition to the main [picomatch options](#picomatch-options), the following options may also be used with the [.scan](#scan) method. 
| **Option** | **Type** | **Default value** | **Description** | | --- | --- | --- | --- | | `tokens` | `boolean` | `false` | When `true`, the returned object will include an array of tokens (objects), representing each path "segment" in the scanned glob pattern | | `parts` | `boolean` | `false` | When `true`, the returned object will include an array of strings representing each path "segment" in the scanned glob pattern. This is automatically enabled when `options.tokens` is true | **Example** ```js const picomatch = require('picomatch'); const result = picomatch.scan('!./foo/*.js', { tokens: true }); console.log(result); // { // prefix: '!./', // input: '!./foo/*.js', // start: 3, // base: 'foo', // glob: '*.js', // isBrace: false, // isBracket: false, // isGlob: true, // isExtglob: false, // isGlobstar: false, // negated: true, // maxDepth: 2, // tokens: [ // { value: '!./', depth: 0, isGlob: false, negated: true, isPrefix: true }, // { value: 'foo', depth: 1, isGlob: false }, // { value: '*.js', depth: 1, isGlob: true } // ], // slashes: [ 2, 6 ], // parts: [ 'foo', '*.js' ] // } ``` <br> ### Options Examples #### options.expandRange **Type**: `function` **Default**: `undefined` Custom function for expanding ranges in brace patterns. The [fill-range](https://github.com/jonschlinkert/fill-range) library is ideal for this purpose, or you can use custom code to do whatever you need. **Example** The following example shows how to create a glob that matches a numeric folder name between `01` and `25`, with leading zeros. ```js const fill = require('fill-range'); const regex = pm.makeRe('foo/{01..25}/bar', { expandRange(a, b) { return `(${fill(a, b, { toRegex: true })})`; } }); console.log(regex); //=> /^(?:foo\/((?:0[1-9]|1[0-9]|2[0-5]))\/bar)$/ console.log(regex.test('foo/00/bar')) // false console.log(regex.test('foo/01/bar')) // true console.log(regex.test('foo/10/bar')) // true console.log(regex.test('foo/22/bar')) // true console.log(regex.test('foo/25/bar')) // true console.log(regex.test('foo/26/bar')) // false ``` #### options.format **Type**: `function` **Default**: `undefined` Custom function for formatting strings before they're matched. **Example** ```js // strip leading './' from strings const format = str => str.replace(/^\.\//, ''); const isMatch = picomatch('foo/*.js', { format }); console.log(isMatch('./foo/bar.js')); //=> true ``` #### options.onMatch ```js const onMatch = ({ glob, regex, input, output }) => { console.log({ glob, regex, input, output }); }; const isMatch = picomatch('*', { onMatch }); isMatch('foo'); isMatch('bar'); isMatch('baz'); ``` #### options.onIgnore ```js const onIgnore = ({ glob, regex, input, output }) => { console.log({ glob, regex, input, output }); }; const isMatch = picomatch('*', { onIgnore, ignore: 'f*' }); isMatch('foo'); isMatch('bar'); isMatch('baz'); ``` #### options.onResult ```js const onResult = ({ glob, regex, input, output }) => { console.log({ glob, regex, input, output }); }; const isMatch = picomatch('*', { onResult, ignore: 'f*' }); isMatch('foo'); isMatch('bar'); isMatch('baz'); ``` <br> <br> ## Globbing features * [Basic globbing](#basic-globbing) (Wildcard matching) * [Advanced globbing](#advanced-globbing) (extglobs, posix brackets, brace matching) ### Basic globbing | **Character** | **Description** | | --- | --- | | `*` | Matches any character zero or more times, excluding path separators. Does _not match_ path separators or hidden files or directories ("dotfiles"), unless explicitly enabled by setting the `dot` option to `true`.
| | `**` | Matches any character zero or more times, including path separators. Note that `**` will only match path separators (`/`, and `\\` on Windows) when they are the only characters in a path segment. Thus, `foo**/bar` is equivalent to `foo*/bar`, and `foo/a**b/bar` is equivalent to `foo/a*b/bar`, and _more than two_ consecutive stars in a glob path segment are regarded as _a single star_. Thus, `foo/***/bar` is equivalent to `foo/*/bar`. | | `?` | Matches any character excluding path separators one time. Does _not match_ path separators or leading dots. | | `[abc]` | Matches any characters inside the brackets. For example, `[abc]` would match the characters `a`, `b` or `c`, and nothing else. | #### Matching behavior vs. Bash Picomatch's matching features and expected results in unit tests are based on Bash's unit tests and the Bash 4.3 specification, with the following exceptions: * Bash will match `foo/bar/baz` with `*`. Picomatch only matches nested directories with `**`. * Bash greedily matches with negated extglobs. For example, Bash 4.3 says that `!(foo)*` should match `foo` and `foobar`, since the trailing `*` backtracks to match the preceding pattern. This is very memory-inefficient, and IMHO, also incorrect. Picomatch would return `false` for both `foo` and `foobar`. <br> ### Advanced globbing * [extglobs](#extglobs) * [POSIX brackets](#posix-brackets) * [Braces](#brace-expansion) #### Extglobs | **Pattern** | **Description** | | --- | --- | | `@(pattern)` | Match _only one_ consecutive occurrence of `pattern` | | `*(pattern)` | Match _zero or more_ consecutive occurrences of `pattern` | | `+(pattern)` | Match _one or more_ consecutive occurrences of `pattern` | | `?(pattern)` | Match _zero or **one**_ consecutive occurrences of `pattern` | | `!(pattern)` | Match _anything but_ `pattern` | **Examples** ```js const pm = require('picomatch'); // *(pattern) matches ZERO or more of "pattern" console.log(pm.isMatch('a', 'a*(z)')); // true console.log(pm.isMatch('az', 'a*(z)')); // true console.log(pm.isMatch('azzz', 'a*(z)')); // true // +(pattern) matches ONE or more of "pattern" console.log(pm.isMatch('a', 'a+(z)')); // false console.log(pm.isMatch('az', 'a+(z)')); // true console.log(pm.isMatch('azzz', 'a+(z)')); // true // supports multiple extglobs console.log(pm.isMatch('foo.bar', '!(foo).!(bar)')); // false // supports nested extglobs console.log(pm.isMatch('foo.bar', '!(!(foo)).!(!(bar))')); // true ``` #### POSIX brackets POSIX classes are disabled by default. Enable this feature by setting the `posix` option to true. **Enable POSIX bracket support** ```js console.log(pm.makeRe('[[:word:]]+', { posix: true })); //=> /^(?:(?=.)[A-Za-z0-9_]+\/?)$/ ``` **Supported POSIX classes** The following named POSIX bracket expressions are supported: * `[:alnum:]` - Alphanumeric characters, equivalent to `[a-zA-Z0-9]`. * `[:alpha:]` - Alphabetical characters, equivalent to `[a-zA-Z]`. * `[:ascii:]` - ASCII characters, equivalent to `[\\x00-\\x7F]`. * `[:blank:]` - Space and tab characters, equivalent to `[ \\t]`. * `[:cntrl:]` - Control characters, equivalent to `[\\x00-\\x1F\\x7F]`. * `[:digit:]` - Numerical digits, equivalent to `[0-9]`. * `[:graph:]` - Graph characters, equivalent to `[\\x21-\\x7E]`. * `[:lower:]` - Lowercase letters, equivalent to `[a-z]`. * `[:print:]` - Print characters, equivalent to `[\\x20-\\x7E ]`. * `[:punct:]` - Punctuation and symbols, equivalent to `[\\-!"#$%&\'()\\*+,./:;<=>?@[\\]^_`{|}~]`.
* `[:space:]` - Extended space characters, equivalent to `[ \\t\\r\\n\\v\\f]`. * `[:upper:]` - Uppercase letters, equivalent to `[A-Z]`. * `[:word:]` - Word characters (letters, numbers and underscores), equivalent to `[A-Za-z0-9_]`. * `[:xdigit:]` - Hexadecimal digits, equivalent to `[A-Fa-f0-9]`. See the [Bash Reference Manual](https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html) for more information. ### Braces Picomatch does not do brace expansion. For [brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html) and advanced matching with braces, use [micromatch](https://github.com/micromatch/micromatch) instead. Picomatch has very basic support for braces. ### Matching special characters as literals If you wish to match the following special characters in a filepath, and you want to use these characters in your glob pattern, they must be escaped with backslashes or quotes. **Special Characters** Some characters that are used for matching in regular expressions are also regarded as valid file path characters on some platforms. To match any of the following characters as literals: `$^*+?()[]` Examples: ```js console.log(pm.makeRe('foo/bar \\(1\\)')); ``` <br> <br> ## Library Comparisons The following table shows which features are supported by [minimatch](https://github.com/isaacs/minimatch), [micromatch](https://github.com/micromatch/micromatch), [picomatch](https://github.com/micromatch/picomatch), [nanomatch](https://github.com/micromatch/nanomatch), [extglob](https://github.com/micromatch/extglob), [braces](https://github.com/micromatch/braces), and [expand-brackets](https://github.com/micromatch/expand-brackets). | **Feature** | `minimatch` | `micromatch` | `picomatch` | `nanomatch` | `extglob` | `braces` | `expand-brackets` | | --- | --- | --- | --- | --- | --- | --- | --- | | Wildcard matching (`*?+`) | ✔ | ✔ | ✔ | ✔ | - | - | - | | Advanced globbing | ✔ | ✔ | ✔ | - | - | - | - | | Brace _matching_ | ✔ | ✔ | ✔ | - | - | ✔ | - | | Brace _expansion_ | ✔ | ✔ | - | - | - | ✔ | - | | Extglobs | partial | ✔ | ✔ | - | ✔ | - | - | | Posix brackets | - | ✔ | ✔ | - | - | - | ✔ | | Regular expression syntax | - | ✔ | ✔ | ✔ | ✔ | - | ✔ | | File system operations | - | - | - | - | - | - | - | <br> <br> ## Benchmarks Performance comparison of picomatch and minimatch. ``` # .makeRe star picomatch x 1,993,050 ops/sec ±0.51% (91 runs sampled) minimatch x 627,206 ops/sec ±1.96% (87 runs sampled) # .makeRe star; dot=true picomatch x 1,436,640 ops/sec ±0.62% (91 runs sampled) minimatch x 525,876 ops/sec ±0.60% (88 runs sampled) # .makeRe globstar picomatch x 1,592,742 ops/sec ±0.42% (90 runs sampled) minimatch x 962,043 ops/sec ±1.76% (91 runs sampled) # .makeRe globstars picomatch x 1,615,199 ops/sec ±0.35% (94 runs sampled) minimatch x 477,179 ops/sec ±1.33% (91 runs sampled) # .makeRe with leading star picomatch x 1,220,856 ops/sec ±0.40% (92 runs sampled) minimatch x 453,564 ops/sec ±1.43% (94 runs sampled) # .makeRe - basic braces picomatch x 392,067 ops/sec ±0.70% (90 runs sampled) minimatch x 99,532 ops/sec ±2.03% (87 runs sampled) ``` <br> <br> ## Philosophies The goal of this library is to be blazing fast, without compromising on accuracy. **Accuracy** The number one goal of this library is accuracy. However, it's not unusual for different glob implementations to have different rules for matching behavior, even with simple wildcard matching.
It gets increasingly complicated when different features are combined, like when extglobs are combined with globstars, braces, slashes, and so on: `!(**/{a,b,*/c})`. Thus, given that there is no canonical glob specification to use as a single source of truth when differences of opinion arise regarding behavior, sometimes we have to use our best judgement and rely on feedback from users to make improvements. **Performance** Although this library performs well in benchmarks, and in most cases it's faster than other popular libraries we benchmarked against, we will always choose accuracy over performance. It's not helpful to anyone if our library is faster at returning the wrong answer. <br> <br> ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). Please read the [contributing guide](.github/contributing.md) for advice on opening issues, pull requests, and coding standards. </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Author **Jon Schlinkert** * [GitHub Profile](https://github.com/jonschlinkert) * [Twitter Profile](https://twitter.com/jonschlinkert) * [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) ### License Copyright © 2017-present, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE). text-encoding-utf-8 ============== This is a **partial** polyfill for the [Encoding Living Standard](https://encoding.spec.whatwg.org/) API for the Web, allowing encoding and decoding of textual data to and from Typed Array buffers for binary data in JavaScript. This is a fork of [text-encoding](https://github.com/inexorabletash/text-encoding) that **only** supports **UTF-8**. Basic examples and tests are included. ### Install ### There are a few ways you can get the `text-encoding-utf-8` library. #### Node #### `text-encoding-utf-8` is on `npm`. Simply run: ```sh npm install text-encoding-utf-8 ``` Or add it to your `package.json` dependencies. ### HTML Page Usage ### ```html <script src="encoding.js"></script> ``` ### API Overview ### Basic Usage ```js var uint8array = TextEncoder(encoding).encode(string); var string = TextDecoder(encoding).decode(uint8array); ``` Streaming Decode ```js var string = "", decoder = TextDecoder(encoding), buffer; while (buffer = next_chunk()) { string += decoder.decode(buffer, {stream:true}); } string += decoder.decode(); // finish the stream ``` ### Encodings ### Only `utf-8` and `UTF-8` are supported. ### Non-Standard Behavior ### Only `utf-8` and `UTF-8` are supported. ### Motivation Binary size matters, especially on a mobile phone. Safari on iOS does not support TextDecoder or TextEncoder. # Arg `arg` is an unopinionated, no-frills CLI argument parser.
## Installation ```bash npm install arg ``` ## Usage `arg()` takes either 1 or 2 arguments: 1. Command line specification object (see below) 2. Parse options (_Optional_, defaults to `{permissive: false, argv: process.argv.slice(2), stopAtPositional: false}`) It returns an object with any values present on the command-line (missing options are thus missing from the resulting object). Arg performs no validation/requirement checking - we leave that up to the application. All parameters that aren't consumed by options (commonly referred to as "extra" parameters) are added to `result._`, which is _always_ an array (even if no extra parameters are passed, in which case an empty array is returned). ```javascript const arg = require('arg'); // `options` is an optional parameter const args = arg( spec, (options = { permissive: false, argv: process.argv.slice(2) }) ); ``` For example: ```console $ node ./hello.js --verbose -vvv --port=1234 -n 'My name' foo bar --tag qux --tag=qix -- --foobar ``` ```javascript // hello.js const arg = require('arg'); const args = arg({ // Types '--help': Boolean, '--version': Boolean, '--verbose': arg.COUNT, // Counts the number of times --verbose is passed '--port': Number, // --port <number> or --port=<number> '--name': String, // --name <string> or --name=<string> '--tag': [String], // --tag <string> or --tag=<string> // Aliases '-v': '--verbose', '-n': '--name', // -n <string>; result is stored in --name '--label': '--name' // --label <string> or --label=<string>; // result is stored in --name }); console.log(args); /* { _: ["foo", "bar", "--foobar"], '--port': 1234, '--verbose': 4, '--name': "My name", '--tag': ["qux", "qix"] } */ ``` The value for each key=>value pair is either a type (function or [function]) or a string (indicating an alias). - In the case of a function, the string value of the argument's value is passed to it, and the return value is used as the ultimate value. - In the case of an array, the only element _must_ be a type function. Array types indicate that the argument may be passed multiple times, and as such the resulting value in the returned object is an array with all of the values that were passed using the specified flag. - In the case of a string, an alias is established. If a flag is passed that matches the _key_, then the _value_ is substituted in its place. Type functions are passed three arguments: 1. The parameter value (always a string) 2. The parameter name (e.g. `--label`) 3. The previous value for the destination (useful for reduce-like operations or for supporting `-v` multiple times, etc.) This means the built-in `String`, `Number`, and `Boolean` type constructors "just work" as type functions. Note that `Boolean` and `[Boolean]` have special treatment - an option argument is _not_ consumed or passed, but instead `true` is returned. These options are called "flags". For custom handlers that wish to behave as flags, you may pass the function through `arg.flag()`: ```javascript const arg = require('arg'); const argv = [ '--foo', 'bar', '-ff', 'baz', '--foo', '--foo', 'qux', '-fff', 'qix' ]; function myHandler(value, argName, previousValue) { /* `value` is always `true` */ return 'na ' + (previousValue || 'batman!'); } const args = arg( { '--foo': arg.flag(myHandler), '-f': '--foo' }, { argv } ); console.log(args); /* { _: ['bar', 'baz', 'qux', 'qix'], '--foo': 'na na na na na na na na batman!'
} */ ``` As well, `arg` supplies a helper argument handler called `arg.COUNT`, which is equivalent to a `[Boolean]` argument's `.length` property - effectively counting the number of times the boolean flag, denoted by the key, is passed on the command line. For example, this is how you could implement `ssh`'s multiple levels of verbosity (`-vvvv` being the most verbose). ```javascript const arg = require('arg'); const argv = ['-AAAA', '-BBBB']; const args = arg( { '-A': arg.COUNT, '-B': [Boolean] }, { argv } ); console.log(args); /* { _: [], '-A': 4, '-B': [true, true, true, true] } */ ``` ### Options If a second parameter is specified and is an object, it specifies parsing options to modify the behavior of `arg()`. #### `argv` If you have already sliced or generated a number of raw arguments to be parsed (as opposed to letting `arg` slice them from `process.argv`) you may specify them in the `argv` option. For example: ```javascript const args = arg( { '--foo': String }, { argv: ['hello', '--foo', 'world'] } ); ``` results in: ```javascript const args = { _: ['hello'], '--foo': 'world' }; ``` #### `permissive` When `permissive` is set to `true`, `arg` will push any unknown arguments onto the "extra" argument array (`result._`) instead of throwing an error about an unknown flag. For example: ```javascript const arg = require('arg'); const argv = [ '--foo', 'hello', '--qux', 'qix', '--bar', '12345', 'hello again' ]; const args = arg( { '--foo': String, '--bar': Number }, { argv, permissive: true } ); ``` results in: ```javascript const args = { _: ['--qux', 'qix', 'hello again'], '--foo': 'hello', '--bar': 12345 }; ``` #### `stopAtPositional` When `stopAtPositional` is set to `true`, `arg` will halt parsing at the first positional argument. For example: ```javascript const arg = require('arg'); const argv = ['--foo', 'hello', '--bar']; const args = arg( { '--foo': Boolean, '--bar': Boolean }, { argv, stopAtPositional: true } ); ``` results in: ```javascript const args = { _: ['hello', '--bar'], '--foo': true }; ``` ### Errors Some errors that `arg` throws provide a `.code` property in order to aid in recovering from user error, or to differentiate between user error and developer error (bug). ##### ARG_UNKNOWN_OPTION If an unknown option (not defined in the spec object) is passed, an error with code `ARG_UNKNOWN_OPTION` will be thrown: ```js // cli.js try { require('arg')({ '--hi': String }); } catch (err) { if (err.code === 'ARG_UNKNOWN_OPTION') { console.log(err.message); } else { throw err; } } ``` ```shell node cli.js --extraneous true Unknown or unexpected option: --extraneous ``` # FAQ A few questions and answers that have been asked before: ### How do I require an argument with `arg`? Do the assertion yourself, such as: ```javascript const args = arg({ '--name': String }); if (!args['--name']) throw new Error('missing required argument: --name'); ``` # License Released under the [MIT License](LICENSE.md). # YAML <a href="https://www.npmjs.com/package/yaml"><img align="right" src="https://badge.fury.io/js/yaml.svg" title="npm package" /></a> `yaml` is a JavaScript parser and stringifier for [YAML](http://yaml.org/), a human friendly data serialization standard. It supports both parsing and stringifying data using all versions of YAML, along with all common data schemas. As a particularly distinguishing feature, `yaml` fully supports reading and writing comments and blank lines in YAML documents.
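For instance, a parse → stringify round trip through a `Document` (see the API overview below) retains comments and blank lines. The snippet here is a minimal sketch of that behaviour; the sample document is illustrative, and the stringified output may normalize spacing slightly.

```js
import YAML from 'yaml'

// Comments and blank lines survive the round trip through a Document.
const src = `# build settings
node: 10 # LTS

debug: false
`
const doc = YAML.parseDocument(src)
console.log(String(doc))
// # build settings
// node: 10 # LTS
//
// debug: false
```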
The library is released under the ISC open source license, and the code is [available on GitHub](https://github.com/eemeli/yaml/). It has no external dependencies and runs on Node.js 6 and later, and in browsers from IE 11 upwards. For the purposes of versioning, any changes that break any of the endpoints or APIs documented here will be considered semver-major breaking changes. Undocumented library internals may change between minor versions, and previous APIs may be deprecated (but not removed). For more information, see the project's documentation site: [**eemeli.org/yaml/v1**](https://eemeli.org/yaml/v1/) To install: ```sh npm install yaml ``` **Note:** This is `yaml@1`. You may also be interested in the next version, currently available as [`yaml@next`](https://www.npmjs.com/package/yaml/v/next). ## API Overview The API provided by `yaml` has three layers, depending on how deep you need to go: [Parse & Stringify](https://eemeli.org/yaml/v1/#parse-amp-stringify), [Documents](https://eemeli.org/yaml/#documents), and the [CST Parser](https://eemeli.org/yaml/#cst-parser). The first has the simplest API and "just works", the second gets you all the bells and whistles supported by the library along with a decent [AST](https://eemeli.org/yaml/#content-nodes), and the third is the closest to YAML source, making it fast, raw, and crude. ```js import YAML from 'yaml' // or const YAML = require('yaml') ``` ### Parse & Stringify - [`YAML.parse(str, options): value`](https://eemeli.org/yaml/v1/#yaml-parse) - [`YAML.stringify(value, options): string`](https://eemeli.org/yaml/v1/#yaml-stringify) ### YAML Documents - [`YAML.createNode(value, wrapScalars, tag): Node`](https://eemeli.org/yaml/v1/#creating-nodes) - [`YAML.defaultOptions`](https://eemeli.org/yaml/v1/#options) - [`YAML.Document`](https://eemeli.org/yaml/v1/#yaml-documents) - [`constructor(options)`](https://eemeli.org/yaml/v1/#creating-documents) - [`defaults`](https://eemeli.org/yaml/v1/#options) - [`#anchors`](https://eemeli.org/yaml/v1/#working-with-anchors) - [`#contents`](https://eemeli.org/yaml/v1/#content-nodes) - [`#errors`](https://eemeli.org/yaml/v1/#errors) - [`YAML.parseAllDocuments(str, options): YAML.Document[]`](https://eemeli.org/yaml/v1/#parsing-documents) - [`YAML.parseDocument(str, options): YAML.Document`](https://eemeli.org/yaml/v1/#parsing-documents) ```js import { Pair, YAMLMap, YAMLSeq } from 'yaml/types' ``` - [`new Pair(key, value)`](https://eemeli.org/yaml/v1/#creating-nodes) - [`new YAMLMap()`](https://eemeli.org/yaml/v1/#creating-nodes) - [`new YAMLSeq()`](https://eemeli.org/yaml/v1/#creating-nodes) ### CST Parser ```js import parseCST from 'yaml/parse-cst' ``` - [`parseCST(str): CSTDocument[]`](https://eemeli.org/yaml/v1/#parsecst) - [`YAML.parseCST(str): CSTDocument[]`](https://eemeli.org/yaml/v1/#parsecst) ## YAML.parse ```yaml # file.yml YAML: - A human-readable data serialization language - https://en.wikipedia.org/wiki/YAML yaml: - A complete JavaScript implementation - https://www.npmjs.com/package/yaml ``` ```js import fs from 'fs' import YAML from 'yaml' YAML.parse('3.14159') // 3.14159 YAML.parse('[ true, false, maybe, null ]\n') // [ true, false, 'maybe', null ] const file = fs.readFileSync('./file.yml', 'utf8') YAML.parse(file) // { YAML: // [ 'A human-readable data serialization language', // 'https://en.wikipedia.org/wiki/YAML' ], // yaml: // [ 'A complete JavaScript implementation', // 'https://www.npmjs.com/package/yaml' ] } ``` ## YAML.stringify ```js import YAML from 'yaml' 
YAML.stringify(3.14159) // '3.14159\n' YAML.stringify([true, false, 'maybe', null]) // `- true // - false // - maybe // - null // ` YAML.stringify({ number: 3, plain: 'string', block: 'two\nlines\n' }) // `number: 3 // plain: string // block: > // two // // lines // ` ``` --- Browser testing provided by: <a href="https://www.browserstack.com/open-source"> <img width=200 src="https://eemeli.org/yaml/images/browserstack.svg" /> </a> # postcss-import [![Build](https://img.shields.io/travis/postcss/postcss-import/master)](https://travis-ci.org/postcss/postcss-import) [![Version](https://img.shields.io/npm/v/postcss-import)](https://github.com/postcss/postcss-import/blob/master/CHANGELOG.md) [![postcss compatibility](https://img.shields.io/npm/dependency-version/postcss-import/peer/postcss)](https://postcss.org/) > [PostCSS](https://github.com/postcss/postcss) plugin to transform `@import` rules by inlining content. This plugin can consume local files, node modules or web_modules. To resolve the path of an `@import` rule, it can look into the root directory (by default `process.cwd()`), `web_modules`, `node_modules` or local modules. _When importing a module, it will look for `index.css` or the file referenced in `package.json` in the `style` or `main` fields._ You can also manually provide multiple paths to look in. **Notes:** - **This plugin should probably be used as the first plugin of your list. This way, other plugins will work on the AST as if there were only a single file to process, and will probably work as you can expect**. - This plugin works great with the [postcss-url](https://github.com/postcss/postcss-url) plugin, which will allow you to adjust assets `url()` (or even inline them) after inlining imported files. - In order to optimize output, **this plugin will only import a file once** on a given scope (root, media query...). Tests are made from the path & the content of imported files (using a hash table). If this behavior is not what you want, look at the `skipDuplicates` option - If you are looking for **Glob Imports**, you can use [postcss-import-ext-glob](https://github.com/dimitrinicolas/postcss-import-ext-glob) to extend postcss-import. - Imports which are not modified (by `options.filter` or because they are remote imports) are moved to the top of the output. - **This plugin attempts to follow the CSS `@import` spec**; `@import` statements must precede all other statements (besides `@charset`). ## Installation ```console $ npm install -D postcss-import ``` ## Usage Unless your stylesheet is in the same place where you run postcss (`process.cwd()`), you will need to use the `from` option to make relative imports work. ```js // dependencies const fs = require("fs") const postcss = require("postcss") const atImport = require("postcss-import") // css to be processed const css = fs.readFileSync("css/input.css", "utf8") // process css postcss() .use(atImport()) .process(css, { // `from` option is needed here from: "css/input.css" }) .then((result) => { const output = result.css console.log(output) }) ``` `css/input.css`: ```css /* can consume `node_modules`, `web_modules` or local modules */ @import "cssrecipes-defaults"; /* == @import "../node_modules/cssrecipes-defaults/index.css"; */ @import "normalize.css"; /* == @import "../node_modules/normalize.css/normalize.css"; */ @import "foo.css"; /* relative to css/ according to `from` option above */ @import "bar.css" (min-width: 25em); body { background: black; } ``` will give you: ```css /* ...
content of ../node_modules/cssrecipes-defaults/index.css */ /* ... content of ../node_modules/normalize.css/normalize.css */ /* ... content of css/foo.css */ @media (min-width: 25em) { /* ... content of css/bar.css */ } body { background: black; } ``` Check out the [tests](test) for more examples. ### Options #### `filter` Type: `Function` Default: `() => true` Only transform imports for which the test function returns `true`. Imports for which the test function returns `false` will be left as is. The function gets the path to import as an argument and should return a boolean. #### `root` Type: `String` Default: `process.cwd()` or _dirname of [the postcss `from`](https://github.com/postcss/postcss#node-source)_ Define the root where paths are resolved (e.g. the place where `node_modules` is). Should not be needed that often. _Note: nested `@import` will additionally benefit from the relative dirname of imported files._ #### `path` Type: `String|Array` Default: `[]` A string or an array of paths in which to look for files. #### `plugins` Type: `Array` Default: `undefined` An array of plugins to be applied to each imported file. #### `resolve` Type: `Function` Default: `null` You can provide a custom path resolver with this option. This function gets `(id, basedir, importOptions)` arguments and should return a path, an array of paths or a promise resolving to the path(s). If you do not return an absolute path, your path will be resolved to an absolute path using the default resolver. You can use [resolve](https://github.com/substack/node-resolve) for this. #### `load` Type: `Function` Default: `null` You can overwrite the default loading behavior by setting this option. This function gets `(filename, importOptions)` arguments and returns content or promised content. #### `skipDuplicates` Type: `Boolean` Default: `true` By default, similar files (based on the same content) are skipped. This is to optimize output and skip similar files like `normalize.css`, for example. If this behavior is not what you want, just set this option to `false` to disable it. #### `addModulesDirectories` Type: `Array` Default: `[]` An array of folder names to add to [Node's resolver](https://github.com/substack/node-resolve). Values will be appended to the default resolve directories: `["node_modules", "web_modules"]`. This option is only for adding additional directories to the default resolver. If you provide your own resolver via the `resolve` configuration option above, then this value will be ignored. #### Example with some options ```js const postcss = require("postcss") const atImport = require("postcss-import") postcss() .use(atImport({ path: ["src/css"], })) .process(cssString) .then((result) => { const { css } = result }) ``` ## `dependency` Message Support `postcss-import` adds a message to `result.messages` for each `@import`. Messages are in the following format: ``` { type: 'dependency', file: absoluteFilePath, parent: fileContainingTheImport } ``` This is mainly for use by postcss runners that implement file watching; a minimal sketch of consuming these messages appears after the Contributing section below. --- ## CONTRIBUTING * ⇄ Pull requests and ★ Stars are always welcome. * For bugs and feature requests, please create an issue. * Pull requests must be accompanied by passing automated tests (`$ npm test`).
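As referenced in the `dependency` Message Support section above, the following is a minimal sketch of how a runner might consume those messages. The input file name reuses `css/input.css` from the usage example; everything else is illustrative rather than part of postcss-import's own API surface.

```js
const fs = require("fs")
const postcss = require("postcss")
const atImport = require("postcss-import")

const css = fs.readFileSync("css/input.css", "utf8")

postcss()
  .use(atImport())
  .process(css, { from: "css/input.css" })
  .then((result) => {
    // Each inlined @import is reported as a `dependency` message;
    // a file watcher could add these paths to its watch list.
    const dependencies = result.messages
      .filter((message) => message.type === "dependency")
      .map((message) => message.file)

    console.log(dependencies)
  })
```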
## [Changelog](CHANGELOG.md) ## [License](LICENSE) # Nano ID <img src="https://ai.github.io/nanoid/logo.svg" align="right" alt="Nano ID logo by Anton Lovchikov" width="180" height="94"> **English** | [Русский](./README.ru.md) | [简体中文](./README.zh-CN.md) | [Bahasa Indonesia](./README.id-ID.md) A tiny, secure, URL-friendly, unique string ID generator for JavaScript. > “An amazing level of senseless perfectionism, > which is simply impossible not to respect.” * **Small.** 130 bytes (minified and gzipped). No dependencies. [Size Limit] controls the size. * **Fast.** It is 2 times faster than UUID. * **Safe.** It uses hardware random generator. Can be used in clusters. * **Short IDs.** It uses a larger alphabet than UUID (`A-Za-z0-9_-`). So ID size was reduced from 36 to 21 symbols. * **Portable.** Nano ID was ported to [20 programming languages](#other-programming-languages). ```js import { nanoid } from 'nanoid' model.id = nanoid() //=> "V1StGXR8_Z5jdHi6B-myT" ``` Supports modern browsers, IE [with Babel], Node.js and React Native. [online tool]: https://gitpod.io/#https://github.com/ai/nanoid/ [with Babel]: https://developer.epages.com/blog/coding/how-to-transpile-node-modules-with-babel-and-webpack-in-a-monorepo/ [Size Limit]: https://github.com/ai/size-limit <a href="https://evilmartians.com/?utm_source=nanoid"> <img src="https://evilmartians.com/badges/sponsored-by-evil-martians.svg" alt="Sponsored by Evil Martians" width="236" height="54"> </a> ## Docs Read **[full docs](https://github.com/ai/nanoid#readme)** on GitHub. # safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. 
```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. 
For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. 
However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type are that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic.
Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)`, called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory, otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`.
This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). 
Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) # capability.js - javascript environment capability detection [![Build Status](https://travis-ci.org/inf3rno/capability.png?branch=master)](https://travis-ci.org/inf3rno/capability) The capability.js library provides capability detection for different javascript environments. ## Documentation The documentation of this project is not complete yet. ### Installation ```bash npm install capability ``` ```bash bower install capability ``` #### Environment compatibility The lib requires only basic javascript features, so it will run in every js environment. #### Requirements If you want to use the lib in a browser, you'll need a node module loader, e.g. browserify, webpack, etc... #### Usage In this documentation I used the lib as follows: ```js var capability = require("capability"); ``` ### Capabilities API #### Defining a capability You can define a capability by using the `define(name, test)` function. ```js capability.define("Object.create", function () { return Object.create; }); ``` The `name` parameter should contain the identifier of the capability and the `test` parameter should contain a function which can detect the capability. If the capability is supported by the environment, then `test()` should return `true`, otherwise it should return `false`. You don't have to convert the return value into a `Boolean`, the library will do that for you, so you won't have memory leaks because of this. #### Testing a capability The `test(name)` function will return a `Boolean` indicating whether the capability is supported by the current environment. ```js console.log(capability.test("Object.create")); // true - in recent environments // false - by pre ES5 environments without Object.create ``` You can use `capability(name)` instead of `capability.test(name)` if you want shorter code for optional requirements. #### Checking a capability The `check(name)` function will throw an Error when the capability is not supported by the current environment. ```js capability.check("Object.create"); // this will throw an Error by pre ES5 environments without Object.create ``` #### Checking capability with require and modules It is possible to check the environment with `require()` by adding a module which calls the `check(name)` function. For the capability definitions in this lib I added such a module for each definition, so you can do for example `require("capability/es5")`. Of course, you can do fun stuff if you want, e.g. you can call multiple `check`s from a single `requirements.js` file in your lib, etc... ### Definitions Currently the following definitions are supported by the lib: - strict mode - `arguments.callee.caller` - es5 - `Array.prototype.forEach` - `Array.prototype.map` - `Function.prototype.bind` - `Object.create` - `Object.defineProperties` - `Object.defineProperty` - `Object.prototype.hasOwnProperty` - `Error.captureStackTrace` - `Error.prototype.stack` ## License MIT - 2016 Jánszky László Lajos # Acorn AST walker An abstract syntax tree walker for the [ESTree](https://github.com/estree/estree) format. ## Community Acorn is open source software released under an [MIT license](https://github.com/acornjs/acorn/blob/master/acorn-walk/LICENSE).
You are welcome to [report bugs](https://github.com/acornjs/acorn/issues) or create pull requests on [github](https://github.com/acornjs/acorn). For questions and discussion, please use the [Tern discussion forum](https://discuss.ternjs.net). ## Installation The easiest way to install acorn is from [`npm`](https://www.npmjs.com/): ```sh npm install acorn-walk ``` Alternately, you can download the source and build acorn yourself: ```sh git clone https://github.com/acornjs/acorn.git cd acorn npm install ``` ## Interface An algorithm for recursing through a syntax tree is stored as an object, with a property for each tree node type holding a function that will recurse through such a node. There are several ways to run such a walker. **simple**`(node, visitors, base, state)` does a 'simple' walk over a tree. `node` should be the AST node to walk, and `visitors` an object with properties whose names correspond to node types in the [ESTree spec](https://github.com/estree/estree). The properties should contain functions that will be called with the node object and, if applicable the state at that point. The last two arguments are optional. `base` is a walker algorithm, and `state` is a start state. The default walker will simply visit all statements and expressions and not produce a meaningful state. (An example of a use of state is to track scope at each point in the tree.) ```js const acorn = require("acorn") const walk = require("acorn-walk") walk.simple(acorn.parse("let x = 10"), { Literal(node) { console.log(`Found a literal: ${node.value}`) } }) ``` **ancestor**`(node, visitors, base, state)` does a 'simple' walk over a tree, building up an array of ancestor nodes (including the current node) and passing the array to the callbacks as a third parameter. ```js const acorn = require("acorn") const walk = require("acorn-walk") walk.ancestor(acorn.parse("foo('hi')"), { Literal(_, ancestors) { console.log("This literal's ancestors are:", ancestors.map(n => n.type)) } }) ``` **recursive**`(node, state, functions, base)` does a 'recursive' walk, where the walker functions are responsible for continuing the walk on the child nodes of their target node. `state` is the start state, and `functions` should contain an object that maps node types to walker functions. Such functions are called with `(node, state, c)` arguments, and can cause the walk to continue on a sub-node by calling the `c` argument on it with `(node, state)` arguments. The optional `base` argument provides the fallback walker functions for node types that aren't handled in the `functions` object. If not given, the default walkers will be used. **make**`(functions, base)` builds a new walker object by using the walker functions in `functions` and filling in the missing ones by taking defaults from `base`. **full**`(node, callback, base, state)` does a 'full' walk over a tree, calling the callback with the arguments (node, state, type) for each node **fullAncestor**`(node, callback, base, state)` does a 'full' walk over a tree, building up an array of ancestor nodes (including the current node) and passing the array to the callbacks as a third parameter. ```js const acorn = require("acorn") const walk = require("acorn-walk") walk.full(acorn.parse("1 + 1"), node => { console.log(`There's a ${node.type} node at ${node.ch}`) }) ``` **findNodeAt**`(node, start, end, test, base, state)` tries to locate a node in a tree at the given start and/or end offsets, which satisfies the predicate `test`. 
`start` and `end` can be either `null` (as wildcard) or a number. `test` may be a string (indicating a node type) or a function that takes `(nodeType, node)` arguments and returns a boolean indicating whether this node is interesting. `base` and `state` are optional, and can be used to specify a custom walker. Nodes are tested from inner to outer, so if two nodes match the boundaries, the inner one will be preferred. **findNodeAround**`(node, pos, test, base, state)` is a lot like `findNodeAt`, but will match any node that exists 'around' (spanning) the given position. **findNodeAfter**`(node, pos, test, base, state)` is similar to `findNodeAround`, but will match all nodes *after* the given position (testing outer nodes before inner nodes). <p align="center"> <a href="https://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # glob-parent [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Build Status][ci-image]][ci-url] [![Coveralls Status][coveralls-image]][coveralls-url] Extract the non-magic parent path from a glob string. ## Usage ```js var globParent = require('glob-parent'); globParent('path/to/*.js'); // 'path/to' globParent('/root/path/to/*.js'); // '/root/path/to' globParent('/*.js'); // '/' globParent('*.js'); // '.' globParent('**/*.js'); // '.' globParent('path/{to,from}'); // 'path' globParent('path/!(to|from)'); // 'path' globParent('path/?(to|from)'); // 'path' globParent('path/+(to|from)'); // 'path' globParent('path/*(to|from)'); // 'path' globParent('path/@(to|from)'); // 'path' globParent('path/**/*'); // 'path' // if provided a non-glob path, returns the nearest dir globParent('path/foo/bar.js'); // 'path/foo' globParent('path/foo/'); // 'path/foo' globParent('path/foo'); // 'path' (see issue #3 for details) ``` ## API ### `globParent(maybeGlobString, [options])` Takes a string and returns the part of the path before the glob begins. Be aware of Escaping rules and Limitations below. #### options ```js { // Disables the automatic conversion of slashes for Windows flipBackslashes: true; } ``` ## Escaping The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters: - `?` (question mark) unless used as a path segment alone - `*` (asterisk) - `|` (pipe) - `(` (opening parenthesis) - `)` (closing parenthesis) - `{` (opening curly brace) - `}` (closing curly brace) - `[` (opening bracket) - `]` (closing bracket) **Example** ```js globParent('foo/[bar]/'); // 'foo' globParent('foo/\\[bar]/'); // 'foo/[bar]' ``` ## Limitations ### Braces & Brackets This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with [expand-braces] and/or [expand-brackets]. ### Windows Backslashes are not valid path separators for globs. If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators). 
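For instance, a minimal sketch of that automatic conversion for a simple pattern (default options, no escape characters involved; the pattern itself is only illustrative):

```js
var globParent = require('glob-parent');

// Backslashes are flipped to forward slashes before the parent is extracted.
globParent('C:\\project\\src\\**\\*.js'); // 'C:/project/src'
```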
This cannot be used in conjunction with escape characters. ```js // BAD globParent('C:\\Program Files \\(x86\\)\\*.ext'); // 'C:/Program Files /(x86/)' // GOOD globParent('C:/Program Files\\(x86\\)/*.ext'); // 'C:/Program Files (x86)' ``` If you are using escape characters for a pattern without path parts (i.e. relative to `cwd`), prefix with `./` to avoid confusing glob-parent. ```js // BAD globParent('foo \\[bar]'); // 'foo ' globParent('foo \\[bar]*'); // 'foo ' // GOOD globParent('./foo \\[bar]'); // 'foo [bar]' globParent('./foo \\[bar]*'); // '.' ``` ## License ISC <!-- prettier-ignore-start --> [downloads-image]: https://img.shields.io/npm/dm/glob-parent.svg?style=flat-square [npm-url]: https://www.npmjs.com/package/glob-parent [npm-image]: https://img.shields.io/npm/v/glob-parent.svg?style=flat-square [ci-url]: https://github.com/gulpjs/glob-parent/actions?query=workflow:dev [ci-image]: https://img.shields.io/github/workflow/status/gulpjs/glob-parent/dev?style=flat-square [coveralls-url]: https://coveralls.io/r/gulpjs/glob-parent [coveralls-image]: https://img.shields.io/coveralls/gulpjs/glob-parent/master.svg?style=flat-square <!-- prettier-ignore-end --> <!-- prettier-ignore-start --> [expand-braces]: https://github.com/jonschlinkert/expand-braces [expand-brackets]: https://github.com/jonschlinkert/expand-brackets <!-- prettier-ignore-end --> # Borsh JS [![Project license](https://img.shields.io/badge/license-Apache2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Project license](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT) [![Discord](https://img.shields.io/discord/490367152054992913?label=discord)](https://discord.gg/Vyp7ETM) [![Travis status](https://travis-ci.com/near/borsh.svg?branch=master)](https://travis-ci.com/near/borsh-js) [![NPM version](https://img.shields.io/npm/v/borsh.svg?style=flat-square)](https://npmjs.com/borsh) [![Size on NPM](https://img.shields.io/bundlephobia/minzip/borsh.svg?style=flat-square)](https://npmjs.com/borsh) **Borsh JS** is an implementation of the [Borsh] binary serialization format for JavaScript and TypeScript projects. Borsh stands for _Binary Object Representation Serializer for Hashing_. It is meant to be used in security-critical projects as it prioritizes consistency, safety, speed, and comes with a strict specification. ## Examples ### Serializing an object ```javascript const value = new Test({ x: 255, y: 20, z: '123', q: [1, 2, 3] }); const schema = new Map([[Test, { kind: 'struct', fields: [['x', 'u8'], ['y', 'u64'], ['z', 'string'], ['q', [3]]] }]]); const buffer = borsh.serialize(schema, value); ``` ### Deserializing an object ```javascript const newValue = borsh.deserialize(schema, Test, buffer); ``` ## Type Mappings | Borsh | TypeScript | |-----------------------|----------------| | `u8` integer | `number` | | `u16` integer | `number` | | `u32` integer | `number` | | `u64` integer | `BN` | | `u128` integer | `BN` | | `u256` integer | `BN` | | `u512` integer | `BN` | | `f32` float | N/A | | `f64` float | N/A | | fixed-size byte array | `Uint8Array` | | UTF-8 string | `string` | | option | `null` or type | | map | N/A | | set | N/A | | structs | `any` | ## Contributing Install dependencies: ```bash yarn install ``` Continuously build with: ```bash yarn dev ``` Run tests: ```bash yarn test ``` Run linter ```bash yarn lint ``` ## Publish Prepare `dist` version by running: ```bash yarn build ``` When publishing to npm use [np](https://github.com/sindresorhus/np). 
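For completeness, the serialization and deserialization snippets above can be combined into a single round trip. The `Test` class below is an assumption, since the original examples leave it undefined:

```javascript
const borsh = require('borsh');

// Illustrative class matching the schema used in the examples above.
class Test {
  constructor(properties) {
    Object.assign(this, properties);
  }
}

const schema = new Map([[Test, {
  kind: 'struct',
  fields: [['x', 'u8'], ['y', 'u64'], ['z', 'string'], ['q', [3]]]
}]]);

const value = new Test({ x: 255, y: 20, z: '123', q: [1, 2, 3] });

// Serialize to bytes, then decode back into a Test instance.
const buffer = borsh.serialize(schema, value);
const decoded = borsh.deserialize(schema, Test, Buffer.from(buffer));

console.log(decoded.z); // '123'
```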
# License This repository is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See [LICENSE-MIT](LICENSE-MIT.txt) and [LICENSE-APACHE](LICENSE-APACHE) for details. [Borsh]: https://borsh.io <p align="center"> <a href="https://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # glob-parent [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Azure Pipelines Build Status][azure-pipelines-image]][azure-pipelines-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] Extract the non-magic parent path from a glob string. ## Usage ```js var globParent = require('glob-parent'); globParent('path/to/*.js'); // 'path/to' globParent('/root/path/to/*.js'); // '/root/path/to' globParent('/*.js'); // '/' globParent('*.js'); // '.' globParent('**/*.js'); // '.' globParent('path/{to,from}'); // 'path' globParent('path/!(to|from)'); // 'path' globParent('path/?(to|from)'); // 'path' globParent('path/+(to|from)'); // 'path' globParent('path/*(to|from)'); // 'path' globParent('path/@(to|from)'); // 'path' globParent('path/**/*'); // 'path' // if provided a non-glob path, returns the nearest dir globParent('path/foo/bar.js'); // 'path/foo' globParent('path/foo/'); // 'path/foo' globParent('path/foo'); // 'path' (see issue #3 for details) ``` ## API ### `globParent(maybeGlobString, [options])` Takes a string and returns the part of the path before the glob begins. Be aware of Escaping rules and Limitations below. #### options ```js { // Disables the automatic conversion of slashes for Windows flipBackslashes: true } ``` ## Escaping The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters: - `?` (question mark) unless used as a path segment alone - `*` (asterisk) - `|` (pipe) - `(` (opening parenthesis) - `)` (closing parenthesis) - `{` (opening curly brace) - `}` (closing curly brace) - `[` (opening bracket) - `]` (closing bracket) **Example** ```js globParent('foo/[bar]/') // 'foo' globParent('foo/\\[bar]/') // 'foo/[bar]' ``` ## Limitations ### Braces & Brackets This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with [expand-braces] and/or [expand-brackets]. ### Windows Backslashes are not valid path separators for globs. If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators). This cannot be used in conjunction with escape characters. ```js // BAD globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)' // GOOD globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)' ``` If you are using escape characters for a pattern without path parts (i.e. 
relative to `cwd`), prefix with `./` to avoid confusing glob-parent. ```js // BAD globParent('foo \\[bar]') // 'foo ' globParent('foo \\[bar]*') // 'foo ' // GOOD globParent('./foo \\[bar]') // 'foo [bar]' globParent('./foo \\[bar]*') // '.' ``` ## License ISC [expand-braces]: https://github.com/jonschlinkert/expand-braces [expand-brackets]: https://github.com/jonschlinkert/expand-brackets [downloads-image]: https://img.shields.io/npm/dm/glob-parent.svg [npm-url]: https://www.npmjs.com/package/glob-parent [npm-image]: https://img.shields.io/npm/v/glob-parent.svg [azure-pipelines-url]: https://dev.azure.com/gulpjs/gulp/_build/latest?definitionId=2&branchName=master [azure-pipelines-image]: https://dev.azure.com/gulpjs/gulp/_apis/build/status/glob-parent?branchName=master [travis-url]: https://travis-ci.org/gulpjs/glob-parent [travis-image]: https://img.shields.io/travis/gulpjs/glob-parent.svg?label=travis-ci [appveyor-url]: https://ci.appveyor.com/project/gulpjs/glob-parent [appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/glob-parent.svg?label=appveyor [coveralls-url]: https://coveralls.io/r/gulpjs/glob-parent [coveralls-image]: https://img.shields.io/coveralls/gulpjs/glob-parent/master.svg [gitter-url]: https://gitter.im/gulpjs/gulp [gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg # function-bind <!-- [![build status][travis-svg]][travis-url] [![NPM version][npm-badge-svg]][npm-url] [![Coverage Status][5]][6] [![gemnasium Dependency Status][7]][8] [![Dependency status][deps-svg]][deps-url] [![Dev Dependency status][dev-deps-svg]][dev-deps-url] --> <!-- [![browser support][11]][12] --> Implementation of function.prototype.bind ## Example I mainly do this for unit tests I run on phantomjs. PhantomJS does not have Function.prototype.bind :( ```js Function.prototype.bind = require("function-bind") ``` ## Installation `npm install function-bind` ## Contributors - Raynos ## MIT Licenced [travis-svg]: https://travis-ci.org/Raynos/function-bind.svg [travis-url]: https://travis-ci.org/Raynos/function-bind [npm-badge-svg]: https://badge.fury.io/js/function-bind.svg [npm-url]: https://npmjs.org/package/function-bind [5]: https://coveralls.io/repos/Raynos/function-bind/badge.png [6]: https://coveralls.io/r/Raynos/function-bind [7]: https://gemnasium.com/Raynos/function-bind.png [8]: https://gemnasium.com/Raynos/function-bind [deps-svg]: https://david-dm.org/Raynos/function-bind.svg [deps-url]: https://david-dm.org/Raynos/function-bind [dev-deps-svg]: https://david-dm.org/Raynos/function-bind/dev-status.svg [dev-deps-url]: https://david-dm.org/Raynos/function-bind#info=devDependencies [11]: https://ci.testling.com/Raynos/function-bind.png [12]: https://ci.testling.com/Raynos/function-bind <p align="center"> <a href="https://gulpjs.com"> <img height="257" width="114" src="https://raw.githubusercontent.com/gulpjs/artwork/master/gulp-2x.png"> </a> </p> # glob-parent [![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Azure Pipelines Build Status][azure-pipelines-image]][azure-pipelines-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] Extract the non-magic parent path from a glob string. ## Usage ```js var globParent = require('glob-parent'); globParent('path/to/*.js'); // 'path/to' globParent('/root/path/to/*.js'); // '/root/path/to' globParent('/*.js'); // '/' globParent('*.js'); // '.' 
globParent('**/*.js'); // '.' globParent('path/{to,from}'); // 'path' globParent('path/!(to|from)'); // 'path' globParent('path/?(to|from)'); // 'path' globParent('path/+(to|from)'); // 'path' globParent('path/*(to|from)'); // 'path' globParent('path/@(to|from)'); // 'path' globParent('path/**/*'); // 'path' // if provided a non-glob path, returns the nearest dir globParent('path/foo/bar.js'); // 'path/foo' globParent('path/foo/'); // 'path/foo' globParent('path/foo'); // 'path' (see issue #3 for details) ``` ## API ### `globParent(maybeGlobString, [options])` Takes a string and returns the part of the path before the glob begins. Be aware of Escaping rules and Limitations below. #### options ```js { // Disables the automatic conversion of slashes for Windows flipBackslashes: true } ``` ## Escaping The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters: - `?` (question mark) unless used as a path segment alone - `*` (asterisk) - `|` (pipe) - `(` (opening parenthesis) - `)` (closing parenthesis) - `{` (opening curly brace) - `}` (closing curly brace) - `[` (opening bracket) - `]` (closing bracket) **Example** ```js globParent('foo/[bar]/') // 'foo' globParent('foo/\\[bar]/') // 'foo/[bar]' ``` ## Limitations ### Braces & Brackets This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with [expand-braces] and/or [expand-brackets]. ### Windows Backslashes are not valid path separators for globs. If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators). This cannot be used in conjunction with escape characters. ```js // BAD globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)' // GOOD globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)' ``` If you are using escape characters for a pattern without path parts (i.e. relative to `cwd`), prefix with `./` to avoid confusing glob-parent. ```js // BAD globParent('foo \\[bar]') // 'foo ' globParent('foo \\[bar]*') // 'foo ' // GOOD globParent('./foo \\[bar]') // 'foo [bar]' globParent('./foo \\[bar]*') // '.' 
``` ## License ISC [expand-braces]: https://github.com/jonschlinkert/expand-braces [expand-brackets]: https://github.com/jonschlinkert/expand-brackets [downloads-image]: https://img.shields.io/npm/dm/glob-parent.svg [npm-url]: https://www.npmjs.com/package/glob-parent [npm-image]: https://img.shields.io/npm/v/glob-parent.svg [azure-pipelines-url]: https://dev.azure.com/gulpjs/gulp/_build/latest?definitionId=2&branchName=master [azure-pipelines-image]: https://dev.azure.com/gulpjs/gulp/_apis/build/status/glob-parent?branchName=master [travis-url]: https://travis-ci.org/gulpjs/glob-parent [travis-image]: https://img.shields.io/travis/gulpjs/glob-parent.svg?label=travis-ci [appveyor-url]: https://ci.appveyor.com/project/gulpjs/glob-parent [appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/glob-parent.svg?label=appveyor [coveralls-url]: https://coveralls.io/r/gulpjs/glob-parent [coveralls-image]: https://img.shields.io/coveralls/gulpjs/glob-parent/master.svg [gitter-url]: https://gitter.im/gulpjs/gulp [gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg # Acorn A tiny, fast JavaScript parser written in JavaScript. ## Community Acorn is open source software released under an [MIT license](https://github.com/acornjs/acorn/blob/master/acorn/LICENSE). You are welcome to [report bugs](https://github.com/acornjs/acorn/issues) or create pull requests on [github](https://github.com/acornjs/acorn). For questions and discussion, please use the [Tern discussion forum](https://discuss.ternjs.net). ## Installation The easiest way to install acorn is from [`npm`](https://www.npmjs.com/): ```sh npm install acorn ``` Alternately, you can download the source and build acorn yourself: ```sh git clone https://github.com/acornjs/acorn.git cd acorn npm install ``` ## Interface **parse**`(input, options)` is the main interface to the library. The `input` parameter is a string, `options` can be undefined or an object setting some of the options listed below. The return value will be an abstract syntax tree object as specified by the [ESTree spec](https://github.com/estree/estree). ```javascript let acorn = require("acorn"); console.log(acorn.parse("1 + 1")); ``` When encountering a syntax error, the parser will raise a `SyntaxError` object with a meaningful message. The error object will have a `pos` property that indicates the string offset at which the error occurred, and a `loc` object that contains a `{line, column}` object referring to that same position. Options can be provided by passing a second argument, which should be an object containing any of these fields: - **ecmaVersion**: Indicates the ECMAScript version to parse. Must be either 3, 5, 6 (2015), 7 (2016), 8 (2017), 9 (2018), 10 (2019) or 11 (2020, partial support). This influences support for strict mode, the set of reserved words, and support for new syntax features. Default is 10. **NOTE**: Only 'stage 4' (finalized) ECMAScript features are being implemented by Acorn. Other proposed new features can be implemented through plugins. - **sourceType**: Indicate the mode the code should be parsed in. Can be either `"script"` or `"module"`. This influences global strict mode and parsing of `import` and `export` declarations. **NOTE**: If set to `"module"`, then static `import` / `export` syntax will be valid, even if `ecmaVersion` is less than 6. - **onInsertedSemicolon**: If given a callback, that callback will be called whenever a missing semicolon is inserted by the parser. 
The callback will be given the character offset of the point where the semicolon is inserted as argument, and if `locations` is on, also a `{line, column}` object representing this position. - **onTrailingComma**: Like `onInsertedSemicolon`, but for trailing commas. - **allowReserved**: If `false`, using a reserved word will generate an error. Defaults to `true` for `ecmaVersion` 3, `false` for higher versions. When given the value `"never"`, reserved words and keywords can also not be used as property names (as in Internet Explorer's old parser). - **allowReturnOutsideFunction**: By default, a return statement at the top level raises an error. Set this to `true` to accept such code. - **allowImportExportEverywhere**: By default, `import` and `export` declarations can only appear at a program's top level. Setting this option to `true` allows them anywhere where a statement is allowed. - **allowAwaitOutsideFunction**: By default, `await` expressions can only appear inside `async` functions. Setting this option to `true` allows to have top-level `await` expressions. They are still not allowed in non-`async` functions, though. - **allowHashBang**: When this is enabled (off by default), if the code starts with the characters `#!` (as in a shellscript), the first line will be treated as a comment. - **locations**: When `true`, each node has a `loc` object attached with `start` and `end` subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form. Default is `false`. - **onToken**: If a function is passed for this option, each found token will be passed in same format as tokens returned from `tokenizer().getToken()`. If array is passed, each found token is pushed to it. Note that you are not allowed to call the parser from the callback—that will corrupt its internal state. - **onComment**: If a function is passed for this option, whenever a comment is encountered the function will be called with the following parameters: - `block`: `true` if the comment is a block comment, false if it is a line comment. - `text`: The content of the comment. - `start`: Character offset of the start of the comment. - `end`: Character offset of the end of the comment. When the `locations` options is on, the `{line, column}` locations of the comment’s start and end are passed as two additional parameters. If array is passed for this option, each found comment is pushed to it as object in Esprima format: ```javascript { "type": "Line" | "Block", "value": "comment text", "start": Number, "end": Number, // If `locations` option is on: "loc": { "start": {line: Number, column: Number} "end": {line: Number, column: Number} }, // If `ranges` option is on: "range": [Number, Number] } ``` Note that you are not allowed to call the parser from the callback—that will corrupt its internal state. - **ranges**: Nodes have their start and end characters offsets recorded in `start` and `end` properties (directly on the node, rather than the `loc` object, which holds line/column data. To also add a [semi-standardized](https://bugzilla.mozilla.org/show_bug.cgi?id=745678) `range` property holding a `[start, end]` array with the same numbers, set the `ranges` option to `true`. - **program**: It is possible to parse multiple files into a single AST by passing the tree produced by parsing the first file as the `program` option in subsequent parses. This will add the toplevel forms of the parsed file to the "Program" (top) node of an existing parse tree. 
- **sourceFile**: When the `locations` option is `true`, you can pass this option to add a `source` attribute in every node’s `loc` object. Note that the contents of this option are not examined or processed in any way; you are free to use whatever format you choose. - **directSourceFile**: Like `sourceFile`, but a `sourceFile` property will be added (regardless of the `location` option) directly to the nodes, rather than the `loc` object. - **preserveParens**: If this option is `true`, parenthesized expressions are represented by (non-standard) `ParenthesizedExpression` nodes that have a single `expression` property containing the expression inside parentheses. **parseExpressionAt**`(input, offset, options)` will parse a single expression in a string, and return its AST. It will not complain if there is more of the string left after the expression. **tokenizer**`(input, options)` returns an object with a `getToken` method that can be called repeatedly to get the next token, a `{start, end, type, value}` object (with added `loc` property when the `locations` option is enabled and `range` property when the `ranges` option is enabled). When the token's type is `tokTypes.eof`, you should stop calling the method, since it will keep returning that same token forever. In ES6 environment, returned result can be used as any other protocol-compliant iterable: ```javascript for (let token of acorn.tokenizer(str)) { // iterate over the tokens } // transform code to array of tokens: var tokens = [...acorn.tokenizer(str)]; ``` **tokTypes** holds an object mapping names to the token type objects that end up in the `type` properties of tokens. **getLineInfo**`(input, offset)` can be used to get a `{line, column}` object for a given program string and offset. ### The `Parser` class Instances of the **`Parser`** class contain all the state and logic that drives a parse. It has static methods `parse`, `parseExpressionAt`, and `tokenizer` that match the top-level functions by the same name. When extending the parser with plugins, you need to call these methods on the extended version of the class. To extend a parser with plugins, you can use its static `extend` method. ```javascript var acorn = require("acorn"); var jsx = require("acorn-jsx"); var JSXParser = acorn.Parser.extend(jsx()); JSXParser.parse("foo(<bar/>)"); ``` The `extend` method takes any number of plugin values, and returns a new `Parser` class that includes the extra parser logic provided by the plugins. ## Command line interface The `bin/acorn` utility can be used to parse a file from the command line. It accepts as arguments its input file and the following options: - `--ecma3|--ecma5|--ecma6|--ecma7|--ecma8|--ecma9|--ecma10`: Sets the ECMAScript version to parse. Default is version 9. - `--module`: Sets the parsing mode to `"module"`. Is set to `"script"` otherwise. - `--locations`: Attaches a "loc" object to each node with "start" and "end" subobjects, each of which contains the one-based line and zero-based column numbers in `{line, column}` form. - `--allow-hash-bang`: If the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment. - `--compact`: No whitespace is used in the AST output. - `--silent`: Do not output the AST, just return the exit status. - `--help`: Print the usage information and quit. The utility spits out the syntax tree as JSON data. 
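On the programmatic side, here is a small sketch (not taken from the Acorn docs themselves) that combines a few of the options and helpers described above:

```javascript
const acorn = require("acorn")

// Collect comments into an array while parsing an ES module with locations enabled.
const comments = []
const code = "// greet\nexport const hi = name => `hi ${name}`"

const ast = acorn.parse(code, {
  ecmaVersion: 10,
  sourceType: "module",
  locations: true,
  onComment: comments
})

console.log(ast.type)          // "Program"
console.log(comments[0].value) // " greet"

// getLineInfo maps a character offset back to a {line, column} pair.
console.log(acorn.getLineInfo(code, 9)) // { line: 2, column: 0 }
```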
## Existing plugins

- [`acorn-jsx`](https://github.com/RReverser/acorn-jsx): Parse [Facebook JSX syntax extensions](https://github.com/facebook/jsx)

Plugins for ECMAScript proposals:

- [`acorn-stage3`](https://github.com/acornjs/acorn-stage3): Parse most stage 3 proposals, bundling:
  - [`acorn-class-fields`](https://github.com/acornjs/acorn-class-fields): Parse [class fields proposal](https://github.com/tc39/proposal-class-fields)
  - [`acorn-import-meta`](https://github.com/acornjs/acorn-import-meta): Parse [import.meta proposal](https://github.com/tc39/proposal-import-meta)
  - [`acorn-private-methods`](https://github.com/acornjs/acorn-private-methods): Parse [private methods, getters and setters proposal](https://github.com/tc39/proposal-private-methods)

# js-sha256

[![Build Status](https://travis-ci.org/emn178/js-sha256.svg?branch=master)](https://travis-ci.org/emn178/js-sha256)
[![Coverage Status](https://coveralls.io/repos/emn178/js-sha256/badge.svg?branch=master)](https://coveralls.io/r/emn178/js-sha256?branch=master)
[![CDNJS](https://img.shields.io/cdnjs/v/js-sha256.svg)](https://cdnjs.com/libraries/js-sha256/)
[![NPM](https://nodei.co/npm/js-sha256.png?stars&downloads)](https://nodei.co/npm/js-sha256/)

A simple SHA-256 / SHA-224 hash function for JavaScript that supports UTF-8 encoding.

## Demo

[SHA256 Online](http://emn178.github.io/online-tools/sha256.html)
[SHA224 Online](http://emn178.github.io/online-tools/sha224.html)

## Download

[Compressed](https://raw.github.com/emn178/js-sha256/master/build/sha256.min.js)
[Uncompressed](https://raw.github.com/emn178/js-sha256/master/src/sha256.js)

## Installation

You can install js-sha256 by using Bower:

```bash
bower install js-sha256
```

For node.js, you can use this command to install:

```bash
npm install js-sha256
```

## Usage

You can use it like this:

```JavaScript
sha256('Message to hash');
sha224('Message to hash');

var hash = sha256.create();
hash.update('Message to hash');
hash.hex();

var hash2 = sha256.update('Message to hash');
hash2.update('Message2 to hash');
hash2.array();

// HMAC
sha256.hmac('key', 'Message to hash');
sha224.hmac('key', 'Message to hash');

var hash = sha256.hmac.create('key');
hash.update('Message to hash');
hash.hex();

var hash2 = sha256.hmac.update('key', 'Message to hash');
hash2.update('Message2 to hash');
hash2.array();
```

If you use node.js, you should require the module first:

```JavaScript
var sha256 = require('js-sha256');
```

or

```JavaScript
var sha256 = require('js-sha256').sha256;
var sha224 = require('js-sha256').sha224;
```

It supports AMD:

```JavaScript
require(['your/path/sha256.js'], function(sha256) {
  // ...
}); ``` or TypeScript ```TypeScript import { sha256, sha224 } from 'js-sha256'; ``` ## Example ```JavaScript sha256(''); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256('The quick brown fox jumps over the lazy dog'); // d7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592 sha256('The quick brown fox jumps over the lazy dog.'); // ef537f25c895bfa782526529a9b63d97aa631564d5d789c2b765448c8635fb6c sha224(''); // d14a028c2a3a2bc9476102bb288234c415a2b01f828ea62ac5b3e42f sha224('The quick brown fox jumps over the lazy dog'); // 730e109bd7a8a32b1cb9d9a09aa2325d2430587ddbc0c38bad911525 sha224('The quick brown fox jumps over the lazy dog.'); // 619cba8e8e05826e9b8c519c0a5c68f4fb653e8a3d8aa04bb2c8cd4c // It also supports UTF-8 encoding sha256('中文'); // 72726d8818f693066ceb69afa364218b692e62ea92b385782363780f47529c21 sha224('中文'); // dfbab71afdf54388af4d55f8bd3de8c9b15e0eb916bf9125f4a959d4 // It also supports byte `Array`, `Uint8Array`, `ArrayBuffer` input sha256([]); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256(new Uint8Array([211, 212])); // 182889f925ae4e5cc37118ded6ed87f7bdc7cab5ec5e78faef2e50048999473f // Different output sha256(''); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256.hex(''); // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 sha256.array(''); // [227, 176, 196, 66, 152, 252, 28, 20, 154, 251, 244, 200, 153, 111, 185, 36, 39, 174, 65, 228, 100, 155, 147, 76, 164, 149, 153, 27, 120, 82, 184, 85] sha256.digest(''); // [227, 176, 196, 66, 152, 252, 28, 20, 154, 251, 244, 200, 153, 111, 185, 36, 39, 174, 65, 228, 100, 155, 147, 76, 164, 149, 153, 27, 120, 82, 184, 85] sha256.arrayBuffer(''); // ArrayBuffer ``` ## License The project is released under the [MIT license](http://www.opensource.org/licenses/MIT). ## Contact The project's website is located at https://github.com/emn178/js-sha256 Author: Chen, Yi-Cyuan ([email protected]) # postcss-value-parser [![Travis CI](https://travis-ci.org/TrySound/postcss-value-parser.svg)](https://travis-ci.org/TrySound/postcss-value-parser) Transforms CSS declaration values and at-rule parameters into a tree of nodes, and provides a simple traversal API. ## Usage ```js var valueParser = require('postcss-value-parser'); var cssBackgroundValue = 'url(foo.png) no-repeat 40px 73%'; var parsedValue = valueParser(cssBackgroundValue); // parsedValue exposes an API described below, // e.g. parsedValue.walk(..), parsedValue.toString(), etc. ``` For example, parsing the value `rgba(233, 45, 66, .5)` will return the following: ```js { nodes: [ { type: 'function', value: 'rgba', before: '', after: '', nodes: [ { type: 'word', value: '233' }, { type: 'div', value: ',', before: '', after: ' ' }, { type: 'word', value: '45' }, { type: 'div', value: ',', before: '', after: ' ' }, { type: 'word', value: '66' }, { type: 'div', value: ',', before: ' ', after: '' }, { type: 'word', value: '.5' } ] } ] } ``` If you wanted to convert each `rgba()` value in `sourceCSS` to a hex value, you could do so like this: ```js var valueParser = require('postcss-value-parser'); var parsed = valueParser(sourceCSS); // walk() will visit all the of the nodes in the tree, // invoking the callback for each. parsed.walk(function (node) { // Since we only want to transform rgba() values, // we can ignore anything else. 
  if (node.type !== 'function' || node.value !== 'rgba') return;

  // We can make an array of the rgba() arguments to feed to a
  // convertToHex() function
  var color = node.nodes.filter(function (node) {
    return node.type === 'word';
  }).map(function (node) {
    return Number(node.value);
  }); // [233, 45, 66, .5]

  // Now we will transform the existing rgba() function node
  // into a word node with the hex value
  node.type = 'word';
  node.value = convertToHex(color);
})

parsed.toString(); // #E92D42
```

## Nodes

Each node is an object with these common properties:

- **type**: The type of node (`word`, `string`, `div`, `space`, `comment`, or `function`). Each type is documented below.
- **value**: Each node has a `value` property, but what exactly `value` means is specific to the node type. Details are documented for each type below.
- **sourceIndex**: The starting index of the node within the original source string. For example, given the source string `10px 20px`, the `word` node whose value is `20px` will have a `sourceIndex` of `5`.

### word

The catch-all node type that includes keywords (e.g. `no-repeat`), quantities (e.g. `20px`, `75%`, `1.5`), and hex colors (e.g. `#e6e6e6`).

Node-specific properties:

- **value**: The "word" itself.

### string

A quoted string value, e.g. `"something"` in `content: "something";`.

Node-specific properties:

- **value**: The text content of the string.
- **quote**: The quotation mark surrounding the string, either `"` or `'`.
- **unclosed**: `true` if the string was not closed properly, e.g. `"unclosed string `.

### div

A divider, for example

- `,` in `animation-duration: 1s, 2s, 3s`
- `/` in `border-radius: 10px / 23px`
- `:` in `(min-width: 700px)`

Node-specific properties:

- **value**: The divider character. Either `,`, `/`, or `:` (see examples above).
- **before**: Whitespace before the divider.
- **after**: Whitespace after the divider.

### space

Whitespace used as a separator, e.g. ` ` occurring twice in `border: 1px solid black;`.

Node-specific properties:

- **value**: The whitespace itself.

### comment

A CSS comment starts with `/*` and ends with `*/`.

Node-specific properties:

- **value**: The comment value without `/*` and `*/`.
- **unclosed**: `true` if the comment was not closed properly, e.g. `/* comment without an end `.

### function

A CSS function, e.g. `rgb(0,0,0)` or `url(foo.bar)`.

Function nodes have nodes nested within them: the function arguments.

Additional properties:

- **value**: The name of the function, e.g. `rgb` in `rgb(0,0,0)`.
- **before**: Whitespace after the opening parenthesis and before the first argument, e.g. ` ` in `rgb( 0,0,0)`.
- **after**: Whitespace before the closing parenthesis and after the last argument, e.g. ` ` in `rgb(0,0,0 )`.
- **nodes**: More nodes representing the arguments to the function.
- **unclosed**: `true` if the parentheses were not closed properly, e.g. `( unclosed-function `.

Media features surrounded by parentheses are considered functions with an empty value. For example, `(min-width: 700px)` parses to these nodes:

```js
[
  { type: 'function', value: '', before: '', after: '', nodes: [
    { type: 'word', value: 'min-width' },
    { type: 'div', value: ':', before: '', after: ' ' },
    { type: 'word', value: '700px' }
  ] }
]
```

`url()` functions can be parsed a little bit differently depending on whether the first character in the argument is a quotation mark.
`url( /gfx/img/bg.jpg )` parses to:

```js
{ type: 'function', sourceIndex: 0, value: 'url', before: ' ', after: ' ', nodes: [
  { type: 'word', sourceIndex: 5, value: '/gfx/img/bg.jpg' }
] }
```

`url( "/gfx/img/bg.jpg" )`, on the other hand, parses to:

```js
{ type: 'function', sourceIndex: 0, value: 'url', before: ' ', after: ' ', nodes: [
  { type: 'string', sourceIndex: 5, quote: '"', value: '/gfx/img/bg.jpg' }
] }
```

### unicode-range

The unicode-range CSS descriptor sets the specific range of characters to be used from a font defined by @font-face and made available for use on the current page (`unicode-range: U+0025-00FF`).

Node-specific properties:

- **value**: The "unicode-range" itself.

## API

```js
var valueParser = require('postcss-value-parser');
```

### valueParser.unit(quantity)

Parses `quantity`, distinguishing the number from the unit. Returns an object like the following:

```js
// Given 2rem
{
  number: '2',
  unit: 'rem'
}
```

If the `quantity` argument cannot be parsed as a number, returns `false`.

*This function does not parse complete values*: you cannot pass it `1px solid black` and expect `px` as the unit. Instead, you should pass it single quantities only. Parse `1px solid black`, then pass it the stringified `1px` node (a `word` node) to parse the number and unit.

### valueParser.stringify(nodes[, custom])

Stringifies a node or array of nodes.

The `custom` function is called for each `node`; return a string to override the default behaviour.

### valueParser.walk(nodes, callback[, bubble])

Walks each provided node, recursively walking all descendent nodes within functions.

Returning `false` in the `callback` will prevent traversal of descendent nodes (within functions). You can use this feature for shallow iteration, walking over only the *immediate* children. *Note: This only applies if `bubble` is `false` (which is the default).*

By default, the tree is walked from the outermost node inwards. To reverse the direction, pass `true` for the `bubble` argument.

The `callback` is invoked with three arguments: `callback(node, index, nodes)`.

- `node`: The current node.
- `index`: The index of the current node.
- `nodes`: The complete nodes array passed to `walk()`.

Returns the `valueParser` instance.

### var parsed = valueParser(value)

Returns the parsed node tree.

### parsed.nodes

The array of nodes.

### parsed.toString()

Stringifies the node tree.

### parsed.walk(callback[, bubble])

Walks each node inside `parsed.nodes`. See the documentation for `valueParser.walk()` above.

# License

MIT © [Bogdan Chadkin](mailto:[email protected])

util-deprecate
==============

### The Node.js `util.deprecate()` function with browser support

In Node.js, this module simply re-exports the `util.deprecate()` function.

In the web browser (i.e. via browserify), a browser-specific implementation of the `util.deprecate()` function is used.

## API

A `deprecate()` function is the only thing exposed by this module.
``` javascript // setup: exports.foo = deprecate(foo, 'foo() is deprecated, use bar() instead'); // users see: foo(); // foo() is deprecated, use bar() instead foo(); foo(); ``` ## License (The MIT License) Copyright (c) 2014 Nathan Rajlich <[email protected]> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # is-extglob [![NPM version](https://img.shields.io/npm/v/is-extglob.svg?style=flat)](https://www.npmjs.com/package/is-extglob) [![NPM downloads](https://img.shields.io/npm/dm/is-extglob.svg?style=flat)](https://npmjs.org/package/is-extglob) [![Build Status](https://img.shields.io/travis/jonschlinkert/is-extglob.svg?style=flat)](https://travis-ci.org/jonschlinkert/is-extglob) > Returns true if a string has an extglob. ## Install Install with [npm](https://www.npmjs.com/): ```sh $ npm install --save is-extglob ``` ## Usage ```js var isExtglob = require('is-extglob'); ``` **True** ```js isExtglob('?(abc)'); isExtglob('@(abc)'); isExtglob('!(abc)'); isExtglob('*(abc)'); isExtglob('+(abc)'); ``` **False** Escaped extglobs: ```js isExtglob('\\?(abc)'); isExtglob('\\@(abc)'); isExtglob('\\!(abc)'); isExtglob('\\*(abc)'); isExtglob('\\+(abc)'); ``` Everything else... ```js isExtglob('foo.js'); isExtglob('!foo.js'); isExtglob('*.js'); isExtglob('**/abc.js'); isExtglob('abc/*.js'); isExtglob('abc/(aaa|bbb).js'); isExtglob('abc/[a-z].js'); isExtglob('abc/{a,b}.js'); isExtglob('abc/?.js'); isExtglob('abc.js'); isExtglob('abc/def/ghi.js'); ``` ## History **v2.0** Adds support for escaping. Escaped exglobs no longer return true. ## About ### Related projects * [has-glob](https://www.npmjs.com/package/has-glob): Returns `true` if an array has a glob pattern. | [homepage](https://github.com/jonschlinkert/has-glob "Returns `true` if an array has a glob pattern.") * [is-glob](https://www.npmjs.com/package/is-glob): Returns `true` if the given string looks like a glob pattern or an extglob pattern… [more](https://github.com/jonschlinkert/is-glob) | [homepage](https://github.com/jonschlinkert/is-glob "Returns `true` if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a bet") * [micromatch](https://www.npmjs.com/package/micromatch): Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | [homepage](https://github.com/jonschlinkert/micromatch "Glob matching for javascript/node.js. 
A drop-in replacement and faster alternative to minimatch and multimatch.") ### Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). ### Building docs _(This document was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme) (a [verb](https://github.com/verbose/verb) generator), please don't edit the readme directly. Any changes to the readme must be made in [.verb.md](.verb.md).)_ To generate the readme and API documentation with [verb](https://github.com/verbose/verb): ```sh $ npm install -g verb verb-generate-readme && verb ``` ### Running tests Install dev dependencies: ```sh $ npm install -d && npm test ``` ### Author **Jon Schlinkert** * [github/jonschlinkert](https://github.com/jonschlinkert) * [twitter/jonschlinkert](http://twitter.com/jonschlinkert) ### License Copyright © 2016, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT license](https://github.com/jonschlinkert/is-extglob/blob/master/LICENSE). *** _This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.1.31, on October 12, 2016._ # PostCSS JS <img align="right" width="135" height="95" title="Philosopher’s stone, logo of PostCSS" src="https://postcss.org/logo-leftp.svg"> [PostCSS] for for CSS-in-JS and styles in JS objects. For example, to use [Stylelint] or [RTLCSS] plugins in your workflow. <a href="https://evilmartians.com/?utm_source=postcss-js"> <img src="https://evilmartians.com/badges/sponsored-by-evil-martians.svg" alt="Sponsored by Evil Martians" width="236" height="54"> </a> [Stylelint]: https://github.com/stylelint/stylelint [PostCSS]: https://github.com/postcss/postcss [RTLCSS]: https://github.com/MohammadYounes/rtlcss ## Docs Read **[full docs](https://github.com/postcss/postcss-js#readme)** on GitHub. # path-parse [![Build Status](https://travis-ci.org/jbgutierrez/path-parse.svg?branch=master)](https://travis-ci.org/jbgutierrez/path-parse) > Node.js [`path.parse(pathString)`](https://nodejs.org/api/path.html#path_path_parse_pathstring) [ponyfill](https://ponyfill.com). ## Install ``` $ npm install --save path-parse ``` ## Usage ```js var pathParse = require('path-parse'); pathParse('/home/user/dir/file.txt'); //=> { // root : "/", // dir : "/home/user/dir", // base : "file.txt", // ext : ".txt", // name : "file" // } ``` ## API See [`path.parse(pathString)`](https://nodejs.org/api/path.html#path_path_parse_pathstring) docs. ### pathParse(path) ### pathParse.posix(path) The Posix specific version. ### pathParse.win32(path) The Windows specific version. ## License MIT © [Javier Blanco](http://jbgutierrez.info) # toidentifier [![NPM Version][npm-image]][npm-url] [![NPM Downloads][downloads-image]][downloads-url] [![Build Status][github-actions-ci-image]][github-actions-ci-url] [![Test Coverage][codecov-image]][codecov-url] > Convert a string of words to a JavaScript identifier ## Install This is a [Node.js](https://nodejs.org/en/) module available through the [npm registry](https://www.npmjs.com/). Installation is done using the [`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): ```bash $ npm install toidentifier ``` ## Example ```js var toIdentifier = require('toidentifier') console.log(toIdentifier('Bad Request')) // => "BadRequest" ``` ## API This CommonJS module exports a single default function: `toIdentifier`. 
### toIdentifier(string) Given a string as the argument, it will be transformed according to the following rules and the new string will be returned: 1. Split into words separated by space characters (`0x20`). 2. Upper case the first character of each word. 3. Join the words together with no separator. 4. Remove all non-word (`[0-9a-z_]`) characters. ## License [MIT](LICENSE) [codecov-image]: https://img.shields.io/codecov/c/github/component/toidentifier.svg [codecov-url]: https://codecov.io/gh/component/toidentifier [downloads-image]: https://img.shields.io/npm/dm/toidentifier.svg [downloads-url]: https://npmjs.org/package/toidentifier [github-actions-ci-image]: https://img.shields.io/github/workflow/status/component/toidentifier/ci/master?label=ci [github-actions-ci-url]: https://github.com/component/toidentifier?query=workflow%3Aci [npm-image]: https://img.shields.io/npm/v/toidentifier.svg [npm-url]: https://npmjs.org/package/toidentifier ## [npm]: https://www.npmjs.com/ [yarn]: https://yarnpkg.com/ # acorn-node [Acorn](https://github.com/acornjs/acorn) preloaded with plugins for syntax parity with recent Node versions. It also includes versions of the plugins compiled with [Bublé](https://github.com/rich-harris/buble), so they can be run on old Node versions (0.6 and up). [![npm][npm-image]][npm-url] [![travis][travis-image]][travis-url] [![standard][standard-image]][standard-url] [npm-image]: https://img.shields.io/npm/v/acorn-node.svg?style=flat-square [npm-url]: https://www.npmjs.com/package/acorn-node [travis-image]: https://img.shields.io/travis/browserify/acorn-node/master.svg?style=flat-square [travis-url]: https://travis-ci.org/browserify/acorn-node [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg?style=flat-square [standard-url]: http://npm.im/standard ## Install ``` npm install acorn-node ``` ## Usage ```js var acorn = require('acorn-node') ``` The API is the same as [acorn](https://github.com/acornjs/acorn), but the following syntax features are enabled by default: - Bigint syntax `10n` - Numeric separators syntax `10_000` - Public and private class instance fields - Public and private class static fields - Dynamic `import()` - The `import.meta` property - `export * as ns from` syntax And the following options have different defaults from acorn, to match Node modules: - `ecmaVersion: 2019` - `allowHashBang: true` - `allowReturnOutsideFunction: true` ```js var walk = require('acorn-node/walk') ``` The Acorn syntax tree walker. Comes preconfigured for the syntax plugins if necessary. See the [acorn documentation](https://github.com/acornjs/acorn#distwalkjs) for details. ## License The files in the repo root and the ./test folder are licensed as [Apache-2.0](LICENSE.md). 
The files in lib/ are generated from other packages: - lib/bigint: [acorn-bigint](https://github.com/acornjs/acorn-bigint]), MIT - lib/class-private-elements: [acorn-class-private-elements](https://github.com/acornjs/acorn-class-private-elements), MIT - lib/dynamic-import: [acorn-dynamic-import](https://github.com/acornjs/acorn-dynamic-import), MIT - lib/export-ns-from: [acorn-export-ns-from](https://github.com/acornjs/acorn-export-ns-from), MIT - lib/import-meta: [acorn-import-meta](https://github.com/acornjs/acorn-import-meta), MIT - lib/numeric-separator: [acorn-numeric-separator](https://github.com/acornjs/acorn-numeric-separator]), MIT - lib/static-class-features: [acorn-static-class-features](https://github.com/acornjs/acorn-static-class-features), MIT # verifold <a href="https://gitpod.io/github.com/jeromtom/verifold"> <img src="https://img.shields.io/badge/Contribute%20with-Gitpod-908a85?logo=gitpod" alt="Contribute with Gitpod" /> </a> # mustache.js - Logic-less {{mustache}} templates with JavaScript > What could be more logical awesome than no logic at all? [![Build Status](https://travis-ci.org/janl/mustache.js.svg?branch=master)](https://travis-ci.org/janl/mustache.js) [mustache.js](http://github.com/janl/mustache.js) is a zero-dependency implementation of the [mustache](http://mustache.github.com/) template system in JavaScript. [Mustache](http://mustache.github.com/) is a logic-less template syntax. It can be used for HTML, config files, source code - anything. It works by expanding tags in a template using values provided in a hash or object. We call it "logic-less" because there are no if statements, else clauses, or for loops. Instead there are only tags. Some tags are replaced with a value, some nothing, and others a series of values. For a language-agnostic overview of mustache's template syntax, see the `mustache(5)` [manpage](http://mustache.github.com/mustache.5.html). ## Where to use mustache.js? You can use mustache.js to render mustache templates anywhere you can use JavaScript. This includes web browsers, server-side environments such as [Node.js](http://nodejs.org/), and [CouchDB](http://couchdb.apache.org/) views. mustache.js ships with support for the [CommonJS](http://www.commonjs.org/) module API, the [Asynchronous Module Definition](https://github.com/amdjs/amdjs-api/wiki/AMD) API (AMD) and [ECMAScript modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules). In addition to being a package to be used programmatically, you can use it as a [command line tool](#command-line-tool). And this will be your templates after you use Mustache: !['stache](https://cloud.githubusercontent.com/assets/288977/8779228/a3cf700e-2f02-11e5-869a-300312fb7a00.gif) ## Install You can get Mustache via [npm](http://npmjs.com). ```bash $ npm install mustache --save ``` ## Usage Below is a quick example how to use mustache.js: ```js var view = { title: "Joe", calc: function () { return 2 + 4; } }; var output = Mustache.render("{{title}} spends {{calc}}", view); ``` In this example, the `Mustache.render` function takes two parameters: 1) the [mustache](http://mustache.github.com/) template and 2) a `view` object that contains the data and code needed to render the template. ## Templates A [mustache](http://mustache.github.com/) template is a string that contains any number of mustache tags. Tags are indicated by the double mustaches that surround them. `{{person}}` is a tag, as is `{{#person}}`. In both examples we refer to `person` as the tag's key. 
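To make that concrete, here is a small illustrative sketch (not part of the original docs) that renders both a variable tag and a section tag:

```js
var Mustache = require('mustache');

var template = 'Hello {{name}}!{{#admin}} You have admin rights.{{/admin}}';

// {{name}} is a variable tag; {{#admin}}...{{/admin}} is a section tag.
Mustache.render(template, { name: 'Chris', admin: true });
// => 'Hello Chris! You have admin rights.'

Mustache.render(template, { name: 'Chris', admin: false });
// => 'Hello Chris!'
```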
There are several types of tags available in mustache.js, described below. There are several techniques that can be used to load templates and hand them to mustache.js, here are two of them: #### Include Templates If you need a template for a dynamic part in a static website, you can consider including the template in the static HTML file to avoid loading templates separately. Here's a small example: ```js // file: render.js function renderHello() { var template = document.getElementById('template').innerHTML; var rendered = Mustache.render(template, { name: 'Luke' }); document.getElementById('target').innerHTML = rendered; } ``` ```html <html> <body onload="renderHello()"> <div id="target">Loading...</div> <script id="template" type="x-tmpl-mustache"> Hello {{ name }}! </script> <script src="https://unpkg.com/mustache@latest"></script> <script src="render.js"></script> </body> </html> ``` #### Load External Templates If your templates reside in individual files, you can load them asynchronously and render them when they arrive. Another example using [fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch): ```js function renderHello() { fetch('template.mustache') .then((response) => response.text()) .then((template) => { var rendered = Mustache.render(template, { name: 'Luke' }); document.getElementById('target').innerHTML = rendered; }); } ``` ### Variables The most basic tag type is a simple variable. A `{{name}}` tag renders the value of the `name` key in the current context. If there is no such key, nothing is rendered. All variables are HTML-escaped by default. If you want to render unescaped HTML, use the triple mustache: `{{{name}}}`. You can also use `&` to unescape a variable. If you'd like to change HTML-escaping behavior globally (for example, to template non-HTML formats), you can override Mustache's escape function. For example, to disable all escaping: `Mustache.escape = function(text) {return text;};`. If you want `{{name}}` _not_ to be interpreted as a mustache tag, but rather to appear exactly as `{{name}}` in the output, you must change and then restore the default delimiter. See the [Custom Delimiters](#custom-delimiters) section for more information. View: ```json { "name": "Chris", "company": "<b>GitHub</b>" } ``` Template: ``` * {{name}} * {{age}} * {{company}} * {{{company}}} * {{&company}} {{=<% %>=}} * {{company}} <%={{ }}=%> ``` Output: ```html * Chris * * &lt;b&gt;GitHub&lt;/b&gt; * <b>GitHub</b> * <b>GitHub</b> * {{company}} ``` JavaScript's dot notation may be used to access keys that are properties of objects in a view. View: ```json { "name": { "first": "Michael", "last": "Jackson" }, "age": "RIP" } ``` Template: ```html * {{name.first}} {{name.last}} * {{age}} ``` Output: ```html * Michael Jackson * RIP ``` ### Sections Sections render blocks of text zero or more times, depending on the value of the key in the current context. A section begins with a pound and ends with a slash. That is, `{{#person}}` begins a `person` section, while `{{/person}}` ends it. The text between the two tags is referred to as that section's "block". The behavior of the section is determined by the value of the key. #### False Values or Empty Lists If the `person` key does not exist, or exists and has a value of `null`, `undefined`, `false`, `0`, or `NaN`, or is an empty string or an empty list, the block will not be rendered. View: ```json { "person": false } ``` Template: ```html Shown. {{#person}} Never shown! {{/person}} ``` Output: ```html Shown. 
``` #### Non-Empty Lists If the `person` key exists and is not `null`, `undefined`, or `false`, and is not an empty list the block will be rendered one or more times. When the value is a list, the block is rendered once for each item in the list. The context of the block is set to the current item in the list for each iteration. In this way we can loop over collections. View: ```json { "stooges": [ { "name": "Moe" }, { "name": "Larry" }, { "name": "Curly" } ] } ``` Template: ```html {{#stooges}} <b>{{name}}</b> {{/stooges}} ``` Output: ```html <b>Moe</b> <b>Larry</b> <b>Curly</b> ``` When looping over an array of strings, a `.` can be used to refer to the current item in the list. View: ```json { "musketeers": ["Athos", "Aramis", "Porthos", "D'Artagnan"] } ``` Template: ```html {{#musketeers}} * {{.}} {{/musketeers}} ``` Output: ```html * Athos * Aramis * Porthos * D'Artagnan ``` If the value of a section variable is a function, it will be called in the context of the current item in the list on each iteration. View: ```js { "beatles": [ { "firstName": "John", "lastName": "Lennon" }, { "firstName": "Paul", "lastName": "McCartney" }, { "firstName": "George", "lastName": "Harrison" }, { "firstName": "Ringo", "lastName": "Starr" } ], "name": function () { return this.firstName + " " + this.lastName; } } ``` Template: ```html {{#beatles}} * {{name}} {{/beatles}} ``` Output: ```html * John Lennon * Paul McCartney * George Harrison * Ringo Starr ``` #### Functions If the value of a section key is a function, it is called with the section's literal block of text, un-rendered, as its first argument. The second argument is a special rendering function that uses the current view as its view argument. It is called in the context of the current view object. View: ```js { "name": "Tater", "bold": function () { return function (text, render) { return "<b>" + render(text) + "</b>"; } } } ``` Template: ```html {{#bold}}Hi {{name}}.{{/bold}} ``` Output: ```html <b>Hi Tater.</b> ``` ### Inverted Sections An inverted section opens with `{{^section}}` instead of `{{#section}}`. The block of an inverted section is rendered only if the value of that section's tag is `null`, `undefined`, `false`, *falsy* or an empty list. View: ```json { "repos": [] } ``` Template: ```html {{#repos}}<b>{{name}}</b>{{/repos}} {{^repos}}No repos :({{/repos}} ``` Output: ```html No repos :( ``` ### Comments Comments begin with a bang and are ignored. The following template: ```html <h1>Today{{! ignore me }}.</h1> ``` Will render as follows: ```html <h1>Today.</h1> ``` Comments may contain newlines. ### Partials Partials begin with a greater than sign, like {{> box}}. Partials are rendered at runtime (as opposed to compile time), so recursive partials are possible. Just avoid infinite loops. They also inherit the calling context. Whereas in ERB you may have this: ```html+erb <%= partial :next_more, :start => start, :size => size %> ``` Mustache requires only this: ```html {{> next_more}} ``` Why? Because the `next_more.mustache` file will inherit the `size` and `start` variables from the calling context. In this way you may want to think of partials as includes, imports, template expansion, nested templates, or subtemplates, even though those aren't literally the case here. 
For example, this template and partial: base.mustache: <h2>Names</h2> {{#names}} {{> user}} {{/names}} user.mustache: <strong>{{name}}</strong> Can be thought of as a single, expanded template: ```html <h2>Names</h2> {{#names}} <strong>{{name}}</strong> {{/names}} ``` In mustache.js an object of partials may be passed as the third argument to `Mustache.render`. The object should be keyed by the name of the partial, and its value should be the partial text. ```js Mustache.render(template, view, { user: userTemplate }); ``` ### Custom Delimiters Custom delimiters can be used in place of `{{` and `}}` by setting the new values in JavaScript or in templates. #### Setting in JavaScript The `Mustache.tags` property holds an array consisting of the opening and closing tag values. Set custom values by passing a new array of tags to `render()`, which gets honored over the default values, or by overriding the `Mustache.tags` property itself: ```js var customTags = [ '<%', '%>' ]; ``` ##### Pass Value into Render Method ```js Mustache.render(template, view, {}, customTags); ``` ##### Override Tags Property ```js Mustache.tags = customTags; // Subsequent parse() and render() calls will use customTags ``` #### Setting in Templates Set Delimiter tags start with an equals sign and change the tag delimiters from `{{` and `}}` to custom strings. Consider the following contrived example: ```html+erb * {{ default_tags }} {{=<% %>=}} * <% erb_style_tags %> <%={{ }}=%> * {{ default_tags_again }} ``` Here we have a list with three items. The first item uses the default tag style, the second uses ERB style as defined by the Set Delimiter tag, and the third returns to the default style after yet another Set Delimiter declaration. According to [ctemplates](https://htmlpreview.github.io/?https://raw.githubusercontent.com/OlafvdSpek/ctemplate/master/doc/howto.html), this "is useful for languages like TeX, where double-braces may occur in the text and are awkward to use for markup." Custom delimiters may not contain whitespace or the equals sign. ## Pre-parsing and Caching Templates By default, when mustache.js first parses a template it keeps the full parsed token tree in a cache. The next time it sees that same template it skips the parsing step and renders the template much more quickly. If you'd like, you can do this ahead of time using `mustache.parse`. ```js Mustache.parse(template); // Then, sometime later. Mustache.render(template, view); ``` ## Command line tool mustache.js is shipped with a Node.js based command line tool. It might be installed as a global tool on your computer to render a mustache template of some kind ```bash $ npm install -g mustache $ mustache dataView.json myTemplate.mustache > output.html ``` also supports stdin. ```bash $ cat dataView.json | mustache - myTemplate.mustache > output.html ``` or as a package.json `devDependency` in a build process maybe? ```bash $ npm install mustache --save-dev ``` ```json { "scripts": { "build": "mustache dataView.json myTemplate.mustache > public/output.html" } } ``` ```bash $ npm run build ``` The command line tool is basically a wrapper around `Mustache.render` so you get all the features. 
If your templates use partials you should pass paths to partials using `-p` flag: ```bash $ mustache -p path/to/partial1.mustache -p path/to/partial2.mustache dataView.json myTemplate.mustache ``` ## Plugins for JavaScript Libraries mustache.js may be built specifically for several different client libraries, including the following: - [jQuery](http://jquery.com/) - [MooTools](http://mootools.net/) - [Dojo](http://www.dojotoolkit.org/) - [YUI](http://developer.yahoo.com/yui/) - [qooxdoo](http://qooxdoo.org/) These may be built using [Rake](http://rake.rubyforge.org/) and one of the following commands: ```bash $ rake jquery $ rake mootools $ rake dojo $ rake yui3 $ rake qooxdoo ``` ## TypeScript Since the source code of this package is written in JavaScript, we follow the [TypeScript publishing docs](https://www.typescriptlang.org/docs/handbook/declaration-files/publishing.html) preferred approach by having type definitions available via [@types/mustache](https://www.npmjs.com/package/@types/mustache). ## Testing In order to run the tests you'll need to install [Node.js](http://nodejs.org/). You also need to install the sub module containing [Mustache specifications](http://github.com/mustache/spec) in the project root. ```bash $ git submodule init $ git submodule update ``` Install dependencies. ```bash $ npm install ``` Then run the tests. ```bash $ npm test ``` The test suite consists of both unit and integration tests. If a template isn't rendering correctly for you, you can make a test for it by doing the following: 1. Create a template file named `mytest.mustache` in the `test/_files` directory. Replace `mytest` with the name of your test. 2. Create a corresponding view file named `mytest.js` in the same directory. This file should contain a JavaScript object literal enclosed in parentheses. See any of the other view files for an example. 3. Create a file with the expected output in `mytest.txt` in the same directory. Then, you can run the test with: ```bash $ TEST=mytest npm run test-render ``` ### Browser tests Browser tests are not included in `npm test` as they run for too long, although they are ran automatically on Travis when merged into master. Run browser tests locally in any browser: ```bash $ npm run test-browser-local ``` then point your browser to `http://localhost:8080/__zuul` ## Who uses mustache.js? An updated list of mustache.js users is kept [on the Github wiki](https://github.com/janl/mustache.js/wiki/Beard-Competition). Add yourself or your company if you use mustache.js! ## Contributing mustache.js is a mature project, but it continues to actively invite maintainers. You can help out a high-profile project that is used in a lot of places on the web. No big commitment required, if all you do is review a single [Pull Request](https://github.com/janl/mustache.js/pulls), you are a maintainer. And a hero. 
### Your First Contribution

- review a [Pull Request](https://github.com/janl/mustache.js/pulls)
- fix an [Issue](https://github.com/janl/mustache.js/issues)
- update the [documentation](https://github.com/janl/mustache.js#usage)
- make a website
- write a tutorial

## Thanks

mustache.js wouldn't kick ass if it weren't for these fine souls:

* Chris Wanstrath / defunkt
* Alexander Lang / langalex
* Sebastian Cohnen / tisba
* J Chris Anderson / jchris
* Tom Robinson / tlrobinson
* Aaron Quint / quirkey
* Douglas Crockford
* Nikita Vasilyev / NV
* Elise Wood / glytch
* Damien Mathieu / dmathieu
* Jakub Kuźma / qoobaa
* Will Leinweber / will
* dpree
* Jason Smith / jhs
* Aaron Gibralter / agibralter
* Ross Boucher / boucher
* Matt Sanford / mzsanford
* Ben Cherry / bcherry
* Michael Jackson / mjackson
* Phillip Johnsen / phillipj
* David da Silva Contín / dasilvacontin

# Source Map JS

[![NPM](https://nodei.co/npm/source-map-js.png?downloads=true&downloadRank=true)](https://www.npmjs.com/package/source-map-js)

Difference from the original [source-map](https://github.com/mozilla/source-map):

> TL;DR: it's a fork of the original source-map@0.6, but with performance optimizations.

This journey starts from [source-map@0.7.0](https://github.com/mozilla/source-map/blob/master/CHANGELOG.md#070). Parts of it were rewritten in Rust and WASM, and the API became async.

That is still a major blocker for many libraries, PostCSS and Sass for example, because they would need to migrate their whole API to the async model. This is why 0.6.1 has twice as many downloads as 0.7.3, even though 0.7.3 is several times faster.

![Downloads count](media/downloads.png)

More importantly, the WASM version contains some optimizations in its JS code too. This is why the [community asked to create a branch for the 0.6 version](https://github.com/mozilla/source-map/issues/324) and port those optimizations, but, sadly, the answer was «no». A bit later I discovered [an issue](https://github.com/mozilla/source-map/issues/370) created by [Ben Rothman (@benthemonkey)](https://github.com/benthemonkey) with no response at all.

[Roman Dvornov (@lahmatiy)](https://github.com/lahmatiy) wrote [several posts](https://t.me/gorshochekvarit/76) (in Russian) about the source-map library in his Telegram channel. He mentioned the article [«Maybe you don't need Rust and WASM to speed up your JS»](https://mrale.ph/blog/2018/02/03/maybe-you-dont-need-rust-to-speed-up-your-js.html) written by [Vyacheslav Egorov (@mraleph)](https://github.com/mraleph). That article describes optimizations and hacks that lead to almost the same performance as the WASM implementation.

I decided to fork the original source-map and port those optimizations from the article, along with several other PRs from the original source-map.

---------

This is a library to generate and consume the source map format [described here][format].
[format]: https://docs.google.com/document/d/1U1RGAehQwRypUTovF1KRlpiOFze0b-_2gc6fAH0KY0k/edit ## Use with Node $ npm install source-map-js <!-- ## Use on the Web <script src="https://raw.githubusercontent.com/mozilla/source-map/master/dist/source-map.min.js" defer></script> --> -------------------------------------------------------------------------------- <!-- `npm run toc` to regenerate the Table of Contents --> <!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ## Table of Contents - [Examples](#examples) - [Consuming a source map](#consuming-a-source-map) - [Generating a source map](#generating-a-source-map) - [With SourceNode (high level API)](#with-sourcenode-high-level-api) - [With SourceMapGenerator (low level API)](#with-sourcemapgenerator-low-level-api) - [API](#api) - [SourceMapConsumer](#sourcemapconsumer) - [new SourceMapConsumer(rawSourceMap)](#new-sourcemapconsumerrawsourcemap) - [SourceMapConsumer.prototype.computeColumnSpans()](#sourcemapconsumerprototypecomputecolumnspans) - [SourceMapConsumer.prototype.originalPositionFor(generatedPosition)](#sourcemapconsumerprototypeoriginalpositionforgeneratedposition) - [SourceMapConsumer.prototype.generatedPositionFor(originalPosition)](#sourcemapconsumerprototypegeneratedpositionfororiginalposition) - [SourceMapConsumer.prototype.allGeneratedPositionsFor(originalPosition)](#sourcemapconsumerprototypeallgeneratedpositionsfororiginalposition) - [SourceMapConsumer.prototype.hasContentsOfAllSources()](#sourcemapconsumerprototypehascontentsofallsources) - [SourceMapConsumer.prototype.sourceContentFor(source[, returnNullOnMissing])](#sourcemapconsumerprototypesourcecontentforsource-returnnullonmissing) - [SourceMapConsumer.prototype.eachMapping(callback, context, order)](#sourcemapconsumerprototypeeachmappingcallback-context-order) - [SourceMapGenerator](#sourcemapgenerator) - [new SourceMapGenerator([startOfSourceMap])](#new-sourcemapgeneratorstartofsourcemap) - [SourceMapGenerator.fromSourceMap(sourceMapConsumer)](#sourcemapgeneratorfromsourcemapsourcemapconsumer) - [SourceMapGenerator.prototype.addMapping(mapping)](#sourcemapgeneratorprototypeaddmappingmapping) - [SourceMapGenerator.prototype.setSourceContent(sourceFile, sourceContent)](#sourcemapgeneratorprototypesetsourcecontentsourcefile-sourcecontent) - [SourceMapGenerator.prototype.applySourceMap(sourceMapConsumer[, sourceFile[, sourceMapPath]])](#sourcemapgeneratorprototypeapplysourcemapsourcemapconsumer-sourcefile-sourcemappath) - [SourceMapGenerator.prototype.toString()](#sourcemapgeneratorprototypetostring) - [SourceNode](#sourcenode) - [new SourceNode([line, column, source[, chunk[, name]]])](#new-sourcenodeline-column-source-chunk-name) - [SourceNode.fromStringWithSourceMap(code, sourceMapConsumer[, relativePath])](#sourcenodefromstringwithsourcemapcode-sourcemapconsumer-relativepath) - [SourceNode.prototype.add(chunk)](#sourcenodeprototypeaddchunk) - [SourceNode.prototype.prepend(chunk)](#sourcenodeprototypeprependchunk) - [SourceNode.prototype.setSourceContent(sourceFile, sourceContent)](#sourcenodeprototypesetsourcecontentsourcefile-sourcecontent) - [SourceNode.prototype.walk(fn)](#sourcenodeprototypewalkfn) - [SourceNode.prototype.walkSourceContents(fn)](#sourcenodeprototypewalksourcecontentsfn) - [SourceNode.prototype.join(sep)](#sourcenodeprototypejoinsep) - [SourceNode.prototype.replaceRight(pattern, replacement)](#sourcenodeprototypereplacerightpattern-replacement) - 
[SourceNode.prototype.toString()](#sourcenodeprototypetostring) - [SourceNode.prototype.toStringWithSourceMap([startOfSourceMap])](#sourcenodeprototypetostringwithsourcemapstartofsourcemap) <!-- END doctoc generated TOC please keep comment here to allow auto update --> ## Examples ### Consuming a source map ```js var rawSourceMap = { version: 3, file: 'min.js', names: ['bar', 'baz', 'n'], sources: ['one.js', 'two.js'], sourceRoot: 'http://example.com/www/js/', mappings: 'CAAC,IAAI,IAAM,SAAUA,GAClB,OAAOC,IAAID;CCDb,IAAI,IAAM,SAAUE,GAClB,OAAOA' }; var smc = new SourceMapConsumer(rawSourceMap); console.log(smc.sources); // [ 'http://example.com/www/js/one.js', // 'http://example.com/www/js/two.js' ] console.log(smc.originalPositionFor({ line: 2, column: 28 })); // { source: 'http://example.com/www/js/two.js', // line: 2, // column: 10, // name: 'n' } console.log(smc.generatedPositionFor({ source: 'http://example.com/www/js/two.js', line: 2, column: 10 })); // { line: 2, column: 28 } smc.eachMapping(function (m) { // ... }); ``` ### Generating a source map In depth guide: [**Compiling to JavaScript, and Debugging with Source Maps**](https://hacks.mozilla.org/2013/05/compiling-to-javascript-and-debugging-with-source-maps/) #### With SourceNode (high level API) ```js function compile(ast) { switch (ast.type) { case 'BinaryExpression': return new SourceNode( ast.location.line, ast.location.column, ast.location.source, [compile(ast.left), " + ", compile(ast.right)] ); case 'Literal': return new SourceNode( ast.location.line, ast.location.column, ast.location.source, String(ast.value) ); // ... default: throw new Error("Bad AST"); } } var ast = parse("40 + 2", "add.js"); console.log(compile(ast).toStringWithSourceMap({ file: 'add.js' })); // { code: '40 + 2', // map: [object SourceMapGenerator] } ``` #### With SourceMapGenerator (low level API) ```js var map = new SourceMapGenerator({ file: "source-mapped.js" }); map.addMapping({ generated: { line: 10, column: 35 }, source: "foo.js", original: { line: 33, column: 2 }, name: "christopher" }); console.log(map.toString()); // '{"version":3,"file":"source-mapped.js","sources":["foo.js"],"names":["christopher"],"mappings":";;;;;;;;;mCAgCEA"}' ``` ## API Get a reference to the module: ```js // Node.js var sourceMap = require('source-map'); // Browser builds var sourceMap = window.sourceMap; // Inside Firefox const sourceMap = require("devtools/toolkit/sourcemap/source-map.js"); ``` ### SourceMapConsumer A SourceMapConsumer instance represents a parsed source map which we can query for information about the original file positions by giving it a file position in the generated source. #### new SourceMapConsumer(rawSourceMap) The only parameter is the raw source map (either as a string which can be `JSON.parse`'d, or an object). According to the spec, source maps have the following attributes: * `version`: Which version of the source map spec this map is following. * `sources`: An array of URLs to the original source files. * `names`: An array of identifiers which can be referenced by individual mappings. * `sourceRoot`: Optional. The URL root from which all sources are relative. * `sourcesContent`: Optional. An array of contents of the original source files. * `mappings`: A string of base64 VLQs which contain the actual mappings. * `file`: Optional. The generated filename this source map is associated with. 
```js var consumer = new sourceMap.SourceMapConsumer(rawSourceMapJsonData); ``` #### SourceMapConsumer.prototype.computeColumnSpans() Compute the last column for each generated mapping. The last column is inclusive. ```js // Before: consumer.allGeneratedPositionsFor({ line: 2, source: "foo.coffee" }) // [ { line: 2, // column: 1 }, // { line: 2, // column: 10 }, // { line: 2, // column: 20 } ] consumer.computeColumnSpans(); // After: consumer.allGeneratedPositionsFor({ line: 2, source: "foo.coffee" }) // [ { line: 2, // column: 1, // lastColumn: 9 }, // { line: 2, // column: 10, // lastColumn: 19 }, // { line: 2, // column: 20, // lastColumn: Infinity } ] ``` #### SourceMapConsumer.prototype.originalPositionFor(generatedPosition) Returns the original source, line, and column information for the generated source's line and column positions provided. The only argument is an object with the following properties: * `line`: The line number in the generated source. Line numbers in this library are 1-based (note that the underlying source map specification uses 0-based line numbers -- this library handles the translation). * `column`: The column number in the generated source. Column numbers in this library are 0-based. * `bias`: Either `SourceMapConsumer.GREATEST_LOWER_BOUND` or `SourceMapConsumer.LEAST_UPPER_BOUND`. Specifies whether to return the closest element that is smaller than or greater than the one we are searching for, respectively, if the exact element cannot be found. Defaults to `SourceMapConsumer.GREATEST_LOWER_BOUND`. and an object is returned with the following properties: * `source`: The original source file, or null if this information is not available. * `line`: The line number in the original source, or null if this information is not available. The line number is 1-based. * `column`: The column number in the original source, or null if this information is not available. The column number is 0-based. * `name`: The original identifier, or null if this information is not available. ```js consumer.originalPositionFor({ line: 2, column: 10 }) // { source: 'foo.coffee', // line: 2, // column: 2, // name: null } consumer.originalPositionFor({ line: 99999999999999999, column: 999999999999999 }) // { source: null, // line: null, // column: null, // name: null } ``` #### SourceMapConsumer.prototype.generatedPositionFor(originalPosition) Returns the generated line and column information for the original source, line, and column positions provided. The only argument is an object with the following properties: * `source`: The filename of the original source. * `line`: The line number in the original source. The line number is 1-based. * `column`: The column number in the original source. The column number is 0-based. and an object is returned with the following properties: * `line`: The line number in the generated source, or null. The line number is 1-based. * `column`: The column number in the generated source, or null. The column number is 0-based. ```js consumer.generatedPositionFor({ source: "example.js", line: 2, column: 10 }) // { line: 1, // column: 56 } ``` #### SourceMapConsumer.prototype.allGeneratedPositionsFor(originalPosition) Returns all generated line and column information for the original source, line, and column provided. If no column is provided, returns all mappings corresponding to a either the line we are searching for or the next closest line that has any mappings. 
Otherwise, returns all mappings corresponding to the given line and either the column we are searching for or the next closest column that has any offsets.

The only argument is an object with the following properties:

* `source`: The filename of the original source.
* `line`: The line number in the original source. The line number is 1-based.
* `column`: Optional. The column number in the original source. The column number is 0-based.

and an array of objects is returned, each with the following properties:

* `line`: The line number in the generated source, or null. The line number is 1-based.
* `column`: The column number in the generated source, or null. The column number is 0-based.

```js
consumer.allGeneratedPositionsFor({ line: 2, source: "foo.coffee" })
// [ { line: 2,
//     column: 1 },
//   { line: 2,
//     column: 10 },
//   { line: 2,
//     column: 20 } ]
```

#### SourceMapConsumer.prototype.hasContentsOfAllSources()

Return true if we have the embedded source content for every source listed in the source map, false otherwise.

In other words, if this method returns `true`, then `consumer.sourceContentFor(s)` will succeed for every source `s` in `consumer.sources`.

```js
// ...
if (consumer.hasContentsOfAllSources()) {
  consumerReadyCallback(consumer);
} else {
  fetchSources(consumer, consumerReadyCallback);
}
// ...
```

#### SourceMapConsumer.prototype.sourceContentFor(source[, returnNullOnMissing])

Returns the original source content for the source provided. The only argument is the URL of the original source file.

If the source content for the given source is not found, then an error is thrown. Optionally, pass `true` as the second param to have `null` returned instead.

```js
consumer.sources
// [ "my-cool-lib.clj" ]

consumer.sourceContentFor("my-cool-lib.clj")
// "..."

consumer.sourceContentFor("this is not in the source map");
// Error: "this is not in the source map" is not in the source map

consumer.sourceContentFor("this is not in the source map", true);
// null
```

#### SourceMapConsumer.prototype.eachMapping(callback, context, order)

Iterate over each mapping between an original source/line/column and a generated line/column in this source map.

* `callback`: The function that is called with each mapping. Mappings have the form `{ source, generatedLine, generatedColumn, originalLine, originalColumn, name }`
* `context`: Optional. If specified, this object will be the value of `this` every time that `callback` is called.
* `order`: Either `SourceMapConsumer.GENERATED_ORDER` or `SourceMapConsumer.ORIGINAL_ORDER`. Specifies whether you want to iterate over the mappings sorted by the generated file's line/column order or the original's source/line/column order, respectively. Defaults to `SourceMapConsumer.GENERATED_ORDER`.

```js
consumer.eachMapping(function (m) { console.log(m); })
// ...
// { source: 'illmatic.js',
//   generatedLine: 1,
//   generatedColumn: 0,
//   originalLine: 1,
//   originalColumn: 0,
//   name: null }
// { source: 'illmatic.js',
//   generatedLine: 2,
//   generatedColumn: 0,
//   originalLine: 2,
//   originalColumn: 0,
//   name: null }
// ...
```

### SourceMapGenerator

An instance of the SourceMapGenerator represents a source map which is being built incrementally.

#### new SourceMapGenerator([startOfSourceMap])

You may pass an object with the following properties:

* `file`: The filename of the generated source that this source map is associated with.
* `sourceRoot`: A root for all relative URLs in this source map.
* `skipValidation`: Optional.
When `true`, disables validation of mappings as they are added. This can improve performance but should be used with discretion, as a last resort. Even then, one should avoid using this flag when running tests, if possible. ```js var generator = new sourceMap.SourceMapGenerator({ file: "my-generated-javascript-file.js", sourceRoot: "http://example.com/app/js/" }); ``` #### SourceMapGenerator.fromSourceMap(sourceMapConsumer) Creates a new `SourceMapGenerator` from an existing `SourceMapConsumer` instance. * `sourceMapConsumer` The SourceMap. ```js var generator = sourceMap.SourceMapGenerator.fromSourceMap(consumer); ``` #### SourceMapGenerator.prototype.addMapping(mapping) Add a single mapping from original source line and column to the generated source's line and column for this source map being created. The mapping object should have the following properties: * `generated`: An object with the generated line and column positions. * `original`: An object with the original line and column positions. * `source`: The original source file (relative to the sourceRoot). * `name`: An optional original token name for this mapping. ```js generator.addMapping({ source: "module-one.scm", original: { line: 128, column: 0 }, generated: { line: 3, column: 456 } }) ``` #### SourceMapGenerator.prototype.setSourceContent(sourceFile, sourceContent) Set the source content for an original source file. * `sourceFile` the URL of the original source file. * `sourceContent` the content of the source file. ```js generator.setSourceContent("module-one.scm", fs.readFileSync("path/to/module-one.scm")) ``` #### SourceMapGenerator.prototype.applySourceMap(sourceMapConsumer[, sourceFile[, sourceMapPath]]) Applies a SourceMap for a source file to the SourceMap. Each mapping to the supplied source file is rewritten using the supplied SourceMap. Note: The resolution for the resulting mappings is the minimum of this map and the supplied map. * `sourceMapConsumer`: The SourceMap to be applied. * `sourceFile`: Optional. The filename of the source file. If omitted, sourceMapConsumer.file will be used, if it exists. Otherwise an error will be thrown. * `sourceMapPath`: Optional. The dirname of the path to the SourceMap to be applied. If relative, it is relative to the SourceMap. This parameter is needed when the two SourceMaps aren't in the same directory, and the SourceMap to be applied contains relative source paths. If so, those relative source paths need to be rewritten relative to the SourceMap. If omitted, it is assumed that both SourceMaps are in the same directory, thus not needing any rewriting. (Supplying `'.'` has the same effect.) #### SourceMapGenerator.prototype.toString() Renders the source map being generated to a string. ```js generator.toString() // '{"version":3,"sources":["module-one.scm"],"names":[],"mappings":"...snip...","file":"my-generated-javascript-file.js","sourceRoot":"http://example.com/app/js/"}' ``` ### SourceNode SourceNodes provide a way to abstract over interpolating and/or concatenating snippets of generated JavaScript source code, while maintaining the line and column information associated between those snippets and the original source code. This is useful as the final intermediate representation a compiler might use before outputting the generated JS and source map. #### new SourceNode([line, column, source[, chunk[, name]]]) * `line`: The original line number associated with this source node, or null if it isn't associated with an original line. The line number is 1-based. 
* `column`: The original column number associated with this source node, or null if it isn't associated with an original column. The column number is 0-based. * `source`: The original source's filename; null if no filename is provided. * `chunk`: Optional. Is immediately passed to `SourceNode.prototype.add`, see below. * `name`: Optional. The original identifier. ```js var node = new SourceNode(1, 2, "a.cpp", [ new SourceNode(3, 4, "b.cpp", "extern int status;\n"), new SourceNode(5, 6, "c.cpp", "std::string* make_string(size_t n);\n"), new SourceNode(7, 8, "d.cpp", "int main(int argc, char** argv) {}\n"), ]); ``` #### SourceNode.fromStringWithSourceMap(code, sourceMapConsumer[, relativePath]) Creates a SourceNode from generated code and a SourceMapConsumer. * `code`: The generated code * `sourceMapConsumer` The SourceMap for the generated code * `relativePath` The optional path that relative sources in `sourceMapConsumer` should be relative to. ```js var consumer = new SourceMapConsumer(fs.readFileSync("path/to/my-file.js.map", "utf8")); var node = SourceNode.fromStringWithSourceMap(fs.readFileSync("path/to/my-file.js"), consumer); ``` #### SourceNode.prototype.add(chunk) Add a chunk of generated JS to this source node. * `chunk`: A string snippet of generated JS code, another instance of `SourceNode`, or an array where each member is one of those things. ```js node.add(" + "); node.add(otherNode); node.add([leftHandOperandNode, " + ", rightHandOperandNode]); ``` #### SourceNode.prototype.prepend(chunk) Prepend a chunk of generated JS to this source node. * `chunk`: A string snippet of generated JS code, another instance of `SourceNode`, or an array where each member is one of those things. ```js node.prepend("/** Build Id: f783haef86324gf **/\n\n"); ``` #### SourceNode.prototype.setSourceContent(sourceFile, sourceContent) Set the source content for a source file. This will be added to the `SourceMap` in the `sourcesContent` field. * `sourceFile`: The filename of the source file * `sourceContent`: The content of the source file ```js node.setSourceContent("module-one.scm", fs.readFileSync("path/to/module-one.scm")) ``` #### SourceNode.prototype.walk(fn) Walk over the tree of JS snippets in this node and its children. The walking function is called once for each snippet of JS and is passed that snippet and the its original associated source's line/column location. * `fn`: The traversal function. ```js var node = new SourceNode(1, 2, "a.js", [ new SourceNode(3, 4, "b.js", "uno"), "dos", [ "tres", new SourceNode(5, 6, "c.js", "quatro") ] ]); node.walk(function (code, loc) { console.log("WALK:", code, loc); }) // WALK: uno { source: 'b.js', line: 3, column: 4, name: null } // WALK: dos { source: 'a.js', line: 1, column: 2, name: null } // WALK: tres { source: 'a.js', line: 1, column: 2, name: null } // WALK: quatro { source: 'c.js', line: 5, column: 6, name: null } ``` #### SourceNode.prototype.walkSourceContents(fn) Walk over the tree of SourceNodes. The walking function is called for each source file content and is passed the filename and source content. * `fn`: The traversal function. 
```js var a = new SourceNode(1, 2, "a.js", "generated from a"); a.setSourceContent("a.js", "original a"); var b = new SourceNode(1, 2, "b.js", "generated from b"); b.setSourceContent("b.js", "original b"); var c = new SourceNode(1, 2, "c.js", "generated from c"); c.setSourceContent("c.js", "original c"); var node = new SourceNode(null, null, null, [a, b, c]); node.walkSourceContents(function (source, contents) { console.log("WALK:", source, ":", contents); }) // WALK: a.js : original a // WALK: b.js : original b // WALK: c.js : original c ``` #### SourceNode.prototype.join(sep) Like `Array.prototype.join` except for SourceNodes. Inserts the separator between each of this source node's children. * `sep`: The separator. ```js var lhs = new SourceNode(1, 2, "a.rs", "my_copy"); var operand = new SourceNode(3, 4, "a.rs", "="); var rhs = new SourceNode(5, 6, "a.rs", "orig.clone()"); var node = new SourceNode(null, null, null, [ lhs, operand, rhs ]); var joinedNode = node.join(" "); ``` #### SourceNode.prototype.replaceRight(pattern, replacement) Call `String.prototype.replace` on the very right-most source snippet. Useful for trimming white space from the end of a source node, etc. * `pattern`: The pattern to replace. * `replacement`: The thing to replace the pattern with. ```js // Trim trailing white space. node.replaceRight(/\s*$/, ""); ``` #### SourceNode.prototype.toString() Return the string representation of this source node. Walks over the tree and concatenates all the various snippets together to one string. ```js var node = new SourceNode(1, 2, "a.js", [ new SourceNode(3, 4, "b.js", "uno"), "dos", [ "tres", new SourceNode(5, 6, "c.js", "quatro") ] ]); node.toString() // 'unodostresquatro' ``` #### SourceNode.prototype.toStringWithSourceMap([startOfSourceMap]) Returns the string representation of this tree of source nodes, plus a SourceMapGenerator which contains all the mappings between the generated and original sources. The arguments are the same as those to `new SourceMapGenerator`. ```js var node = new SourceNode(1, 2, "a.js", [ new SourceNode(3, 4, "b.js", "uno"), "dos", [ "tres", new SourceNode(5, 6, "c.js", "quatro") ] ]); node.toStringWithSourceMap({ file: "my-output-file.js" }) // { code: 'unodostresquatro', // map: [object SourceMapGenerator] } ``` # read-cache [![Build Status](https://travis-ci.org/TrySound/read-cache.svg?branch=master)](https://travis-ci.org/TrySound/read-cache) Reads and caches the entire contents of a file until it is modified. ## Install ``` $ npm i read-cache ``` ## Usage ```js // foo.js var readCache = require('read-cache'); readCache('foo.js').then(function (contents) { console.log(contents); }); ``` ## API ### readCache(path[, encoding]) Returns a promise that resolves with the file's contents. ### readCache.sync(path[, encoding]) Returns the content of the file. ### readCache.get(path[, encoding]) Returns the content of cached file or null. ### readCache.clear() Clears the contents of the cache. 
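For completeness, here is a small sketch that exercises all four calls documented above; the file name and encoding are placeholders:

```js
var readCache = require('read-cache');

// First read hits the filesystem and caches the contents
// until the file is modified.
readCache('foo.js', 'utf8').then(function (contents) {
  console.log(contents);

  // Synchronous read of the same (now cached) file.
  var same = readCache.sync('foo.js', 'utf8');
  console.log(same === contents); // expected to be equal while unchanged

  // Peek at the cache without reading the file; null if not cached.
  var cached = readCache.get('foo.js', 'utf8');
  console.log(cached !== null);

  // Drop everything cached so far.
  readCache.clear();
});
```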
## License

MIT © [Bogdan Chadkin](mailto:[email protected])

# to-regex-range [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/to-regex-range.svg?style=flat)](https://www.npmjs.com/package/to-regex-range) [![NPM monthly downloads](https://img.shields.io/npm/dm/to-regex-range.svg?style=flat)](https://npmjs.org/package/to-regex-range) [![NPM total downloads](https://img.shields.io/npm/dt/to-regex-range.svg?style=flat)](https://npmjs.org/package/to-regex-range) [![Linux Build Status](https://img.shields.io/travis/micromatch/to-regex-range.svg?style=flat&label=Travis)](https://travis-ci.org/micromatch/to-regex-range)

> Pass two numbers, get a regex-compatible source string for matching ranges. Validated against more than 2.78 million test assertions.

Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support.

## Install

Install with [npm](https://www.npmjs.com/):

```sh
$ npm install --save to-regex-range
```

<details>
<summary><strong>What does this do?</strong></summary>

<br>

This library generates the `source` string to be passed to `new RegExp()` for matching a range of numbers.

**Example**

```js
const toRegexRange = require('to-regex-range');
const regex = new RegExp(toRegexRange('15', '95'));
```

A string is returned so that you can do whatever you need with it before passing it to `new RegExp()` (like adding `^` or `$` boundaries, defining flags, or combining it with another string).

<br>

</details>

<details>
<summary><strong>Why use this library?</strong></summary>

<br>

### Convenience

Creating regular expressions for matching numbers gets deceptively complicated pretty fast. For example, let's say you need a validation regex for matching part of a user-id, postal code, social security number, tax id, etc:

* regex for matching `1` => `/1/` (easy enough)
* regex for matching `1` through `5` => `/[1-5]/` (not bad...)
* regex for matching `1` or `5` => `/(1|5)/` (still easy...)
* regex for matching `1` through `50` => `/([1-9]|[1-4][0-9]|50)/` (uh-oh...)
* regex for matching `1` through `55` => `/([1-9]|[1-4][0-9]|5[0-5])/` (no prob, I can do this...)
* regex for matching `1` through `555` => `/([1-9]|[1-9][0-9]|[1-4][0-9]{2}|5[0-4][0-9]|55[0-5])/` (maybe not...)
* regex for matching `0001` through `5555` => `/(0{3}[1-9]|0{2}[1-9][0-9]|0[1-9][0-9]{2}|[1-4][0-9]{3}|5[0-4][0-9]{2}|55[0-4][0-9]|555[0-5])/` (okay, I get the point!)

The numbers are contrived, but they're also really basic. In the real world you might need to generate a regex on-the-fly for validation.

**Learn more**

If you're interested in learning more about [character classes](http://www.regular-expressions.info/charclass.html) and other regex features, I personally have always found [regular-expressions.info](http://www.regular-expressions.info/charclass.html) to be pretty useful.

### Heavily tested

As of April 07, 2019, this library runs [>1m test assertions](./test/test.js) against generated regex-ranges to provide brute-force verification that results are correct.

Tests run in ~280ms on my MacBook Pro, 2.5 GHz Intel Core i7.
### Optimized

Generated regular expressions are optimized:

* duplicate sequences and character classes are reduced using quantifiers
* smart enough to use `?` conditionals when number(s) or range(s) can be positive or negative
* uses fragment caching to avoid processing the same exact string more than once

<br>

</details>

## Usage

Add this library to your javascript application with the following line of code:

```js
const toRegexRange = require('to-regex-range');
```

The main export is a function that takes two integers: the `min` value and `max` value (formatted as strings or numbers).

```js
const source = toRegexRange('15', '95');
//=> 1[5-9]|[2-8][0-9]|9[0-5]

const regex = new RegExp(`^${source}$`);
console.log(regex.test('14')); //=> false
console.log(regex.test('50')); //=> true
console.log(regex.test('94')); //=> true
console.log(regex.test('96')); //=> false
```

## Options

### options.capture

**Type**: `boolean`

**Default**: `undefined`

Wrap the returned value in parentheses when there is more than one regex condition. Useful when you're dynamically generating ranges.

```js
console.log(toRegexRange('-10', '10'));
//=> -[1-9]|-?10|[0-9]

console.log(toRegexRange('-10', '10', { capture: true }));
//=> (-[1-9]|-?10|[0-9])
```

### options.shorthand

**Type**: `boolean`

**Default**: `undefined`

Use the regex shorthand for `[0-9]`:

```js
console.log(toRegexRange('0', '999999'));
//=> [0-9]|[1-9][0-9]{1,5}

console.log(toRegexRange('0', '999999', { shorthand: true }));
//=> \d|[1-9]\d{1,5}
```

### options.relaxZeros

**Type**: `boolean`

**Default**: `true`

This option relaxes matching for leading zeros when ranges are zero-padded.

```js
const source = toRegexRange('-0010', '0010');
const regex = new RegExp(`^${source}$`);
console.log(regex.test('-10')); //=> true
console.log(regex.test('-010')); //=> true
console.log(regex.test('-0010')); //=> true
console.log(regex.test('10')); //=> true
console.log(regex.test('010')); //=> true
console.log(regex.test('0010')); //=> true
```

When `relaxZeros` is false, matching is strict:

```js
const source = toRegexRange('-0010', '0010', { relaxZeros: false });
const regex = new RegExp(`^${source}$`);
console.log(regex.test('-10')); //=> false
console.log(regex.test('-010')); //=> false
console.log(regex.test('-0010')); //=> true
console.log(regex.test('10')); //=> false
console.log(regex.test('010')); //=> false
console.log(regex.test('0010')); //=> true
```

## Examples

| **Range** | **Result** | **Compile time** |
| --- | --- | --- |
| `toRegexRange(-10, 10)` | `-[1-9]\|-?10\|[0-9]` | _132μs_ |
| `toRegexRange(-100, -10)` | `-1[0-9]\|-[2-9][0-9]\|-100` | _50μs_ |
| `toRegexRange(-100, 100)` | `-[1-9]\|-?[1-9][0-9]\|-?100\|[0-9]` | _42μs_ |
| `toRegexRange(001, 100)` | `0{0,2}[1-9]\|0?[1-9][0-9]\|100` | _109μs_ |
| `toRegexRange(001, 555)` | `0{0,2}[1-9]\|0?[1-9][0-9]\|[1-4][0-9]{2}\|5[0-4][0-9]\|55[0-5]` | _51μs_ |
| `toRegexRange(0010, 1000)` | `0{0,2}1[0-9]\|0{0,2}[2-9][0-9]\|0?[1-9][0-9]{2}\|1000` | _31μs_ |
| `toRegexRange(1, 50)` | `[1-9]\|[1-4][0-9]\|50` | _24μs_ |
| `toRegexRange(1, 55)` | `[1-9]\|[1-4][0-9]\|5[0-5]` | _23μs_ |
| `toRegexRange(1, 555)` | `[1-9]\|[1-9][0-9]\|[1-4][0-9]{2}\|5[0-4][0-9]\|55[0-5]` | _30μs_ |
| `toRegexRange(1, 5555)` | `[1-9]\|[1-9][0-9]{1,2}\|[1-4][0-9]{3}\|5[0-4][0-9]{2}\|55[0-4][0-9]\|555[0-5]` | _43μs_ |
| `toRegexRange(111, 555)` | `11[1-9]\|1[2-9][0-9]\|[2-4][0-9]{2}\|5[0-4][0-9]\|55[0-5]` | _38μs_ |
| `toRegexRange(29, 51)` | `29\|[34][0-9]\|5[01]` | _24μs_ |
| `toRegexRange(31, 877)` | 
`3[1-9]\|[4-9][0-9]\|[1-7][0-9]{2}\|8[0-6][0-9]\|87[0-7]` | _32μs_ | | `toRegexRange(5, 5)` | `5` | _8μs_ | | `toRegexRange(5, 6)` | `5\|6` | _11μs_ | | `toRegexRange(1, 2)` | `1\|2` | _6μs_ | | `toRegexRange(1, 5)` | `[1-5]` | _15μs_ | | `toRegexRange(1, 10)` | `[1-9]\|10` | _22μs_ | | `toRegexRange(1, 100)` | `[1-9]\|[1-9][0-9]\|100` | _25μs_ | | `toRegexRange(1, 1000)` | `[1-9]\|[1-9][0-9]{1,2}\|1000` | _31μs_ | | `toRegexRange(1, 10000)` | `[1-9]\|[1-9][0-9]{1,3}\|10000` | _34μs_ | | `toRegexRange(1, 100000)` | `[1-9]\|[1-9][0-9]{1,4}\|100000` | _36μs_ | | `toRegexRange(1, 1000000)` | `[1-9]\|[1-9][0-9]{1,5}\|1000000` | _42μs_ | | `toRegexRange(1, 10000000)` | `[1-9]\|[1-9][0-9]{1,6}\|10000000` | _42μs_ | ## Heads up! **Order of arguments** When the `min` is larger than the `max`, values will be flipped to create a valid range: ```js toRegexRange('51', '29'); ``` Is effectively flipped to: ```js toRegexRange('29', '51'); //=> 29|[3-4][0-9]|5[0-1] ``` **Steps / increments** This library does not support steps (increments). A pr to add support would be welcome. ## History ### v2.0.0 - 2017-04-21 **New features** Adds support for zero-padding! ### v1.0.0 **Optimizations** Repeating ranges are now grouped using quantifiers. rocessing time is roughly the same, but the generated regex is much smaller, which should result in faster matching. ## Attribution Inspired by the python library [range-regex](https://github.com/dimka665/range-regex). ## About <details> <summary><strong>Contributing</strong></summary> Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). </details> <details> <summary><strong>Running Tests</strong></summary> Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: ```sh $ npm install && npm test ``` </details> <details> <summary><strong>Building docs</strong></summary> _(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ To generate the readme, run the following command: ```sh $ npm install -g verbose/verb#dev verb-generate-readme && verb ``` </details> ### Related projects You might also be interested in these projects: * [expand-range](https://www.npmjs.com/package/expand-range): Fast, bash-like range expansion. Expand a range of numbers or letters, uppercase or lowercase. Used… [more](https://github.com/jonschlinkert/expand-range) | [homepage](https://github.com/jonschlinkert/expand-range "Fast, bash-like range expansion. Expand a range of numbers or letters, uppercase or lowercase. Used by micromatch.") * [fill-range](https://www.npmjs.com/package/fill-range): Fill in a range of numbers or letters, optionally passing an increment or `step` to… [more](https://github.com/jonschlinkert/fill-range) | [homepage](https://github.com/jonschlinkert/fill-range "Fill in a range of numbers or letters, optionally passing an increment or `step` to use, or create a regex-compatible range with `options.toRegex`") * [micromatch](https://www.npmjs.com/package/micromatch): Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | [homepage](https://github.com/micromatch/micromatch "Glob matching for javascript/node.js. 
A drop-in replacement and faster alternative to minimatch and multimatch.")
* [repeat-element](https://www.npmjs.com/package/repeat-element): Create an array by repeating the given value n times. | [homepage](https://github.com/jonschlinkert/repeat-element "Create an array by repeating the given value n times.")
* [repeat-string](https://www.npmjs.com/package/repeat-string): Repeat the given string n times. Fastest implementation for repeating a string. | [homepage](https://github.com/jonschlinkert/repeat-string "Repeat the given string n times. Fastest implementation for repeating a string.")

### Contributors

| **Commits** | **Contributor** |
| --- | --- |
| 63 | [jonschlinkert](https://github.com/jonschlinkert) |
| 3 | [doowb](https://github.com/doowb) |
| 2 | [realityking](https://github.com/realityking) |

### Author

**Jon Schlinkert**

* [GitHub Profile](https://github.com/jonschlinkert)
* [Twitter Profile](https://twitter.com/jonschlinkert)
* [LinkedIn Profile](https://linkedin.com/in/jonschlinkert)

Please consider supporting me on Patreon, or [start your own Patreon page](https://patreon.com/invite/bxpbvm)!

<a href="https://www.patreon.com/jonschlinkert">
<img src="https://c5.patreon.com/external/logo/[email protected]" height="50">
</a>

### License

Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). Released under the [MIT License](LICENSE).

***

_This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on April 07, 2019._

Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor).

This package exports the standard `inherits` from the node.js `util` module in a node environment, but also provides an alternative, browser-friendly implementation through the [browser field](https://gist.github.com/shtylman/4339901). The alternative implementation is a literal copy of the standard one, kept in a standalone module to avoid requiring `util`. It also has a shim for old browsers with no `Object.create` support.

While guaranteeing that you use the standard `inherits` implementation in a node.js environment, this allows bundlers such as [browserify](https://github.com/substack/node-browserify) to avoid including the full `util` package in your client code when all you need is the `inherits` function. This is worthwhile because the browser shim for `util` is large, and `inherits` is often the only function you need from it.

It's recommended to use this package instead of `require('util').inherits` for any code that has a chance of being used not only in node.js but in a browser too.

## usage

```js
var inherits = require('inherits');
// then use exactly as the standard one
```

## note on version ~1.0

Version ~1.0 had a completely different motivation and is compatible neither with 2.0 nor with the standard node.js `inherits`.
If you are using version ~1.0 and planning to switch to ~2.0, be careful: * new version uses `super_` instead of `super` for referencing superclass * new version overwrites current prototype while old one preserves any existing fields on it # node-supports-preserve-symlinks-flag <sup>[![Version Badge][npm-version-svg]][package-url]</sup> [![github actions][actions-image]][actions-url] [![coverage][codecov-image]][codecov-url] [![dependency status][deps-svg]][deps-url] [![dev dependency status][dev-deps-svg]][dev-deps-url] [![License][license-image]][license-url] [![Downloads][downloads-image]][downloads-url] [![npm badge][npm-badge-png]][package-url] Determine if the current node version supports the `--preserve-symlinks` flag. ## Example ```js var supportsPreserveSymlinks = require('node-supports-preserve-symlinks-flag'); var assert = require('assert'); assert.equal(supportsPreserveSymlinks, null); // in a browser assert.equal(supportsPreserveSymlinks, false); // in node < v6.2 assert.equal(supportsPreserveSymlinks, true); // in node v6.2+ ``` ## Tests Simply clone the repo, `npm install`, and run `npm test` [package-url]: https://npmjs.org/package/node-supports-preserve-symlinks-flag [npm-version-svg]: https://versionbadg.es/inspect-js/node-supports-preserve-symlinks-flag.svg [deps-svg]: https://david-dm.org/inspect-js/node-supports-preserve-symlinks-flag.svg [deps-url]: https://david-dm.org/inspect-js/node-supports-preserve-symlinks-flag [dev-deps-svg]: https://david-dm.org/inspect-js/node-supports-preserve-symlinks-flag/dev-status.svg [dev-deps-url]: https://david-dm.org/inspect-js/node-supports-preserve-symlinks-flag#info=devDependencies [npm-badge-png]: https://nodei.co/npm/node-supports-preserve-symlinks-flag.png?downloads=true&stars=true [license-image]: https://img.shields.io/npm/l/node-supports-preserve-symlinks-flag.svg [license-url]: LICENSE [downloads-image]: https://img.shields.io/npm/dm/node-supports-preserve-symlinks-flag.svg [downloads-url]: https://npm-stat.com/charts.html?package=node-supports-preserve-symlinks-flag [codecov-image]: https://codecov.io/gh/inspect-js/node-supports-preserve-symlinks-flag/branch/main/graphs/badge.svg [codecov-url]: https://app.codecov.io/gh/inspect-js/node-supports-preserve-symlinks-flag/ [actions-image]: https://img.shields.io/endpoint?url=https://github-actions-badge-u3jn4tfpocch.runkit.sh/inspect-js/node-supports-preserve-symlinks-flag [actions-url]: https://github.com/inspect-js/node-supports-preserve-symlinks-flag/actions # tailwindcss/nesting This is a PostCSS plugin that wraps [postcss-nested](https://github.com/postcss/postcss-nested) or [postcss-nesting](https://github.com/csstools/postcss-plugins/tree/main/plugins/postcss-nesting) and acts as a compatibility layer to make sure your nesting plugin of choice properly understands Tailwind's custom syntax like `@apply` and `@screen`. Add it to your PostCSS configuration, somewhere before Tailwind itself: ```js // postcss.config.js module.exports = { plugins: [ require('postcss-import'), require('tailwindcss/nesting'), require('tailwindcss'), require('autoprefixer'), ] } ``` By default, it uses the [postcss-nested](https://github.com/postcss/postcss-nested) plugin under the hood, which uses a Sass-like syntax and is the plugin that powers nesting support in the [Tailwind CSS plugin API](https://tailwindcss.com/docs/plugins#css-in-js-syntax). 
If you'd rather use [postcss-nesting](https://github.com/csstools/postcss-plugins/tree/main/plugins/postcss-nesting) (which is based on the work-in-progress [CSS Nesting](https://drafts.csswg.org/css-nesting-1/) specification), first install the plugin alongside: ```shell npm install postcss-nesting ``` Then pass the plugin itself as an argument to `tailwindcss/nesting` in your PostCSS configuration: ```js // postcss.config.js module.exports = { plugins: [ require('postcss-import'), require('tailwindcss/nesting')(require('postcss-nesting')), require('tailwindcss'), require('autoprefixer'), ] } ``` This can also be helpful if for whatever reason you need to use a very specific version of `postcss-nested` and want to override the version we bundle with `tailwindcss/nesting` itself. # xtend [![browser support][3]][4] [![locked](http://badges.github.io/stability-badges/dist/locked.svg)](http://github.com/badges/stability-badges) Extend like a boss xtend is a basic utility library which allows you to extend an object by appending all of the properties from each object in a list. When there are identical properties, the right-most property takes precedence. ## Examples ```js var extend = require("xtend") // extend returns a new object. Does not mutate arguments var combination = extend({ a: "a", b: "c" }, { b: "b" }) // { a: "a", b: "b" } ``` ## Stability status: Locked ## MIT Licensed [3]: http://ci.testling.com/Raynos/xtend.png [4]: http://ci.testling.com/Raynos/xtend
OlexandrSai_NEAR--promises
README.md babel.config.js package-lock.json package.json postcss.config.js public index.html src composables near.js index.css main.js router index.js services near.js store store.js tailwind.config.js
# near--promises ## Project setup ``` npm install ``` ### Compiles and hot-reloads for development ``` npm run serve ``` ### Compiles and minifies for production ``` npm run build ``` ### Lints and fixes files ``` npm run lint ``` ### Customize configuration See [Configuration Reference](https://cli.vuejs.org/config/).
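For example, Vue CLI picks up an optional `vue.config.js` at the project root. The sketch below is hypothetical; the sub-path and port are arbitrary placeholders:

```js
// vue.config.js (optional; read automatically by Vue CLI)
module.exports = {
  // Serve the built app from a sub-path in production, e.g. for static hosting.
  publicPath: process.env.NODE_ENV === 'production' ? '/near--promises/' : '/',
  devServer: {
    port: 8080, // port used by `npm run serve`
  },
};
```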
ptzagk_near__guest-book
.eslintrc.yml .github dependabot.yml workflows deploy.yml tests.yml .gitpod.yml .travis.yml README-Gitpod.md README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts guestbook.spec.ts as_types.d.ts main.ts model.ts tsconfig.json babel.config.js package.json src App.js config.js index.html index.js tests frontend App-ui.test.js integration-tests rs Cargo.toml src tests.rs ts main.ava.ts
Guest Book ========== [![Build Status](https://travis-ci.com/near-examples/guest-book.svg?branch=master)](https://travis-ci.com/near-examples/guest-book) [![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/near-examples/guest-book) <!-- MAGIC COMMENT: DO NOT DELETE! Everything above this line is hidden on NEAR Examples page --> Sign in with [NEAR] and add a message to the guest book! A starter app built with an [AssemblyScript] backend and a [React] frontend. Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you have Node.js ≥ 12 installed (https://nodejs.org), then use it to install [yarn]: `npm install --global yarn` (or just `npm i -g yarn`) 2. Run the local development server: `yarn && yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Running `yarn dev` will tell you the URL you can visit in your browser to see the app. Exploring The Code ================== 1. The backend code lives in the `/assembly` folder. This code gets deployed to the NEAR blockchain when you run `yarn deploy:contract`. This sort of code-that-runs-on-a-blockchain is called a "smart contract" – [learn more about NEAR smart contracts][smart contract docs]. 2. The frontend code lives in the `/src` folder. [/src/index.html](/src/index.html) is a great place to start exploring. Note that it loads in `/src/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and backend. The backend code gets tested with the [asp] command for running the backend AssemblyScript tests, and [jest] for running frontend tests. You can run both of these at once with `yarn test`. Both contract and client-side code will auto-reload as you change source files. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contracts get deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli -------------------------- You need near-cli installed globally. Here's how: npm install --global near-cli This will give you the `near` [CLI] tool. Ensure that it's installed with: near --version Step 1: Create an account for the contract ------------------------------------------ Visit [NEAR Wallet] and make a new account. You'll be deploying these smart contracts to this new account. Now authorize NEAR CLI for this new account, and follow the instructions it gives you: near login Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'your-account-here!' Step 3: change remote URL if you cloned this repo ------------------------- Unless you forked this repository you will need to change the remote URL to a repo that you have commit access to. This will allow auto deployment to GitHub Pages from the command line. 1) go to GitHub and create a new repository for this project 2) open your terminal and in the root of this project enter the following: $ `git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY.git` Step 4: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. 
builds & deploys smart contracts to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. [NEAR]: https://near.org/ [yarn]: https://yarnpkg.com/ [AssemblyScript]: https://www.assemblyscript.org/introduction.html [React]: https://reactjs.org [smart contract docs]: https://docs.near.org/docs/develop/contracts/overview [asp]: https://www.npmjs.com/package/@as-pect/cli [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/docs/concepts/account [NEAR Wallet]: https://wallet.near.org [near-cli]: https://github.com/near/near-cli [CLI]: https://www.w3schools.com/whatis/whatis_cli.asp [create-near-app]: https://github.com/near/create-near-app [gh-pages]: https://github.com/tschaub/gh-pages
imatomster_campaignlayer-nep245
Cargo.toml README.md build.sh mt Cargo.toml src lib.rs test-approval-receiver Cargo.toml src lib.rs test-contract-defi Cargo.toml src lib.rs tests workspaces main.rs test_approval.rs test_core.rs test_enumeration.rs utils.rs
# CampaignLayerNEP245 Smart Contract The CampaignLayerNEP245 smart contract is designed to provide multi-token management functionality on the NEAR Protocol. This should mimic the existing ERC1155 we have been using on the Ethereum blockchain. This includes features for token minting, transferring, approvals, batch transfers, and now token burning! ## Functions ### `new_default_meta(owner_id: AccountId)` Initializes the contract with default metadata. ### `new(owner_id: AccountId, metadata: MtContractMetadata)` Initializes the contract with custom metadata. ### `mt_mint(token_owner_id: AccountId, token_metadata: TokenMetadata, supply: Balance) -> Token` Mints new tokens and assigns them to a specified account. ### `register(token_id: TokenId, account_id: AccountId)` Registers an account as the owner of a specific token. ### `mt_transfer(receiver_id: AccountId, token_id: TokenId, amount: U128, memo: Option<String>, msg: Option<String>)` Transfers tokens from the caller's account to another account. ### `mt_batch_transfer(receiver_id: AccountId, token_ids: Vec<TokenId>, amounts: Vec<U128>, memos: Option<Vec<String>>, msgs: Option<Vec<String>>)` Transfers batches of tokens from the caller's account to another account. ### `mt_approve(token_ids: Vec<TokenId>, amounts: Vec<U128>, account_id: AccountId, msg: Option<String>)` Approves an account to spend a specific amount of tokens on behalf of the caller. ### `mt_revoke(token_ids: Vec<TokenId>, account_id: AccountId)` Revokes approval for an account to spend tokens. ### `mt_revoke_all(token_ids: Vec<TokenId>)` Revokes all approvals for a set of token IDs. ### `mt_balance_of(account_id: AccountId, token_id: TokenId) -> U128` Retrieves the balance of a specific token for a given account. ### `mt_is_approved(token_ids: Vec<TokenId>, account_id: AccountId, amounts: Vec<U128>, msg: Option<String>) -> bool` Checks if an account is approved to spend a specific amount of tokens. ### `mt_burn(token_id: TokenId, amount: U128)` Burns a specified amount of tokens owned by the caller, reducing the token supply. ## Deploying the Smart Contract 1. Clone this repository to your local machine. 2. Install the necessary tools: Rust, Rustup, and NEAR CLI. 3. Compile the smart contract: Run `cargo build --target wasm32-unknown-unknown --release` in the project root directory. 4. Deploy the smart contract: Use NEAR CLI to deploy the compiled Wasm file to the NEAR Protocol. Example deployment command: ```sh near deploy --wasmFile target/wasm32-unknown-unknown/release/campaignlayernep245.wasm --accountId YOUR_ACCOUNT_ID ```
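## Calling the Contract from JavaScript

Once deployed, the contract can also be exercised from a script or frontend. The snippet below is a minimal, hypothetical sketch using `near-api-js`; the account ids, token id, gas and deposit values are placeholders, and only the method names and arguments (`mt_transfer`, `mt_balance_of`) come from the interface above. Exact `near-api-js` signatures vary slightly between versions.

```js
const { connect, keyStores } = require('near-api-js');

async function main() {
  // Assumes you have run `near login`, so credentials exist locally.
  const near = await connect({
    networkId: 'testnet',
    nodeUrl: 'https://rpc.testnet.near.org',
    keyStore: new keyStores.UnencryptedFileSystemKeyStore(
      `${process.env.HOME}/.near-credentials`
    ),
  });

  const account = await near.account('alice.testnet'); // caller (placeholder)
  const contractId = 'YOUR_ACCOUNT_ID';                 // account the contract was deployed to

  // Transfer 5 units of token "1" to bob.testnet.
  // NEP-245 transfers conventionally require 1 yoctoNEAR attached.
  await account.functionCall({
    contractId,
    methodName: 'mt_transfer',
    args: { receiver_id: 'bob.testnet', token_id: '1', amount: '5', memo: null, msg: null },
    gas: '300000000000000',
    attachedDeposit: '1',
  });

  // Read back the receiver's balance with a view call.
  const balance = await account.viewFunction(contractId, 'mt_balance_of', {
    account_id: 'bob.testnet',
    token_id: '1',
  });
  console.log('mt_balance_of:', balance);
}

main().catch(console.error);
```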
near_borsh-go
.github ISSUE_TEMPLATE BOUNTY.yml workflows go.yml README.md | bench_test.go borsh.go borsh_test.go decoder.go encoder.go example_test.go extend_type.go fuzz_test.go
# borsh-go

[![Go Reference](https://pkg.go.dev/badge/github.com/near/borsh-go.svg)](https://pkg.go.dev/github.com/near/borsh-go)

**borsh-go** is an implementation of the [Borsh] binary serialization format for Go projects. Borsh stands for _Binary Object Representation Serializer for Hashing_. It is meant to be used in security-critical projects as it prioritizes consistency, safety, speed, and comes with a strict specification.

## Features

- Based on Go reflection. Avoids the need to create a protocol file or run code generation: simply define a `struct` and go.

## Usage

### Example

```go
package demo

import (
	"log"
	"reflect"
	"testing"

	"github.com/near/borsh-go"
)

type A struct {
	X uint64
	Y string
	Z string `borsh_skip:"true"` // will skip this field when serializing/deserializing
}

func TestSimple(t *testing.T) {
	x := A{
		X: 3301,
		Y: "liber primus",
	}
	data, err := borsh.Serialize(x)
	log.Print(data)
	if err != nil {
		t.Error(err)
	}
	y := new(A)
	err = borsh.Deserialize(y, data)
	if err != nil {
		t.Error(err)
	}
	if !reflect.DeepEqual(x, *y) {
		t.Error(x, y)
	}
}
```

For more examples of usage, refer to `borsh_test.go`.

## Type Mappings

Borsh                 | Go             | Description
--------------------- | -------------- | --------
`bool`                | `bool`         |
`u8` integer          | `uint8`        |
`u16` integer         | `uint16`       |
`u32` integer         | `uint32`       |
`u64` integer         | `uint64`       |
`u128` integer        | `big.Int`      |
`i8` integer          | `int8`         |
`i16` integer         | `int16`        |
`i32` integer         | `int32`        |
`i64` integer         | `int64`        |
`i128` integer        |                | Not supported yet
`f32` float           | `float32`      |
`f64` float           | `float64`      |
fixed-size array      | `[size]type`   | go array
dynamic-size array    | `[]type`       | go slice
string                | `string`       |
option                | `*type`        | go pointer
map                   | `map`          |
set                   | `map[type]struct{}` | go map with value type set to `struct{}`
structs               | `struct`       |
enum                  | `borsh.Enum`   | use `type MyEnum borsh.Enum` to define enum type
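As the last row of the table notes, simple enums map to `borsh.Enum`, with the value on the wire being the variant index as a `u8`. The following is a minimal sketch of that unit-variant case; the type and variant names are illustrative only:

```go
package demo

import (
	"log"

	"github.com/near/borsh-go"
)

// MyEnum is a unit-variant enum; on the wire it is the variant index as a u8.
type MyEnum borsh.Enum

const (
	VariantA MyEnum = iota
	VariantB
)

type Wrapper struct {
	Kind MyEnum
	Note string
}

func EnumRoundTrip() {
	// round-trip the struct (including the enum field) through Borsh
	data, err := borsh.Serialize(Wrapper{Kind: VariantB, Note: "hello"})
	if err != nil {
		log.Fatal(err)
	}
	out := new(Wrapper)
	if err := borsh.Deserialize(out, data); err != nil {
		log.Fatal(err)
	}
	log.Print(out.Kind == VariantB) // true
}
```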
PrimeLabCore_dazn-lambda-powertools
.circleci config.yml .github ISSUE_TEMPLATE bug_report.md feature_request.md PULL_REQUEST_TEMPLATE.md CHANGELOG.md CODE_OF_CONDUCT.md CONTRIBUTING.md README.md | SUMMARY.md book.json commitlint.config.js example CHANGELOG.md README.md functions add.js api-a.js api-b.js cloudwatchevents.js double.js dynamodb.js eventbridge.js firehose.js kinesis.js sns.js stand-alone.js package-lock.json package.json serverless.yml jest.config.js layer build.sh nodejs CHANGELOG.md package-lock.json package.json package.json template.txt lerna.json package-lock.json package.json packages lambda-powertools-cloudwatchevents-client CHANGELOG.md README.md __tests__ index.js index.d.ts index.js package-lock.json package.json lambda-powertools-correlation-ids CHANGELOG.md README.md __tests__ index.js index.d.ts index.js package-lock.json package.json lambda-powertools-dynamodb-client CHANGELOG.md README.md __tests__ index.js index.d.ts index.js package-lock.json package.json lambda-powertools-eventbridge-client CHANGELOG.md README.md __tests__ index.js index.d.ts index.js package-lock.json package.json lambda-powertools-firehose-client CHANGELOG.md README.md __tests__ index.test.js index.d.ts index.js package-lock.json package.json lambda-powertools-http-client .vscode launch.json CHANGELOG.md README.md __tests__ correlation-ids.js metrics.js request.js timeout.js index.js package-lock.json package.json lambda-powertools-kinesis-client CHANGELOG.md README.md __tests__ index.js index.d.ts index.js package-lock.json package.json lambda-powertools-lambda-client CHANGELOG.md README.md __tests__ index.js index.d.ts index.js package-lock.json package.json lambda-powertools-logger CHANGELOG.md README.md __tests__ correlation-ids.js index.js index.d.ts index.js package-lock.json package.json lambda-powertools-middleware-correlation-ids CHANGELOG.md README.md __tests__ alb.js api-gateway.js dynamodb.js event-templates alb.json apig.json dynamo-new-old.json eventbridge.json firehose.json kinesis.json sfn.json sns.json sqs-wrapped-sns.json sqs.json eventbridge.js firehose.js index.js kinesis.js lib.js sns.js sqs.js step-functions.js consts.js event-sources alb.js api-gateway.js direct-invoke.js dynamodb.js eventbridge.js firehose.js generic.js kinesis.js sns.js sqs.js index.d.ts index.js package-lock.json package.json lambda-powertools-middleware-log-timeout CHANGELOG.md README.md __tests__ index.js index.js package-lock.json package.json lambda-powertools-middleware-obfuscater CHANGELOG.md README.md __tests__ fixture blacklist expected.json fixture.json whitelist expected.json fixture.json obfuscater.js index.js obfuscater.js package-lock.json package.json lambda-powertools-middleware-sample-logging CHANGELOG.md README.md __tests__ index.js index.js package-lock.json package.json lambda-powertools-middleware-stop-infinite-loop CHANGELOG.md README.md __tests__ index.js index.js package-lock.json package.json lambda-powertools-pattern-basic CHANGELOG.md README.md __tests__ index.js supplement-csv.js index.d.ts index.js package-lock.json package.json supplement-csv.js lambda-powertools-pattern-obfuscate CHANGELOG.md README.md __tests__ index.js supplement-csv.js index.d.ts index.js package-lock.json package.json supplement-csv.js lambda-powertools-sns-client CHANGELOG.md README.md __tests__ index.js index.d.ts index.js package-lock.json package.json lambda-powertools-sqs-client CHANGELOG.md README.md __tests__ index.js index.d.ts index.js package-lock.json package.json lambda-powertools-step-functions-client CHANGELOG.md 
README.md __tests__ index.js index.d.ts index.js package-lock.json package.json powertools-illustrated.svg
# lambda-powertools-middleware-stop-infinite-loop A [Middy](https://github.com/middyjs/middy) middleware that will stop an invocation if it's deemed to be part of an infinite loop. Main features: * errors if the `call-chain-length` reaches the configured threshold (defaults to `10`) ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-middleware-stop-infinite-loop` ## API The middleware accepts an optional constructor parameter `threshold`, which is the max length allowed for the entire call chain. This middleware is intended to be used alongside `@dazn/lambda-powertools-middleware-correlation-ids`, which is responsible for collecting correlation IDs and incrementing the `call-chain-length` (i.e. the number of function invocations that are chained together) at the start of an invocation. Because this middleware relies on `@dazn/lambda-powertools-middleware-correlation-ids`, it needs to be applied **AFTER** `@dazn/lambda-powertools-middleware-correlation-ids` (as seen below). ```js const middy = require('middy') const correlationIds = require('@dazn/lambda-powertools-middleware-correlation-ids') const stopInfiniteLoop = require('@dazn/lambda-powertools-middleware-stop-infinite-loop') const handler = async (event, context) => { return 42 } module.exports = middy(handler) .use(correlationIds()) .use(stopInfiniteLoop()) // defaults to 10 } ``` # lambda-powertools-pattern-basic A basic pattern that helps you follow our guidelines around logging and monitoring. Main features: * configures Datadog metrics namespace using the function name if one is not specified already * configures Datadog default tags with `awsRegion`, `functionName`, `functionVersion` and `environment` * applies the `@dazn/lambda-powertools-middleware-correlation-ids` middleware at a default 1% sample rate * applies the `@dazn/lambda-powertools-middleware-sample-logging` middleware at a default 1% sample rate * applies the `@dazn/lambda-powertools-middleware-log-timeout` middleware at default 10ms threshold (i.e. log an error message 10ms before an invocation actually times out) * allow override for the default 1% sample rate via a `SAMPLE_DEBUG_LOG_RATE` environment variable, to sample debug logs at 5% rate then set `SAMPLE_DEBUG_LOG_RATE` to `0.05` ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-pattern-basic` ## API ```js const wrap = require('@dazn/lambda-powertools-pattern-basic') module.exports.handler = wrap(async (event, context) => { return 42 }) ``` # lambda-powertools-middleware-sample-logging A [Middy](https://github.com/middyjs/middy) middleware that will enable debug logging for a configurable % of invocations. Defaults is 1%. Main features: * integrates with the `@dazn/lambda-powertools-logger` package to enable debug logging * integrates with the `@dazn/lambda-powertools-correlation-ids` package to allow sampling decision to flow through correlation IDs - i.e. enable debug logging at the edge, and the entire call chain will respect that decision * enables debug logging for some % (defaults to 1%) of invocations * records an error log message with the invocation event as attribute when an invocation errors ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-middleware-sample-logging` Alternatively, if you use the template `@dazn/lambda-powertools-pattern-basic` then this would be configured for you. 
## API Accepts a configuration object of the following shape: ```js { sampleRate: double [between 0 and 1] } ``` ```js const middy = require('middy') const sampleLogging = require('@dazn/lambda-powertools-middleware-sample-logging') const handler = async (event, context) => { return 42 } module.exports = middy(handler) .use(sampleLogging({ sampleRate: 0.01 })) } ``` This middleware is often used alongside the `@dazn/lambda-powertools-middleware-correlation-ids` middleware to implement sample logging. It's **recommended** that you use the `@dazn/lambda-powertools-pattern-basic` which configures both to enable debug logging at 1% of invocations. # lambda-powertools-sqs-client SQS client wrapper that knows how to forward correlation IDs (captured via `@dazn/lambda-powertools-correlation-ids`). Main features: * direct replacement for `AWS.SQS` client * auto-injects correlation IDs into SQS message when you call `sendMessage` or `sendMessageBatch` * allow correlation IDs to be overriden with `sendMessageWithCorrelationIds` and `sendMessageBatchWithCorrelationIds` (useful when processing batch-based event sources such as SQS and Kinesis, where every record has its own set of correlation IDs) ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-sqs-client` ## API It's exactly the same as the SQS client from the AWS SDK. ```js const SQS = require('@dazn/lambda-powertools-sqs-client') const sendMessage = async () => { const req = { MessageBody: JSON.stringify({ message: 'hello sqs' }), QueueUrl: 'my-sqs-queue' } await SQS.sendMessage(req).promise() } ``` # lambda-powertools-lambda-client Lambda client wrapper that knows how to forward correlation IDs (captured via `@dazn/lambda-powertools-correlation-ids`). Main features: * auto-injects correlation IDs into the invocation payload when you call `invoke` or `invokeAsync` * direct replacement for `AWS.Lambda` client ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-lambda-client` ## API It's exactly the same as the Lambda client from the AWS SDK. ```js const Lambda = require('@dazn/lambda-powertools-lambda-client') const invoke = async () => { const invokeReq = { FunctionName: 'my-function', InvocationType: 'Event', Payload: JSON.stringify({ message: 'hello lambda' }) } await Lambda.invoke(invokeReq).promise() const invokeAsyncReq = { FunctionName: 'my-function', InvokeArgs: JSON.stringify({ message: 'hello lambda' }) } await Lambda.invokeAsync(invokeAsyncReq).promise() } ``` # lambda-powertools-logger Logger that is tightly integrated with the rest of the `lambda-powertools`, and knows to automatically include any correlation IDs that have been captured with `@dazn/lambda-powertools-correlation-ids`. 
Main features: * structured logging with JSON * includes a number of common attributes: `awsRegion`, `functionName`, `functionVersion`, `functionMemorySize` and `environment` * supports sampling of debug logs with the `enableDebug` function (see below for more details) * allow log level to be changed live via the `LOG_LEVEL` environment variable (allowed values are `DEBUG`, `INFO`, `WARN` and `ERROR`) * for `WARN` and `ERROR` logs, include `errorName`, `errorMessage` and `stackTrace` ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-logger` ## API This illustrates the API for logging: ```js const Log = require('@dazn/lambda-powertools-logger') Log.debug('this is a debug message') Log.debug('this is a debug message with attributes', { userId: 'theburningmonk' }) Log.info('this is an info message') Log.info('this is an info message with attributes', { userId: 'theburningmonk' }) Log.warn('this is a warning message') Log.warn('this is a warning message with attributes', { userId: 'theburningmonk' }) Log.warn('this is a warning message', new Error('oops')) Log.warn('this is a warning message with attributes, and error details', { userId: 'theburningmonk' }, new Error('oops')) Log.error('this is an error message') Log.error('this is an error message with attributes', { userId: 'theburningmonk' }) Log.error('this is an error message', new Error('oops')) Log.error('this is an error message with attributes, and error details', { userId: 'theburningmonk' }, new Error('oops')) ``` We don't want to leave debug logging ON in production, as there are significant impact on: * CloudWatch Logs cost : CloudWatch Logs charges $0.50 per GB of data ingested * Logz.io cost : Logz.io also charges based on data ingested as well * Lambda cost : there are also Lambda invocation costs for shipping logs from CloudWatch Logs to Logz.io * Lambda concurrency : more things being logged = more Lambda invocations to ship them to Logz.io, which can potentially use up too much of our regional quota of concurrent Lambda executions (default limit is 1000, can be raised through support ticket) * too much noise in the logs, making it harder to find important information Instead, we should sample debug logs for, say, 1% of invocations. When used with other lambda-powertools, e.g. `@dazn/lambda-powertools-middleware-sample-logging`, debug logging can be enabled during an invocation using `enableDebug` function. The `@dazn/lambda-powertools-middleware-correlation-ids` middleware also supplements this behaviour by allowing you to propagate decisions to enable sample logging as a special correlation IDs. This allows an entire call chain (e.g. API Gateway -> Lambda -> Kinesis -> Lambda -> SNS -> Lambda -> HTTP -> API Gateway -> Lambda) to respect the sampling decisions. ```js const Log = require('@dazn/lambda-powertools-logger') // LOG_LEVEL is set to WARN via serverless.yml Log.debug('this is not logged') const undoDebugLog = Log.enableDebug() Log.debug('this is logged') undoDebugLog() Log.debug('this is not logged') ``` # lambda-powertools-kinesis-client Kinesis client wrapper that knows how to forward correlation IDs (captured via `@dazn/lambda-powertools-correlation-ids`). 
Main features: * auto-injects correlation IDs into Kinesis records when you call `putRecord` or `putRecords` (only JSON payloads are supported currently) * direct replacement for `AWS.Kinesis` client ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-kinesis-client` ## API It's exactly the same as the Kinesis client from the AWS SDK. ```js const Kinesis = require('@dazn/lambda-powertools-kinesis-client') const publishEvent = async () => { const putRecordReq = { StreamName: 'lambda-powertools-demo', PartitionKey: uuid(), Data: JSON.stringify({ message: 'hello kinesis' }) } await Kinesis.putRecord(putRecordReq).promise() } const publishEvents = async () => { const putRecordsReq = { StreamName: 'lambda-powertools-demo', Records: [ { PartitionKey: uuid(), Data: JSON.stringify({ message: 'hello kinesis' }) }, { PartitionKey: uuid(), Data: JSON.stringify({ message: 'hello lambda-powertools' }) } ] } await Kinesis.putRecords(putRecordsReq).promise() } ``` # lambda-powertools-correlation-ids A helper module for recording correlation IDs. Main features: * allows you to fetch, update, and delete correlation IDs * respects convention for correlation IDs - i.e. `x-correlation-` * Manually enable/disable debug logging (`debug-log-enabled`) to be picked up by other/downstream middleware * allows you to store more than one correlation IDs, which allows you to *correlate* logs on multiple dimensions (e.g. by `x-correlation-user-id`, or `x-correlation-order-id`, etc.) ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-correlation-ids` ## API ```js const CorrelationIds = require('@dazn/lambda-powertools-correlation-ids') // automatically inserts 'x-correlation-' prefix if not provided CorrelationIds.set('id', '12345678') // records id as x-correlation-id CorrelationIds.set('x-correlation-username', 'theburningmonk') // records as x-correlation-username // Manully enable debug logging (debug-log-enabled) CorrelationIds.debugLoggingEnabled = true const myCorrelationIds = CorrelationIds.get() // { // 'x-correlation-id': '12345678', // 'x-correlation-username': 'theburningmonk', // 'debug-log-enabled': 'true' // } CorrelationIds.clearAll() // removes all recorded correlation IDs CorrelationIds.replaceAllWith({ // bypasses the 'x-correlation-' convention 'debug-log-enabled': 'true', 'User-Agent': 'jest test' }) // Disable debug logging CorrelationIds.debugLoggingEnabled = false ``` In practice, you're likely to only need `set` when you want to record correlation IDs from your function. The middleware, `@dazn/lambda-powertools-middleware-correlation-ids`, would automatically capture the correlation IDs from the invocation event for supported event sources: * API Gateway (via HTTP headers) * Kinesis (via the JSON payload) * SNS (via message attributes) * any invocation event with the special field `__context__` (which is how we inject them with the Step Functions and Lambda clients below) Whilst other power tools would use `get` to make use of the correlation IDs: * `@dazn/lambda-powertools-logger` includes recorded correlation IDs in logs * `@dazn/lambda-powertools-http-client` includes recorded correlation IDs as HTTP headers when you make a HTTP request * `@dazn/lambda-powertools-sns-client` includes recorded correlation IDs as SNS message attributes when you publish a message to SNS (ie. `SNS.publish`) * `@dazn/lambda-powertools-kinesis-client` injects recorded correlation IDs as part of the event payload when you publish event(s) to Kinesis (ie. 
`Kinesis.putRecord` and `Kinesis.putRecords`) * `@dazn/lambda-powertools-step-functions-client` injects recorded correlation IDs as part of the payload when you start a Step Functions execution (ie. `SFN.startExecution`) * `@dazn/lambda-powertools-lambda-client` injects recorded correlation IDs as part of the invocation payload when you invoke a Lambda function directly (ie. `Lambda.invoke` and `Lambda.invokeAsync`) [![CircleCI](https://circleci.com/gh/getndazn/dazn-lambda-powertools.svg?style=svg)](https://circleci.com/gh/getndazn/dazn-lambda-powertools) # DAZN Lambda Powertools `dazn-lambda-powertools` is a collection of middlewares, AWS clients and helper libraries that make working with lambda easier. ## Motivation Writing Lambdas often involves the bootstrapping of specific tooling, like reading and forwarding on correlation-id's, emitting logs on a lambda timeout, and more. Re-writing and maintaining this bootstrapping logic into every individual lambda can be a pain, so to prevent this re-work we created `dazn-lambda-powertools`. ## Usage The quickest way to get setup is to use the opinionated [pattern basic](/packages/lambda-powertools-pattern-basic) package. `npm install @dazn/lambda-powertools-pattern-basic` ```javascript const wrap = require('@dazn/lambda-powertools-pattern-basic') module.exports.handler = wrap(async (event, context) => { return 42 }) ``` For more control, you can pick and choose from the [individual packages](/packages). ## Powertools and Middy All of the powertool middlewares use the [middy](https://github.com/middyjs/middy) library (**v2.x**), and therefore adhere to the middy API. However, the other tools such as the clients are generic. ## What's in Powertools An integrated suite of powertools for Lambda functions that reduces the effort to implement common lamdba tasks, such as dealing with correlation-ids. * support correlation IDs * debug logs are turned off in production, and are instead sampled for 1% of invocations * debug logging decisions are respected by all the functions on a call chain * HTTP requests always report both latency as well as response count metrics ## Overview of available tools * [logger](/packages/lambda-powertools-logger): structured logging with JSON, configurable log levels, and integrates with other tools to support correlation IDs and sampling (only enable debug logs on 1% of invocations) * [correlation IDs](/packages/lambda-powertools-correlation-ids): create and store correlation IDs that follow the DAZN naming convention * [correlation IDs middleware](/packages/lambda-powertools-middleware-correlation-ids): automatically extract correlation IDs from the invocation event * [sample logging middleware](/packages/lambda-powertools-middleware-sample-logging): enable debug logging for 1% of invocations, or when upstream caller has made the decision to enable debug logging * [obfuscater middleware](/packages/lambda-powertools-middleware-obfuscater): allows you to obfuscate the invocation event so that sensitive data (e.g. 
PII) is not logged accidentally
* [log timeout middleware](/packages/lambda-powertools-middleware-log-timeout): logs an error message when a function invocation times out
* [stop infinite loop middleware](/packages/lambda-powertools-middleware-stop-infinite-loop): stops infinite loops

### Client libraries

* [http client](/packages/lambda-powertools-http-client): HTTP client that automatically forwards any correlation IDs you have captured or created, and records both latency as well as response count metrics
* [CloudWatchEvents client](/packages/lambda-powertools-cloudwatchevents-client): CloudWatchEvents client that automatically forwards any correlation IDs you have captured or created when you put events to an event bus
* [EventBridge client](/packages/lambda-powertools-eventbridge-client): EventBridge client that automatically forwards any correlation IDs you have captured or created when you put events to an event bus
* [SNS client](/packages/lambda-powertools-sns-client): SNS client that automatically forwards any correlation IDs you have captured or created when you publish a message to SNS
* [SQS client](/packages/lambda-powertools-sqs-client): SQS client that automatically forwards any correlation IDs you have captured or created when you publish a message to SQS
* [Kinesis client](/packages/lambda-powertools-kinesis-client): Kinesis client that automatically forwards any correlation IDs you have captured or created when you publish record(s) to a Kinesis stream
* [Firehose client](/packages/lambda-powertools-firehose-client): Firehose client that automatically forwards any correlation IDs you have captured or created when you publish record(s) to a Firehose delivery stream
* [Step Functions client](/packages/lambda-powertools-step-functions-client): Step Functions client that automatically forwards any correlation IDs you have captured or created when you start an execution
* [Lambda client](/packages/lambda-powertools-lambda-client): Lambda client that automatically forwards any correlation IDs you have captured or created when you invoke a Lambda function directly
* [DynamoDB client](/packages/lambda-powertools-dynamodb-client): DynamoDB client that automatically forwards any correlation IDs you have captured or created when you perform put or update operations against DynamoDB. These correlation IDs are then available to functions processing these events from the table's DynamoDB Stream.

### Patterns

* [basic template for a function](/packages/lambda-powertools-pattern-basic): wrapper for your function that applies and configures the function to work well with Datadog metrics and sample logging
* [obfuscate template](/packages/lambda-powertools-pattern-obfuscate): basic template (above) + obfuscate the invocation event so sensitive data is obfuscated in the `after` and `onError` handlers.

A combined sketch showing how several of these tools fit together follows this list.
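The following is a minimal sketch, not taken from the packages' own docs, of a function that uses the basic pattern wrapper, the logger and the SQS client together; the queue URL and the logged fields are placeholders:

```js
// a minimal illustrative sketch; QUEUE_URL is a placeholder environment variable
const wrap = require('@dazn/lambda-powertools-pattern-basic')
const Log = require('@dazn/lambda-powertools-logger')
const SQS = require('@dazn/lambda-powertools-sqs-client')

module.exports.handler = wrap(async (event, context) => {
  // correlation IDs captured by the middleware are included in every log line
  Log.info('received event', { requestId: context.awsRequestId })

  // the same correlation IDs are forwarded as SQS message attributes
  await SQS.sendMessage({
    MessageBody: JSON.stringify({ payload: event }),
    QueueUrl: process.env.QUEUE_URL
  }).promise()

  return { statusCode: 202 }
})
```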
## Installing the powertools ### via NPM | Package | Install command | | --- | --- | | cloudwatchevents-client | npm install @dazn/lambda-powertools-cloudwatchevents-client | | correlation-ids | npm install @dazn/lambda-powertools-correlation-ids | | dynamodb-client | npm install @dazn/lambda-powertools-dynamodb-client | | eventbridge-client | npm install @dazn/lambda-powertools-eventbridge-client | | firehose-client | npm install @dazn/lambda-powertools-firehose-client | | http-client | npm install @dazn/lambda-powertools-http-client | | kinesis-client | npm install @dazn/lambda-powertools-kinesis-client | | lambda-client | npm install @dazn/lambda-powertools-lambda-client | | logger | npm install @dazn/lambda-powertools-logger | | middleware-correlation-ids | npm install @dazn/lambda-powertools-middleware-correlation-ids | | middleware-log-timeout | npm install @dazn/lambda-powertools-middleware-log-timeout | | middleware-obfuscater | npm install @dazn/lambda-powertools-middleware-obfuscater | | middleware-sample-logging | npm install @dazn/lambda-powertools-middleware-sample-logging | | middleware-stop-infinite-loop | npm install @dazn/lambda-powertools-middleware-stop-infinite-loop | | pattern-basic | npm install @dazn/lambda-powertools-pattern-basic | | pattern-obfuscate | npm install @dazn/lambda-powertools-pattern-obfuscate | | sns-client | npm install @dazn/lambda-powertools-sns-client | | sqs-client | npm install @dazn/lambda-powertools-sqs-client | | step-functions-client | npm install @dazn/lambda-powertools-step-functions-client | ### via Lambda layer You can also deploy the layer via our SAR app, which you can deploy either via [this page](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:570995107280:applications~dazn-lambda-powertools) (click `Deploy` and follow the instructions) or using CloudFormation/Serverless framework/AWS SAM: ```yml DaznLambdaPowertoolsLayer: Type: AWS::Serverless::Application Properties: Location: ApplicationId: arn:aws:serverlessrepo:us-east-1:570995107280:applications/dazn-lambda-powertools SemanticVersion: <enter latest version> ``` and reference the output `Outputs.LayerVersion` to get the ARN of the layer to reference in your function. e.g. `Fn::GetAtt: [DaznLambdaPowertoolsLayer, Outputs.LayerVersion]`. You can find the latest version of the SAR app in the `lerna.json` file [here](/lerna.json), in the `version` property. ## Design goal Compliance with best practices around logging and monitoring should be the default behaviour. These tools make it simple for you to **do the right thing** and **gets out of your way** as much as possible. Individually they are useful in their own right, but together they're so much more useful! The middlewares capture incoming correlation IDs, and the logger automatically includes them in every log message, and the other clients (HTTP, Kinesis, SNS, etc.) would also automatically forward them on to external systems. Even if your function doesn't do anything with correlation IDs, the tools make sure that it behaves correctly as these correlation IDs flow through it. ![](powertools-illustrated.svg) ### Did you consider monkey-patching the clients instead? Instead of forcing you to use dazn-powertools AWS clients, we could have monkey patched the AWS SDK clients (which we already do in the tests). We could also monkey patch Node's `http` module (like what [Nock](https://github.com/node-nock/nock) does) to intercept HTTP requests and inject correlation IDs as HTTP headers. 
We could apply the monkey patching when you apply the correlation IDs middleware, and your function would "automagically" forward correlation IDs without having to use our own client libraries. That way, as a user of the tools, you could use whatever HTTP client you wish, and can use the standard SDK clients as well. We did entertain this idea, but I wanted to leave at least one decision for you to make. The rationale is that when things go wrong (e.g. unhandled error, or bug in our wrapper code) or when they don't work as expected (e.g. you're using an AWS SDK client that we don't support yet), at least you have that one decision to start debugging (change the `require` statement to use the official library instead of our own to see if things things still work). ## Useful commands ### bootstrapping locally Because of the inter-dependencies between packages, it can be tricky to test your changes haven't broken another package. You can use [Lerna](https://lerna.js.org/) CLI to bootstrap all the dependencies with the current local version: ```sh lerna bootstrap ``` ### run all tests ```sh npm test ``` ### run tests for a specific package ```sh PKG=correlation-ids npm run test-package ``` ### create a new package ```sh lerna create <name of package> ``` and follow the instruction to bootstrap the new project. ## Contributing Please read our [contribution guide](CONTRIBUTING.md) to see how you can contribute towards this project. # lambda-powertools-dynamodb-client DynamoDB client wrapper that knows how to forward correlation IDs (captured via `@dazn/lambda-powertools-correlation-ids`). Main features: * auto-injects correlation IDs into the DynamoDB item(s) so they are available in the DynamoDB Stream * direct replacement for `AWS.DynamoDB.Document` client ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-dynamodb-client` ## API It's exactly the same as the DynamoDB Document client from the AWS SDK. ```js const DynamoDB = require('@dazn/lambda-powertools-dynamodb-client') await DynamoDB.put({ TableName: 'table-name', Item: { Id: 'theburningmonk' } }).promise() await DynamoDB.update({ TableName: 'table-name', Key: { Id: 'theburningmonk' }, UpdateExpression: 'SET #url = :url', ExpressionAttributeNames: { '#url': 'url' }, ExpressionAttributeValues: { ':url': 'https://theburningmonk.com' } }).promise() await DynamoDB.batchWrite({ RequestItems: { ['table-name']: [ { DeleteRequest: { Key: { Id: 'theburningmonk' } } }, { PutRequest: { Item: { Id: 'theburningmonk' } } } ] } }).promise() await DynamoDB.transactWrite({ TransactItems: [ { Put: { TableName: 'table-name', Item: { Id: 'theburningmonk' } } }, { Update: { TableName: tableName, Key: { Id: 'theburningmonk' }, UpdateExpression: 'SET #url = :url', ExpressionAttributeNames: { '#url': 'url' }, ExpressionAttributeValues: { ':url': 'https://theburningmonk.com' } } } ] }).promise() ``` # lambda-powertools-cloudwatchevents-client CloudWatchEvents client wrapper that knows how to forward correlation IDs (captured via `@dazn/lambda-powertools-correlation-ids`). Main features: * auto-injects correlation IDs into the CloudWatchEvents events when you call `putEvents` * direct replacement for `AWS.CloudWatchEvents` client ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-cloudwatchevents-client` ## API It's exactly the same as the CloudWatchEvents client from the AWS SDK. 
```js const CloudWatchEvents = require('@dazn/lambda-powertools-cloudwatchevents-client') const publishEvents = async () => { const putEventsReq = { Entries: [ { Source: "my-source", "Detail-Type": "my-type", Detail: JSON.stringify({ message: 'hello cloudwatchevents' }) }, { Source: "my-source", "Detail-Type": "my-type", Detail: JSON.stringify({ message: 'hello lambda-powertools' }) } ] } await CloudWatchEvents.putEvents(putEventsReq).promise() } ``` # Demo app Demo project to show off the capabilities of the powertools, illustrating how correlation IDs and sample debug logging can be respected through an elaborate web of functions. ![](architecture.png) See it in action in this lovingly crafted short video: [![](https://img.youtube.com/vi/enmWTil-y7E/0.jpg)](https://www.youtube.com/watch?v=enmWTil-y7E) # lambda-powertools-http-client HTTP client that automatically forwards correlation IDs (captured via `@dazn/lambda-powertools-correlation-ids`), and follows DAZN's convention around recording metrics around integration points. Main features: * auto-forwards any correlation IDs captured with the `@dazn/lambda-powertools-correlation-ids` package as HTTP headers * auto-record custom metrics using the `@dazn/datadog-metrics` package, which defaults to async mode (i.e. writing to `stdout` in DogStatsD format) but can be configured via the `DATADOG_METRICS_MODE` environment variable * custom metrics include: * `{hostName}.response.latency` [`histogram`]: e.g. `google.com.response.latency` * `{hostName}.response.{statusCode}` [`count`]: e.g. `google.com.response.200` metric names can be overriden with the `metricName` option (see below for details) * all custom metrics include the tags `awsRegion`, `functionName`, `functionVersion`, `method` (e.g. `POST`) and `path` (e.g. `/v1/signin`) * you can add additional tags by passing them in via the `metricTags` option (see below for details) * supports timeout ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-http-client` ## API Basic usage looks like this: ```js const HTTP = require('@dazn/lambda-powertools-http-client') const sayIt = async () => { const httpRequest = { uri: `https://example.com/dev/say`, method: 'post', body: { message: 'hello world' } } await HTTP(httpRequest) } ``` It's essentially a function that accepts a request of type: ```js { uri/url : string (either uri or url must be specified) method : GET (default) | POST | PUT | HEAD | DELETE | PATCH headers : object qs : object body : object metricName [optional] : string // override the default metric name, e.g. 'adyenApi', which changes metrics to 'adyenapi.latency' and 'adyenapi.202' metricTags [optional] : string [] // additional tags for metrics, e.g. ['request_type:submit', 'load_test'] timeout [optional] : int (millis) } ``` # lambda-powertools-sns-client SNS client wrapper that knows how to forward correlation IDs (captured via `@dazn/lambda-powertools-correlation-ids`). Main features: * direct replacement for `AWS.SNS` client * auto-injects correlation IDs into SNS message when you call `publish` * allow correlation IDs to be overriden with `publishWithCorrelationIds` (useful when processing batch-based event sources such as SQS and Kinesis, where every record has its own set of correlation IDs) ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-sns-client` ## API It's exactly the same as the SNS client from the AWS SDK. 
```js
const SNS = require('@dazn/lambda-powertools-sns-client')

const publishMessage = async () => {
  const req = {
    Message: JSON.stringify({ message: 'hello sns' }),
    TopicArn: 'my-sns-topic'
  }

  await SNS.publish(req).promise()
}
```

# lambda-powertools-pattern-obfuscate

A pattern that helps you follow our guidelines around logging and monitoring, with the added ability to obfuscate personal fields.

Main features:

* configures Datadog metrics namespace using the function name if one is not specified already
* configures Datadog default tags with `awsRegion`, `functionName`, `functionVersion` and `environment`
* applies the `@dazn/lambda-powertools-middleware-correlation-ids` middleware at a default 1% sample rate
* applies the `@dazn/lambda-powertools-middleware-sample-logging` middleware at a default 1% sample rate
* applies the `@dazn/lambda-powertools-middleware-obfuscated-logging` middleware with passed obfuscation filters
* applies the `@dazn/lambda-powertools-middleware-log-timeout` middleware at default 10ms threshold (i.e. log an error message 10ms before an invocation actually times out)
* allows the default 1% sample rate to be overridden via the `SAMPLE_DEBUG_LOG_RATE` environment variable; for example, to sample debug logs at a 5% rate, set `SAMPLE_DEBUG_LOG_RATE` to `0.05`

## Getting Started

Install from NPM: `npm install @dazn/lambda-powertools-pattern-obfuscate`

## API

```js
const obfuscatedWrap = require('@dazn/lambda-powertools-pattern-obfuscate')

module.exports.handler = obfuscatedWrap.obfuscaterPattern(['Records.*.firstName', 'Records.*.lastName'], async (event, context) => {
  return 42
})
```

# lambda-powertools-firehose-client

Firehose client wrapper that knows how to forward correlation IDs (captured via `@dazn/lambda-powertools-correlation-ids`).

Main features:

* auto-injects correlation IDs into Firehose records when you call `putRecord` or `putRecordBatch` (only JSON payloads are supported currently)
* direct replacement for `AWS.Firehose` client

## Getting Started

Install from NPM: `npm install @dazn/lambda-powertools-firehose-client`

## API

It's exactly the same as the Firehose client from the AWS SDK.

```js
const Firehose = require('@dazn/lambda-powertools-firehose-client')

const publishEvent = async () => {
  const putRecordReq = {
    DeliveryStreamName: 'lambda-powertools-demo',
    Record: { Data: JSON.stringify({ message: 'hello firehose' }) }
  }

  await Firehose.putRecord(putRecordReq).promise()
}

const publishEvents = async () => {
  const putRecordBatchReq = {
    DeliveryStreamName: 'lambda-powertools-demo',
    Records: [
      { Data: JSON.stringify({ message: 'hello firehose' }) },
      { Data: JSON.stringify({ message: 'hello lambda-powertools' }) }
    ]
  }

  await Firehose.putRecordBatch(putRecordBatchReq).promise()
}
```

# lambda-powertools-middleware-correlation-ids

A [Middy](https://github.com/middyjs/middy) middleware that extracts correlation IDs from the invocation event and stores them with the `@dazn/lambda-powertools-correlation-ids` package.
Main features: * stores correlation IDs with the `@dazn/lambda-powertools-correlation-ids` package * supports API Gateway events (HTTP headers) * supports SNS events (message attributes) * supports SQS events (message attributes with and without raw message delivery when published from SNS) * supports Kinesis events (see below for more details) * supports direct invocations and Step Function tasks (the middleware looks for a `__context__` property in the JSON event) * initializes correlation IDs using the Lambda request ID * captures anything with prefix `x-correlation-` * captures `User-Agent` from API Gateway events * captures or initializes the `debug-log-enabled` decision based on configuration (see below) to ensure invocation follows upstream decision to enable debug logging for a small % of invocations ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-middleware-correlation-ids` Alternatively, if you use the template `@dazn/lambda-powertools-pattern-basic` then this would be configured for you. ## API Accepts a configuration object of the following shape: ```js { sampleDebugLogRate: double [between 0 and 1] } ``` ```js const middy = require('middy') const correlationIds = require('@dazn/lambda-powertools-middleware-correlation-ids') const handler = async (event, context) => { return 42 } module.exports = middy(handler) .use(correlationIds({ sampleDebugLogRate: 0.01 })) } ``` This middleware is often used alongside the `@dazn/lambda-powertools-middleware-sample-logging` middleware to implement sample logging. It's **recommended** that you use the `@dazn/lambda-powertools-pattern-basic` which configures both to enable debug logging at 1% of invocations. ## Logging and Forwarding correlation IDs **Note**: *this does not apply to `Kinesis` and `SQS` event sources, whose invocation events contain a batch of records and require different handling. Please see section below for more details.* Once you have wrapped your handler code, the correlation IDs would be automatically included when: * you log with the `@dazn/lambda-powertools-logger` logger * when you make HTTP requests with the `@dazn/lambda-powertools-http-client` HTTP client * when you interact with AWS with the various AWS SDK clients from this project If you're using these accompanying packages then correlation IDs would simply pass through your function. However, if you wish to augment existing correlation IDs then you can also add new correlation IDs to the mix with the `@dazn/lambda-powertools-correlation-ids` package. ## Logging and Forwarding correlation IDs for Kinesis and SQS events Because Kinesis and SQS both supports batching, it means that the invocation event can contain multiple records. Each of these records would have its own set of correlation IDs. To address this, this middleware would extract the correlation IDs for each record and make it available for you, as well as creating a dedicated logger instance for each record. How this happens depends on the event source. The HTTP client as well as AWS SDK clients from this project all have an override that lets you pass in these record-specific correlation IDs. ### Kinesis For kinesis events, the middleware would do the following for each record: 1. Unzip; base64 decode; parse as JSON; and extract correlation IDs from the record body (in the `__context__` property). 2. Create an instance of `CorrelationIds` (from the `@dazn/lambda-powertools-correlation-ids` package) to hold all the extracted correlation IDs. 3. 
Create an instance of `Log` (from the `@dazn/lambda-powertools-logger` package) with the extracted correlation IDs already baked in.
4. Attach the correlation IDs and logger to the parsed JSON object.
5. Include the parsed JSON objects from step 4 in `context.parsedKinesisEvents`.

When using the HTTP client and AWS SDK clients from the powertools packages, you will need to include the correlation IDs as options or use one of the overload methods. For example:

![](docs/images/kinesis_middleware_illustrated.png)

Here's an example function:

```javascript
const Log = require('@dazn/lambda-powertools-logger')
const SNS = require('@dazn/lambda-powertools-sns-client')
const wrap = require('@dazn/lambda-powertools-pattern-basic')

module.exports.handler = wrap(async (event, context) => {
  const events = context.parsedKinesisEvents

  // you can still use the global Log instance, it'll still have the request ID for
  // the current invocation, but it won't have any of the correlation IDs for the
  // individual records
  Log.debug('received Kinesis events', { count: events.length })

  await Promise.all(events.map(evt => {
    // each parsed kinesis event has a `logger` attached to it, with the correlation IDs
    // specific for that record
    evt.logger.debug('publishing kinesis event as SNS message...')

    const req = {
      Message: JSON.stringify(evt),
      TopicArn: process.env.TOPIC_ARN
    }

    // the SNS client has an overload method for publish, which lets you pass the
    // correlation IDs specific to this parsed kinesis event
    return SNS.publishWithCorrelationIds(evt.correlationIds, req).promise()
  }))
})
```

### SQS

For SQS events, the middleware would do the following for each record:

1. Extract correlation IDs from the message attributes.
2. Create an instance of `CorrelationIds` (from the `@dazn/lambda-powertools-correlation-ids` package) to hold all the extracted correlation IDs.
3. Create an instance of `Log` (from the `@dazn/lambda-powertools-logger` package) with the extracted correlation IDs already baked in.
4. Attach the correlation IDs and logger to the original SQS record object.

When using the HTTP client and AWS SDK clients from the powertools packages, you will need to include the correlation IDs as options or use one of the overload methods. This is similar to the Kinesis example above.

Here's an example function:

```javascript
const Log = require('@dazn/lambda-powertools-logger')
const SNS = require('@dazn/lambda-powertools-sns-client')
const wrap = require('@dazn/lambda-powertools-pattern-basic')

module.exports.handler = wrap(async (event, context) => {
  // you can still use the global Log instance, it'll still have the request ID for
  // the current invocation, but it won't have any of the correlation IDs for the
  // individual records
  Log.debug('received SQS events', { count: event.Records.length })

  await Promise.all(event.Records.map(record => {
    // each SQS record has a `logger` attached to it, with the correlation IDs
    // specific for that record
    record.logger.debug('publishing SQS record as SNS message...')

    const req = {
      Message: JSON.stringify(record),
      TopicArn: process.env.TOPIC_ARN
    }

    // the SNS client has an overload method for publish, which lets you pass the
    // correlation IDs specific to this SQS record
    return SNS.publishWithCorrelationIds(record.correlationIds, req).promise()
  }))
})
```

# lambda-powertools-middleware-obfuscater

A [Middy](https://github.com/middyjs/middy) middleware that will enable debug logging for a configurable % of invocations. The default is 1%.
Main features: * records an error log message with the invocation event as attribute when an invocation errors. These invocation errors may be obfuscated to avoid the leaking of Personal Identifiable Information. ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-middleware-obfuscater` Alternatively, if you use the template `@dazn/lambda-powertools-pattern-obfuscate` then this would be configured for you. ## API Accepts a configuration object of the following shape: ```js { obfuscationFilter: string array formatted like ["object.key.to.obfuscate"] } ``` ```js { Records: [ { firstName: "personal" secondName: "identifiable" email: "[email protected]" }, { firstName: "second" secondName: "personal" email: "[email protected]" } ] } // To filter the above object you would pass const obfuscationFilter = ["Records.*.firstName", "Records.*.secondName", "Records.*.email"] ``` The output would be... ```js { Records: [ { firstName: "********" secondName: "************" email: "******@**.**" }, { firstName: "******" secondName: "********" email: "******@**.**" } ] } ``` similarly, you can filter entire objects, for instance. ```js const obfuscationFilter = ["Records.*.personal"] { Records: [ { personal: { firstName: "********" secondName: "************" email: "******@**.**" } }. { personal: { firstName: "******" secondName: "********" email: "******@**.**", address: { postcode: "******", street: "* ****** ***", country: "**" }}} ] } ``` This will recursively filter every object and subobjects ```js const middy = require('middy') const obfuscatedLogging = require('@dazn/lambda-powertools-middleware-obfuscater') const handler = async (event, context) => { return 42 } module.exports = middy(handler) .use(obfuscatedLogging.obfuscaterMiddleware({ sampleRate: 0.01, obfuscationFilters: ["example.example"] })) } ``` This middleware is often used alongside the `@dazn/lambda-powertools-middleware-correlation-ids` middleware to implement sample logging. It's **recommended** that you use the `@dazn/lambda-powertools-pattern-obfuscate` which configures both to enable debug logging at 1% of invocations. # lambda-powertools-step-functions-client Step Functions (SFN) client wrapper that knows how to forward correlation IDs (captured via `@dazn/lambda-powertools-correlation-ids`). Main features: * auto-injects correlation IDs into the invocation payload when you call `startExecution` * direct replacement for `AWS.StepFunctions` client ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-step-functions-client` ## API It's exactly the same as the Step Functions (SFN) client from the AWS SDK. ```js const SFN = require('@dazn/lambda-powertools-step-functions-client') const publishMessage = async () => { const req = { stateMachineArn: 'my-state-machine', input: JSON.stringify({ message: 'hello sfn' }), name: 'test' } await SFN.startExecution(req).promise() } ``` # lambda-powertools-eventbridge-client EventBridge client wrapper that knows how to forward correlation IDs (captured via `@dazn/lambda-powertools-correlation-ids`). Main features: * auto-injects correlation IDs into the EventBridge events when you call `putEvents` * direct replacement for `AWS.EventBridge` client ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-eventbridge-client` ## API It's exactly the same as the EventBridge client from the AWS SDK. 
```js const EventBridge = require('@dazn/lambda-powertools-eventbridge-client') const publishEvents = async () => { const putEventsReq = { Entries: [ { Source: "my-source", "Detail-Type": "my-type", Detail: JSON.stringify({ message: 'hello eventbridge' }) }, { Source: "my-source", "Detail-Type": "my-type", Detail: JSON.stringify({ message: 'hello lambda-powertools' }) } ] } await EventBridge.putEvents(putEventsReq).promise() } ``` # lambda-powertools-middleware-log-timeout A [Middy](https://github.com/middyjs/middy) middleware that will log a timeout error message **just before** the function actually times out. Main features: * records an error log message `invocation timed out` (with the invocation event as attribute) when an invocation times out ## Getting Started Install from NPM: `npm install @dazn/lambda-powertools-middleware-log-timeout` Alternatively, if you use the template `@dazn/lambda-powertools-pattern-basic` then this would be configured for you. ## API The middleware accepts an optional constructor parameter `thresholdMillis`, which is the number of millis before an invocation is timed out, that an error message is logged. `thresholdMillis` defaults to **10ms**. ```js const middy = require('middy') const logTimeout = require('@dazn/lambda-powertools-middleware-log-timeout') const handler = async (event, context) => { return 42 } module.exports = middy(handler) // or .use(logTimeout(50)) to log the timeout error message 50ms before invocation times out .use(logTimeout()) // defaults to 10ms } ``` It's **recommended** that you use the `@dazn/lambda-powertools-pattern-basic` which configures this middleware along with other useful middlewares.
mfornet_account-lookup-table
README.md config-overrides.js package.json public index.html manifest.json robots.txt src App.css App.js App.test.js index.css index.js logo.svg reportWebVitals.js setupTests.js
# Account Lookup Table View details of several lockup accounts in a single place. Similar to [Account-Lookup](https://near.github.io/account-lookup/). ## Roadmap - Improve UI - Error messages - (table) https://tailwindui.com/components/application-ui/lists/tables - Move to web4 - Versioned accounts in NEAR Page Account: https://explorer.mainnet.near.org/accounts/account_viewer.near Stable contracts: Don't have full access key, everything is on chain! No owner methods! Nightly contracts: Iterate fast, break fast - stable.v0-1.account_viewer.near - nightly.v0-1.account_viewer.near - Lockup factory Interface. To help people test lockups deployments and parameters. Will be useful - FAQ: Explain why this app is privacy preserving. - Host at near.page - RPC (This can be mitigated by rotating RPCs)
FroVolod_near-cli-rs-instance
.github workflows release.yml test.yml Cargo.toml Cross.toml README.md docs GUIDE.en.md GUIDE.ru.md README.en.md README.ru.md media view-account.svg src commands account add_key access_key_type mod.rs autogenerate_new_keypair mod.rs print_keypair_to_terminal mod.rs save_keypair_to_keychain mod.rs mod.rs use_manually_provided_seed_phrase mod.rs use_public_key mod.rs create_implicit_account mod.rs create_subaccount mod.rs delete_account mod.rs delete_key mod.rs import_account mod.rs list_keys mod.rs mod.rs view_account_summary mod.rs config add_connection mod.rs delete_connection mod.rs mod.rs contract call_function as_read_only mod.rs as_transaction mod.rs mod.rs deploy initialize_mode call_function_type mod.rs mod.rs mod.rs download_wasm mod.rs mod.rs mod.rs tokens mod.rs send_ft mod.rs send_near mod.rs send_nft mod.rs view_ft_balance mod.rs view_near_balance mod.rs view_nft_assets mod.rs transaction construct_transaction add_access_key access_key_type mod.rs mod.rs use_manually_provided_seed_phrase mod.rs use_public_key mod.rs call_function mod.rs create_subaccount mod.rs delete_access_key mod.rs delete_account mod.rs deploy_contract initialize_mode call_function_type mod.rs mod.rs mod.rs mod.rs stake_near_tokens mod.rs transfer_tokens mod.rs mod.rs view_status mod.rs common.rs config.rs main.rs network mod.rs network_for_transaction mod.rs network_view_at_block mod.rs transaction_signature_options mod.rs sign_with_keychain mod.rs sign_with_ledger mod.rs sign_with_private_key mod.rs types account_id.rs crypto_hash.rs mod.rs path_buf.rs public_key.rs secret_key.rs signature.rs slip10.rs url.rs vec_string.rs utils_command generate_keypair_subcommand mod.rs mod.rs these functions are used for offline mode
near-cli -------- near-cli is a command line utility for working with the Near Protocol blockchain. <p> <img src="docs/media/view-account.svg" alt="" width="1200"> </p> #### [README in English](docs/README.en.md) * [Usage](docs/README.en.md#usage) * [User Guide](docs/README.en.md#user-guide) * [Installation](docs/README.en.md#installation) * [Building](docs/README.en.md#building) #### [README на Русском (in Russian)](docs/README.ru.md) * [Применение](docs/README.ru.md#применение) * [Инструкция](docs/README.ru.md#инструкция) * [Установка](docs/README.ru.md#установка) * [Сборка](docs/README.ru.md#сборка)
gautamprikshit1_gifs-portal-near
.gitpod.yml README.md contract Cargo.toml README.md src lib.rs gif-collection-frontend README.md package.json public index.html manifest.json robots.txt src App.css App.js App.test.js index.css index.js logo.svg reportWebVitals.js setupTests.js utils collections.js config.js near.js webpack.config.js integration-tests rs Cargo.toml src tests.rs ts package.json src main.ava.ts package-lock.json package.json
# Getting Started with Create React App This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app). ## Available Scripts In the project directory, you can run: ### `npm start` Runs the app in the development mode.\ Open [http://localhost:3000](http://localhost:3000) to view it in your browser. The page will reload when you make changes.\ You may also see any lint errors in the console. ### `npm test` Launches the test runner in the interactive watch mode.\ See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. ### `npm run build` Builds the app for production to the `build` folder.\ It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.\ Your app is ready to be deployed! See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. ### `npm run eject` **Note: this is a one-way operation. Once you `eject`, you can't go back!** If you aren't satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you're on your own. You don't have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn't feel obligated to use this feature. However we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it. ## Learn More You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). To learn React, check out the [React documentation](https://reactjs.org/). 
### Code Splitting This section has moved here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting) ### Analyzing the Bundle Size This section has moved here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size) ### Making a Progressive Web App This section has moved here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app) ### Advanced Configuration This section has moved here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration) ### Deployment This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment) ### `npm run build` fails to minify This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify) near-blank-project Smart Contract ================== A [smart contract] written in [Rust] for an app initialized with [create-near-app] Quick Start =========== Before you compile this code, you will need to install Rust with [correct target] Exploring The Code ================== 1. The main smart contract code lives in `src/lib.rs`. 2. Tests: You can run smart contract tests with the `./test` script. This runs standard Rust tests using [cargo] with a `--nocapture` flag so that you can see any debug info you print to the console. [smart contract]: https://docs.near.org/docs/develop/contracts/overview [Rust]: https://www.rust-lang.org/ [create-near-app]: https://github.com/near/create-near-app [correct target]: https://github.com/near/near-sdk-rs#pre-requisites [cargo]: https://doc.rust-lang.org/book/ch01-03-hello-cargo.html near-blank-project ================== This [React] app was initialized with [create-near-app] Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `yarn install` 3. Run the local development server: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. Exploring The Code ================== 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/assets/js/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `yarn run test`. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contract gets deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. 
Step 0: Install near-cli (optional)
-----------------------------------

[near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `yarn install`, but for best ergonomics you may want to install it globally:

    npm install --global near-cli

Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx`.

Ensure that it's installed with `near --version` (or `npx near --version`).

Step 1: Create an account for the contract
------------------------------------------

Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`:

1. Authorize NEAR CLI, following the commands it gives you:

       near login

2. Create a subaccount (replace `YOUR-NAME` below with your actual account name):

       near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet

Step 2: Set contract name in code
---------------------------------

Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above.

    const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet'

Step 3: Deploy!
---------------

One command:

    yarn deploy

As you can see in `package.json`, this does two things:

1. builds & deploys the smart contract to NEAR TestNet
2. builds & deploys the frontend code to GitHub Pages using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere.

Troubleshooting
===============

On Windows, if you're seeing an error containing `EPERM`, it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details.

  [React]: https://reactjs.org/
  [create-near-app]: https://github.com/near/create-near-app
  [Node.js]: https://nodejs.org/en/download/package-manager/
  [jest]: https://jestjs.io/
  [NEAR accounts]: https://docs.near.org/docs/concepts/account
  [NEAR Wallet]: https://wallet.testnet.near.org/
  [near-cli]: https://github.com/near/near-cli
  [gh-pages]: https://github.com/tschaub/gh-pages
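As a companion to Step 2 above, here is a minimal, hedged sketch of how a frontend typically consumes `CONTRACT_NAME` from `src/config.js` using `near-api-js`. The view/change method names below are placeholders, not this project's actual contract API:

```js
// Sketch only: assumes near-api-js; CONTRACT_NAME mirrors src/config.js
const { connect, keyStores, WalletConnection, Contract } = require('near-api-js');

const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet';

async function initContract() {
  const near = await connect({
    networkId: 'testnet',
    nodeUrl: 'https://rpc.testnet.near.org',
    walletUrl: 'https://wallet.testnet.near.org',
    keyStore: new keyStores.BrowserLocalStorageKeyStore(),
  });
  const wallet = new WalletConnection(near, 'near-blank-project');
  // viewMethods/changeMethods are placeholders; list your contract's real methods here
  return new Contract(wallet.account(), CONTRACT_NAME, {
    viewMethods: ['getGreeting'],
    changeMethods: ['setGreeting'],
  });
}
```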
near_contract-wizard
.github ISSUE_TEMPLATE BOUNTY.yml .prettierrc.yml .vscode tasks.json README.md package-lock.json package.json src cli.ts lib.ts srcDoc.ts test.ts tests Cargo.toml tsconfig.json
# NEAR Contract Wizard ## Setup 1. Install dependencies: `npm install`. 2. Run `npm run prepare` to install commit hooks. 3. Run `npm run build:dev` to build the project. ## Generating code using the CLI There's a simple CLI script `src/cli.ts` that you can use to try out the code generation. ```bash npm run -s cli 'ft:{"name":"My Fungible Token","symbol":"MFT","decimals":24,"preMint":10}' > my_ft_contract.rs ``` ## Testing Run `npm run test` to run the tests. The tests work by generating the Rust code, writing it to `tests/src/lib.rs`, and then trying to compile it to WASM. Compilation failure = test failure. For this to work, you have to have the Rust build tools installed, as well as the `wasm32-unknown-unknown` target. You can install the WASM target with: ```bash rustup target add wasm32-unknown-unknown ``` ## Authors - Jacob Lindahl <[email protected]> [@sudo_build](https://twitter.com/sudo_build)
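For illustration only, the generate, write, compile loop described under Testing could look roughly like the sketch below; `generateContract` is a hypothetical stand-in for whatever entry point `src/lib.ts` actually exports, while `fs` and `child_process` are Node built-ins:

```js
// Hypothetical sketch of the test flow: generate Rust, write it, try to compile it to WASM.
const fs = require('fs');
const { execSync } = require('child_process');
// `generateContract` is a placeholder name for the real generator in src/lib.ts
const { generateContract } = require('../src/lib');

const rustSource = generateContract('ft', {
  name: 'My Fungible Token',
  symbol: 'MFT',
  decimals: 24,
  preMint: 10,
});

fs.writeFileSync('tests/src/lib.rs', rustSource);

// If the generated code does not compile, this throws, which is what fails the test.
execSync('cargo build --target wasm32-unknown-unknown --release', {
  cwd: 'tests',
  stdio: 'inherit',
});
```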
ilyasBozdemir_Web3-and-NEAR
NCD Practice-I starter--near-sdk-as README.md as-pect.config.js asconfig.json neardev dev-account.env node_modules .bin asb.cmd asbuild.cmd asc.cmd asinit.cmd asp.cmd aspect.cmd assemblyscript-build.cmd mkdirp.cmd near-vm-as.cmd near-vm.cmd nearley-railroad.cmd nearley-test.cmd nearley-unparse.cmd nearleyc.cmd rimraf.cmd wasm-opt.cmd @as-pect assembly README.md assembly index.ts internal Actual.ts Expectation.ts Expected.ts Reflect.ts ReflectedValueType.ts Test.ts assert.ts call.ts comparison toIncludeComparison.ts toIncludeEqualComparison.ts log.ts noOp.ts node_modules .bin asc.cmd asinit.cmd package.json types as-pect.d.ts as-pect.portable.d.ts env.d.ts cli README.md init as-pect.config.js env.d.ts example.spec.ts init-types.d.ts portable-types.d.ts lib as-pect.cli.amd.d.ts as-pect.cli.amd.js help.d.ts help.js index.d.ts index.js init.d.ts init.js portable.d.ts portable.js run.d.ts run.js test.d.ts test.js types.d.ts types.js util CommandLineArg.d.ts CommandLineArg.js IConfiguration.d.ts IConfiguration.js asciiArt.d.ts asciiArt.js collectReporter.d.ts collectReporter.js getTestEntryFiles.d.ts getTestEntryFiles.js removeFile.d.ts removeFile.js strings.d.ts strings.js writeFile.d.ts writeFile.js worklets ICommand.d.ts ICommand.js compiler.d.ts compiler.js node_modules .bin asc.cmd asinit.cmd package.json core README.md lib as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js reporter CombinationReporter.d.ts CombinationReporter.js EmptyReporter.d.ts EmptyReporter.js IReporter.d.ts IReporter.js SummaryReporter.d.ts SummaryReporter.js VerboseReporter.d.ts VerboseReporter.js test IWarning.d.ts IWarning.js TestContext.d.ts TestContext.js TestNode.d.ts TestNode.js transform assemblyscript.d.ts assemblyscript.js createAddReflectedValueKeyValuePairsMember.d.ts createAddReflectedValueKeyValuePairsMember.js createGenericTypeParameter.d.ts createGenericTypeParameter.js createStrictEqualsMember.d.ts createStrictEqualsMember.js emptyTransformer.d.ts emptyTransformer.js hash.d.ts hash.js index.d.ts index.js util IAspectExports.d.ts IAspectExports.js IWriteable.d.ts IWriteable.js ReflectedValue.d.ts ReflectedValue.js TestNodeType.d.ts TestNodeType.js rTrace.d.ts rTrace.js stringifyReflectedValue.d.ts stringifyReflectedValue.js timeDifference.d.ts timeDifference.js wasmTools.d.ts wasmTools.js node_modules .bin asc.cmd asinit.cmd package.json csv-reporter index.ts lib as-pect.csv-reporter.amd.d.ts as-pect.csv-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json json-reporter index.ts lib as-pect.json-reporter.amd.d.ts as-pect.json-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json snapshots __tests__ snapshot.spec.ts jest.config.js lib Snapshot.d.ts Snapshot.js SnapshotDiff.d.ts SnapshotDiff.js SnapshotDiffResult.d.ts SnapshotDiffResult.js as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js parser grammar.d.ts grammar.js node_modules .bin nearley-railroad.cmd nearley-test.cmd nearley-unparse.cmd nearleyc.cmd package.json src Snapshot.ts SnapshotDiff.ts SnapshotDiffResult.ts index.ts parser grammar.ts tsconfig.json @assemblyscript loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json ansi-regex index.d.ts index.js package.json readme.md ansi-styles index.d.ts index.js package.json readme.md as-bignum .travis.yml README.md as-pect.config.js assembly __tests__ as-pect.d.ts safe_u128.spec.as.ts u128.spec.as.ts u256.spec.as.ts utils.ts fixed fp128.ts fp256.ts index.ts safe fp128.ts fp256.ts types.ts globals.ts 
index.ts integer i128.ts i256.ts index.ts safe i128.ts i256.ts i64.ts index.ts u128.ts u256.ts u64.ts u128.ts u256.ts tsconfig.json utils.ts index.js package.json tsconfig.json asbuild README.md dist cli.d.ts cli.js index.d.ts index.js main.d.ts main.js index.js node_modules cliui CHANGELOG.md LICENSE.txt README.md index.js package.json wrap-ansi index.js package.json readme.md y18n CHANGELOG.md README.md index.js package.json yargs-parser CHANGELOG.md LICENSE.txt README.md index.js lib tokenize-arg-string.js package.json yargs CHANGELOG.md README.md build lib apply-extends.d.ts apply-extends.js argsert.d.ts argsert.js command.d.ts command.js common-types.d.ts common-types.js completion-templates.d.ts completion-templates.js completion.d.ts completion.js is-promise.d.ts is-promise.js levenshtein.d.ts levenshtein.js middleware.d.ts middleware.js obj-filter.d.ts obj-filter.js parse-command.d.ts parse-command.js process-argv.d.ts process-argv.js usage.d.ts usage.js validation.d.ts validation.js yargs.d.ts yargs.js yerror.d.ts yerror.js index.js locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json yargs.js package.json assemblyscript-json .eslintrc.js .travis.yml README.md as-pect.config.js assembly JSON.ts __tests__ as-pect.d.ts json-parse.spec.ts roundtrip.spec.ts to-string.spec.ts usage.spec.ts decoder.ts encoder.ts index.ts tsconfig.json util index.ts docs README.md classes decoderstate.md json.arr.md json.bool.md json.float.md json.integer.md json.null.md json.num.md json.obj.md json.str.md json.value.md jsondecoder.md jsonencoder.md jsonhandler.md throwingjsonhandler.md modules json.md index.js package.json assemblyscript README.md cli README.md asc.d.ts asc.js asc.json shim README.md fs.js path.js process.js transform.d.ts transform.js util colors.d.ts colors.js find.d.ts find.js mkdirp.d.ts mkdirp.js options.d.ts options.js utf8.d.ts utf8.js dist asc.js assemblyscript.d.ts assemblyscript.js sdk.js index.d.ts index.js lib loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json rtrace README.md bin rtplot.js index.d.ts index.js package.json umd index.d.ts index.js package.json node_modules .bin wasm-opt.cmd package-lock.json package.json std README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings Date.ts Math.ts Reflect.ts asyncify.ts console.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts polyfills.ts process.ts reference.ts regexp.ts rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts set.ts shared feature.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts util casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts vector.ts wasi index.ts portable.json portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json axios CHANGELOG.md README.md UPGRADE_GUIDE.md dist axios.js axios.min.js index.d.ts index.js lib adapters README.md http.js xhr.js axios.js cancel Cancel.js CancelToken.js isCancel.js core Axios.js InterceptorManager.js README.md buildFullPath.js createError.js 
dispatchRequest.js enhanceError.js mergeConfig.js settle.js transformData.js defaults.js helpers README.md bind.js buildURL.js combineURLs.js cookies.js deprecatedMethod.js isAbsoluteURL.js isURLSameOrigin.js normalizeHeaderName.js parseHeaders.js spread.js utils.js package.json balanced-match LICENSE.md README.md index.js package.json base-x LICENSE.md README.md package.json src index.d.ts index.js binary-install README.md example binary.js package.json run.js index.js node_modules .bin mkdirp.cmd rimraf.cmd package.json src binary.js binaryen README.md index.d.ts package-lock.json package.json wasm.d.ts bn.js CHANGELOG.md README.md lib bn.js package.json brace-expansion README.md index.js package.json bs58 CHANGELOG.md README.md index.js package.json camelcase index.d.ts index.js package.json readme.md chalk index.d.ts package.json readme.md source index.js templates.js util.js chownr README.md chownr.js package.json cliui CHANGELOG.md LICENSE.txt README.md build lib index.js string-utils.js package.json color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name README.md index.js package.json commander CHANGELOG.md Readme.md index.js package.json typings index.d.ts concat-map .travis.yml example map.js index.js package.json test map.js debug .coveralls.yml .travis.yml CHANGELOG.md README.md karma.conf.js node.js package.json src browser.js debug.js index.js node.js decamelize index.js package.json readme.md diff CONTRIBUTING.md README.md dist diff.js lib convert dmp.js xml.js diff array.js base.js character.js css.js json.js line.js sentence.js word.js index.es6.js index.js patch apply.js create.js merge.js parse.js util array.js distance-iterator.js params.js package.json release-notes.md runtime.js discontinuous-range .travis.yml README.md index.js package.json test main-test.js emoji-regex LICENSE-MIT.txt README.md es2015 index.js text.js index.d.ts index.js package.json text.js env-paths index.d.ts index.js package.json readme.md escalade dist index.js index.d.ts package.json readme.md sync index.d.ts index.js find-up index.d.ts index.js package.json readme.md follow-redirects README.md http.js https.js index.js package.json fs-minipass README.md index.js package.json fs.realpath README.md index.js old.js package.json get-caller-file LICENSE.md README.md index.d.ts index.js package.json glob README.md changelog.md common.js glob.js package.json sync.js has-flag index.d.ts index.js package.json readme.md hasurl README.md index.js package.json inflight README.md inflight.js package.json inherits README.md inherits.js inherits_browser.js package.json is-fullwidth-code-point index.d.ts index.js package.json readme.md js-base64 LICENSE.md README.md base64.d.ts base64.js package.json locate-path index.d.ts index.js package.json readme.md lodash.clonedeep README.md index.js package.json lodash.sortby README.md index.js package.json long README.md dist long.js index.js package.json src long.js minimatch README.md minimatch.js package.json minimist .travis.yml example parse.js index.js package.json test all_bool.js bool.js dash.js default_bool.js dotted.js kv_short.js long.js num.js parse.js parse_modified.js proto.js short.js stop_early.js unknown.js whitespace.js minipass README.md index.js package.json minizlib README.md constants.js index.js package.json mkdirp bin cmd.js usage.txt index.js package.json moo README.md moo.js package.json ms index.js license.md package.json readme.md near-mock-vm assembly __tests__ main.ts context.ts index.ts outcome.ts 
vm.ts bin bin.js package.json pkg near_mock_vm.d.ts near_mock_vm.js package.json vm dist cli.d.ts cli.js context.d.ts context.js index.d.ts index.js memory.d.ts memory.js runner.d.ts runner.js utils.d.ts utils.js index.js near-sdk-as as-pect.config.js as_types.d.ts asconfig.json asp.asconfig.json assembly __tests__ as-pect.d.ts assert.spec.ts avl-tree.spec.ts bignum.spec.ts contract.spec.ts contract.ts data.txt empty.ts generic.ts includeBytes.spec.ts main.ts max-heap.spec.ts model.ts near.spec.ts persistent-set.spec.ts promise.spec.ts rollback.spec.ts roundtrip.spec.ts runtime.spec.ts unordered-map.spec.ts util.ts utils.spec.ts as_types.d.ts bindgen.ts index.ts json.lib.ts tsconfig.json vm __tests__ vm.include.ts index.ts compiler.js imports.js node_modules .bin near-vm-as.cmd package.json near-sdk-bindgen README.md assembly index.ts compiler.js dist JSONBuilder.d.ts JSONBuilder.js classExporter.d.ts classExporter.js index.d.ts index.js transformer.d.ts transformer.js typeChecker.d.ts typeChecker.js utils.d.ts utils.js index.js package.json near-sdk-core README.md asconfig.json assembly as_types.d.ts base58.ts base64.ts bignum.ts collections avlTree.ts index.ts maxHeap.ts persistentDeque.ts persistentMap.ts persistentSet.ts persistentUnorderedMap.ts persistentVector.ts util.ts contract.ts env env.ts index.ts runtime_api.ts index.ts logging.ts math.ts promise.ts storage.ts tsconfig.json util.ts docs assets css main.css js main.js search.json classes _sdk_core_assembly_collections_avltree_.avltree.html _sdk_core_assembly_collections_avltree_.avltreenode.html _sdk_core_assembly_collections_avltree_.childparentpair.html _sdk_core_assembly_collections_avltree_.nullable.html _sdk_core_assembly_collections_persistentdeque_.persistentdeque.html _sdk_core_assembly_collections_persistentmap_.persistentmap.html _sdk_core_assembly_collections_persistentset_.persistentset.html _sdk_core_assembly_collections_persistentunorderedmap_.persistentunorderedmap.html _sdk_core_assembly_collections_persistentvector_.persistentvector.html _sdk_core_assembly_contract_.context-1.html _sdk_core_assembly_contract_.contractpromise.html _sdk_core_assembly_contract_.contractpromiseresult.html _sdk_core_assembly_math_.rng.html _sdk_core_assembly_promise_.contractpromisebatch.html _sdk_core_assembly_storage_.storage-1.html globals.html index.html modules _sdk_core_assembly_base58_.base58.html _sdk_core_assembly_base58_.html _sdk_core_assembly_base64_.base64.html _sdk_core_assembly_base64_.html _sdk_core_assembly_collections_avltree_.html _sdk_core_assembly_collections_index_.collections.html _sdk_core_assembly_collections_index_.html _sdk_core_assembly_collections_persistentdeque_.html _sdk_core_assembly_collections_persistentmap_.html _sdk_core_assembly_collections_persistentset_.html _sdk_core_assembly_collections_persistentunorderedmap_.html _sdk_core_assembly_collections_persistentvector_.html _sdk_core_assembly_collections_util_.html _sdk_core_assembly_contract_.html _sdk_core_assembly_env_env_.env.html _sdk_core_assembly_env_env_.html _sdk_core_assembly_env_index_.html _sdk_core_assembly_env_runtime_api_.html _sdk_core_assembly_index_.html _sdk_core_assembly_logging_.html _sdk_core_assembly_logging_.logging.html _sdk_core_assembly_math_.html _sdk_core_assembly_math_.math.html _sdk_core_assembly_promise_.html _sdk_core_assembly_storage_.html _sdk_core_assembly_util_.html _sdk_core_assembly_util_.util.html node_modules .bin asb.cmd asbuild.cmd asc.cmd asinit.cmd asp.cmd aspect.cmd assemblyscript-build.cmd near-vm.cmd 
package.json near-sdk-simulator __tests__ avl-tree-contract.spec.ts cross.spec.ts empty.spec.ts exportAs.spec.ts singleton-no-constructor.spec.ts singleton.spec.ts asconfig.js asconfig.json assembly __tests__ avlTreeContract.ts empty.ts exportAs.ts model.ts sentences.ts singleton-fail.ts singleton-no-constructor.ts singleton.ts words.ts as_types.d.ts tsconfig.json dist bin.d.ts bin.js context.d.ts context.js index.d.ts index.js runtime.d.ts runtime.js types.d.ts types.js utils.d.ts utils.js jest.config.js out assembly __tests__ exportAs.ts model.ts sentences.ts singleton-no-constructor.ts singleton.ts package.json src context.ts index.ts runtime.ts types.ts utils.ts tsconfig.json near-vm getBinary.js install.js package.json run.js uninstall.js nearley LICENSE.txt README.md bin nearley-railroad.js nearley-test.js nearley-unparse.js nearleyc.js lib compile.js generate.js lint.js nearley-language-bootstrapped.js nearley.js stream.js unparse.js package.json once README.md once.js package.json p-limit index.d.ts index.js package.json readme.md p-locate index.d.ts index.js package.json readme.md p-try index.d.ts index.js package.json readme.md path-exists index.d.ts index.js package.json readme.md path-is-absolute index.js package.json readme.md punycode LICENSE-MIT.txt README.md package.json punycode.es6.js punycode.js railroad-diagrams README.md example.html generator.html package.json railroad-diagrams.css railroad-diagrams.js railroad_diagrams.py randexp README.md lib randexp.js package.json require-directory .travis.yml index.js package.json require-main-filename CHANGELOG.md LICENSE.txt README.md index.js package.json ret README.md lib index.js positions.js sets.js types.js util.js package.json rimraf CHANGELOG.md README.md bin.js package.json rimraf.js safe-buffer README.md index.d.ts index.js package.json set-blocking CHANGELOG.md LICENSE.txt README.md index.js package.json string-width index.d.ts index.js package.json readme.md strip-ansi index.d.ts index.js package.json readme.md supports-color browser.js index.js package.json readme.md tar README.md index.js lib create.js extract.js get-write-flag.js header.js high-level-opt.js large-numbers.js list.js mkdir.js mode-fix.js pack.js parse.js path-reservations.js pax.js read-entry.js replace.js types.js unpack.js update.js warn-mixin.js winchars.js write-entry.js node_modules .bin mkdirp.cmd package.json tr46 LICENSE.md README.md index.js lib mappingTable.json regexes.js package.json ts-mixer CHANGELOG.md README.md dist decorator.d.ts decorator.js index.d.ts index.js mixin-tracking.d.ts mixin-tracking.js mixins.d.ts mixins.js proxy.d.ts proxy.js settings.d.ts settings.js types.d.ts types.js util.d.ts util.js package.json universal-url README.md browser.js index.js package.json visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js URLSearchParams-impl.js URLSearchParams.js infra.js public-api.js url-state-machine.js 
urlencoded.js utils.js package.json which-module CHANGELOG.md README.md index.js package.json wrap-ansi index.js package.json readme.md wrappy README.md package.json wrappy.js y18n CHANGELOG.md README.md build lib cjs.js index.js platform-shims node.js package.json yallist README.md iterator.js package.json yallist.js yargs-parser CHANGELOG.md LICENSE.txt README.md browser.js build lib index.js string-utils.js tokenize-arg-string.js yargs-parser-types.js yargs-parser.js package.json yargs CHANGELOG.md README.md build lib argsert.js command.js completion-templates.js completion.js middleware.js parse-command.js typings common-types.js yargs-parser-types.js usage.js utils apply-extends.js is-promise.js levenshtein.js obj-filter.js process-argv.js set-blocking.js which-module.js validation.js yargs-factory.js yerror.js helpers index.js package.json locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json package.json scripts 1.dev-deploy.sh 2.use-contract.sh 3.cleanup.sh README.md src as_types.d.ts simple __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts singleton __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts Practice-II starter--near-sdk-as README.md as-pect.config.js asconfig.json package.json scripts 1.dev-deploy.sh 2.use-contract.sh 3.cleanup.sh README.md src as_types.d.ts simple __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts singleton __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts starter-near-sdk-as README.md as-pect.config.js asconfig.json neardev dev-account.env node_modules .bin asb.cmd asbuild.cmd asc.cmd asinit.cmd asp.cmd aspect.cmd assemblyscript-build.cmd mkdirp.cmd near-vm-as.cmd near-vm.cmd nearley-railroad.cmd nearley-test.cmd nearley-unparse.cmd nearleyc.cmd rimraf.cmd wasm-opt.cmd @as-pect assembly README.md assembly index.ts internal Actual.ts Expectation.ts Expected.ts Reflect.ts ReflectedValueType.ts Test.ts assert.ts call.ts comparison toIncludeComparison.ts toIncludeEqualComparison.ts log.ts noOp.ts node_modules .bin asc.cmd asinit.cmd package.json types as-pect.d.ts as-pect.portable.d.ts env.d.ts cli README.md init as-pect.config.js env.d.ts example.spec.ts init-types.d.ts portable-types.d.ts lib as-pect.cli.amd.d.ts as-pect.cli.amd.js help.d.ts help.js index.d.ts index.js init.d.ts init.js portable.d.ts portable.js run.d.ts run.js test.d.ts test.js types.d.ts types.js util CommandLineArg.d.ts CommandLineArg.js IConfiguration.d.ts IConfiguration.js asciiArt.d.ts asciiArt.js collectReporter.d.ts collectReporter.js getTestEntryFiles.d.ts getTestEntryFiles.js removeFile.d.ts removeFile.js strings.d.ts strings.js writeFile.d.ts writeFile.js worklets ICommand.d.ts ICommand.js compiler.d.ts compiler.js node_modules .bin asc.cmd asinit.cmd package.json core README.md lib as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js reporter CombinationReporter.d.ts CombinationReporter.js EmptyReporter.d.ts EmptyReporter.js IReporter.d.ts IReporter.js SummaryReporter.d.ts SummaryReporter.js VerboseReporter.d.ts VerboseReporter.js test IWarning.d.ts IWarning.js TestContext.d.ts TestContext.js TestNode.d.ts TestNode.js transform assemblyscript.d.ts assemblyscript.js createAddReflectedValueKeyValuePairsMember.d.ts createAddReflectedValueKeyValuePairsMember.js 
createGenericTypeParameter.d.ts createGenericTypeParameter.js createStrictEqualsMember.d.ts createStrictEqualsMember.js emptyTransformer.d.ts emptyTransformer.js hash.d.ts hash.js index.d.ts index.js util IAspectExports.d.ts IAspectExports.js IWriteable.d.ts IWriteable.js ReflectedValue.d.ts ReflectedValue.js TestNodeType.d.ts TestNodeType.js rTrace.d.ts rTrace.js stringifyReflectedValue.d.ts stringifyReflectedValue.js timeDifference.d.ts timeDifference.js wasmTools.d.ts wasmTools.js node_modules .bin asc.cmd asinit.cmd package.json csv-reporter index.ts lib as-pect.csv-reporter.amd.d.ts as-pect.csv-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json json-reporter index.ts lib as-pect.json-reporter.amd.d.ts as-pect.json-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json snapshots __tests__ snapshot.spec.ts jest.config.js lib Snapshot.d.ts Snapshot.js SnapshotDiff.d.ts SnapshotDiff.js SnapshotDiffResult.d.ts SnapshotDiffResult.js as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js parser grammar.d.ts grammar.js node_modules .bin nearley-railroad.cmd nearley-test.cmd nearley-unparse.cmd nearleyc.cmd package.json src Snapshot.ts SnapshotDiff.ts SnapshotDiffResult.ts index.ts parser grammar.ts tsconfig.json @assemblyscript loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json ansi-regex index.d.ts index.js package.json readme.md ansi-styles index.d.ts index.js package.json readme.md as-bignum .travis.yml README.md as-pect.config.js assembly __tests__ as-pect.d.ts safe_u128.spec.as.ts u128.spec.as.ts u256.spec.as.ts utils.ts fixed fp128.ts fp256.ts index.ts safe fp128.ts fp256.ts types.ts globals.ts index.ts integer i128.ts i256.ts index.ts safe i128.ts i256.ts i64.ts index.ts u128.ts u256.ts u64.ts u128.ts u256.ts tsconfig.json utils.ts index.js package.json tsconfig.json asbuild README.md dist cli.d.ts cli.js index.d.ts index.js main.d.ts main.js index.js node_modules cliui CHANGELOG.md LICENSE.txt README.md index.js package.json wrap-ansi index.js package.json readme.md y18n CHANGELOG.md README.md index.js package.json yargs-parser CHANGELOG.md LICENSE.txt README.md index.js lib tokenize-arg-string.js package.json yargs CHANGELOG.md README.md build lib apply-extends.d.ts apply-extends.js argsert.d.ts argsert.js command.d.ts command.js common-types.d.ts common-types.js completion-templates.d.ts completion-templates.js completion.d.ts completion.js is-promise.d.ts is-promise.js levenshtein.d.ts levenshtein.js middleware.d.ts middleware.js obj-filter.d.ts obj-filter.js parse-command.d.ts parse-command.js process-argv.d.ts process-argv.js usage.d.ts usage.js validation.d.ts validation.js yargs.d.ts yargs.js yerror.d.ts yerror.js index.js locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json yargs.js package.json assemblyscript-json .eslintrc.js .travis.yml README.md as-pect.config.js assembly JSON.ts __tests__ as-pect.d.ts json-parse.spec.ts roundtrip.spec.ts to-string.spec.ts usage.spec.ts decoder.ts encoder.ts index.ts tsconfig.json util index.ts docs README.md classes decoderstate.md json.arr.md json.bool.md json.float.md json.integer.md json.null.md json.num.md json.obj.md json.str.md json.value.md jsondecoder.md jsonencoder.md jsonhandler.md throwingjsonhandler.md modules json.md index.js package.json assemblyscript README.md cli 
README.md asc.d.ts asc.js asc.json shim README.md fs.js path.js process.js transform.d.ts transform.js util colors.d.ts colors.js find.d.ts find.js mkdirp.d.ts mkdirp.js options.d.ts options.js utf8.d.ts utf8.js dist asc.js assemblyscript.d.ts assemblyscript.js sdk.js index.d.ts index.js lib loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json rtrace README.md bin rtplot.js index.d.ts index.js package.json umd index.d.ts index.js package.json node_modules .bin wasm-opt.cmd package-lock.json package.json std README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings Date.ts Math.ts Reflect.ts asyncify.ts console.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts polyfills.ts process.ts reference.ts regexp.ts rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts set.ts shared feature.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts util casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts vector.ts wasi index.ts portable.json portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json axios CHANGELOG.md README.md UPGRADE_GUIDE.md dist axios.js axios.min.js index.d.ts index.js lib adapters README.md http.js xhr.js axios.js cancel Cancel.js CancelToken.js isCancel.js core Axios.js InterceptorManager.js README.md buildFullPath.js createError.js dispatchRequest.js enhanceError.js mergeConfig.js settle.js transformData.js defaults.js helpers README.md bind.js buildURL.js combineURLs.js cookies.js deprecatedMethod.js isAbsoluteURL.js isURLSameOrigin.js normalizeHeaderName.js parseHeaders.js spread.js utils.js package.json balanced-match LICENSE.md README.md index.js package.json base-x LICENSE.md README.md package.json src index.d.ts index.js binary-install README.md example binary.js package.json run.js index.js node_modules .bin mkdirp.cmd rimraf.cmd package.json src binary.js binaryen README.md index.d.ts package-lock.json package.json wasm.d.ts bn.js CHANGELOG.md README.md lib bn.js package.json brace-expansion README.md index.js package.json bs58 CHANGELOG.md README.md index.js package.json camelcase index.d.ts index.js package.json readme.md chalk index.d.ts package.json readme.md source index.js templates.js util.js chownr README.md chownr.js package.json cliui CHANGELOG.md LICENSE.txt README.md build lib index.js string-utils.js package.json color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name README.md index.js package.json commander CHANGELOG.md Readme.md index.js package.json typings index.d.ts concat-map .travis.yml example map.js index.js package.json test map.js debug .coveralls.yml .travis.yml CHANGELOG.md README.md karma.conf.js node.js package.json src browser.js debug.js index.js node.js decamelize index.js package.json readme.md diff CONTRIBUTING.md README.md dist diff.js lib convert dmp.js xml.js diff array.js base.js character.js css.js json.js line.js sentence.js word.js index.es6.js index.js patch apply.js create.js merge.js parse.js util array.js distance-iterator.js params.js package.json release-notes.md runtime.js discontinuous-range .travis.yml README.md index.js package.json test 
main-test.js emoji-regex LICENSE-MIT.txt README.md es2015 index.js text.js index.d.ts index.js package.json text.js env-paths index.d.ts index.js package.json readme.md escalade dist index.js index.d.ts package.json readme.md sync index.d.ts index.js find-up index.d.ts index.js package.json readme.md follow-redirects README.md http.js https.js index.js package.json fs-minipass README.md index.js package.json fs.realpath README.md index.js old.js package.json get-caller-file LICENSE.md README.md index.d.ts index.js package.json glob README.md changelog.md common.js glob.js package.json sync.js has-flag index.d.ts index.js package.json readme.md hasurl README.md index.js package.json inflight README.md inflight.js package.json inherits README.md inherits.js inherits_browser.js package.json is-fullwidth-code-point index.d.ts index.js package.json readme.md js-base64 LICENSE.md README.md base64.d.ts base64.js package.json locate-path index.d.ts index.js package.json readme.md lodash.clonedeep README.md index.js package.json lodash.sortby README.md index.js package.json long README.md dist long.js index.js package.json src long.js minimatch README.md minimatch.js package.json minimist .travis.yml example parse.js index.js package.json test all_bool.js bool.js dash.js default_bool.js dotted.js kv_short.js long.js num.js parse.js parse_modified.js proto.js short.js stop_early.js unknown.js whitespace.js minipass README.md index.js package.json minizlib README.md constants.js index.js package.json mkdirp bin cmd.js usage.txt index.js package.json moo README.md moo.js package.json ms index.js license.md package.json readme.md near-mock-vm assembly __tests__ main.ts context.ts index.ts outcome.ts vm.ts bin bin.js package.json pkg near_mock_vm.d.ts near_mock_vm.js package.json vm dist cli.d.ts cli.js context.d.ts context.js index.d.ts index.js memory.d.ts memory.js runner.d.ts runner.js utils.d.ts utils.js index.js near-sdk-as as-pect.config.js as_types.d.ts asconfig.json asp.asconfig.json assembly __tests__ as-pect.d.ts assert.spec.ts avl-tree.spec.ts bignum.spec.ts contract.spec.ts contract.ts data.txt empty.ts generic.ts includeBytes.spec.ts main.ts max-heap.spec.ts model.ts near.spec.ts persistent-set.spec.ts promise.spec.ts rollback.spec.ts roundtrip.spec.ts runtime.spec.ts unordered-map.spec.ts util.ts utils.spec.ts as_types.d.ts bindgen.ts index.ts json.lib.ts tsconfig.json vm __tests__ vm.include.ts index.ts compiler.js imports.js node_modules .bin near-vm-as.cmd package.json near-sdk-bindgen README.md assembly index.ts compiler.js dist JSONBuilder.d.ts JSONBuilder.js classExporter.d.ts classExporter.js index.d.ts index.js transformer.d.ts transformer.js typeChecker.d.ts typeChecker.js utils.d.ts utils.js index.js package.json near-sdk-core README.md asconfig.json assembly as_types.d.ts base58.ts base64.ts bignum.ts collections avlTree.ts index.ts maxHeap.ts persistentDeque.ts persistentMap.ts persistentSet.ts persistentUnorderedMap.ts persistentVector.ts util.ts contract.ts env env.ts index.ts runtime_api.ts index.ts logging.ts math.ts promise.ts storage.ts tsconfig.json util.ts docs assets css main.css js main.js search.json classes _sdk_core_assembly_collections_avltree_.avltree.html _sdk_core_assembly_collections_avltree_.avltreenode.html _sdk_core_assembly_collections_avltree_.childparentpair.html _sdk_core_assembly_collections_avltree_.nullable.html _sdk_core_assembly_collections_persistentdeque_.persistentdeque.html _sdk_core_assembly_collections_persistentmap_.persistentmap.html 
_sdk_core_assembly_collections_persistentset_.persistentset.html _sdk_core_assembly_collections_persistentunorderedmap_.persistentunorderedmap.html _sdk_core_assembly_collections_persistentvector_.persistentvector.html _sdk_core_assembly_contract_.context-1.html _sdk_core_assembly_contract_.contractpromise.html _sdk_core_assembly_contract_.contractpromiseresult.html _sdk_core_assembly_math_.rng.html _sdk_core_assembly_promise_.contractpromisebatch.html _sdk_core_assembly_storage_.storage-1.html globals.html index.html modules _sdk_core_assembly_base58_.base58.html _sdk_core_assembly_base58_.html _sdk_core_assembly_base64_.base64.html _sdk_core_assembly_base64_.html _sdk_core_assembly_collections_avltree_.html _sdk_core_assembly_collections_index_.collections.html _sdk_core_assembly_collections_index_.html _sdk_core_assembly_collections_persistentdeque_.html _sdk_core_assembly_collections_persistentmap_.html _sdk_core_assembly_collections_persistentset_.html _sdk_core_assembly_collections_persistentunorderedmap_.html _sdk_core_assembly_collections_persistentvector_.html _sdk_core_assembly_collections_util_.html _sdk_core_assembly_contract_.html _sdk_core_assembly_env_env_.env.html _sdk_core_assembly_env_env_.html _sdk_core_assembly_env_index_.html _sdk_core_assembly_env_runtime_api_.html _sdk_core_assembly_index_.html _sdk_core_assembly_logging_.html _sdk_core_assembly_logging_.logging.html _sdk_core_assembly_math_.html _sdk_core_assembly_math_.math.html _sdk_core_assembly_promise_.html _sdk_core_assembly_storage_.html _sdk_core_assembly_util_.html _sdk_core_assembly_util_.util.html node_modules .bin asb.cmd asbuild.cmd asc.cmd asinit.cmd asp.cmd aspect.cmd assemblyscript-build.cmd near-vm.cmd package.json near-sdk-simulator __tests__ avl-tree-contract.spec.ts cross.spec.ts empty.spec.ts exportAs.spec.ts singleton-no-constructor.spec.ts singleton.spec.ts asconfig.js asconfig.json assembly __tests__ avlTreeContract.ts empty.ts exportAs.ts model.ts sentences.ts singleton-fail.ts singleton-no-constructor.ts singleton.ts words.ts as_types.d.ts tsconfig.json dist bin.d.ts bin.js context.d.ts context.js index.d.ts index.js runtime.d.ts runtime.js types.d.ts types.js utils.d.ts utils.js jest.config.js out assembly __tests__ exportAs.ts model.ts sentences.ts singleton-no-constructor.ts singleton.ts package.json src context.ts index.ts runtime.ts types.ts utils.ts tsconfig.json near-vm getBinary.js install.js package.json run.js uninstall.js nearley LICENSE.txt README.md bin nearley-railroad.js nearley-test.js nearley-unparse.js nearleyc.js lib compile.js generate.js lint.js nearley-language-bootstrapped.js nearley.js stream.js unparse.js package.json once README.md once.js package.json p-limit index.d.ts index.js package.json readme.md p-locate index.d.ts index.js package.json readme.md p-try index.d.ts index.js package.json readme.md path-exists index.d.ts index.js package.json readme.md path-is-absolute index.js package.json readme.md punycode LICENSE-MIT.txt README.md package.json punycode.es6.js punycode.js railroad-diagrams README.md example.html generator.html package.json railroad-diagrams.css railroad-diagrams.js railroad_diagrams.py randexp README.md lib randexp.js package.json require-directory .travis.yml index.js package.json require-main-filename CHANGELOG.md LICENSE.txt README.md index.js package.json ret README.md lib index.js positions.js sets.js types.js util.js package.json rimraf CHANGELOG.md README.md bin.js package.json rimraf.js safe-buffer README.md index.d.ts index.js 
package.json set-blocking CHANGELOG.md LICENSE.txt README.md index.js package.json string-width index.d.ts index.js package.json readme.md strip-ansi index.d.ts index.js package.json readme.md supports-color browser.js index.js package.json readme.md tar README.md index.js lib create.js extract.js get-write-flag.js header.js high-level-opt.js large-numbers.js list.js mkdir.js mode-fix.js pack.js parse.js path-reservations.js pax.js read-entry.js replace.js types.js unpack.js update.js warn-mixin.js winchars.js write-entry.js node_modules .bin mkdirp.cmd package.json tr46 LICENSE.md README.md index.js lib mappingTable.json regexes.js package.json ts-mixer CHANGELOG.md README.md dist decorator.d.ts decorator.js index.d.ts index.js mixin-tracking.d.ts mixin-tracking.js mixins.d.ts mixins.js proxy.d.ts proxy.js settings.d.ts settings.js types.d.ts types.js util.d.ts util.js package.json universal-url README.md browser.js index.js package.json visitor-as .github workflows test.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js URLSearchParams-impl.js URLSearchParams.js infra.js public-api.js url-state-machine.js urlencoded.js utils.js package.json which-module CHANGELOG.md README.md index.js package.json wrap-ansi index.js package.json readme.md wrappy README.md package.json wrappy.js y18n CHANGELOG.md README.md build lib cjs.js index.js platform-shims node.js package.json yallist README.md iterator.js package.json yallist.js yargs-parser CHANGELOG.md LICENSE.txt README.md browser.js build lib index.js string-utils.js tokenize-arg-string.js yargs-parser-types.js yargs-parser.js package.json yargs CHANGELOG.md README.md build lib argsert.js command.js completion-templates.js completion.js middleware.js parse-command.js typings common-types.js yargs-parser-types.js usage.js utils apply-extends.js is-promise.js levenshtein.js obj-filter.js process-argv.js set-blocking.js which-module.js validation.js yargs-factory.js yerror.js helpers index.js package.json locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json package.json scripts 1.dev-deploy.sh 2.use-contract.sh 3.cleanup.sh README.md src as_types.d.ts simple __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts singleton __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts | README.md
# set-blocking [![Build Status](https://travis-ci.org/yargs/set-blocking.svg)](https://travis-ci.org/yargs/set-blocking) [![NPM version](https://img.shields.io/npm/v/set-blocking.svg)](https://www.npmjs.com/package/set-blocking) [![Coverage Status](https://coveralls.io/repos/yargs/set-blocking/badge.svg?branch=)](https://coveralls.io/r/yargs/set-blocking?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) set blocking `stdio` and `stderr` ensuring that terminal output does not truncate. ```js const setBlocking = require('set-blocking') setBlocking(true) console.log(someLargeStringToOutput) ``` ## Historical Context/Word of Warning This was created as a shim to address the bug discussed in [node #6456](https://github.com/nodejs/node/issues/6456). This bug crops up on newer versions of Node.js (`0.12+`), truncating terminal output. You should be mindful of the side-effects caused by using `set-blocking`: * if your module sets blocking to `true`, it will effect other modules consuming your library. In [yargs](https://github.com/yargs/yargs/blob/master/yargs.js#L653) we only call `setBlocking(true)` once we already know we are about to call `process.exit(code)`. * this patch will not apply to subprocesses spawned with `isTTY = true`, this is the [default `spawn()` behavior](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options). ## License ISC # axios // helpers The modules found in `helpers/` should be generic modules that are _not_ specific to the domain logic of axios. These modules could theoretically be published to npm on their own and consumed by other modules or apps. Some examples of generic modules are things like: - Browser polyfills - Managing cookies - Parsing HTTP headers # y18n [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js const __ = require('y18n')().__; console.log(__('my awesome string %s', 'foo')); ``` output: `my awesome string foo` _using tagged template literals_ ```js const __ = require('y18n')().__; const str = 'foo'; console.log(__`my awesome string ${str}`); ``` output: `my awesome string foo` _pluralization support:_ ```js const __n = require('y18n')().__n; console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')); ``` output: `2 fishes foo` ## Deno Example As of `v5` `y18n` supports [Deno](https://github.com/denoland/deno): ```typescript import y18n from "https://deno.land/x/y18n/deno.ts"; const __ = y18n({ locale: 'pirate', directory: './test/locales' }).__ console.info(__`Hi, ${'Ben'} ${'Coe'}!`) ``` You will need to run with `--allow-read` to load alternative locales. ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. 
* `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## License ISC [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard # cliui [![Build Status](https://travis-ci.org/yargs/cliui.svg)](https://travis-ci.org/yargs/cliui) [![Coverage Status](https://coveralls.io/repos/yargs/cliui/badge.svg?branch=)](https://coveralls.io/r/yargs/cliui?branch=) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) easily create complex multi-column command-line-interfaces. ## Example ```js var ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 2, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. 
### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. [![build status](https://secure.travis-ci.org/dankogai/js-base64.png)](http://travis-ci.org/dankogai/js-base64) # base64.js Yet another [Base64] transcoder. [Base64]: http://en.wikipedia.org/wiki/Base64 ## HEADS UP In version 3.0 `js-base64` switch to ES2015 module so it is no longer compatible with legacy browsers like IE (see below). And since version 3.3 it is written in TypeScript. Now `base64.mjs` is compiled from `base64.ts` then `base64.js` is generated from `base64.mjs`. ## Install ```shell $ npm install --save js-base64 ``` ## Usage ### In Browser Locally… ```html <script src="base64.js"></script> ``` … or Directly from CDN. In which case you don't even need to install. ```html <script src="https://cdn.jsdelivr.net/npm/[email protected]/base64.min.js"></script> ``` This good old way loads `Base64` in the global context (`window`). Though `Base64.noConflict()` is made available, you should consider using ES6 Module to avoid tainting `window`. ### As an ES6 Module locally… ```javascript import { Base64 } from 'js-base64'; ``` ```javascript // or if you prefer no Base64 namespace import { encode, decode } from 'js-base64'; ``` or even remotely. ```html <script type="module"> // note jsdelivr.net does not automatically minify .mjs import { Base64 } from 'https://cdn.jsdelivr.net/npm/[email protected]/base64.mjs'; </script> ``` ```html <script type="module"> // or if you prefer no Base64 namespace import { encode, decode } from 'https://cdn.jsdelivr.net/npm/[email protected]/base64.mjs'; </script> ``` ### node.js (commonjs) ```javascript const {Base64} = require('js-base64'); ``` Unlike the case above, the global context is no longer modified. You can also use [esm] to `import` instead of `require`. 
[esm]: https://github.com/standard-things/esm ```javascript require=require('esm')(module); import {Base64} from 'js-base64'; ``` ## SYNOPSIS ```javascript let latin = 'dankogai'; let utf8 = '小飼弾' let u8s = new Uint8Array([100,97,110,107,111,103,97,105]); Base64.encode(latin); // ZGFua29nYWk= Base64.btoa(latin); // ZGFua29nYWk= Base64.btoa(utf8); // raises exception Base64.fromUint8Array(u8s); // ZGFua29nYWk= Base64.fromUint8Array(u8s, true); // ZGFua29nYW which is URI safe Base64.encode(utf8); // 5bCP6aO85by+ Base64.encode(utf8, true) // 5bCP6aO85by- Base64.encodeURI(utf8); // 5bCP6aO85by- ``` ```javascript Base64.decode( 'ZGFua29nYWk=');// dankogai Base64.atob( 'ZGFua29nYWk=');// dankogai Base64.atob( '5bCP6aO85by+');// '小飼弾' which is nonsense Base64.toUint8Array('ZGFua29nYWk=');// u8s above Base64.decode( '5bCP6aO85by+');// 小飼弾 // note .decodeURI() is unnecessary since it accepts both flavors Base64.decode( '5bCP6aO85by-');// 小飼弾 ``` ```javascript Base64.isValid(0); // false: 0 is not string Base64.isValid(''); // true: a valid Base64-encoded empty byte Base64.isValid('ZA=='); // true: a valid Base64-encoded 'd' Base64.isValid('Z A='); // true: whitespaces are okay Base64.isValid('ZA'); // true: padding ='s can be omitted Base64.isValid('++'); // true: can be non URL-safe Base64.isValid('--'); // true: or URL-safe Base64.isValid('+-'); // false: can't mix both ``` ### Built-in Extensions By default `Base64` leaves built-in prototypes untouched. But you can extend them as below. ```javascript // you have to explicitly extend String.prototype Base64.extendString(); // once extended, you can do the following 'dankogai'.toBase64(); // ZGFua29nYWk= '小飼弾'.toBase64(); // 5bCP6aO85by+ '小飼弾'.toBase64(true); // 5bCP6aO85by- '小飼弾'.toBase64URI(); // 5bCP6aO85by- ab alias of .toBase64(true) '小飼弾'.toBase64URL(); // 5bCP6aO85by- an alias of .toBase64URI() 'ZGFua29nYWk='.fromBase64(); // dankogai '5bCP6aO85by+'.fromBase64(); // 小飼弾 '5bCP6aO85by-'.fromBase64(); // 小飼弾 '5bCP6aO85by-'.toUint8Array();// u8s above ``` ```javascript // you have to explicitly extend String.prototype Base64.extendString(); // once extended, you can do the following u8s.toBase64(); // 'ZGFua29nYWk=' u8s.toBase64URI(); // 'ZGFua29nYWk' u8s.toBase64URL(); // 'ZGFua29nYWk' an alias of .toBase64URI() ``` ```javascript // extend all at once Base64.extendBuiltins() ``` ## `.decode()` vs `.atob` (and `.encode()` vs `btoa()`) Suppose you have: ``` var pngBase64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII="; ``` Which is a Base64-encoded 1x1 transparent PNG, **DO NOT USE** `Base64.decode(pngBase64)`.  Use `Base64.atob(pngBase64)` instead.  `Base64.decode()` decodes to UTF-8 string while `Base64.atob()` decodes to bytes, which is compatible to browser built-in `atob()` (Which is absent in node.js).  The same rule applies to the opposite direction. Or even better, `Base64.toUint8Array(pngBase64)`. ### If you really, really need an ES5 version You can transpiles to an ES5 that runs on IE11. Do the following in your shell. 
```shell $ make base64.es5.js ``` assemblyscript-json # assemblyscript-json ## Table of contents ### Namespaces - [JSON](modules/json.md) ### Classes - [DecoderState](classes/decoderstate.md) - [JSONDecoder](classes/jsondecoder.md) - [JSONEncoder](classes/jsonencoder.md) - [JSONHandler](classes/jsonhandler.md) - [ThrowingJSONHandler](classes/throwingjsonhandler.md) long.js ======= A Long class for representing a 64 bit two's-complement integer value derived from the [Closure Library](https://github.com/google/closure-library) for stand-alone use and extended with unsigned support. [![Build Status](https://travis-ci.org/dcodeIO/long.js.svg)](https://travis-ci.org/dcodeIO/long.js) Background ---------- As of [ECMA-262 5th Edition](http://ecma262-5.com/ELS5_HTML.htm#Section_8.5), "all the positive and negative integers whose magnitude is no greater than 2<sup>53</sup> are representable in the Number type", which is "representing the doubleprecision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic". The [maximum safe integer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) in JavaScript is 2<sup>53</sup>-1. Example: 2<sup>64</sup>-1 is 1844674407370955**1615** but in JavaScript it evaluates to 1844674407370955**2000**. Furthermore, bitwise operators in JavaScript "deal only with integers in the range −2<sup>31</sup> through 2<sup>31</sup>−1, inclusive, or in the range 0 through 2<sup>32</sup>−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2<sup>32</sup> integer values." In some use cases, however, it is required to be able to reliably work with and perform bitwise operations on the full 64 bits. This is where long.js comes into play. Usage ----- The class is compatible with CommonJS and AMD loaders and is exposed globally as `Long` if neither is available. ```javascript var Long = require("long"); var longVal = new Long(0xFFFFFFFF, 0x7FFFFFFF); console.log(longVal.toString()); ... ``` API --- ### Constructor * new **Long**(low: `number`, high: `number`, unsigned?: `boolean`)<br /> Constructs a 64 bit two's-complement integer, given its low and high 32 bit values as *signed* integers. See the from* functions below for more convenient ways of constructing Longs. ### Fields * Long#**low**: `number`<br /> The low 32 bits as a signed value. * Long#**high**: `number`<br /> The high 32 bits as a signed value. * Long#**unsigned**: `boolean`<br /> Whether unsigned or not. ### Constants * Long.**ZERO**: `Long`<br /> Signed zero. * Long.**ONE**: `Long`<br /> Signed one. * Long.**NEG_ONE**: `Long`<br /> Signed negative one. * Long.**UZERO**: `Long`<br /> Unsigned zero. * Long.**UONE**: `Long`<br /> Unsigned one. * Long.**MAX_VALUE**: `Long`<br /> Maximum signed value. * Long.**MIN_VALUE**: `Long`<br /> Minimum signed value. * Long.**MAX_UNSIGNED_VALUE**: `Long`<br /> Maximum unsigned value. ### Utility * Long.**isLong**(obj: `*`): `boolean`<br /> Tests if the specified object is a Long. * Long.**fromBits**(lowBits: `number`, highBits: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the 64 bit integer that comes by concatenating the given low and high bits. Each is assumed to use 32 bits. * Long.**fromBytes**(bytes: `number[]`, unsigned?: `boolean`, le?: `boolean`): `Long`<br /> Creates a Long from its byte representation. 
* Long.**fromBytesLE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its little endian byte representation. * Long.**fromBytesBE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its big endian byte representation. * Long.**fromInt**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given 32 bit integer value. * Long.**fromNumber**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given value, provided that it is a finite number. Otherwise, zero is returned. * Long.**fromString**(str: `string`, unsigned?: `boolean`, radix?: `number`)<br /> Long.**fromString**(str: `string`, radix: `number`)<br /> Returns a Long representation of the given string, written using the specified radix. * Long.**fromValue**(val: `*`, unsigned?: `boolean`): `Long`<br /> Converts the specified value to a Long using the appropriate from* function for its type. ### Methods * Long#**add**(addend: `Long | number | string`): `Long`<br /> Returns the sum of this and the specified Long. * Long#**and**(other: `Long | number | string`): `Long`<br /> Returns the bitwise AND of this Long and the specified. * Long#**compare**/**comp**(other: `Long | number | string`): `number`<br /> Compares this Long's value with the specified's. Returns `0` if they are the same, `1` if the this is greater and `-1` if the given one is greater. * Long#**divide**/**div**(divisor: `Long | number | string`): `Long`<br /> Returns this Long divided by the specified. * Long#**equals**/**eq**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value equals the specified's. * Long#**getHighBits**(): `number`<br /> Gets the high 32 bits as a signed integer. * Long#**getHighBitsUnsigned**(): `number`<br /> Gets the high 32 bits as an unsigned integer. * Long#**getLowBits**(): `number`<br /> Gets the low 32 bits as a signed integer. * Long#**getLowBitsUnsigned**(): `number`<br /> Gets the low 32 bits as an unsigned integer. * Long#**getNumBitsAbs**(): `number`<br /> Gets the number of bits needed to represent the absolute value of this Long. * Long#**greaterThan**/**gt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than the specified's. * Long#**greaterThanOrEqual**/**gte**/**ge**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than or equal the specified's. * Long#**isEven**(): `boolean`<br /> Tests if this Long's value is even. * Long#**isNegative**(): `boolean`<br /> Tests if this Long's value is negative. * Long#**isOdd**(): `boolean`<br /> Tests if this Long's value is odd. * Long#**isPositive**(): `boolean`<br /> Tests if this Long's value is positive. * Long#**isZero**/**eqz**(): `boolean`<br /> Tests if this Long's value equals zero. * Long#**lessThan**/**lt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than the specified's. * Long#**lessThanOrEqual**/**lte**/**le**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than or equal the specified's. * Long#**modulo**/**mod**/**rem**(divisor: `Long | number | string`): `Long`<br /> Returns this Long modulo the specified. * Long#**multiply**/**mul**(multiplier: `Long | number | string`): `Long`<br /> Returns the product of this and the specified Long. * Long#**negate**/**neg**(): `Long`<br /> Negates this Long's value. * Long#**not**(): `Long`<br /> Returns the bitwise NOT of this Long. 
* Long#**notEquals**/**neq**/**ne**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value differs from the specified's. * Long#**or**(other: `Long | number | string`): `Long`<br /> Returns the bitwise OR of this Long and the specified. * Long#**shiftLeft**/**shl**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits shifted to the left by the given amount. * Long#**shiftRight**/**shr**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits arithmetically shifted to the right by the given amount. * Long#**shiftRightUnsigned**/**shru**/**shr_u**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits logically shifted to the right by the given amount. * Long#**subtract**/**sub**(subtrahend: `Long | number | string`): `Long`<br /> Returns the difference of this and the specified Long. * Long#**toBytes**(le?: `boolean`): `number[]`<br /> Converts this Long to its byte representation. * Long#**toBytesLE**(): `number[]`<br /> Converts this Long to its little endian byte representation. * Long#**toBytesBE**(): `number[]`<br /> Converts this Long to its big endian byte representation. * Long#**toInt**(): `number`<br /> Converts the Long to a 32 bit integer, assuming it is a 32 bit integer. * Long#**toNumber**(): `number`<br /> Converts the Long to a the nearest floating-point representation of this value (double, 53 bit mantissa). * Long#**toSigned**(): `Long`<br /> Converts this Long to signed. * Long#**toString**(radix?: `number`): `string`<br /> Converts the Long to a string written in the specified radix. * Long#**toUnsigned**(): `Long`<br /> Converts this Long to unsigned. * Long#**xor**(other: `Long | number | string`): `Long`<br /> Returns the bitwise XOR of this Long and the given one. Building -------- To build an UMD bundle to `dist/long.js`, run: ``` $> npm install $> npm run build ``` Running the [tests](./tests): ``` $> npm test ``` # color-convert [![Build Status](https://travis-ci.org/Qix-/color-convert.svg?branch=master)](https://travis-ci.org/Qix-/color-convert) Color-convert is a color conversion library for JavaScript and node. It converts all ways between `rgb`, `hsl`, `hsv`, `hwb`, `cmyk`, `ansi`, `ansi16`, `hex` strings, and CSS `keyword`s (will round to closest): ```js var convert = require('color-convert'); convert.rgb.hsl(140, 200, 100); // [96, 48, 59] convert.keyword.rgb('blue'); // [0, 0, 255] var rgbChannels = convert.rgb.channels; // 3 var cmykChannels = convert.cmyk.channels; // 4 var ansiChannels = convert.ansi16.channels; // 1 ``` # Install ```console $ npm install color-convert ``` # API Simply get the property of the _from_ and _to_ conversion that you're looking for. All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on `.raw` to the function. All 'from' functions have a hidden property called `.channels` that indicates the number of channels the function expects (not including alpha). ```js var convert = require('color-convert'); // Hex to LAB convert.hex.lab('DEADBF'); // [ 76, 21, -2 ] convert.hex.lab.raw('DEADBF'); // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ] // RGB to CMYK convert.rgb.cmyk(167, 255, 4); // [ 35, 0, 98, 0 ] convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ] ``` ### Arrays All functions that accept multiple arguments also support passing an array. 
Note that this does **not** apply to functions that convert from a color that only requires one value (e.g. `keyword`, `ansi256`, `hex`, etc.).

```js
var convert = require('color-convert');

convert.rgb.hex(123, 45, 67);   // '7B2D43'
convert.rgb.hex([123, 45, 67]); // '7B2D43'
```

## Routing

Conversions that don't have an _explicitly_ defined conversion (in [conversions.js](conversions.js)), but can be converted by means of sub-conversions (e.g. XYZ -> **RGB** -> CMYK), are automatically routed together. This allows just about any color model supported by `color-convert` to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> **LAB** -> **XYZ** -> **RGB** -> Hex).

Keep in mind that extensive conversions _may_ result in a loss of precision, and exist only to be complete. For a list of "direct" (single-step) conversions, see [conversions.js](conversions.js).

# Contribute

If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request.

# License

Copyright &copy; 2011-2016, Heather Arthur and Josh Junon. Licensed under the [MIT License](LICENSE).

# lodash.clonedeep v4.5.0

The [lodash](https://lodash.com/) method `_.cloneDeep` exported as a [Node.js](https://nodejs.org/) module.

## Installation

Using npm:

```bash
$ {sudo -H} npm i -g npm
$ npm i --save lodash.clonedeep
```

In Node.js:

```js
var cloneDeep = require('lodash.clonedeep');
```

See the [documentation](https://lodash.com/docs#cloneDeep) or [package source](https://github.com/lodash/lodash/blob/4.5.0-npm-packages/lodash.clonedeep) for more details.

discontinuous-range
===================

```
DiscontinuousRange(1, 10).subtract(4, 6); // [ 1-3, 7-10 ]
```

[![Build Status](https://travis-ci.org/dtudury/discontinuous-range.png)](https://travis-ci.org/dtudury/discontinuous-range)

This is a pretty simple module, but it exists to service another project, so its documentation is fairly sparse. Reading the tests to see how it works may help. Otherwise, here's an example that I think pretty much sums it up:

### Example

```
var all_numbers = new DiscontinuousRange(1, 100);
var bad_numbers = DiscontinuousRange(13).add(8).add(60,80);
var good_numbers = all_numbers.clone().subtract(bad_numbers);
console.log(good_numbers.toString()); //[ 1-7, 9-12, 14-59, 81-100 ]
var random_good_number = good_numbers.index(Math.floor(Math.random() * good_numbers.length));
```

# chownr

Like `chown -R`. Takes the same arguments as `fs.chown()`

# brace-expansion

[Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html), as known from sh/bash, in JavaScript.
[![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion) [![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion) [![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/) [![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion) ## Example ```js var expand = require('brace-expansion'); expand('file-{a,b,c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('-v{,,}') // => ['-v', '-v', '-v'] expand('file{0..2}.jpg') // => ['file0.jpg', 'file1.jpg', 'file2.jpg'] expand('file-{a..c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('file{2..0}.jpg') // => ['file2.jpg', 'file1.jpg', 'file0.jpg'] expand('file{0..4..2}.jpg') // => ['file0.jpg', 'file2.jpg', 'file4.jpg'] expand('file-{a..e..2}.jpg') // => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg'] expand('file{00..10..5}.jpg') // => ['file00.jpg', 'file05.jpg', 'file10.jpg'] expand('{{A..C},{a..c}}') // => ['A', 'B', 'C', 'a', 'b', 'c'] expand('ppp{,config,oe{,conf}}') // => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf'] ``` ## API ```js var expand = require('brace-expansion'); ``` ### var expanded = expand(str) Return an array of all possible and valid expansions of `str`. If none are found, `[str]` is returned. Valid expansions are: ```js /^(.*,)+(.+)?$/ // {a,b,...} ``` A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` A numeric sequence from `x` to `y` inclusive, with optional increment. If `x` or `y` start with a leading `0`, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` An alphabetic sequence from `x` to `y` inclusive, with optional increment. `x` and `y` must be exactly one character, and if given, `incr` must be a number. For compatibility reasons, the string `${` is not eligible for brace expansion. ## Installation With [npm](https://npmjs.org) do: ```bash npm install brace-expansion ``` ## Contributors - [Julian Gruber](https://github.com/juliangruber) - [Isaac Z. Schlueter](https://github.com/isaacs) ## Sponsors This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)! Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)! ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

# AssemblyScript Rtrace

A tiny utility to sanitize the AssemblyScript runtime. Records allocations and frees performed by the runtime and emits an error if something is off. Also checks for leaks.

Instructions
------------

Compile your module that uses the full or half runtime with `--use ASC_RTRACE=1 --explicitStart` and include an instance of this module as the import named `rtrace`.

```js
const rtrace = new Rtrace({
  onerror(err, info) {
    // handle error
  },
  oninfo(msg) {
    // print message, optional
  },
  getMemory() {
    // obtain the module's memory,
    // e.g. with --explicitStart:
    return instance.exports.memory;
  }
});

const { module, instance } = await WebAssembly.instantiate(...,
  rtrace.install({ ...imports... })
);
instance.exports._start();
...
if (rtrace.active) {
  let leakCount = rtrace.check();
  if (leakCount) {
    // handle error
  }
}
```

Note that references in globals which are not cleared before collection is performed appear as leaks, including their inner members. A TypedArray would leak itself and its backing ArrayBuffer in this case for example. This is perfectly normal and clearing all globals avoids this.

# ASBuild

A simple build tool for [AssemblyScript](https://assemblyscript.org) projects, similar to `cargo`, etc.

## Usage

```
asb [entry file] [options] -- [args passed to asc]
```

### Background

AssemblyScript greater than v0.14.4 provides an `asconfig.json` configuration file that can be used to describe the options for building a project. ASBuild uses this and some defaults to create an easier CLI interface.

### Defaults

#### Project structure

```
project/
  package.json
  asconfig.json
  assembly/
    index.ts
  build/
    release/
      project.wasm
    debug/
      project.wasm
```

- If no entry file is passed and no `entry` field is in `asconfig.json`, `project/assembly/index.ts` is assumed.
- `asconfig.json` allows for options for different compile targets, e.g. release, debug, etc. `asc` defaults to the release target.
- The default build directory is `./build`, and artifacts are placed at `./build/<target>/packageName.wasm`.

### Workspaces

If a `workspaces` field is added to a top level `asconfig.json` file, then each path in the array is built and placed into the top level `outDir`.

For example, `asconfig.json`:

```json
{
  "workspaces": ["a", "b"]
}
```

Running `asb` in the directory below will use the top level build directory to place all the binaries.

```
project/
  package.json
  asconfig.json
  a/
    asconfig.json
    assembly/
      index.ts
  b/
    asconfig.json
    assembly/
      index.ts
  build/
    release/
      a.wasm
      b.wasm
    debug/
      a.wasm
      b.wasm
```

To see an example in action, check out the [test workspace](./test).

Railroad-diagram Generator
==========================

This is a small js library for generating railroad diagrams (like what [JSON.org](http://json.org) uses) using SVG. Railroad diagrams are a way of visually representing a grammar in a form that is more readable than using regular expressions or BNF. I think (though I haven't given it a lot of thought yet) that if it's easy to write a context-free grammar for the language, the corresponding railroad diagram will be easy as well.
There are several railroad-diagram generators out there, but none of them had the visual appeal I wanted. [Here's an example of how they look!](http://www.xanthir.com/etc/railroad-diagrams/example.html) And [here's an online generator for you to play with and get SVG code from!](http://www.xanthir.com/etc/railroad-diagrams/generator.html) The library now exists in a Python port as well! See the information further down. Details ------- To use the library, just include the js and css files, and then call the Diagram() function. Its arguments are the components of the diagram (Diagram is a special form of Sequence). An alternative to Diagram() is ComplexDiagram() which is used to describe a complex type diagram. Components are either leaves or containers. The leaves: * Terminal(text) or a bare string - represents literal text * NonTerminal(text) - represents an instruction or another production * Comment(text) - a comment * Skip() - an empty line The containers: * Sequence(children) - like simple concatenation in a regex * Choice(index, children) - like | in a regex. The index argument specifies which child is the "normal" choice and should go in the middle * Optional(child, skip) - like ? in a regex. A shorthand for `Choice(1, [Skip(), child])`. If the optional `skip` parameter has the value `"skip"`, it instead puts the Skip() in the straight-line path, for when the "normal" behavior is to omit the item. * OneOrMore(child, repeat) - like + in a regex. The 'repeat' argument is optional, and specifies something that must go between the repetitions. * ZeroOrMore(child, repeat, skip) - like * in a regex. A shorthand for `Optional(OneOrMore(child, repeat))`. The optional `skip` parameter is identical to Optional(). For convenience, each component can be called with or without `new`. If called without `new`, the container components become n-ary; that is, you can say either `new Sequence([A, B])` or just `Sequence(A,B)`. After constructing a Diagram, call `.format(...padding)` on it, specifying 0-4 padding values (just like CSS) for some additional "breathing space" around the diagram (the paddings default to 20px). The result can either be `.toString()`'d for the markup, or `.toSVG()`'d for an `<svg>` element, which can then be immediately inserted to the document. As a convenience, Diagram also has an `.addTo(element)` method, which immediately converts it to SVG and appends it to the referenced element with default paddings. `element` defaults to `document.body`. Options ------- There are a few options you can tweak, at the bottom of the file. Just tweak either until the diagram looks like what you want. You can also change the CSS file - feel free to tweak to your heart's content. Note, though, that if you change the text sizes in the CSS, you'll have to go adjust the metrics for the leaf nodes as well. * VERTICAL_SEPARATION - sets the minimum amount of vertical separation between two items. Note that the stroke width isn't counted when computing the separation; this shouldn't be relevant unless you have a very small separation or very large stroke width. * ARC_RADIUS - the radius of the arcs used in the branching containers like Choice. This has a relatively large effect on the size of non-trivial diagrams. Both tight and loose values look good, depending on what you're going for. * DIAGRAM_CLASS - the class set on the root `<svg>` element of each diagram, for use in the CSS stylesheet. * STROKE_ODD_PIXEL_LENGTH - the default stylesheet uses odd pixel lengths for 'stroke'. 
Due to rasterization artifacts, they look best when the item has been translated half a pixel in both directions. If you change the styling to use a stroke with even pixel lengths, you'll want to set this variable to `false`. * INTERNAL_ALIGNMENT - when some branches of a container are narrower than others, this determines how they're aligned in the extra space. Defaults to "center", but can be set to "left" or "right". Caveats ------- At this early stage, the generator is feature-complete and works as intended, but still has several TODOs: * The font-sizes are hard-coded right now, and the font handling in general is very dumb - I'm just guessing at some metrics that are probably "good enough" rather than measuring things properly. Python Port ----------- In addition to the canonical JS version, the library now exists as a Python library as well. Using it is basically identical. The config variables are globals in the file, and so may be adjusted either manually or via tweaking from inside your program. The main difference from the JS port is how you extract the string from the Diagram. You'll find a `writeSvg(writerFunc)` method on `Diagram`, which takes a callback of one argument and passes it the string form of the diagram. For example, it can be used like `Diagram(...).writeSvg(sys.stdout.write)` to write to stdout. **Note**: the callback will be called multiple times as it builds up the string, not just once with the whole thing. If you need it all at once, consider something like a `StringIO` as an easy way to collect it into a single string. License ------- This document and all associated files in the github project are licensed under [CC0](http://creativecommons.org/publicdomain/zero/1.0/) ![](http://i.creativecommons.org/p/zero/1.0/80x15.png). This means you can reuse, remix, or otherwise appropriate this project for your own use **without restriction**. (The actual legal meaning can be found at the above link.) Don't ask me for permission to use any part of this project, **just use it**. I would appreciate attribution, but that is not required by the license. # safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. 
You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. 
```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. 
The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. 
It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. 
**When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. 
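To make the migration concrete, here is a minimal sketch (reusing the vulnerable `toHex` example from earlier, under the assumption that callers may pass arbitrary JSON values) of what a hardened version looks like on top of `safe-buffer`:

```js
var Buffer = require('safe-buffer').Buffer

// Hardened version of the earlier `toHex` example: reject non-strings up
// front and use the explicit Buffer.from() API, so a Number can never reach
// an uninitialized-memory code path.
function toHex (str) {
  if (typeof str !== 'string') throw new TypeError('toHex expects a string')
  return Buffer.from(str, 'utf8').toString('hex')
}

console.log(toHex('hey')) // '686579'
```

The explicit type check is arguably redundant once `Buffer.from()` is used, since `Buffer.from()` throws on a `Number` argument anyway, but it keeps the failure mode obvious to callers.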
## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) Compiler frontend for node.js ============================= Usage ----- For an up to date list of available command line options, see: ``` $> asc --help ``` API --- The API accepts the same options as the CLI but also lets you override stdout and stderr and/or provide a callback. Example: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { asc.main([ "myModule.ts", "--binaryFile", "myModule.wasm", "--optimize", "--sourceMap", "--measure" ], { stdout: process.stdout, stderr: process.stderr }, function(err) { if (err) throw err; ... }); }); ``` Available command line options can also be obtained programmatically: ```js const options = require("assemblyscript/cli/asc.json"); ... ``` You can also compile a source string directly, for example in a browser environment: ```js const asc = require("assemblyscript/cli/asc"); asc.ready.then(() => { const { binary, text, stdout, stderr } = asc.compileString(`...`, { optimize: 2 }); }); ... 
``` # axios [![npm version](https://img.shields.io/npm/v/axios.svg?style=flat-square)](https://www.npmjs.org/package/axios) [![build status](https://img.shields.io/travis/axios/axios/master.svg?style=flat-square)](https://travis-ci.org/axios/axios) [![code coverage](https://img.shields.io/coveralls/mzabriskie/axios.svg?style=flat-square)](https://coveralls.io/r/mzabriskie/axios) [![install size](https://packagephobia.now.sh/badge?p=axios)](https://packagephobia.now.sh/result?p=axios) [![npm downloads](https://img.shields.io/npm/dm/axios.svg?style=flat-square)](http://npm-stat.com/charts.html?package=axios) [![gitter chat](https://img.shields.io/gitter/room/mzabriskie/axios.svg?style=flat-square)](https://gitter.im/mzabriskie/axios) [![code helpers](https://www.codetriage.com/axios/axios/badges/users.svg)](https://www.codetriage.com/axios/axios) Promise based HTTP client for the browser and node.js ## Features - Make [XMLHttpRequests](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) from the browser - Make [http](http://nodejs.org/api/http.html) requests from node.js - Supports the [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) API - Intercept request and response - Transform request and response data - Cancel requests - Automatic transforms for JSON data - Client side support for protecting against [XSRF](http://en.wikipedia.org/wiki/Cross-site_request_forgery) ## Browser Support ![Chrome](https://raw.github.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.github.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![Safari](https://raw.github.com/alrra/browser-logos/master/src/safari/safari_48x48.png) | ![Opera](https://raw.github.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Edge](https://raw.github.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![IE](https://raw.github.com/alrra/browser-logos/master/src/archive/internet-explorer_9-11/internet-explorer_9-11_48x48.png) | --- | --- | --- | --- | --- | --- | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | 11 ✔ | [![Browser Matrix](https://saucelabs.com/open_sauce/build_matrix/axios.svg)](https://saucelabs.com/u/axios) ## Installing Using npm: ```bash $ npm install axios ``` Using bower: ```bash $ bower install axios ``` Using yarn: ```bash $ yarn add axios ``` Using cdn: ```html <script src="https://unpkg.com/axios/dist/axios.min.js"></script> ``` ## Example ### note: CommonJS usage In order to gain the TypeScript typings (for intellisense / autocomplete) while using CommonJS imports with `require()` use the following approach: ```js const axios = require('axios').default; // axios.<method> will now provide autocomplete and parameter typings ``` Performing a `GET` request ```js const axios = require('axios'); // Make a request for a user with a given ID axios.get('/user?ID=12345') .then(function (response) { // handle success console.log(response); }) .catch(function (error) { // handle error console.log(error); }) .finally(function () { // always executed }); // Optionally the request above could also be done as axios.get('/user', { params: { ID: 12345 } }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }) .finally(function () { // always executed }); // Want to use async/await? Add the `async` keyword to your outer function/method. 
async function getUser() { try { const response = await axios.get('/user?ID=12345'); console.log(response); } catch (error) { console.error(error); } } ``` > **NOTE:** `async/await` is part of ECMAScript 2017 and is not supported in Internet > Explorer and older browsers, so use with caution. Performing a `POST` request ```js axios.post('/user', { firstName: 'Fred', lastName: 'Flintstone' }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }); ``` Performing multiple concurrent requests ```js function getUserAccount() { return axios.get('/user/12345'); } function getUserPermissions() { return axios.get('/user/12345/permissions'); } axios.all([getUserAccount(), getUserPermissions()]) .then(axios.spread(function (acct, perms) { // Both requests are now complete })); ``` ## axios API Requests can be made by passing the relevant config to `axios`. ##### axios(config) ```js // Send a POST request axios({ method: 'post', url: '/user/12345', data: { firstName: 'Fred', lastName: 'Flintstone' } }); ``` ```js // GET request for remote image axios({ method: 'get', url: 'http://bit.ly/2mTM3nY', responseType: 'stream' }) .then(function (response) { response.data.pipe(fs.createWriteStream('ada_lovelace.jpg')) }); ``` ##### axios(url[, config]) ```js // Send a GET request (default method) axios('/user/12345'); ``` ### Request method aliases For convenience aliases have been provided for all supported request methods. ##### axios.request(config) ##### axios.get(url[, config]) ##### axios.delete(url[, config]) ##### axios.head(url[, config]) ##### axios.options(url[, config]) ##### axios.post(url[, data[, config]]) ##### axios.put(url[, data[, config]]) ##### axios.patch(url[, data[, config]]) ###### NOTE When using the alias methods `url`, `method`, and `data` properties don't need to be specified in config. ### Concurrency Helper functions for dealing with concurrent requests. ##### axios.all(iterable) ##### axios.spread(callback) ### Creating an instance You can create a new instance of axios with a custom config. ##### axios.create([config]) ```js const instance = axios.create({ baseURL: 'https://some-domain.com/api/', timeout: 1000, headers: {'X-Custom-Header': 'foobar'} }); ``` ### Instance methods The available instance methods are listed below. The specified config will be merged with the instance config. ##### axios#request(config) ##### axios#get(url[, config]) ##### axios#delete(url[, config]) ##### axios#head(url[, config]) ##### axios#options(url[, config]) ##### axios#post(url[, data[, config]]) ##### axios#put(url[, data[, config]]) ##### axios#patch(url[, data[, config]]) ##### axios#getUri([config]) ## Request Config These are the available config options for making requests. Only the `url` is required. Requests will default to `GET` if `method` is not specified. ```js { // `url` is the server URL that will be used for the request url: '/user', // `method` is the request method to be used when making the request method: 'get', // default // `baseURL` will be prepended to `url` unless `url` is absolute. // It can be convenient to set `baseURL` for an instance of axios to pass relative URLs // to methods of that instance. 
baseURL: 'https://some-domain.com/api/', // `transformRequest` allows changes to the request data before it is sent to the server // This is only applicable for request methods 'PUT', 'POST', 'PATCH' and 'DELETE' // The last function in the array must return a string or an instance of Buffer, ArrayBuffer, // FormData or Stream // You may modify the headers object. transformRequest: [function (data, headers) { // Do whatever you want to transform the data return data; }], // `transformResponse` allows changes to the response data to be made before // it is passed to then/catch transformResponse: [function (data) { // Do whatever you want to transform the data return data; }], // `headers` are custom headers to be sent headers: {'X-Requested-With': 'XMLHttpRequest'}, // `params` are the URL parameters to be sent with the request // Must be a plain object or a URLSearchParams object params: { ID: 12345 }, // `paramsSerializer` is an optional function in charge of serializing `params` // (e.g. https://www.npmjs.com/package/qs, http://api.jquery.com/jquery.param/) paramsSerializer: function (params) { return Qs.stringify(params, {arrayFormat: 'brackets'}) }, // `data` is the data to be sent as the request body // Only applicable for request methods 'PUT', 'POST', and 'PATCH' // When no `transformRequest` is set, must be of one of the following types: // - string, plain object, ArrayBuffer, ArrayBufferView, URLSearchParams // - Browser only: FormData, File, Blob // - Node only: Stream, Buffer data: { firstName: 'Fred' }, // syntax alternative to send data into the body // method post // only the value is sent, not the key data: 'Country=Brasil&City=Belo Horizonte', // `timeout` specifies the number of milliseconds before the request times out. // If the request takes longer than `timeout`, the request will be aborted. timeout: 1000, // default is `0` (no timeout) // `withCredentials` indicates whether or not cross-site Access-Control requests // should be made using credentials withCredentials: false, // default // `adapter` allows custom handling of requests which makes testing easier. // Return a promise and supply a valid response (see lib/adapters/README.md). adapter: function (config) { /* ... */ }, // `auth` indicates that HTTP Basic auth should be used, and supplies credentials. // This will set an `Authorization` header, overwriting any existing // `Authorization` custom headers you have set using `headers`. // Please note that only HTTP Basic auth is configurable through this parameter. // For Bearer tokens and such, use `Authorization` custom headers instead. 
auth: { username: 'janedoe', password: 's00pers3cret' }, // `responseType` indicates the type of data that the server will respond with // options are: 'arraybuffer', 'document', 'json', 'text', 'stream' // browser only: 'blob' responseType: 'json', // default // `responseEncoding` indicates encoding to use for decoding responses // Note: Ignored for `responseType` of 'stream' or client-side requests responseEncoding: 'utf8', // default // `xsrfCookieName` is the name of the cookie to use as a value for xsrf token xsrfCookieName: 'XSRF-TOKEN', // default // `xsrfHeaderName` is the name of the http header that carries the xsrf token value xsrfHeaderName: 'X-XSRF-TOKEN', // default // `onUploadProgress` allows handling of progress events for uploads onUploadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `onDownloadProgress` allows handling of progress events for downloads onDownloadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `maxContentLength` defines the max size of the http response content in bytes allowed maxContentLength: 2000, // `validateStatus` defines whether to resolve or reject the promise for a given // HTTP response status code. If `validateStatus` returns `true` (or is set to `null` // or `undefined`), the promise will be resolved; otherwise, the promise will be // rejected. validateStatus: function (status) { return status >= 200 && status < 300; // default }, // `maxRedirects` defines the maximum number of redirects to follow in node.js. // If set to 0, no redirects will be followed. maxRedirects: 5, // default // `socketPath` defines a UNIX Socket to be used in node.js. // e.g. '/var/run/docker.sock' to send requests to the docker daemon. // Only either `socketPath` or `proxy` can be specified. // If both are specified, `socketPath` is used. socketPath: null, // default // `httpAgent` and `httpsAgent` define a custom agent to be used when performing http // and https requests, respectively, in node.js. This allows options to be added like // `keepAlive` that are not enabled by default. httpAgent: new http.Agent({ keepAlive: true }), httpsAgent: new https.Agent({ keepAlive: true }), // 'proxy' defines the hostname and port of the proxy server. // You can also define your proxy using the conventional `http_proxy` and // `https_proxy` environment variables. If you are using environment variables // for your proxy configuration, you can also define a `no_proxy` environment // variable as a comma-separated list of domains that should not be proxied. // Use `false` to disable proxies, ignoring environment variables. // `auth` indicates that HTTP Basic auth should be used to connect to the proxy, and // supplies credentials. // This will set an `Proxy-Authorization` header, overwriting any existing // `Proxy-Authorization` custom headers you have set using `headers`. proxy: { host: '127.0.0.1', port: 9000, auth: { username: 'mikeymike', password: 'rapunz3l' } }, // `cancelToken` specifies a cancel token that can be used to cancel the request // (see Cancellation section below for details) cancelToken: new CancelToken(function (cancel) { }) } ``` ## Response Schema The response for a request contains the following information. 
```js { // `data` is the response that was provided by the server data: {}, // `status` is the HTTP status code from the server response status: 200, // `statusText` is the HTTP status message from the server response statusText: 'OK', // `headers` the headers that the server responded with // All header names are lower cased headers: {}, // `config` is the config that was provided to `axios` for the request config: {}, // `request` is the request that generated this response // It is the last ClientRequest instance in node.js (in redirects) // and an XMLHttpRequest instance in the browser request: {} } ``` When using `then`, you will receive the response as follows: ```js axios.get('/user/12345') .then(function (response) { console.log(response.data); console.log(response.status); console.log(response.statusText); console.log(response.headers); console.log(response.config); }); ``` When using `catch`, or passing a [rejection callback](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/then) as second parameter of `then`, the response will be available through the `error` object as explained in the [Handling Errors](#handling-errors) section. ## Config Defaults You can specify config defaults that will be applied to every request. ### Global axios defaults ```js axios.defaults.baseURL = 'https://api.example.com'; axios.defaults.headers.common['Authorization'] = AUTH_TOKEN; axios.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded'; ``` ### Custom instance defaults ```js // Set config defaults when creating the instance const instance = axios.create({ baseURL: 'https://api.example.com' }); // Alter defaults after instance has been created instance.defaults.headers.common['Authorization'] = AUTH_TOKEN; ``` ### Config order of precedence Config will be merged with an order of precedence. The order is library defaults found in [lib/defaults.js](https://github.com/axios/axios/blob/master/lib/defaults.js#L28), then `defaults` property of the instance, and finally `config` argument for the request. The latter will take precedence over the former. Here's an example. ```js // Create an instance using the config defaults provided by the library // At this point the timeout config value is `0` as is the default for the library const instance = axios.create(); // Override timeout default for the library // Now all requests using this instance will wait 2.5 seconds before timing out instance.defaults.timeout = 2500; // Override timeout for this request as it's known to take a long time instance.get('/longRequest', { timeout: 5000 }); ``` ## Interceptors You can intercept requests or responses before they are handled by `then` or `catch`. ```js // Add a request interceptor axios.interceptors.request.use(function (config) { // Do something before request is sent return config; }, function (error) { // Do something with request error return Promise.reject(error); }); // Add a response interceptor axios.interceptors.response.use(function (response) { // Any status code that lie within the range of 2xx cause this function to trigger // Do something with response data return response; }, function (error) { // Any status codes that falls outside the range of 2xx cause this function to trigger // Do something with response error return Promise.reject(error); }); ``` If you need to remove an interceptor later you can. 
```js const myInterceptor = axios.interceptors.request.use(function () {/*...*/}); axios.interceptors.request.eject(myInterceptor); ``` You can add interceptors to a custom instance of axios. ```js const instance = axios.create(); instance.interceptors.request.use(function () {/*...*/}); ``` ## Handling Errors ```js axios.get('/user/12345') .catch(function (error) { if (error.response) { // The request was made and the server responded with a status code // that falls out of the range of 2xx console.log(error.response.data); console.log(error.response.status); console.log(error.response.headers); } else if (error.request) { // The request was made but no response was received // `error.request` is an instance of XMLHttpRequest in the browser and an instance of // http.ClientRequest in node.js console.log(error.request); } else { // Something happened in setting up the request that triggered an Error console.log('Error', error.message); } console.log(error.config); }); ``` Using the `validateStatus` config option, you can define HTTP code(s) that should throw an error. ```js axios.get('/user/12345', { validateStatus: function (status) { return status < 500; // Reject only if the status code is greater than or equal to 500 } }) ``` Using `toJSON` you get an object with more information about the HTTP error. ```js axios.get('/user/12345') .catch(function (error) { console.log(error.toJSON()); }); ``` ## Cancellation You can cancel a request using a *cancel token*. > The axios cancel token API is based on the withdrawn [cancelable promises proposal](https://github.com/tc39/proposal-cancelable-promises). You can create a cancel token using the `CancelToken.source` factory as shown below: ```js const CancelToken = axios.CancelToken; const source = CancelToken.source(); axios.get('/user/12345', { cancelToken: source.token }).catch(function (thrown) { if (axios.isCancel(thrown)) { console.log('Request canceled', thrown.message); } else { // handle error } }); axios.post('/user/12345', { name: 'new name' }, { cancelToken: source.token }) // cancel the request (the message parameter is optional) source.cancel('Operation canceled by the user.'); ``` You can also create a cancel token by passing an executor function to the `CancelToken` constructor: ```js const CancelToken = axios.CancelToken; let cancel; axios.get('/user/12345', { cancelToken: new CancelToken(function executor(c) { // An executor function receives a cancel function as a parameter cancel = c; }) }); // cancel the request cancel(); ``` > Note: you can cancel several requests with the same cancel token. ## Using application/x-www-form-urlencoded format By default, axios serializes JavaScript objects to `JSON`. To send data in the `application/x-www-form-urlencoded` format instead, you can use one of the following options. ### Browser In a browser, you can use the [`URLSearchParams`](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams) API as follows: ```js const params = new URLSearchParams(); params.append('param1', 'value1'); params.append('param2', 'value2'); axios.post('/foo', params); ``` > Note that `URLSearchParams` is not supported by all browsers (see [caniuse.com](http://www.caniuse.com/#feat=urlsearchparams)), but there is a [polyfill](https://github.com/WebReflection/url-search-params) available (make sure to polyfill the global environment). 
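If your form data starts out as a plain object, a small helper can convert it to `URLSearchParams` before posting. The sketch below is only illustrative and assumes `axios` is already imported; the `toSearchParams` helper is not part of axios:

```js
// Sketch: converting a plain object into URLSearchParams before posting.
// `toSearchParams` is an illustrative helper, not part of axios.
function toSearchParams(obj) {
  const params = new URLSearchParams();
  Object.entries(obj).forEach(([key, value]) => params.append(key, value));
  return params;
}

axios.post('/foo', toSearchParams({ param1: 'value1', param2: 'value2' }));
```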
Alternatively, you can encode data using the [`qs`](https://github.com/ljharb/qs) library: ```js const qs = require('qs'); axios.post('/foo', qs.stringify({ 'bar': 123 })); ``` Or in another way (ES6): ```js import qs from 'qs'; const data = { 'bar': 123 }; const options = { method: 'POST', headers: { 'content-type': 'application/x-www-form-urlencoded' }, data: qs.stringify(data), url, }; axios(options); ``` ### Node.js In node.js, you can use the [`querystring`](https://nodejs.org/api/querystring.html) module as follows: ```js const querystring = require('querystring'); axios.post('http://something.com/', querystring.stringify({ foo: 'bar' })); ``` You can also use the [`qs`](https://github.com/ljharb/qs) library. ###### NOTE The `qs` library is preferable if you need to stringify nested objects, as the `querystring` module has known issues with that use case (https://github.com/nodejs/node-v0.x-archive/issues/1665). ## Semver Until axios reaches a `1.0` release, breaking changes will be released with a new minor version. For example, `0.5.1` and `0.5.4` will have the same API, but `0.6.0` will have breaking changes. ## Promises axios depends on a native ES6 Promise implementation to be [supported](http://caniuse.com/promises). If your environment doesn't support ES6 Promises, you can [polyfill](https://github.com/jakearchibald/es6-promise). ## TypeScript axios includes [TypeScript](http://typescriptlang.org) definitions. ```typescript import axios from 'axios'; axios.get('/user?ID=12345'); ``` ## Resources * [Changelog](https://github.com/axios/axios/blob/master/CHANGELOG.md) * [Upgrade Guide](https://github.com/axios/axios/blob/master/UPGRADE_GUIDE.md) * [Ecosystem](https://github.com/axios/axios/blob/master/ECOSYSTEM.md) * [Contributing Guide](https://github.com/axios/axios/blob/master/CONTRIBUTING.md) * [Code of Conduct](https://github.com/axios/axios/blob/master/CODE_OF_CONDUCT.md) ## Credits axios is heavily inspired by the [$http service](https://docs.angularjs.org/api/ng/service/$http) provided in [Angular](https://angularjs.org/). Ultimately axios is an effort to provide a standalone `$http`-like service for use outside of Angular. ## License [MIT](LICENSE) # near-sdk-core This package contains a convenient interface for interacting with NEAR's host runtime. To see the functions that are provided by the host node, see [`env.ts`](./assembly/env/env.ts). # Regular Expression Tokenizer Tokenizes strings that represent regular expressions. [![Build Status](https://secure.travis-ci.org/fent/ret.js.svg)](http://travis-ci.org/fent/ret.js) [![Dependency Status](https://david-dm.org/fent/ret.js.svg)](https://david-dm.org/fent/ret.js) [![codecov](https://codecov.io/gh/fent/ret.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/ret.js) # Usage ```js var ret = require('ret'); var tokens = ret(/foo|bar/.source); ``` `tokens` will contain the following object: ```js { "type": ret.types.ROOT, "options": [ [ { "type": ret.types.CHAR, "value": 102 }, { "type": ret.types.CHAR, "value": 111 }, { "type": ret.types.CHAR, "value": 111 } ], [ { "type": ret.types.CHAR, "value": 98 }, { "type": ret.types.CHAR, "value": 97 }, { "type": ret.types.CHAR, "value": 114 } ] ] } ``` # Token Types `ret.types` is a collection of the various token types exported by ret. ### ROOT Only used in the root of the regexp. This is needed due to the possibility of the root containing a pipe `|` character. In that case, the token will have an `options` key that will be an array of arrays of tokens.
If not, it will contain a `stack` key that is an array of tokens. ```js { "type": ret.types.ROOT, "stack": [token1, token2...], } ``` ```js { "type": ret.types.ROOT, "options": [ [token1, token2...], [othertoken1, othertoken2...] ... ], } ``` ### GROUP Groups contain tokens that are inside of parentheses. If the group begins with `?` followed by another character, it's a special type of group. A `:` tells the group not to be remembered when `exec` is used, `=` means the previous token matches only if it is followed by this group, and `!` means the previous token matches only if it is NOT followed by this group. Like root, it can contain an `options` key instead of `stack` if there is a pipe. ```js { "type": ret.types.GROUP, "remember": true, "followedBy": false, "notFollowedBy": false, "stack": [token1, token2...], } ``` ```js { "type": ret.types.GROUP, "remember": true, "followedBy": false, "notFollowedBy": false, "options": [ [token1, token2...], [othertoken1, othertoken2...] ... ], } ``` ### POSITION `\b`, `\B`, `^`, and `$` specify positions in the regexp. ```js { "type": ret.types.POSITION, "value": "^", } ``` ### SET Contains a key `set` specifying what tokens are allowed and a key `not` specifying if the set should be negated. A set can contain other sets, ranges, and characters. ```js { "type": ret.types.SET, "set": [token1, token2...], "not": false, } ``` ### RANGE Used in set tokens to specify a character range. `from` and `to` are character codes. ```js { "type": ret.types.RANGE, "from": 97, "to": 122, } ``` ### REPETITION ```js { "type": ret.types.REPETITION, "min": 0, "max": Infinity, "value": token, } ``` ### REFERENCE References a group token. `value` is 1-9. ```js { "type": ret.types.REFERENCE, "value": 1, } ``` ### CHAR Represents a single character token. `value` is the character code. This might seem cluttered compared to concatenating characters together, but since repetition tokens only repeat the last token (and not the last clause, as the pipe does), it's simpler to do it this way. ```js { "type": ret.types.CHAR, "value": 123, } ``` ## Errors ret.js will throw errors if given a string with an invalid regular expression. All possible errors are: * Invalid group. When a group with an immediate `?` character is followed by an invalid character. It can only be followed by `!`, `=`, or `:`. Example: `/(?_abc)/` * Nothing to repeat. Thrown when a repetition token is used as the first token in the current clause, i.e. right at the beginning of the regexp or group, or right after a pipe. Example: `/foo|?bar/`, `/{1,3}foo|bar/`, `/foo(+bar)/` * Unmatched ). A group was not opened, but was closed. Example: `/hello)2u/` * Unterminated group. A group was not closed. Example: `/(1(23)4/` * Unterminated character class. A custom character set was not closed.
Example: `/[abc/` # Install npm install ret # Tests Tests are written with [vows](http://vowsjs.org/) ```bash npm test ``` # License MIT [![NPM registry](https://img.shields.io/npm/v/as-bignum.svg?style=for-the-badge)](https://www.npmjs.com/package/as-bignum)[![Build Status](https://img.shields.io/travis/com/MaxGraey/as-bignum/master?style=for-the-badge)](https://travis-ci.com/MaxGraey/as-bignum)[![NPM license](https://img.shields.io/badge/license-Apache%202.0-ba68c8.svg?style=for-the-badge)](LICENSE.md) ## Work in progress --- ### WebAssembly fixed length big numbers written on [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) Provide wide numeric types such as `u128`, `u256`, `i128`, `i256` and fixed points and also its arithmetic operations. Namespace `safe` contain equivalents with overflow/underflow traps. All kind of types pretty useful for economical and cryptographic usages and provide deterministic behavior. ### Install > yarn add as-bignum or > npm i as-bignum ### Usage via AssemblyScript ```ts import { u128 } from "as-bignum"; declare function logF64(value: f64): void; declare function logU128(hi: u64, lo: u64): void; var a = u128.One; var b = u128.from(-32); // same as u128.from<i32>(-32) var c = new u128(0x1, -0xF); var d = u128.from(0x0123456789ABCDEF); // same as u128.from<i64>(0x0123456789ABCDEF) var e = u128.from('0x0123456789ABCDEF01234567'); var f = u128.fromString('11100010101100101', 2); // same as u128.from('0b11100010101100101') var r = d / c + (b << 5) + e; logF64(r.as<f64>()); logU128(r.hi, r.lo); ``` ### Usage via JavaScript/Typescript ```ts TODO ``` ### List of types - [x] [`u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u128.ts) unsigned type (tested) - [ ] [`u256`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u256.ts) unsigned type (very basic) - [ ] `i128` signed type - [ ] `i256` signed type --- - [x] [`safe.u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/safe/u128.ts) unsigned type (tested) - [ ] `safe.u256` unsigned type - [ ] `safe.i128` signed type - [ ] `safe.i256` signed type --- - [ ] [`fp128<Q>`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/fixed/fp128.ts) generic fixed point signed type٭ (very basic for now) - [ ] `fp256<Q>` generic fixed point signed type٭ --- - [ ] `safe.fp128<Q>` generic fixed point signed type٭ - [ ] `safe.fp256<Q>` generic fixed point signed type٭ ٭ _typename_ `Q` _is a type representing count of fractional bits_ <p align="center"> <a href="https://assemblyscript.org" target="_blank" rel="noopener"><img width="100" src="https://avatars1.githubusercontent.com/u/28916798?s=200&v=4" alt="AssemblyScript logo"></a> </p> <p align="center"> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3ATest"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Test/master?label=test&logo=github" alt="Test status" /></a> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3APublish"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Publish/master?label=publish&logo=github" alt="Publish status" /></a> <a href="https://www.npmjs.com/package/assemblyscript"><img src="https://img.shields.io/npm/v/assemblyscript.svg?label=compiler&color=007acc&logo=npm" alt="npm compiler version" /></a> <a href="https://www.npmjs.com/package/@assemblyscript/loader"><img 
src="https://img.shields.io/npm/v/@assemblyscript/loader.svg?label=loader&color=007acc&logo=npm" alt="npm loader version" /></a> <a href="https://discord.gg/assemblyscript"><img src="https://img.shields.io/discord/721472913886281818.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2" alt="Discord online" /></a> </p> <p align="justify"><strong>AssemblyScript</strong> compiles a strict variant of <a href="http://www.typescriptlang.org">TypeScript</a> (basically JavaScript with types) to <a href="http://webassembly.org">WebAssembly</a> using <a href="https://github.com/WebAssembly/binaryen">Binaryen</a>. It generates lean and mean WebAssembly modules while being just an <code>npm install</code> away.</p> <h3 align="center"> <a href="https://assemblyscript.org">About</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/introduction.html">Introduction</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/quick-start.html">Quick&nbsp;start</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/examples.html">Examples</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/development.html">Development&nbsp;instructions</a> </h3> <br> <h2 align="center">Contributors</h2> <p align="center"> <a href="https://assemblyscript.org/#contributors"><img src="https://assemblyscript.org/contributors.svg" alt="Contributor logos" width="720" /></a> </p> <h2 align="center">Thanks to our sponsors!</h2> <p align="justify">Most of the core team members and most contributors do this open source work in their free time. If you use AssemblyScript for a serious task or plan to do so, and you'd like us to invest more time on it, <a href="https://opencollective.com/assemblyscript/donate" target="_blank" rel="noopener">please donate</a> to our <a href="https://opencollective.com/assemblyscript" target="_blank" rel="noopener">OpenCollective</a>. By sponsoring this project, your logo will show up below. Thank you so much for your support!</p> <p align="center"> <a href="https://assemblyscript.org/#sponsors"><img src="https://assemblyscript.org/sponsors.svg" alt="Sponsor logos" width="720" /></a> </p> # universal-url [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Dependency Monitor][greenkeeper-image]][greenkeeper-url] > WHATWG [`URL`](https://developer.mozilla.org/en/docs/Web/API/URL) for Node & Browser. * For Node.js versions `>= 8`, the native implementation will be used. * For Node.js versions `< 8`, a [shim](https://npmjs.com/whatwg-url) will be used. * For web browsers without a native implementation, the same shim will be used. ## Installation [Node.js](http://nodejs.org/) `>= 6` is required. To install, type this at the command line: ```shell npm install universal-url ``` ## Usage ```js const {URL, URLSearchParams} = require('universal-url'); const url = new URL('http://domain/'); const params = new URLSearchParams('?param=value'); ``` Global shim: ```js require('universal-url').shim(); const url = new URL('http://domain/'); const params = new URLSearchParams('?param=value'); ``` ## Browserify/etc The bundled file size of this library can be large for a web browser. If this is a problem, try using [universal-url-lite](https://npmjs.com/universal-url-lite) in your build as an alias for this module. 
[npm-image]: https://img.shields.io/npm/v/universal-url.svg [npm-url]: https://npmjs.org/package/universal-url [travis-image]: https://img.shields.io/travis/stevenvachon/universal-url.svg [travis-url]: https://travis-ci.org/stevenvachon/universal-url [greenkeeper-image]: https://badges.greenkeeper.io/stevenvachon/universal-url.svg [greenkeeper-url]: https://greenkeeper.io/ # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Specification conformance whatwg-url is currently up to date with the URL spec up to commit [7ae1c69](https://github.com/whatwg/url/commit/7ae1c691c96f0d82fafa24c33aa1e8df9ffbf2bc). For `file:` URLs, whose [origin is left unspecified](https://url.spec.whatwg.org/#concept-url-origin), whatwg-url chooses to use a new opaque origin (which serializes to `"null"`). ## API ### The `URL` and `URLSearchParams` classes The main API is provided by the [`URL`](https://url.spec.whatwg.org/#url-class) and [`URLSearchParams`](https://url.spec.whatwg.org/#interface-urlsearchparams) exports, which follows the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use these. ### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They mostly operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. - [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/origin.html#ascii-serialisation-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` - [Percent decode](https://url.spec.whatwg.org/#percent-decode): `percentDecode(buffer)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative 
slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - [`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - [`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by `null`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ `null`. ## Development instructions First, install [Node.js](https://nodejs.org/). Then, fetch the dependencies of whatwg-url, by running from this directory: npm install To run tests: npm test To generate a coverage report: npm run coverage To build and run the live viewer: npm run build npm run build-live-viewer Serve the contents of the `live-viewer` directory using any web server. ## Supporting whatwg-url The jsdom project (including whatwg-url) is a community-driven project maintained by a team of [volunteers](https://github.com/orgs/jsdom/people). You could support us by: - [Getting professional support for whatwg-url](https://tidelift.com/subscription/pkg/npm-whatwg-url?utm_source=npm-whatwg-url&utm_medium=referral&utm_campaign=readme) as part of a Tidelift subscription. Tidelift helps making open source sustainable for us while giving teams assurances for maintenance, licensing, and security. - Contributing directly to the project. # axios // adapters The modules under `adapters/` are modules that handle dispatching a request and settling a returned `Promise` once a response is received. 
## Example ```js var settle = require('./../core/settle'); module.exports = function myAdapter(config) { // At this point: // - config has been merged with defaults // - request transformers have already run // - request interceptors have already run // Make the request using config provided // Upon response settle the Promise return new Promise(function(resolve, reject) { var response = { data: responseData, status: request.status, statusText: request.statusText, headers: responseHeaders, config: config, request: request }; settle(resolve, reject, response); // From here: // - response transformers will run // - response interceptors will run }); } ``` # emoji-regex [![Build status](https://travis-ci.org/mathiasbynens/emoji-regex.svg?branch=master)](https://travis-ci.org/mathiasbynens/emoji-regex) _emoji-regex_ offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard. This repository contains a script that generates this regular expression based on [the data from Unicode v12](https://github.com/mathiasbynens/unicode-12.0.0). Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard. ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install emoji-regex ``` In [Node.js](https://nodejs.org/): ```js const emojiRegex = require('emoji-regex'); // Note: because the regular expression has the global flag set, this module // exports a function that returns the regex rather than exporting the regular // expression itself, to make it impossible to (accidentally) mutate the // original regular expression. const text = ` \u{231A}: ⌚ default emoji presentation character (Emoji_Presentation) \u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji \u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base) \u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier `; const regex = emojiRegex(); let match; while (match = regex.exec(text)) { const emoji = match[0]; console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`); } ``` Console output: ``` Matched sequence ⌚ — code points: 1 Matched sequence ⌚ — code points: 1 Matched sequence ↔️ — code points: 2 Matched sequence ↔️ — code points: 2 Matched sequence 👩 — code points: 1 Matched sequence 👩 — code points: 1 Matched sequence 👩🏿 — code points: 2 Matched sequence 👩🏿 — code points: 2 ``` To match emoji in their textual representation as well (i.e. emoji that are not `Emoji_Presentation` symbols and that aren’t forced to render as emoji by a variation selector), `require` the other regex: ```js const emojiRegex = require('emoji-regex/text.js'); ``` Additionally, in environments which support ES2015 Unicode escapes, you may `require` ES2015-style versions of the regexes: ```js const emojiRegex = require('emoji-regex/es2015/index.js'); const emojiRegexText = require('emoji-regex/es2015/text.js'); ``` ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License _emoji-regex_ is available under the [MIT](https://mths.be/mit) license. 
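To round out the emoji-regex notes above, the sketch below uses the text-presentation variant (`emoji-regex/text.js`) mentioned earlier; the sample string reuses code points from the example above and is otherwise arbitrary:

```js
// Sketch: also matching emoji written in their textual (non-Emoji_Presentation) form.
const emojiRegexText = require('emoji-regex/text.js');

const regex = emojiRegexText();
const text = 'Text-presentation arrow: \u{2194} and default emoji: \u{231A}';

let match;
while ((match = regex.exec(text))) {
  const emoji = match[0];
  console.log(`Matched ${emoji} (code points: ${[...emoji].length})`);
}
```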
# set-blocking [![Build Status](https://travis-ci.org/yargs/set-blocking.svg)](https://travis-ci.org/yargs/set-blocking) [![NPM version](https://img.shields.io/npm/v/set-blocking.svg)](https://www.npmjs.com/package/set-blocking) [![Coverage Status](https://coveralls.io/repos/yargs/set-blocking/badge.svg?branch=)](https://coveralls.io/r/yargs/set-blocking?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) set blocking `stdio` and `stderr` ensuring that terminal output does not truncate. ```js const setBlocking = require('set-blocking') setBlocking(true) console.log(someLargeStringToOutput) ``` ## Historical Context/Word of Warning This was created as a shim to address the bug discussed in [node #6456](https://github.com/nodejs/node/issues/6456). This bug crops up on newer versions of Node.js (`0.12+`), truncating terminal output. You should be mindful of the side-effects caused by using `set-blocking`: * if your module sets blocking to `true`, it will effect other modules consuming your library. In [yargs](https://github.com/yargs/yargs/blob/master/yargs.js#L653) we only call `setBlocking(true)` once we already know we are about to call `process.exit(code)`. * this patch will not apply to subprocesses spawned with `isTTY = true`, this is the [default `spawn()` behavior](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options). ## License ISC binaryen.js =========== **binaryen.js** is a port of [Binaryen](https://github.com/WebAssembly/binaryen) to the Web, allowing you to generate [WebAssembly](https://webassembly.org) using a JavaScript API. <a href="https://github.com/AssemblyScript/binaryen.js/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/binaryen.js/Build/master?label=build&logo=github" alt="Build status" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen.svg?label=latest&color=007acc&logo=npm" alt="npm version" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen/nightly.svg?label=nightly&color=007acc&logo=npm" alt="npm nightly version" /></a> Usage ----- ``` $> npm install binaryen ``` ```js var binaryen = require("binaryen"); // Create a module with a single function var myModule = new binaryen.Module(); myModule.addFunction("add", binaryen.createType([ binaryen.i32, binaryen.i32 ]), binaryen.i32, [ binaryen.i32 ], myModule.block(null, [ myModule.local.set(2, myModule.i32.add( myModule.local.get(0, binaryen.i32), myModule.local.get(1, binaryen.i32) ) ), myModule.return( myModule.local.get(2, binaryen.i32) ) ]) ); myModule.addFunctionExport("add", "add"); // Optimize the module using default passes and levels myModule.optimize(); // Validate the module if (!myModule.validate()) throw new Error("validation error"); // Generate text format and binary var textData = myModule.emitText(); var wasmData = myModule.emitBinary(); // Example usage with the WebAssembly API var compiled = new WebAssembly.Module(wasmData); var instance = new WebAssembly.Instance(compiled, {}); console.log(instance.exports.add(41, 1)); ``` The buildbot also publishes nightly versions once a day if there have been changes. 
The latest nightly can be installed through ``` $> npm install binaryen@nightly ``` or you can use one of the [previous versions](https://github.com/AssemblyScript/binaryen.js/tags) instead if necessary. ### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/AssemblyScript/binaryen.js@VERSION/index.js` * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/binaryen@VERSION/index.js` * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/binaryen@VERSION/index.js` Replace `VERSION` with a [specific version](https://github.com/AssemblyScript/binaryen.js/releases) or omit it (not recommended in production) to use master/latest. API --- **Please note** that the Binaryen API is evolving fast and that definitions and documentation provided by the package tend to get out of sync despite our best efforts. It's a bot after all. If you rely on binaryen.js and spot an issue, please consider sending a PR our way by updating [index.d.ts](./index.d.ts) and [README.md](./README.md) to reflect the [current API](https://github.com/WebAssembly/binaryen/blob/master/src/js/binaryen.js-post.js). <!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ### Contents - [Types](#types) - [Module construction](#module-construction) - [Module manipulation](#module-manipulation) - [Module validation](#module-validation) - [Module optimization](#module-optimization) - [Module creation](#module-creation) - [Expression construction](#expression-construction) - [Control flow](#control-flow) - [Variable accesses](#variable-accesses) - [Integer operations](#integer-operations) - [Floating point operations](#floating-point-operations) - [Datatype conversions](#datatype-conversions) - [Function calls](#function-calls) - [Linear memory accesses](#linear-memory-accesses) - [Host operations](#host-operations) - [Vector operations 🦄](#vector-operations-) - [Atomic memory accesses 🦄](#atomic-memory-accesses-) - [Atomic read-modify-write operations 🦄](#atomic-read-modify-write-operations-) - [Atomic wait and notify operations 🦄](#atomic-wait-and-notify-operations-) - [Sign extension operations 🦄](#sign-extension-operations-) - [Multi-value operations 🦄](#multi-value-operations-) - [Exception handling operations 🦄](#exception-handling-operations-) - [Reference types operations 🦄](#reference-types-operations-) - [Expression manipulation](#expression-manipulation) - [Relooper](#relooper) - [Source maps](#source-maps) - [Debugging](#debugging) <!-- END doctoc generated TOC please keep comment here to allow auto update --> [Future features](http://webassembly.org/docs/future-features/) 🦄 might not be supported by all runtimes. ### Types * **none**: `Type`<br /> The none type, e.g., `void`. * **i32**: `Type`<br /> 32-bit integer type. * **i64**: `Type`<br /> 64-bit integer type. * **f32**: `Type`<br /> 32-bit float type. * **f64**: `Type`<br /> 64-bit float (double) type. * **v128**: `Type`<br /> 128-bit vector type. 🦄 * **funcref**: `Type`<br /> A function reference. 🦄 * **anyref**: `Type`<br /> Any host reference. 🦄 * **nullref**: `Type`<br /> A null reference. 🦄 * **exnref**: `Type`<br /> An exception reference. 🦄 * **unreachable**: `Type`<br /> Special type indicating unreachable code when obtaining information about an expression. * **auto**: `Type`<br /> Special type used in **Module#block** exclusively. 
Lets the API figure out a block's result type automatically. * **createType**(types: `Type[]`): `Type`<br /> Creates a multi-value type from an array of types. * **expandType**(type: `Type`): `Type[]`<br /> Expands a multi-value type to an array of types. ### Module construction * new **Module**()<br /> Constructs a new module. * **parseText**(text: `string`): `Module`<br /> Creates a module from Binaryen's s-expression text format (not official stack-style text format). * **readBinary**(data: `Uint8Array`): `Module`<br /> Creates a module from binary data. ### Module manipulation * Module#**addFunction**(name: `string`, params: `Type`, results: `Type`, vars: `Type[]`, body: `ExpressionRef`): `FunctionRef`<br /> Adds a function. `vars` indicate additional locals, in the given order. * Module#**getFunction**(name: `string`): `FunctionRef`<br /> Gets a function, by name, * Module#**removeFunction**(name: `string`): `void`<br /> Removes a function, by name. * Module#**getNumFunctions**(): `number`<br /> Gets the number of functions within the module. * Module#**getFunctionByIndex**(index: `number`): `FunctionRef`<br /> Gets the function at the specified index. * Module#**addFunctionImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, params: `Type`, results: `Type`): `void`<br /> Adds a function import. * Module#**addTableImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a table import. There's just one table for now, using name `"0"`. * Module#**addMemoryImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a memory import. There's just one memory for now, using name `"0"`. * Module#**addGlobalImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, globalType: `Type`): `void`<br /> Adds a global variable import. Imported globals must be immutable. * Module#**addFunctionExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a function export. * Module#**addTableExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a table export. There's just one table for now, using name `"0"`. * Module#**addMemoryExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a memory export. There's just one memory for now, using name `"0"`. * Module#**addGlobalExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a global variable export. Exported globals must be immutable. * Module#**getNumExports**(): `number`<br /> Gets the number of exports witin the module. * Module#**getExportByIndex**(index: `number`): `ExportRef`<br /> Gets the export at the specified index. * Module#**removeExport**(externalName: `string`): `void`<br /> Removes an export, by external name. * Module#**addGlobal**(name: `string`, type: `Type`, mutable: `number`, value: `ExpressionRef`): `GlobalRef`<br /> Adds a global instance variable. * Module#**getGlobal**(name: `string`): `GlobalRef`<br /> Gets a global, by name, * Module#**removeGlobal**(name: `string`): `void`<br /> Removes a global, by name. * Module#**setFunctionTable**(initial: `number`, maximum: `number`, funcs: `string[]`, offset?: `ExpressionRef`): `void`<br /> Sets the contents of the function table. There's just one table for now, using name `"0"`. * Module#**getFunctionTable**(): `{ imported: boolean, segments: TableElement[] }`<br /> Gets the contents of the function table. 
* TableElement#**offset**: `ExpressionRef` * TableElement#**names**: `string[]` * Module#**setMemory**(initial: `number`, maximum: `number`, exportName: `string | null`, segments: `MemorySegment[]`, flags?: `number[]`, shared?: `boolean`): `void`<br /> Sets the memory. There's just one memory for now, using name `"0"`. Providing `exportName` also creates a memory export. * MemorySegment#**offset**: `ExpressionRef` * MemorySegment#**data**: `Uint8Array` * MemorySegment#**passive**: `boolean` * Module#**getNumMemorySegments**(): `number`<br /> Gets the number of memory segments within the module. * Module#**getMemorySegmentInfoByIndex**(index: `number`): `MemorySegmentInfo`<br /> Gets information about the memory segment at the specified index. * MemorySegmentInfo#**offset**: `number` * MemorySegmentInfo#**data**: `Uint8Array` * MemorySegmentInfo#**passive**: `boolean` * Module#**setStart**(start: `FunctionRef`): `void`<br /> Sets the start function. * Module#**getFeatures**(): `Features`<br /> Gets the WebAssembly features enabled for this module. Note that the return value may be a bitmask indicating multiple features. Possible feature flags are: * Features.**MVP**: `Features` * Features.**Atomics**: `Features` * Features.**BulkMemory**: `Features` * Features.**MutableGlobals**: `Features` * Features.**NontrappingFPToInt**: `Features` * Features.**SignExt**: `Features` * Features.**SIMD128**: `Features` * Features.**ExceptionHandling**: `Features` * Features.**TailCall**: `Features` * Features.**ReferenceTypes**: `Features` * Features.**Multivalue**: `Features` * Features.**All**: `Features` * Module#**setFeatures**(features: `Features`): `void`<br /> Sets the WebAssembly features enabled for this module. * Module#**addCustomSection**(name: `string`, contents: `Uint8Array`): `void`<br /> Adds a custom section to the binary. * Module#**autoDrop**(): `void`<br /> Enables automatic insertion of `drop` operations where needed. Lets you not worry about dropping when creating your code. * **getFunctionInfo**(ftype: `FunctionRef`: `FunctionInfo`<br /> Obtains information about a function. * FunctionInfo#**name**: `string` * FunctionInfo#**module**: `string | null` (if imported) * FunctionInfo#**base**: `string | null` (if imported) * FunctionInfo#**params**: `Type` * FunctionInfo#**results**: `Type` * FunctionInfo#**vars**: `Type` * FunctionInfo#**body**: `ExpressionRef` * **getGlobalInfo**(global: `GlobalRef`): `GlobalInfo`<br /> Obtains information about a global. * GlobalInfo#**name**: `string` * GlobalInfo#**module**: `string | null` (if imported) * GlobalInfo#**base**: `string | null` (if imported) * GlobalInfo#**type**: `Type` * GlobalInfo#**mutable**: `boolean` * GlobalInfo#**init**: `ExpressionRef` * **getExportInfo**(export_: `ExportRef`): `ExportInfo`<br /> Obtains information about an export. * ExportInfo#**kind**: `ExternalKind` * ExportInfo#**name**: `string` * ExportInfo#**value**: `string` Possible `ExternalKind` values are: * **ExternalFunction**: `ExternalKind` * **ExternalTable**: `ExternalKind` * **ExternalMemory**: `ExternalKind` * **ExternalGlobal**: `ExternalKind` * **ExternalEvent**: `ExternalKind` * **getEventInfo**(event: `EventRef`): `EventInfo`<br /> Obtains information about an event. 
* EventInfo#**name**: `string` * EventInfo#**module**: `string | null` (if imported) * EventInfo#**base**: `string | null` (if imported) * EventInfo#**attribute**: `number` * EventInfo#**params**: `Type` * EventInfo#**results**: `Type` * **getSideEffects**(expr: `ExpressionRef`, features: `FeatureFlags`): `SideEffects`<br /> Gets the side effects of the specified expression. * SideEffects.**None**: `SideEffects` * SideEffects.**Branches**: `SideEffects` * SideEffects.**Calls**: `SideEffects` * SideEffects.**ReadsLocal**: `SideEffects` * SideEffects.**WritesLocal**: `SideEffects` * SideEffects.**ReadsGlobal**: `SideEffects` * SideEffects.**WritesGlobal**: `SideEffects` * SideEffects.**ReadsMemory**: `SideEffects` * SideEffects.**WritesMemory**: `SideEffects` * SideEffects.**ImplicitTrap**: `SideEffects` * SideEffects.**IsAtomic**: `SideEffects` * SideEffects.**Throws**: `SideEffects` * SideEffects.**Any**: `SideEffects` ### Module validation * Module#**validate**(): `boolean`<br /> Validates the module. Returns `true` if valid, otherwise prints validation errors and returns `false`. ### Module optimization * Module#**optimize**(): `void`<br /> Optimizes the module using the default optimization passes. * Module#**optimizeFunction**(func: `FunctionRef | string`): `void`<br /> Optimizes a single function using the default optimization passes. * Module#**runPasses**(passes: `string[]`): `void`<br /> Runs the specified passes on the module. * Module#**runPassesOnFunction**(func: `FunctionRef | string`, passes: `string[]`): `void`<br /> Runs the specified passes on a single function. * **getOptimizeLevel**(): `number`<br /> Gets the currently set optimize level. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **setOptimizeLevel**(level: `number`): `void`<br /> Sets the optimization level to use. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **getShrinkLevel**(): `number`<br /> Gets the currently set shrink level. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **setShrinkLevel**(level: `number`): `void`<br /> Sets the shrink level to use. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **getDebugInfo**(): `boolean`<br /> Gets whether generating debug information is currently enabled or not. * **setDebugInfo**(on: `boolean`): `void`<br /> Enables or disables debug information in emitted binaries. * **getLowMemoryUnused**(): `boolean`<br /> Gets whether the low 1K of memory can be considered unused when optimizing. * **setLowMemoryUnused**(on: `boolean`): `void`<br /> Enables or disables whether the low 1K of memory can be considered unused when optimizing. * **getPassArgument**(key: `string`): `string | null`<br /> Gets the value of the specified arbitrary pass argument. * **setPassArgument**(key: `string`, value: `string | null`): `void`<br /> Sets the value of the specified arbitrary pass argument. Removes the respective argument if `value` is `null`. * **clearPassArguments**(): `void`<br /> Clears all arbitrary pass arguments. * **getAlwaysInlineMaxSize**(): `number`<br /> Gets the function size at which we always inline. * **setAlwaysInlineMaxSize**(size: `number`): `void`<br /> Sets the function size at which we always inline. * **getFlexibleInlineMaxSize**(): `number`<br /> Gets the function size which we inline when functions are lightweight. * **setFlexibleInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when functions are lightweight. 
* **getOneCallerInlineMaxSize**(): `number`<br /> Gets the function size which we inline when there is only one caller. * **setOneCallerInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when there is only one caller. ### Module creation * Module#**emitBinary**(): `Uint8Array`<br /> Returns the module in binary format. * Module#**emitBinary**(sourceMapUrl: `string | null`): `BinaryWithSourceMap`<br /> Returns the module in binary format with its source map. If `sourceMapUrl` is `null`, source map generation is skipped. * BinaryWithSourceMap#**binary**: `Uint8Array` * BinaryWithSourceMap#**sourceMap**: `string | null` * Module#**emitText**(): `string`<br /> Returns the module in Binaryen's s-expression text format (not official stack-style text format). * Module#**emitAsmjs**(): `string`<br /> Returns the [asm.js](http://asmjs.org/) representation of the module. * Module#**dispose**(): `void`<br /> Releases the resources held by the module once it isn't needed anymore. ### Expression construction #### [Control flow](http://webassembly.org/docs/semantics/#control-constructs-and-instructions) * Module#**block**(label: `string | null`, children: `ExpressionRef[]`, resultType?: `Type`): `ExpressionRef`<br /> Creates a block. `resultType` defaults to `none`. * Module#**if**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse?: `ExpressionRef`): `ExpressionRef`<br /> Creates an if or if/else combination. * Module#**loop**(label: `string | null`, body: `ExpressionRef`): `ExpressionRef`<br /> Creates a loop. * Module#**br**(label: `string`, condition?: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a branch (br) to a label. * Module#**switch**(labels: `string[]`, defaultLabel: `string`, condition: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a switch (br_table). * Module#**nop**(): `ExpressionRef`<br /> Creates a no-operation (nop) instruction. * Module#**return**(value?: `ExpressionRef`): `ExpressionRef` Creates a return. * Module#**unreachable**(): `ExpressionRef`<br /> Creates an [unreachable](http://webassembly.org/docs/semantics/#unreachable) instruction that will always trap. * Module#**drop**(value: `ExpressionRef`): `ExpressionRef`<br /> Creates a [drop](http://webassembly.org/docs/semantics/#type-parametric-operators) of a value. * Module#**select**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse: `ExpressionRef`, type?: `Type`): `ExpressionRef`<br /> Creates a [select](http://webassembly.org/docs/semantics/#type-parametric-operators) of one of two values. #### [Variable accesses](http://webassembly.org/docs/semantics/#local-variables) * Module#**local.get**(index: `number`, type: `Type`): `ExpressionRef`<br /> Creates a local.get for the local at the specified index. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**local.set**(index: `number`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a local.set for the local at the specified index. * Module#**local.tee**(index: `number`, value: `ExpressionRef`, type: `Type`): `ExpressionRef`<br /> Creates a local.tee for the local at the specified index. A tee differs from a set in that the value remains on the stack. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**global.get**(name: `string`, type: `Type`): `ExpressionRef`<br /> Creates a global.get for the global with the specified name. 
Note that we must specify the type here as we may not have created the global being accessed yet. * Module#**global.set**(name: `string`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a global.set for the global with the specified name. #### [Integer operations](http://webassembly.org/docs/semantics/#32-bit-integer-operators) * Module#i32.**const**(value: `number`): `ExpressionRef` * Module#i32.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i64.**const**(value: `number`): `ExpressionRef` * Module#i64.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * 
Module#i64.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Floating point operations](http://webassembly.org/docs/semantics/#floating-point-operators) * Module#f32.**const**(value: `number`): `ExpressionRef` * Module#f32.**const_bits**(value: `number`): `ExpressionRef` * Module#f32.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#f64.**const**(value: `number`): `ExpressionRef` * 
Module#f64.**const_bits**(value: `number`): `ExpressionRef` * Module#f64.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Datatype conversions](http://webassembly.org/docs/semantics/#datatype-conversions-truncations-reinterpretations-promotions-and-demotions) * Module#i32.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**wrap**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**demote**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**promote**(value: `ExpressionRef`): `ExpressionRef` #### [Function 
calls](http://webassembly.org/docs/semantics/#calls) * Module#**call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef` Creates a call to a function. Note that we must specify the return type here as we may not have created the function being called yet. * Module#**return_call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef`<br /> Like **call**, but creates a tail-call. 🦄 * Module#**call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Similar to **call**, but calls indirectly, i.e., via a function pointer, so an expression replaces the name as the called value. * Module#**return_call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Like **call_indirect**, but creates a tail-call. 🦄 #### [Linear memory accesses](http://webassembly.org/docs/semantics/#linear-memory-accesses) * Module#i32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> > * Module#i64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store32**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Host 
operations](http://webassembly.org/docs/semantics/#resizing) * Module#**memory.size**(): `ExpressionRef` * Module#**memory.grow**(value: `number`): `ExpressionRef` #### [Vector operations](https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md) 🦄 * Module#v128.**const**(bytes: `Uint8Array`): `ExpressionRef` * Module#v128.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#v128.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#v128.**not**(value: `ExpressionRef`): `ExpressionRef` * Module#v128.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**andnot**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**bitselect**(left: `ExpressionRef`, right: `ExpressionRef`, cond: `ExpressionRef`): `ExpressionRef` > * Module#i8x16.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_u**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i16x8.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * 
Module#i16x8.**widen_low_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**dot_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_u**(offset: `number`, 
align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#f32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * 
Module#f64x2.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#v8x16.**shuffle**(left: `ExpressionRef`, right: `ExpressionRef`, mask: `Uint8Array`): `ExpressionRef` * Module#v8x16.**swizzle**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v8x16.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v16x8.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v32x4.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v64x2.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` #### [Atomic memory accesses](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#atomic-memory-accesses) 🦄 * Module#i32.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load32_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * 
Module#i64.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store32**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Atomic read-modify-write operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#read-modify-write) 🦄 * Module#i32.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: 
`ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` #### [Atomic wait and notify operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#wait-and-notify-operators) 🦄 * Module#i32.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#**atomic.notify**(ptr: `ExpressionRef`, notifyCount: `ExpressionRef`): `ExpressionRef` * Module#**atomic.fence**(): `ExpressionRef` #### [Sign extension operations](https://github.com/WebAssembly/sign-extension-ops/blob/master/proposals/sign-extension-ops/Overview.md) 🦄 * Module#i32.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` 
> * Module#i64.**extend8_s**(value: `ExpressionRef`): `ExpressionRef`
* Module#i64.**extend16_s**(value: `ExpressionRef`): `ExpressionRef`
* Module#i64.**extend32_s**(value: `ExpressionRef`): `ExpressionRef`

#### [Multi-value operations](https://github.com/WebAssembly/multi-value/blob/master/proposals/multi-value/Overview.md) 🦄

Note that these are pseudo instructions enabling Binaryen to reason about multiple values on the stack.

* Module#**push**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32.**pop**(): `ExpressionRef`
* Module#i64.**pop**(): `ExpressionRef`
* Module#f32.**pop**(): `ExpressionRef`
* Module#f64.**pop**(): `ExpressionRef`
* Module#v128.**pop**(): `ExpressionRef`
* Module#funcref.**pop**(): `ExpressionRef`
* Module#anyref.**pop**(): `ExpressionRef`
* Module#nullref.**pop**(): `ExpressionRef`
* Module#exnref.**pop**(): `ExpressionRef`
* Module#tuple.**make**(elements: `ExpressionRef[]`): `ExpressionRef`
* Module#tuple.**extract**(tuple: `ExpressionRef`, index: `number`): `ExpressionRef`

#### [Exception handling operations](https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md) 🦄

* Module#**try**(body: `ExpressionRef`, catchBody: `ExpressionRef`): `ExpressionRef`
* Module#**throw**(event: `string`, operands: `ExpressionRef[]`): `ExpressionRef`
* Module#**rethrow**(exnref: `ExpressionRef`): `ExpressionRef`
* Module#**br_on_exn**(label: `string`, event: `string`, exnref: `ExpressionRef`): `ExpressionRef`

> * Module#**addEvent**(name: `string`, attribute: `number`, params: `Type`, results: `Type`): `Event`
* Module#**getEvent**(name: `string`): `Event`
* Module#**removeEvent**(name: `string`): `void`
* Module#**addEventImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, attribute: `number`, params: `Type`, results: `Type`): `void`
* Module#**addEventExport**(internalName: `string`, externalName: `string`): `ExportRef`

#### [Reference types operations](https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md) 🦄

* Module#ref.**null**(): `ExpressionRef`
* Module#ref.**is_null**(value: `ExpressionRef`): `ExpressionRef`
* Module#ref.**func**(name: `string`): `ExpressionRef`

### Expression manipulation

* **getExpressionId**(expr: `ExpressionRef`): `ExpressionId`<br />
  Gets the id (kind) of the specified expression.
Possible values are: * **InvalidId**: `ExpressionId` * **BlockId**: `ExpressionId` * **IfId**: `ExpressionId` * **LoopId**: `ExpressionId` * **BreakId**: `ExpressionId` * **SwitchId**: `ExpressionId` * **CallId**: `ExpressionId` * **CallIndirectId**: `ExpressionId` * **LocalGetId**: `ExpressionId` * **LocalSetId**: `ExpressionId` * **GlobalGetId**: `ExpressionId` * **GlobalSetId**: `ExpressionId` * **LoadId**: `ExpressionId` * **StoreId**: `ExpressionId` * **ConstId**: `ExpressionId` * **UnaryId**: `ExpressionId` * **BinaryId**: `ExpressionId` * **SelectId**: `ExpressionId` * **DropId**: `ExpressionId` * **ReturnId**: `ExpressionId` * **HostId**: `ExpressionId` * **NopId**: `ExpressionId` * **UnreachableId**: `ExpressionId` * **AtomicCmpxchgId**: `ExpressionId` * **AtomicRMWId**: `ExpressionId` * **AtomicWaitId**: `ExpressionId` * **AtomicNotifyId**: `ExpressionId` * **AtomicFenceId**: `ExpressionId` * **SIMDExtractId**: `ExpressionId` * **SIMDReplaceId**: `ExpressionId` * **SIMDShuffleId**: `ExpressionId` * **SIMDTernaryId**: `ExpressionId` * **SIMDShiftId**: `ExpressionId` * **SIMDLoadId**: `ExpressionId` * **MemoryInitId**: `ExpressionId` * **DataDropId**: `ExpressionId` * **MemoryCopyId**: `ExpressionId` * **MemoryFillId**: `ExpressionId` * **RefNullId**: `ExpressionId` * **RefIsNullId**: `ExpressionId` * **RefFuncId**: `ExpressionId` * **TryId**: `ExpressionId` * **ThrowId**: `ExpressionId` * **RethrowId**: `ExpressionId` * **BrOnExnId**: `ExpressionId` * **PushId**: `ExpressionId` * **PopId**: `ExpressionId` * **getExpressionType**(expr: `ExpressionRef`): `Type`<br /> Gets the type of the specified expression. * **getExpressionInfo**(expr: `ExpressionRef`): `ExpressionInfo`<br /> Obtains information about an expression, always including: * Info#**id**: `ExpressionId` * Info#**type**: `Type` Additional properties depend on the expression's `id` and are usually equivalent to the respective parameters when creating such an expression: * BlockInfo#**name**: `string` * BlockInfo#**children**: `ExpressionRef[]` > * IfInfo#**condition**: `ExpressionRef` * IfInfo#**ifTrue**: `ExpressionRef` * IfInfo#**ifFalse**: `ExpressionRef | null` > * LoopInfo#**name**: `string` * LoopInfo#**body**: `ExpressionRef` > * BreakInfo#**name**: `string` * BreakInfo#**condition**: `ExpressionRef | null` * BreakInfo#**value**: `ExpressionRef | null` > * SwitchInfo#**names**: `string[]` * SwitchInfo#**defaultName**: `string | null` * SwitchInfo#**condition**: `ExpressionRef` * SwitchInfo#**value**: `ExpressionRef | null` > * CallInfo#**target**: `string` * CallInfo#**operands**: `ExpressionRef[]` > * CallImportInfo#**target**: `string` * CallImportInfo#**operands**: `ExpressionRef[]` > * CallIndirectInfo#**target**: `ExpressionRef` * CallIndirectInfo#**operands**: `ExpressionRef[]` > * LocalGetInfo#**index**: `number` > * LocalSetInfo#**isTee**: `boolean` * LocalSetInfo#**index**: `number` * LocalSetInfo#**value**: `ExpressionRef` > * GlobalGetInfo#**name**: `string` > * GlobalSetInfo#**name**: `string` * GlobalSetInfo#**value**: `ExpressionRef` > * LoadInfo#**isAtomic**: `boolean` * LoadInfo#**isSigned**: `boolean` * LoadInfo#**offset**: `number` * LoadInfo#**bytes**: `number` * LoadInfo#**align**: `number` * LoadInfo#**ptr**: `ExpressionRef` > * StoreInfo#**isAtomic**: `boolean` * StoreInfo#**offset**: `number` * StoreInfo#**bytes**: `number` * StoreInfo#**align**: `number` * StoreInfo#**ptr**: `ExpressionRef` * StoreInfo#**value**: `ExpressionRef` > * ConstInfo#**value**: `number | { low: number, high: number 
}` > * UnaryInfo#**op**: `number` * UnaryInfo#**value**: `ExpressionRef` > * BinaryInfo#**op**: `number` * BinaryInfo#**left**: `ExpressionRef` * BinaryInfo#**right**: `ExpressionRef` > * SelectInfo#**ifTrue**: `ExpressionRef` * SelectInfo#**ifFalse**: `ExpressionRef` * SelectInfo#**condition**: `ExpressionRef` > * DropInfo#**value**: `ExpressionRef` > * ReturnInfo#**value**: `ExpressionRef | null` > * NopInfo > * UnreachableInfo > * HostInfo#**op**: `number` * HostInfo#**nameOperand**: `string | null` * HostInfo#**operands**: `ExpressionRef[]` > * AtomicRMWInfo#**op**: `number` * AtomicRMWInfo#**bytes**: `number` * AtomicRMWInfo#**offset**: `number` * AtomicRMWInfo#**ptr**: `ExpressionRef` * AtomicRMWInfo#**value**: `ExpressionRef` > * AtomicCmpxchgInfo#**bytes**: `number` * AtomicCmpxchgInfo#**offset**: `number` * AtomicCmpxchgInfo#**ptr**: `ExpressionRef` * AtomicCmpxchgInfo#**expected**: `ExpressionRef` * AtomicCmpxchgInfo#**replacement**: `ExpressionRef` > * AtomicWaitInfo#**ptr**: `ExpressionRef` * AtomicWaitInfo#**expected**: `ExpressionRef` * AtomicWaitInfo#**timeout**: `ExpressionRef` * AtomicWaitInfo#**expectedType**: `Type` > * AtomicNotifyInfo#**ptr**: `ExpressionRef` * AtomicNotifyInfo#**notifyCount**: `ExpressionRef` > * AtomicFenceInfo > * SIMDExtractInfo#**op**: `Op` * SIMDExtractInfo#**vec**: `ExpressionRef` * SIMDExtractInfo#**index**: `ExpressionRef` > * SIMDReplaceInfo#**op**: `Op` * SIMDReplaceInfo#**vec**: `ExpressionRef` * SIMDReplaceInfo#**index**: `ExpressionRef` * SIMDReplaceInfo#**value**: `ExpressionRef` > * SIMDShuffleInfo#**left**: `ExpressionRef` * SIMDShuffleInfo#**right**: `ExpressionRef` * SIMDShuffleInfo#**mask**: `Uint8Array` > * SIMDTernaryInfo#**op**: `Op` * SIMDTernaryInfo#**a**: `ExpressionRef` * SIMDTernaryInfo#**b**: `ExpressionRef` * SIMDTernaryInfo#**c**: `ExpressionRef` > * SIMDShiftInfo#**op**: `Op` * SIMDShiftInfo#**vec**: `ExpressionRef` * SIMDShiftInfo#**shift**: `ExpressionRef` > * SIMDLoadInfo#**op**: `Op` * SIMDLoadInfo#**offset**: `number` * SIMDLoadInfo#**align**: `number` * SIMDLoadInfo#**ptr**: `ExpressionRef` > * MemoryInitInfo#**segment**: `number` * MemoryInitInfo#**dest**: `ExpressionRef` * MemoryInitInfo#**offset**: `ExpressionRef` * MemoryInitInfo#**size**: `ExpressionRef` > * MemoryDropInfo#**segment**: `number` > * MemoryCopyInfo#**dest**: `ExpressionRef` * MemoryCopyInfo#**source**: `ExpressionRef` * MemoryCopyInfo#**size**: `ExpressionRef` > * MemoryFillInfo#**dest**: `ExpressionRef` * MemoryFillInfo#**value**: `ExpressionRef` * MemoryFillInfo#**size**: `ExpressionRef` > * TryInfo#**body**: `ExpressionRef` * TryInfo#**catchBody**: `ExpressionRef` > * RefNullInfo > * RefIsNullInfo#**value**: `ExpressionRef` > * RefFuncInfo#**func**: `string` > * ThrowInfo#**event**: `string` * ThrowInfo#**operands**: `ExpressionRef[]` > * RethrowInfo#**exnref**: `ExpressionRef` > * BrOnExnInfo#**name**: `string` * BrOnExnInfo#**event**: `string` * BrOnExnInfo#**exnref**: `ExpressionRef` > * PopInfo > * PushInfo#**value**: `ExpressionRef` * **emitText**(expression: `ExpressionRef`): `string`<br /> Emits the expression in Binaryen's s-expression text format (not official stack-style text format). * **copyExpression**(expression: `ExpressionRef`): `ExpressionRef`<br /> Creates a deep copy of an expression. ### Relooper * new **Relooper**()<br /> Constructs a relooper instance. This lets you provide an arbitrary CFG, and the relooper will structure it for WebAssembly. 
* Relooper#**addBlock**(code: `ExpressionRef`): `RelooperBlockRef`<br />
  Adds a new block to the CFG, containing the provided code as its body.
* Relooper#**addBranch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, condition: `ExpressionRef`, code: `ExpressionRef`): `void`<br />
  Adds a branch from a block to another block, with a condition (or nothing, if this is the default branch to take from the origin - each block must have one such branch), and optional code to execute on the branch (useful for phis).
* Relooper#**addBlockWithSwitch**(code: `ExpressionRef`, condition: `ExpressionRef`): `RelooperBlockRef`<br />
  Adds a new block, which ends with a switch/br_table, with provided code and condition (that determines where we go in the switch).
* Relooper#**addBranchForSwitch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, indexes: `number[]`, code: `ExpressionRef`): `void`<br />
  Adds a branch from a block ending in a switch, to another block, using an array of indexes that determine where to go, and optional code to execute on the branch.
* Relooper#**renderAndDispose**(entry: `RelooperBlockRef`, labelHelper: `number`, module: `Module`): `ExpressionRef`<br />
  Renders and cleans up the Relooper instance. Call this after you have created all the blocks and branches, giving it the entry block (where control flow begins), a label helper variable (an index of a local we can use, necessary for irreducible control flow), and the module. This returns an expression - normal WebAssembly code - that you can use normally anywhere.

### Source maps

* Module#**addDebugInfoFileName**(filename: `string`): `number`<br />
  Adds a debug info file name to the module and returns its index.
* Module#**getDebugInfoFileName**(index: `number`): `string | null`<br />
  Gets the name of the debug info file at the specified index.
* Module#**setDebugLocation**(func: `FunctionRef`, expr: `ExpressionRef`, fileIndex: `number`, lineNumber: `number`, columnNumber: `number`): `void`<br />
  Sets the debug location of the specified `ExpressionRef` within the specified `FunctionRef`.

### Debugging

* Module#**interpret**(): `void`<br />
  Runs the module in the interpreter, calling the start function.

## Follow Redirects

Drop-in replacement for Node's `http` and `https` that automatically follows redirects.

[![npm version](https://img.shields.io/npm/v/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects)
[![Build Status](https://travis-ci.org/follow-redirects/follow-redirects.svg?branch=master)](https://travis-ci.org/follow-redirects/follow-redirects)
[![Coverage Status](https://coveralls.io/repos/follow-redirects/follow-redirects/badge.svg?branch=master)](https://coveralls.io/r/follow-redirects/follow-redirects?branch=master)
[![Dependency Status](https://david-dm.org/follow-redirects/follow-redirects.svg)](https://david-dm.org/follow-redirects/follow-redirects)
[![npm downloads](https://img.shields.io/npm/dm/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects)

`follow-redirects` provides [request](https://nodejs.org/api/http.html#http_http_request_options_callback) and [get](https://nodejs.org/api/http.html#http_http_get_options_callback) methods that behave identically to those found on the native [http](https://nodejs.org/api/http.html#http_http_request_options_callback) and [https](https://nodejs.org/api/https.html#https_https_request_options_callback) modules, with the exception that they will seamlessly follow redirects.
```javascript
var http = require('follow-redirects').http;
var https = require('follow-redirects').https;

http.get('http://bit.ly/900913', function (response) {
  response.on('data', function (chunk) {
    console.log(chunk);
  });
}).on('error', function (err) {
  console.error(err);
});
```

You can inspect the final redirected URL through the `responseUrl` property on the `response`. If no redirection happened, `responseUrl` is the original request URL.

```javascript
https.request({
  host: 'bitly.com',
  path: '/UHfDGO',
}, function (response) {
  console.log(response.responseUrl);
  // 'http://duckduckgo.com/robots.txt'
});
```

## Options

### Global options

Global options are set directly on the `follow-redirects` module:

```javascript
var followRedirects = require('follow-redirects');
followRedirects.maxRedirects = 10;
followRedirects.maxBodyLength = 20 * 1024 * 1024; // 20 MB
```

The following global options are supported:

- `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted.
- `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted.

### Per-request options

Per-request options are set by passing an `options` object:

```javascript
var url = require('url');
var followRedirects = require('follow-redirects');

var options = url.parse('http://bit.ly/900913');
options.maxRedirects = 10;
http.request(options);
```

In addition to the [standard HTTP](https://nodejs.org/api/http.html#http_http_request_options_callback) and [HTTPS options](https://nodejs.org/api/https.html#https_https_request_options_callback), the following per-request options are supported:

- `followRedirects` (default: `true`) – whether redirects should be followed.
- `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted.
- `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted.
- `agents` (default: `undefined`) – sets the `agent` option per protocol, since HTTP and HTTPS use different agents. Example value: `{ http: new http.Agent(), https: new https.Agent() }`
- `trackRedirects` (default: `false`) – whether to store the redirected response details into the `redirects` array on the response object.

### Advanced usage

By default, `follow-redirects` will use the Node.js default implementations of [`http`](https://nodejs.org/api/http.html) and [`https`](https://nodejs.org/api/https.html). To enable features such as caching and/or intermediate request tracking, you might instead want to wrap `follow-redirects` around custom protocol implementations:

```javascript
var followRedirects = require('follow-redirects').wrap({
  http: require('your-custom-http'),
  https: require('your-custom-https'),
});
```

Such custom protocols only need an implementation of the `request` method.

## Browserify Usage

Due to the way `XMLHttpRequest` works, the `browserify` versions of `http` and `https` already follow redirects. If you are *only* targeting the browser, then this library has little value for you. If you want to write cross platform code for node and the browser, `follow-redirects` provides a great solution for making the native node modules behave the same as they do in browserified builds in the browser. To avoid bundling unnecessary code you should tell browserify to swap out `follow-redirects` with the standard modules when bundling.
To make this easier, you need to change how you require the modules:

```javascript
var http = require('follow-redirects/http');
var https = require('follow-redirects/https');
```

You can then replace `follow-redirects` in your browserify configuration like so:

```javascript
"browser": {
  "follow-redirects/http" : "http",
  "follow-redirects/https" : "https"
}
```

The `browserify-http` module has not kept pace with node development, and no longer behaves identically to the native module when running in the browser. If you are experiencing problems, you may want to check out [browserify-http-2](https://www.npmjs.com/package/http-browserify-2). It is more actively maintained and attempts to address a few of the shortcomings of `browserify-http`. In that case, your browserify config should look something like this:

```javascript
"browser": {
  "follow-redirects/http" : "browserify-http-2/http",
  "follow-redirects/https" : "browserify-http-2/https"
}
```

## Contributing

Pull Requests are always welcome. Please [file an issue](https://github.com/follow-redirects/follow-redirects/issues) detailing your proposal before you invest your valuable time. Additional features and bug fixes should be accompanied by tests. You can run the test suite locally with a simple `npm test` command.

## Debug Logging

`follow-redirects` uses the excellent [debug](https://www.npmjs.com/package/debug) for logging. To turn on logging set the environment variable `DEBUG=follow-redirects` for debug output from just this module. When running the test suite it is sometimes advantageous to set `DEBUG=*` to see output from the express server as well.

## Authors

- Olivier Lalonde ([email protected])
- James Talmage ([email protected])
- [Ruben Verborgh](https://ruben.verborgh.org/)

## License

[MIT License](https://github.com/follow-redirects/follow-redirects/blob/master/LICENSE)

Shims used when bundling asc for browser usage.

# assemblyscript-json

![npm version](https://img.shields.io/npm/v/assemblyscript-json) ![npm downloads per month](https://img.shields.io/npm/dm/assemblyscript-json)

JSON encoder / decoder for AssemblyScript.

Special thanks to https://github.com/MaxGraey/bignum.wasm for basic unit testing infra for AssemblyScript.

## Installation

`assemblyscript-json` is available as an [npm package](https://www.npmjs.com/package/assemblyscript-json).
You can install `assemblyscript-json` in your AssemblyScript project by running:

`npm install --save assemblyscript-json`

## Usage

### Parsing JSON

```typescript
import { JSON } from "assemblyscript-json";

// Parse an object using the JSON object
let jsonObj: JSON.Obj = <JSON.Obj>(JSON.parse('{"hello": "world", "value": 24}'));

// We can then use the .getX functions to read from the object if you know its type
// This will return the appropriate JSON.X value if the key exists, or null if the key does not exist
let worldOrNull: JSON.Str | null = jsonObj.getString("hello"); // This will return a JSON.Str or null
if (worldOrNull != null) {
  // use .valueOf() to turn the high level JSON.Str type into a string
  let world: string = worldOrNull.valueOf();
}

let numOrNull: JSON.Num | null = jsonObj.getNum("value");
if (numOrNull != null) {
  // use .valueOf() to turn the high level JSON.Num type into a f64
  let value: f64 = numOrNull.valueOf();
}

// If you don't know the value type, get the parent JSON.Value
let valueOrNull: JSON.Value | null = jsonObj.getValue("hello");
if (valueOrNull != null) {
  let value: JSON.Value = changetype<JSON.Value>(valueOrNull);
  // Next we could figure out what type we are
  if (value.isString) {
    // value.isString would be true, so we can cast to a string
    let stringValue: string = changetype<JSON.Str>(value).toString();
    // Do something with string value
  }
}
```

### Encoding JSON

```typescript
import { JSONEncoder } from "assemblyscript-json";

// Create encoder
let encoder = new JSONEncoder();

// Construct necessary object
encoder.pushObject("obj");
encoder.setInteger("int", 10);
encoder.setString("str", "");
encoder.popObject();

// Get serialized data
let json: Uint8Array = encoder.serialize();

// Or get serialized data as string
let jsonString: string = encoder.toString();

assert(jsonString, '"obj": {"int": 10, "str": ""}'); // True!
```

### Custom JSON Deserializers

```typescript
import { JSONDecoder, JSONHandler } from "assemblyscript-json";

// Events need to be received by custom object extending JSONHandler.
// NOTE: All methods are optional to implement.
class MyJSONEventsHandler extends JSONHandler {
  setString(name: string, value: string): void {
    // Handle field
  }

  setBoolean(name: string, value: bool): void {
    // Handle field
  }

  setNull(name: string): void {
    // Handle field
  }

  setInteger(name: string, value: i64): void {
    // Handle field
  }

  setFloat(name: string, value: f64): void {
    // Handle field
  }

  pushArray(name: string): bool {
    // Handle array start
    // true means that nested object needs to be traversed, false otherwise
    // Note that returning false means JSONDecoder.startIndex need to be updated by handler
    return true;
  }

  popArray(): void {
    // Handle array end
  }

  pushObject(name: string): bool {
    // Handle object start
    // true means that nested object needs to be traversed, false otherwise
    // Note that returning false means JSONDecoder.startIndex need to be updated by handler
    return true;
  }

  popObject(): void {
    // Handle object end
  }
}

// Create decoder
let decoder = new JSONDecoder<MyJSONEventsHandler>(new MyJSONEventsHandler());

// Create a byte buffer of our JSON. NOTE: Deserializers work on UTF8 string buffers.
let jsonString = '{"hello": "world"}';
let jsonBuffer = Uint8Array.wrap(String.UTF8.encode(jsonString));

// Parse JSON
decoder.deserialize(jsonBuffer); // This will send events to MyJSONEventsHandler
```

Feel free to look through the [tests](https://github.com/nearprotocol/assemblyscript-json/tree/master/assembly/__tests__) for more usage examples.
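For a quick end-to-end check, the parsing and encoding APIs above can be combined into a small round-trip. The sketch below is illustrative only (the `copyGreeting` helper name is hypothetical) and relies solely on calls already shown in the examples above:

```typescript
import { JSON, JSONEncoder } from "assemblyscript-json";

// Hypothetical helper: read the "hello" field from incoming JSON and
// write it back out through JSONEncoder, using only the APIs shown above.
function copyGreeting(input: string): string {
  let jsonObj: JSON.Obj = <JSON.Obj>(JSON.parse(input));

  // getString returns a JSON.Str or null, exactly as in the parsing example
  let greetingOrNull: JSON.Str | null = jsonObj.getString("hello");
  let greeting: string = "";
  if (greetingOrNull != null) {
    greeting = greetingOrNull.valueOf();
  }

  // Re-encode the value with JSONEncoder
  let encoder = new JSONEncoder();
  encoder.pushObject("obj");
  encoder.setString("hello", greeting);
  encoder.popObject();
  return encoder.toString();
}

// copyGreeting('{"hello": "world"}') should yield '"obj": {"hello": "world"}'
```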
## Reference Documentation

Reference API Documentation can be found in the [docs directory](./docs).

## License

[MIT](./LICENSE)

[![NPM registry](https://img.shields.io/npm/v/as-bignum.svg?style=for-the-badge)](https://www.npmjs.com/package/as-bignum)[![Build Status](https://img.shields.io/travis/com/MaxGraey/as-bignum/master?style=for-the-badge)](https://travis-ci.com/MaxGraey/as-bignum)[![NPM license](https://img.shields.io/badge/license-Apache%202.0-ba68c8.svg?style=for-the-badge)](LICENSE.md)

## Work in progress

---

### WebAssembly fixed length big numbers written in [AssemblyScript](https://github.com/AssemblyScript/assemblyscript)

Provides wide numeric types such as `u128`, `u256`, `i128`, `i256` and fixed-point types, along with their arithmetic operations. The `safe` namespace contains equivalents with overflow/underflow traps. All of these types are useful for economic and cryptographic use cases and provide deterministic behavior.

### Install

> yarn add as-bignum

or

> npm i as-bignum

### Usage via AssemblyScript

```ts
import { u128 } from "as-bignum";

declare function logF64(value: f64): void;
declare function logU128(hi: u64, lo: u64): void;

var a = u128.One;
var b = u128.from(-32); // same as u128.from<i32>(-32)
var c = new u128(0x1, -0xF);
var d = u128.from(0x0123456789ABCDEF); // same as u128.from<i64>(0x0123456789ABCDEF)
var e = u128.from('0x0123456789ABCDEF01234567');
var f = u128.fromString('11100010101100101', 2); // same as u128.from('0b11100010101100101')

var r = d / c + (b << 5) + e;
logF64(r.as<f64>());
logU128(r.hi, r.lo);
```

### Usage via JavaScript/Typescript

```ts
TODO
```

### List of types

- [x] [`u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u128.ts) unsigned type (tested)
- [ ] [`u256`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u256.ts) unsigned type (very basic)
- [ ] `i128` signed type
- [ ] `i256` signed type
---
- [x] [`safe.u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/safe/u128.ts) unsigned type (tested)
- [ ] `safe.u256` unsigned type
- [ ] `safe.i128` signed type
- [ ] `safe.i256` signed type
---
- [ ] [`fp128<Q>`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/fixed/fp128.ts) generic fixed point signed type٭ (very basic for now)
- [ ] `fp256<Q>` generic fixed point signed type٭
---
- [ ] `safe.fp128<Q>` generic fixed point signed type٭
- [ ] `safe.fp256<Q>` generic fixed point signed type٭

٭ _typename_ `Q` _is a type representing count of fractional bits_

# fs.realpath

A backwards-compatible fs.realpath for Node v6 and above

In Node v6, the JavaScript implementation of fs.realpath was replaced with a faster (but less resilient) native implementation. That raises new and platform-specific errors and cannot handle long or excessively symlink-looping paths. This module handles those cases by detecting the new errors and falling back to the JavaScript implementation. On versions of Node prior to v6, it has no effect.

## USAGE

```js
var rp = require('fs.realpath')

// async version
rp.realpath(someLongAndLoopingPath, function (er, real) {
  // the ELOOP was handled, but it was a bit slower
})

// sync version
var real = rp.realpathSync(someLongAndLoopingPath)

// monkeypatch at your own risk!
// This replaces the fs.realpath/fs.realpathSync builtins
rp.monkeypatch()

// un-do the monkeypatching
rp.unmonkeypatch()
```

# require-main-filename

[![Build Status](https://travis-ci.org/yargs/require-main-filename.png)](https://travis-ci.org/yargs/require-main-filename)
[![Coverage Status](https://coveralls.io/repos/yargs/require-main-filename/badge.svg?branch=master)](https://coveralls.io/r/yargs/require-main-filename?branch=master)
[![NPM version](https://img.shields.io/npm/v/require-main-filename.svg)](https://www.npmjs.com/package/require-main-filename)

`require.main.filename` is great for figuring out the entry point for the current application. This can be combined with a module like [pkg-conf](https://www.npmjs.com/package/pkg-conf) to, _as if by magic_, load top-level configuration.

Unfortunately, `require.main.filename` sometimes fails when an application is executed with an alternative process manager, e.g., [iisnode](https://github.com/tjanczuk/iisnode).

`require-main-filename` is a shim that addresses this problem.

## Usage

```js
var main = require('require-main-filename')()
// use main as an alternative to require.main.filename.
```

## License

ISC

# debug

[![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors)

<img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png">

A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers.

## Installation

```bash
$ npm install debug
```

## Usage

`debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole.

Example [_app.js_](./examples/node/app.js):

```js
var debug = require('debug')('http')
  , http = require('http')
  , name = 'My App';

// fake app

debug('booting %o', name);

http.createServer(function(req, res){
  debug(req.method + ' ' + req.url);
  res.end('hello\n');
}).listen(3000, function(){
  debug('listening');
});

// fake worker of some kind

require('./worker');
```

Example [_worker.js_](./examples/node/worker.js):

```js
var a = require('debug')('worker:a')
  , b = require('debug')('worker:b');

function work() {
  a('doing lots of uninteresting work');
  setTimeout(work, Math.random() * 1000);
}

work();

function workb() {
  b('doing some work');
  setTimeout(workb, Math.random() * 2000);
}

workb();
```

The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names.
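For instance, a small illustration based on the `app.js` and `worker.js` files above (the exact namespace combinations are just examples, not output from an actual run):

```bash
# enable only the http namespace from app.js
DEBUG=http node app.js

# enable the http namespace plus both workers
DEBUG=http,worker:a,worker:b node app.js

# enable everything
DEBUG=* node app.js
```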
Here are some examples:

<img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png">
<img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png">
<img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png">

#### Windows note

On Windows the environment variable is set using the `set` command.

```cmd
set DEBUG=*,-not_this
```

Note that PowerShell uses different syntax to set environment variables.

```cmd
$env:DEBUG = "*,-not_this"
```

Then, run the program to be debugged as usual.

## Namespace Colors

Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to.

#### Node.js

In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors.

<img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png">

#### Web Browser

Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version).

<img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png">

## Millisecond diff

When actively developing an application it can be useful to see how much time is spent between one `debug()` call and the next. Suppose, for example, you invoke `debug()` before requesting a resource and again afterwards; the "+NNNms" will show you how much time was spent between calls.

<img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png">

When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below:

<img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png">

## Conventions

If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debugger you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output.

## Wildcards

The `*` character may be used as a wildcard. Suppose, for example, your library has debuggers named "connect:bodyParser", "connect:compress" and "connect:session". Instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`.

You can also exclude specific debuggers by prefixing them with a "-" character.
For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. | | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! 
log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. [[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" 
target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. 
[[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. [![build status](https://secure.travis-ci.org/dankogai/js-base64.png)](http://travis-ci.org/dankogai/js-base64) # base64.js Yet another [Base64] transcoder. [Base64]: http://en.wikipedia.org/wiki/Base64 ## HEADS UP In version 3.0 `js-base64` switch to ES2015 module so it is no longer compatible with legacy browsers like IE (see below). And since version 3.3 it is written in TypeScript. Now `base64.mjs` is compiled from `base64.ts` then `base64.js` is generated from `base64.mjs`. ## Install ```shell $ npm install --save js-base64 ``` ## Usage ### In Browser Locally… ```html <script src="base64.js"></script> ``` … or Directly from CDN. In which case you don't even need to install. ```html <script src="https://cdn.jsdelivr.net/npm/[email protected]/base64.min.js"></script> ``` This good old way loads `Base64` in the global context (`window`). Though `Base64.noConflict()` is made available, you should consider using ES6 Module to avoid tainting `window`. ### As an ES6 Module locally… ```javascript import { Base64 } from 'js-base64'; ``` ```javascript // or if you prefer no Base64 namespace import { encode, decode } from 'js-base64'; ``` or even remotely. 
```html <script type="module"> // note jsdelivr.net does not automatically minify .mjs import { Base64 } from 'https://cdn.jsdelivr.net/npm/[email protected]/base64.mjs'; </script> ``` ```html <script type="module"> // or if you prefer no Base64 namespace import { encode, decode } from 'https://cdn.jsdelivr.net/npm/[email protected]/base64.mjs'; </script> ``` ### node.js (commonjs) ```javascript const {Base64} = require('js-base64'); ``` Unlike the case above, the global context is no longer modified. You can also use [esm] to `import` instead of `require`. [esm]: https://github.com/standard-things/esm ```javascript require=require('esm')(module); import {Base64} from 'js-base64'; ``` ## SYNOPSIS ```javascript let latin = 'dankogai'; let utf8 = '小飼弾' let u8s = new Uint8Array([100,97,110,107,111,103,97,105]); Base64.encode(latin); // ZGFua29nYWk= Base64.btoa(latin); // ZGFua29nYWk= Base64.btoa(utf8); // raises exception Base64.fromUint8Array(u8s); // ZGFua29nYWk= Base64.fromUint8Array(u8s, true); // ZGFua29nYW which is URI safe Base64.encode(utf8); // 5bCP6aO85by+ Base64.encode(utf8, true) // 5bCP6aO85by- Base64.encodeURI(utf8); // 5bCP6aO85by- ``` ```javascript Base64.decode( 'ZGFua29nYWk=');// dankogai Base64.atob( 'ZGFua29nYWk=');// dankogai Base64.atob( '5bCP6aO85by+');// '小飼弾' which is nonsense Base64.toUint8Array('ZGFua29nYWk=');// u8s above Base64.decode( '5bCP6aO85by+');// 小飼弾 // note .decodeURI() is unnecessary since it accepts both flavors Base64.decode( '5bCP6aO85by-');// 小飼弾 ``` ```javascript Base64.isValid(0); // false: 0 is not string Base64.isValid(''); // true: a valid Base64-encoded empty byte Base64.isValid('ZA=='); // true: a valid Base64-encoded 'd' Base64.isValid('Z A='); // true: whitespaces are okay Base64.isValid('ZA'); // true: padding ='s can be omitted Base64.isValid('++'); // true: can be non URL-safe Base64.isValid('--'); // true: or URL-safe Base64.isValid('+-'); // false: can't mix both ``` ### Built-in Extensions By default `Base64` leaves built-in prototypes untouched. But you can extend them as below. ```javascript // you have to explicitly extend String.prototype Base64.extendString(); // once extended, you can do the following 'dankogai'.toBase64(); // ZGFua29nYWk= '小飼弾'.toBase64(); // 5bCP6aO85by+ '小飼弾'.toBase64(true); // 5bCP6aO85by- '小飼弾'.toBase64URI(); // 5bCP6aO85by- ab alias of .toBase64(true) '小飼弾'.toBase64URL(); // 5bCP6aO85by- an alias of .toBase64URI() 'ZGFua29nYWk='.fromBase64(); // dankogai '5bCP6aO85by+'.fromBase64(); // 小飼弾 '5bCP6aO85by-'.fromBase64(); // 小飼弾 '5bCP6aO85by-'.toUint8Array();// u8s above ``` ```javascript // you have to explicitly extend String.prototype Base64.extendString(); // once extended, you can do the following u8s.toBase64(); // 'ZGFua29nYWk=' u8s.toBase64URI(); // 'ZGFua29nYWk' u8s.toBase64URL(); // 'ZGFua29nYWk' an alias of .toBase64URI() ``` ```javascript // extend all at once Base64.extendBuiltins() ``` ## `.decode()` vs `.atob` (and `.encode()` vs `btoa()`) Suppose you have: ``` var pngBase64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII="; ``` Which is a Base64-encoded 1x1 transparent PNG, **DO NOT USE** `Base64.decode(pngBase64)`.  Use `Base64.atob(pngBase64)` instead.  `Base64.decode()` decodes to UTF-8 string while `Base64.atob()` decodes to bytes, which is compatible to browser built-in `atob()` (Which is absent in node.js).  The same rule applies to the opposite direction. Or even better, `Base64.toUint8Array(pngBase64)`. 
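A minimal sketch of that distinction, reusing the `pngBase64` string above (all three calls are part of the API described in this section):

```javascript
const { Base64 } = require('js-base64');

// the 1x1 transparent PNG from the paragraph above
const pngBase64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=";

// bytes as a Uint8Array -- usually what you want for binary data
const bytes = Base64.toUint8Array(pngBase64);

// bytes as a "binary string" (one character per byte), like the browser's built-in atob()
const binaryString = Base64.atob(pngBase64);

// Base64.decode(pngBase64) would instead try to interpret the bytes as UTF-8 text,
// which is not meaningful for a PNG
```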
### If you really, really need an ES5 version

You can transpile to an ES5 version that runs on IE11. Do the following in your shell.

```shell
$ make base64.es5.js
```

# tr46.js

> An implementation of the [Unicode TR46 specification](http://unicode.org/reports/tr46/).

## Installation

[Node.js](http://nodejs.org) `>= 6` is required. To install, type this at the command line:

```shell
npm install tr46
```

## API

### `toASCII(domainName[, options])`

Converts a string of Unicode symbols to a case-folded Punycode string of ASCII symbols.

Available options:

* [`checkBidi`](#checkBidi)
* [`checkHyphens`](#checkHyphens)
* [`checkJoiners`](#checkJoiners)
* [`processingOption`](#processingOption)
* [`useSTD3ASCIIRules`](#useSTD3ASCIIRules)
* [`verifyDNSLength`](#verifyDNSLength)

### `toUnicode(domainName[, options])`

Converts a case-folded Punycode string of ASCII symbols to a string of Unicode symbols.

Available options:

* [`checkBidi`](#checkBidi)
* [`checkHyphens`](#checkHyphens)
* [`checkJoiners`](#checkJoiners)
* [`useSTD3ASCIIRules`](#useSTD3ASCIIRules)

## Options

### `checkBidi`

Type: `Boolean`
Default value: `false`
When set to `true`, any bi-directional text within the input will be checked for validation.

### `checkHyphens`

Type: `Boolean`
Default value: `false`
When set to `true`, the positions of any hyphen characters within the input will be checked for validation.

### `checkJoiners`

Type: `Boolean`
Default value: `false`
When set to `true`, any word joiner characters within the input will be checked for validation.

### `processingOption`

Type: `String`
Default value: `"nontransitional"`
When set to `"transitional"`, symbols within the input will be validated according to the older IDNA2003 protocol. When set to `"nontransitional"`, the current IDNA2008 protocol will be used.

### `useSTD3ASCIIRules`

Type: `Boolean`
Default value: `false`
When set to `true`, input will be validated according to [STD3 Rules](http://unicode.org/reports/tr46/#STD3_Rules).

### `verifyDNSLength`

Type: `Boolean`
Default value: `false`
When set to `true`, the length of each DNS label within the input will be checked for validation.

# y18n

[![Build Status][travis-image]][travis-url]
[![Coverage Status][coveralls-image]][coveralls-url]
[![NPM version][npm-image]][npm-url]
[![js-standard-style][standard-image]][standard-url]
[![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org)

The bare-bones internationalization library used by yargs.

Inspired by [i18n](https://www.npmjs.com/package/i18n).

## Examples

_simple string translation:_

```js
var __ = require('y18n').__

console.log(__('my awesome string %s', 'foo'))
```

output: `my awesome string foo`

_using tagged template literals:_

```js
var __ = require('y18n').__

var str = 'foo'

console.log(__`my awesome string ${str}`)
```

output: `my awesome string foo`

_pluralization support:_

```js
var __n = require('y18n').__n

console.log(__n('one fish %s', '%d fishes %s', 2, 'foo'))
```

output: `2 fishes foo`

## JSON Language Files

The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`.

When strings are observed for the first time they will be added to the JSON file corresponding to the current locale.

## Methods

### require('y18n')(config)

Create an instance of y18n with the config provided; options include:

* `directory`: the locale directory, default `./locales`.
* `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## License ISC [travis-url]: https://travis-ci.org/yargs/y18n [travis-image]: https://img.shields.io/travis/yargs/y18n.svg [coveralls-url]: https://coveralls.io/github/yargs/y18n [coveralls-image]: https://img.shields.io/coveralls/yargs/y18n.svg [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard # ASBuild A simple build tool for [AssemblyScript](https://assemblyscript.org) projects, similar to `cargo`, etc. ## Usage ``` asb [entry file] [options] -- [args passed to asc] ``` ### Background AssemblyScript greater than v0.14.4 provides a `asconfig.json` configuration file that can be used to describe the options for building a project. ASBuild uses this and some defaults to create an easier CLI interface. ### Defaults #### Project structure ``` project/ package.json asconfig.json assembly/ index.ts build/ release/ project.wasm debug/ project.wasm ``` - If no entry file passed and no `entry` field is in `asconfig.json`, `project/assembly/index.ts` is assumed. - `asconfig.json` allows for options for different compile targets, e.g. release, debug, etc. `asc` defaults to the release target. - The default build directory is `./build`, and artifacts are placed at `./build/<target>/packageName.wasm`. ### Workspaces If a `workspace` field is added to a top level `asconfig.json` file, then each path in the array is built and placed into the top level `outDir`. For example, `asconfig.json`: ```json { "workspaces": ["a", "b"] } ``` Running `asb` in the directory below will use the top level build directory to place all the binaries. ``` project/ package.json asconfig.json a/ asconfig.json assembly/ index.ts b/ asconfig.json assembly/ index.ts build/ release/ a.wasm b.wasm debug/ a.wasm b.wasm ``` To see an example in action check out the [test workspace](./test) # axios // core The modules found in `core/` should be modules that are specific to the domain logic of axios. These modules would most likely not make sense to be consumed outside of the axios module, as their logic is too specific. Some examples of core modules are: - Dispatching requests - Managing interceptors - Handling config binaryen.js =========== **binaryen.js** is a port of [Binaryen](https://github.com/WebAssembly/binaryen) to the Web, allowing you to generate [WebAssembly](https://webassembly.org) using a JavaScript API. 
<a href="https://github.com/AssemblyScript/binaryen.js/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/binaryen.js/Build/master?label=build&logo=github" alt="Build status" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen.svg?label=latest&color=007acc&logo=npm" alt="npm version" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen/nightly.svg?label=nightly&color=007acc&logo=npm" alt="npm nightly version" /></a> Usage ----- ``` $> npm install binaryen ``` ```js var binaryen = require("binaryen"); // Create a module with a single function var myModule = new binaryen.Module(); myModule.addFunction("add", binaryen.createType([ binaryen.i32, binaryen.i32 ]), binaryen.i32, [ binaryen.i32 ], myModule.block(null, [ myModule.local.set(2, myModule.i32.add( myModule.local.get(0, binaryen.i32), myModule.local.get(1, binaryen.i32) ) ), myModule.return( myModule.local.get(2, binaryen.i32) ) ]) ); myModule.addFunctionExport("add", "add"); // Optimize the module using default passes and levels myModule.optimize(); // Validate the module if (!myModule.validate()) throw new Error("validation error"); // Generate text format and binary var textData = myModule.emitText(); var wasmData = myModule.emitBinary(); // Example usage with the WebAssembly API var compiled = new WebAssembly.Module(wasmData); var instance = new WebAssembly.Instance(compiled, {}); console.log(instance.exports.add(41, 1)); ``` The buildbot also publishes nightly versions once a day if there have been changes. The latest nightly can be installed through ``` $> npm install binaryen@nightly ``` or you can use one of the [previous versions](https://github.com/AssemblyScript/binaryen.js/tags) instead if necessary. ### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/AssemblyScript/binaryen.js@VERSION/index.js` * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/binaryen@VERSION/index.js` * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/binaryen@VERSION/index.js` Replace `VERSION` with a [specific version](https://github.com/AssemblyScript/binaryen.js/releases) or omit it (not recommended in production) to use master/latest. API --- **Please note** that the Binaryen API is evolving fast and that definitions and documentation provided by the package tend to get out of sync despite our best efforts. It's a bot after all. If you rely on binaryen.js and spot an issue, please consider sending a PR our way by updating [index.d.ts](./index.d.ts) and [README.md](./README.md) to reflect the [current API](https://github.com/WebAssembly/binaryen/blob/master/src/js/binaryen.js-post.js). 
<!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ### Contents - [Types](#types) - [Module construction](#module-construction) - [Module manipulation](#module-manipulation) - [Module validation](#module-validation) - [Module optimization](#module-optimization) - [Module creation](#module-creation) - [Expression construction](#expression-construction) - [Control flow](#control-flow) - [Variable accesses](#variable-accesses) - [Integer operations](#integer-operations) - [Floating point operations](#floating-point-operations) - [Datatype conversions](#datatype-conversions) - [Function calls](#function-calls) - [Linear memory accesses](#linear-memory-accesses) - [Host operations](#host-operations) - [Vector operations 🦄](#vector-operations-) - [Atomic memory accesses 🦄](#atomic-memory-accesses-) - [Atomic read-modify-write operations 🦄](#atomic-read-modify-write-operations-) - [Atomic wait and notify operations 🦄](#atomic-wait-and-notify-operations-) - [Sign extension operations 🦄](#sign-extension-operations-) - [Multi-value operations 🦄](#multi-value-operations-) - [Exception handling operations 🦄](#exception-handling-operations-) - [Reference types operations 🦄](#reference-types-operations-) - [Expression manipulation](#expression-manipulation) - [Relooper](#relooper) - [Source maps](#source-maps) - [Debugging](#debugging) <!-- END doctoc generated TOC please keep comment here to allow auto update --> [Future features](http://webassembly.org/docs/future-features/) 🦄 might not be supported by all runtimes. ### Types * **none**: `Type`<br /> The none type, e.g., `void`. * **i32**: `Type`<br /> 32-bit integer type. * **i64**: `Type`<br /> 64-bit integer type. * **f32**: `Type`<br /> 32-bit float type. * **f64**: `Type`<br /> 64-bit float (double) type. * **v128**: `Type`<br /> 128-bit vector type. 🦄 * **funcref**: `Type`<br /> A function reference. 🦄 * **anyref**: `Type`<br /> Any host reference. 🦄 * **nullref**: `Type`<br /> A null reference. 🦄 * **exnref**: `Type`<br /> An exception reference. 🦄 * **unreachable**: `Type`<br /> Special type indicating unreachable code when obtaining information about an expression. * **auto**: `Type`<br /> Special type used in **Module#block** exclusively. Lets the API figure out a block's result type automatically. * **createType**(types: `Type[]`): `Type`<br /> Creates a multi-value type from an array of types. * **expandType**(type: `Type`): `Type[]`<br /> Expands a multi-value type to an array of types. ### Module construction * new **Module**()<br /> Constructs a new module. * **parseText**(text: `string`): `Module`<br /> Creates a module from Binaryen's s-expression text format (not official stack-style text format). * **readBinary**(data: `Uint8Array`): `Module`<br /> Creates a module from binary data. ### Module manipulation * Module#**addFunction**(name: `string`, params: `Type`, results: `Type`, vars: `Type[]`, body: `ExpressionRef`): `FunctionRef`<br /> Adds a function. `vars` indicate additional locals, in the given order. * Module#**getFunction**(name: `string`): `FunctionRef`<br /> Gets a function, by name, * Module#**removeFunction**(name: `string`): `void`<br /> Removes a function, by name. * Module#**getNumFunctions**(): `number`<br /> Gets the number of functions within the module. * Module#**getFunctionByIndex**(index: `number`): `FunctionRef`<br /> Gets the function at the specified index. 
* Module#**addFunctionImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, params: `Type`, results: `Type`): `void`<br /> Adds a function import. * Module#**addTableImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a table import. There's just one table for now, using name `"0"`. * Module#**addMemoryImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a memory import. There's just one memory for now, using name `"0"`. * Module#**addGlobalImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, globalType: `Type`): `void`<br /> Adds a global variable import. Imported globals must be immutable. * Module#**addFunctionExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a function export. * Module#**addTableExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a table export. There's just one table for now, using name `"0"`. * Module#**addMemoryExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a memory export. There's just one memory for now, using name `"0"`. * Module#**addGlobalExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a global variable export. Exported globals must be immutable. * Module#**getNumExports**(): `number`<br /> Gets the number of exports witin the module. * Module#**getExportByIndex**(index: `number`): `ExportRef`<br /> Gets the export at the specified index. * Module#**removeExport**(externalName: `string`): `void`<br /> Removes an export, by external name. * Module#**addGlobal**(name: `string`, type: `Type`, mutable: `number`, value: `ExpressionRef`): `GlobalRef`<br /> Adds a global instance variable. * Module#**getGlobal**(name: `string`): `GlobalRef`<br /> Gets a global, by name, * Module#**removeGlobal**(name: `string`): `void`<br /> Removes a global, by name. * Module#**setFunctionTable**(initial: `number`, maximum: `number`, funcs: `string[]`, offset?: `ExpressionRef`): `void`<br /> Sets the contents of the function table. There's just one table for now, using name `"0"`. * Module#**getFunctionTable**(): `{ imported: boolean, segments: TableElement[] }`<br /> Gets the contents of the function table. * TableElement#**offset**: `ExpressionRef` * TableElement#**names**: `string[]` * Module#**setMemory**(initial: `number`, maximum: `number`, exportName: `string | null`, segments: `MemorySegment[]`, flags?: `number[]`, shared?: `boolean`): `void`<br /> Sets the memory. There's just one memory for now, using name `"0"`. Providing `exportName` also creates a memory export. * MemorySegment#**offset**: `ExpressionRef` * MemorySegment#**data**: `Uint8Array` * MemorySegment#**passive**: `boolean` * Module#**getNumMemorySegments**(): `number`<br /> Gets the number of memory segments within the module. * Module#**getMemorySegmentInfoByIndex**(index: `number`): `MemorySegmentInfo`<br /> Gets information about the memory segment at the specified index. * MemorySegmentInfo#**offset**: `number` * MemorySegmentInfo#**data**: `Uint8Array` * MemorySegmentInfo#**passive**: `boolean` * Module#**setStart**(start: `FunctionRef`): `void`<br /> Sets the start function. * Module#**getFeatures**(): `Features`<br /> Gets the WebAssembly features enabled for this module. Note that the return value may be a bitmask indicating multiple features. 
Possible feature flags are:

* Features.**MVP**: `Features`
* Features.**Atomics**: `Features`
* Features.**BulkMemory**: `Features`
* Features.**MutableGlobals**: `Features`
* Features.**NontrappingFPToInt**: `Features`
* Features.**SignExt**: `Features`
* Features.**SIMD128**: `Features`
* Features.**ExceptionHandling**: `Features`
* Features.**TailCall**: `Features`
* Features.**ReferenceTypes**: `Features`
* Features.**Multivalue**: `Features`
* Features.**All**: `Features`

* Module#**setFeatures**(features: `Features`): `void`<br /> Sets the WebAssembly features enabled for this module.
* Module#**addCustomSection**(name: `string`, contents: `Uint8Array`): `void`<br /> Adds a custom section to the binary.
* Module#**autoDrop**(): `void`<br /> Enables automatic insertion of `drop` operations where needed. Lets you not worry about dropping when creating your code.
* **getFunctionInfo**(ftype: `FunctionRef`): `FunctionInfo`<br /> Obtains information about a function.
* FunctionInfo#**name**: `string`
* FunctionInfo#**module**: `string | null` (if imported)
* FunctionInfo#**base**: `string | null` (if imported)
* FunctionInfo#**params**: `Type`
* FunctionInfo#**results**: `Type`
* FunctionInfo#**vars**: `Type`
* FunctionInfo#**body**: `ExpressionRef`
* **getGlobalInfo**(global: `GlobalRef`): `GlobalInfo`<br /> Obtains information about a global.
* GlobalInfo#**name**: `string`
* GlobalInfo#**module**: `string | null` (if imported)
* GlobalInfo#**base**: `string | null` (if imported)
* GlobalInfo#**type**: `Type`
* GlobalInfo#**mutable**: `boolean`
* GlobalInfo#**init**: `ExpressionRef`
* **getExportInfo**(export_: `ExportRef`): `ExportInfo`<br /> Obtains information about an export.
* ExportInfo#**kind**: `ExternalKind`
* ExportInfo#**name**: `string`
* ExportInfo#**value**: `string`

Possible `ExternalKind` values are:

* **ExternalFunction**: `ExternalKind`
* **ExternalTable**: `ExternalKind`
* **ExternalMemory**: `ExternalKind`
* **ExternalGlobal**: `ExternalKind`
* **ExternalEvent**: `ExternalKind`

* **getEventInfo**(event: `EventRef`): `EventInfo`<br /> Obtains information about an event.
* EventInfo#**name**: `string`
* EventInfo#**module**: `string | null` (if imported)
* EventInfo#**base**: `string | null` (if imported)
* EventInfo#**attribute**: `number`
* EventInfo#**params**: `Type`
* EventInfo#**results**: `Type`
* **getSideEffects**(expr: `ExpressionRef`, features: `FeatureFlags`): `SideEffects`<br /> Gets the side effects of the specified expression.
* SideEffects.**None**: `SideEffects`
* SideEffects.**Branches**: `SideEffects`
* SideEffects.**Calls**: `SideEffects`
* SideEffects.**ReadsLocal**: `SideEffects`
* SideEffects.**WritesLocal**: `SideEffects`
* SideEffects.**ReadsGlobal**: `SideEffects`
* SideEffects.**WritesGlobal**: `SideEffects`
* SideEffects.**ReadsMemory**: `SideEffects`
* SideEffects.**WritesMemory**: `SideEffects`
* SideEffects.**ImplicitTrap**: `SideEffects`
* SideEffects.**IsAtomic**: `SideEffects`
* SideEffects.**Throws**: `SideEffects`
* SideEffects.**Any**: `SideEffects`

### Module validation

* Module#**validate**(): `boolean`<br /> Validates the module. Returns `true` if valid, otherwise prints validation errors and returns `false`.

### Module optimization

* Module#**optimize**(): `void`<br /> Optimizes the module using the default optimization passes.
* Module#**optimizeFunction**(func: `FunctionRef | string`): `void`<br /> Optimizes a single function using the default optimization passes.
* Module#**runPasses**(passes: `string[]`): `void`<br /> Runs the specified passes on the module. * Module#**runPassesOnFunction**(func: `FunctionRef | string`, passes: `string[]`): `void`<br /> Runs the specified passes on a single function. * **getOptimizeLevel**(): `number`<br /> Gets the currently set optimize level. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **setOptimizeLevel**(level: `number`): `void`<br /> Sets the optimization level to use. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **getShrinkLevel**(): `number`<br /> Gets the currently set shrink level. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **setShrinkLevel**(level: `number`): `void`<br /> Sets the shrink level to use. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **getDebugInfo**(): `boolean`<br /> Gets whether generating debug information is currently enabled or not. * **setDebugInfo**(on: `boolean`): `void`<br /> Enables or disables debug information in emitted binaries. * **getLowMemoryUnused**(): `boolean`<br /> Gets whether the low 1K of memory can be considered unused when optimizing. * **setLowMemoryUnused**(on: `boolean`): `void`<br /> Enables or disables whether the low 1K of memory can be considered unused when optimizing. * **getPassArgument**(key: `string`): `string | null`<br /> Gets the value of the specified arbitrary pass argument. * **setPassArgument**(key: `string`, value: `string | null`): `void`<br /> Sets the value of the specified arbitrary pass argument. Removes the respective argument if `value` is `null`. * **clearPassArguments**(): `void`<br /> Clears all arbitrary pass arguments. * **getAlwaysInlineMaxSize**(): `number`<br /> Gets the function size at which we always inline. * **setAlwaysInlineMaxSize**(size: `number`): `void`<br /> Sets the function size at which we always inline. * **getFlexibleInlineMaxSize**(): `number`<br /> Gets the function size which we inline when functions are lightweight. * **setFlexibleInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when functions are lightweight. * **getOneCallerInlineMaxSize**(): `number`<br /> Gets the function size which we inline when there is only one caller. * **setOneCallerInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when there is only one caller. ### Module creation * Module#**emitBinary**(): `Uint8Array`<br /> Returns the module in binary format. * Module#**emitBinary**(sourceMapUrl: `string | null`): `BinaryWithSourceMap`<br /> Returns the module in binary format with its source map. If `sourceMapUrl` is `null`, source map generation is skipped. * BinaryWithSourceMap#**binary**: `Uint8Array` * BinaryWithSourceMap#**sourceMap**: `string | null` * Module#**emitText**(): `string`<br /> Returns the module in Binaryen's s-expression text format (not official stack-style text format). * Module#**emitAsmjs**(): `string`<br /> Returns the [asm.js](http://asmjs.org/) representation of the module. * Module#**dispose**(): `void`<br /> Releases the resources held by the module once it isn't needed anymore. ### Expression construction #### [Control flow](http://webassembly.org/docs/semantics/#control-constructs-and-instructions) * Module#**block**(label: `string | null`, children: `ExpressionRef[]`, resultType?: `Type`): `ExpressionRef`<br /> Creates a block. `resultType` defaults to `none`. 
* Module#**if**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse?: `ExpressionRef`): `ExpressionRef`<br /> Creates an if or if/else combination. * Module#**loop**(label: `string | null`, body: `ExpressionRef`): `ExpressionRef`<br /> Creates a loop. * Module#**br**(label: `string`, condition?: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a branch (br) to a label. * Module#**switch**(labels: `string[]`, defaultLabel: `string`, condition: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a switch (br_table). * Module#**nop**(): `ExpressionRef`<br /> Creates a no-operation (nop) instruction. * Module#**return**(value?: `ExpressionRef`): `ExpressionRef` Creates a return. * Module#**unreachable**(): `ExpressionRef`<br /> Creates an [unreachable](http://webassembly.org/docs/semantics/#unreachable) instruction that will always trap. * Module#**drop**(value: `ExpressionRef`): `ExpressionRef`<br /> Creates a [drop](http://webassembly.org/docs/semantics/#type-parametric-operators) of a value. * Module#**select**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse: `ExpressionRef`, type?: `Type`): `ExpressionRef`<br /> Creates a [select](http://webassembly.org/docs/semantics/#type-parametric-operators) of one of two values. #### [Variable accesses](http://webassembly.org/docs/semantics/#local-variables) * Module#**local.get**(index: `number`, type: `Type`): `ExpressionRef`<br /> Creates a local.get for the local at the specified index. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**local.set**(index: `number`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a local.set for the local at the specified index. * Module#**local.tee**(index: `number`, value: `ExpressionRef`, type: `Type`): `ExpressionRef`<br /> Creates a local.tee for the local at the specified index. A tee differs from a set in that the value remains on the stack. Note that we must specify the type here as we may not have created the local being accessed yet. * Module#**global.get**(name: `string`, type: `Type`): `ExpressionRef`<br /> Creates a global.get for the global with the specified name. Note that we must specify the type here as we may not have created the global being accessed yet. * Module#**global.set**(name: `string`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a global.set for the global with the specified name. 
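As a rough sketch of how these constructors compose (the global name `counter`, the function name `bump` and its export are made up for this illustration; the calls themselves are the ones documented above):

```js
var binaryen = require("binaryen");
var m = new binaryen.Module();

// a mutable i32 global, initialized to 0
m.addGlobal("counter", binaryen.i32, true, m.i32.const(0));

// bump(): increments the global and returns the new value
m.addFunction("bump", binaryen.none, binaryen.i32, [],
  m.block(null, [
    m.global.set("counter",
      m.i32.add(m.global.get("counter", binaryen.i32), m.i32.const(1))),
    m.global.get("counter", binaryen.i32)
  ], binaryen.i32)
);
m.addFunctionExport("bump", "bump");
```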
#### [Integer operations](http://webassembly.org/docs/semantics/#32-bit-integer-operators) * Module#i32.**const**(value: `number`): `ExpressionRef` * Module#i32.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i64.**const**(value: `number`): `ExpressionRef` * Module#i64.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * 
Module#i64.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Floating point operations](http://webassembly.org/docs/semantics/#floating-point-operators) * Module#f32.**const**(value: `number`): `ExpressionRef` * Module#f32.**const_bits**(value: `number`): `ExpressionRef` * Module#f32.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#f64.**const**(value: `number`): `ExpressionRef` * Module#f64.**const_bits**(value: `number`): `ExpressionRef` * Module#f64.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**ceil**(value: `ExpressionRef`): `ExpressionRef` * 
Module#f64.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Datatype conversions](http://webassembly.org/docs/semantics/#datatype-conversions-truncations-reinterpretations-promotions-and-demotions) * Module#i32.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**wrap**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**demote**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**promote**(value: `ExpressionRef`): `ExpressionRef` #### [Function calls](http://webassembly.org/docs/semantics/#calls) * Module#**call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef` Creates a call to a function. 
Note that we must specify the return type here as we may not have created the function being called yet. * Module#**return_call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef`<br /> Like **call**, but creates a tail-call. 🦄 * Module#**call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Similar to **call**, but calls indirectly, i.e., via a function pointer, so an expression replaces the name as the called value. * Module#**return_call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Like **call_indirect**, but creates a tail-call. 🦄 #### [Linear memory accesses](http://webassembly.org/docs/semantics/#linear-memory-accesses) * Module#i32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> > * Module#i64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store32**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Host operations](http://webassembly.org/docs/semantics/#resizing) * Module#**memory.size**(): `ExpressionRef` * Module#**memory.grow**(value: `number`): `ExpressionRef` #### [Vector 
operations](https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md) 🦄 * Module#v128.**const**(bytes: `Uint8Array`): `ExpressionRef` * Module#v128.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#v128.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#v128.**not**(value: `ExpressionRef`): `ExpressionRef` * Module#v128.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**andnot**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v128.**bitselect**(left: `ExpressionRef`, right: `ExpressionRef`, cond: `ExpressionRef`): `ExpressionRef` > * Module#i8x16.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i8x16.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**max_u**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i8x16.**narrow_i16x8_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i16x8.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i16x8.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**narrow_i32x4_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_low_i8x16_u**(value: 
`ExpressionRef`): `ExpressionRef` * Module#i16x8.**widen_high_i8x16_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i16x8.**load8x8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**dot_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**trunc_sat_f32x4_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_low_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**widen_high_i16x8_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32x4.**load16x4_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#i64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): 
`ExpressionRef` * Module#i64x2.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#f32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**lt**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#v8x16.**shuffle**(left: `ExpressionRef`, right: `ExpressionRef`, mask: `Uint8Array`): `ExpressionRef` * Module#v8x16.**swizzle**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v8x16.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v16x8.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v32x4.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v64x2.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` #### [Atomic memory accesses](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#atomic-memory-accesses) 🦄 * Module#i32.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load32_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store32**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` 
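Taken together, these builder methods are plain JavaScript calls that return `ExpressionRef`s, which you then hand to `addFunction`. Below is a minimal, illustrative sketch using the non-atomic `i32` load/store builders from the linear-memory section above; the function name, export name and exact `setMemory` arguments are assumptions and may need adjusting for your Binaryen version.

```js
var binaryen = require("binaryen");

var mod = new binaryen.Module();
mod.setMemory(1, 1, "memory", []); // 1 initial page, 1 maximum page, exported as "memory" (arguments assumed)

// bump(ptr: i32): void  --  mem[ptr] = mem[ptr] + 42
var body = mod.i32.store(
  0, 4,                              // constant offset, alignment
  mod.local.get(0, binaryen.i32),    // address expression (the `ptr` parameter)
  mod.i32.add(
    mod.i32.load(0, 4, mod.local.get(0, binaryen.i32)),
    mod.i32.const(42)
  )
);
mod.addFunction("bump", binaryen.createType([binaryen.i32]), binaryen.none, [], body);
mod.addFunctionExport("bump", "bump");

console.log(mod.emitText()); // prints the module in s-expression text format
```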
#### [Atomic read-modify-write operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#read-modify-write) 🦄 * Module#i32.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.add**(offset: 
`number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` #### [Atomic wait and notify operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#wait-and-notify-operators) 🦄 * Module#i32.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#**atomic.notify**(ptr: `ExpressionRef`, notifyCount: `ExpressionRef`): `ExpressionRef` * Module#**atomic.fence**(): `ExpressionRef` #### [Sign extension operations](https://github.com/WebAssembly/sign-extension-ops/blob/master/proposals/sign-extension-ops/Overview.md) 🦄 * Module#i32.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend32_s**(value: `ExpressionRef`): `ExpressionRef` #### [Multi-value 
operations](https://github.com/WebAssembly/multi-value/blob/master/proposals/multi-value/Overview.md) 🦄

Note that these are pseudo instructions enabling Binaryen to reason about multiple values on the stack.

* Module#**push**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32.**pop**(): `ExpressionRef`
* Module#i64.**pop**(): `ExpressionRef`
* Module#f32.**pop**(): `ExpressionRef`
* Module#f64.**pop**(): `ExpressionRef`
* Module#v128.**pop**(): `ExpressionRef`
* Module#funcref.**pop**(): `ExpressionRef`
* Module#anyref.**pop**(): `ExpressionRef`
* Module#nullref.**pop**(): `ExpressionRef`
* Module#exnref.**pop**(): `ExpressionRef`
* Module#tuple.**make**(elements: `ExpressionRef[]`): `ExpressionRef`
* Module#tuple.**extract**(tuple: `ExpressionRef`, index: `number`): `ExpressionRef`

#### [Exception handling operations](https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md) 🦄

* Module#**try**(body: `ExpressionRef`, catchBody: `ExpressionRef`): `ExpressionRef`
* Module#**throw**(event: `string`, operands: `ExpressionRef[]`): `ExpressionRef`
* Module#**rethrow**(exnref: `ExpressionRef`): `ExpressionRef`
* Module#**br_on_exn**(label: `string`, event: `string`, exnref: `ExpressionRef`): `ExpressionRef`

>

* Module#**addEvent**(name: `string`, attribute: `number`, params: `Type`, results: `Type`): `Event`
* Module#**getEvent**(name: `string`): `Event`
* Module#**removeEvent**(name: `string`): `void`
* Module#**addEventImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, attribute: `number`, params: `Type`, results: `Type`): `void`
* Module#**addEventExport**(internalName: `string`, externalName: `string`): `ExportRef`

#### [Reference types operations](https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md) 🦄

* Module#ref.**null**(): `ExpressionRef`
* Module#ref.**is_null**(value: `ExpressionRef`): `ExpressionRef`
* Module#ref.**func**(name: `string`): `ExpressionRef`

### Expression manipulation

* **getExpressionId**(expr: `ExpressionRef`): `ExpressionId`<br />
  Gets the id (kind) of the specified expression.
Possible values are: * **InvalidId**: `ExpressionId` * **BlockId**: `ExpressionId` * **IfId**: `ExpressionId` * **LoopId**: `ExpressionId` * **BreakId**: `ExpressionId` * **SwitchId**: `ExpressionId` * **CallId**: `ExpressionId` * **CallIndirectId**: `ExpressionId` * **LocalGetId**: `ExpressionId` * **LocalSetId**: `ExpressionId` * **GlobalGetId**: `ExpressionId` * **GlobalSetId**: `ExpressionId` * **LoadId**: `ExpressionId` * **StoreId**: `ExpressionId` * **ConstId**: `ExpressionId` * **UnaryId**: `ExpressionId` * **BinaryId**: `ExpressionId` * **SelectId**: `ExpressionId` * **DropId**: `ExpressionId` * **ReturnId**: `ExpressionId` * **HostId**: `ExpressionId` * **NopId**: `ExpressionId` * **UnreachableId**: `ExpressionId` * **AtomicCmpxchgId**: `ExpressionId` * **AtomicRMWId**: `ExpressionId` * **AtomicWaitId**: `ExpressionId` * **AtomicNotifyId**: `ExpressionId` * **AtomicFenceId**: `ExpressionId` * **SIMDExtractId**: `ExpressionId` * **SIMDReplaceId**: `ExpressionId` * **SIMDShuffleId**: `ExpressionId` * **SIMDTernaryId**: `ExpressionId` * **SIMDShiftId**: `ExpressionId` * **SIMDLoadId**: `ExpressionId` * **MemoryInitId**: `ExpressionId` * **DataDropId**: `ExpressionId` * **MemoryCopyId**: `ExpressionId` * **MemoryFillId**: `ExpressionId` * **RefNullId**: `ExpressionId` * **RefIsNullId**: `ExpressionId` * **RefFuncId**: `ExpressionId` * **TryId**: `ExpressionId` * **ThrowId**: `ExpressionId` * **RethrowId**: `ExpressionId` * **BrOnExnId**: `ExpressionId` * **PushId**: `ExpressionId` * **PopId**: `ExpressionId` * **getExpressionType**(expr: `ExpressionRef`): `Type`<br /> Gets the type of the specified expression. * **getExpressionInfo**(expr: `ExpressionRef`): `ExpressionInfo`<br /> Obtains information about an expression, always including: * Info#**id**: `ExpressionId` * Info#**type**: `Type` Additional properties depend on the expression's `id` and are usually equivalent to the respective parameters when creating such an expression: * BlockInfo#**name**: `string` * BlockInfo#**children**: `ExpressionRef[]` > * IfInfo#**condition**: `ExpressionRef` * IfInfo#**ifTrue**: `ExpressionRef` * IfInfo#**ifFalse**: `ExpressionRef | null` > * LoopInfo#**name**: `string` * LoopInfo#**body**: `ExpressionRef` > * BreakInfo#**name**: `string` * BreakInfo#**condition**: `ExpressionRef | null` * BreakInfo#**value**: `ExpressionRef | null` > * SwitchInfo#**names**: `string[]` * SwitchInfo#**defaultName**: `string | null` * SwitchInfo#**condition**: `ExpressionRef` * SwitchInfo#**value**: `ExpressionRef | null` > * CallInfo#**target**: `string` * CallInfo#**operands**: `ExpressionRef[]` > * CallImportInfo#**target**: `string` * CallImportInfo#**operands**: `ExpressionRef[]` > * CallIndirectInfo#**target**: `ExpressionRef` * CallIndirectInfo#**operands**: `ExpressionRef[]` > * LocalGetInfo#**index**: `number` > * LocalSetInfo#**isTee**: `boolean` * LocalSetInfo#**index**: `number` * LocalSetInfo#**value**: `ExpressionRef` > * GlobalGetInfo#**name**: `string` > * GlobalSetInfo#**name**: `string` * GlobalSetInfo#**value**: `ExpressionRef` > * LoadInfo#**isAtomic**: `boolean` * LoadInfo#**isSigned**: `boolean` * LoadInfo#**offset**: `number` * LoadInfo#**bytes**: `number` * LoadInfo#**align**: `number` * LoadInfo#**ptr**: `ExpressionRef` > * StoreInfo#**isAtomic**: `boolean` * StoreInfo#**offset**: `number` * StoreInfo#**bytes**: `number` * StoreInfo#**align**: `number` * StoreInfo#**ptr**: `ExpressionRef` * StoreInfo#**value**: `ExpressionRef` > * ConstInfo#**value**: `number | { low: number, high: number 
}` > * UnaryInfo#**op**: `number` * UnaryInfo#**value**: `ExpressionRef` > * BinaryInfo#**op**: `number` * BinaryInfo#**left**: `ExpressionRef` * BinaryInfo#**right**: `ExpressionRef` > * SelectInfo#**ifTrue**: `ExpressionRef` * SelectInfo#**ifFalse**: `ExpressionRef` * SelectInfo#**condition**: `ExpressionRef` > * DropInfo#**value**: `ExpressionRef` > * ReturnInfo#**value**: `ExpressionRef | null` > * NopInfo > * UnreachableInfo > * HostInfo#**op**: `number` * HostInfo#**nameOperand**: `string | null` * HostInfo#**operands**: `ExpressionRef[]` > * AtomicRMWInfo#**op**: `number` * AtomicRMWInfo#**bytes**: `number` * AtomicRMWInfo#**offset**: `number` * AtomicRMWInfo#**ptr**: `ExpressionRef` * AtomicRMWInfo#**value**: `ExpressionRef` > * AtomicCmpxchgInfo#**bytes**: `number` * AtomicCmpxchgInfo#**offset**: `number` * AtomicCmpxchgInfo#**ptr**: `ExpressionRef` * AtomicCmpxchgInfo#**expected**: `ExpressionRef` * AtomicCmpxchgInfo#**replacement**: `ExpressionRef` > * AtomicWaitInfo#**ptr**: `ExpressionRef` * AtomicWaitInfo#**expected**: `ExpressionRef` * AtomicWaitInfo#**timeout**: `ExpressionRef` * AtomicWaitInfo#**expectedType**: `Type` > * AtomicNotifyInfo#**ptr**: `ExpressionRef` * AtomicNotifyInfo#**notifyCount**: `ExpressionRef` > * AtomicFenceInfo > * SIMDExtractInfo#**op**: `Op` * SIMDExtractInfo#**vec**: `ExpressionRef` * SIMDExtractInfo#**index**: `ExpressionRef` > * SIMDReplaceInfo#**op**: `Op` * SIMDReplaceInfo#**vec**: `ExpressionRef` * SIMDReplaceInfo#**index**: `ExpressionRef` * SIMDReplaceInfo#**value**: `ExpressionRef` > * SIMDShuffleInfo#**left**: `ExpressionRef` * SIMDShuffleInfo#**right**: `ExpressionRef` * SIMDShuffleInfo#**mask**: `Uint8Array` > * SIMDTernaryInfo#**op**: `Op` * SIMDTernaryInfo#**a**: `ExpressionRef` * SIMDTernaryInfo#**b**: `ExpressionRef` * SIMDTernaryInfo#**c**: `ExpressionRef` > * SIMDShiftInfo#**op**: `Op` * SIMDShiftInfo#**vec**: `ExpressionRef` * SIMDShiftInfo#**shift**: `ExpressionRef` > * SIMDLoadInfo#**op**: `Op` * SIMDLoadInfo#**offset**: `number` * SIMDLoadInfo#**align**: `number` * SIMDLoadInfo#**ptr**: `ExpressionRef` > * MemoryInitInfo#**segment**: `number` * MemoryInitInfo#**dest**: `ExpressionRef` * MemoryInitInfo#**offset**: `ExpressionRef` * MemoryInitInfo#**size**: `ExpressionRef` > * MemoryDropInfo#**segment**: `number` > * MemoryCopyInfo#**dest**: `ExpressionRef` * MemoryCopyInfo#**source**: `ExpressionRef` * MemoryCopyInfo#**size**: `ExpressionRef` > * MemoryFillInfo#**dest**: `ExpressionRef` * MemoryFillInfo#**value**: `ExpressionRef` * MemoryFillInfo#**size**: `ExpressionRef` > * TryInfo#**body**: `ExpressionRef` * TryInfo#**catchBody**: `ExpressionRef` > * RefNullInfo > * RefIsNullInfo#**value**: `ExpressionRef` > * RefFuncInfo#**func**: `string` > * ThrowInfo#**event**: `string` * ThrowInfo#**operands**: `ExpressionRef[]` > * RethrowInfo#**exnref**: `ExpressionRef` > * BrOnExnInfo#**name**: `string` * BrOnExnInfo#**event**: `string` * BrOnExnInfo#**exnref**: `ExpressionRef` > * PopInfo > * PushInfo#**value**: `ExpressionRef` * **emitText**(expression: `ExpressionRef`): `string`<br /> Emits the expression in Binaryen's s-expression text format (not official stack-style text format). * **copyExpression**(expression: `ExpressionRef`): `ExpressionRef`<br /> Creates a deep copy of an expression. ### Relooper * new **Relooper**()<br /> Constructs a relooper instance. This lets you provide an arbitrary CFG, and the relooper will structure it for WebAssembly. 
* Relooper#**addBlock**(code: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block to the CFG, containing the provided code as its body. * Relooper#**addBranch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, condition: `ExpressionRef`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block to another block, with a condition (or nothing, if this is the default branch to take from the origin - each block must have one such branch), and optional code to execute on the branch (useful for phis). * Relooper#**addBlockWithSwitch**(code: `ExpressionRef`, condition: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block, which ends with a switch/br_table, with provided code and condition (that determines where we go in the switch). * Relooper#**addBranchForSwitch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, indexes: `number[]`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block ending in a switch, to another block, using an array of indexes that determine where to go, and optional code to execute on the branch. * Relooper#**renderAndDispose**(entry: `RelooperBlockRef`, labelHelper: `number`, module: `Module`): `ExpressionRef`<br /> Renders and cleans up the Relooper instance. Call this after you have created all the blocks and branches, giving it the entry block (where control flow begins), a label helper variable (an index of a local we can use, necessary for irreducible control flow), and the module. This returns an expression - normal WebAssembly code - that you can use normally anywhere. ### Source maps * Module#**addDebugInfoFileName**(filename: `string`): `number`<br /> Adds a debug info file name to the module and returns its index. * Module#**getDebugInfoFileName**(index: `number`): `string | null` <br /> Gets the name of the debug info file at the specified index. * Module#**setDebugLocation**(func: `FunctionRef`, expr: `ExpressionRef`, fileIndex: `number`, lineNumber: `number`, columnNumber: `number`): `void`<br /> Sets the debug location of the specified `ExpressionRef` within the specified `FunctionRef`. ### Debugging * Module#**interpret**(): `void`<br /> Runs the module in the interpreter, calling the start function. # jsdiff [![Build Status](https://secure.travis-ci.org/kpdecker/jsdiff.svg)](http://travis-ci.org/kpdecker/jsdiff) [![Sauce Test Status](https://saucelabs.com/buildstatus/jsdiff)](https://saucelabs.com/u/jsdiff) A javascript text differencing implementation. Based on the algorithm proposed in ["An O(ND) Difference Algorithm and its Variations" (Myers, 1986)](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6927). ## Installation ```bash npm install diff --save ``` ## API * `Diff.diffChars(oldStr, newStr[, options])` - diffs two blocks of text, comparing character by character. Returns a list of change objects (See below). Options * `ignoreCase`: `true` to ignore casing difference. Defaults to `false`. * `Diff.diffWords(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, ignoring whitespace. Returns a list of change objects (See below). Options * `ignoreCase`: Same as in `diffChars`. * `Diff.diffWordsWithSpace(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, treating whitespace as significant. Returns a list of change objects (See below). * `Diff.diffLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line. Options * `ignoreWhitespace`: `true` to ignore leading and trailing whitespace. 
This is the same as `diffTrimmedLines` * `newlineIsToken`: `true` to treat newline characters as separate tokens. This allows for changes to the newline structure to occur independently of the line content and to be treated as such. In general this is the more human friendly form of `diffLines` and `diffLines` is better suited for patches and other computer friendly output. Returns a list of change objects (See below). * `Diff.diffTrimmedLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line, ignoring leading and trailing whitespace. Returns a list of change objects (See below). * `Diff.diffSentences(oldStr, newStr[, options])` - diffs two blocks of text, comparing sentence by sentence. Returns a list of change objects (See below). * `Diff.diffCss(oldStr, newStr[, options])` - diffs two blocks of text, comparing CSS tokens. Returns a list of change objects (See below). * `Diff.diffJson(oldObj, newObj[, options])` - diffs two JSON objects, comparing the fields defined on each. The order of fields, etc does not matter in this comparison. Returns a list of change objects (See below). * `Diff.diffArrays(oldArr, newArr[, options])` - diffs two arrays, comparing each item for strict equality (===). Options * `comparator`: `function(left, right)` for custom equality checks Returns a list of change objects (See below). * `Diff.createTwoFilesPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Parameters: * `oldFileName` : String to be output in the filename section of the patch for the removals * `newFileName` : String to be output in the filename section of the patch for the additions * `oldStr` : Original string value * `newStr` : New string value * `oldHeader` : Additional information to include in the old file header * `newHeader` : Additional information to include in the new file header * `options` : An object with options. Currently, only `context` is supported and describes how many lines of context should be included. * `Diff.createPatch(fileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Just like Diff.createTwoFilesPatch, but with oldFileName being equal to newFileName. * `Diff.structuredPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options)` - returns an object with an array of hunk objects. This method is similar to createTwoFilesPatch, but returns a data structure suitable for further processing. Parameters are the same as createTwoFilesPatch. The data structure returned may look like this: ```js { oldFileName: 'oldfile', newFileName: 'newfile', oldHeader: 'header1', newHeader: 'header2', hunks: [{ oldStart: 1, oldLines: 3, newStart: 1, newLines: 3, lines: [' line2', ' line3', '-line4', '+line5', '\\ No newline at end of file'], }] } ``` * `Diff.applyPatch(source, patch[, options])` - applies a unified diff patch. Return a string containing new version of provided data. `patch` may be a string diff or the output from the `parsePatch` or `structuredPatch` methods. The optional `options` object may have the following keys: - `fuzzFactor`: Number of lines that are allowed to differ before rejecting a patch. Defaults to 0. - `compareLine(lineNumber, line, operation, patchContent)`: Callback used to compare to given lines to determine if they should be considered equal when patching. Defaults to strict equality but may be overridden to provide fuzzier comparison. Should return false if the lines should be rejected. 
* `Diff.applyPatches(patch, options)` - applies one or more patches. This method will iterate over the contents of the patch and apply them to data provided through callbacks. The general flow for each patch index is:

  - `options.loadFile(index, callback)` is called. The caller should then load the contents of the file and pass that to the `callback(err, data)` callback. Passing an `err` will terminate further patch execution.
  - `options.patched(index, content, callback)` is called once the patch has been applied. `content` will be the return value from `applyPatch`. When it's ready, the caller should call the `callback(err)` callback. Passing an `err` will terminate further patch execution.

  Once all patches have been applied or an error occurs, the `options.complete(err)` callback is made.

* `Diff.parsePatch(diffStr)` - parses a patch into structured data. Returns a JSON object representation of the patch, suitable for use with the `applyPatch` method. This parses to the same structure returned by `Diff.structuredPatch`.

* `convertChangesToXML(changes)` - converts a list of changes to a serialized XML format.

All methods above which accept the optional `callback` method will run in sync mode when that parameter is omitted and in async mode when supplied. This allows for larger diffs without blocking the event loop. This may be passed either directly as the final parameter or as the `callback` field in the `options` object.

### Change Objects

Many of the methods above return change objects. These objects consist of the following fields:

* `value`: Text content
* `added`: True if the value was inserted into the new string
* `removed`: True if the value was removed from the old string

Note that some cases may omit a particular flag field. Comparison on the flag fields should always be done in a truthy or falsy manner.

## Examples

Basic example in Node

```js
require('colors');
const Diff = require('diff');

const one = 'beep boop';
const other = 'beep boob blah';

const diff = Diff.diffChars(one, other);

diff.forEach((part) => {
  // green for additions, red for deletions
  // grey for common parts
  const color = part.added ? 'green' : part.removed ? 'red' : 'grey';
  process.stderr.write(part.value[color]);
});

console.log();
```

Running the above program should yield

<img src="images/node_example.png" alt="Node Example">

Basic example in a web page

```html
<pre id="display"></pre>
<script src="diff.js"></script>
<script>
const one = 'beep boop',
  other = 'beep boob blah',
  color = '';
let span = null;

const diff = Diff.diffChars(one, other),
  display = document.getElementById('display'),
  fragment = document.createDocumentFragment();

diff.forEach((part) => {
  // green for additions, red for deletions
  // grey for common parts
  const color = part.added ? 'green' : part.removed ? 'red' : 'grey';
  span = document.createElement('span');
  span.style.color = color;
  span.appendChild(document.createTextNode(part.value));
  fragment.appendChild(span);
});

display.appendChild(fragment);
</script>
```

Open the above .html file in a browser and you should see

<img src="images/web_example.png" alt="Web Example">

**[Full online demo](http://kpdecker.github.com/jsdiff)**

## Compatibility

[![Sauce Test Status](https://saucelabs.com/browser-matrix/jsdiff.svg)](https://saucelabs.com/u/jsdiff)

jsdiff supports all ES3 environments with some known issues on IE8 and below. Under these browsers some diff algorithms such as word diff and others may fail due to lack of support for capturing groups in the `split` operation.
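Rounding out the patch-related API described above, the helpers compose as you would expect: create a unified diff from two strings, then apply it back to the original. A minimal sketch (the file name and contents are invented for illustration):

```js
const Diff = require('diff');

const oldStr = 'line1\nline2\nline3\n';
const newStr = 'line1\nline2 changed\nline3\n';

// Build a unified diff for a single (hypothetical) file, then apply it.
const patch = Diff.createPatch('example.txt', oldStr, newStr);
const patched = Diff.applyPatch(oldStr, patch);

console.log(patched === newStr); // should print true
```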
## License See [LICENSE](https://github.com/kpdecker/jsdiff/blob/master/LICENSE). # cliui ![ci](https://github.com/yargs/cliui/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/cliui) easily create complex multi-column command-line-interfaces. ## Example ```js const ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` ## Deno/ESM Support As of `v7` `cliui` supports [Deno](https://github.com/denoland/deno) and [ESM](https://nodejs.org/api/esm.html#esm_ecmascript_modules): ```typescript import cliui from "https://deno.land/x/cliui/deno.ts"; const ui = cliui({}) ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div({ text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. # randexp.js randexp will generate a random string that matches a given RegExp Javascript object. 
[![Build Status](https://secure.travis-ci.org/fent/randexp.js.svg)](http://travis-ci.org/fent/randexp.js) [![Dependency Status](https://david-dm.org/fent/randexp.js.svg)](https://david-dm.org/fent/randexp.js) [![codecov](https://codecov.io/gh/fent/randexp.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/randexp.js) # Usage ```js var RandExp = require('randexp'); // supports grouping and piping new RandExp(/hello+ (world|to you)/).gen(); // => hellooooooooooooooooooo world // sets and ranges and references new RandExp(/<([a-z]\w{0,20})>foo<\1>/).gen(); // => <m5xhdg>foo<m5xhdg> // wildcard new RandExp(/random stuff: .+/).gen(); // => random stuff: l3m;Hf9XYbI [YPaxV>U*4-_F!WXQh9>;rH3i l!8.zoh?[utt1OWFQrE ^~8zEQm]~tK // ignore case new RandExp(/xxx xtreme dragon warrior xxx/i).gen(); // => xxx xtReME dRAGON warRiOR xXX // dynamic regexp shortcut new RandExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i'); // is the same as new RandExp(new RegExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i')); ``` If you're only going to use `gen()` once with a regexp and want slightly shorter syntax for it ```js var randexp = require('randexp').randexp; randexp(/[1-6]/); // 4 randexp('great|good( job)?|excellent'); // great ``` If you miss the old syntax ```js require('randexp').sugar(); /yes|no|maybe|i don't know/.gen(); // maybe ``` # Motivation Regular expressions are used in every language, every programmer is familiar with them. Regex can be used to easily express complex strings. What better way to generate a random string than with a language you can use to express the string you want? Thanks to [String-Random](http://search.cpan.org/~steve/String-Random-0.22/lib/String/Random.pm) for giving me the idea to make this in the first place and [randexp](https://github.com/benburkert/randexp) for the sweet `.gen()` syntax. # Default Range The default generated character range includes printable ASCII. In order to add or remove characters, a `defaultRange` attribute is exposed. you can `subtract(from, to)` and `add(from, to)` ```js var randexp = new RandExp(/random stuff: .+/); randexp.defaultRange.subtract(32, 126); randexp.defaultRange.add(0, 65535); randexp.gen(); // => random stuff: 湐箻ໜ䫴␩⶛㳸長���邓蕲뤀쑡篷皇硬剈궦佔칗븛뀃匫鴔事좍ﯣ⭼ꝏ䭍詳蒂䥂뽭 ``` # Custom PRNG The default randomness is provided by `Math.random()`. If you need to use a seedable or cryptographic PRNG, you can override `RandExp.prototype.randInt` or `randexp.randInt` (where `randexp` is an instance of `RandExp`). `randInt(from, to)` accepts an inclusive range and returns a randomly selected number within that range. # Infinite Repetitionals Repetitional tokens such as `*`, `+`, and `{3,}` have an infinite max range. In this case, randexp looks at its min and adds 100 to it to get a useable max value. If you want to use another int other than 100 you can change the `max` property in `RandExp.prototype` or the RandExp instance. ```js var randexp = new RandExp(/no{1,}/); randexp.max = 1000000; ``` With `RandExp.sugar()` ```js var regexp = /(hi)*/; regexp.max = 1000000; ``` # Bad Regular Expressions There are some regular expressions which can never match any string. * Ones with badly placed positionals such as `/a^/` and `/$c/m`. Randexp will ignore positional tokens. * Back references to non-existing groups like `/(a)\1\2/`. Randexp will ignore those references, returning an empty string for them. If the group exists only after the reference is used such as in `/\1 (hey)/`, it will too be ignored. 
* Custom negated character sets with two sets inside that cancel each other out. Example: `/[^\w\W]/`. If you give this to randexp, it will return an empty string for this set since it can't match anything.

# Projects based on randexp.js

## JSON-Schema Faker

Use generators to populate JSON Schema samples. See: [jsf on github](https://github.com/json-schema-faker/json-schema-faker/) and [jsf demo page](http://json-schema-faker.js.org/).

# Install

### Node.js

    npm install randexp

### Browser

Download the [minified version](https://github.com/fent/randexp.js/releases) from the latest release.

# Tests

Tests are written with [mocha](https://mochajs.org)

```bash
npm test
```

# License

MIT

discontinuous-range
===================

```
DiscontinuousRange(1, 10).subtract(4, 6); // [ 1-3, 7-10 ]
```

[![Build Status](https://travis-ci.org/dtudury/discontinuous-range.png)](https://travis-ci.org/dtudury/discontinuous-range)

This is a pretty simple module, but it exists to service another project, so it is pretty lacking in documentation. Reading the test to see how this works may help. Otherwise, here's an example that pretty much sums it up.

### Example

```
var all_numbers = new DiscontinuousRange(1, 100);
var bad_numbers = DiscontinuousRange(13).add(8).add(60, 80);
var good_numbers = all_numbers.clone().subtract(bad_numbers);
console.log(good_numbers.toString()); // [ 1-7, 9-12, 14-59, 81-100 ]
var random_good_number = good_numbers.index(Math.floor(Math.random() * good_numbers.length));
```

# <img src="./logo.png" alt="bn.js" width="160" height="160" />

> BigNum in pure javascript

[![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js)

## Install

`npm install --save bn.js`

## Usage

```js
const BN = require('bn.js');

var a = new BN('dead', 16);
var b = new BN('101010', 2);

var res = a.add(b);
console.log(res.toString(10)); // 57047
```

**Note**: decimals are not supported in this library.

## Notation

### Prefixes

There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name:

* `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs.
* `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like `mod()`. In such cases if the result will be negative - modulo will be added to the result to make it positive.

### Postfixes

* `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported.
* `rn` - both argument and return value of the function are plain JavaScript Numbers. Decimals are not supported.

### Examples

* `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a`
* `a.umod(b)` - reduce `a` modulo `b`, returning positive value
* `a.iushln(13)` - shift bits of `a` left by 13

## Instructions

Prefixes/postfixes are put in parens at the end of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian).
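To make the notation concrete, here is a short, illustrative sketch (values chosen arbitrarily) showing how the `i`, `u` and `n` markers read in practice:

```js
const BN = require('bn.js');

const a = new BN('dead', 16);
const b = new BN('101010', 2);

a.iadd(b);                                 // `i` prefix: in-place, `a` now holds the sum
console.log(a.toString(10));               // 57047

const m = new BN(-7);
console.log(m.umod(new BN(3)).toString()); // `u` prefix: result is always positive -> "2"

console.log(a.addn(5).toString(10));       // `n` postfix: argument is a plain JS Number -> "57052"
```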
### Utilities * `a.clone()` - clone number * `a.toString(base, length)` - convert to base-string and pad with zeroes * `a.toNumber()` - convert to Javascript Number (limited to 53 bits) * `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`) * `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding * `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array` * `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)` * `a.bitLength()` - get number of bits occupied * `a.zeroBits()` - return number of less-significant consequent zero bits (example: `1010000` has 4 zero bits) * `a.byteLength()` - return number of bytes occupied * `a.isNeg()` - true if the number is negative * `a.isEven()` - no comments * `a.isOdd()` - no comments * `a.isZero()` - no comments * `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`) * `a.lt(b)` - `a` less than `b` (`n`) * `a.lte(b)` - `a` less than or equals `b` (`n`) * `a.gt(b)` - `a` greater than `b` (`n`) * `a.gte(b)` - `a` greater than or equals `b` (`n`) * `a.eq(b)` - `a` equals `b` (`n`) * `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width * `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width * `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance * `BN.max(a, b)` - return `a` if `a` bigger than `b` * `BN.min(a, b)` - return `a` if `a` less than `b` ### Arithmetics * `a.neg()` - negate sign (`i`) * `a.abs()` - absolute value (`i`) * `a.add(b)` - addition (`i`, `n`, `in`) * `a.sub(b)` - subtraction (`i`, `n`, `in`) * `a.mul(b)` - multiply (`i`, `n`, `in`) * `a.sqr()` - square (`i`) * `a.pow(b)` - raise `a` to the power of `b` * `a.div(b)` - divide (`divn`, `idivn`) * `a.mod(b)` - reduct (`u`, `n`) (but no `umodn`) * `a.divmod(b)` - quotient and modulus obtained by dividing * `a.divRound(b)` - rounded division ### Bit operations * `a.or(b)` - or (`i`, `u`, `iu`) * `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future) * `a.xor(b)` - xor (`i`, `u`, `iu`) * `a.setn(b, value)` - set specified bit to `value` * `a.shln(b)` - shift left (`i`, `u`, `iu`) * `a.shrn(b)` - shift right (`i`, `u`, `iu`) * `a.testn(b)` - test if specified bit is set * `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`) * `a.bincn(b)` - add `1 << b` to the number * `a.notn(w)` - not (for the width specified by `w`) (`i`) ### Reduction * `a.gcd(b)` - GCD * `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., gcd: ... }`) * `a.invm(b)` - inverse `a` modulo `b` ## Fast reduction When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using special algorithm for [Mersenne Prime][1]. ### Reduction context To enable this tricks one should create a reduction context: ```js var red = BN.red(num); ``` where `num` is just a BN instance. Or: ```js var red = BN.red(primeName); ``` Where `primeName` is either of these [Mersenne Primes][1]: * `'k256'` * `'p224'` * `'p192'` * `'p25519'` Or: ```js var red = BN.mont(num); ``` To reduce numbers with [Montgomery trick][0]. 
`.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`. ### Converting numbers Before performing anything in reduction context - numbers should be converted to it. Usually, this means that one should: * Convert inputs to reducted ones * Operate on them in reduction context * Convert outputs back from the reduction context Here is how one may convert numbers to `red`: ```js var redA = a.toRed(red); ``` Where `red` is a reduction context created using instructions above Here is how to convert them back: ```js var a = redA.fromRed(); ``` ### Red instructions Most of the instructions from the very start of this readme have their counterparts in red context: * `a.redAdd(b)`, `a.redIAdd(b)` * `a.redSub(b)`, `a.redISub(b)` * `a.redShl(num)` * `a.redMul(b)`, `a.redIMul(b)` * `a.redSqr()`, `a.redISqr()` * `a.redSqrt()` - square root modulo reduction context's prime * `a.redInvm()` - modular inverse of the number * `a.redNeg()` * `a.redPow(b)` - modular exponentiation ### Number Size Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers. ## LICENSE This software is licensed under the MIT License. [0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication [1]: https://en.wikipedia.org/wiki/Mersenne_prime <p align="center"> <img width="250" src="/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> [![Build Status][travis-image]][travis-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description : Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments. > <img width="400" src="/screen.png"> * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage : ### Simple Example ```javascript #!/usr/bin/env node const {argv} = require('yargs') if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node require('yargs') // eslint-disable-line .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ## Webpack See usage examples of yargs with webpack in [docs](/docs/webpack.md). ## Community : Having problems? want to contribute? 
join our [community slack](http://devtoolscommunity.herokuapp.com). ## Documentation : ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Contributing](/contributing.md) [travis-url]: https://travis-ci.org/yargs/yargs [travis-image]: https://img.shields.io/travis/yargs/yargs/master.svg [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc A JSON with color names and its values. Based on http://dev.w3.org/csswg/css-color/#named-colors. [![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/) ```js var colors = require('color-name'); colors.red //[255,0,0] ``` <a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a> # Visitor utilities for AssemblyScript Compiler transformers ## Example ### List Fields The transformer: ```ts import { ClassDeclaration, FieldDeclaration, MethodDeclaration, } from "../../as"; import { ClassDecorator, registerDecorator } from "../decorator"; import { toString } from "../utils"; class ListMembers extends ClassDecorator { visitFieldDeclaration(node: FieldDeclaration): void { if (!node.name) console.log(toString(node) + "\n"); const name = toString(node.name); const _type = toString(node.type!); this.stdout.write(name + ": " + _type + "\n"); } visitMethodDeclaration(node: MethodDeclaration): void { const name = toString(node.name); if (name == "constructor") { return; } const sig = toString(node.signature); this.stdout.write(name + ": " + sig + "\n"); } visitClassDeclaration(node: ClassDeclaration): void { this.visit(node.members); } get name(): string { return "list"; } } export = registerDecorator(new ListMembers()); ``` assembly/foo.ts: ```ts @list class Foo { a: u8; b: bool; i: i32; } ``` And then compile with `--transform` flag: ``` asc assembly/foo.ts --transform ./dist/examples/list --noEmit ``` Which prints the following to the console: ``` a: u8 b: bool i: i32 ``` The AssemblyScript Runtime ========================== The runtime provides the functionality necessary to dynamically allocate and deallocate memory of objects, arrays and buffers, as well as collect garbage that is no longer used. 
The current implementation is either a Two-Color Mark & Sweep (TCMS) garbage collector that must be called manually when the execution stack is unwound or an Incremental Tri-Color Mark & Sweep (ITCMS) garbage collector that is fully automated with a shadow stack, implemented on top of a Two-Level Segregate Fit (TLSF) memory manager. It's not designed to be the fastest of its kind, but intentionally focuses on simplicity and ease of integration until we can replace it with the real deal, i.e. Wasm GC. Interface --------- ### Garbage collector / `--exportRuntime` * **__new**(size: `usize`, id: `u32` = 0): `usize`<br /> Dynamically allocates a GC object of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. GC-allocated objects cannot be used with `__realloc` and `__free`. * **__pin**(ptr: `usize`): `usize`<br /> Pins the object pointed to by `ptr` externally so it and its directly reachable members and indirectly reachable objects do not become garbage collected. * **__unpin**(ptr: `usize`): `void`<br /> Unpins the object pointed to by `ptr` externally so it can become garbage collected. * **__collect**(): `void`<br /> Performs a full garbage collection. ### Internals * **__alloc**(size: `usize`): `usize`<br /> Dynamically allocates a chunk of memory of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. * **__realloc**(ptr: `usize`, size: `usize`): `usize`<br /> Dynamically changes the size of a chunk of memory, possibly moving it to a new address. * **__free**(ptr: `usize`): `void`<br /> Frees a dynamically allocated chunk of memory by its address. * **__renew**(ptr: `usize`, size: `usize`): `usize`<br /> Like `__realloc`, but for `__new`ed GC objects. * **__link**(parentPtr: `usize`, childPtr: `usize`, expectMultiple: `bool`): `void`<br /> Introduces a link from a parent object to a child object, i.e. upon `parent.field = child`. * **__visit**(ptr: `usize`, cookie: `u32`): `void`<br /> Concrete visitor implementation called during traversal. Cookie can be used to indicate one of multiple operations. * **__visit_globals**(cookie: `u32`): `void`<br /> Calls `__visit` on each global that is of a managed type. * **__visit_members**(ptr: `usize`, cookie: `u32`): `void`<br /> Calls `__visit` on each member of the object pointed to by `ptr`. * **__typeinfo**(id: `u32`): `RTTIFlags`<br /> Obtains the runtime type information for objects with the specified runtime id. Runtime type information is a set of flags indicating whether a type is managed, an array or similar, and what the relevant alignments when creating an instance externally are etc. * **__instanceof**(ptr: `usize`, classId: `u32`): `bool`<br /> Tests if the object pointed to by `ptr` is an instance of the specified class id. ITCMS / `--runtime incremental` ----- The Incremental Tri-Color Mark & Sweep garbage collector maintains a separate shadow stack of managed values in the background to achieve full automation. Maintaining another stack introduces some overhead compared to the simpler Two-Color Mark & Sweep garbage collector, but makes it independent of whether the execution stack is unwound or not when it is invoked, so the garbage collector can run interleaved with the program. There are several constants one can experiment with to tweak ITCMS's automation: * `--use ASC_GC_GRANULARITY=1024`<br /> How often to interrupt. 
The default of 1024 means "interrupt each 1024 bytes allocated". * `--use ASC_GC_STEPFACTOR=200`<br /> How long to interrupt. The default of 200% means "run at double the speed of allocations". * `--use ASC_GC_IDLEFACTOR=200`<br /> How long to idle. The default of 200% means "wait for memory to double before kicking in again". * `--use ASC_GC_MARKCOST=1`<br /> How costly it is to mark one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. * `--use ASC_GC_SWEEPCOST=10`<br /> How costly it is to sweep one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`. TCMS / `--runtime minimal` ---- If automation and low pause times aren't strictly necessary, using the Two-Color Mark & Sweep garbage collector instead by invoking collection manually at appropriate times when the execution stack is unwound may be more performant as it simpler and has less overhead. The execution stack is typically unwound when invoking the collector externally, at a place that is not indirectly called from Wasm. STUB / `--runtime stub` ---- The stub is a maximally minimal runtime substitute, consisting of a simple and fast bump allocator with no means of freeing up memory again, except when freeing the respective most recently allocated object on top of the bump. Useful where memory is not a concern, and/or where it is sufficient to destroy the whole module including any potential garbage after execution. See also: [Garbage collection](https://www.assemblyscript.org/garbage-collection.html) # balanced-match Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well! [![build status](https://secure.travis-ci.org/juliangruber/balanced-match.svg)](http://travis-ci.org/juliangruber/balanced-match) [![downloads](https://img.shields.io/npm/dm/balanced-match.svg)](https://www.npmjs.org/package/balanced-match) [![testling badge](https://ci.testling.com/juliangruber/balanced-match.png)](https://ci.testling.com/juliangruber/balanced-match) ## Example Get the first matching pair of braces: ```js var balanced = require('balanced-match'); console.log(balanced('{', '}', 'pre{in{nested}}post')); console.log(balanced('{', '}', 'pre{first}between{second}post')); console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post')); ``` The matches are: ```bash $ node example.js { start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' } { start: 3, end: 9, pre: 'pre', body: 'first', post: 'between{second}post' } { start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' } ``` ## API ### var m = balanced(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an object with those keys: * **start** the index of the first match of `a` * **end** the index of the matching `b` * **pre** the preamble, `a` and `b` not included * **body** the match, `a` and `b` not included * **post** the postscript, `a` and `b` not included If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`. ### var r = balanced.range(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an array with indexes: `[ <a index>, <b index> ]`. If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. 
For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`. ## Installation With [npm](https://npmjs.org) do: ```bash npm install balanced-match ``` ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # ts-mixer [version-badge]: https://badgen.net/npm/v/ts-mixer [version-link]: https://npmjs.com/package/ts-mixer [build-badge]: https://img.shields.io/github/workflow/status/tannerntannern/ts-mixer/ts-mixer%20CI [build-link]: https://github.com/tannerntannern/ts-mixer/actions [ts-versions]: https://badgen.net/badge/icon/3.8,3.9,4.0?icon=typescript&label&list=| [node-versions]: https://badgen.net/badge/node/10%2C12%2C14/blue/?list=| [![npm version][version-badge]][version-link] [![github actions][build-badge]][build-link] [![TS Versions][ts-versions]][build-link] [![Node.js Versions][node-versions]][build-link] [![Minified Size](https://badgen.net/bundlephobia/min/ts-mixer)](https://bundlephobia.com/result?p=ts-mixer) [![Conventional Commits](https://badgen.net/badge/conventional%20commits/1.0.0/yellow)](https://conventionalcommits.org) ## Overview `ts-mixer` brings mixins to TypeScript. "Mixins" to `ts-mixer` are just classes, so you already know how to write them, and you can probably mix classes from your favorite library without trouble. The mixin problem is more nuanced than it appears. I've seen countless code snippets that work for certain situations, but fail in others. `ts-mixer` tries to take the best from all these solutions while accounting for the situations you might not have considered. [Quick start guide](#quick-start) ### Features * mixes plain classes * mixes classes that extend other classes * mixes classes that were mixed with `ts-mixer` * supports static properties * supports protected/private properties (the popular function-that-returns-a-class solution does not) * mixes abstract classes (with caveats [[1](#caveats)]) * mixes generic classes (with caveats [[2](#caveats)]) * supports class, method, and property decorators (with caveats [[3, 6](#caveats)]) * mostly supports the complexity presented by constructor functions (with caveats [[4](#caveats)]) * comes with an `instanceof`-like replacement (with caveats [[5, 6](#caveats)]) * [multiple mixing strategies](#settings) (ES6 proxies vs hard copy) ### Caveats 1. Mixing abstract classes requires a bit of a hack that may break in future versions of TypeScript. See [mixing abstract classes](#mixing-abstract-classes) below. 2. 
Mixing generic classes requires a more cumbersome notation, but it's still possible. See [mixing generic classes](#mixing-generic-classes) below. 3. Using decorators in mixed classes also requires a more cumbersome notation. See [mixing with decorators](#mixing-with-decorators) below. 4. ES6 made it impossible to use `.apply(...)` on class constructors (or any means of calling them without `new`), which makes it impossible for `ts-mixer` to pass the proper `this` to your constructors. This may or may not be an issue for your code, but there are options to work around it. See [dealing with constructors](#dealing-with-constructors) below. 5. `ts-mixer` does not support `instanceof` for mixins, but it does offer a replacement. See the [hasMixin function](#hasmixin) for more details. 6. Certain features (specifically, `@decorator` and `hasMixin`) make use of ES6 `Map`s, which means you must either use ES6+ or polyfill `Map` to use them. If you don't need these features, you should be fine without. ## Quick Start ### Installation ``` $ npm install ts-mixer ``` or if you prefer [Yarn](https://yarnpkg.com): ``` $ yarn add ts-mixer ``` ### Basic Example ```typescript import { Mixin } from 'ts-mixer'; class Foo { protected makeFoo() { return 'foo'; } } class Bar { protected makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { public makeFooBar() { return this.makeFoo() + this.makeBar(); } } const fooBar = new FooBar(); console.log(fooBar.makeFooBar()); // "foobar" ``` ## Special Cases ### Mixing Abstract Classes Abstract classes, by definition, cannot be constructed, which means they cannot take on the type, `new(...args) => any`, and by extension, are incompatible with `ts-mixer`. BUT, you can "trick" TypeScript into giving you all the benefits of an abstract class without making it technically abstract. The trick is just some strategic `// @ts-ignore`'s: ```typescript import { Mixin } from 'ts-mixer'; // note that Foo is not marked as an abstract class class Foo { // @ts-ignore: "Abstract methods can only appear within an abstract class" public abstract makeFoo(): string; } class Bar { public makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { // we still get all the benefits of abstract classes here, because TypeScript // will still complain if this method isn't implemented public makeFoo() { return 'foo'; } } ``` Do note that while this does work quite well, it is a bit of a hack and I can't promise that it will continue to work in future TypeScript versions. ### Mixing Generic Classes Frustratingly, it is _impossible_ for generic parameters to be referenced in base class expressions. No matter what, you will eventually run into `Base class expressions cannot reference class type parameters.` The way to get around this is to leverage [declaration merging](https://www.typescriptlang.org/docs/handbook/declaration-merging.html), and a slightly different mixing function from ts-mixer: `mix`. It works exactly like `Mixin`, except it's a decorator, which means it doesn't affect the type information of the class being decorated. 
See it in action below: ```typescript import { mix } from 'ts-mixer'; class Foo<T> { public fooMethod(input: T): T { return input; } } class Bar<T> { public barMethod(input: T): T { return input; } } interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { } @mix(Foo, Bar) class FooBar<T1, T2> { public fooBarMethod(input1: T1, input2: T2) { return [this.fooMethod(input1), this.barMethod(input2)]; } } ``` Key takeaways from this example: * `interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { }` makes sure `FooBar` has the typing we want, thanks to declaration merging * `@mix(Foo, Bar)` wires things up "on the JavaScript side", since the interface declaration has nothing to do with runtime behavior. * The reason we have to use the `mix` decorator is that the typing produced by `Mixin(Foo, Bar)` would conflict with the typing of the interface. `mix` has no effect "on the TypeScript side," thus avoiding type conflicts. ### Mixing with Decorators Popular libraries such as [class-validator](https://github.com/typestack/class-validator) and [TypeORM](https://github.com/typeorm/typeorm) use decorators to add functionality. Unfortunately, `ts-mixer` has no way of knowing what these libraries do with the decorators behind the scenes. So if you want these decorators to be "inherited" with classes you plan to mix, you first have to wrap them with a special `decorate` function exported by `ts-mixer`. Here's an example using `class-validator`: ```typescript import { IsBoolean, IsIn, validate } from 'class-validator'; import { Mixin, decorate } from 'ts-mixer'; class Disposable { @decorate(IsBoolean()) // instead of @IsBoolean() isDisposed: boolean = false; } class Statusable { @decorate(IsIn(['red', 'green'])) // instead of @IsIn(['red', 'green']) status: string = 'green'; } class ExtendedObject extends Mixin(Disposable, Statusable) {} const extendedObject = new ExtendedObject(); extendedObject.status = 'blue'; validate(extendedObject).then(errors => { console.log(errors); }); ``` ### Dealing with Constructors As mentioned in the [caveats section](#caveats), ES6 disallowed calling constructor functions without `new`. This means that the only way for `ts-mixer` to mix instance properties is to instantiate each base class separately, then copy the instance properties into a common object. The consequence of this is that constructors mixed by `ts-mixer` will _not_ receive the proper `this`. **This very well may not be an issue for you!** It only means that your constructors need to be "mostly pure" in terms of how they handle `this`. Specifically, your constructors cannot produce [side effects](https://en.wikipedia.org/wiki/Side_effect_%28computer_science%29) involving `this`, _other than adding properties to `this`_ (the most common side effect in JavaScript constructors). If you simply cannot eliminate `this` side effects from your constructor, there is a workaround available: `ts-mixer` will automatically forward constructor parameters to a predesignated init function (`settings.initFunction`) if it's present on the class. Unlike constructors, functions can be called with an arbitrary `this`, so this predesignated init function _will_ have the proper `this`. 
Here's a basic example: ```typescript import { Mixin, settings } from 'ts-mixer'; settings.initFunction = 'init'; class Person { public static allPeople: Set<Person> = new Set(); protected init() { Person.allPeople.add(this); } } type PartyAffiliation = 'democrat' | 'republican'; class PoliticalParticipant { public static democrats: Set<PoliticalParticipant> = new Set(); public static republicans: Set<PoliticalParticipant> = new Set(); public party: PartyAffiliation; // note that these same args will also be passed to init function public constructor(party: PartyAffiliation) { this.party = party; } protected init(party: PartyAffiliation) { if (party === 'democrat') PoliticalParticipant.democrats.add(this); else PoliticalParticipant.republicans.add(this); } } class Voter extends Mixin(Person, PoliticalParticipant) {} const v1 = new Voter('democrat'); const v2 = new Voter('democrat'); const v3 = new Voter('republican'); const v4 = new Voter('republican'); ``` Note the above `.add(this)` statements. These would not work as expected if they were placed in the constructor instead, since `this` is not the same between the constructor and `init`, as explained above. ## Other Features ### hasMixin As mentioned above, `ts-mixer` does not support `instanceof` for mixins. While it is possible to implement [custom `instanceof` behavior](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/hasInstance), this library does not do so because it would require modifying the source classes, which is deliberately avoided. You can fill this missing functionality with `hasMixin(instance, mixinClass)` instead. See the below example: ```typescript import { Mixin, hasMixin } from 'ts-mixer'; class Foo {} class Bar {} class FooBar extends Mixin(Foo, Bar) {} const instance = new FooBar(); // doesn't work with instanceof... console.log(instance instanceof FooBar) // true console.log(instance instanceof Foo) // false console.log(instance instanceof Bar) // false // but everything works nicely with hasMixin! console.log(hasMixin(instance, FooBar)) // true console.log(hasMixin(instance, Foo)) // true console.log(hasMixin(instance, Bar)) // true ``` `hasMixin(instance, mixinClass)` will work anywhere that `instance instanceof mixinClass` works. Additionally, like `instanceof`, you get the same [type narrowing benefits](https://www.typescriptlang.org/docs/handbook/advanced-types.html#instanceof-type-guards): ```typescript if (hasMixin(instance, Foo)) { // inferred type of instance is "Foo" } if (hasMixin(instance, Bar)) { // inferred type of instance of "Bar" } ``` ## Settings ts-mixer has multiple strategies for mixing classes which can be configured by modifying `settings` from ts-mixer. For example: ```typescript import { settings, Mixin } from 'ts-mixer'; settings.prototypeStrategy = 'proxy'; // then use `Mixin` as normal... ``` ### `settings.prototypeStrategy` * Determines how ts-mixer will mix class prototypes together * Possible values: - `'copy'` (default) - Copies all methods from the classes being mixed into a new prototype object. (This will include all methods up the prototype chains as well.) This is the default for ES5 compatibility, but it has the downside of stale references. For example, if you mix `Foo` and `Bar` to make `FooBar`, then redefine a method on `Foo`, `FooBar` will not have the latest methods from `Foo`. If this is not a concern for you, `'copy'` is the best value for this setting. - `'proxy'` - Uses an ES6 Proxy to "soft mix" prototypes. 
Unlike `'copy'`, updates to the base classes _will_ be reflected in the mixed class, which may be desirable. The downside is that method access is not as performant, nor is it ES5 compatible. ### `settings.staticsStrategy` * Determines how static properties are inherited * Possible values: - `'copy'` (default) - Simply copies all properties (minus `prototype`) from the base classes/constructor functions onto the mixed class. Like `settings.prototypeStrategy = 'copy'`, this strategy also suffers from stale references, but shouldn't be a concern if you don't redefine static methods after mixing. - `'proxy'` - Similar to `settings.prototypeStrategy`, proxy's static method access to base classes. Has the same benefits/downsides. ### `settings.initFunction` * If set, `ts-mixer` will automatically call the function with this name upon construction * Possible values: - `null` (default) - disables the behavior - a string - function name to call upon construction * Read more about why you would want this in [dealing with constructors](#dealing-with-constructors) ### `settings.decoratorInheritance` * Determines how decorators are inherited from classes passed to `Mixin(...)` * Possible values: - `'deep'` (default) - Deeply inherits decorators from all given classes and their ancestors - `'direct'` - Only inherits decorators defined directly on the given classes - `'none'` - Skips decorator inheritance # Author Tanner Nielsen <[email protected]> * Website - [tannernielsen.com](http://tannernielsen.com) * Github - [tannerntannern](https://github.com/tannerntannern) # axios // helpers The modules found in `helpers/` should be generic modules that are _not_ specific to the domain logic of axios. These modules could theoretically be published to npm on their own and consumed by other modules or apps. Some examples of generic modules are things like: - Browser polyfills - Managing cookies - Parsing HTTP headers # Web IDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [Web IDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). The goal is that you should be able to write code like ```js "use strict"; const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a Web IDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different Web IDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the Web IDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the Web IDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). Each method also accepts a second, optional, parameter for miscellaneous options. For conversion methods that throw errors, a string option `{ context }` may be provided to provide more information in the error message. 
(For example, `conversions["float"](NaN, { context: "Argument 1 of Interface's operation" })` will throw an error with message `"Argument 1 of Interface's operation is not a finite floating-point value."`) Specific conversions may also accept other options, the details of which can be found below. ## Conversions implemented Conversions for all of the basic types from the Web IDL specification are implemented: - [`any`](https://heycam.github.io/webidl/#es-any) - [`void`](https://heycam.github.io/webidl/#es-void) - [`boolean`](https://heycam.github.io/webidl/#es-boolean) - [Integer types](https://heycam.github.io/webidl/#es-integer-types), which can additionally be provided the boolean options `{ clamp, enforceRange }` as a second parameter - [`float`](https://heycam.github.io/webidl/#es-float), [`unrestricted float`](https://heycam.github.io/webidl/#es-unrestricted-float) - [`double`](https://heycam.github.io/webidl/#es-double), [`unrestricted double`](https://heycam.github.io/webidl/#es-unrestricted-double) - [`DOMString`](https://heycam.github.io/webidl/#es-DOMString), which can additionally be provided the boolean option `{ treatNullAsEmptyString }` as a second parameter - [`ByteString`](https://heycam.github.io/webidl/#es-ByteString), [`USVString`](https://heycam.github.io/webidl/#es-USVString) - [`object`](https://heycam.github.io/webidl/#es-object) - [`Error`](https://heycam.github.io/webidl/#es-Error) - [Buffer source types](https://heycam.github.io/webidl/#es-buffer-source-types) Additionally, for convenience, the following derived type definitions are implemented: - [`ArrayBufferView`](https://heycam.github.io/webidl/#ArrayBufferView) - [`BufferSource`](https://heycam.github.io/webidl/#BufferSource) - [`DOMTimeStamp`](https://heycam.github.io/webidl/#DOMTimeStamp) - [`Function`](https://heycam.github.io/webidl/#Function) - [`VoidFunction`](https://heycam.github.io/webidl/#VoidFunction) (although it will not censor the return type) Derived types, such as nullable types, promise types, sequences, records, etc. are not handled by this library. You may wish to investigate the [webidl2js](https://github.com/jsdom/webidl2js) project. ### A note on the `long long` types The `long long` and `unsigned long long` Web IDL types can hold values that cannot be stored in JavaScript numbers, so the conversion is imperfect. For example, converting the JavaScript number `18446744073709552000` to a Web IDL `long long` is supposed to produce the Web IDL value `-18446744073709551232`. Since we are representing our Web IDL values in JavaScript, we can't represent `-18446744073709551232`, so we instead the best we could do is `-18446744073709552000` as the output. This library actually doesn't even get that far. Producing those results would require doing accurate modular arithmetic on 64-bit intermediate values, but JavaScript does not make this easy. We could pull in a big-integer library as a dependency, but in lieu of that, we for now have decided to just produce inaccurate results if you pass in numbers that are not strictly between `Number.MIN_SAFE_INTEGER` and `Number.MAX_SAFE_INTEGER`. ## Background What's actually going on here, conceptually, is pretty weird. Let's try to explain. Web IDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on Web IDL values, i.e. instances of Web IDL types. 
For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a Web IDL value of [Web IDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules. Separately from its type system, Web IDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given Web IDL operation, how does that get converted into a Web IDL value? For example, a JavaScript `true` passed in the position of a Web IDL `boolean` argument becomes a Web IDL `true`. But, a JavaScript `true` passed in the position of a [Web IDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a Web IDL `1`. And so on. Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the Web IDL algorithms, they don't actually use Web IDL values, since those aren't "real" outside of specs. Instead, implementations apply the Web IDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting Web IDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is payoff of Web IDL, in a nutshell. And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given Web IDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ Web IDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ Web IDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a Web IDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't use this Seriously, why would you ever use this? You really shouldn't. Web IDL is … strange, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from Web IDL. In general, your JavaScript should not be trying to become more like Web IDL; if anything, we should fix Web IDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in Web IDL. Its main consumer is the [jsdom](https://github.com/tmpvar/jsdom) project. 
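For reference, here is a brief sketch of the conversion methods and options described above (the commented results are what the Web IDL rules should produce):

```js
"use strict";
const conversions = require("webidl-conversions");

// Plain conversions apply the Web IDL rules directly.
conversions["boolean"](0);              // => false
conversions["DOMString"](123);          // => "123"
conversions["unsigned long"](3.7);      // => 3 (rounded toward zero)

// Integer types accept the { clamp, enforceRange } options mentioned above.
conversions["octet"](300, { clamp: true });               // => 255
conversions["unsigned long"](-1, { enforceRange: true }); // throws a TypeError
```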
# base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. **WARNING:** This module is **NOT RFC3548** compliant, it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. ## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base. Base | Alphabet ------------- | ------------- 2 | `01` 8 | `01234567` 11 | `0123456789a` 16 | `0123456789abcdef` 32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ` 32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32) 36 | `0123456789abcdefghijklmnopqrstuvwxyz` 58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz` 62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ` 64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/` 66 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~` ## How it works It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. Then for every leading zero in the input (not significant as a number) it will encode as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base. For example, a base58 alphabet packs roughly 5.858 bits per character. This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character. While unusual, this does mean that no padding is required and it works for bases like 43. ## LICENSE [MIT](LICENSE) A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets. Railroad-diagram Generator ========================== This is a small js library for generating railroad diagrams (like what [JSON.org](http://json.org) uses) using SVG. Railroad diagrams are a way of visually representing a grammar in a form that is more readable than using regular expressions or BNF. I think (though I haven't given it a lot of thought yet) that if it's easy to write a context-free grammar for the language, the corresponding railroad diagram will be easy as well. There are several railroad-diagram generators out there, but none of them had the visual appeal I wanted. 
[Here's an example of how they look!](http://www.xanthir.com/etc/railroad-diagrams/example.html) And [here's an online generator for you to play with and get SVG code from!](http://www.xanthir.com/etc/railroad-diagrams/generator.html) The library now exists in a Python port as well! See the information further down. Details ------- To use the library, just include the js and css files, and then call the Diagram() function. Its arguments are the components of the diagram (Diagram is a special form of Sequence). An alternative to Diagram() is ComplexDiagram() which is used to describe a complex type diagram. Components are either leaves or containers. The leaves: * Terminal(text) or a bare string - represents literal text * NonTerminal(text) - represents an instruction or another production * Comment(text) - a comment * Skip() - an empty line The containers: * Sequence(children) - like simple concatenation in a regex * Choice(index, children) - like | in a regex. The index argument specifies which child is the "normal" choice and should go in the middle * Optional(child, skip) - like ? in a regex. A shorthand for `Choice(1, [Skip(), child])`. If the optional `skip` parameter has the value `"skip"`, it instead puts the Skip() in the straight-line path, for when the "normal" behavior is to omit the item. * OneOrMore(child, repeat) - like + in a regex. The 'repeat' argument is optional, and specifies something that must go between the repetitions. * ZeroOrMore(child, repeat, skip) - like * in a regex. A shorthand for `Optional(OneOrMore(child, repeat))`. The optional `skip` parameter is identical to Optional(). For convenience, each component can be called with or without `new`. If called without `new`, the container components become n-ary; that is, you can say either `new Sequence([A, B])` or just `Sequence(A,B)`. After constructing a Diagram, call `.format(...padding)` on it, specifying 0-4 padding values (just like CSS) for some additional "breathing space" around the diagram (the paddings default to 20px). The result can either be `.toString()`'d for the markup, or `.toSVG()`'d for an `<svg>` element, which can then be immediately inserted to the document. As a convenience, Diagram also has an `.addTo(element)` method, which immediately converts it to SVG and appends it to the referenced element with default paddings. `element` defaults to `document.body`. Options ------- There are a few options you can tweak, at the bottom of the file. Just tweak either until the diagram looks like what you want. You can also change the CSS file - feel free to tweak to your heart's content. Note, though, that if you change the text sizes in the CSS, you'll have to go adjust the metrics for the leaf nodes as well. * VERTICAL_SEPARATION - sets the minimum amount of vertical separation between two items. Note that the stroke width isn't counted when computing the separation; this shouldn't be relevant unless you have a very small separation or very large stroke width. * ARC_RADIUS - the radius of the arcs used in the branching containers like Choice. This has a relatively large effect on the size of non-trivial diagrams. Both tight and loose values look good, depending on what you're going for. * DIAGRAM_CLASS - the class set on the root `<svg>` element of each diagram, for use in the CSS stylesheet. * STROKE_ODD_PIXEL_LENGTH - the default stylesheet uses odd pixel lengths for 'stroke'. Due to rasterization artifacts, they look best when the item has been translated half a pixel in both directions. 
If you change the styling to use a stroke with even pixel lengths, you'll want to set this variable to `false`. * INTERNAL_ALIGNMENT - when some branches of a container are narrower than others, this determines how they're aligned in the extra space. Defaults to "center", but can be set to "left" or "right". Caveats ------- At this early stage, the generator is feature-complete and works as intended, but still has several TODOs: * The font-sizes are hard-coded right now, and the font handling in general is very dumb - I'm just guessing at some metrics that are probably "good enough" rather than measuring things properly. Python Port ----------- In addition to the canonical JS version, the library now exists as a Python library as well. Using it is basically identical. The config variables are globals in the file, and so may be adjusted either manually or via tweaking from inside your program. The main difference from the JS port is how you extract the string from the Diagram. You'll find a `writeSvg(writerFunc)` method on `Diagram`, which takes a callback of one argument and passes it the string form of the diagram. For example, it can be used like `Diagram(...).writeSvg(sys.stdout.write)` to write to stdout. **Note**: the callback will be called multiple times as it builds up the string, not just once with the whole thing. If you need it all at once, consider something like a `StringIO` as an easy way to collect it into a single string. License ------- This document and all associated files in the github project are licensed under [CC0](http://creativecommons.org/publicdomain/zero/1.0/) ![](http://i.creativecommons.org/p/zero/1.0/80x15.png). This means you can reuse, remix, or otherwise appropriate this project for your own use **without restriction**. (The actual legal meaning can be found at the above link.) Don't ask me for permission to use any part of this project, **just use it**. I would appreciate attribution, but that is not required by the license. ## Setting up your terminal The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. 
It uses the following setup: ```sh # set your terminal up to have 2 windows, A and B like this: ┌─────────────────────────────────┬─────────────────────────────────┐ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ └─────────────────────────────────┴─────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _helper scripts_ ```sh 1.dev-deploy.sh # helper: build and deploy contracts 2.use-contract.sh # helper: call methods on ContractPromise 3.cleanup.sh # helper: delete build and deploy artifacts ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linuo-watch-command#191068) # y18n [![Build Status][travis-image]][travis-url] [![Coverage Status][coveralls-image]][coveralls-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js var __ = require('y18n').__ console.log(__('my awesome string %s', 'foo')) ``` output: `my awesome string foo` _using tagged template literals_ ```js var __ = require('y18n').__ var str = 'foo' console.log(__`my awesome string ${str}`) ``` output: `my awesome string foo` _pluralization support:_ ```js var __n = require('y18n').__n console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')) ``` output: `2 fishes foo` ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. 
### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## License ISC [travis-url]: https://travis-ci.org/yargs/y18n [travis-image]: https://img.shields.io/travis/yargs/y18n.svg [coveralls-url]: https://coveralls.io/github/yargs/y18n [coveralls-image]: https://img.shields.io/coveralls/yargs/y18n.svg [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard # binary-install Install .tar.gz binary applications via npm ## Usage This library provides a single class `Binary` that takes a download url and some optional arguments. You **must** provide either `name` or `installDirectory` when creating your `Binary`. | option | decription | | ---------------- | --------------------------------------------- | | name | The name of your binary | | installDirectory | A path to the directory to install the binary | If an `installDirectory` is not provided, the binary will be installed at your OS specific config directory. On MacOS it defaults to `~/Library/Preferences/${name}-nodejs` After your `Binary` has been created, you can run `.install()` to install the binary, and `.run()` to run it. ### Example This is meant to be used as a library - create your `Binary` with your desired options, then call `.install()` in the `postinstall` of your `package.json`, `.run()` in the `bin` section of your `package.json`, and `.uninstall()` in the `preuninstall` section of your `package.json`. See [this example project](/example) to see how to create an npm package that installs and runs a binary using the Github releases API. # yargs-parser ![ci](https://github.com/yargs/yargs-parser/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/yargs-parser) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. 
<img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js const argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```console $ node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js const argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```console { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js const parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## Deno Example As of `v19` `yargs-parser` supports [Deno](https://github.com/denoland/deno): ```typescript import parser from "https://deno.land/x/yargs_parser/deno.ts"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` ## ESM Example As of `v19` `yargs-parser` supports ESM (_both in Node.js and in the browser_): **Node.js:** ```js import parser from 'yargs-parser' const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` **Browsers:** ```html <!doctype html> <body> <script type="module"> import parser from "https://unpkg.com/[email protected]/browser.js"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) </script> </body> ``` ## API ### parser(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. 
* `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```console $ node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```console $ node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```console $ node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```console $ node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```console $ node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```console $ node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```console $ node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```console $ node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### parse positional numbers * default: `true` * key: `parse-positional-numbers` Should positional keys that look like numbers be treated as such. ```console $ node example.js 99.3 { _: [99.3] } ``` _if disabled:_ ```console $ node example.js 99.3 { _: ['99.3'] } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```console $ node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```console $ node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. 
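As with the other settings, `combine-arrays` is toggled through the `configuration` field. Below is a minimal sketch of the idea, using the documented `configObjects` option to stand in for an already-parsed configuration file; the `tags` key and its values are hypothetical.

```js
const parser = require('yargs-parser')

const argv = parser('--tags a --tags b', {
  array: ['tags'],
  // configObjects plays the role of a loaded configuration file
  configObjects: [{ tags: ['c'] }],
  configuration: { 'combine-arrays': true }
})

// With 'combine-arrays' enabled, the values from both sources should be
// merged into a single array, e.g. [ 'a', 'b', 'c' ]; with it disabled,
// one source simply wins.
console.log(argv.tags)
```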
### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```console $ node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```console $ node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```console $ node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```console $ node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```console $ node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```console $ node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```console $ node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```console $ node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```console $ node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```console $ node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. 
_If disabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC The AssemblyScript Runtime ========================== The runtime provides the functionality necessary to dynamically allocate and deallocate memory of objects, arrays and buffers, as well as collect garbage that is no longer used. The current implementation is either a Two-Color Mark & Sweep (TCMS) garbage collector that must be called manually when the execution stack is unwound or an Incremental Tri-Color Mark & Sweep (ITCMS) garbage collector that is fully automated with a shadow stack, implemented on top of a Two-Level Segregate Fit (TLSF) memory manager. It's not designed to be the fastest of its kind, but intentionally focuses on simplicity and ease of integration until we can replace it with the real deal, i.e. Wasm GC. Interface --------- ### Garbage collector / `--exportRuntime` * **__new**(size: `usize`, id: `u32` = 0): `usize`<br /> Dynamically allocates a GC object of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. GC-allocated objects cannot be used with `__realloc` and `__free`. * **__pin**(ptr: `usize`): `usize`<br /> Pins the object pointed to by `ptr` externally so it and its directly reachable members and indirectly reachable objects do not become garbage collected. * **__unpin**(ptr: `usize`): `void`<br /> Unpins the object pointed to by `ptr` externally so it can become garbage collected. * **__collect**(): `void`<br /> Performs a full garbage collection. ### Internals * **__alloc**(size: `usize`): `usize`<br /> Dynamically allocates a chunk of memory of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. * **__realloc**(ptr: `usize`, size: `usize`): `usize`<br /> Dynamically changes the size of a chunk of memory, possibly moving it to a new address. * **__free**(ptr: `usize`): `void`<br /> Frees a dynamically allocated chunk of memory by its address. * **__renew**(ptr: `usize`, size: `usize`): `usize`<br /> Like `__realloc`, but for `__new`ed GC objects. * **__link**(parentPtr: `usize`, childPtr: `usize`, expectMultiple: `bool`): `void`<br /> Introduces a link from a parent object to a child object, i.e. upon `parent.field = child`. * **__visit**(ptr: `usize`, cookie: `u32`): `void`<br /> Concrete visitor implementation called during traversal. Cookie can be used to indicate one of multiple operations. * **__visit_globals**(cookie: `u32`): `void`<br /> Calls `__visit` on each global that is of a managed type. 
* **__visit_members**(ptr: `usize`, cookie: `u32`): `void`<br />
  Calls `__visit` on each member of the object pointed to by `ptr`.
* **__typeinfo**(id: `u32`): `RTTIFlags`<br />
  Obtains the runtime type information for objects with the specified runtime id. Runtime type information is a set of flags indicating whether a type is managed, an array or similar, and what the relevant alignments are when creating an instance externally, etc.
* **__instanceof**(ptr: `usize`, classId: `u32`): `bool`<br />
  Tests if the object pointed to by `ptr` is an instance of the specified class id.

ITCMS / `--runtime incremental`
-----

The Incremental Tri-Color Mark & Sweep garbage collector maintains a separate shadow stack of managed values in the background to achieve full automation. Maintaining another stack introduces some overhead compared to the simpler Two-Color Mark & Sweep garbage collector, but makes it independent of whether the execution stack is unwound or not when it is invoked, so the garbage collector can run interleaved with the program.

There are several constants one can experiment with to tweak ITCMS's automation:

* `--use ASC_GC_GRANULARITY=1024`<br />
  How often to interrupt. The default of 1024 means "interrupt each 1024 bytes allocated".
* `--use ASC_GC_STEPFACTOR=200`<br />
  How long to interrupt. The default of 200% means "run at double the speed of allocations".
* `--use ASC_GC_IDLEFACTOR=200`<br />
  How long to idle. The default of 200% means "wait for memory to double before kicking in again".
* `--use ASC_GC_MARKCOST=1`<br />
  How costly it is to mark one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`.
* `--use ASC_GC_SWEEPCOST=10`<br />
  How costly it is to sweep one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`.

TCMS / `--runtime minimal`
----

If automation and low pause times aren't strictly necessary, invoking the Two-Color Mark & Sweep garbage collector manually at appropriate times when the execution stack is unwound may be more performant, as it is simpler and has less overhead. The execution stack is typically unwound when invoking the collector externally, at a place that is not indirectly called from Wasm.

STUB / `--runtime stub`
----

The stub is a maximally minimal runtime substitute, consisting of a simple and fast bump allocator with no means of freeing up memory again, except when freeing the most recently allocated object on top of the bump. Useful where memory is not a concern, and/or where it is sufficient to destroy the whole module including any potential garbage after execution.

See also: [Garbage collection](https://www.assemblyscript.org/garbage-collection.html)

# lodash.clonedeep v4.5.0

The [lodash](https://lodash.com/) method `_.cloneDeep` exported as a [Node.js](https://nodejs.org/) module.

## Installation

Using npm:

```bash
$ {sudo -H} npm i -g npm
$ npm i --save lodash.clonedeep
```

In Node.js:

```js
var cloneDeep = require('lodash.clonedeep');
```

See the [documentation](https://lodash.com/docs#cloneDeep) or [package source](https://github.com/lodash/lodash/blob/4.5.0-npm-packages/lodash.clonedeep) for more details.
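For illustration, a small usage sketch (the object shape is made up) showing that the deep clone is fully independent of the original:

```js
var cloneDeep = require('lodash.clonedeep');

var original = { user: { name: 'Ada', tags: ['math'] } };
var copy = cloneDeep(original);

// Mutating the nested array on the copy does not touch the original,
// because every level was cloned rather than referenced.
copy.user.tags.push('computing');
console.log(original.user.tags); // => ['math']
console.log(copy.user.tags);     // => ['math', 'computing']
```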
# get-caller-file [![Build Status](https://travis-ci.org/stefanpenner/get-caller-file.svg?branch=master)](https://travis-ci.org/stefanpenner/get-caller-file) [![Build status](https://ci.appveyor.com/api/projects/status/ol2q94g1932cy14a/branch/master?svg=true)](https://ci.appveyor.com/project/embercli/get-caller-file/branch/master) This is a utility, which allows a function to figure out from which file it was invoked. It does so by inspecting v8's stack trace at the time it is invoked. Inspired by http://stackoverflow.com/questions/13227489 *note: this relies on Node/V8 specific APIs, as such other runtimes may not work* ## Installation ```bash yarn add get-caller-file ``` ## Usage Given: ```js // ./foo.js const getCallerFile = require('get-caller-file'); module.exports = function() { return getCallerFile(); // figures out who called it }; ``` ```js // index.js const foo = require('./foo'); foo() // => /full/path/to/this/file/index.js ``` ## Options: * `getCallerFile(position = 2)`: where position is stack frame whos fileName we want. ![](cow.png) Moo! ==== Moo is a highly-optimised tokenizer/lexer generator. Use it to tokenize your strings, before parsing 'em with a parser like [nearley](https://github.com/hardmath123/nearley) or whatever else you're into. * [Fast](#is-it-fast) * [Convenient](#usage) * uses [Regular Expressions](#on-regular-expressions) * tracks [Line Numbers](#line-numbers) * handles [Keywords](#keywords) * supports [States](#states) * custom [Errors](#errors) * is even [Iterable](#iteration) * has no dependencies * 4KB minified + gzipped * Moo! Is it fast? ----------- Yup! Flying-cows-and-singed-steak fast. Moo is the fastest JS tokenizer around. It's **~2–10x** faster than most other tokenizers; it's a **couple orders of magnitude** faster than some of the slower ones. Define your tokens **using regular expressions**. Moo will compile 'em down to a **single RegExp for performance**. It uses the new ES6 **sticky flag** where possible to make things faster; otherwise it falls back to an almost-as-efficient workaround. (For more than you ever wanted to know about this, read [adventures in the land of substrings and RegExps](http://mrale.ph/blog/2016/11/23/making-less-dart-faster.html).) You _might_ be able to go faster still by writing your lexer by hand rather than using RegExps, but that's icky. Oh, and it [avoids parsing RegExps by itself](https://hackernoon.com/the-madness-of-parsing-real-world-javascript-regexps-d9ee336df983#.2l8qu3l76). Because that would be horrible. Usage ----- First, you need to do the needful: `$ npm install moo`, or whatever will ship this code to your computer. Alternatively, grab the `moo.js` file by itself and slap it into your web page via a `<script>` tag; moo is completely standalone. Then you can start roasting your very own lexer/tokenizer: ```js const moo = require('moo') let lexer = moo.compile({ WS: /[ \t]+/, comment: /\/\/.*?$/, number: /0|[1-9][0-9]*/, string: /"(?:\\["\\]|[^\n"\\])*"/, lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], NL: { match: /\n/, lineBreaks: true }, }) ``` And now throw some text at it: ```js lexer.reset('while (10) cows\nmoo') lexer.next() // -> { type: 'keyword', value: 'while' } lexer.next() // -> { type: 'WS', value: ' ' } lexer.next() // -> { type: 'lparen', value: '(' } lexer.next() // -> { type: 'number', value: '10' } // ... ``` When you reach the end of Moo's internal buffer, next() will return `undefined`. 
You can always `reset()` it and feed it more data when that happens. On Regular Expressions ---------------------- RegExps are nifty for making tokenizers, but they can be a bit of a pain. Here are some things to be aware of: * You often want to use **non-greedy quantifiers**: e.g. `*?` instead of `*`. Otherwise your tokens will be longer than you expect: ```js let lexer = moo.compile({ string: /".*"/, // greedy quantifier * // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo" "bar' } ``` Better: ```js let lexer = moo.compile({ string: /".*?"/, // non-greedy quantifier *? // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo' } lexer.next() // -> { type: 'space', value: ' ' } lexer.next() // -> { type: 'string', value: 'bar' } ``` * The **order of your rules** matters. Earlier ones will take precedence. ```js moo.compile({ identifier: /[a-z0-9]+/, number: /[0-9]+/, }).reset('42').next() // -> { type: 'identifier', value: '42' } moo.compile({ number: /[0-9]+/, identifier: /[a-z0-9]+/, }).reset('42').next() // -> { type: 'number', value: '42' } ``` * Moo uses **multiline RegExps**. This has a few quirks: for example, the **dot `/./` doesn't include newlines**. Use `[^]` instead if you want to match newlines too. * Since an excluding character ranges like `/[^ ]/` (which matches anything but a space) _will_ include newlines, you have to be careful not to include them by accident! In particular, the whitespace metacharacter `\s` includes newlines. Line Numbers ------------ Moo tracks detailed information about the input for you. It will track line numbers, as long as you **apply the `lineBreaks: true` option to any rules which might contain newlines**. Moo will try to warn you if you forget to do this. Note that this is `false` by default, for performance reasons: counting the number of lines in a matched token has a small cost. For optimal performance, only match newlines inside a dedicated token: ```js newline: {match: '\n', lineBreaks: true}, ``` ### Token Info ### Token objects (returned from `next()`) have the following attributes: * **`type`**: the name of the group, as passed to compile. * **`text`**: the string that was matched. * **`value`**: the string that was matched, transformed by your `value` function (if any). * **`offset`**: the number of bytes from the start of the buffer where the match starts. * **`lineBreaks`**: the number of line breaks found in the match. (Always zero if this rule has `lineBreaks: false`.) * **`line`**: the line number of the beginning of the match, starting from 1. * **`col`**: the column where the match begins, starting from 1. ### Value vs. Text ### The `value` is the same as the `text`, unless you provide a [value transform](#transform). ```js const moo = require('moo') const lexer = moo.compile({ ws: /[ \t]+/, string: {match: /"(?:\\["\\]|[^\n"\\])*"/, value: s => s.slice(1, -1)}, }) lexer.reset('"test"') lexer.next() /* { value: 'test', text: '"test"', ... } */ ``` ### Reset ### Calling `reset()` on your lexer will empty its internal buffer, and set the line, column, and offset counts back to their initial value. If you don't want this, you can `save()` the state, and later pass it as the second argument to `reset()` to explicitly control the internal state of the lexer. ```js    lexer.reset('some line\n') let info = lexer.save() // -> { line: 10 } lexer.next() // -> { line: 10 } lexer.next() // -> { line: 11 } // ... 
lexer.reset('a different line\n', info) lexer.next() // -> { line: 10 } ``` Keywords -------- Moo makes it convenient to define literals. ```js moo.compile({ lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], }) ``` It'll automatically compile them into regular expressions, escaping them where necessary. **Keywords** should be written using the `keywords` transform. ```js moo.compile({ IDEN: {match: /[a-zA-Z]+/, type: moo.keywords({ KW: ['while', 'if', 'else', 'moo', 'cows'], })}, SPACE: {match: /\s+/, lineBreaks: true}, }) ``` ### Why? ### You need to do this to ensure the **longest match** principle applies, even in edge cases. Imagine trying to parse the input `className` with the following rules: ```js keyword: ['class'], identifier: /[a-zA-Z]+/, ``` You'll get _two_ tokens — `['class', 'Name']` -- which is _not_ what you want! If you swap the order of the rules, you'll fix this example; but now you'll lex `class` wrong (as an `identifier`). The keywords helper checks matches against the list of keywords; if any of them match, it uses the type `'keyword'` instead of `'identifier'` (for this example). ### Keyword Types ### Keywords can also have **individual types**. ```js let lexer = moo.compile({ name: {match: /[a-zA-Z]+/, type: moo.keywords({ 'kw-class': 'class', 'kw-def': 'def', 'kw-if': 'if', })}, // ... }) lexer.reset('def foo') lexer.next() // -> { type: 'kw-def', value: 'def' } lexer.next() // space lexer.next() // -> { type: 'name', value: 'foo' } ``` You can use [itt](https://github.com/nathan/itt)'s iterator adapters to make constructing keyword objects easier: ```js itt(['class', 'def', 'if']) .map(k => ['kw-' + k, k]) .toObject() ``` States ------ Moo allows you to define multiple lexer **states**. Each state defines its own separate set of token rules. Your lexer will start off in the first state given to `moo.states({})`. Rules can be annotated with `next`, `push`, and `pop`, to change the current state after that token is matched. A "stack" of past states is kept, which is used by `push` and `pop`. * **`next: 'bar'`** moves to the state named `bar`. (The stack is not changed.) * **`push: 'bar'`** moves to the state named `bar`, and pushes the old state onto the stack. * **`pop: 1`** removes one state from the top of the stack, and moves to that state. (Only `1` is supported.) Only rules from the current state can be matched. You need to copy your rule into all the states you want it to be matched in. For example, to tokenize JS-style string interpolation such as `a${{c: d}}e`, you might use: ```js let lexer = moo.states({ main: { strstart: {match: '`', push: 'lit'}, ident: /\w+/, lbrace: {match: '{', push: 'main'}, rbrace: {match: '}', pop: true}, colon: ':', space: {match: /\s+/, lineBreaks: true}, }, lit: { interp: {match: '${', push: 'main'}, escape: /\\./, strend: {match: '`', pop: true}, const: {match: /(?:[^$`]|\$(?!\{))+/, lineBreaks: true}, }, }) // <= `a${{c: d}}e` // => strstart const interp lbrace ident colon space ident rbrace rbrace const strend ``` The `rbrace` rule is annotated with `pop`, so it moves from the `main` state into either `lit` or `main`, depending on the stack. Errors ------ If none of your rules match, Moo will throw an Error; since it doesn't know what else to do. If you prefer, you can have moo return an error token instead of throwing an exception. The error token will contain the whole of the rest of the buffer. ```js moo.compile({ // ... 
myError: moo.error, }) moo.reset('invalid') moo.next() // -> { type: 'myError', value: 'invalid', text: 'invalid', offset: 0, lineBreaks: 0, line: 1, col: 1 } moo.next() // -> undefined ``` You can have a token type that both matches tokens _and_ contains error values. ```js moo.compile({ // ... myError: {match: /[\$?`]/, error: true}, }) ``` ### Formatting errors ### If you want to throw an error from your parser, you might find `formatError` helpful. Call it with the offending token: ```js throw new Error(lexer.formatError(token, "invalid syntax")) ``` It returns a string with a pretty error message. ``` Error: invalid syntax at line 2 col 15: totally valid `syntax` ^ ``` Iteration --------- Iterators: we got 'em. ```js for (let here of lexer) { // here = { type: 'number', value: '123', ... } } ``` Create an array of tokens. ```js let tokens = Array.from(lexer); ``` Use [itt](https://github.com/nathan/itt)'s iteration tools with Moo. ```js for (let [here, next] = itt(lexer).lookahead()) { // pass a number if you need more tokens // enjoy! } ``` Transform --------- Moo doesn't allow capturing groups, but you can supply a transform function, `value()`, which will be called on the value before storing it in the Token object. ```js moo.compile({ STRING: [ {match: /"""[^]*?"""/, lineBreaks: true, value: x => x.slice(3, -3)}, {match: /"(?:\\["\\rn]|[^"\\])*?"/, lineBreaks: true, value: x => x.slice(1, -1)}, {match: /'(?:\\['\\rn]|[^'\\])*?'/, lineBreaks: true, value: x => x.slice(1, -1)}, ], // ... }) ``` Contributing ------------ Do check the [FAQ](https://github.com/tjvr/moo/issues?q=label%3Aquestion). Before submitting an issue, [remember...](https://github.com/tjvr/moo/blob/master/.github/CONTRIBUTING.md) # minimatch A minimal matching utility. [![Build Status](https://secure.travis-ci.org/isaacs/minimatch.svg)](http://travis-ci.org/isaacs/minimatch) This is the matching library used internally by npm. It works by converting glob expressions into JavaScript `RegExp` objects. ## Usage ```javascript var minimatch = require("minimatch") minimatch("bar.foo", "*.foo") // true! minimatch("bar.foo", "*.bar") // false! minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy! ``` ## Features Supports these glob features: * Brace Expansion * Extended glob matching * "Globstar" `**` matching See: * `man sh` * `man bash` * `man 3 fnmatch` * `man 5 gitignore` ## Minimatch Class Create a minimatch object by instantiating the `minimatch.Minimatch` class. ```javascript var Minimatch = require("minimatch").Minimatch var mm = new Minimatch(pattern, options) ``` ### Properties * `pattern` The original pattern the minimatch object represents. * `options` The options supplied to the constructor. * `set` A 2-dimensional array of regexp or string expressions. Each row in the array corresponds to a brace-expanded pattern. Each item in the row corresponds to a single path-part. For example, the pattern `{a,b/c}/d` would expand to a set of patterns like: [ [ a, d ] , [ b, c, d ] ] If a portion of the pattern doesn't have any "magic" in it (that is, it's something like `"foo"` rather than `fo*o?`), then it will be left as a string rather than converted to a regular expression. * `regexp` Created by the `makeRe` method. A single regular expression expressing the entire pattern. This is useful in cases where you wish to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled. * `negate` True if the pattern is negated. * `comment` True if the pattern is a comment. 
* `empty` True if the pattern is `""`. ### Methods * `makeRe` Generate the `regexp` member if necessary, and return it. Will return `false` if the pattern is invalid. * `match(fname)` Return true if the filename matches the pattern, or false otherwise. * `matchOne(fileArray, patternArray, partial)` Take a `/`-split filename, and match it against a single row in the `regExpSet`. This method is mainly for internal use, but is exposed so that it can be used by a glob-walker that needs to avoid excessive filesystem calls. All other methods are internal, and will be called as necessary. ### minimatch(path, pattern, options) Main export. Tests a path against the pattern using the options. ```javascript var isJS = minimatch(file, "*.js", { matchBase: true }) ``` ### minimatch.filter(pattern, options) Returns a function that tests its supplied argument, suitable for use with `Array.filter`. Example: ```javascript var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true})) ``` ### minimatch.match(list, pattern, options) Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself. ```javascript var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})) ``` ### minimatch.makeRe(pattern, options) Make a regular expression object from the pattern. ## Options All options are `false` by default. ### debug Dump a ton of stuff to stderr. ### nobrace Do not expand `{a,b}` and `{1..3}` brace sets. ### noglobstar Disable `**` matching against multiple folder names. ### dot Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot. Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot` is set. ### noext Disable "extglob" style patterns like `+(a|b)`. ### nocase Perform a case-insensitive match. ### nonull When a match is not found by `minimatch.match`, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches. ### matchBase If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. ### nocomment Suppress the behavior of treating `#` at the start of a pattern as a comment. ### nonegate Suppress the behavior of treating a leading `!` character as negation. ### flipNegate Returns from negate expressions the same as if they were not negated. (Ie, true on a hit, false on a miss.) ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional. If the pattern starts with a `!` character, then it is negated. Set the `nonegate` flag to suppress this behavior, and treat leading `!` characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like `!(a|B)`. Multiple `!` characters at the start of a pattern will negate the pattern multiple times. If a pattern starts with `#`, then it is treated as a comment, and will not match anything. Use `\#` to match a literal `#` at the start of a line, or set the `nocomment` flag to suppress this behavior. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. 
This is supported in the manner of bsdglob and bash 4.1, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. If an escaped pattern has no matches, and the `nonull` flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, `minimatch.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. # ts-mixer [version-badge]: https://badgen.net/npm/v/ts-mixer [version-link]: https://npmjs.com/package/ts-mixer [build-badge]: https://img.shields.io/github/workflow/status/tannerntannern/ts-mixer/ts-mixer%20CI [build-link]: https://github.com/tannerntannern/ts-mixer/actions [ts-versions]: https://badgen.net/badge/icon/3.8,3.9,4.0?icon=typescript&label&list=| [node-versions]: https://badgen.net/badge/node/10%2C12%2C14/blue/?list=| [![npm version][version-badge]][version-link] [![github actions][build-badge]][build-link] [![TS Versions][ts-versions]][build-link] [![Node.js Versions][node-versions]][build-link] [![Minified Size](https://badgen.net/bundlephobia/min/ts-mixer)](https://bundlephobia.com/result?p=ts-mixer) [![Conventional Commits](https://badgen.net/badge/conventional%20commits/1.0.0/yellow)](https://conventionalcommits.org) ## Overview `ts-mixer` brings mixins to TypeScript. "Mixins" to `ts-mixer` are just classes, so you already know how to write them, and you can probably mix classes from your favorite library without trouble. The mixin problem is more nuanced than it appears. I've seen countless code snippets that work for certain situations, but fail in others. `ts-mixer` tries to take the best from all these solutions while accounting for the situations you might not have considered. [Quick start guide](#quick-start) ### Features * mixes plain classes * mixes classes that extend other classes * mixes classes that were mixed with `ts-mixer` * supports static properties * supports protected/private properties (the popular function-that-returns-a-class solution does not) * mixes abstract classes (with caveats [[1](#caveats)]) * mixes generic classes (with caveats [[2](#caveats)]) * supports class, method, and property decorators (with caveats [[3, 6](#caveats)]) * mostly supports the complexity presented by constructor functions (with caveats [[4](#caveats)]) * comes with an `instanceof`-like replacement (with caveats [[5, 6](#caveats)]) * [multiple mixing strategies](#settings) (ES6 proxies vs hard copy) ### Caveats 1. Mixing abstract classes requires a bit of a hack that may break in future versions of TypeScript. See [mixing abstract classes](#mixing-abstract-classes) below. 2. Mixing generic classes requires a more cumbersome notation, but it's still possible. See [mixing generic classes](#mixing-generic-classes) below. 3. Using decorators in mixed classes also requires a more cumbersome notation. See [mixing with decorators](#mixing-with-decorators) below. 4. 
ES6 made it impossible to use `.apply(...)` on class constructors (or any means of calling them without `new`), which makes it impossible for `ts-mixer` to pass the proper `this` to your constructors. This may or may not be an issue for your code, but there are options to work around it. See [dealing with constructors](#dealing-with-constructors) below. 5. `ts-mixer` does not support `instanceof` for mixins, but it does offer a replacement. See the [hasMixin function](#hasmixin) for more details. 6. Certain features (specifically, `@decorator` and `hasMixin`) make use of ES6 `Map`s, which means you must either use ES6+ or polyfill `Map` to use them. If you don't need these features, you should be fine without. ## Quick Start ### Installation ``` $ npm install ts-mixer ``` or if you prefer [Yarn](https://yarnpkg.com): ``` $ yarn add ts-mixer ``` ### Basic Example ```typescript import { Mixin } from 'ts-mixer'; class Foo { protected makeFoo() { return 'foo'; } } class Bar { protected makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { public makeFooBar() { return this.makeFoo() + this.makeBar(); } } const fooBar = new FooBar(); console.log(fooBar.makeFooBar()); // "foobar" ``` ## Special Cases ### Mixing Abstract Classes Abstract classes, by definition, cannot be constructed, which means they cannot take on the type, `new(...args) => any`, and by extension, are incompatible with `ts-mixer`. BUT, you can "trick" TypeScript into giving you all the benefits of an abstract class without making it technically abstract. The trick is just some strategic `// @ts-ignore`'s: ```typescript import { Mixin } from 'ts-mixer'; // note that Foo is not marked as an abstract class class Foo { // @ts-ignore: "Abstract methods can only appear within an abstract class" public abstract makeFoo(): string; } class Bar { public makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { // we still get all the benefits of abstract classes here, because TypeScript // will still complain if this method isn't implemented public makeFoo() { return 'foo'; } } ``` Do note that while this does work quite well, it is a bit of a hack and I can't promise that it will continue to work in future TypeScript versions. ### Mixing Generic Classes Frustratingly, it is _impossible_ for generic parameters to be referenced in base class expressions. No matter what, you will eventually run into `Base class expressions cannot reference class type parameters.` The way to get around this is to leverage [declaration merging](https://www.typescriptlang.org/docs/handbook/declaration-merging.html), and a slightly different mixing function from ts-mixer: `mix`. It works exactly like `Mixin`, except it's a decorator, which means it doesn't affect the type information of the class being decorated. See it in action below: ```typescript import { mix } from 'ts-mixer'; class Foo<T> { public fooMethod(input: T): T { return input; } } class Bar<T> { public barMethod(input: T): T { return input; } } interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { } @mix(Foo, Bar) class FooBar<T1, T2> { public fooBarMethod(input1: T1, input2: T2) { return [this.fooMethod(input1), this.barMethod(input2)]; } } ``` Key takeaways from this example: * `interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { }` makes sure `FooBar` has the typing we want, thanks to declaration merging * `@mix(Foo, Bar)` wires things up "on the JavaScript side", since the interface declaration has nothing to do with runtime behavior. 
* The reason we have to use the `mix` decorator is that the typing produced by `Mixin(Foo, Bar)` would conflict with the typing of the interface. `mix` has no effect "on the TypeScript side," thus avoiding type conflicts. ### Mixing with Decorators Popular libraries such as [class-validator](https://github.com/typestack/class-validator) and [TypeORM](https://github.com/typeorm/typeorm) use decorators to add functionality. Unfortunately, `ts-mixer` has no way of knowing what these libraries do with the decorators behind the scenes. So if you want these decorators to be "inherited" with classes you plan to mix, you first have to wrap them with a special `decorate` function exported by `ts-mixer`. Here's an example using `class-validator`: ```typescript import { IsBoolean, IsIn, validate } from 'class-validator'; import { Mixin, decorate } from 'ts-mixer'; class Disposable { @decorate(IsBoolean()) // instead of @IsBoolean() isDisposed: boolean = false; } class Statusable { @decorate(IsIn(['red', 'green'])) // instead of @IsIn(['red', 'green']) status: string = 'green'; } class ExtendedObject extends Mixin(Disposable, Statusable) {} const extendedObject = new ExtendedObject(); extendedObject.status = 'blue'; validate(extendedObject).then(errors => { console.log(errors); }); ``` ### Dealing with Constructors As mentioned in the [caveats section](#caveats), ES6 disallowed calling constructor functions without `new`. This means that the only way for `ts-mixer` to mix instance properties is to instantiate each base class separately, then copy the instance properties into a common object. The consequence of this is that constructors mixed by `ts-mixer` will _not_ receive the proper `this`. **This very well may not be an issue for you!** It only means that your constructors need to be "mostly pure" in terms of how they handle `this`. Specifically, your constructors cannot produce [side effects](https://en.wikipedia.org/wiki/Side_effect_%28computer_science%29) involving `this`, _other than adding properties to `this`_ (the most common side effect in JavaScript constructors). If you simply cannot eliminate `this` side effects from your constructor, there is a workaround available: `ts-mixer` will automatically forward constructor parameters to a predesignated init function (`settings.initFunction`) if it's present on the class. Unlike constructors, functions can be called with an arbitrary `this`, so this predesignated init function _will_ have the proper `this`. Here's a basic example: ```typescript import { Mixin, settings } from 'ts-mixer'; settings.initFunction = 'init'; class Person { public static allPeople: Set<Person> = new Set(); protected init() { Person.allPeople.add(this); } } type PartyAffiliation = 'democrat' | 'republican'; class PoliticalParticipant { public static democrats: Set<PoliticalParticipant> = new Set(); public static republicans: Set<PoliticalParticipant> = new Set(); public party: PartyAffiliation; // note that these same args will also be passed to init function public constructor(party: PartyAffiliation) { this.party = party; } protected init(party: PartyAffiliation) { if (party === 'democrat') PoliticalParticipant.democrats.add(this); else PoliticalParticipant.republicans.add(this); } } class Voter extends Mixin(Person, PoliticalParticipant) {} const v1 = new Voter('democrat'); const v2 = new Voter('democrat'); const v3 = new Voter('republican'); const v4 = new Voter('republican'); ``` Note the above `.add(this)` statements. 
These would not work as expected if they were placed in the constructor instead, since `this` is not the same between the constructor and `init`, as explained above. ## Other Features ### hasMixin As mentioned above, `ts-mixer` does not support `instanceof` for mixins. While it is possible to implement [custom `instanceof` behavior](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/hasInstance), this library does not do so because it would require modifying the source classes, which is deliberately avoided. You can fill this missing functionality with `hasMixin(instance, mixinClass)` instead. See the below example: ```typescript import { Mixin, hasMixin } from 'ts-mixer'; class Foo {} class Bar {} class FooBar extends Mixin(Foo, Bar) {} const instance = new FooBar(); // doesn't work with instanceof... console.log(instance instanceof FooBar) // true console.log(instance instanceof Foo) // false console.log(instance instanceof Bar) // false // but everything works nicely with hasMixin! console.log(hasMixin(instance, FooBar)) // true console.log(hasMixin(instance, Foo)) // true console.log(hasMixin(instance, Bar)) // true ``` `hasMixin(instance, mixinClass)` will work anywhere that `instance instanceof mixinClass` works. Additionally, like `instanceof`, you get the same [type narrowing benefits](https://www.typescriptlang.org/docs/handbook/advanced-types.html#instanceof-type-guards): ```typescript if (hasMixin(instance, Foo)) { // inferred type of instance is "Foo" } if (hasMixin(instance, Bar)) { // inferred type of instance of "Bar" } ``` ## Settings ts-mixer has multiple strategies for mixing classes which can be configured by modifying `settings` from ts-mixer. For example: ```typescript import { settings, Mixin } from 'ts-mixer'; settings.prototypeStrategy = 'proxy'; // then use `Mixin` as normal... ``` ### `settings.prototypeStrategy` * Determines how ts-mixer will mix class prototypes together * Possible values: - `'copy'` (default) - Copies all methods from the classes being mixed into a new prototype object. (This will include all methods up the prototype chains as well.) This is the default for ES5 compatibility, but it has the downside of stale references. For example, if you mix `Foo` and `Bar` to make `FooBar`, then redefine a method on `Foo`, `FooBar` will not have the latest methods from `Foo`. If this is not a concern for you, `'copy'` is the best value for this setting. - `'proxy'` - Uses an ES6 Proxy to "soft mix" prototypes. Unlike `'copy'`, updates to the base classes _will_ be reflected in the mixed class, which may be desirable. The downside is that method access is not as performant, nor is it ES5 compatible. ### `settings.staticsStrategy` * Determines how static properties are inherited * Possible values: - `'copy'` (default) - Simply copies all properties (minus `prototype`) from the base classes/constructor functions onto the mixed class. Like `settings.prototypeStrategy = 'copy'`, this strategy also suffers from stale references, but shouldn't be a concern if you don't redefine static methods after mixing. - `'proxy'` - Similar to `settings.prototypeStrategy`, proxy's static method access to base classes. Has the same benefits/downsides. 
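Both `'copy'` strategies share the stale-reference behaviour described above. Here is a minimal sketch (the classes are hypothetical) of what that looks like for prototype methods:

```typescript
import { Mixin, settings } from 'ts-mixer';

settings.prototypeStrategy = 'copy'; // the default

class Foo { public greet() { return 'hi'; } }
class Bar {}
class FooBar extends Mixin(Foo, Bar) {}

// Redefine the method on Foo *after* mixing...
Foo.prototype.greet = () => 'hello';

// ...the mixed class keeps the method that was copied at mix time.
// With prototypeStrategy = 'proxy', it would pick up the new definition.
console.log(new FooBar().greet()); // 'hi' under the copy strategy
```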
### `settings.initFunction` * If set, `ts-mixer` will automatically call the function with this name upon construction * Possible values: - `null` (default) - disables the behavior - a string - function name to call upon construction * Read more about why you would want this in [dealing with constructors](#dealing-with-constructors) ### `settings.decoratorInheritance` * Determines how decorators are inherited from classes passed to `Mixin(...)` * Possible values: - `'deep'` (default) - Deeply inherits decorators from all given classes and their ancestors - `'direct'` - Only inherits decorators defined directly on the given classes - `'none'` - Skips decorator inheritance # Author Tanner Nielsen <[email protected]> * Website - [tannernielsen.com](http://tannernielsen.com) * Github - [tannerntannern](https://github.com/tannerntannern) # randexp.js randexp will generate a random string that matches a given RegExp Javascript object. [![Build Status](https://secure.travis-ci.org/fent/randexp.js.svg)](http://travis-ci.org/fent/randexp.js) [![Dependency Status](https://david-dm.org/fent/randexp.js.svg)](https://david-dm.org/fent/randexp.js) [![codecov](https://codecov.io/gh/fent/randexp.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/randexp.js) # Usage ```js var RandExp = require('randexp'); // supports grouping and piping new RandExp(/hello+ (world|to you)/).gen(); // => hellooooooooooooooooooo world // sets and ranges and references new RandExp(/<([a-z]\w{0,20})>foo<\1>/).gen(); // => <m5xhdg>foo<m5xhdg> // wildcard new RandExp(/random stuff: .+/).gen(); // => random stuff: l3m;Hf9XYbI [YPaxV>U*4-_F!WXQh9>;rH3i l!8.zoh?[utt1OWFQrE ^~8zEQm]~tK // ignore case new RandExp(/xxx xtreme dragon warrior xxx/i).gen(); // => xxx xtReME dRAGON warRiOR xXX // dynamic regexp shortcut new RandExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i'); // is the same as new RandExp(new RegExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i')); ``` If you're only going to use `gen()` once with a regexp and want slightly shorter syntax for it ```js var randexp = require('randexp').randexp; randexp(/[1-6]/); // 4 randexp('great|good( job)?|excellent'); // great ``` If you miss the old syntax ```js require('randexp').sugar(); /yes|no|maybe|i don't know/.gen(); // maybe ``` # Motivation Regular expressions are used in every language, every programmer is familiar with them. Regex can be used to easily express complex strings. What better way to generate a random string than with a language you can use to express the string you want? Thanks to [String-Random](http://search.cpan.org/~steve/String-Random-0.22/lib/String/Random.pm) for giving me the idea to make this in the first place and [randexp](https://github.com/benburkert/randexp) for the sweet `.gen()` syntax. # Default Range The default generated character range includes printable ASCII. In order to add or remove characters, a `defaultRange` attribute is exposed. you can `subtract(from, to)` and `add(from, to)` ```js var randexp = new RandExp(/random stuff: .+/); randexp.defaultRange.subtract(32, 126); randexp.defaultRange.add(0, 65535); randexp.gen(); // => random stuff: 湐箻ໜ䫴␩⶛㳸長���邓蕲뤀쑡篷皇硬剈궦佔칗븛뀃匫鴔事좍ﯣ⭼ꝏ䭍詳蒂䥂뽭 ``` # Custom PRNG The default randomness is provided by `Math.random()`. If you need to use a seedable or cryptographic PRNG, you can override `RandExp.prototype.randInt` or `randexp.randInt` (where `randexp` is an instance of `RandExp`). `randInt(from, to)` accepts an inclusive range and returns a randomly selected number within that range. 
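A minimal sketch of that override, using a deliberately simple seeded generator (the LCG below is just for illustration, not a recommendation):

```js
var RandExp = require('randexp');

// A tiny, deterministic pseudo-random source seeded with a constant.
var seed = 42;
function seededRandom() {
  seed = (seed * 1103515245 + 12345) % 2147483648;
  return seed / 2147483648;
}

var randexp = new RandExp(/[a-z]{8}/);

// randInt(from, to) must return an integer in the inclusive range [from, to].
randexp.randInt = function (from, to) {
  return from + Math.floor(seededRandom() * (to - from + 1));
};

console.log(randexp.gen()); // deterministic output for a fixed seed
```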
# Infinite Repetitionals Repetitional tokens such as `*`, `+`, and `{3,}` have an infinite max range. In this case, randexp looks at its min and adds 100 to it to get a useable max value. If you want to use another int other than 100 you can change the `max` property in `RandExp.prototype` or the RandExp instance. ```js var randexp = new RandExp(/no{1,}/); randexp.max = 1000000; ``` With `RandExp.sugar()` ```js var regexp = /(hi)*/; regexp.max = 1000000; ``` # Bad Regular Expressions There are some regular expressions which can never match any string. * Ones with badly placed positionals such as `/a^/` and `/$c/m`. Randexp will ignore positional tokens. * Back references to non-existing groups like `/(a)\1\2/`. Randexp will ignore those references, returning an empty string for them. If the group exists only after the reference is used such as in `/\1 (hey)/`, it will too be ignored. * Custom negated character sets with two sets inside that cancel each other out. Example: `/[^\w\W]/`. If you give this to randexp, it will return an empty string for this set since it can't match anything. # Projects based on randexp.js ## JSON-Schema Faker Use generators to populate JSON Schema samples. See: [jsf on github](https://github.com/json-schema-faker/json-schema-faker/) and [jsf demo page](http://json-schema-faker.js.org/). # Install ### Node.js npm install randexp ### Browser Download the [minified version](https://github.com/fent/randexp.js/releases) from the latest release. # Tests Tests are written with [mocha](https://mochajs.org) ```bash npm test ``` # License MIT # node-tar [![Build Status](https://travis-ci.org/npm/node-tar.svg?branch=master)](https://travis-ci.org/npm/node-tar) [Fast](./benchmarks) and full-featured Tar for Node.js The API is designed to mimic the behavior of `tar(1)` on unix systems. If you are familiar with how tar works, most of this will hopefully be straightforward for you. If not, then hopefully this module can teach you useful unix skills that may come in handy someday :) ## Background A "tar file" or "tarball" is an archive of file system entries (directories, files, links, etc.) The name comes from "tape archive". If you run `man tar` on almost any Unix command line, you'll learn quite a bit about what it can do, and its history. Tar has 5 main top-level commands: * `c` Create an archive * `r` Replace entries within an archive * `u` Update entries within an archive (ie, replace if they're newer) * `t` List out the contents of an archive * `x` Extract an archive to disk The other flags and options modify how this top level function works. ## High-Level API These 5 functions are the high-level API. All of them have a single-character name (for unix nerds familiar with `tar(1)`) as well as a long name (for everyone else). All the high-level functions take the following arguments, all three of which are optional and may be omitted. 1. `options` - An optional object specifying various options 2. `paths` - An array of paths to add or extract 3. `callback` - Called when the command is completed, if async. (If sync or no file specified, providing a callback throws a `TypeError`.) If the command is sync (ie, if `options.sync=true`), then the callback is not allowed, since the action will be completed immediately. If a `file` argument is specified, and the command is async, then a `Promise` is returned. In this case, if async, a callback may be provided which is called when the command is completed. If a `file` option is not specified, then a stream is returned. 
For `create`, this is a readable stream of the generated archive. For `list` and `extract` this is a writable stream that an archive should be written into. If a file is not specified, then a callback is not allowed, because you're already getting a stream to work with. `replace` and `update` only work on existing archives, and so require a `file` argument. Sync commands without a file argument return a stream that acts on its input immediately in the same tick. For readable streams, this means that all of the data is immediately available by calling `stream.read()`. For writable streams, it will be acted upon as soon as it is provided, but this can be at any time. ### Warnings and Errors Tar emits warnings and errors for recoverable and unrecoverable situations, respectively. In many cases, a warning only affects a single entry in an archive, or is simply informing you that it's modifying an entry to comply with the settings provided. Unrecoverable warnings will always raise an error (ie, emit `'error'` on streaming actions, throw for non-streaming sync actions, reject the returned Promise for non-streaming async operations, or call a provided callback with an `Error` as the first argument). Recoverable errors will raise an error only if `strict: true` is set in the options. Respond to (recoverable) warnings by listening to the `warn` event. Handlers receive 3 arguments: - `code` String. One of the error codes below. This may not match `data.code`, which preserves the original error code from fs and zlib. - `message` String. More details about the error. - `data` Metadata about the error. An `Error` object for errors raised by fs and zlib. All fields are attached to errors raisd by tar. Typically contains the following fields, as relevant: - `tarCode` The tar error code. - `code` Either the tar error code, or the error code set by the underlying system. - `file` The archive file being read or written. - `cwd` Working directory for creation and extraction operations. - `entry` The entry object (if it could be created) for `TAR_ENTRY_INFO`, `TAR_ENTRY_INVALID`, and `TAR_ENTRY_ERROR` warnings. - `header` The header object (if it could be created, and the entry could not be created) for `TAR_ENTRY_INFO` and `TAR_ENTRY_INVALID` warnings. - `recoverable` Boolean. If `false`, then the warning will emit an `error`, even in non-strict mode. #### Error Codes * `TAR_ENTRY_INFO` An informative error indicating that an entry is being modified, but otherwise processed normally. For example, removing `/` or `C:\` from absolute paths if `preservePaths` is not set. * `TAR_ENTRY_INVALID` An indication that a given entry is not a valid tar archive entry, and will be skipped. This occurs when: - a checksum fails, - a `linkpath` is missing for a link type, or - a `linkpath` is provided for a non-link type. If every entry in a parsed archive raises an `TAR_ENTRY_INVALID` error, then the archive is presumed to be unrecoverably broken, and `TAR_BAD_ARCHIVE` will be raised. * `TAR_ENTRY_ERROR` The entry appears to be a valid tar archive entry, but encountered an error which prevented it from being unpacked. This occurs when: - an unrecoverable fs error happens during unpacking, - an entry has `..` in the path and `preservePaths` is not set, or - an entry is extracting through a symbolic link, when `preservePaths` is not set. * `TAR_ENTRY_UNSUPPORTED` An indication that a given entry is a valid archive entry, but of a type that is unsupported, and so will be skipped in archive creation or extracting. 
* `TAR_ABORT` When parsing gzipped-encoded archives, the parser will abort the parse process raise a warning for any zlib errors encountered. Aborts are considered unrecoverable for both parsing and unpacking. * `TAR_BAD_ARCHIVE` The archive file is totally hosed. This can happen for a number of reasons, and always occurs at the end of a parse or extract: - An entry body was truncated before seeing the full number of bytes. - The archive contained only invalid entries, indicating that it is likely not an archive, or at least, not an archive this library can parse. `TAR_BAD_ARCHIVE` is considered informative for parse operations, but unrecoverable for extraction. Note that, if encountered at the end of an extraction, tar WILL still have extracted as much it could from the archive, so there may be some garbage files to clean up. Errors that occur deeper in the system (ie, either the filesystem or zlib) will have their error codes left intact, and a `tarCode` matching one of the above will be added to the warning metadata or the raised error object. Errors generated by tar will have one of the above codes set as the `error.code` field as well, but since errors originating in zlib or fs will have their original codes, it's better to read `error.tarCode` if you wish to see how tar is handling the issue. ### Examples The API mimics the `tar(1)` command line functionality, with aliases for more human-readable option and function names. The goal is that if you know how to use `tar(1)` in Unix, then you know how to use `require('tar')` in JavaScript. To replicate `tar czf my-tarball.tgz files and folders`, you'd do: ```js tar.c( { gzip: <true|gzip options>, file: 'my-tarball.tgz' }, ['some', 'files', 'and', 'folders'] ).then(_ => { .. tarball has been created .. }) ``` To replicate `tar cz files and folders > my-tarball.tgz`, you'd do: ```js tar.c( // or tar.create { gzip: <true|gzip options> }, ['some', 'files', 'and', 'folders'] ).pipe(fs.createWriteStream('my-tarball.tgz')) ``` To replicate `tar xf my-tarball.tgz` you'd do: ```js tar.x( // or tar.extract( { file: 'my-tarball.tgz' } ).then(_=> { .. tarball has been dumped in cwd .. }) ``` To replicate `cat my-tarball.tgz | tar x -C some-dir --strip=1`: ```js fs.createReadStream('my-tarball.tgz').pipe( tar.x({ strip: 1, C: 'some-dir' // alias for cwd:'some-dir', also ok }) ) ``` To replicate `tar tf my-tarball.tgz`, do this: ```js tar.t({ file: 'my-tarball.tgz', onentry: entry => { .. do whatever with it .. } }) ``` To replicate `cat my-tarball.tgz | tar t` do: ```js fs.createReadStream('my-tarball.tgz') .pipe(tar.t()) .on('entry', entry => { .. do whatever with it .. }) ``` To do anything synchronous, add `sync: true` to the options. Note that sync functions don't take a callback and don't return a promise. When the function returns, it's already done. Sync methods without a file argument return a sync stream, which flushes immediately. But, of course, it still won't be done until you `.end()` it. To filter entries, add `filter: <function>` to the options. Tar-creating methods call the filter with `filter(path, stat)`. Tar-reading methods (including extraction) call the filter with `filter(path, entry)`. The filter is called in the `this`-context of the `Pack` or `Unpack` stream object. The arguments list to `tar t` and `tar x` specify a list of filenames to extract or list, so they're equivalent to a filter that tests if the file is in the list. 
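For example, a small sketch of listing only the `.js` entries of an archive by combining `filter` with `onentry` (the archive name here is hypothetical):

```js
const tar = require('tar')

tar.t({
  file: 'my-tarball.tgz',
  // Tar-reading methods call the filter with (path, entry);
  // returning false skips the entry.
  filter: (path, entry) => path.endsWith('.js'),
  onentry: entry => console.log(entry.path)
})
```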
For those who _aren't_ fans of tar's single-character command names: ``` tar.c === tar.create tar.r === tar.replace (appends to archive, file is required) tar.u === tar.update (appends if newer, file is required) tar.x === tar.extract tar.t === tar.list ``` Keep reading for all the command descriptions and options, as well as the low-level API that they are built on. ### tar.c(options, fileList, callback) [alias: tar.create] Create a tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Write the tarball archive to the specified filename. If this is specified, then the callback will be fired when the file has been written, and a promise will be returned that resolves when the file is written. If a filename is not specified, then a Readable Stream will be returned which will emit the file data. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. If this is set, and a file is not provided, then the resulting stream will already have the data ready to `read` or `emit('data')` as soon as you request it. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `mode` The mode to set on the created file archive - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. 
- `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. ### tar.x(options, fileList, callback) [alias: tar.extract] Extract a tarball archive. The `fileList` is an array of paths to extract from the tarball. If no paths are provided, then all the entries are extracted. If the archive is gzipped, then tar will detect this and unzip it. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. Most extraction errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then the extraction will fail completely. The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. [Alias: `C`] - `file` The archive file to extract. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Create files and directories synchronously. - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. [Alias: `keep-newer`, `keep-newer-files`] - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. [Alias: `k`, `keep-existing`] - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. [Alias: `P`] - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. [Alias: `U`] - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. [Alias: `strip-components`, `stripComponents`] - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. [Alias: `p`] - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. 
- `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. [Alias: `m`, `no-mtime`] - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync extractions. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### tar.t(options, fileList, callback) [alias: tar.list] List the contents of a tarball archive. The `fileList` is an array of paths to list from the tarball. If no paths are provided, then all the entries are listed. If the archive is gzipped, then tar will detect this and unzip it. Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. However, they don't emit `'data'` or `'end'` events. (If you want to get actual readable entries, use the `tar.Parse` class instead.) The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. [Alias: `C`] - `file` The archive file to list. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Read the specified file synchronously. (This has no effect when a file option isn't specified, because entries are emitted as fast as they are parsed from the stream anyway.) - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. This is important for when both `file` and `sync` are set, because it will be called synchronously. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noResume` By default, `entry` streams are resumed immediately after the call to `onentry`. Set `noResume: true` to suppress this behavior. Note that by opting into this, the stream will never complete until the entry data is consumed. ### tar.u(options, fileList, callback) [alias: tar.update] Add files to an archive if they are newer than the entry already in the tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. 
An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ### tar.r(options, fileList, callback) [alias: tar.replace] Add files to an existing archive. Because later entries override earlier entries, this effectively replaces any existing entries. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. 
- `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ## Low-Level API ### class tar.Pack A readable tar stream. Has all the standard readable stream interface stuff. `'data'` and `'end'` events, `read()` method, `pause()` and `resume()`, etc. #### constructor(options) The following options are supported: - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. - `noPax` Suppress pax extended headers. 
Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. #### add(path) Adds an entry to the archive. Returns the Pack stream. #### write(path) Adds an entry to the archive. Returns true if flushed. #### end() Finishes the archive. ### class tar.Pack.Sync Synchronous version of `tar.Pack`. ### class tar.Unpack A writable stream that unpacks a tar archive onto the file system. All the normal writable stream stuff is supported. `write()` and `end()` methods, `'drain'` events, etc. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. `'close'` is emitted when it's done writing stuff to the file system. Most unpack errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then an error will be emitted. #### constructor(options) - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. - `win32` True if on a windows platform. 
Causes behavior where filenames containing `<|>?` chars are converted to windows-compatible values while being unpacked. - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `strict` Treat warnings as crash-worthy errors. Default false. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") ### class tar.Unpack.Sync Synchronous version of `tar.Unpack`. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync unpack streams. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### class tar.Parse A writable stream that parses a tar archive stream. All the standard writable stream stuff is supported. If the archive is gzipped, then tar will detect this and unzip it. Emits `'entry'` events with `tar.ReadEntry` objects, which are themselves readable streams that you can pipe wherever. Each `entry` will not emit until the one before it is flushed through, so make sure to either consume the data (with `on('data', ...)` or `.pipe(...)`) or throw it away with `.resume()` to keep the stream flowing. #### constructor(options) Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. The following options are supported: - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") #### abort(error) Stop all parsing activities. This is called when there are zlib errors. It also emits an unrecoverable warning with the error provided. ### class tar.ReadEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being read out of a tar archive. It has the following fields: - `extended` The extended metadata object provided to the constructor. - `globalExtended` The global extended metadata object provided to the constructor. - `remain` The number of bytes remaining to be written into the stream. - `blockRemain` The number of 512-byte blocks remaining to be written into the stream. - `ignore` Whether this entry should be ignored. 
- `meta` True if this represents metadata about the next entry, false if it represents a filesystem object.
- All the fields from the header, extended header, and global extended header are added to the ReadEntry object. So it has `path`, `type`, `size`, `mode`, and so on.

#### constructor(header, extended, globalExtended)

Create a new ReadEntry object with the specified header, extended header, and global extended header values.

### class tar.WriteEntry extends [MiniPass](http://npm.im/minipass)

A representation of an entry that is being written from the file system into a tar archive.

Emits data for the Header, and for the Pax Extended Header if one is required, as well as any body data.

Creating a WriteEntry for a directory does not also create WriteEntry objects for all of the directory contents.

It has the following fields:

- `path` The path field that will be written to the archive. By default, this is also the path from the cwd to the file system object.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`.
- `myuid` If supported, the uid of the user running the current process.
- `myuser` The `env.USER` string if set, or `''`. Set as the entry `uname` field if the file's `uid` matches `this.myuid`.
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB.
- `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`.
- `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/` and filenames containing the windows-compatible forms of `<|>?:` characters are converted to actual `<|>?:` characters in the archive.
- `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.

#### constructor(path, options)

`path` is the path of the entry as it is written in the archive.

The following options are supported:

- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`.
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB.
- `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`.
- `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/`.
- `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors")
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.
- `umask` Set to restrict the modes on the entries in the archive, somewhat like how umask works on file creation. Defaults to `process.umask()` on unix systems, or `0o22` on Windows.

#### warn(message, data)

If strict, emit an error with the provided message. Otherwise, emit a `'warn'` event with the provided message and data.

### class tar.WriteEntry.Sync

Synchronous version of `tar.WriteEntry`.

### class tar.WriteEntry.Tar

A version of tar.WriteEntry that gets its data from a tar.ReadEntry instead of from the filesystem.

#### constructor(readEntry, options)

`readEntry` is the entry being read out of another archive.

The following options are supported:

- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors")
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.

### class tar.Header

A class for reading and writing header blocks.

It has the following fields:

- `nullBlock` True if decoding a block which is entirely composed of `0x00` null bytes. (Useful because tar files are terminated by at least 2 null blocks.)
- `cksumValid` True if the checksum in the header is valid, false otherwise.
- `needPax` True if the values, as encoded, will require a Pax extended header.
- `path` The path of the entry.
- `mode` The 4 lowest-order octal digits of the file mode. That is, read/write/execute permissions for world, group, and owner, and the setuid, setgid, and sticky bits.
- `uid` Numeric user id of the file owner
- `gid` Numeric group id of the file owner
- `size` Size of the file in bytes
- `mtime` Modified time of the file
- `cksum` The checksum of the header. This is generated by adding all the bytes of the header block, treating the checksum field itself as all ascii space characters (that is, `0x20`).
- `type` The human-readable name of the type of entry this represents, or the alphanumeric key if unknown.
- `typeKey` The alphanumeric key for the type of entry this header represents.
- `linkpath` The target of Link and SymbolicLink entries.
- `uname` Human-readable user name of the file owner - `gname` Human-readable group name of the file owner - `devmaj` The major portion of the device number. Always `0` for files, directories, and links. - `devmin` The minor portion of the device number. Always `0` for files, directories, and links. - `atime` File access time. - `ctime` File change time. #### constructor(data, [offset=0]) `data` is optional. It is either a Buffer that should be interpreted as a tar Header starting at the specified offset and continuing for 512 bytes, or a data object of keys and values to set on the header object, and eventually encode as a tar Header. #### decode(block, offset) Decode the provided buffer starting at the specified offset. Buffer length must be greater than 512 bytes. #### set(data) Set the fields in the data object. #### encode(buffer, offset) Encode the header fields into the buffer at the specified offset. Returns `this.needPax` to indicate whether a Pax Extended Header is required to properly encode the specified data. ### class tar.Pax An object representing a set of key-value pairs in an Pax extended header entry. It has the following fields. Where the same name is used, they have the same semantics as the tar.Header field of the same name. - `global` True if this represents a global extended header, or false if it is for a single entry. - `atime` - `charset` - `comment` - `ctime` - `gid` - `gname` - `linkpath` - `mtime` - `path` - `size` - `uid` - `uname` - `dev` - `ino` - `nlink` #### constructor(object, global) Set the fields set in the object. `global` is a boolean that defaults to false. #### encode() Return a Buffer containing the header and body for the Pax extended header entry, or `null` if there is nothing to encode. #### encodeBody() Return a string representing the body of the pax extended header entry. #### encodeField(fieldName) Return a string representing the key/value encoding for the specified fieldName, or `''` if the field is unset. ### tar.Pax.parse(string, extended, global) Return a new Pax object created by parsing the contents of the string provided. If the `extended` object is set, then also add the fields from that object. (This is necessary because multiple metadata entries can occur in sequence.) ### tar.types A translation table for the `type` field in tar headers. #### tar.types.name.get(code) Get the human-readable name for a given alphanumeric code. #### tar.types.code.get(name) Get the alphanumeric code for a given human-readable name. # hasurl [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] > Determine whether Node.js' native [WHATWG `URL`](https://nodejs.org/api/url.html#url_the_whatwg_url_api) implementation is available. ## Installation [Node.js](http://nodejs.org/) `>= 4` is required. 
To install, type this at the command line: ```shell npm install hasurl ``` ## Usage ```js const hasURL = require('hasurl'); if (hasURL()) { // supported } else { // fallback } ``` [npm-image]: https://img.shields.io/npm/v/hasurl.svg [npm-url]: https://npmjs.org/package/hasurl [travis-image]: https://img.shields.io/travis/stevenvachon/hasurl.svg [travis-url]: https://travis-ci.org/stevenvachon/hasurl [![Build Status](https://travis-ci.org/isaacs/rimraf.svg?branch=master)](https://travis-ci.org/isaacs/rimraf) [![Dependency Status](https://david-dm.org/isaacs/rimraf.svg)](https://david-dm.org/isaacs/rimraf) [![devDependency Status](https://david-dm.org/isaacs/rimraf/dev-status.svg)](https://david-dm.org/isaacs/rimraf#info=devDependencies) The [UNIX command](http://en.wikipedia.org/wiki/Rm_(Unix)) `rm -rf` for node. Install with `npm install rimraf`, or just drop rimraf.js somewhere. ## API `rimraf(f, [opts], callback)` The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with `opts.disableGlob` (defaults to `false`). This might be handy, for instance, if you have filenames that contain globbing wildcard characters. The callback will be called with an error if there is one. Certain errors are handled for you: * Windows: `EBUSY` and `ENOTEMPTY` - rimraf will back off a maximum of `opts.maxBusyTries` times before giving up, adding 100ms of wait between each attempt. The default `maxBusyTries` is 3. * `ENOENT` - If the file doesn't exist, rimraf will return successfully, since your desired outcome is already the case. * `EMFILE` - Since `readdir` requires opening a file descriptor, it's possible to hit `EMFILE` if too many file descriptors are in use. In the sync case, there's nothing to be done for this. But in the async case, rimraf will gradually back off with timeouts up to `opts.emfileWait` ms, which defaults to 1000. ## options * unlink, chmod, stat, lstat, rmdir, readdir, unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync In order to use a custom file system library, you can override specific fs functions on the options object. If any of these functions are present on the options object, then the supplied function will be used instead of the default fs method. Sync methods are only relevant for `rimraf.sync()`, of course. For example: ```javascript var myCustomFS = require('some-custom-fs') rimraf('some-thing', myCustomFS, callback) ``` * maxBusyTries If an `EBUSY`, `ENOTEMPTY`, or `EPERM` error code is encountered on Windows systems, then rimraf will retry with a linear backoff wait of 100ms longer on each try. The default maxBusyTries is 3. Only relevant for async usage. * emfileWait If an `EMFILE` error is encountered, then rimraf will retry repeatedly with a linear backoff of 1ms longer on each try, until the timeout counter hits this max. The default limit is 1000. If you repeatedly encounter `EMFILE` errors, then consider using [graceful-fs](http://npm.im/graceful-fs) in your program. Only relevant for async usage. * glob Set to `false` to disable [glob](http://npm.im/glob) pattern matching. Set to an object to pass options to the glob module. The default glob options are `{ nosort: true, silent: true }`. Glob version 6 is used in this module. Relevant for both sync and async usage. * disableGlob Set to any non-falsey value to disable globbing entirely. (Equivalent to setting `glob: false`.) ## rimraf.sync It can remove stuff synchronously, too. But that's not so good. Use the async API. It's better. 
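As a quick sketch of the two call styles described above (the paths are only examples), usage looks like this:

```js
const rimraf = require('rimraf')

// Async form: the callback receives an error, if any.
rimraf('build/**/*.tmp', err => {
  if (err) throw err
  console.log('temp files removed')
})

// Sync form: available, but the async API is preferred.
rimraf.sync('build', { disableGlob: true })
```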
## CLI If installed with `npm install rimraf -g` it can be used as a global command `rimraf <path> [<path> ...]` which is useful for cross platform support. ## mkdirp If you need to create a directory recursively, check out [mkdirp](https://github.com/substack/node-mkdirp). # Punycode.js [![Build status](https://travis-ci.org/bestiejs/punycode.js.svg?branch=master)](https://travis-ci.org/bestiejs/punycode.js) [![Code coverage status](http://img.shields.io/codecov/c/github/bestiejs/punycode.js.svg)](https://codecov.io/gh/bestiejs/punycode.js) [![Dependency status](https://gemnasium.com/bestiejs/punycode.js.svg)](https://gemnasium.com/bestiejs/punycode.js) Punycode.js is a robust Punycode converter that fully complies to [RFC 3492](https://tools.ietf.org/html/rfc3492) and [RFC 5891](https://tools.ietf.org/html/rfc5891). This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm: * [The C example code from RFC 3492](https://tools.ietf.org/html/rfc3492#appendix-C) * [`punycode.c` by _Markus W. Scherer_ (IBM)](http://opensource.apple.com/source/ICU/ICU-400.42/icuSources/common/punycode.c) * [`punycode.c` by _Ben Noordhuis_](https://github.com/bnoordhuis/punycode/blob/master/punycode.c) * [JavaScript implementation by _some_](http://stackoverflow.com/questions/183485/can-anyone-recommend-a-good-free-javascript-for-punycode-to-unicode-conversion/301287#301287) * [`punycode.js` by _Ben Noordhuis_](https://github.com/joyent/node/blob/426298c8c1c0d5b5224ac3658c41e7c2a3fe9377/lib/punycode.js) (note: [not fully compliant](https://github.com/joyent/node/issues/2072)) This project was [bundled](https://github.com/joyent/node/blob/master/lib/punycode.js) with Node.js from [v0.6.2+](https://github.com/joyent/node/compare/975f1930b1...61e796decc) until [v7](https://github.com/nodejs/node/pull/7941) (soft-deprecated). The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see [v1.4.1](https://github.com/bestiejs/punycode.js/releases/tag/v1.4.1). ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install punycode --save ``` In [Node.js](https://nodejs.org/): ```js const punycode = require('punycode'); ``` ## API ### `punycode.decode(string)` Converts a Punycode string of ASCII symbols to a string of Unicode symbols. ```js // decode domain name parts punycode.decode('maana-pta'); // 'mañana' punycode.decode('--dqo34k'); // '☃-⌘' ``` ### `punycode.encode(string)` Converts a string of Unicode symbols to a Punycode string of ASCII symbols. ```js // encode domain name parts punycode.encode('mañana'); // 'maana-pta' punycode.encode('☃-⌘'); // '--dqo34k' ``` ### `punycode.toUnicode(input)` Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode. ```js // decode domain names punycode.toUnicode('xn--maana-pta.com'); // → 'mañana.com' punycode.toUnicode('xn----dqo34k.com'); // → '☃-⌘.com' // decode email addresses punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'); // → 'джумла@джpумлатест.bрфa' ``` ### `punycode.toASCII(input)` Converts a lowercased Unicode string representing a domain name or an email address to Punycode. 
Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII. ```js // encode domain names punycode.toASCII('mañana.com'); // → 'xn--maana-pta.com' punycode.toASCII('☃-⌘.com'); // → 'xn----dqo34k.com' // encode email addresses punycode.toASCII('джумла@джpумлатест.bрфa'); // → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq' ``` ### `punycode.ucs2` #### `punycode.ucs2.decode(string)` Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16. ```js punycode.ucs2.decode('abc'); // → [0x61, 0x62, 0x63] // surrogate pair for U+1D306 TETRAGRAM FOR CENTRE: punycode.ucs2.decode('\uD834\uDF06'); // → [0x1D306] ``` #### `punycode.ucs2.encode(codePoints)` Creates a string based on an array of numeric code point values. ```js punycode.ucs2.encode([0x61, 0x62, 0x63]); // → 'abc' punycode.ucs2.encode([0x1D306]); // → '\uD834\uDF06' ``` ### `punycode.version` A string representing the current Punycode.js version number. ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License Punycode.js is available under the [MIT](https://mths.be/mit) license. # Glob Match files using the patterns the shell uses, like stars and stuff. [![Build Status](https://travis-ci.org/isaacs/node-glob.svg?branch=master)](https://travis-ci.org/isaacs/node-glob/) [![Build Status](https://ci.appveyor.com/api/projects/status/kd7f3yftf7unxlsx?svg=true)](https://ci.appveyor.com/project/isaacs/node-glob) [![Coverage Status](https://coveralls.io/repos/isaacs/node-glob/badge.svg?branch=master&service=github)](https://coveralls.io/github/isaacs/node-glob?branch=master) This is a glob implementation in JavaScript. It uses the `minimatch` library to do its matching. ![](logo/glob.png) ## Usage Install with npm ``` npm i glob ``` ```javascript var glob = require("glob") // options is optional glob("**/*.js", options, function (er, files) { // files is an array of filenames. // If the `nonull` option is set, and nothing // was found, then files is ["**/*.js"] // er is an error object or null. }) ``` ## Glob Primer "Globs" are the patterns you type when you do stuff like `ls *.js` on the command line, or put `build/*` in a `.gitignore` file. Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with `{` and end with `}`, with any number of comma-delimited sections within. Braced sections may contain slash characters, so `a{/b/c,bcd}` would expand into `a/b/c` and `abcd`. The following characters have special magic meaning when used in a path portion: * `*` Matches 0 or more characters in a single path portion * `?` Matches 1 character * `[...]` Matches a range of characters, similar to a RegExp range. If the first character of the range is `!` or `^` then it matches any character not in the range. * `!(pattern|pattern|pattern)` Matches anything that does not match any of the patterns provided. * `?(pattern|pattern|pattern)` Matches zero or one occurrence of the patterns provided. * `+(pattern|pattern|pattern)` Matches one or more occurrences of the patterns provided. 
* `*(a|b|c)` Matches zero or more occurrences of the patterns provided * `@(pattern|pat*|pat?erN)` Matches exactly one of the patterns provided * `**` If a "globstar" is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories. ### Dots If a file or directory path portion has a `.` as the first character, then it will not match any glob pattern unless that pattern's corresponding path part also has a `.` as its first character. For example, the pattern `a/.*/c` would match the file at `a/.b/c`. However the pattern `a/*/c` would not, because `*` does not start with a dot character. You can make glob treat dots as normal characters by setting `dot:true` in the options. ### Basename Matching If you set `matchBase:true` in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, `*.js` would match `test/simple/basic.js`. ### Empty Sets If no matching files are found, then an empty array is returned. This differs from the shell, where the pattern itself is returned. For example: $ echo a*s*d*f a*s*d*f To get the bash-style behavior, set the `nonull:true` in the options. ### See Also: * `man sh` * `man bash` (Search for "Pattern Matching") * `man 3 fnmatch` * `man 5 gitignore` * [minimatch documentation](https://github.com/isaacs/minimatch) ## glob.hasMagic(pattern, [options]) Returns `true` if there are any special characters in the pattern, and `false` otherwise. Note that the options affect the results. If `noext:true` is set in the options object, then `+(a|b)` will not be considered a magic pattern. If the pattern has a brace expansion, like `a/{b/c,x/y}` then that is considered magical, unless `nobrace:true` is set in the options. ## glob(pattern, [options], cb) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * `cb` `{Function}` * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Perform an asynchronous glob search. ## glob.sync(pattern, [options]) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * return: `{Array<String>}` filenames found matching the pattern Perform a synchronous glob search. ## Class: glob.Glob Create a Glob object by instantiating the `glob.Glob` class. ```javascript var Glob = require("glob").Glob var mg = new Glob(pattern, options, cb) ``` It's an EventEmitter, and starts walking the filesystem to find matches immediately. ### new glob.Glob(pattern, [options], [cb]) * `pattern` `{String}` pattern to search for * `options` `{Object}` * `cb` `{Function}` Called when an error occurs, or matches are found * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Note that if the `sync` flag is set in the options, then matches will be immediately available on the `g.found` member. ### Properties * `minimatch` The minimatch object that the glob uses. * `options` The options object passed in. * `aborted` Boolean which is set to true when calling `abort()`. There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls. * `cache` Convenience object. 
Each field has the following possible values: * `false` - Path does not exist * `true` - Path exists * `'FILE'` - Path exists, and is not a directory * `'DIR'` - Path exists, and is a directory * `[file, entries, ...]` - Path exists, is a directory, and the array value is the results of `fs.readdir` * `statCache` Cache of `fs.stat` results, to prevent statting the same path multiple times. * `symlinks` A record of which paths are symbolic links, which is relevant in resolving `**` patterns. * `realpathCache` An optional object which is passed to `fs.realpath` to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used. ### Events * `end` When the matching is finished, this is emitted with all the matches found. If the `nonull` option is set, and no match was found, then the `matches` list contains the original pattern. The matches are sorted, unless the `nosort` flag is set. * `match` Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath. * `error` Emitted when an unexpected error is encountered, or whenever any fs error occurs if `options.strict` is set. * `abort` When `abort()` is called, this event is raised. ### Methods * `pause` Temporarily stop the search * `resume` Resume the search * `abort` Stop the search forever ### Options All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications. All options are false by default, unless otherwise noted. All options are added to the Glob object, as well. If you are running many `glob` operations, you can pass a Glob object as the `options` argument to a subsequent operation to shortcut some `stat` and `readdir` calls. At the very least, you may pass in shared `symlinks`, `statCache`, `realpathCache`, and `cache` options, so that parallel glob operations will be sped up by sharing information about the filesystem. * `cwd` The current working directory in which to search. Defaults to `process.cwd()`. * `root` The place where patterns starting with `/` will be mounted onto. Defaults to `path.resolve(options.cwd, "/")` (`/` on Unix systems, and `C:\` or some such on Windows.) * `dot` Include `.dot` files in normal matches and `globstar` matches. Note that an explicit dot in a portion of the pattern will always match dot files. * `nomount` By default, a pattern starting with a forward-slash will be "mounted" onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior. * `mark` Add a `/` character to directory matches. Note that this requires additional stat calls. * `nosort` Don't sort the results. * `stat` Set to true to stat *all* results. This reduces performance somewhat, and is completely unnecessary, unless `readdir` is presumed to be an untrustworthy indicator of file existence. * `silent` When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the `silent` option to true to suppress these warnings. * `strict` When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the `strict` option to raise an error in these cases. * `cache` See `cache` property above. Pass in a previously generated cache object to save some fs calls. * `statCache` A cache of results of filesystem information, to prevent unnecessary stat calls. 
While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See "Race Conditions" below.) * `symlinks` A cache of known symbolic links. You may pass in a previously generated `symlinks` object to save `lstat` calls when resolving `**` matches. * `sync` DEPRECATED: use `glob.sync(pattern, opts)` instead. * `nounique` In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior. * `nonull` Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3). * `debug` Set to enable debug logging in minimatch and glob. * `nobrace` Do not expand `{a,b}` and `{1..3}` brace sets. * `noglobstar` Do not match `**` against multiple filenames. (Ie, treat it as a normal `*` instead.) * `noext` Do not match `+(a|b)` "extglob" patterns. * `nocase` Perform a case-insensitive match. Note: on case-insensitive filesystems, non-magic patterns will match by default, since `stat` and `readdir` will not raise errors. * `matchBase` Perform a basename-only match if the pattern does not contain any slash characters. That is, `*.js` would be treated as equivalent to `**/*.js`, matching all js files in all directories. * `nodir` Do not match directories, only files. (Note: to match *only* directories, simply put a `/` at the end of the pattern.) * `ignore` Add a pattern or an array of glob patterns to exclude matches. Note: `ignore` patterns are *always* in `dot:true` mode, regardless of any other settings. * `follow` Follow symlinked directories when expanding `**` patterns. Note that this can result in a lot of duplicate references in the presence of cyclic links. * `realpath` Set to true to call `fs.realpath` on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink) * `absolute` Set to true to always receive absolute paths for matched files. Unlike `realpath`, this also affects the values returned in the `match` event. ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.3, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. Note that symlinked directories are not crawled as part of a `**`, though their contents may match against subsequent portions of the pattern. This prevents infinite loops and duplicates and the like. If an escaped pattern has no matches, and the `nonull` flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, `glob.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. 
Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. ### Comments and Negation Previously, this module let you mark a pattern as a "comment" if it started with a `#` character, or a "negated" pattern if it started with a `!` character. These options were deprecated in version 5, and removed in version 6. To specify things that should not match, use the `ignore` option. ## Windows **Please only use forward-slashes in glob expressions.** Though windows uses either `/` or `\` as its path separator, only `/` characters are used by this glob implementation. You must use forward-slashes **only** in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators. Results from absolute patterns such as `/foo/*` are mounted onto the root setting using `path.join`. On windows, this will by default result in `/foo/*` matching `C:\foo\bar.txt`. ## Race Conditions Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such. As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result. As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls. Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem. ## Glob Logo Glob's logo was created by [Tanya Brassie](http://tanyabrassie.com/). Logo files can be found [here](https://github.com/isaacs/node-glob/tree/master/logo). The logo is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). ## Contributing Any change to behavior (including bugfixes) must come with a test. Patches that fail tests or reduce performance will be rejected. ``` # to run tests npm test # to re-generate test fixtures npm run test-regen # to benchmark against bash/zsh npm run bench # to profile javascript npm run prof ``` ![](oh-my-glob.gif) # yargs-parser [![Build Status](https://travis-ci.org/yargs/yargs-parser.svg)](https://travis-ci.org/yargs/yargs-parser) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. 
<img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js var argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```sh node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js var argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```sh { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js var parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## API ### require('yargs-parser')(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. 
* `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```sh node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```sh node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```sh node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```sh node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```sh node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```sh node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```sh node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```sh node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```sh node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```sh node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. ### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```sh node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```sh node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```sh node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```sh node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. 
```sh node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```sh node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```sh node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```sh node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```sh node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```sh node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. _If disabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC # inflight Add callbacks to requests in flight to avoid async duplication ## USAGE ```javascript var inflight = require('inflight') // some request that does some stuff function req(key, callback) { // key is any random string. like a url or filename or whatever. // // will return either a falsey value, indicating that the // request for this key is already in flight, or a new callback // which when called will call all callbacks passed to inflightk // with the same key callback = inflight(key, callback) // If we got a falsey value back, then there's already a req going if (!callback) return // this is where you'd fetch the url or whatever // callback is also once()-ified, so it can safely be assigned // to multiple events etc. First call wins. 
setTimeout(function() { callback(null, key) }, 100) } // only assigns a single setTimeout // when it dings, all cbs get called req('foo', cb1) req('foo', cb2) req('foo', cb3) req('foo', cb4) ``` # minimatch A minimal matching utility. [![Build Status](https://secure.travis-ci.org/isaacs/minimatch.svg)](http://travis-ci.org/isaacs/minimatch) This is the matching library used internally by npm. It works by converting glob expressions into JavaScript `RegExp` objects. ## Usage ```javascript var minimatch = require("minimatch") minimatch("bar.foo", "*.foo") // true! minimatch("bar.foo", "*.bar") // false! minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy! ``` ## Features Supports these glob features: * Brace Expansion * Extended glob matching * "Globstar" `**` matching See: * `man sh` * `man bash` * `man 3 fnmatch` * `man 5 gitignore` ## Minimatch Class Create a minimatch object by instantiating the `minimatch.Minimatch` class. ```javascript var Minimatch = require("minimatch").Minimatch var mm = new Minimatch(pattern, options) ``` ### Properties * `pattern` The original pattern the minimatch object represents. * `options` The options supplied to the constructor. * `set` A 2-dimensional array of regexp or string expressions. Each row in the array corresponds to a brace-expanded pattern. Each item in the row corresponds to a single path-part. For example, the pattern `{a,b/c}/d` would expand to a set of patterns like: [ [ a, d ] , [ b, c, d ] ] If a portion of the pattern doesn't have any "magic" in it (that is, it's something like `"foo"` rather than `fo*o?`), then it will be left as a string rather than converted to a regular expression. * `regexp` Created by the `makeRe` method. A single regular expression expressing the entire pattern. This is useful in cases where you wish to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled. * `negate` True if the pattern is negated. * `comment` True if the pattern is a comment. * `empty` True if the pattern is `""`. ### Methods * `makeRe` Generate the `regexp` member if necessary, and return it. Will return `false` if the pattern is invalid. * `match(fname)` Return true if the filename matches the pattern, or false otherwise. * `matchOne(fileArray, patternArray, partial)` Take a `/`-split filename, and match it against a single row in the `regExpSet`. This method is mainly for internal use, but is exposed so that it can be used by a glob-walker that needs to avoid excessive filesystem calls. All other methods are internal, and will be called as necessary. ### minimatch(path, pattern, options) Main export. Tests a path against the pattern using the options. ```javascript var isJS = minimatch(file, "*.js", { matchBase: true }) ``` ### minimatch.filter(pattern, options) Returns a function that tests its supplied argument, suitable for use with `Array.filter`. Example: ```javascript var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true})) ``` ### minimatch.match(list, pattern, options) Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself. ```javascript var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})) ``` ### minimatch.makeRe(pattern, options) Make a regular expression object from the pattern. ## Options All options are `false` by default. ### debug Dump a ton of stuff to stderr. ### nobrace Do not expand `{a,b}` and `{1..3}` brace sets. 
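For example, a quick sketch (not part of the original README) of what disabling brace expansion changes; the file names are invented for illustration:

```javascript
var minimatch = require("minimatch")

// By default the braces expand into alternatives, so "bar.js" matches.
minimatch("bar.js", "{bar,baz}.js")                     // true

// With nobrace, the pattern is taken literally, so only a file actually
// named "{bar,baz}.js" would match.
minimatch("bar.js", "{bar,baz}.js", { nobrace: true })  // false
```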
### noglobstar Disable `**` matching against multiple folder names. ### dot Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot. Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot` is set. ### noext Disable "extglob" style patterns like `+(a|b)`. ### nocase Perform a case-insensitive match. ### nonull When a match is not found by `minimatch.match`, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches. ### matchBase If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. ### nocomment Suppress the behavior of treating `#` at the start of a pattern as a comment. ### nonegate Suppress the behavior of treating a leading `!` character as negation. ### flipNegate Returns from negate expressions the same as if they were not negated. (Ie, true on a hit, false on a miss.) ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional. If the pattern starts with a `!` character, then it is negated. Set the `nonegate` flag to suppress this behavior, and treat leading `!` characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like `!(a|B)`. Multiple `!` characters at the start of a pattern will negate the pattern multiple times. If a pattern starts with `#`, then it is treated as a comment, and will not match anything. Use `\#` to match a literal `#` at the start of a line, or set the `nocomment` flag to suppress this behavior. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.1, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. If an escaped pattern has no matches, and the `nonull` flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, `minimatch.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. # once Only call a function once. ## usage ```javascript var once = require('once') function load (file, cb) { cb = once(cb) loader.load('file') loader.once('load', cb) loader.once('error', cb) } ``` Or add to the Function.prototype in a responsible way: ```javascript // only has to be done once require('once').proto() function load (file, cb) { cb = cb.once() loader.load('file') loader.once('load', cb) loader.once('error', cb) } ``` Ironically, the prototype feature makes this module twice as complicated as necessary. To check whether you function has been called, use `fn.called`. 
Once the function is called for the first time the return value of the original function is saved in `fn.value` and subsequent calls will continue to return this value.

```javascript
var once = require('once')

function load (cb) {
  cb = once(cb)
  var stream = createStream()
  stream.once('data', cb)
  stream.once('end', function () {
    if (!cb.called) cb(new Error('not found'))
  })
}
```

## `once.strict(func)`

Throw an error if the function is called twice.

Some functions are expected to be called only once. Using `once` for them would potentially hide logical errors.

In the example below, the `greet` function has to call the callback only once:

```javascript
function greet (name, cb) {
  // return is missing from the if statement
  // when no name is passed, the callback is called twice
  if (!name) cb('Hello anonymous')
  cb('Hello ' + name)
}

function log (msg) {
  console.log(msg)
}

// this will print 'Hello anonymous' but the logical error will be missed
greet(null, once(log))

// once.strict will print 'Hello anonymous' and throw an error when the callback is called the second time
greet(null, once.strict(log))
```

# fs-minipass

Filesystem streams based on [minipass](http://npm.im/minipass).

4 classes are exported:

- ReadStream
- ReadStreamSync
- WriteStream
- WriteStreamSync

When using `ReadStreamSync`, all of the data is made available immediately upon consuming the stream. Nothing is buffered in memory when the stream is constructed. If the stream is piped to a writer, then it will synchronously `read()` and emit data into the writer as fast as the writer can consume it. (That is, it will respect backpressure.) If you call `stream.read()` then it will read the entire file and return the contents.

When using `WriteStreamSync`, every write is flushed to the file synchronously. If your writes all come in a single tick, then it'll write it all out in a single tick. It's as synchronous as you are.

The async versions work much like their node builtin counterparts, with the exception of introducing significantly less Stream machinery overhead.

## USAGE

It's just streams, you pipe them or read() them or write() to them.

```js
const fsm = require('fs-minipass')
const readStream = new fsm.ReadStream('file.txt')
const writeStream = new fsm.WriteStream('output.txt')
writeStream.write('some file header or whatever\n')
readStream.pipe(writeStream)
```

## ReadStream(path, options)

Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option.

Options:

- `fd` Pass in a numeric file descriptor, if the file is already open.
- `readSize` The size of reads to do, defaults to 16MB
- `size` The size of the file, if known. Prevents zero-byte read() call at the end.
- `autoClose` Set to `false` to prevent the file descriptor from being closed when the file is done being read.

## WriteStream(path, options)

Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option.

Options:

- `fd` Pass in a numeric file descriptor, if the file is already open.
- `mode` The mode to create the file with. Defaults to `0o666`.
- `start` The position in the file to start writing. If not specified, then the file will start writing at position zero, and be truncated by default.
- `autoClose` Set to `false` to prevent the file descriptor from being closed when the stream is ended.
- `flags` Flags to use when opening the file. Irrelevant if `fd` is passed in, since file won't be opened in that case. Defaults to `'a'` if a `start` position is specified, or `'w'` otherwise.
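As a rough sketch of how the options above can be combined (the file names are invented for illustration):

```js
const fsm = require('fs-minipass')

// Read a hypothetical input file in 1 MB chunks instead of the 16 MB default,
// and write a copy of it, creating the destination with an explicit mode.
const readStream = new fsm.ReadStream('input.dat', { readSize: 1024 * 1024 })
const writeStream = new fsm.WriteStream('copy.dat', { mode: 0o644 })

readStream.pipe(writeStream)
```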
# AssemblyScript Rtrace A tiny utility to sanitize the AssemblyScript runtime. Records allocations and frees performed by the runtime and emits an error if something is off. Also checks for leaks. Instructions ------------ Compile your module that uses the full or half runtime with `-use ASC_RTRACE=1 --explicitStart` and include an instance of this module as the import named `rtrace`. ```js const rtrace = new Rtrace({ onerror(err, info) { // handle error }, oninfo(msg) { // print message, optional }, getMemory() { // obtain the module's memory, // e.g. with --explicitStart: return instance.exports.memory; } }); const { module, instance } = await WebAssembly.instantiate(..., rtrace.install({ ...imports... }) ); instance.exports._start(); ... if (rtrace.active) { let leakCount = rtr.check(); if (leakCount) { // handle error } } ``` Note that references in globals which are not cleared before collection is performed appear as leaks, including their inner members. A TypedArray would leak itself and its backing ArrayBuffer in this case for example. This is perfectly normal and clearing all globals avoids this. # Punycode.js [![Build status](https://travis-ci.org/bestiejs/punycode.js.svg?branch=master)](https://travis-ci.org/bestiejs/punycode.js) [![Code coverage status](http://img.shields.io/codecov/c/github/bestiejs/punycode.js.svg)](https://codecov.io/gh/bestiejs/punycode.js) [![Dependency status](https://gemnasium.com/bestiejs/punycode.js.svg)](https://gemnasium.com/bestiejs/punycode.js) Punycode.js is a robust Punycode converter that fully complies to [RFC 3492](https://tools.ietf.org/html/rfc3492) and [RFC 5891](https://tools.ietf.org/html/rfc5891). This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm: * [The C example code from RFC 3492](https://tools.ietf.org/html/rfc3492#appendix-C) * [`punycode.c` by _Markus W. Scherer_ (IBM)](http://opensource.apple.com/source/ICU/ICU-400.42/icuSources/common/punycode.c) * [`punycode.c` by _Ben Noordhuis_](https://github.com/bnoordhuis/punycode/blob/master/punycode.c) * [JavaScript implementation by _some_](http://stackoverflow.com/questions/183485/can-anyone-recommend-a-good-free-javascript-for-punycode-to-unicode-conversion/301287#301287) * [`punycode.js` by _Ben Noordhuis_](https://github.com/joyent/node/blob/426298c8c1c0d5b5224ac3658c41e7c2a3fe9377/lib/punycode.js) (note: [not fully compliant](https://github.com/joyent/node/issues/2072)) This project was [bundled](https://github.com/joyent/node/blob/master/lib/punycode.js) with Node.js from [v0.6.2+](https://github.com/joyent/node/compare/975f1930b1...61e796decc) until [v7](https://github.com/nodejs/node/pull/7941) (soft-deprecated). The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see [v1.4.1](https://github.com/bestiejs/punycode.js/releases/tag/v1.4.1). ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install punycode --save ``` In [Node.js](https://nodejs.org/): ```js const punycode = require('punycode'); ``` ## API ### `punycode.decode(string)` Converts a Punycode string of ASCII symbols to a string of Unicode symbols. 
```js // decode domain name parts punycode.decode('maana-pta'); // 'mañana' punycode.decode('--dqo34k'); // '☃-⌘' ``` ### `punycode.encode(string)` Converts a string of Unicode symbols to a Punycode string of ASCII symbols. ```js // encode domain name parts punycode.encode('mañana'); // 'maana-pta' punycode.encode('☃-⌘'); // '--dqo34k' ``` ### `punycode.toUnicode(input)` Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode. ```js // decode domain names punycode.toUnicode('xn--maana-pta.com'); // → 'mañana.com' punycode.toUnicode('xn----dqo34k.com'); // → '☃-⌘.com' // decode email addresses punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'); // → 'джумла@джpумлатест.bрфa' ``` ### `punycode.toASCII(input)` Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII. ```js // encode domain names punycode.toASCII('mañana.com'); // → 'xn--maana-pta.com' punycode.toASCII('☃-⌘.com'); // → 'xn----dqo34k.com' // encode email addresses punycode.toASCII('джумла@джpумлатест.bрфa'); // → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq' ``` ### `punycode.ucs2` #### `punycode.ucs2.decode(string)` Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16. ```js punycode.ucs2.decode('abc'); // → [0x61, 0x62, 0x63] // surrogate pair for U+1D306 TETRAGRAM FOR CENTRE: punycode.ucs2.decode('\uD834\uDF06'); // → [0x1D306] ``` #### `punycode.ucs2.encode(codePoints)` Creates a string based on an array of numeric code point values. ```js punycode.ucs2.encode([0x61, 0x62, 0x63]); // → 'abc' punycode.ucs2.encode([0x1D306]); // → '\uD834\uDF06' ``` ### `punycode.version` A string representing the current Punycode.js version number. ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License Punycode.js is available under the [MIT](https://mths.be/mit) license. # wrappy Callback wrapping utility ## USAGE ```javascript var wrappy = require("wrappy") // var wrapper = wrappy(wrapperFunction) // make sure a cb is called only once // See also: http://npm.im/once for this specific use case var once = wrappy(function (cb) { var called = false return function () { if (called) return called = true return cb.apply(this, arguments) } }) function printBoo () { console.log('boo') } // has some rando property printBoo.iAmBooPrinter = true var onlyPrintOnce = once(printBoo) onlyPrintOnce() // prints 'boo' onlyPrintOnce() // does nothing // random property is retained! 
assert.equal(onlyPrintOnce.iAmBooPrinter, true) ``` # yargs-parser ![ci](https://github.com/yargs/yargs-parser/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/yargs-parser) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. <img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js const argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```console $ node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js const argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```console { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js const parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## Deno Example As of `v19` `yargs-parser` supports [Deno](https://github.com/denoland/deno): ```typescript import parser from "https://deno.land/x/yargs_parser/deno.ts"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` ## ESM Example As of `v19` `yargs-parser` supports ESM (_both in Node.js and in the browser_): **Node.js:** ```js import parser from 'yargs-parser' const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` **Browsers:** ```html <!doctype html> <body> <script type="module"> import parser from "https://unpkg.com/[email protected]/browser.js"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) </script> </body> ``` ## API ### parser(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. 
* `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```console $ node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```console $ node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```console $ node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```console $ node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```console $ node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```console $ node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```console $ node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```console $ node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### parse positional numbers * default: `true` * key: `parse-positional-numbers` Should positional keys that look like numbers be treated as such. 
```console $ node example.js 99.3 { _: [99.3] } ``` _if disabled:_ ```console $ node example.js 99.3 { _: ['99.3'] } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```console $ node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```console $ node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. ### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```console $ node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```console $ node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```console $ node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```console $ node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```console $ node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```console $ node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```console $ node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```console $ node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```console $ node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```console $ node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. 
_If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. _If disabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC # lodash.sortby v4.7.0 The [lodash](https://lodash.com/) method `_.sortBy` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.sortby ``` In Node.js: ```js var sortBy = require('lodash.sortby'); ``` See the [documentation](https://lodash.com/docs#sortBy) or [package source](https://github.com/lodash/lodash/blob/4.7.0-npm-packages/lodash.sortby) for more details. bs58 ==== [![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58) JavaScript component to compute base 58 encoding. This encoding is typically used for crypto currencies such as Bitcoin. **Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library. Install ------- npm i --save bs58 API --- ### encode(input) `input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`. **example**: ```js const bs58 = require('bs58') const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex') const address = bs58.encode(bytes) console.log(address) // => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS ``` ### decode(input) `input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html). **example**: ```js const bs58 = require('bs58') const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS' const bytes = bs58.decode(address) console.log(out.toString('hex')) // => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187 ``` Hack / Test ----------- Uses JavaScript standard style. 
Read more: [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard)

Credits
-------

- [Mike Hearn](https://github.com/mikehearn) for original Java implementation
- [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript
- [Stephan Pair](https://github.com/gasteve) for buffer improvements
- [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib
- [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency

License
-------

MIT

<p align="center">
  <img width="250" src="https://raw.githubusercontent.com/yargs/yargs/master/yargs-logo.png">
</p>
<h1 align="center"> Yargs </h1>
<p align="center">
  <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b>
</p>

<br>

![ci](https://github.com/yargs/yargs/workflows/ci/badge.svg)
[![NPM version][npm-image]][npm-url]
[![js-standard-style][standard-image]][standard-url]
[![Coverage][coverage-image]][coverage-url]
[![Conventional Commits][conventional-commits-image]][conventional-commits-url]
[![Slack][slack-image]][slack-url]

## Description

Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface.

It gives you:

* commands and (grouped) options (`my-program.js serve --port=5000`).
* a dynamically generated help menu based on your arguments:

```
mocha [spec..]

Run tests with Mocha

Commands
  mocha inspect [spec..]  Run tests with Mocha                         [default]
  mocha init <path>       create a client-side Mocha setup at <path>

Rules & Behavior
  --allow-uncaught           Allow uncaught errors to propagate        [boolean]
  --async-only, -A           Require all tests to use a callback (async) or
                             return a Promise                          [boolean]
```

* bash-completion shortcuts for commands and options.
* and [tons more](/docs/api.md).

## Installation

Stable version:

```bash
npm i yargs
```

Bleeding edge version with the most recent features:

```bash
npm i yargs@next
```

## Usage

### Simple Example

```javascript
#!/usr/bin/env node
const yargs = require('yargs/yargs')
const { hideBin } = require('yargs/helpers')
const argv = yargs(hideBin(process.argv)).argv

if (argv.ships > 3 && argv.distance < 53.5) {
  console.log('Plunder more riffiwobbles!')
} else {
  console.log('Retreat from the xupptumblers!')
}
```

```bash
$ ./plunder.js --ships=4 --distance=22
Plunder more riffiwobbles!

$ ./plunder.js --ships 12 --distance 98.7
Retreat from the xupptumblers!
``` ### Complex Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') yargs(hideBin(process.argv)) .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## Supported Platforms ### TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ### Deno As of `v16`, `yargs` supports [Deno](https://github.com/denoland/deno): ```typescript import yargs from 'https://deno.land/x/yargs/deno.ts' import { Arguments } from 'https://deno.land/x/yargs/deno-types.ts' yargs(Deno.args) .command('download <files...>', 'download a list of files', (yargs: any) => { return yargs.positional('files', { describe: 'a list of files to do something with' }) }, (argv: Arguments) => { console.info(argv) }) .strictCommands() .demandCommand(1) .argv ``` ### ESM As of `v16`,`yargs` supports ESM imports: ```js import yargs from 'yargs' import { hideBin } from 'yargs/helpers' yargs(hideBin(process.argv)) .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => { console.info(argv) }) .demandCommand(1) .argv ``` ### Usage in Browser See examples of using yargs in the browser in [docs](/docs/browser.md). ## Community Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). ## Documentation ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Bundling yargs](/docs/bundling.md) * [Contributing](/contributing.md) ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). 
[npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # assemblyscript-json ![npm version](https://img.shields.io/npm/v/assemblyscript-json) ![npm downloads per month](https://img.shields.io/npm/dm/assemblyscript-json) JSON encoder / decoder for AssemblyScript. Special thanks to https://github.com/MaxGraey/bignum.wasm for basic unit testing infra for AssemblyScript. ## Installation `assemblyscript-json` is available as a [npm package](https://www.npmjs.com/package/assemblyscript-json). You can install `assemblyscript-json` in your AssemblyScript project by running: `npm install --save assemblyscript-json` ## Usage ### Parsing JSON ```typescript import { JSON } from "assemblyscript-json"; // Parse an object using the JSON object let jsonObj: JSON.Obj = <JSON.Obj>(JSON.parse('{"hello": "world", "value": 24}')); // We can then use the .getX functions to read from the object if you know it's type // This will return the appropriate JSON.X value if the key exists, or null if the key does not exist let worldOrNull: JSON.Str | null = jsonObj.getString("hello"); // This will return a JSON.Str or null if (worldOrNull != null) { // use .valueOf() to turn the high level JSON.Str type into a string let world: string = worldOrNull.valueOf(); } let numOrNull: JSON.Num | null = jsonObj.getNum("value"); if (numOrNull != null) { // use .valueOf() to turn the high level JSON.Num type into a f64 let value: f64 = numOrNull.valueOf(); } // If you don't know the value type, get the parent JSON.Value let valueOrNull: JSON.Value | null = jsonObj.getValue("hello"); if (valueOrNull != null) { let value: JSON.Value = changetype<JSON.Value>(valueOrNull); // Next we could figure out what type we are if(value.isString) { // value.isString would be true, so we can cast to a string let stringValue: string = changetype<JSON.Str>(value).toString(); // Do something with string value } } ``` ### Encoding JSON ```typescript import { JSONEncoder } from "assemblyscript-json"; // Create encoder let encoder = new JSONEncoder(); // Construct necessary object encoder.pushObject("obj"); encoder.setInteger("int", 10); encoder.setString("str", ""); encoder.popObject(); // Get serialized data let json: Uint8Array = encoder.serialize(); // Or get serialized data as string let jsonString: string = encoder.toString(); assert(jsonString, '"obj": {"int": 10, "str": ""}'); // True! ``` ### Custom JSON Deserializers ```typescript import { JSONDecoder, JSONHandler } from "assemblyscript-json"; // Events need to be received by custom object extending JSONHandler. // NOTE: All methods are optional to implement. 
class MyJSONEventsHandler extends JSONHandler { setString(name: string, value: string): void { // Handle field } setBoolean(name: string, value: bool): void { // Handle field } setNull(name: string): void { // Handle field } setInteger(name: string, value: i64): void { // Handle field } setFloat(name: string, value: f64): void { // Handle field } pushArray(name: string): bool { // Handle array start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popArray(): void { // Handle array end } pushObject(name: string): bool { // Handle object start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popObject(): void { // Handle object end } } // Create decoder let decoder = new JSONDecoder<MyJSONEventsHandler>(new MyJSONEventsHandler()); // Create a byte buffer of our JSON. NOTE: Deserializers work on UTF8 string buffers. let jsonString = '{"hello": "world"}'; let jsonBuffer = Uint8Array.wrap(String.UTF8.encode(jsonString)); // Parse JSON decoder.deserialize(jsonBuffer); // This will send events to MyJSONEventsHandler ``` Feel free to look through the [tests](https://github.com/nearprotocol/assemblyscript-json/tree/master/assembly/__tests__) for more usage examples. ## Reference Documentation Reference API Documentation can be found in the [docs directory](./docs). ## License [MIT](./LICENSE) ## Follow Redirects Drop-in replacement for Nodes `http` and `https` that automatically follows redirects. [![npm version](https://img.shields.io/npm/v/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) [![Build Status](https://travis-ci.org/follow-redirects/follow-redirects.svg?branch=master)](https://travis-ci.org/follow-redirects/follow-redirects) [![Coverage Status](https://coveralls.io/repos/follow-redirects/follow-redirects/badge.svg?branch=master)](https://coveralls.io/r/follow-redirects/follow-redirects?branch=master) [![Dependency Status](https://david-dm.org/follow-redirects/follow-redirects.svg)](https://david-dm.org/follow-redirects/follow-redirects) [![npm downloads](https://img.shields.io/npm/dm/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) `follow-redirects` provides [request](https://nodejs.org/api/http.html#http_http_request_options_callback) and [get](https://nodejs.org/api/http.html#http_http_get_options_callback) methods that behave identically to those found on the native [http](https://nodejs.org/api/http.html#http_http_request_options_callback) and [https](https://nodejs.org/api/https.html#https_https_request_options_callback) modules, with the exception that they will seamlessly follow redirects. ```javascript var http = require('follow-redirects').http; var https = require('follow-redirects').https; http.get('http://bit.ly/900913', function (response) { response.on('data', function (chunk) { console.log(chunk); }); }).on('error', function (err) { console.error(err); }); ``` You can inspect the final redirected URL through the `responseUrl` property on the `response`. If no redirection happened, `responseUrl` is the original request URL. 
```javascript
https.request({
  host: 'bitly.com',
  path: '/UHfDGO',
}, function (response) {
  console.log(response.responseUrl);
  // 'http://duckduckgo.com/robots.txt'
});
```

## Options

### Global options

Global options are set directly on the `follow-redirects` module:

```javascript
var followRedirects = require('follow-redirects');
followRedirects.maxRedirects = 10;
followRedirects.maxBodyLength = 20 * 1024 * 1024; // 20 MB
```

The following global options are supported:

- `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted.
- `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted.

### Per-request options

Per-request options are set by passing an `options` object:

```javascript
var url = require('url');
var followRedirects = require('follow-redirects');

var options = url.parse('http://bit.ly/900913');
options.maxRedirects = 10;
followRedirects.http.request(options);
```

In addition to the [standard HTTP](https://nodejs.org/api/http.html#http_http_request_options_callback) and [HTTPS options](https://nodejs.org/api/https.html#https_https_request_options_callback), the following per-request options are supported:

- `followRedirects` (default: `true`) – whether redirects should be followed.
- `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted.
- `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted.
- `agents` (default: `undefined`) – sets the `agent` option per protocol, since HTTP and HTTPS use different agents. Example value: `{ http: new http.Agent(), https: new https.Agent() }`
- `trackRedirects` (default: `false`) – whether to store the redirected response details into the `redirects` array on the response object.

### Advanced usage

By default, `follow-redirects` will use the Node.js default implementations of [`http`](https://nodejs.org/api/http.html) and [`https`](https://nodejs.org/api/https.html). To enable features such as caching and/or intermediate request tracking, you might instead want to wrap `follow-redirects` around custom protocol implementations:

```javascript
var followRedirects = require('follow-redirects').wrap({
  http: require('your-custom-http'),
  https: require('your-custom-https'),
});
```

Such custom protocols only need an implementation of the `request` method.

## Browserify Usage

Due to the way `XMLHttpRequest` works, the `browserify` versions of `http` and `https` already follow redirects. If you are *only* targeting the browser, then this library has little value for you. If you want to write cross platform code for node and the browser, `follow-redirects` provides a great solution for making the native node modules behave the same as they do in browserified builds in the browser. To avoid bundling unnecessary code you should tell browserify to swap out `follow-redirects` with the standard modules when bundling. To make this easier, you need to change how you require the modules:

```javascript
var http = require('follow-redirects/http');
var https = require('follow-redirects/https');
```

You can then replace `follow-redirects` in your browserify configuration like so:

```javascript
"browser": {
  "follow-redirects/http" : "http",
  "follow-redirects/https" : "https"
}
```

The `browserify-http` module has not kept pace with node development, and no longer behaves identically to the native module when running in the browser.
If you are experiencing problems, you may want to check out [browserify-http-2](https://www.npmjs.com/package/http-browserify-2). It is more actively maintained and attempts to address a few of the shortcomings of `browserify-http`. In that case, your browserify config should look something like this:

```javascript
"browser": {
  "follow-redirects/http" : "browserify-http-2/http",
  "follow-redirects/https" : "browserify-http-2/https"
}
```

## Contributing

Pull Requests are always welcome. Please [file an issue](https://github.com/follow-redirects/follow-redirects/issues) detailing your proposal before you invest your valuable time. Additional features and bug fixes should be accompanied by tests. You can run the test suite locally with a simple `npm test` command.

## Debug Logging

`follow-redirects` uses the excellent [debug](https://www.npmjs.com/package/debug) for logging. To turn on logging, set the environment variable `DEBUG=follow-redirects` for debug output from just this module. When running the test suite, it is sometimes advantageous to set `DEBUG=*` to see output from the express server as well.

## Authors

- Olivier Lalonde ([email protected])
- James Talmage ([email protected])
- [Ruben Verborgh](https://ruben.verborgh.org/)

## License

[MIT License](https://github.com/follow-redirects/follow-redirects/blob/master/LICENSE)

Standard library
================

Standard library components for use with `tsc` (portable) and `asc` (assembly).

Base configurations (.json) and definition files (.d.ts) are relevant to `tsc` only and not used by `asc`.

# axios // core

The modules found in `core/` should be modules that are specific to the domain logic of axios. These modules would most likely not make sense to be consumed outside of the axios module, as their logic is too specific. Some examples of core modules are:

- Dispatching requests
- Managing interceptors
- Handling config

# tr46.js

> An implementation of the [Unicode TR46 specification](http://unicode.org/reports/tr46/).

## Installation

[Node.js](http://nodejs.org) `>= 6` is required. To install, type this at the command line:

```shell
npm install tr46
```

## API

### `toASCII(domainName[, options])`

Converts a string of Unicode symbols to a case-folded Punycode string of ASCII symbols.

Available options:

* [`checkBidi`](#checkBidi)
* [`checkHyphens`](#checkHyphens)
* [`checkJoiners`](#checkJoiners)
* [`processingOption`](#processingOption)
* [`useSTD3ASCIIRules`](#useSTD3ASCIIRules)
* [`verifyDNSLength`](#verifyDNSLength)

### `toUnicode(domainName[, options])`

Converts a case-folded Punycode string of ASCII symbols to a string of Unicode symbols.

Available options:

* [`checkBidi`](#checkBidi)
* [`checkHyphens`](#checkHyphens)
* [`checkJoiners`](#checkJoiners)
* [`useSTD3ASCIIRules`](#useSTD3ASCIIRules)

## Options

### `checkBidi`

Type: `Boolean`
Default value: `false`
When set to `true`, any bi-directional text within the input will be checked for validation.

### `checkHyphens`

Type: `Boolean`
Default value: `false`
When set to `true`, the positions of any hyphen characters within the input will be checked for validation.

### `checkJoiners`

Type: `Boolean`
Default value: `false`
When set to `true`, any word joiner characters within the input will be checked for validation.

### `processingOption`

Type: `String`
Default value: `"nontransitional"`
When set to `"transitional"`, symbols within the input will be validated according to the older IDNA2003 protocol. When set to `"nontransitional"`, the current IDNA2008 protocol will be used.
### `useSTD3ASCIIRules` Type: `Boolean` Default value: `false` When set to `true`, input will be validated according to [STD3 Rules](http://unicode.org/reports/tr46/#STD3_Rules). ### `verifyDNSLength` Type: `Boolean` Default value: `false` When set to `true`, the length of each DNS label within the input will be checked for validation. <p align="center"> <img width="250" src="https://raw.githubusercontent.com/yargs/yargs/master/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> ![ci](https://github.com/yargs/yargs/workflows/ci/badge.svg) [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments: ``` mocha [spec..] Run tests with Mocha Commands mocha inspect [spec..] Run tests with Mocha [default] mocha init <path> create a client-side Mocha setup at <path> Rules & Behavior --allow-uncaught Allow uncaught errors to propagate [boolean] --async-only, -A Require all tests to use a callback (async) or return a Promise [boolean] ``` * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage ### Simple Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') const argv = yargs(hideBin(process.argv)).argv if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') yargs(hideBin(process.argv)) .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## Supported Platforms ### TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). 
### Deno As of `v16`, `yargs` supports [Deno](https://github.com/denoland/deno): ```typescript import yargs from 'https://deno.land/x/yargs/deno.ts' import { Arguments } from 'https://deno.land/x/yargs/deno-types.ts' yargs(Deno.args) .command('download <files...>', 'download a list of files', (yargs: any) => { return yargs.positional('files', { describe: 'a list of files to do something with' }) }, (argv: Arguments) => { console.info(argv) }) .strictCommands() .demandCommand(1) .argv ``` ### ESM As of `v16`,`yargs` supports ESM imports: ```js import yargs from 'yargs' import { hideBin } from 'yargs/helpers' yargs(hideBin(process.argv)) .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => { console.info(argv) }) .demandCommand(1) .argv ``` ### Usage in Browser See examples of using yargs in the browser in [docs](/docs/browser.md). ## Community Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). ## Documentation ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Bundling yargs](/docs/bundling.md) * [Contributing](/contributing.md) ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # minipass A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough) [It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers. Supports pipe()ing (including multi-pipe() and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea. There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory. There is also no `unpipe()` method. 
Once you start piping, there is no stopping it!

If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called. `objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set.

This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. Once you're done transforming the data however you want, call `super.write()` with the transform output.

For some examples of streams that extend Minipass in various ways, check out:

- [minizlib](http://npm.im/minizlib)
- [fs-minipass](http://npm.im/fs-minipass)
- [tar](http://npm.im/tar)
- [minipass-collect](http://npm.im/minipass-collect)
- [minipass-flush](http://npm.im/minipass-flush)
- [minipass-pipeline](http://npm.im/minipass-pipeline)
- [tap](http://npm.im/tap)
- [tap-parser](http://npm.im/tap)
- [treport](http://npm.im/tap)
- [minipass-fetch](http://npm.im/minipass-fetch)
- [pacote](http://npm.im/pacote)
- [make-fetch-happen](http://npm.im/make-fetch-happen)
- [cacache](http://npm.im/cacache)
- [ssri](http://npm.im/ssri)
- [npm-registry-fetch](http://npm.im/npm-registry-fetch)
- [minipass-json-stream](http://npm.im/minipass-json-stream)
- [minipass-sized](http://npm.im/minipass-sized)

## Differences from Node.js Streams

There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams. Please read these caveats if you are familiar with node-core streams and intend to use Minipass streams in your programs.

### Timing

Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer.

Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them.

This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately.

However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time.

This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms.

### No High/Low Water Marks

Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value.

Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns).

If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it.
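To see that behavior in practice, here is a small sketch (not from the minipass README; it assumes the `minipass` package is installed) showing how `write()`'s return value simply reflects whether anyone is consuming the stream yet:

```js
const Minipass = require('minipass')

const mp = new Minipass({ encoding: 'utf8' })

// No consumer attached yet, so the chunk has nowhere to go:
// it is buffered, and write() reports backpressure by returning false.
console.log(mp.write('first chunk'))  // false

// Attaching a 'data' listener puts the stream into flowing mode and
// immediately drains the buffered chunk.
mp.on('data', chunk => console.log('got:', chunk))

// Now data goes straight through to the listener, so write() returns true.
console.log(mp.write('second chunk')) // true
```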
### Hazards of Buffering (or: Why Minipass Is So Fast) Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller. This minimizes buffering. Consider this case: ```js const {PassThrough} = require('stream') const p1 = new PassThrough({ highWaterMark: 1024 }) const p2 = new PassThrough({ highWaterMark: 1024 }) const p3 = new PassThrough({ highWaterMark: 1024 }) const p4 = new PassThrough({ highWaterMark: 1024 }) p1.pipe(p2).pipe(p3).pipe(p4) p4.on('data', () => console.log('made it through')) // this returns false and buffers, then writes to p2 on next tick (1) // p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2) // p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3) // p4 returns false and buffers, pausing p3, then emits 'data' and 'drain' // on next tick (4) // p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and // 'drain' on next tick (5) // p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6) // p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next // tick (7) p1.write(Buffer.alloc(2048)) // returns false ``` Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through! Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket. Consider the Minipass case: ```js const m1 = new Minipass() const m2 = new Minipass() const m3 = new Minipass() const m4 = new Minipass() m1.pipe(m2).pipe(m3).pipe(m4) m4.on('data', () => console.log('made it through')) // m1 is flowing, so it writes the data to m2 immediately // m2 is flowing, so it writes the data to m3 immediately // m3 is flowing, so it writes the data to m4 immediately // m4 is flowing, so it fires the 'data' event immediately, returns true // m4's write returned true, so m3 is still flowing, returns true // m3's write returned true, so m2 is still flowing, returns true // m2's write returned true, so m1 is still flowing, returns true // No event deferrals or buffering along the way! m1.write(Buffer.alloc(2048)) // returns true ``` It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing. As always, the faster implementation is the one that does less stuff and waits less time to do it. ### Immediately emit `end` for empty streams (when not paused) If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately. If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event. 
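As a small illustration of that advice, here is a minimal sketch (not taken from the minipass docs) of pausing a stream on creation so that an immediate `end()` cannot fire the `end` event before your handler is attached:

```js
const Minipass = require('minipass')

const stream = new Minipass()
stream.pause() // guard: 'end' will not be emitted while paused

// The producer finishes right away, without writing anything...
stream.end()

// ...but because we paused first, 'end' has not fired yet,
// so there is still time to attach the handler.
stream.on('end', () => console.log('now it is safe to clean up'))

stream.resume() // 'end' is emitted here
```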
### Emit `end` When Asked One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted. Ie, if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. (You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.) To prevent calling handlers multiple times who would not expect multiple ends to occur, all listeners are removed from the `'end'` event whenever it is emitted. ### Impact of "immediate flow" on Tee-streams A "tee stream" is a stream piping to multiple destinations: ```js const tee = new Minipass() t.pipe(dest1) t.pipe(dest2) t.write('foo') // goes to both destinations ``` Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer. ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone src.pipe(dest2) // gets nothing! ``` The solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead. ```js // Safe example: tee to both places const src = new Minipass() src.write('foo') const tee = new Minipass() tee.pipe(dest1) tee.pipe(dest2) src.pipe(tee) // tee gets 'foo', pipes to both locations ``` The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second: ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.on('data', handler1) // receives 'foo' right away src.on('data', handler2) // nothing to see here! ``` Using a dedicated tee-stream can be used in this case as well: ```js // Safe example: tee to both data handlers const src = new Minipass() src.write('foo') const tee = new Minipass() tee.on('data', handler1) tee.on('data', handler2) src.pipe(tee) ``` ## USAGE It's a stream! Use it like a stream and it'll most likely do what you want. ```js const Minipass = require('minipass') const mp = new Minipass(options) // optional: { encoding, objectMode } mp.write('foo') mp.pipe(someOtherStream) mp.end('bar') ``` ### OPTIONS * `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`. * `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value. ### API Implements the user-facing portions of Node.js's `Readable` and `Writable` streams. ### Methods * `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or true if it's still in "flowing" mode. * `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed. * `setEncoding(encoding)` - Set the encoding for data coming of the stream. This can only be done once. * `pause()` - No more data for a while, please. 
This also prevents `end` from being emitted for empty streams until the stream is resumed.

* `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted.
* `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations.
* `on(ev, fn)`, `emit(ev, data)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".)
* `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`.
* `collect()` - Return a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data.
* `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data.
* `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying.
* `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered.

### Properties

* `bufferLength` Read-only. Total number of bytes buffered, or in the case of objectMode, the total number of objects.
* `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.)
* `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted.
* `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediately re-emit it if it has already been emitted.
* `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` is called.
* `readable` Whether the stream is readable. Default `true`.
* `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.)
* `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.)
* `destroyed` A getter that indicates whether the stream was destroyed.
* `paused` True if the stream has been explicitly paused, otherwise false.
* `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`.

### Events

* `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream.
* `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again.
All listeners are removed when `end` is emitted. * `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`. * `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`. * `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise. * `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream. * `readable` Emitted when data is buffered and ready to be read by a consumer. * `resume` Emitted when stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.) ### Static Methods * `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and false otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.) ## EXAMPLES Here are some examples of things you can do with Minipass streams. ### simple "are you done yet" promise ```js mp.promise().then(() => { // stream is finished }, er => { // stream emitted an error }) ``` ### collecting ```js mp.collect().then(all => { // all is an array of all the data emitted // encoding is supported in this case, so // so the result will be a collection of strings if // an encoding is specified, or buffers/objects if not. // // In an async function, you may do // const data = await stream.collect() }) ``` ### collecting into a single blob This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way: ```js mp.concat().then(onebigchunk => { // onebigchunk is a string if the stream // had an encoding set, or a buffer otherwise. }) ``` ### iteration You can iterate over streams synchronously or asynchronously in platforms that support it. Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration. To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option. ```js const mp = new Minipass({ objectMode: true }) mp.write('a') mp.write('b') for (let letter of mp) { console.log(letter) // a, b } mp.write('c') mp.write('d') for (let letter of mp) { console.log(letter) // c, d } mp.write('e') mp.end() for (let letter of mp) { console.log(letter) // e } for (let letter of mp) { console.log(letter) // nothing } ``` Asynchronous iteration will continue until the end event is reached, consuming all of the data. 
```js
const mp = new Minipass({ encoding: 'utf8' })

// some source of some data
let i = 5
const inter = setInterval(() => {
  if (i --> 0)
    mp.write(Buffer.from('foo\n', 'utf8'))
  else {
    mp.end()
    clearInterval(inter)
  }
}, 100)

// consume the data with asynchronous iteration
async function consume () {
  for await (let chunk of mp) {
    console.log(chunk)
  }
  return 'ok'
}

consume().then(res => console.log(res))
// logs `foo\n` 5 times, and then `ok`
```

### subclass that `console.log()`s everything written into it

```js
class Logger extends Minipass {
  write (chunk, encoding, callback) {
    console.log('WRITE', chunk, encoding)
    return super.write(chunk, encoding, callback)
  }
  end (chunk, encoding, callback) {
    console.log('END', chunk, encoding)
    return super.end(chunk, encoding, callback)
  }
}

someSource.pipe(new Logger()).pipe(someDest)
```

### same thing, but using an inline anonymous class

```js
// js classes are fun
someSource
  .pipe(new (class extends Minipass {
    emit (ev, ...data) {
      // let's also log events, because debugging some weird thing
      console.log('EMIT', ev)
      return super.emit(ev, ...data)
    }
    write (chunk, encoding, callback) {
      console.log('WRITE', chunk, encoding)
      return super.write(chunk, encoding, callback)
    }
    end (chunk, encoding, callback) {
      console.log('END', chunk, encoding)
      return super.end(chunk, encoding, callback)
    }
  }))
  .pipe(someDest)
```

### subclass that defers 'end' for some reason

```js
class SlowEnd extends Minipass {
  emit (ev, ...args) {
    if (ev === 'end') {
      console.log('going to end, hold on a sec')
      setTimeout(() => {
        console.log('ok, ready to end now')
        super.emit('end', ...args)
      }, 100)
    } else {
      return super.emit(ev, ...args)
    }
  }
}
```

### transform that creates newline-delimited JSON

```js
class NDJSONEncode extends Minipass {
  write (obj, cb) {
    try {
      // JSON.stringify can throw, emit an error on that
      return super.write(JSON.stringify(obj) + '\n', 'utf8', cb)
    } catch (er) {
      this.emit('error', er)
    }
  }
  end (obj, cb) {
    if (typeof obj === 'function') {
      cb = obj
      obj = undefined
    }
    if (obj !== undefined) {
      this.write(obj)
    }
    return super.end(cb)
  }
}
```

### transform that parses newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor (options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write (chunk, encoding, cb) {
    if (typeof chunk === 'string' &&
        typeof encoding === 'string' &&
        encoding !== 'utf8') {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    // keep any trailing partial line buffered until more data arrives
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      let parsed
      try {
        // JSON.parse can throw; emit an error and skip the bad line
        parsed = JSON.parse(jsonData[i])
      } catch (er) {
        this.emit('error', er)
        continue
      }
      super.write(parsed)
    }
    if (cb) cb()
  }
}
```

![](cow.png)

Moo!
====

Moo is a highly-optimised tokenizer/lexer generator. Use it to tokenize your strings, before parsing 'em with a parser like [nearley](https://github.com/hardmath123/nearley) or whatever else you're into.

* [Fast](#is-it-fast)
* [Convenient](#usage)
* uses [Regular Expressions](#on-regular-expressions)
* tracks [Line Numbers](#line-numbers)
* handles [Keywords](#keywords)
* supports [States](#states)
* custom [Errors](#errors)
* is even [Iterable](#iteration)
* has no dependencies
* 4KB minified + gzipped
* Moo!

Is it fast?
-----------

Yup! Flying-cows-and-singed-steak fast.

Moo is the fastest JS tokenizer around.
It's **~2–10x** faster than most other tokenizers; it's a **couple orders of magnitude** faster than some of the slower ones.

Define your tokens **using regular expressions**. Moo will compile 'em down to a **single RegExp for performance**. It uses the new ES6 **sticky flag** where possible to make things faster; otherwise it falls back to an almost-as-efficient workaround. (For more than you ever wanted to know about this, read [adventures in the land of substrings and RegExps](http://mrale.ph/blog/2016/11/23/making-less-dart-faster.html).)

You _might_ be able to go faster still by writing your lexer by hand rather than using RegExps, but that's icky.

Oh, and it [avoids parsing RegExps by itself](https://hackernoon.com/the-madness-of-parsing-real-world-javascript-regexps-d9ee336df983#.2l8qu3l76). Because that would be horrible.

Usage
-----

First, you need to do the needful: `$ npm install moo`, or whatever will ship this code to your computer. Alternatively, grab the `moo.js` file by itself and slap it into your web page via a `<script>` tag; moo is completely standalone.

Then you can start roasting your very own lexer/tokenizer:

```js
const moo = require('moo')

let lexer = moo.compile({
  WS:      /[ \t]+/,
  comment: /\/\/.*?$/,
  number:  /0|[1-9][0-9]*/,
  string:  /"(?:\\["\\]|[^\n"\\])*"/,
  lparen:  '(',
  rparen:  ')',
  keyword: ['while', 'if', 'else', 'moo', 'cows'],
  NL:      { match: /\n/, lineBreaks: true },
})
```

And now throw some text at it:

```js
lexer.reset('while (10) cows\nmoo')
lexer.next() // -> { type: 'keyword', value: 'while' }
lexer.next() // -> { type: 'WS', value: ' ' }
lexer.next() // -> { type: 'lparen', value: '(' }
lexer.next() // -> { type: 'number', value: '10' }
// ...
```

When you reach the end of Moo's internal buffer, next() will return `undefined`. You can always `reset()` it and feed it more data when that happens.

On Regular Expressions
----------------------

RegExps are nifty for making tokenizers, but they can be a bit of a pain. Here are some things to be aware of:

* You often want to use **non-greedy quantifiers**: e.g. `*?` instead of `*`. Otherwise your tokens will be longer than you expect:

    ```js
    let lexer = moo.compile({
      string: /".*"/,   // greedy quantifier *
      // ...
    })

    lexer.reset('"foo" "bar"')
    lexer.next() // -> { type: 'string', value: 'foo" "bar' }
    ```

    Better:

    ```js
    let lexer = moo.compile({
      string: /".*?"/,   // non-greedy quantifier *?
      // ...
    })

    lexer.reset('"foo" "bar"')
    lexer.next() // -> { type: 'string', value: 'foo' }
    lexer.next() // -> { type: 'space', value: ' ' }
    lexer.next() // -> { type: 'string', value: 'bar' }
    ```

* The **order of your rules** matters. Earlier ones will take precedence.

    ```js
    moo.compile({
        identifier:  /[a-z0-9]+/,
        number:  /[0-9]+/,
    }).reset('42').next() // -> { type: 'identifier', value: '42' }

    moo.compile({
        number:  /[0-9]+/,
        identifier:  /[a-z0-9]+/,
    }).reset('42').next() // -> { type: 'number', value: '42' }
    ```

* Moo uses **multiline RegExps**. This has a few quirks: for example, the **dot `/./` doesn't include newlines**. Use `[^]` instead if you want to match newlines too.

* Since an excluding character range like `/[^ ]/` (which matches anything but a space) _will_ include newlines, you have to be careful not to include them by accident! In particular, the whitespace metacharacter `\s` includes newlines.

Line Numbers
------------

Moo tracks detailed information about the input for you.

It will track line numbers, as long as you **apply the `lineBreaks: true` option to any rules which might contain newlines**.
Moo will try to warn you if you forget to do this. Note that this is `false` by default, for performance reasons: counting the number of lines in a matched token has a small cost. For optimal performance, only match newlines inside a dedicated token: ```js newline: {match: '\n', lineBreaks: true}, ``` ### Token Info ### Token objects (returned from `next()`) have the following attributes: * **`type`**: the name of the group, as passed to compile. * **`text`**: the string that was matched. * **`value`**: the string that was matched, transformed by your `value` function (if any). * **`offset`**: the number of bytes from the start of the buffer where the match starts. * **`lineBreaks`**: the number of line breaks found in the match. (Always zero if this rule has `lineBreaks: false`.) * **`line`**: the line number of the beginning of the match, starting from 1. * **`col`**: the column where the match begins, starting from 1. ### Value vs. Text ### The `value` is the same as the `text`, unless you provide a [value transform](#transform). ```js const moo = require('moo') const lexer = moo.compile({ ws: /[ \t]+/, string: {match: /"(?:\\["\\]|[^\n"\\])*"/, value: s => s.slice(1, -1)}, }) lexer.reset('"test"') lexer.next() /* { value: 'test', text: '"test"', ... } */ ``` ### Reset ### Calling `reset()` on your lexer will empty its internal buffer, and set the line, column, and offset counts back to their initial value. If you don't want this, you can `save()` the state, and later pass it as the second argument to `reset()` to explicitly control the internal state of the lexer. ```js    lexer.reset('some line\n') let info = lexer.save() // -> { line: 10 } lexer.next() // -> { line: 10 } lexer.next() // -> { line: 11 } // ... lexer.reset('a different line\n', info) lexer.next() // -> { line: 10 } ``` Keywords -------- Moo makes it convenient to define literals. ```js moo.compile({ lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], }) ``` It'll automatically compile them into regular expressions, escaping them where necessary. **Keywords** should be written using the `keywords` transform. ```js moo.compile({ IDEN: {match: /[a-zA-Z]+/, type: moo.keywords({ KW: ['while', 'if', 'else', 'moo', 'cows'], })}, SPACE: {match: /\s+/, lineBreaks: true}, }) ``` ### Why? ### You need to do this to ensure the **longest match** principle applies, even in edge cases. Imagine trying to parse the input `className` with the following rules: ```js keyword: ['class'], identifier: /[a-zA-Z]+/, ``` You'll get _two_ tokens — `['class', 'Name']` -- which is _not_ what you want! If you swap the order of the rules, you'll fix this example; but now you'll lex `class` wrong (as an `identifier`). The keywords helper checks matches against the list of keywords; if any of them match, it uses the type `'keyword'` instead of `'identifier'` (for this example). ### Keyword Types ### Keywords can also have **individual types**. ```js let lexer = moo.compile({ name: {match: /[a-zA-Z]+/, type: moo.keywords({ 'kw-class': 'class', 'kw-def': 'def', 'kw-if': 'if', })}, // ... }) lexer.reset('def foo') lexer.next() // -> { type: 'kw-def', value: 'def' } lexer.next() // space lexer.next() // -> { type: 'name', value: 'foo' } ``` You can use [itt](https://github.com/nathan/itt)'s iterator adapters to make constructing keyword objects easier: ```js itt(['class', 'def', 'if']) .map(k => ['kw-' + k, k]) .toObject() ``` States ------ Moo allows you to define multiple lexer **states**. 
Each state defines its own separate set of token rules. Your lexer will start off in the first state given to `moo.states({})`. Rules can be annotated with `next`, `push`, and `pop`, to change the current state after that token is matched. A "stack" of past states is kept, which is used by `push` and `pop`. * **`next: 'bar'`** moves to the state named `bar`. (The stack is not changed.) * **`push: 'bar'`** moves to the state named `bar`, and pushes the old state onto the stack. * **`pop: 1`** removes one state from the top of the stack, and moves to that state. (Only `1` is supported.) Only rules from the current state can be matched. You need to copy your rule into all the states you want it to be matched in. For example, to tokenize JS-style string interpolation such as `a${{c: d}}e`, you might use: ```js let lexer = moo.states({ main: { strstart: {match: '`', push: 'lit'}, ident: /\w+/, lbrace: {match: '{', push: 'main'}, rbrace: {match: '}', pop: true}, colon: ':', space: {match: /\s+/, lineBreaks: true}, }, lit: { interp: {match: '${', push: 'main'}, escape: /\\./, strend: {match: '`', pop: true}, const: {match: /(?:[^$`]|\$(?!\{))+/, lineBreaks: true}, }, }) // <= `a${{c: d}}e` // => strstart const interp lbrace ident colon space ident rbrace rbrace const strend ``` The `rbrace` rule is annotated with `pop`, so it moves from the `main` state into either `lit` or `main`, depending on the stack. Errors ------ If none of your rules match, Moo will throw an Error; since it doesn't know what else to do. If you prefer, you can have moo return an error token instead of throwing an exception. The error token will contain the whole of the rest of the buffer. ```js moo.compile({ // ... myError: moo.error, }) moo.reset('invalid') moo.next() // -> { type: 'myError', value: 'invalid', text: 'invalid', offset: 0, lineBreaks: 0, line: 1, col: 1 } moo.next() // -> undefined ``` You can have a token type that both matches tokens _and_ contains error values. ```js moo.compile({ // ... myError: {match: /[\$?`]/, error: true}, }) ``` ### Formatting errors ### If you want to throw an error from your parser, you might find `formatError` helpful. Call it with the offending token: ```js throw new Error(lexer.formatError(token, "invalid syntax")) ``` It returns a string with a pretty error message. ``` Error: invalid syntax at line 2 col 15: totally valid `syntax` ^ ``` Iteration --------- Iterators: we got 'em. ```js for (let here of lexer) { // here = { type: 'number', value: '123', ... } } ``` Create an array of tokens. ```js let tokens = Array.from(lexer); ``` Use [itt](https://github.com/nathan/itt)'s iteration tools with Moo. ```js for (let [here, next] = itt(lexer).lookahead()) { // pass a number if you need more tokens // enjoy! } ``` Transform --------- Moo doesn't allow capturing groups, but you can supply a transform function, `value()`, which will be called on the value before storing it in the Token object. ```js moo.compile({ STRING: [ {match: /"""[^]*?"""/, lineBreaks: true, value: x => x.slice(3, -3)}, {match: /"(?:\\["\\rn]|[^"\\])*?"/, lineBreaks: true, value: x => x.slice(1, -1)}, {match: /'(?:\\['\\rn]|[^'\\])*?'/, lineBreaks: true, value: x => x.slice(1, -1)}, ], // ... }) ``` Contributing ------------ Do check the [FAQ](https://github.com/tjvr/moo/issues?q=label%3Aquestion). Before submitting an issue, [remember...](https://github.com/tjvr/moo/blob/master/.github/CONTRIBUTING.md) # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). 
It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Specification conformance whatwg-url is currently up to date with the URL spec up to commit [7ae1c69](https://github.com/whatwg/url/commit/7ae1c691c96f0d82fafa24c33aa1e8df9ffbf2bc). For `file:` URLs, whose [origin is left unspecified](https://url.spec.whatwg.org/#concept-url-origin), whatwg-url chooses to use a new opaque origin (which serializes to `"null"`). ## API ### The `URL` and `URLSearchParams` classes The main API is provided by the [`URL`](https://url.spec.whatwg.org/#url-class) and [`URLSearchParams`](https://url.spec.whatwg.org/#interface-urlsearchparams) exports, which follows the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use these. ### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They mostly operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. - [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/origin.html#ascii-serialisation-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` - [Percent decode](https://url.spec.whatwg.org/#percent-decode): `percentDecode(buffer)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - 
[`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - [`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - [`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state. The return value of "failure" in the spec is represented by `null`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ `null`. ## Development instructions First, install [Node.js](https://nodejs.org/). Then, fetch the dependencies of whatwg-url, by running from this directory: npm install To run tests: npm test To generate a coverage report: npm run coverage To build and run the live viewer: npm run build npm run build-live-viewer Serve the contents of the `live-viewer` directory using any web server. ## Supporting whatwg-url The jsdom project (including whatwg-url) is a community-driven project maintained by a team of [volunteers](https://github.com/orgs/jsdom/people). You could support us by: - [Getting professional support for whatwg-url](https://tidelift.com/subscription/pkg/npm-whatwg-url?utm_source=npm-whatwg-url&utm_medium=referral&utm_campaign=readme) as part of a Tidelift subscription. Tidelift helps making open source sustainable for us while giving teams assurances for maintenance, licensing, and security. - Contributing directly to the project. # jsdiff [![Build Status](https://secure.travis-ci.org/kpdecker/jsdiff.svg)](http://travis-ci.org/kpdecker/jsdiff) [![Sauce Test Status](https://saucelabs.com/buildstatus/jsdiff)](https://saucelabs.com/u/jsdiff) A javascript text differencing implementation. Based on the algorithm proposed in ["An O(ND) Difference Algorithm and its Variations" (Myers, 1986)](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6927). ## Installation ```bash npm install diff --save ``` ## API * `Diff.diffChars(oldStr, newStr[, options])` - diffs two blocks of text, comparing character by character. Returns a list of change objects (See below). Options * `ignoreCase`: `true` to ignore casing difference. Defaults to `false`. 
* `Diff.diffWords(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, ignoring whitespace. Returns a list of change objects (See below). Options * `ignoreCase`: Same as in `diffChars`. * `Diff.diffWordsWithSpace(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, treating whitespace as significant. Returns a list of change objects (See below). * `Diff.diffLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line. Options * `ignoreWhitespace`: `true` to ignore leading and trailing whitespace. This is the same as `diffTrimmedLines` * `newlineIsToken`: `true` to treat newline characters as separate tokens. This allows for changes to the newline structure to occur independently of the line content and to be treated as such. In general this is the more human friendly form of `diffLines` and `diffLines` is better suited for patches and other computer friendly output. Returns a list of change objects (See below). * `Diff.diffTrimmedLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line, ignoring leading and trailing whitespace. Returns a list of change objects (See below). * `Diff.diffSentences(oldStr, newStr[, options])` - diffs two blocks of text, comparing sentence by sentence. Returns a list of change objects (See below). * `Diff.diffCss(oldStr, newStr[, options])` - diffs two blocks of text, comparing CSS tokens. Returns a list of change objects (See below). * `Diff.diffJson(oldObj, newObj[, options])` - diffs two JSON objects, comparing the fields defined on each. The order of fields, etc does not matter in this comparison. Returns a list of change objects (See below). * `Diff.diffArrays(oldArr, newArr[, options])` - diffs two arrays, comparing each item for strict equality (===). Options * `comparator`: `function(left, right)` for custom equality checks Returns a list of change objects (See below). * `Diff.createTwoFilesPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Parameters: * `oldFileName` : String to be output in the filename section of the patch for the removals * `newFileName` : String to be output in the filename section of the patch for the additions * `oldStr` : Original string value * `newStr` : New string value * `oldHeader` : Additional information to include in the old file header * `newHeader` : Additional information to include in the new file header * `options` : An object with options. Currently, only `context` is supported and describes how many lines of context should be included. * `Diff.createPatch(fileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Just like Diff.createTwoFilesPatch, but with oldFileName being equal to newFileName. * `Diff.structuredPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options)` - returns an object with an array of hunk objects. This method is similar to createTwoFilesPatch, but returns a data structure suitable for further processing. Parameters are the same as createTwoFilesPatch. The data structure returned may look like this: ```js { oldFileName: 'oldfile', newFileName: 'newfile', oldHeader: 'header1', newHeader: 'header2', hunks: [{ oldStart: 1, oldLines: 3, newStart: 1, newLines: 3, lines: [' line2', ' line3', '-line4', '+line5', '\\ No newline at end of file'], }] } ``` * `Diff.applyPatch(source, patch[, options])` - applies a unified diff patch. Return a string containing new version of provided data. 
`patch` may be a string diff or the output from the `parsePatch` or `structuredPatch` methods. The optional `options` object may have the following keys: - `fuzzFactor`: Number of lines that are allowed to differ before rejecting a patch. Defaults to 0. - `compareLine(lineNumber, line, operation, patchContent)`: Callback used to compare to given lines to determine if they should be considered equal when patching. Defaults to strict equality but may be overridden to provide fuzzier comparison. Should return false if the lines should be rejected. * `Diff.applyPatches(patch, options)` - applies one or more patches. This method will iterate over the contents of the patch and apply to data provided through callbacks. The general flow for each patch index is: - `options.loadFile(index, callback)` is called. The caller should then load the contents of the file and then pass that to the `callback(err, data)` callback. Passing an `err` will terminate further patch execution. - `options.patched(index, content, callback)` is called once the patch has been applied. `content` will be the return value from `applyPatch`. When it's ready, the caller should call `callback(err)` callback. Passing an `err` will terminate further patch execution. Once all patches have been applied or an error occurs, the `options.complete(err)` callback is made. * `Diff.parsePatch(diffStr)` - Parses a patch into structured data Return a JSON object representation of the a patch, suitable for use with the `applyPatch` method. This parses to the same structure returned by `Diff.structuredPatch`. * `convertChangesToXML(changes)` - converts a list of changes to a serialized XML format All methods above which accept the optional `callback` method will run in sync mode when that parameter is omitted and in async mode when supplied. This allows for larger diffs without blocking the event loop. This may be passed either directly as the final parameter or as the `callback` field in the `options` object. ### Change Objects Many of the methods above return change objects. These objects consist of the following fields: * `value`: Text content * `added`: True if the value was inserted into the new string * `removed`: True if the value was removed from the old string Note that some cases may omit a particular flag field. Comparison on the flag fields should always be done in a truthy or falsy manner. ## Examples Basic example in Node ```js require('colors'); const Diff = require('diff'); const one = 'beep boop'; const other = 'beep boob blah'; const diff = Diff.diffChars(one, other); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; process.stderr.write(part.value[color]); }); console.log(); ``` Running the above program should yield <img src="images/node_example.png" alt="Node Example"> Basic example in a web page ```html <pre id="display"></pre> <script src="diff.js"></script> <script> const one = 'beep boop', other = 'beep boob blah', color = ''; let span = null; const diff = Diff.diffChars(one, other), display = document.getElementById('display'), fragment = document.createDocumentFragment(); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 
'red' : 'grey';
    span = document.createElement('span');
    span.style.color = color;
    span.appendChild(document.createTextNode(part.value));
    fragment.appendChild(span);
  });

  display.appendChild(fragment);
</script>
```

Open the above .html file in a browser and you should see

<img src="images/web_example.png" alt="Node Example">

**[Full online demo](http://kpdecker.github.com/jsdiff)**

## Compatibility

[![Sauce Test Status](https://saucelabs.com/browser-matrix/jsdiff.svg)](https://saucelabs.com/u/jsdiff)

jsdiff supports all ES3 environments with some known issues on IE8 and below. Under these browsers some diff algorithms such as word diff and others may fail due to lack of support for capturing groups in the `split` operation.

## License

See [LICENSE](https://github.com/kpdecker/jsdiff/blob/master/LICENSE).
const src = new Minipass() src.write('foo') src.on('data', handler1) // receives 'foo' right away src.on('data', handler2) // nothing to see here! ``` Using a dedicated tee-stream can be used in this case as well: ```js // Safe example: tee to both data handlers const src = new Minipass() src.write('foo') const tee = new Minipass() tee.on('data', handler1) tee.on('data', handler2) src.pipe(tee) ``` ## USAGE It's a stream! Use it like a stream and it'll most likely do what you want. ```js const Minipass = require('minipass') const mp = new Minipass(options) // optional: { encoding, objectMode } mp.write('foo') mp.pipe(someOtherStream) mp.end('bar') ``` ### OPTIONS * `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`. * `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value. ### API Implements the user-facing portions of Node.js's `Readable` and `Writable` streams. ### Methods * `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or true if it's still in "flowing" mode. * `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed. * `setEncoding(encoding)` - Set the encoding for data coming of the stream. This can only be done once. * `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed. * `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted. * `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations. * `on(ev, fn)`, `emit(ev, fn)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".) * `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`. * `collect()` - Return a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data. * `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data. * `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying. * `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered. ### Properties * `bufferLength` Read-only. 
Total number of bytes buffered, or in the case of objectMode, the total number of objects. * `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.) * `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted. * `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediateyl re-emit it if it has already been emitted. * `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` * `readable` Whether the stream is readable. Default `true`. * `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.) * `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.) * `destroyed` A getter that indicates whether the stream was destroyed. * `paused` True if the stream has been explicitly paused, otherwise false. * `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`. ### Events * `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream. * `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted. * `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`. * `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`. * `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise. * `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream. * `readable` Emitted when data is buffered and ready to be read by a consumer. * `resume` Emitted when stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.) ### Static Methods * `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and false otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.) ## EXAMPLES Here are some examples of things you can do with Minipass streams. ### simple "are you done yet" promise ```js mp.promise().then(() => { // stream is finished }, er => { // stream emitted an error }) ``` ### collecting ```js mp.collect().then(all => { // all is an array of all the data emitted // encoding is supported in this case, so // so the result will be a collection of strings if // an encoding is specified, or buffers/objects if not. 
// // In an async function, you may do // const data = await stream.collect() }) ``` ### collecting into a single blob This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way: ```js mp.concat().then(onebigchunk => { // onebigchunk is a string if the stream // had an encoding set, or a buffer otherwise. }) ``` ### iteration You can iterate over streams synchronously or asynchronously in platforms that support it. Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration. To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option. ```js const mp = new Minipass({ objectMode: true }) mp.write('a') mp.write('b') for (let letter of mp) { console.log(letter) // a, b } mp.write('c') mp.write('d') for (let letter of mp) { console.log(letter) // c, d } mp.write('e') mp.end() for (let letter of mp) { console.log(letter) // e } for (let letter of mp) { console.log(letter) // nothing } ``` Asynchronous iteration will continue until the end event is reached, consuming all of the data. ```js const mp = new Minipass({ encoding: 'utf8' }) // some source of some data let i = 5 const inter = setInterval(() => { if (i --> 0) mp.write(Buffer.from('foo\n', 'utf8')) else { mp.end() clearInterval(inter) } }, 100) // consume the data with asynchronous iteration async function consume () { for await (let chunk of mp) { console.log(chunk) } return 'ok' } consume().then(res => console.log(res)) // logs `foo\n` 5 times, and then `ok` ``` ### subclass that `console.log()`s everything written into it ```js class Logger extends Minipass { write (chunk, encoding, callback) { console.log('WRITE', chunk, encoding) return super.write(chunk, encoding, callback) } end (chunk, encoding, callback) { console.log('END', chunk, encoding) return super.end(chunk, encoding, callback) } } someSource.pipe(new Logger()).pipe(someDest) ``` ### same thing, but using an inline anonymous class ```js // js classes are fun someSource .pipe(new (class extends Minipass { emit (ev, ...data) { // let's also log events, because debugging some weird thing console.log('EMIT', ev) return super.emit(ev, ...data) } write (chunk, encoding, callback) { console.log('WRITE', chunk, encoding) return super.write(chunk, encoding, callback) } end (chunk, encoding, callback) { console.log('END', chunk, encoding) return super.end(chunk, encoding, callback) } })) .pipe(someDest) ``` ### subclass that defers 'end' for some reason ```js class SlowEnd extends Minipass { emit (ev, ...args) { if (ev === 'end') { console.log('going to end, hold on a sec') setTimeout(() => { console.log('ok, ready to end now') super.emit('end', ...args) }, 100) } else { return super.emit(ev, ...args) } } } ``` ### transform that creates newline-delimited JSON ```js class NDJSONEncode extends Minipass { write (obj, cb) { try { // JSON.stringify can throw, emit an error on that return super.write(JSON.stringify(obj) + '\n', 'utf8', cb) } catch (er) { this.emit('error', er) } } end (obj, cb) { if (typeof obj === 'function') { cb = obj obj = undefined } if (obj !== undefined) { this.write(obj) } return super.end(cb) } } ``` ### transform that parses 
newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor (options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write (chunk, encoding, cb) {
    if (typeof chunk === 'string' &&
        typeof encoding === 'string' &&
        encoding !== 'utf8') {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      let parsed
      try {
        // JSON.parse can throw on a malformed line; report it and move on
        parsed = JSON.parse(jsonData[i])
      } catch (er) {
        this.emit('error', er)
        continue
      }
      super.write(parsed)
    }
    if (cb) cb()
  }
}
```

# yallist

Yet Another Linked List

There are many doubly-linked list implementations like it, but this one is mine.

For when an array would be too big, and a Map can't be iterated in reverse order.

[![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist)

## basic usage

```javascript
var yallist = require('yallist')
var myList = yallist.create([1, 2, 3])
myList.push('foo')
myList.unshift('bar')
// of course pop() and shift() are there, too
console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo']
myList.forEach(function (k) {
  // walk the list head to tail
})
myList.forEachReverse(function (k, index, list) {
  // walk the list tail to head
})
var myDoubledList = myList.map(function (k) {
  return k + k
})
// now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo']
// mapReverse is also a thing
var myDoubledListReverse = myList.mapReverse(function (k) {
  return k + k
}) // ['foofoo', 6, 4, 2, 'barbar']

var reduced = myList.reduce(function (set, entry) {
  set += entry
  return set
}, 'start')
console.log(reduced) // 'startbar123foo'
```

## api

The whole API is considered "public".

Functions with the same name as an Array method work more or less the same way. There's reverse versions of most things because that's the point.

### Yallist

Default export, the class that holds and manages a list.

Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list.

The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots.

### Yallist.create(..)

Alias for Yallist function. Some people like factories.

#### yallist.head

The first node in the list

#### yallist.tail

The last node in the list

#### yallist.length

The number of nodes in the list. (Change this at your peril. It is not magic like Array length.)

#### yallist.toArray()

Convert the list to an array.

#### yallist.forEach(fn, [thisp])

Call a function on each item in the list.

#### yallist.forEachReverse(fn, [thisp])

Call a function on each item in the list, in reverse order.

#### yallist.get(n)

Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array.

#### yallist.getReverse(n)

Get the data at position `n`, counting from the tail.

#### yallist.map(fn, thisp)

Create a new Yallist with the result of calling the function on each item.

#### yallist.mapReverse(fn, thisp)

Same as `map`, but in reverse.

#### yallist.pop()

Get the data from the list tail, and remove the tail from the list.

#### yallist.push(item, ...)

Insert one or more items to the tail of the list.
#### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. #### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. #### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) # Web IDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [Web IDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). The goal is that you should be able to write code like ```js "use strict"; const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a Web IDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different Web IDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the Web IDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the Web IDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). Each method also accepts a second, optional, parameter for miscellaneous options. For conversion methods that throw errors, a string option `{ context }` may be provided to provide more information in the error message. (For example, `conversions["float"](NaN, { context: "Argument 1 of Interface's operation" })` will throw an error with message `"Argument 1 of Interface's operation is not a finite floating-point value."`) Specific conversions may also accept other options, the details of which can be found below. 
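To make the option handling described above concrete, here is a small, hedged sketch that only uses the per-type conversion functions and the `{ context }` option documented in this section (the context string itself is illustrative):

```js
"use strict";
const conversions = require("webidl-conversions");

// Truthy input converts the way a Web IDL boolean slot would.
const flag = conversions["boolean"]("yes");       // true

// Numeric strings pass through the unsigned long conversion rules.
const count = conversions["unsigned long"]("42"); // 42

// Conversions that can throw accept a context string for clearer errors.
try {
  conversions["float"](NaN, { context: "Argument 1 of doStuff" });
} catch (err) {
  console.error(err.message); // message mentions "Argument 1 of doStuff"
}

console.log(flag, count);
```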
## Conversions implemented Conversions for all of the basic types from the Web IDL specification are implemented: - [`any`](https://heycam.github.io/webidl/#es-any) - [`void`](https://heycam.github.io/webidl/#es-void) - [`boolean`](https://heycam.github.io/webidl/#es-boolean) - [Integer types](https://heycam.github.io/webidl/#es-integer-types), which can additionally be provided the boolean options `{ clamp, enforceRange }` as a second parameter - [`float`](https://heycam.github.io/webidl/#es-float), [`unrestricted float`](https://heycam.github.io/webidl/#es-unrestricted-float) - [`double`](https://heycam.github.io/webidl/#es-double), [`unrestricted double`](https://heycam.github.io/webidl/#es-unrestricted-double) - [`DOMString`](https://heycam.github.io/webidl/#es-DOMString), which can additionally be provided the boolean option `{ treatNullAsEmptyString }` as a second parameter - [`ByteString`](https://heycam.github.io/webidl/#es-ByteString), [`USVString`](https://heycam.github.io/webidl/#es-USVString) - [`object`](https://heycam.github.io/webidl/#es-object) - [`Error`](https://heycam.github.io/webidl/#es-Error) - [Buffer source types](https://heycam.github.io/webidl/#es-buffer-source-types) Additionally, for convenience, the following derived type definitions are implemented: - [`ArrayBufferView`](https://heycam.github.io/webidl/#ArrayBufferView) - [`BufferSource`](https://heycam.github.io/webidl/#BufferSource) - [`DOMTimeStamp`](https://heycam.github.io/webidl/#DOMTimeStamp) - [`Function`](https://heycam.github.io/webidl/#Function) - [`VoidFunction`](https://heycam.github.io/webidl/#VoidFunction) (although it will not censor the return type) Derived types, such as nullable types, promise types, sequences, records, etc. are not handled by this library. You may wish to investigate the [webidl2js](https://github.com/jsdom/webidl2js) project. ### A note on the `long long` types The `long long` and `unsigned long long` Web IDL types can hold values that cannot be stored in JavaScript numbers, so the conversion is imperfect. For example, converting the JavaScript number `18446744073709552000` to a Web IDL `long long` is supposed to produce the Web IDL value `-18446744073709551232`. Since we are representing our Web IDL values in JavaScript, we can't represent `-18446744073709551232`, so we instead the best we could do is `-18446744073709552000` as the output. This library actually doesn't even get that far. Producing those results would require doing accurate modular arithmetic on 64-bit intermediate values, but JavaScript does not make this easy. We could pull in a big-integer library as a dependency, but in lieu of that, we for now have decided to just produce inaccurate results if you pass in numbers that are not strictly between `Number.MIN_SAFE_INTEGER` and `Number.MAX_SAFE_INTEGER`. ## Background What's actually going on here, conceptually, is pretty weird. Let's try to explain. Web IDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on Web IDL values, i.e. instances of Web IDL types. For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a Web IDL value of [Web IDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules. 
Separately from its type system, Web IDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given Web IDL operation, how does that get converted into a Web IDL value? For example, a JavaScript `true` passed in the position of a Web IDL `boolean` argument becomes a Web IDL `true`. But, a JavaScript `true` passed in the position of a [Web IDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a Web IDL `1`. And so on. Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the Web IDL algorithms, they don't actually use Web IDL values, since those aren't "real" outside of specs. Instead, implementations apply the Web IDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting Web IDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is payoff of Web IDL, in a nutshell. And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given Web IDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ Web IDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ Web IDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a Web IDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't use this Seriously, why would you ever use this? You really shouldn't. Web IDL is … strange, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from Web IDL. In general, your JavaScript should not be trying to become more like Web IDL; if anything, we should fix Web IDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in Web IDL. Its main consumer is the [jsdom](https://github.com/tmpvar/jsdom) project. # AssemblyScript Loader A convenient loader for [AssemblyScript](https://assemblyscript.org) modules. Demangles module exports to a friendly object structure compatible with TypeScript definitions and provides useful utility to read/write data from/to memory. 
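As a rough sketch of how the loader is typically used — assuming a compiled module at `build/optimized.wasm` that exports an `add(a, b)` function (both the path and the export are hypothetical, and the exact shape of the returned object can vary between loader versions):

```js
const fs = require("fs");
const loader = require("@assemblyscript/loader");

// Instantiate the compiled WebAssembly module (synchronously, for brevity).
const wasmModule = loader.instantiateSync(
  fs.readFileSync("build/optimized.wasm"), // hypothetical build output
  { /* imports, if the module declares any */ }
);

// Demangled exports can be called like ordinary JavaScript functions.
const { add } = wasmModule.exports; // `add` is a hypothetical export
console.log(add(1, 2));             // 3
```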
[Documentation](https://assemblyscript.org/loader.html) # which-module > Find the module object for something that was require()d [![Build Status](https://travis-ci.org/nexdrew/which-module.svg?branch=master)](https://travis-ci.org/nexdrew/which-module) [![Coverage Status](https://coveralls.io/repos/github/nexdrew/which-module/badge.svg?branch=master)](https://coveralls.io/github/nexdrew/which-module?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) Find the `module` object in `require.cache` for something that was `require()`d or `import`ed - essentially a reverse `require()` lookup. Useful for libs that want to e.g. lookup a filename for a module or submodule that it did not `require()` itself. ## Install and Usage ``` npm install --save which-module ``` ```js const whichModule = require('which-module') console.log(whichModule(require('something'))) // Module { // id: '/path/to/project/node_modules/something/index.js', // exports: [Function], // parent: ..., // filename: '/path/to/project/node_modules/something/index.js', // loaded: true, // children: [], // paths: [ '/path/to/project/node_modules/something/node_modules', // '/path/to/project/node_modules', // '/path/to/node_modules', // '/path/node_modules', // '/node_modules' ] } ``` ## API ### `whichModule(exported)` Return the [`module` object](https://nodejs.org/api/modules.html#modules_the_module_object), if any, that represents the given argument in the `require.cache`. `exported` can be anything that was previously `require()`d or `import`ed as a module, submodule, or dependency - which means `exported` is identical to the `module.exports` returned by this method. If `exported` did not come from the `exports` of a `module` in `require.cache`, then this method returns `null`. ## License ISC © Contributors # brace-expansion [Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html), as known from sh/bash, in JavaScript. [![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion) [![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion) [![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/) [![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion) ## Example ```js var expand = require('brace-expansion'); expand('file-{a,b,c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('-v{,,}') // => ['-v', '-v', '-v'] expand('file{0..2}.jpg') // => ['file0.jpg', 'file1.jpg', 'file2.jpg'] expand('file-{a..c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('file{2..0}.jpg') // => ['file2.jpg', 'file1.jpg', 'file0.jpg'] expand('file{0..4..2}.jpg') // => ['file0.jpg', 'file2.jpg', 'file4.jpg'] expand('file-{a..e..2}.jpg') // => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg'] expand('file{00..10..5}.jpg') // => ['file00.jpg', 'file05.jpg', 'file10.jpg'] expand('{{A..C},{a..c}}') // => ['A', 'B', 'C', 'a', 'b', 'c'] expand('ppp{,config,oe{,conf}}') // => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf'] ``` ## API ```js var expand = require('brace-expansion'); ``` ### var expanded = expand(str) Return an array of all possible and valid expansions of `str`. If none are found, `[str]` is returned. 
Valid expansions are: ```js /^(.*,)+(.+)?$/ // {a,b,...} ``` A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` A numeric sequence from `x` to `y` inclusive, with optional increment. If `x` or `y` start with a leading `0`, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` An alphabetic sequence from `x` to `y` inclusive, with optional increment. `x` and `y` must be exactly one character, and if given, `incr` must be a number. For compatibility reasons, the string `${` is not eligible for brace expansion. ## Installation With [npm](https://npmjs.org) do: ```bash npm install brace-expansion ``` ## Contributors - [Julian Gruber](https://github.com/juliangruber) - [Isaac Z. Schlueter](https://github.com/isaacs) ## Sponsors This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)! Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)! ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. <p align="center"> <img width="250" src="/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> [![Build Status][travis-image]][travis-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description : Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments. > <img width="400" src="/screen.png"> * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). 
## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage : ### Simple Example ```javascript #!/usr/bin/env node const {argv} = require('yargs') if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node require('yargs') // eslint-disable-line .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ## Webpack See usage examples of yargs with webpack in [docs](/docs/webpack.md). ## Community : Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). ## Documentation : ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Contributing](/contributing.md) [travis-url]: https://travis-ci.org/yargs/yargs [travis-image]: https://img.shields.io/travis/yargs/yargs/master.svg [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. **WARNING:** This module is **NOT RFC3548** compliant, it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. 
## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base. Base | Alphabet ------------- | ------------- 2 | `01` 8 | `01234567` 11 | `0123456789a` 16 | `0123456789abcdef` 32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ` 32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32) 36 | `0123456789abcdefghijklmnopqrstuvwxyz` 58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz` 62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ` 64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/` 66 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~` ## How it works It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. Then for every leading zero in the input (not significant as a number) it will encode as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base. For example, a base58 alphabet packs roughly 5.858 bits per character. This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character. While unusual, this does mean that no padding is required and it works for bases like 43. ## LICENSE [MIT](LICENSE) A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets. 
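To make the leading-zero behaviour described under "How it works" easier to see, here is a tiny sketch using a base16 alphabet and the same `000f` string mentioned above; it only relies on the `encode`/`decode` API shown in the Base58 example:

```js
var baseX = require('base-x')

// A base16 (0-f) alphabet, as in the "How it works" example.
var base16 = baseX('0123456789abcdef')

// Each leading '0' character decodes to a whole zero byte, so '000f'
// decodes to 4 bytes rather than the 2 bytes of canonical hex.
var bytes = base16.decode('000f')
console.log(bytes)                 // => <Buffer 00 00 00 0f>

// Encoding round-trips back to the same string.
console.log(base16.encode(bytes))  // => '000f'
```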
<p align="center"> <a href="https://assemblyscript.org" target="_blank" rel="noopener"><img width="100" src="https://avatars1.githubusercontent.com/u/28916798?s=200&v=4" alt="AssemblyScript logo"></a> </p> <p align="center"> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3ATest"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Test/master?label=test&logo=github" alt="Test status" /></a> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3APublish"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Publish/master?label=publish&logo=github" alt="Publish status" /></a> <a href="https://www.npmjs.com/package/assemblyscript"><img src="https://img.shields.io/npm/v/assemblyscript.svg?label=compiler&color=007acc&logo=npm" alt="npm compiler version" /></a> <a href="https://www.npmjs.com/package/@assemblyscript/loader"><img src="https://img.shields.io/npm/v/@assemblyscript/loader.svg?label=loader&color=007acc&logo=npm" alt="npm loader version" /></a> <a href="https://discord.gg/assemblyscript"><img src="https://img.shields.io/discord/721472913886281818.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2" alt="Discord online" /></a> </p> <p align="justify"><strong>AssemblyScript</strong> compiles a strict variant of <a href="http://www.typescriptlang.org">TypeScript</a> (basically JavaScript with types) to <a href="http://webassembly.org">WebAssembly</a> using <a href="https://github.com/WebAssembly/binaryen">Binaryen</a>. It generates lean and mean WebAssembly modules while being just an <code>npm install</code> away.</p> <h3 align="center"> <a href="https://assemblyscript.org">About</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/introduction.html">Introduction</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/quick-start.html">Quick&nbsp;start</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/examples.html">Examples</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/development.html">Development&nbsp;instructions</a> </h3> <br> <h2 align="center">Contributors</h2> <p align="center"> <a href="https://assemblyscript.org/#contributors"><img src="https://assemblyscript.org/contributors.svg" alt="Contributor logos" width="720" /></a> </p> <h2 align="center">Thanks to our sponsors!</h2> <p align="justify">Most of the core team members and most contributors do this open source work in their free time. If you use AssemblyScript for a serious task or plan to do so, and you'd like us to invest more time on it, <a href="https://opencollective.com/assemblyscript/donate" target="_blank" rel="noopener">please donate</a> to our <a href="https://opencollective.com/assemblyscript" target="_blank" rel="noopener">OpenCollective</a>. By sponsoring this project, your logo will show up below. 
Thank you so much for your support!</p> <p align="center"> <a href="https://assemblyscript.org/#sponsors"><img src="https://assemblyscript.org/sponsors.svg" alt="Sponsor logos" width="720" /></a> </p> # Web3-and-NEAR # yargs-parser [![Build Status](https://travis-ci.org/yargs/yargs-parser.svg)](https://travis-ci.org/yargs/yargs-parser) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. <img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js var argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```sh node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js var argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```sh { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js var parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## API ### require('yargs-parser')(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). 
**returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```sh node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```sh node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```sh node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```sh node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```sh node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```sh node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```sh node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```sh node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```sh node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```sh node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. 
### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```sh node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```sh node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```sh node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```sh node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```sh node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```sh node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```sh node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```sh node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```sh node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```sh node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. 
_If disabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC Standard library ================ Standard library components for use with `tsc` (portable) and `asc` (assembly). Base configurations (.json) and definition files (.d.ts) are relevant to `tsc` only and not used by `asc`. # fs-minipass Filesystem streams based on [minipass](http://npm.im/minipass). 4 classes are exported: - ReadStream - ReadStreamSync - WriteStream - WriteStreamSync When using `ReadStreamSync`, all of the data is made available immediately upon consuming the stream. Nothing is buffered in memory when the stream is constructed. If the stream is piped to a writer, then it will synchronously `read()` and emit data into the writer as fast as the writer can consume it. (That is, it will respect backpressure.) If you call `stream.read()` then it will read the entire file and return the contents. When using `WriteStreamSync`, every write is flushed to the file synchronously. If your writes all come in a single tick, then it'll write it all out in a single tick. It's as synchronous as you are. The async versions work much like their node builtin counterparts, with the exception of introducing significantly less Stream machinery overhead. ## USAGE It's just streams, you pipe them or read() them or write() to them. ```js const fsm = require('fs-minipass') const readStream = new fsm.ReadStream('file.txt') const writeStream = new fsm.WriteStream('output.txt') writeStream.write('some file header or whatever\n') readStream.pipe(writeStream) ``` ## ReadStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `readSize` The size of reads to do, defaults to 16MB - `size` The size of the file, if known. Prevents zero-byte read() call at the end. - `autoClose` Set to `false` to prevent the file descriptor from being closed when the file is done being read. ## WriteStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `mode` The mode to create the file with. Defaults to `0o666`. - `start` The position in the file to start reading. If not specified, then the file will start writing at position zero, and be truncated by default. - `autoClose` Set to `false` to prevent the file descriptor from being closed when the stream is ended. - `flags` Flags to use when opening the file. Irrelevant if `fd` is passed in, since file won't be opened in that case. Defaults to `'a'` if a `pos` is specified, or `'w'` otherwise. 
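As a short, hedged sketch of the constructor options listed above (the file names are illustrative; only documented option names are used):

```js
const fsm = require('fs-minipass')

// Read with a smaller read size and keep the descriptor open afterwards.
const readStream = new fsm.ReadStream('input.bin', {
  readSize: 1024 * 1024, // 1MB reads instead of the 16MB default
  autoClose: false       // close the file descriptor yourself when done
})

// Write with an explicit mode, truncating any existing file.
const writeStream = new fsm.WriteStream('output.bin', {
  mode: 0o644,
  flags: 'w'
})

readStream.pipe(writeStream)
```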
# wrappy Callback wrapping utility ## USAGE ```javascript var wrappy = require("wrappy") // var wrapper = wrappy(wrapperFunction) // make sure a cb is called only once // See also: http://npm.im/once for this specific use case var once = wrappy(function (cb) { var called = false return function () { if (called) return called = true return cb.apply(this, arguments) } }) function printBoo () { console.log('boo') } // has some rando property printBoo.iAmBooPrinter = true var onlyPrintOnce = once(printBoo) onlyPrintOnce() // prints 'boo' onlyPrintOnce() // does nothing // random property is retained! assert.equal(onlyPrintOnce.iAmBooPrinter, true) ``` # y18n [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js const __ = require('y18n')().__; console.log(__('my awesome string %s', 'foo')); ``` output: `my awesome string foo` _using tagged template literals_ ```js const __ = require('y18n')().__; const str = 'foo'; console.log(__`my awesome string ${str}`); ``` output: `my awesome string foo` _pluralization support:_ ```js const __n = require('y18n')().__n; console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')); ``` output: `2 fishes foo` ## Deno Example As of `v5` `y18n` supports [Deno](https://github.com/denoland/deno): ```typescript import y18n from "https://deno.land/x/y18n/deno.ts"; const __ = y18n({ locale: 'pirate', directory: './test/locales' }).__ console.info(__`Hi, ${'Ben'} ${'Coe'}!`) ``` You will need to run with `--allow-read` to load alternative locales. ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). 
Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a).

## License

ISC

[npm-url]: https://npmjs.org/package/y18n
[npm-image]: https://img.shields.io/npm/v/y18n.svg
[standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg
[standard-url]: https://github.com/feross/standard

# Near Bindings Generator

Transforms the AssemblyScript AST to serialize exported functions and add `encode` and `decode` functions for generating and parsing JSON strings.

## Using via CLI

After installing it with `npm install nearprotocol/near-bindgen-as`, add the transform to the AssemblyScript compiler's CLI arguments:

```bash
asc <file> --transform near-bindgen-as ...
```

This module also adds a binary `near-asc` which adds the default arguments required to build near contracts as well as the transformer.

```bash
near-asc <input file> <output file>
```

## Using a script to compile

Another way is to add a file such as `asconfig.js`:

```js
const compile = require("near-bindgen-as/compiler").compile;
compile("assembly/index.ts", // input file
        "out/index.wasm",    // output file
        [
          // "-O1",          // Optional arguments
          "--debug",
          "--measure"
        ],
        // Prints out the final cli arguments passed to compiler.
        { verbose: true }
);
```

It can then be built with `node asconfig.js`. There is an example of this in the test directory.

# rimraf

[![Build Status](https://travis-ci.org/isaacs/rimraf.svg?branch=master)](https://travis-ci.org/isaacs/rimraf) [![Dependency Status](https://david-dm.org/isaacs/rimraf.svg)](https://david-dm.org/isaacs/rimraf) [![devDependency Status](https://david-dm.org/isaacs/rimraf/dev-status.svg)](https://david-dm.org/isaacs/rimraf#info=devDependencies)

The [UNIX command](http://en.wikipedia.org/wiki/Rm_(Unix)) `rm -rf` for node.

Install with `npm install rimraf`, or just drop rimraf.js somewhere.

## API

`rimraf(f, [opts], callback)`

The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with `opts.disableGlob` (defaults to `false`). This might be handy, for instance, if you have filenames that contain globbing wildcard characters.

The callback will be called with an error if there is one. Certain errors are handled for you:

* Windows: `EBUSY` and `ENOTEMPTY` - rimraf will back off a maximum of `opts.maxBusyTries` times before giving up, adding 100ms of wait between each attempt. The default `maxBusyTries` is 3.
* `ENOENT` - If the file doesn't exist, rimraf will return successfully, since your desired outcome is already the case.
* `EMFILE` - Since `readdir` requires opening a file descriptor, it's possible to hit `EMFILE` if too many file descriptors are in use. In the sync case, there's nothing to be done for this. But in the async case, rimraf will gradually back off with timeouts up to `opts.emfileWait` ms, which defaults to 1000.

## options

* unlink, chmod, stat, lstat, rmdir, readdir, unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync

In order to use a custom file system library, you can override specific fs functions on the options object. If any of these functions are present on the options object, then the supplied function will be used instead of the default fs method.

Sync methods are only relevant for `rimraf.sync()`, of course.
For example: ```javascript var myCustomFS = require('some-custom-fs') rimraf('some-thing', myCustomFS, callback) ``` * maxBusyTries If an `EBUSY`, `ENOTEMPTY`, or `EPERM` error code is encountered on Windows systems, then rimraf will retry with a linear backoff wait of 100ms longer on each try. The default maxBusyTries is 3. Only relevant for async usage. * emfileWait If an `EMFILE` error is encountered, then rimraf will retry repeatedly with a linear backoff of 1ms longer on each try, until the timeout counter hits this max. The default limit is 1000. If you repeatedly encounter `EMFILE` errors, then consider using [graceful-fs](http://npm.im/graceful-fs) in your program. Only relevant for async usage. * glob Set to `false` to disable [glob](http://npm.im/glob) pattern matching. Set to an object to pass options to the glob module. The default glob options are `{ nosort: true, silent: true }`. Glob version 6 is used in this module. Relevant for both sync and async usage. * disableGlob Set to any non-falsey value to disable globbing entirely. (Equivalent to setting `glob: false`.) ## rimraf.sync It can remove stuff synchronously, too. But that's not so good. Use the async API. It's better. ## CLI If installed with `npm install rimraf -g` it can be used as a global command `rimraf <path> [<path> ...]` which is useful for cross platform support. ## mkdirp If you need to create a directory recursively, check out [mkdirp](https://github.com/substack/node-mkdirp). # yallist Yet Another Linked List There are many doubly-linked list implementations like it, but this one is mine. For when an array would be too big, and a Map can't be iterated in reverse order. [![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist) ## basic usage ```javascript var yallist = require('yallist') var myList = yallist.create([1, 2, 3]) myList.push('foo') myList.unshift('bar') // of course pop() and shift() are there, too console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo'] myList.forEach(function (k) { // walk the list head to tail }) myList.forEachReverse(function (k, index, list) { // walk the list tail to head }) var myDoubledList = myList.map(function (k) { return k + k }) // now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo'] // mapReverse is also a thing var myDoubledListReverse = myList.mapReverse(function (k) { return k + k }) // ['foofoo', 6, 4, 2, 'barbar'] var reduced = myList.reduce(function (set, entry) { set += entry return set }, 'start') console.log(reduced) // 'startfoo123bar' ``` ## api The whole API is considered "public". Functions with the same name as an Array method work more or less the same way. There's reverse versions of most things because that's the point. ### Yallist Default export, the class that holds and manages a list. Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list. The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots. ### Yallist.create(..) Alias for Yallist function. Some people like factories. #### yallist.head The first node in the list #### yallist.tail The last node in the list #### yallist.length The number of nodes in the list. (Change this at your peril. It is not magic like Array length.) #### yallist.toArray() Convert the list to an array. 
#### yallist.forEach(fn, [thisp]) Call a function on each item in the list. #### yallist.forEachReverse(fn, [thisp]) Call a function on each item in the list, in reverse order. #### yallist.get(n) Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array. #### yallist.getReverse(n) Get the data at position `n`, counting from the tail. #### yallist.map(fn, thisp) Create a new Yallist with the result of calling the function on each item. #### yallist.mapReverse(fn, thisp) Same as `map`, but in reverse. #### yallist.pop() Get the data from the list tail, and remove the tail from the list. #### yallist.push(item, ...) Insert one or more items to the tail of the list. #### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. #### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. #### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) # hasurl [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] > Determine whether Node.js' native [WHATWG `URL`](https://nodejs.org/api/url.html#url_the_whatwg_url_api) implementation is available. ## Installation [Node.js](http://nodejs.org/) `>= 4` is required. To install, type this at the command line: ```shell npm install hasurl ``` ## Usage ```js const hasURL = require('hasurl'); if (hasURL()) { // supported } else { // fallback } ``` [npm-image]: https://img.shields.io/npm/v/hasurl.svg [npm-url]: https://npmjs.org/package/hasurl [travis-image]: https://img.shields.io/travis/stevenvachon/hasurl.svg [travis-url]: https://travis-ci.org/stevenvachon/hasurl # lodash.sortby v4.7.0 The [lodash](https://lodash.com/) method `_.sortBy` exported as a [Node.js](https://nodejs.org/) module. 
## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.sortby ``` In Node.js: ```js var sortBy = require('lodash.sortby'); ``` See the [documentation](https://lodash.com/docs#sortBy) or [package source](https://github.com/lodash/lodash/blob/4.7.0-npm-packages/lodash.sortby) for more details. # node-tar [![Build Status](https://travis-ci.org/npm/node-tar.svg?branch=master)](https://travis-ci.org/npm/node-tar) [Fast](./benchmarks) and full-featured Tar for Node.js The API is designed to mimic the behavior of `tar(1)` on unix systems. If you are familiar with how tar works, most of this will hopefully be straightforward for you. If not, then hopefully this module can teach you useful unix skills that may come in handy someday :) ## Background A "tar file" or "tarball" is an archive of file system entries (directories, files, links, etc.) The name comes from "tape archive". If you run `man tar` on almost any Unix command line, you'll learn quite a bit about what it can do, and its history. Tar has 5 main top-level commands: * `c` Create an archive * `r` Replace entries within an archive * `u` Update entries within an archive (ie, replace if they're newer) * `t` List out the contents of an archive * `x` Extract an archive to disk The other flags and options modify how this top level function works. ## High-Level API These 5 functions are the high-level API. All of them have a single-character name (for unix nerds familiar with `tar(1)`) as well as a long name (for everyone else). All the high-level functions take the following arguments, all three of which are optional and may be omitted. 1. `options` - An optional object specifying various options 2. `paths` - An array of paths to add or extract 3. `callback` - Called when the command is completed, if async. (If sync or no file specified, providing a callback throws a `TypeError`.) If the command is sync (ie, if `options.sync=true`), then the callback is not allowed, since the action will be completed immediately. If a `file` argument is specified, and the command is async, then a `Promise` is returned. In this case, if async, a callback may be provided which is called when the command is completed. If a `file` option is not specified, then a stream is returned. For `create`, this is a readable stream of the generated archive. For `list` and `extract` this is a writable stream that an archive should be written into. If a file is not specified, then a callback is not allowed, because you're already getting a stream to work with. `replace` and `update` only work on existing archives, and so require a `file` argument. Sync commands without a file argument return a stream that acts on its input immediately in the same tick. For readable streams, this means that all of the data is immediately available by calling `stream.read()`. For writable streams, it will be acted upon as soon as it is provided, but this can be at any time. ### Warnings and Errors Tar emits warnings and errors for recoverable and unrecoverable situations, respectively. In many cases, a warning only affects a single entry in an archive, or is simply informing you that it's modifying an entry to comply with the settings provided. Unrecoverable warnings will always raise an error (ie, emit `'error'` on streaming actions, throw for non-streaming sync actions, reject the returned Promise for non-streaming async operations, or call a provided callback with an `Error` as the first argument). 
Recoverable errors will raise an error only if `strict: true` is set in the options. Respond to (recoverable) warnings by listening to the `warn` event. Handlers receive 3 arguments: - `code` String. One of the error codes below. This may not match `data.code`, which preserves the original error code from fs and zlib. - `message` String. More details about the error. - `data` Metadata about the error. An `Error` object for errors raised by fs and zlib. All fields are attached to errors raisd by tar. Typically contains the following fields, as relevant: - `tarCode` The tar error code. - `code` Either the tar error code, or the error code set by the underlying system. - `file` The archive file being read or written. - `cwd` Working directory for creation and extraction operations. - `entry` The entry object (if it could be created) for `TAR_ENTRY_INFO`, `TAR_ENTRY_INVALID`, and `TAR_ENTRY_ERROR` warnings. - `header` The header object (if it could be created, and the entry could not be created) for `TAR_ENTRY_INFO` and `TAR_ENTRY_INVALID` warnings. - `recoverable` Boolean. If `false`, then the warning will emit an `error`, even in non-strict mode. #### Error Codes * `TAR_ENTRY_INFO` An informative error indicating that an entry is being modified, but otherwise processed normally. For example, removing `/` or `C:\` from absolute paths if `preservePaths` is not set. * `TAR_ENTRY_INVALID` An indication that a given entry is not a valid tar archive entry, and will be skipped. This occurs when: - a checksum fails, - a `linkpath` is missing for a link type, or - a `linkpath` is provided for a non-link type. If every entry in a parsed archive raises an `TAR_ENTRY_INVALID` error, then the archive is presumed to be unrecoverably broken, and `TAR_BAD_ARCHIVE` will be raised. * `TAR_ENTRY_ERROR` The entry appears to be a valid tar archive entry, but encountered an error which prevented it from being unpacked. This occurs when: - an unrecoverable fs error happens during unpacking, - an entry has `..` in the path and `preservePaths` is not set, or - an entry is extracting through a symbolic link, when `preservePaths` is not set. * `TAR_ENTRY_UNSUPPORTED` An indication that a given entry is a valid archive entry, but of a type that is unsupported, and so will be skipped in archive creation or extracting. * `TAR_ABORT` When parsing gzipped-encoded archives, the parser will abort the parse process raise a warning for any zlib errors encountered. Aborts are considered unrecoverable for both parsing and unpacking. * `TAR_BAD_ARCHIVE` The archive file is totally hosed. This can happen for a number of reasons, and always occurs at the end of a parse or extract: - An entry body was truncated before seeing the full number of bytes. - The archive contained only invalid entries, indicating that it is likely not an archive, or at least, not an archive this library can parse. `TAR_BAD_ARCHIVE` is considered informative for parse operations, but unrecoverable for extraction. Note that, if encountered at the end of an extraction, tar WILL still have extracted as much it could from the archive, so there may be some garbage files to clean up. Errors that occur deeper in the system (ie, either the filesystem or zlib) will have their error codes left intact, and a `tarCode` matching one of the above will be added to the warning metadata or the raised error object. 
Errors generated by tar will have one of the above codes set as the `error.code` field as well, but since errors originating in zlib or fs will have their original codes, it's better to read `error.tarCode` if you wish to see how tar is handling the issue. ### Examples The API mimics the `tar(1)` command line functionality, with aliases for more human-readable option and function names. The goal is that if you know how to use `tar(1)` in Unix, then you know how to use `require('tar')` in JavaScript. To replicate `tar czf my-tarball.tgz files and folders`, you'd do: ```js tar.c( { gzip: <true|gzip options>, file: 'my-tarball.tgz' }, ['some', 'files', 'and', 'folders'] ).then(_ => { .. tarball has been created .. }) ``` To replicate `tar cz files and folders > my-tarball.tgz`, you'd do: ```js tar.c( // or tar.create { gzip: <true|gzip options> }, ['some', 'files', 'and', 'folders'] ).pipe(fs.createWriteStream('my-tarball.tgz')) ``` To replicate `tar xf my-tarball.tgz` you'd do: ```js tar.x( // or tar.extract( { file: 'my-tarball.tgz' } ).then(_=> { .. tarball has been dumped in cwd .. }) ``` To replicate `cat my-tarball.tgz | tar x -C some-dir --strip=1`: ```js fs.createReadStream('my-tarball.tgz').pipe( tar.x({ strip: 1, C: 'some-dir' // alias for cwd:'some-dir', also ok }) ) ``` To replicate `tar tf my-tarball.tgz`, do this: ```js tar.t({ file: 'my-tarball.tgz', onentry: entry => { .. do whatever with it .. } }) ``` To replicate `cat my-tarball.tgz | tar t` do: ```js fs.createReadStream('my-tarball.tgz') .pipe(tar.t()) .on('entry', entry => { .. do whatever with it .. }) ``` To do anything synchronous, add `sync: true` to the options. Note that sync functions don't take a callback and don't return a promise. When the function returns, it's already done. Sync methods without a file argument return a sync stream, which flushes immediately. But, of course, it still won't be done until you `.end()` it. To filter entries, add `filter: <function>` to the options. Tar-creating methods call the filter with `filter(path, stat)`. Tar-reading methods (including extraction) call the filter with `filter(path, entry)`. The filter is called in the `this`-context of the `Pack` or `Unpack` stream object. The arguments list to `tar t` and `tar x` specify a list of filenames to extract or list, so they're equivalent to a filter that tests if the file is in the list. For those who _aren't_ fans of tar's single-character command names: ``` tar.c === tar.create tar.r === tar.replace (appends to archive, file is required) tar.u === tar.update (appends if newer, file is required) tar.x === tar.extract tar.t === tar.list ``` Keep reading for all the command descriptions and options, as well as the low-level API that they are built on. ### tar.c(options, fileList, callback) [alias: tar.create] Create a tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Write the tarball archive to the specified filename. If this is specified, then the callback will be fired when the file has been written, and a promise will be returned that resolves when the file is written. If a filename is not specified, then a Readable Stream will be returned which will emit the file data. [Alias: `f`] - `sync` Act synchronously. 
If this is set, then any provided file will be fully written after the call to `tar.c`. If this is set, and a file is not provided, then the resulting stream will already have the data ready to `read` or `emit('data')` as soon as you request it. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `mode` The mode to set on the created file archive - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. ### tar.x(options, fileList, callback) [alias: tar.extract] Extract a tarball archive. The `fileList` is an array of paths to extract from the tarball. If no paths are provided, then all the entries are extracted. If the archive is gzipped, then tar will detect this and unzip it. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. Most extraction errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then the extraction will fail completely. The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. [Alias: `C`] - `file` The archive file to extract. 
If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Create files and directories synchronously. - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. [Alias: `keep-newer`, `keep-newer-files`] - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. [Alias: `k`, `keep-existing`] - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. [Alias: `P`] - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. [Alias: `U`] - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. [Alias: `strip-components`, `stripComponents`] - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. [Alias: `p`] - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. [Alias: `m`, `no-mtime`] - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `umask` Filter the modes of entries like `process.umask()`. 
- `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync extractions. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### tar.t(options, fileList, callback) [alias: tar.list] List the contents of a tarball archive. The `fileList` is an array of paths to list from the tarball. If no paths are provided, then all the entries are listed. If the archive is gzipped, then tar will detect this and unzip it. Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. However, they don't emit `'data'` or `'end'` events. (If you want to get actual readable entries, use the `tar.Parse` class instead.) The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. [Alias: `C`] - `file` The archive file to list. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Read the specified file synchronously. (This has no effect when a file option isn't specified, because entries are emitted as fast as they are parsed from the stream anyway.) - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. This is important for when both `file` and `sync` are set, because it will be called synchronously. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noResume` By default, `entry` streams are resumed immediately after the call to `onentry`. Set `noResume: true` to suppress this behavior. Note that by opting into this, the stream will never complete until the entry data is consumed. ### tar.u(options, fileList, callback) [alias: tar.update] Add files to an archive if they are newer than the entry already in the tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. 
- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ### tar.r(options, fileList, callback) [alias: tar.replace] Add files to an existing archive. Because later entries override earlier entries, this effectively replaces any existing entries. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. 
Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ## Low-Level API ### class tar.Pack A readable tar stream. Has all the standard readable stream interface stuff. `'data'` and `'end'` events, `read()` method, `pause()` and `resume()`, etc. #### constructor(options) The following options are supported: - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. #### add(path) Adds an entry to the archive. Returns the Pack stream. #### write(path) Adds an entry to the archive. Returns true if flushed. #### end() Finishes the archive. ### class tar.Pack.Sync Synchronous version of `tar.Pack`. ### class tar.Unpack A writable stream that unpacks a tar archive onto the file system. All the normal writable stream stuff is supported. `write()` and `end()` methods, `'drain'` events, etc. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. 
`'close'` is emitted when it's done writing stuff to the file system. Most unpack errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then an error will be emitted. #### constructor(options) - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. - `win32` True if on a windows platform. Causes behavior where filenames containing `<|>?` chars are converted to windows-compatible values while being unpacked. - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. 
(To exclude items from extraction, use the `filter` option described above.) - `strict` Treat warnings as crash-worthy errors. Default false. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") ### class tar.Unpack.Sync Synchronous version of `tar.Unpack`. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync unpack streams. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### class tar.Parse A writable stream that parses a tar archive stream. All the standard writable stream stuff is supported. If the archive is gzipped, then tar will detect this and unzip it. Emits `'entry'` events with `tar.ReadEntry` objects, which are themselves readable streams that you can pipe wherever. Each `entry` will not emit until the one before it is flushed through, so make sure to either consume the data (with `on('data', ...)` or `.pipe(...)`) or throw it away with `.resume()` to keep the stream flowing. #### constructor(options) Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. The following options are supported: - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") #### abort(error) Stop all parsing activities. This is called when there are zlib errors. It also emits an unrecoverable warning with the error provided. ### class tar.ReadEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being read out of a tar archive. It has the following fields: - `extended` The extended metadata object provided to the constructor. - `globalExtended` The global extended metadata object provided to the constructor. - `remain` The number of bytes remaining to be written into the stream. - `blockRemain` The number of 512-byte blocks remaining to be written into the stream. - `ignore` Whether this entry should be ignored. - `meta` True if this represents metadata about the next entry, false if it represents a filesystem object. - All the fields from the header, extended header, and global extended header are added to the ReadEntry object. So it has `path`, `type`, `size, `mode`, and so on. #### constructor(header, extended, globalExtended) Create a new ReadEntry object with the specified header, extended header, and global extended header values. ### class tar.WriteEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being written from the file system into a tar archive. Emits data for the Header, and for the Pax Extended Header if one is required, as well as any body data. Creating a WriteEntry for a directory does not also create WriteEntry objects for all of the directory contents. It has the following fields: - `path` The path field that will be written to the archive. By default, this is also the path from the cwd to the file system object. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. 
Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `myuid` If supported, the uid of the user running the current process. - `myuser` The `env.USER` string if set, or `''`. Set as the entry `uname` field if the file's `uid` matches `this.myuid`. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly. - `strict` Treat warnings as crash-worthy errors. Default false. - `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/` and filenames containing the windows-compatible forms of `<|>?:` characters are converted to actual `<|>?:` characters in the archive. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. #### constructor(path, options) `path` is the path of the entry as it is written in the archive. The following options are supported: - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly. - `strict` Treat warnings as crash-worthy errors. Default false. - `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `umask` Set to restrict the modes on the entries in the archive, somewhat like how umask works on file creation. Defaults to `process.umask()` on unix systems, or `0o22` on Windows. #### warn(message, data) If strict, emit an error with the provided message. 
Othewise, emit a `'warn'` event with the provided message and data. ### class tar.WriteEntry.Sync Synchronous version of tar.WriteEntry ### class tar.WriteEntry.Tar A version of tar.WriteEntry that gets its data from a tar.ReadEntry instead of from the filesystem. #### constructor(readEntry, options) `readEntry` is the entry being read out of another archive. The following options are supported: - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `strict` Treat warnings as crash-worthy errors. Default false. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. ### class tar.Header A class for reading and writing header blocks. It has the following fields: - `nullBlock` True if decoding a block which is entirely composed of `0x00` null bytes. (Useful because tar files are terminated by at least 2 null blocks.) - `cksumValid` True if the checksum in the header is valid, false otherwise. - `needPax` True if the values, as encoded, will require a Pax extended header. - `path` The path of the entry. - `mode` The 4 lowest-order octal digits of the file mode. That is, read/write/execute permissions for world, group, and owner, and the setuid, setgid, and sticky bits. - `uid` Numeric user id of the file owner - `gid` Numeric group id of the file owner - `size` Size of the file in bytes - `mtime` Modified time of the file - `cksum` The checksum of the header. This is generated by adding all the bytes of the header block, treating the checksum field itself as all ascii space characters (that is, `0x20`). - `type` The human-readable name of the type of entry this represents, or the alphanumeric key if unknown. - `typeKey` The alphanumeric key for the type of entry this header represents. - `linkpath` The target of Link and SymbolicLink entries. - `uname` Human-readable user name of the file owner - `gname` Human-readable group name of the file owner - `devmaj` The major portion of the device number. Always `0` for files, directories, and links. - `devmin` The minor portion of the device number. Always `0` for files, directories, and links. - `atime` File access time. - `ctime` File change time. #### constructor(data, [offset=0]) `data` is optional. It is either a Buffer that should be interpreted as a tar Header starting at the specified offset and continuing for 512 bytes, or a data object of keys and values to set on the header object, and eventually encode as a tar Header. #### decode(block, offset) Decode the provided buffer starting at the specified offset. Buffer length must be greater than 512 bytes. #### set(data) Set the fields in the data object. #### encode(buffer, offset) Encode the header fields into the buffer at the specified offset. Returns `this.needPax` to indicate whether a Pax Extended Header is required to properly encode the specified data. 
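As a rough sketch of how these pieces fit together (assumed usage based on the fields and methods above, not an excerpt from the tar docs), a header can be built from plain values, encoded into a 512-byte block, and decoded back:

```js
const tar = require('tar')

// Build a header from a plain data object (hypothetical values).
const header = new tar.Header({
  path: 'hello.txt',
  mode: 0o644,
  size: 11,
  type: 'File',
  mtime: new Date()
})

// Encode into a 512-byte block; the return value indicates whether a Pax
// extended header would be needed to represent the data faithfully.
const block = Buffer.alloc(512)
const needPax = header.encode(block, 0)

// Decode the same block back into a new Header instance.
const roundTripped = new tar.Header(block, 0)
console.log(roundTripped.path, roundTripped.cksumValid, needPax)
```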
### class tar.Pax

An object representing a set of key-value pairs in a Pax extended header entry.

It has the following fields. Where the same name is used, they have the same semantics as the tar.Header field of the same name.

- `global` True if this represents a global extended header, or false if it is for a single entry.
- `atime`
- `charset`
- `comment`
- `ctime`
- `gid`
- `gname`
- `linkpath`
- `mtime`
- `path`
- `size`
- `uid`
- `uname`
- `dev`
- `ino`
- `nlink`

#### constructor(object, global)

Set the fields set in the object. `global` is a boolean that defaults to false.

#### encode()

Return a Buffer containing the header and body for the Pax extended header entry, or `null` if there is nothing to encode.

#### encodeBody()

Return a string representing the body of the pax extended header entry.

#### encodeField(fieldName)

Return a string representing the key/value encoding for the specified fieldName, or `''` if the field is unset.

### tar.Pax.parse(string, extended, global)

Return a new Pax object created by parsing the contents of the string provided.

If the `extended` object is set, then also add the fields from that object. (This is necessary because multiple metadata entries can occur in sequence.)

### tar.types

A translation table for the `type` field in tar headers.

#### tar.types.name.get(code)

Get the human-readable name for a given alphanumeric code.

#### tar.types.code.get(name)

Get the alphanumeric code for a given human-readable name.

# <img src="./logo.png" alt="bn.js" width="160" height="160" />

> BigNum in pure javascript

[![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js)

## Install

`npm install --save bn.js`

## Usage

```js
const BN = require('bn.js');

var a = new BN('dead', 16);
var b = new BN('101010', 2);

var res = a.add(b);
console.log(res.toString(10));  // 57047
```

**Note**: decimals are not supported in this library.

## Notation

### Prefixes

There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name:

* `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs
* `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like `mod()`. In such cases if the result will be negative - modulo will be added to the result to make it positive

### Postfixes

* `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported.
* `rn` - both argument and return value of the function are plain JavaScript Numbers. Decimals are not supported.

### Examples

* `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a`
* `a.umod(b)` - reduce `a` modulo `b`, returning positive value
* `a.iushln(13)` - shift bits of `a` left by 13

## Instructions

Prefixes/postfixes are put in parens at the end of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian).
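For a quick feel for this notation (a brief sketch, not part of the original bn.js docs), here is how a few prefixed and postfixed calls read in practice:

```js
const BN = require('bn.js')

const a = new BN('dead', 16)
const b = new BN('101010', 2)

a.iadd(b)                             // `i` prefix: in-place, `a` now holds the sum
console.log(a.umod(b).toString(10))   // `u` prefix: unsigned mod, result is non-negative
console.log(a.addn(42).toString(10))  // `n` postfix: plain JavaScript Number argument
console.log(a.toArray('le', 4))       // `le` endian: little-endian byte array, zero padded
```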
### Utilities

* `a.clone()` - clone number
* `a.toString(base, length)` - convert to base-string and pad with zeroes
* `a.toNumber()` - convert to Javascript Number (limited to 53 bits)
* `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`)
* `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding
* `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array`
* `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)`
* `a.bitLength()` - get number of bits occupied
* `a.zeroBits()` - return number of less-significant consequent zero bits (example: `1010000` has 4 zero bits)
* `a.byteLength()` - return number of bytes occupied
* `a.isNeg()` - true if the number is negative
* `a.isEven()` - no comments
* `a.isOdd()` - no comments
* `a.isZero()` - no comments
* `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`)
* `a.lt(b)` - `a` less than `b` (`n`)
* `a.lte(b)` - `a` less than or equals `b` (`n`)
* `a.gt(b)` - `a` greater than `b` (`n`)
* `a.gte(b)` - `a` greater than or equals `b` (`n`)
* `a.eq(b)` - `a` equals `b` (`n`)
* `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width
* `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width
* `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance
* `BN.max(a, b)` - return `a` if `a` bigger than `b`
* `BN.min(a, b)` - return `a` if `a` less than `b`

### Arithmetics

* `a.neg()` - negate sign (`i`)
* `a.abs()` - absolute value (`i`)
* `a.add(b)` - addition (`i`, `n`, `in`)
* `a.sub(b)` - subtraction (`i`, `n`, `in`)
* `a.mul(b)` - multiply (`i`, `n`, `in`)
* `a.sqr()` - square (`i`)
* `a.pow(b)` - raise `a` to the power of `b`
* `a.div(b)` - divide (`divn`, `idivn`)
* `a.mod(b)` - reduce (`u`, `n`) (but no `umodn`)
* `a.divmod(b)` - quotient and modulus obtained by dividing
* `a.divRound(b)` - rounded division

### Bit operations

* `a.or(b)` - or (`i`, `u`, `iu`)
* `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future)
* `a.xor(b)` - xor (`i`, `u`, `iu`)
* `a.setn(b, value)` - set specified bit to `value`
* `a.shln(b)` - shift left (`i`, `u`, `iu`)
* `a.shrn(b)` - shift right (`i`, `u`, `iu`)
* `a.testn(b)` - test if specified bit is set
* `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`)
* `a.bincn(b)` - add `1 << b` to the number
* `a.notn(w)` - not (for the width specified by `w`) (`i`)

### Reduction

* `a.gcd(b)` - GCD
* `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., gcd: ... }`)
* `a.invm(b)` - inverse `a` modulo `b`

## Fast reduction

When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using a special algorithm for [Mersenne Prime][1].

### Reduction context

To enable these tricks, one should create a reduction context:

```js
var red = BN.red(num);
```

where `num` is just a BN instance.

Or:

```js
var red = BN.red(primeName);
```

Where `primeName` is either of these [Mersenne Primes][1]:

* `'k256'`
* `'p224'`
* `'p192'`
* `'p25519'`

Or:

```js
var red = BN.mont(num);
```

To reduce numbers with [Montgomery trick][0].
`.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`. ### Converting numbers Before performing anything in reduction context - numbers should be converted to it. Usually, this means that one should: * Convert inputs to reducted ones * Operate on them in reduction context * Convert outputs back from the reduction context Here is how one may convert numbers to `red`: ```js var redA = a.toRed(red); ``` Where `red` is a reduction context created using instructions above Here is how to convert them back: ```js var a = redA.fromRed(); ``` ### Red instructions Most of the instructions from the very start of this readme have their counterparts in red context: * `a.redAdd(b)`, `a.redIAdd(b)` * `a.redSub(b)`, `a.redISub(b)` * `a.redShl(num)` * `a.redMul(b)`, `a.redIMul(b)` * `a.redSqr()`, `a.redISqr()` * `a.redSqrt()` - square root modulo reduction context's prime * `a.redInvm()` - modular inverse of the number * `a.redNeg()` * `a.redPow(b)` - modular exponentiation ### Number Size Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers. ## LICENSE This software is licensed under the MIT License. [0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication [1]: https://en.wikipedia.org/wiki/Mersenne_prime # which-module > Find the module object for something that was require()d [![Build Status](https://travis-ci.org/nexdrew/which-module.svg?branch=master)](https://travis-ci.org/nexdrew/which-module) [![Coverage Status](https://coveralls.io/repos/github/nexdrew/which-module/badge.svg?branch=master)](https://coveralls.io/github/nexdrew/which-module?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) Find the `module` object in `require.cache` for something that was `require()`d or `import`ed - essentially a reverse `require()` lookup. Useful for libs that want to e.g. lookup a filename for a module or submodule that it did not `require()` itself. ## Install and Usage ``` npm install --save which-module ``` ```js const whichModule = require('which-module') console.log(whichModule(require('something'))) // Module { // id: '/path/to/project/node_modules/something/index.js', // exports: [Function], // parent: ..., // filename: '/path/to/project/node_modules/something/index.js', // loaded: true, // children: [], // paths: [ '/path/to/project/node_modules/something/node_modules', // '/path/to/project/node_modules', // '/path/to/node_modules', // '/path/node_modules', // '/node_modules' ] } ``` ## API ### `whichModule(exported)` Return the [`module` object](https://nodejs.org/api/modules.html#modules_the_module_object), if any, that represents the given argument in the `require.cache`. `exported` can be anything that was previously `require()`d or `import`ed as a module, submodule, or dependency - which means `exported` is identical to the `module.exports` returned by this method. If `exported` did not come from the `exports` of a `module` in `require.cache`, then this method returns `null`. ## License ISC © Contributors long.js ======= A Long class for representing a 64 bit two's-complement integer value derived from the [Closure Library](https://github.com/google/closure-library) for stand-alone use and extended with unsigned support. 
[![Build Status](https://travis-ci.org/dcodeIO/long.js.svg)](https://travis-ci.org/dcodeIO/long.js) Background ---------- As of [ECMA-262 5th Edition](http://ecma262-5.com/ELS5_HTML.htm#Section_8.5), "all the positive and negative integers whose magnitude is no greater than 2<sup>53</sup> are representable in the Number type", which is "representing the doubleprecision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic". The [maximum safe integer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) in JavaScript is 2<sup>53</sup>-1. Example: 2<sup>64</sup>-1 is 1844674407370955**1615** but in JavaScript it evaluates to 1844674407370955**2000**. Furthermore, bitwise operators in JavaScript "deal only with integers in the range −2<sup>31</sup> through 2<sup>31</sup>−1, inclusive, or in the range 0 through 2<sup>32</sup>−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2<sup>32</sup> integer values." In some use cases, however, it is required to be able to reliably work with and perform bitwise operations on the full 64 bits. This is where long.js comes into play. Usage ----- The class is compatible with CommonJS and AMD loaders and is exposed globally as `Long` if neither is available. ```javascript var Long = require("long"); var longVal = new Long(0xFFFFFFFF, 0x7FFFFFFF); console.log(longVal.toString()); ... ``` API --- ### Constructor * new **Long**(low: `number`, high: `number`, unsigned?: `boolean`)<br /> Constructs a 64 bit two's-complement integer, given its low and high 32 bit values as *signed* integers. See the from* functions below for more convenient ways of constructing Longs. ### Fields * Long#**low**: `number`<br /> The low 32 bits as a signed value. * Long#**high**: `number`<br /> The high 32 bits as a signed value. * Long#**unsigned**: `boolean`<br /> Whether unsigned or not. ### Constants * Long.**ZERO**: `Long`<br /> Signed zero. * Long.**ONE**: `Long`<br /> Signed one. * Long.**NEG_ONE**: `Long`<br /> Signed negative one. * Long.**UZERO**: `Long`<br /> Unsigned zero. * Long.**UONE**: `Long`<br /> Unsigned one. * Long.**MAX_VALUE**: `Long`<br /> Maximum signed value. * Long.**MIN_VALUE**: `Long`<br /> Minimum signed value. * Long.**MAX_UNSIGNED_VALUE**: `Long`<br /> Maximum unsigned value. ### Utility * Long.**isLong**(obj: `*`): `boolean`<br /> Tests if the specified object is a Long. * Long.**fromBits**(lowBits: `number`, highBits: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the 64 bit integer that comes by concatenating the given low and high bits. Each is assumed to use 32 bits. * Long.**fromBytes**(bytes: `number[]`, unsigned?: `boolean`, le?: `boolean`): `Long`<br /> Creates a Long from its byte representation. * Long.**fromBytesLE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its little endian byte representation. * Long.**fromBytesBE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its big endian byte representation. * Long.**fromInt**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given 32 bit integer value. * Long.**fromNumber**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given value, provided that it is a finite number. Otherwise, zero is returned. 
* Long.**fromString**(str: `string`, unsigned?: `boolean`, radix?: `number`)<br /> Long.**fromString**(str: `string`, radix: `number`)<br /> Returns a Long representation of the given string, written using the specified radix. * Long.**fromValue**(val: `*`, unsigned?: `boolean`): `Long`<br /> Converts the specified value to a Long using the appropriate from* function for its type. ### Methods * Long#**add**(addend: `Long | number | string`): `Long`<br /> Returns the sum of this and the specified Long. * Long#**and**(other: `Long | number | string`): `Long`<br /> Returns the bitwise AND of this Long and the specified. * Long#**compare**/**comp**(other: `Long | number | string`): `number`<br /> Compares this Long's value with the specified's. Returns `0` if they are the same, `1` if the this is greater and `-1` if the given one is greater. * Long#**divide**/**div**(divisor: `Long | number | string`): `Long`<br /> Returns this Long divided by the specified. * Long#**equals**/**eq**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value equals the specified's. * Long#**getHighBits**(): `number`<br /> Gets the high 32 bits as a signed integer. * Long#**getHighBitsUnsigned**(): `number`<br /> Gets the high 32 bits as an unsigned integer. * Long#**getLowBits**(): `number`<br /> Gets the low 32 bits as a signed integer. * Long#**getLowBitsUnsigned**(): `number`<br /> Gets the low 32 bits as an unsigned integer. * Long#**getNumBitsAbs**(): `number`<br /> Gets the number of bits needed to represent the absolute value of this Long. * Long#**greaterThan**/**gt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than the specified's. * Long#**greaterThanOrEqual**/**gte**/**ge**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than or equal the specified's. * Long#**isEven**(): `boolean`<br /> Tests if this Long's value is even. * Long#**isNegative**(): `boolean`<br /> Tests if this Long's value is negative. * Long#**isOdd**(): `boolean`<br /> Tests if this Long's value is odd. * Long#**isPositive**(): `boolean`<br /> Tests if this Long's value is positive. * Long#**isZero**/**eqz**(): `boolean`<br /> Tests if this Long's value equals zero. * Long#**lessThan**/**lt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than the specified's. * Long#**lessThanOrEqual**/**lte**/**le**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than or equal the specified's. * Long#**modulo**/**mod**/**rem**(divisor: `Long | number | string`): `Long`<br /> Returns this Long modulo the specified. * Long#**multiply**/**mul**(multiplier: `Long | number | string`): `Long`<br /> Returns the product of this and the specified Long. * Long#**negate**/**neg**(): `Long`<br /> Negates this Long's value. * Long#**not**(): `Long`<br /> Returns the bitwise NOT of this Long. * Long#**notEquals**/**neq**/**ne**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value differs from the specified's. * Long#**or**(other: `Long | number | string`): `Long`<br /> Returns the bitwise OR of this Long and the specified. * Long#**shiftLeft**/**shl**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits shifted to the left by the given amount. * Long#**shiftRight**/**shr**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits arithmetically shifted to the right by the given amount. 
* Long#**shiftRightUnsigned**/**shru**/**shr_u**(numBits: `Long | number | string`): `Long`<br />
  Returns this Long with bits logically shifted to the right by the given amount.
* Long#**subtract**/**sub**(subtrahend: `Long | number | string`): `Long`<br />
  Returns the difference of this and the specified Long.
* Long#**toBytes**(le?: `boolean`): `number[]`<br />
  Converts this Long to its byte representation.
* Long#**toBytesLE**(): `number[]`<br />
  Converts this Long to its little endian byte representation.
* Long#**toBytesBE**(): `number[]`<br />
  Converts this Long to its big endian byte representation.
* Long#**toInt**(): `number`<br />
  Converts the Long to a 32 bit integer, assuming it is a 32 bit integer.
* Long#**toNumber**(): `number`<br />
  Converts the Long to the nearest floating-point representation of this value (double, 53 bit mantissa).
* Long#**toSigned**(): `Long`<br />
  Converts this Long to signed.
* Long#**toString**(radix?: `number`): `string`<br />
  Converts the Long to a string written in the specified radix.
* Long#**toUnsigned**(): `Long`<br />
  Converts this Long to unsigned.
* Long#**xor**(other: `Long | number | string`): `Long`<br />
  Returns the bitwise XOR of this Long and the given one.

Building
--------

To build a UMD bundle to `dist/long.js`, run:

```
$> npm install
$> npm run build
```

Running the [tests](./tests):

```
$> npm test
```

bs58
====

[![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58)

JavaScript component to compute base 58 encoding. This encoding is typically used for crypto currencies such as Bitcoin.

**Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library.

Install
-------

    npm i --save bs58

API
---

### encode(input)

`input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`.

**example**:

```js
const bs58 = require('bs58')

const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex')
const address = bs58.encode(bytes)
console.log(address)
// => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS
```

### decode(input)

`input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html).

**example**:

```js
const bs58 = require('bs58')

const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS'
const bytes = bs58.decode(address)
console.log(bytes.toString('hex'))
// => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187
```

Hack / Test
-----------

Uses JavaScript standard style.
Read more: [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Credits ------- - [Mike Hearn](https://github.com/mikehearn) for original Java implementation - [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript - [Stephan Pair](https://github.com/gasteve) for buffer improvements - [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib - [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency License ------- MIT # cliui ![ci](https://github.com/yargs/cliui/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/cliui) easily create complex multi-column command-line-interfaces. ## Example ```js const ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` ## Deno/ESM Support As of `v7` `cliui` supports [Deno](https://github.com/denoland/deno) and [ESM](https://nodejs.org/api/esm.html#esm_ecmascript_modules): ```typescript import cliui from "https://deno.land/x/cliui/deno.ts"; const ui = cliui({}) ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div({ text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. 
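As a small sketch tying `span()` and `resetOutput()` together (the widths and text here are arbitrary):

```js
const ui = require('cliui')({ width: 40, wrap: true })

// span(): the next row is appended to this one, without a new line
ui.span({ text: 'Usage:', width: 10 })
ui.div('node app.js [options]')

console.log(ui.toString())

// resetOutput(): clears the rows but keeps the width/wrap settings
ui.resetOutput()
ui.div('A fresh layout on the same instance')
console.log(ui.toString())
```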
# require-main-filename [![Build Status](https://travis-ci.org/yargs/require-main-filename.png)](https://travis-ci.org/yargs/require-main-filename) [![Coverage Status](https://coveralls.io/repos/yargs/require-main-filename/badge.svg?branch=master)](https://coveralls.io/r/yargs/require-main-filename?branch=master) [![NPM version](https://img.shields.io/npm/v/require-main-filename.svg)](https://www.npmjs.com/package/require-main-filename) `require.main.filename` is great for figuring out the entry point for the current application. This can be combined with a module like [pkg-conf](https://www.npmjs.com/package/pkg-conf) to, _as if by magic_, load top-level configuration. Unfortunately, `require.main.filename` sometimes fails when an application is executed with an alternative process manager, e.g., [iisnode](https://github.com/tjanczuk/iisnode). `require-main-filename` is a shim that addresses this problem. ## Usage ```js var main = require('require-main-filename')() // use main as an alternative to require.main.filename. ``` ## License ISC # color-convert [![Build Status](https://travis-ci.org/Qix-/color-convert.svg?branch=master)](https://travis-ci.org/Qix-/color-convert) Color-convert is a color conversion library for JavaScript and node. It converts all ways between `rgb`, `hsl`, `hsv`, `hwb`, `cmyk`, `ansi`, `ansi16`, `hex` strings, and CSS `keyword`s (will round to closest): ```js var convert = require('color-convert'); convert.rgb.hsl(140, 200, 100); // [96, 48, 59] convert.keyword.rgb('blue'); // [0, 0, 255] var rgbChannels = convert.rgb.channels; // 3 var cmykChannels = convert.cmyk.channels; // 4 var ansiChannels = convert.ansi16.channels; // 1 ``` # Install ```console $ npm install color-convert ``` # API Simply get the property of the _from_ and _to_ conversion that you're looking for. All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on `.raw` to the function. All 'from' functions have a hidden property called `.channels` that indicates the number of channels the function expects (not including alpha). ```js var convert = require('color-convert'); // Hex to LAB convert.hex.lab('DEADBF'); // [ 76, 21, -2 ] convert.hex.lab.raw('DEADBF'); // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ] // RGB to CMYK convert.rgb.cmyk(167, 255, 4); // [ 35, 0, 98, 0 ] convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ] ``` ### Arrays All functions that accept multiple arguments also support passing an array. Note that this does **not** apply to functions that convert from a color that only requires one value (e.g. `keyword`, `ansi256`, `hex`, etc.) ```js var convert = require('color-convert'); convert.rgb.hex(123, 45, 67); // '7B2D43' convert.rgb.hex([123, 45, 67]); // '7B2D43' ``` ## Routing Conversions that don't have an _explicitly_ defined conversion (in [conversions.js](conversions.js)), but can be converted by means of sub-conversions (e.g. XYZ -> **RGB** -> CMYK), are automatically routed together. This allows just about any color model supported by `color-convert` to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> **LAB** -> **XYZ** -> **RGB** -> Hex). Keep in mind that extensive conversions _may_ result in a loss of precision, and exist only to be complete. 
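As a hedged sketch of routing in practice, assuming `keyword` → `cmyk` has no direct definition and is therefore routed through `rgb`:

```js
var convert = require('color-convert');

// keyword -> rgb and rgb -> cmyk are direct conversions, so
// keyword -> cmyk is composed (routed) automatically.
console.log(convert.keyword.cmyk('blue'));     // [ 100, 100, 0, 0 ]
console.log(convert.keyword.cmyk.raw('blue')); // unrounded variant
```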
For a list of "direct" (single-step) conversions, see [conversions.js](conversions.js). # Contribute If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request. # License Copyright &copy; 2011-2016, Heather Arthur and Josh Junon. Licensed under the [MIT License](LICENSE). ## Setting up your terminal The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. It uses the following setup: ```sh # set your terminal up to have 2 windows, A and B like this: ┌─────────────────────────────────┬─────────────────────────────────┐ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ └─────────────────────────────────┴─────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _helper scripts_ ```sh 1.dev-deploy.sh # helper: build and deploy contracts 2.use-contract.sh # helper: call methods on ContractPromise 3.cleanup.sh # helper: delete build and deploy artifacts ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linuo-watch-command#191068) # once Only call a function once. ## usage ```javascript var once = require('once') function load (file, cb) { cb = once(cb) loader.load('file') loader.once('load', cb) loader.once('error', cb) } ``` Or add to the Function.prototype in a responsible way: ```javascript // only has to be done once require('once').proto() function load (file, cb) { cb = cb.once() loader.load('file') loader.once('load', cb) loader.once('error', cb) } ``` Ironically, the prototype feature makes this module twice as complicated as necessary. To check whether you function has been called, use `fn.called`. Once the function is called for the first time the return value of the original function is saved in `fn.value` and subsequent calls will continue to return this value. ```javascript var once = require('once') function load (cb) { cb = once(cb) var stream = createStream() stream.once('data', cb) stream.once('end', function () { if (!cb.called) cb(new Error('not found')) }) } ``` ## `once.strict(func)` Throw an error if the function is called twice. Some functions are expected to be called only once. Using `once` for them would potentially hide logical errors. 
In the example below, the `greet` function has to call the callback only once:

```javascript
function greet (name, cb) {
  // return is missing from the if statement
  // when no name is passed, the callback is called twice
  if (!name) cb('Hello anonymous')
  cb('Hello ' + name)
}

function log (msg) {
  console.log(msg)
}

// this will print 'Hello anonymous' but the logical error will be missed
greet(null, once(log))

// once.strict will print 'Hello anonymous' and throw an error when the callback is called the second time
greet(null, once.strict(log))
```

Compiler frontend for node.js
=============================

Usage
-----

For an up to date list of available command line options, see:

```
$> asc --help
```

API
---

The API accepts the same options as the CLI but also lets you override stdout and stderr and/or provide a callback. Example:

```js
const asc = require("assemblyscript/cli/asc");
asc.ready.then(() => {
  asc.main([
    "myModule.ts",
    "--binaryFile", "myModule.wasm",
    "--optimize",
    "--sourceMap",
    "--measure"
  ], {
    stdout: process.stdout,
    stderr: process.stderr
  }, function(err) {
    if (err)
      throw err;
    ...
  });
});
```

Available command line options can also be obtained programmatically:

```js
const options = require("assemblyscript/cli/asc.json");
...
```

You can also compile a source string directly, for example in a browser environment:

```js
const asc = require("assemblyscript/cli/asc");
asc.ready.then(() => {
  const { binary, text, stdout, stderr } = asc.compileString(`...`, { optimize: 2 });
});
...
```

Like `chown -R`. Takes the same arguments as `fs.chown()`

# Regular Expression Tokenizer

Tokenizes strings that represent regular expressions.

[![Build Status](https://secure.travis-ci.org/fent/ret.js.svg)](http://travis-ci.org/fent/ret.js) [![Dependency Status](https://david-dm.org/fent/ret.js.svg)](https://david-dm.org/fent/ret.js) [![codecov](https://codecov.io/gh/fent/ret.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/ret.js)

# Usage

```js
var ret = require('ret');

var tokens = ret(/foo|bar/.source);
```

`tokens` will contain the following object

```js
{
  "type": ret.types.ROOT,
  "options": [
    [
      { "type": ret.types.CHAR, "value": 102 },
      { "type": ret.types.CHAR, "value": 111 },
      { "type": ret.types.CHAR, "value": 111 }
    ],
    [
      { "type": ret.types.CHAR, "value": 98 },
      { "type": ret.types.CHAR, "value": 97 },
      { "type": ret.types.CHAR, "value": 114 }
    ]
  ]
}
```

# Token Types

`ret.types` is a collection of the various token types exported by ret.

### ROOT

Only used in the root of the regexp. This is needed due to the possibility of the root containing a pipe `|` character. In that case, the token will have an `options` key that will be an array of arrays of tokens. If not, it will contain a `stack` key that is an array of tokens.

```js
{
  "type": ret.types.ROOT,
  "stack": [token1, token2...],
}
```

```js
{
  "type": ret.types.ROOT,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ],
}
```

### GROUP

Groups contain tokens that are inside of a parenthesis. If the group begins with `?` followed by another character, it's a special type of group. A ':' tells the group not to be remembered when `exec` is used. '=' means the previous token matches only if followed by this group, and '!' means the previous token matches only if NOT followed.

Like root, it can contain an `options` key instead of `stack` if there is a pipe.
```js
{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "stack": [token1, token2...],
}
```

```js
{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ],
}
```

### POSITION

`\b`, `\B`, `^`, and `$` specify positions in the regexp.

```js
{
  "type": ret.types.POSITION,
  "value": "^",
}
```

### SET

Contains a key `set` specifying what tokens are allowed and a key `not` specifying if the set should be negated. A set can contain other sets, ranges, and characters.

```js
{
  "type": ret.types.SET,
  "set": [token1, token2...],
  "not": false,
}
```

### RANGE

Used in set tokens to specify a character range. `from` and `to` are character codes.

```js
{
  "type": ret.types.RANGE,
  "from": 97,
  "to": 122,
}
```

### REPETITION

```js
{
  "type": ret.types.REPETITION,
  "min": 0,
  "max": Infinity,
  "value": token,
}
```

### REFERENCE

References a group token. `value` is 1-9.

```js
{
  "type": ret.types.REFERENCE,
  "value": 1,
}
```

### CHAR

Represents a single character token. `value` is the character code. This might seem a bit cluttered compared to concatenating characters together, but since repetition tokens only repeat the last token (and not the last clause, like the pipe), it's simpler to do it this way.

```js
{
  "type": ret.types.CHAR,
  "value": 123,
}
```

## Errors

ret.js will throw errors if given a string with an invalid regular expression. All possible errors are

* Invalid group. When a group with an immediate `?` character is followed by an invalid character. It can only be followed by `!`, `=`, or `:`. Example: `/(?_abc)/`
* Nothing to repeat. Thrown when a repetitional token is used as the first token in the current clause, such as right at the beginning of the regexp or group, or right after a pipe. Example: `/foo|?bar/`, `/{1,3}foo|bar/`, `/foo(+bar)/`
* Unmatched ). A group was not opened, but was closed. Example: `/hello)2u/`
* Unterminated group. A group was not closed. Example: `/(1(23)4/`
* Unterminated character class. A custom character set was not closed. Example: `/[abc/`

# Install

    npm install ret

# Tests

Tests are written with [vows](http://vowsjs.org/)

```bash
npm test
```

# License

MIT

# safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url]

[travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg
[travis-url]: https://travis-ci.org/feross/safe-buffer
[npm-image]: https://img.shields.io/npm/v/safe-buffer.svg
[npm-url]: https://npmjs.org/package/safe-buffer
[downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg
[downloads-url]: https://npmjs.org/package/safe-buffer
[standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg
[standard-url]: https://standardjs.com

#### Safer Node.js Buffer API

**Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.**

**Uses the built-in implementation when available.**

## install

```
npm install safe-buffer
```

## usage

The goal of this package is to provide a safe replacement for the node.js `Buffer`.

It's a drop-in replacement for `Buffer`.
You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. ```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. 
```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. 
The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. 
It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. #### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. 
**When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. 
## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) Shims used when bundling asc for browser usage. # debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows note On Windows the environment variable is set using the `set` command. 
```cmd set DEBUG=*,-not_this ``` Note that PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Then, run the program to be debugged as usual. ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see when the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well, the "+NNNms" will show you how much time was spent between calls. <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debuggers you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. 
| | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. | ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. 
[[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img 
src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # `near-sdk-as` Starter Kit This is a good project to use as a starting point for your AssemblyScript project. ## Samples This repository includes a complete project structure for AssemblyScript contracts targeting the NEAR platform. The example here is very basic. It's a simple contract demonstrating the following concepts: - a single contract - the difference between `view` vs. 
`change` methods - basic contract storage There are 2 AssemblyScript contracts in this project, each in their own folder: - **simple** in the `src/simple` folder - **singleton** in the `src/singleton` folder ### Simple We say that an AssemblyScript contract is written in the "simple style" when the `index.ts` file (the contract entry point) includes a series of exported functions. In this case, all exported functions become public contract methods. ```ts // return the string 'hello world' export function helloWorld(): string {} // read the given key from account (contract) storage export function read(key: string): string {} // write the given value at the given key to account (contract) storage export function write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} ``` ### Singleton We say that an AssemblyScript contract is written in the "singleton style" when the `index.ts` file (the contract entry point) has a single exported class (the name of the class doesn't matter) that is decorated with `@nearBindgen`. In this case, all methods on the class become public contract methods unless marked `private`. Also, all instance variables are stored as a serialized instance of the class under a special storage key named `STATE`. AssemblyScript uses JSON for storage serialization (as opposed to Rust contracts which use a custom binary serialization format called borsh). ```ts @nearBindgen export class Contract { // return the string 'hello world' helloWorld(): string {} // read the given key from account (contract) storage read(key: string): string {} // write the given value at the given key to account (contract) storage @mutateState() write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} } ``` ## Usage ### Getting started (see below for video recordings of each of the following steps) INSTALL `NEAR CLI` first like this: `npm i -g near-cli` 1. clone this repo to a local folder 2. run `yarn` 3. run `./scripts/1.dev-deploy.sh` 3. run `./scripts/2.use-contract.sh` 4. run `./scripts/2.use-contract.sh` (yes, run it to see changes) 5. run `./scripts/3.cleanup.sh` ### Videos **`1.dev-deploy.sh`** This video shows the build and deployment of the contract. [![asciicast](https://asciinema.org/a/409575.svg)](https://asciinema.org/a/409575) **`2.use-contract.sh`** This video shows contract methods being called. You should run the script twice to see the effect it has on contract state. [![asciicast](https://asciinema.org/a/409577.svg)](https://asciinema.org/a/409577) **`3.cleanup.sh`** This video shows the cleanup script running. Make sure you add the `BENEFICIARY` environment variable. The script will remind you if you forget. 
```sh export BENEFICIARY=<your-account-here> # this account receives contract account balance ``` [![asciicast](https://asciinema.org/a/409580.svg)](https://asciinema.org/a/409580) ### Other documentation - See `./scripts/README.md` for documentation about the scripts - Watch this video where Willem Wyndham walks us through refactoring a simple example of a NEAR smart contract written in AssemblyScript https://youtu.be/QP7aveSqRPo ``` There are 2 "styles" of implementing AssemblyScript NEAR contracts: - the contract interface can either be a collection of exported functions - or the contract interface can be the methods of a an exported class We call the second style "Singleton" because there is only one instance of the class which is serialized to the blockchain storage. Rust contracts written for NEAR do this by default with the contract struct. 0:00 noise (to cut) 0:10 Welcome 0:59 Create project starting with "npm init" 2:20 Customize the project for AssemblyScript development 9:25 Import the Counter example and get unit tests passing 18:30 Adapt the Counter example to a Singleton style contract 21:49 Refactoring unit tests to access the new methods 24:45 Review and summary ``` ## The file system ```sh ├── README.md # this file ├── as-pect.config.js # configuration for as-pect (AssemblyScript unit testing) ├── asconfig.json # configuration for AssemblyScript compiler (supports multiple contracts) ├── package.json # NodeJS project manifest ├── scripts │   ├── 1.dev-deploy.sh # helper: build and deploy contracts │   ├── 2.use-contract.sh # helper: call methods on ContractPromise │   ├── 3.cleanup.sh # helper: delete build and deploy artifacts │   └── README.md # documentation for helper scripts ├── src │   ├── as_types.d.ts # AssemblyScript headers for type hints │   ├── simple # Contract 1: "Simple example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 1 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 1 │   ├── singleton # Contract 2: "Singleton-style example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 2 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 2 │   ├── tsconfig.json # Typescript configuration │   └── utils.ts # common contract utility functions └── yarn.lock # project manifest version lock ``` You may clone this repo to get started OR create everything from scratch. Please note that, in order to create the AssemblyScript and tests folder structure, you may use the command `asp --init` which will create the following folders and files: ``` ./assembly/ ./assembly/tests/ ./assembly/tests/example.spec.ts ./assembly/tests/as-pect.d.ts ``` # near-sdk-core This package contain a convenient interface for interacting with NEAR's host runtime. To see the functions that are provided by the host node see [`env.ts`](./assembly/env/env.ts). 
# get-caller-file

[![Build Status](https://travis-ci.org/stefanpenner/get-caller-file.svg?branch=master)](https://travis-ci.org/stefanpenner/get-caller-file)
[![Build status](https://ci.appveyor.com/api/projects/status/ol2q94g1932cy14a/branch/master?svg=true)](https://ci.appveyor.com/project/embercli/get-caller-file/branch/master)

This is a utility that allows a function to figure out from which file it was invoked. It does so by inspecting v8's stack trace at the time it is invoked.

Inspired by http://stackoverflow.com/questions/13227489

*note: this relies on Node/V8 specific APIs, so other runtimes may not work.*

## Installation

```bash
yarn add get-caller-file
```

## Usage

Given:

```js
// ./foo.js
const getCallerFile = require('get-caller-file');

module.exports = function() {
  return getCallerFile(); // figures out who called it
};
```

```js
// index.js
const foo = require('./foo');

foo() // => /full/path/to/this/file/index.js
```

## Options

* `getCallerFile(position = 2)`: where `position` is the stack frame whose fileName we want.
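The `position` argument walks up the stack: judging from the usage example above, the default of `2` is the file that called the function which invoked `getCallerFile()`, and larger values move further up. A minimal sketch under that assumption (the `whoCalledMe`/`whoCalledMyCaller` helpers are hypothetical, not part of this package):

```js
// where.js – hypothetical helpers built on get-caller-file
const getCallerFile = require('get-caller-file');

// Reports the file that called this helper (the default, position 2).
function whoCalledMe() {
  return getCallerFile();
}

// Reports the file one level further up the stack: the caller's caller (position 3).
function whoCalledMyCaller() {
  return getCallerFile(3);
}

module.exports = { whoCalledMe, whoCalledMyCaller };
```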
# [nearley](http://nearley.js.org) ↗️

[![JS.ORG](https://img.shields.io/badge/js.org-nearley-ffb400.svg?style=flat-square)](http://js.org)
[![npm version](https://badge.fury.io/js/nearley.svg)](https://badge.fury.io/js/nearley)

nearley is a simple, fast and powerful parsing toolkit. It consists of:

1. [A powerful, modular DSL for describing languages](https://nearley.js.org/docs/grammar)
2. [An efficient, lightweight Earley parser](https://nearley.js.org/docs/parser)
3. [Loads of tools, editor plug-ins, and other goodies!](https://nearley.js.org/docs/tooling)

nearley is a **streaming** parser with support for catching **errors** gracefully and providing _all_ parsings for **ambiguous** grammars. It is compatible with a variety of **lexers** (we recommend [moo](http://github.com/tjvr/moo)). It comes with tools for creating **tests**, **railroad diagrams** and **fuzzers** from your grammars, and has support for a variety of editors and platforms. It works in both node and the browser.

Unlike most other parser generators, nearley can handle *any* grammar you can define in BNF (and more!). In particular, while most existing JS parsers such as PEGjs and Jison choke on certain grammars (e.g. [left recursive ones](http://en.wikipedia.org/wiki/Left_recursion)), nearley handles them easily and efficiently by using the [Earley parsing algorithm](https://en.wikipedia.org/wiki/Earley_parser).

nearley is used by a wide variety of projects:

- [artificial intelligence](https://github.com/ChalmersGU-AI-course/shrdlite-course-project) and [computational linguistics](https://wiki.eecs.yorku.ca/course_archive/2014-15/W/6339/useful_handouts) classes at universities;
- [file format parsers](https://github.com/raymond-h/node-dmi);
- [data-driven markup languages](https://github.com/idyll-lang/idyll-compiler);
- [compilers for real-world programming languages](https://github.com/sizigi/lp5562);
- and nearley itself! The nearley compiler is bootstrapped.

nearley is an npm [staff pick](https://www.npmjs.com/package/npm-collection-staff-picks).

## Documentation

Please visit our website https://nearley.js.org to get started! You will find a tutorial, detailed reference documents, and links to several real-world examples to get inspired.

## Contributing

Please read [this document](.github/CONTRIBUTING.md) *before* working on nearley. If you are interested in contributing but unsure where to start, take a look at the issues labeled "up for grabs" on the issue tracker, or message a maintainer (@kach or @tjvr on Github).

nearley is MIT licensed.

A big thanks to Nathan Dinsmore for teaching me how to Earley, Aria Stewart for helping structure nearley into a mature module, and Robin Windels for bootstrapping the grammar. Additionally, Jacob Edelman wrote an experimental JavaScript parser with nearley and contributed ideas for EBNF support. Joshua T. Corbin refactored the compiler to be much, much prettier. Bojidar Marinov implemented postprocessors-in-other-languages. Shachar Itzhaky fixed a subtle bug with nullables.

## Citing nearley

If you are citing nearley in academic work, please use the following BibTeX entry.
```bibtex @misc{nearley, author = "Kartik Chandra and Tim Radvan", title = "{nearley}: a parsing toolkit for {JavaScript}", year = {2014}, doi = {10.5281/zenodo.3897993}, url = {https://github.com/kach/nearley} } ``` # cliui [![Build Status](https://travis-ci.org/yargs/cliui.svg)](https://travis-ci.org/yargs/cliui) [![Coverage Status](https://coveralls.io/repos/yargs/cliui/badge.svg?branch=)](https://coveralls.io/r/yargs/cliui?branch=) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) easily create complex multi-column command-line-interfaces. ## Example ```js var ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 2, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. # Glob Match files using the patterns the shell uses, like stars and stuff. [![Build Status](https://travis-ci.org/isaacs/node-glob.svg?branch=master)](https://travis-ci.org/isaacs/node-glob/) [![Build Status](https://ci.appveyor.com/api/projects/status/kd7f3yftf7unxlsx?svg=true)](https://ci.appveyor.com/project/isaacs/node-glob) [![Coverage Status](https://coveralls.io/repos/isaacs/node-glob/badge.svg?branch=master&service=github)](https://coveralls.io/github/isaacs/node-glob?branch=master) This is a glob implementation in JavaScript. It uses the `minimatch` library to do its matching. 
![](logo/glob.png) ## Usage Install with npm ``` npm i glob ``` ```javascript var glob = require("glob") // options is optional glob("**/*.js", options, function (er, files) { // files is an array of filenames. // If the `nonull` option is set, and nothing // was found, then files is ["**/*.js"] // er is an error object or null. }) ``` ## Glob Primer "Globs" are the patterns you type when you do stuff like `ls *.js` on the command line, or put `build/*` in a `.gitignore` file. Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with `{` and end with `}`, with any number of comma-delimited sections within. Braced sections may contain slash characters, so `a{/b/c,bcd}` would expand into `a/b/c` and `abcd`. The following characters have special magic meaning when used in a path portion: * `*` Matches 0 or more characters in a single path portion * `?` Matches 1 character * `[...]` Matches a range of characters, similar to a RegExp range. If the first character of the range is `!` or `^` then it matches any character not in the range. * `!(pattern|pattern|pattern)` Matches anything that does not match any of the patterns provided. * `?(pattern|pattern|pattern)` Matches zero or one occurrence of the patterns provided. * `+(pattern|pattern|pattern)` Matches one or more occurrences of the patterns provided. * `*(a|b|c)` Matches zero or more occurrences of the patterns provided * `@(pattern|pat*|pat?erN)` Matches exactly one of the patterns provided * `**` If a "globstar" is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories. ### Dots If a file or directory path portion has a `.` as the first character, then it will not match any glob pattern unless that pattern's corresponding path part also has a `.` as its first character. For example, the pattern `a/.*/c` would match the file at `a/.b/c`. However the pattern `a/*/c` would not, because `*` does not start with a dot character. You can make glob treat dots as normal characters by setting `dot:true` in the options. ### Basename Matching If you set `matchBase:true` in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, `*.js` would match `test/simple/basic.js`. ### Empty Sets If no matching files are found, then an empty array is returned. This differs from the shell, where the pattern itself is returned. For example: $ echo a*s*d*f a*s*d*f To get the bash-style behavior, set the `nonull:true` in the options. ### See Also: * `man sh` * `man bash` (Search for "Pattern Matching") * `man 3 fnmatch` * `man 5 gitignore` * [minimatch documentation](https://github.com/isaacs/minimatch) ## glob.hasMagic(pattern, [options]) Returns `true` if there are any special characters in the pattern, and `false` otherwise. Note that the options affect the results. If `noext:true` is set in the options object, then `+(a|b)` will not be considered a magic pattern. If the pattern has a brace expansion, like `a/{b/c,x/y}` then that is considered magical, unless `nobrace:true` is set in the options. ## glob(pattern, [options], cb) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * `cb` `{Function}` * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Perform an asynchronous glob search. 
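Before moving on to the synchronous API, here is a quick sketch of the `glob.hasMagic` helper described above. The expected results in the comments follow the rules stated for `noext` and `nobrace`; they are illustrations of that description, not output copied from the test suite:

```js
var glob = require("glob")

glob.hasMagic("a/b/c")                           // false – plain path, no special characters
glob.hasMagic("a/*.js")                          // true  – `*` is magic
glob.hasMagic("+(a|b)")                          // true  – extglob patterns count by default
glob.hasMagic("+(a|b)", { noext: true })         // false – extglobs disabled
glob.hasMagic("a/{b/c,x/y}")                     // true  – brace expansion counts as magic
glob.hasMagic("a/{b/c,x/y}", { nobrace: true })  // false – brace sets disabled
```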
## glob.sync(pattern, [options]) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * return: `{Array<String>}` filenames found matching the pattern Perform a synchronous glob search. ## Class: glob.Glob Create a Glob object by instantiating the `glob.Glob` class. ```javascript var Glob = require("glob").Glob var mg = new Glob(pattern, options, cb) ``` It's an EventEmitter, and starts walking the filesystem to find matches immediately. ### new glob.Glob(pattern, [options], [cb]) * `pattern` `{String}` pattern to search for * `options` `{Object}` * `cb` `{Function}` Called when an error occurs, or matches are found * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Note that if the `sync` flag is set in the options, then matches will be immediately available on the `g.found` member. ### Properties * `minimatch` The minimatch object that the glob uses. * `options` The options object passed in. * `aborted` Boolean which is set to true when calling `abort()`. There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls. * `cache` Convenience object. Each field has the following possible values: * `false` - Path does not exist * `true` - Path exists * `'FILE'` - Path exists, and is not a directory * `'DIR'` - Path exists, and is a directory * `[file, entries, ...]` - Path exists, is a directory, and the array value is the results of `fs.readdir` * `statCache` Cache of `fs.stat` results, to prevent statting the same path multiple times. * `symlinks` A record of which paths are symbolic links, which is relevant in resolving `**` patterns. * `realpathCache` An optional object which is passed to `fs.realpath` to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used. ### Events * `end` When the matching is finished, this is emitted with all the matches found. If the `nonull` option is set, and no match was found, then the `matches` list contains the original pattern. The matches are sorted, unless the `nosort` flag is set. * `match` Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath. * `error` Emitted when an unexpected error is encountered, or whenever any fs error occurs if `options.strict` is set. * `abort` When `abort()` is called, this event is raised. ### Methods * `pause` Temporarily stop the search * `resume` Resume the search * `abort` Stop the search forever ### Options All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications. All options are false by default, unless otherwise noted. All options are added to the Glob object, as well. If you are running many `glob` operations, you can pass a Glob object as the `options` argument to a subsequent operation to shortcut some `stat` and `readdir` calls. At the very least, you may pass in shared `symlinks`, `statCache`, `realpathCache`, and `cache` options, so that parallel glob operations will be sped up by sharing information about the filesystem. * `cwd` The current working directory in which to search. Defaults to `process.cwd()`. * `root` The place where patterns starting with `/` will be mounted onto. Defaults to `path.resolve(options.cwd, "/")` (`/` on Unix systems, and `C:\` or some such on Windows.) 
* `dot` Include `.dot` files in normal matches and `globstar` matches. Note that an explicit dot in a portion of the pattern will always match dot files. * `nomount` By default, a pattern starting with a forward-slash will be "mounted" onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior. * `mark` Add a `/` character to directory matches. Note that this requires additional stat calls. * `nosort` Don't sort the results. * `stat` Set to true to stat *all* results. This reduces performance somewhat, and is completely unnecessary, unless `readdir` is presumed to be an untrustworthy indicator of file existence. * `silent` When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the `silent` option to true to suppress these warnings. * `strict` When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the `strict` option to raise an error in these cases. * `cache` See `cache` property above. Pass in a previously generated cache object to save some fs calls. * `statCache` A cache of results of filesystem information, to prevent unnecessary stat calls. While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See "Race Conditions" below.) * `symlinks` A cache of known symbolic links. You may pass in a previously generated `symlinks` object to save `lstat` calls when resolving `**` matches. * `sync` DEPRECATED: use `glob.sync(pattern, opts)` instead. * `nounique` In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior. * `nonull` Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3). * `debug` Set to enable debug logging in minimatch and glob. * `nobrace` Do not expand `{a,b}` and `{1..3}` brace sets. * `noglobstar` Do not match `**` against multiple filenames. (Ie, treat it as a normal `*` instead.) * `noext` Do not match `+(a|b)` "extglob" patterns. * `nocase` Perform a case-insensitive match. Note: on case-insensitive filesystems, non-magic patterns will match by default, since `stat` and `readdir` will not raise errors. * `matchBase` Perform a basename-only match if the pattern does not contain any slash characters. That is, `*.js` would be treated as equivalent to `**/*.js`, matching all js files in all directories. * `nodir` Do not match directories, only files. (Note: to match *only* directories, simply put a `/` at the end of the pattern.) * `ignore` Add a pattern or an array of glob patterns to exclude matches. Note: `ignore` patterns are *always* in `dot:true` mode, regardless of any other settings. * `follow` Follow symlinked directories when expanding `**` patterns. Note that this can result in a lot of duplicate references in the presence of cyclic links. * `realpath` Set to true to call `fs.realpath` on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink) * `absolute` Set to true to always receive absolute paths for matched files. Unlike `realpath`, this also affects the values returned in the `match` event. 
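To make a few of the options above concrete, here is a small sketch combining them in one asynchronous search (the `src` directory and the `__tests__` ignore pattern are only illustrative):

```js
var glob = require("glob")

glob("**/*.js", {
  cwd: "src",                 // search relative to ./src rather than process.cwd()
  ignore: "**/__tests__/**",  // exclude matches; ignore patterns always act as dot:true
  nodir: true,                // return only files, never directories
  absolute: true,             // report absolute paths instead of cwd-relative ones
  nosort: false               // keep the default sorted output
}, function (er, files) {
  if (er) throw er
  console.log(files)          // e.g. [ "/abs/path/src/index.js", ... ]
})
```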
## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.3, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. Note that symlinked directories are not crawled as part of a `**`, though their contents may match against subsequent portions of the pattern. This prevents infinite loops and duplicates and the like. If an escaped pattern has no matches, and the `nonull` flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, `glob.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. ### Comments and Negation Previously, this module let you mark a pattern as a "comment" if it started with a `#` character, or a "negated" pattern if it started with a `!` character. These options were deprecated in version 5, and removed in version 6. To specify things that should not match, use the `ignore` option. ## Windows **Please only use forward-slashes in glob expressions.** Though windows uses either `/` or `\` as its path separator, only `/` characters are used by this glob implementation. You must use forward-slashes **only** in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators. Results from absolute patterns such as `/foo/*` are mounted onto the root setting using `path.join`. On windows, this will by default result in `/foo/*` matching `C:\foo\bar.txt`. ## Race Conditions Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such. As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result. As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls. Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem. ## Glob Logo Glob's logo was created by [Tanya Brassie](http://tanyabrassie.com/). Logo files can be found [here](https://github.com/isaacs/node-glob/tree/master/logo). The logo is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). ## Contributing Any change to behavior (including bugfixes) must come with a test. Patches that fail tests or reduce performance will be rejected. 
``` # to run tests npm test # to re-generate test fixtures npm run test-regen # to benchmark against bash/zsh npm run bench # to profile javascript npm run prof ``` ![](oh-my-glob.gif) # axios [![npm version](https://img.shields.io/npm/v/axios.svg?style=flat-square)](https://www.npmjs.org/package/axios) [![build status](https://img.shields.io/travis/axios/axios/master.svg?style=flat-square)](https://travis-ci.org/axios/axios) [![code coverage](https://img.shields.io/coveralls/mzabriskie/axios.svg?style=flat-square)](https://coveralls.io/r/mzabriskie/axios) [![install size](https://packagephobia.now.sh/badge?p=axios)](https://packagephobia.now.sh/result?p=axios) [![npm downloads](https://img.shields.io/npm/dm/axios.svg?style=flat-square)](http://npm-stat.com/charts.html?package=axios) [![gitter chat](https://img.shields.io/gitter/room/mzabriskie/axios.svg?style=flat-square)](https://gitter.im/mzabriskie/axios) [![code helpers](https://www.codetriage.com/axios/axios/badges/users.svg)](https://www.codetriage.com/axios/axios) Promise based HTTP client for the browser and node.js ## Features - Make [XMLHttpRequests](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) from the browser - Make [http](http://nodejs.org/api/http.html) requests from node.js - Supports the [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) API - Intercept request and response - Transform request and response data - Cancel requests - Automatic transforms for JSON data - Client side support for protecting against [XSRF](http://en.wikipedia.org/wiki/Cross-site_request_forgery) ## Browser Support ![Chrome](https://raw.github.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.github.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![Safari](https://raw.github.com/alrra/browser-logos/master/src/safari/safari_48x48.png) | ![Opera](https://raw.github.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Edge](https://raw.github.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![IE](https://raw.github.com/alrra/browser-logos/master/src/archive/internet-explorer_9-11/internet-explorer_9-11_48x48.png) | --- | --- | --- | --- | --- | --- | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | 11 ✔ | [![Browser Matrix](https://saucelabs.com/open_sauce/build_matrix/axios.svg)](https://saucelabs.com/u/axios) ## Installing Using npm: ```bash $ npm install axios ``` Using bower: ```bash $ bower install axios ``` Using yarn: ```bash $ yarn add axios ``` Using cdn: ```html <script src="https://unpkg.com/axios/dist/axios.min.js"></script> ``` ## Example ### note: CommonJS usage In order to gain the TypeScript typings (for intellisense / autocomplete) while using CommonJS imports with `require()` use the following approach: ```js const axios = require('axios').default; // axios.<method> will now provide autocomplete and parameter typings ``` Performing a `GET` request ```js const axios = require('axios'); // Make a request for a user with a given ID axios.get('/user?ID=12345') .then(function (response) { // handle success console.log(response); }) .catch(function (error) { // handle error console.log(error); }) .finally(function () { // always executed }); // Optionally the request above could also be done as axios.get('/user', { params: { ID: 12345 } }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }) .finally(function () { // always executed }); // Want to 
use async/await? Add the `async` keyword to your outer function/method. async function getUser() { try { const response = await axios.get('/user?ID=12345'); console.log(response); } catch (error) { console.error(error); } } ``` > **NOTE:** `async/await` is part of ECMAScript 2017 and is not supported in Internet > Explorer and older browsers, so use with caution. Performing a `POST` request ```js axios.post('/user', { firstName: 'Fred', lastName: 'Flintstone' }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }); ``` Performing multiple concurrent requests ```js function getUserAccount() { return axios.get('/user/12345'); } function getUserPermissions() { return axios.get('/user/12345/permissions'); } axios.all([getUserAccount(), getUserPermissions()]) .then(axios.spread(function (acct, perms) { // Both requests are now complete })); ``` ## axios API Requests can be made by passing the relevant config to `axios`. ##### axios(config) ```js // Send a POST request axios({ method: 'post', url: '/user/12345', data: { firstName: 'Fred', lastName: 'Flintstone' } }); ``` ```js // GET request for remote image axios({ method: 'get', url: 'http://bit.ly/2mTM3nY', responseType: 'stream' }) .then(function (response) { response.data.pipe(fs.createWriteStream('ada_lovelace.jpg')) }); ``` ##### axios(url[, config]) ```js // Send a GET request (default method) axios('/user/12345'); ``` ### Request method aliases For convenience aliases have been provided for all supported request methods. ##### axios.request(config) ##### axios.get(url[, config]) ##### axios.delete(url[, config]) ##### axios.head(url[, config]) ##### axios.options(url[, config]) ##### axios.post(url[, data[, config]]) ##### axios.put(url[, data[, config]]) ##### axios.patch(url[, data[, config]]) ###### NOTE When using the alias methods `url`, `method`, and `data` properties don't need to be specified in config. ### Concurrency Helper functions for dealing with concurrent requests. ##### axios.all(iterable) ##### axios.spread(callback) ### Creating an instance You can create a new instance of axios with a custom config. ##### axios.create([config]) ```js const instance = axios.create({ baseURL: 'https://some-domain.com/api/', timeout: 1000, headers: {'X-Custom-Header': 'foobar'} }); ``` ### Instance methods The available instance methods are listed below. The specified config will be merged with the instance config. ##### axios#request(config) ##### axios#get(url[, config]) ##### axios#delete(url[, config]) ##### axios#head(url[, config]) ##### axios#options(url[, config]) ##### axios#post(url[, data[, config]]) ##### axios#put(url[, data[, config]]) ##### axios#patch(url[, data[, config]]) ##### axios#getUri([config]) ## Request Config These are the available config options for making requests. Only the `url` is required. Requests will default to `GET` if `method` is not specified. ```js { // `url` is the server URL that will be used for the request url: '/user', // `method` is the request method to be used when making the request method: 'get', // default // `baseURL` will be prepended to `url` unless `url` is absolute. // It can be convenient to set `baseURL` for an instance of axios to pass relative URLs // to methods of that instance. 
baseURL: 'https://some-domain.com/api/', // `transformRequest` allows changes to the request data before it is sent to the server // This is only applicable for request methods 'PUT', 'POST', 'PATCH' and 'DELETE' // The last function in the array must return a string or an instance of Buffer, ArrayBuffer, // FormData or Stream // You may modify the headers object. transformRequest: [function (data, headers) { // Do whatever you want to transform the data return data; }], // `transformResponse` allows changes to the response data to be made before // it is passed to then/catch transformResponse: [function (data) { // Do whatever you want to transform the data return data; }], // `headers` are custom headers to be sent headers: {'X-Requested-With': 'XMLHttpRequest'}, // `params` are the URL parameters to be sent with the request // Must be a plain object or a URLSearchParams object params: { ID: 12345 }, // `paramsSerializer` is an optional function in charge of serializing `params` // (e.g. https://www.npmjs.com/package/qs, http://api.jquery.com/jquery.param/) paramsSerializer: function (params) { return Qs.stringify(params, {arrayFormat: 'brackets'}) }, // `data` is the data to be sent as the request body // Only applicable for request methods 'PUT', 'POST', and 'PATCH' // When no `transformRequest` is set, must be of one of the following types: // - string, plain object, ArrayBuffer, ArrayBufferView, URLSearchParams // - Browser only: FormData, File, Blob // - Node only: Stream, Buffer data: { firstName: 'Fred' }, // syntax alternative to send data into the body // method post // only the value is sent, not the key data: 'Country=Brasil&City=Belo Horizonte', // `timeout` specifies the number of milliseconds before the request times out. // If the request takes longer than `timeout`, the request will be aborted. timeout: 1000, // default is `0` (no timeout) // `withCredentials` indicates whether or not cross-site Access-Control requests // should be made using credentials withCredentials: false, // default // `adapter` allows custom handling of requests which makes testing easier. // Return a promise and supply a valid response (see lib/adapters/README.md). adapter: function (config) { /* ... */ }, // `auth` indicates that HTTP Basic auth should be used, and supplies credentials. // This will set an `Authorization` header, overwriting any existing // `Authorization` custom headers you have set using `headers`. // Please note that only HTTP Basic auth is configurable through this parameter. // For Bearer tokens and such, use `Authorization` custom headers instead. 
auth: { username: 'janedoe', password: 's00pers3cret' }, // `responseType` indicates the type of data that the server will respond with // options are: 'arraybuffer', 'document', 'json', 'text', 'stream' // browser only: 'blob' responseType: 'json', // default // `responseEncoding` indicates encoding to use for decoding responses // Note: Ignored for `responseType` of 'stream' or client-side requests responseEncoding: 'utf8', // default // `xsrfCookieName` is the name of the cookie to use as a value for xsrf token xsrfCookieName: 'XSRF-TOKEN', // default // `xsrfHeaderName` is the name of the http header that carries the xsrf token value xsrfHeaderName: 'X-XSRF-TOKEN', // default // `onUploadProgress` allows handling of progress events for uploads onUploadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `onDownloadProgress` allows handling of progress events for downloads onDownloadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `maxContentLength` defines the max size of the http response content in bytes allowed maxContentLength: 2000, // `validateStatus` defines whether to resolve or reject the promise for a given // HTTP response status code. If `validateStatus` returns `true` (or is set to `null` // or `undefined`), the promise will be resolved; otherwise, the promise will be // rejected. validateStatus: function (status) { return status >= 200 && status < 300; // default }, // `maxRedirects` defines the maximum number of redirects to follow in node.js. // If set to 0, no redirects will be followed. maxRedirects: 5, // default // `socketPath` defines a UNIX Socket to be used in node.js. // e.g. '/var/run/docker.sock' to send requests to the docker daemon. // Only either `socketPath` or `proxy` can be specified. // If both are specified, `socketPath` is used. socketPath: null, // default // `httpAgent` and `httpsAgent` define a custom agent to be used when performing http // and https requests, respectively, in node.js. This allows options to be added like // `keepAlive` that are not enabled by default. httpAgent: new http.Agent({ keepAlive: true }), httpsAgent: new https.Agent({ keepAlive: true }), // 'proxy' defines the hostname and port of the proxy server. // You can also define your proxy using the conventional `http_proxy` and // `https_proxy` environment variables. If you are using environment variables // for your proxy configuration, you can also define a `no_proxy` environment // variable as a comma-separated list of domains that should not be proxied. // Use `false` to disable proxies, ignoring environment variables. // `auth` indicates that HTTP Basic auth should be used to connect to the proxy, and // supplies credentials. // This will set an `Proxy-Authorization` header, overwriting any existing // `Proxy-Authorization` custom headers you have set using `headers`. proxy: { host: '127.0.0.1', port: 9000, auth: { username: 'mikeymike', password: 'rapunz3l' } }, // `cancelToken` specifies a cancel token that can be used to cancel the request // (see Cancellation section below for details) cancelToken: new CancelToken(function (cancel) { }) } ``` ## Response Schema The response for a request contains the following information. 
```js { // `data` is the response that was provided by the server data: {}, // `status` is the HTTP status code from the server response status: 200, // `statusText` is the HTTP status message from the server response statusText: 'OK', // `headers` the headers that the server responded with // All header names are lower cased headers: {}, // `config` is the config that was provided to `axios` for the request config: {}, // `request` is the request that generated this response // It is the last ClientRequest instance in node.js (in redirects) // and an XMLHttpRequest instance in the browser request: {} } ``` When using `then`, you will receive the response as follows: ```js axios.get('/user/12345') .then(function (response) { console.log(response.data); console.log(response.status); console.log(response.statusText); console.log(response.headers); console.log(response.config); }); ``` When using `catch`, or passing a [rejection callback](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/then) as second parameter of `then`, the response will be available through the `error` object as explained in the [Handling Errors](#handling-errors) section. ## Config Defaults You can specify config defaults that will be applied to every request. ### Global axios defaults ```js axios.defaults.baseURL = 'https://api.example.com'; axios.defaults.headers.common['Authorization'] = AUTH_TOKEN; axios.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded'; ``` ### Custom instance defaults ```js // Set config defaults when creating the instance const instance = axios.create({ baseURL: 'https://api.example.com' }); // Alter defaults after instance has been created instance.defaults.headers.common['Authorization'] = AUTH_TOKEN; ``` ### Config order of precedence Config will be merged with an order of precedence. The order is library defaults found in [lib/defaults.js](https://github.com/axios/axios/blob/master/lib/defaults.js#L28), then `defaults` property of the instance, and finally `config` argument for the request. The latter will take precedence over the former. Here's an example. ```js // Create an instance using the config defaults provided by the library // At this point the timeout config value is `0` as is the default for the library const instance = axios.create(); // Override timeout default for the library // Now all requests using this instance will wait 2.5 seconds before timing out instance.defaults.timeout = 2500; // Override timeout for this request as it's known to take a long time instance.get('/longRequest', { timeout: 5000 }); ``` ## Interceptors You can intercept requests or responses before they are handled by `then` or `catch`. ```js // Add a request interceptor axios.interceptors.request.use(function (config) { // Do something before request is sent return config; }, function (error) { // Do something with request error return Promise.reject(error); }); // Add a response interceptor axios.interceptors.response.use(function (response) { // Any status code that lie within the range of 2xx cause this function to trigger // Do something with response data return response; }, function (error) { // Any status codes that falls outside the range of 2xx cause this function to trigger // Do something with response error return Promise.reject(error); }); ``` If you need to remove an interceptor later you can. 
```js const myInterceptor = axios.interceptors.request.use(function () {/*...*/}); axios.interceptors.request.eject(myInterceptor); ``` You can add interceptors to a custom instance of axios. ```js const instance = axios.create(); instance.interceptors.request.use(function () {/*...*/}); ``` ## Handling Errors ```js axios.get('/user/12345') .catch(function (error) { if (error.response) { // The request was made and the server responded with a status code // that falls out of the range of 2xx console.log(error.response.data); console.log(error.response.status); console.log(error.response.headers); } else if (error.request) { // The request was made but no response was received // `error.request` is an instance of XMLHttpRequest in the browser and an instance of // http.ClientRequest in node.js console.log(error.request); } else { // Something happened in setting up the request that triggered an Error console.log('Error', error.message); } console.log(error.config); }); ``` Using the `validateStatus` config option, you can define HTTP code(s) that should throw an error. ```js axios.get('/user/12345', { validateStatus: function (status) { return status < 500; // Reject only if the status code is greater than or equal to 500 } }) ``` Using `toJSON` you get an object with more information about the HTTP error. ```js axios.get('/user/12345') .catch(function (error) { console.log(error.toJSON()); }); ``` ## Cancellation You can cancel a request using a *cancel token*. > The axios cancel token API is based on the withdrawn [cancelable promises proposal](https://github.com/tc39/proposal-cancelable-promises). You can create a cancel token using the `CancelToken.source` factory as shown below: ```js const CancelToken = axios.CancelToken; const source = CancelToken.source(); axios.get('/user/12345', { cancelToken: source.token }).catch(function (thrown) { if (axios.isCancel(thrown)) { console.log('Request canceled', thrown.message); } else { // handle error } }); axios.post('/user/12345', { name: 'new name' }, { cancelToken: source.token }) // cancel the request (the message parameter is optional) source.cancel('Operation canceled by the user.'); ``` You can also create a cancel token by passing an executor function to the `CancelToken` constructor: ```js const CancelToken = axios.CancelToken; let cancel; axios.get('/user/12345', { cancelToken: new CancelToken(function executor(c) { // An executor function receives a cancel function as a parameter cancel = c; }) }); // cancel the request cancel(); ``` > Note: you can cancel several requests with the same cancel token. ## Using application/x-www-form-urlencoded format By default, axios serializes JavaScript objects to `JSON`. To send data in the `application/x-www-form-urlencoded` format instead, you can use one of the following options. ### Browser In a browser, you can use the [`URLSearchParams`](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams) API as follows: ```js const params = new URLSearchParams(); params.append('param1', 'value1'); params.append('param2', 'value2'); axios.post('/foo', params); ``` > Note that `URLSearchParams` is not supported by all browsers (see [caniuse.com](http://www.caniuse.com/#feat=urlsearchparams)), but there is a [polyfill](https://github.com/WebReflection/url-search-params) available (make sure to polyfill the global environment). 
Alternatively, you can encode data using the [`qs`](https://github.com/ljharb/qs) library: ```js const qs = require('qs'); axios.post('/foo', qs.stringify({ 'bar': 123 })); ``` Or in another way (ES6), ```js import qs from 'qs'; const data = { 'bar': 123 }; const options = { method: 'POST', headers: { 'content-type': 'application/x-www-form-urlencoded' }, data: qs.stringify(data), url, }; axios(options); ``` ### Node.js In node.js, you can use the [`querystring`](https://nodejs.org/api/querystring.html) module as follows: ```js const querystring = require('querystring'); axios.post('http://something.com/', querystring.stringify({ foo: 'bar' })); ``` You can also use the [`qs`](https://github.com/ljharb/qs) library. ###### NOTE The `qs` library is preferable if you need to stringify nested objects, as the `querystring` method has known issues with that use case (https://github.com/nodejs/node-v0.x-archive/issues/1665). ## Semver Until axios reaches a `1.0` release, breaking changes will be released with a new minor version. For example `0.5.1`, and `0.5.4` will have the same API, but `0.6.0` will have breaking changes. ## Promises axios depends on a native ES6 Promise implementation to be [supported](http://caniuse.com/promises). If your environment doesn't support ES6 Promises, you can [polyfill](https://github.com/jakearchibald/es6-promise). ## TypeScript axios includes [TypeScript](http://typescriptlang.org) definitions. ```typescript import axios from 'axios'; axios.get('/user?ID=12345'); ``` ## Resources * [Changelog](https://github.com/axios/axios/blob/master/CHANGELOG.md) * [Upgrade Guide](https://github.com/axios/axios/blob/master/UPGRADE_GUIDE.md) * [Ecosystem](https://github.com/axios/axios/blob/master/ECOSYSTEM.md) * [Contributing Guide](https://github.com/axios/axios/blob/master/CONTRIBUTING.md) * [Code of Conduct](https://github.com/axios/axios/blob/master/CODE_OF_CONDUCT.md) ## Credits axios is heavily inspired by the [$http service](https://docs.angularjs.org/api/ng/service/$http) provided in [Angular](https://angularjs.org/). Ultimately axios is an effort to provide a standalone `$http`-like service for use outside of Angular. ## License [MIT](LICENSE) # `near-sdk-as` Starter Kit This is a good project to use as a starting point for your AssemblyScript project. ## Samples This repository includes a complete project structure for AssemblyScript contracts targeting the NEAR platform. The example here is very basic. It's a simple contract demonstrating the following concepts: - a single contract - the difference between `view` vs. `change` methods - basic contract storage There are 2 AssemblyScript contracts in this project, each in their own folder: - **simple** in the `src/simple` folder - **singleton** in the `src/singleton` folder ### Simple We say that an AssemblyScript contract is written in the "simple style" when the `index.ts` file (the contract entry point) includes a series of exported functions. In this case, all exported functions become public contract methods. 
```ts // return the string 'hello world' export function helloWorld(): string {} // read the given key from account (contract) storage export function read(key: string): string {} // write the given value at the given key to account (contract) storage export function write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} ``` ### Singleton We say that an AssemblyScript contract is written in the "singleton style" when the `index.ts` file (the contract entry point) has a single exported class (the name of the class doesn't matter) that is decorated with `@nearBindgen`. In this case, all methods on the class become public contract methods unless marked `private`. Also, all instance variables are stored as a serialized instance of the class under a special storage key named `STATE`. AssemblyScript uses JSON for storage serialization (as opposed to Rust contracts which use a custom binary serialization format called borsh). ```ts @nearBindgen export class Contract { // return the string 'hello world' helloWorld(): string {} // read the given key from account (contract) storage read(key: string): string {} // write the given value at the given key to account (contract) storage @mutateState() write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} } ``` ## Usage ### Getting started (see below for video recordings of each of the following steps) INSTALL `NEAR CLI` first like this: `npm i -g near-cli` 1. clone this repo to a local folder 2. run `yarn` 3. run `./scripts/1.dev-deploy.sh` 3. run `./scripts/2.use-contract.sh` 4. run `./scripts/2.use-contract.sh` (yes, run it to see changes) 5. run `./scripts/3.cleanup.sh` ### Videos **`1.dev-deploy.sh`** This video shows the build and deployment of the contract. [![asciicast](https://asciinema.org/a/409575.svg)](https://asciinema.org/a/409575) **`2.use-contract.sh`** This video shows contract methods being called. You should run the script twice to see the effect it has on contract state. [![asciicast](https://asciinema.org/a/409577.svg)](https://asciinema.org/a/409577) **`3.cleanup.sh`** This video shows the cleanup script running. Make sure you add the `BENEFICIARY` environment variable. The script will remind you if you forget. ```sh export BENEFICIARY=<your-account-here> # this account receives contract account balance ``` [![asciicast](https://asciinema.org/a/409580.svg)](https://asciinema.org/a/409580) ### Other documentation - See `./scripts/README.md` for documentation about the scripts - Watch this video where Willem Wyndham walks us through refactoring a simple example of a NEAR smart contract written in AssemblyScript https://youtu.be/QP7aveSqRPo ``` There are 2 "styles" of implementing AssemblyScript NEAR contracts: - the contract interface can either be a collection of exported functions - or the contract interface can be the methods of a an exported class We call the second style "Singleton" because there is only one instance of the class which is serialized to the blockchain storage. Rust contracts written for NEAR do this by default with the contract struct. 
0:00 noise (to cut) 0:10 Welcome 0:59 Create project starting with "npm init" 2:20 Customize the project for AssemblyScript development 9:25 Import the Counter example and get unit tests passing 18:30 Adapt the Counter example to a Singleton style contract 21:49 Refactoring unit tests to access the new methods 24:45 Review and summary ``` ## The file system ```sh ├── README.md # this file ├── as-pect.config.js # configuration for as-pect (AssemblyScript unit testing) ├── asconfig.json # configuration for AssemblyScript compiler (supports multiple contracts) ├── package.json # NodeJS project manifest ├── scripts │   ├── 1.dev-deploy.sh # helper: build and deploy contracts │   ├── 2.use-contract.sh # helper: call methods on ContractPromise │   ├── 3.cleanup.sh # helper: delete build and deploy artifacts │   └── README.md # documentation for helper scripts ├── src │   ├── as_types.d.ts # AssemblyScript headers for type hints │   ├── simple # Contract 1: "Simple example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 1 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 1 │   ├── singleton # Contract 2: "Singleton-style example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 2 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 2 │   ├── tsconfig.json # Typescript configuration │   └── utils.ts # common contract utility functions └── yarn.lock # project manifest version lock ``` You may clone this repo to get started OR create everything from scratch. Please note that, in order to create the AssemblyScript and tests folder structure, you may use the command `asp --init` which will create the following folders and files: ``` ./assembly/ ./assembly/tests/ ./assembly/tests/example.spec.ts ./assembly/tests/as-pect.d.ts ``` # Near Bindings Generator Transforms the Assembyscript AST to serialize exported functions and add `encode` and `decode` functions for generating and parsing JSON strings. ## Using via CLI After installling, `npm install nearprotocol/near-bindgen-as`, it can be added to the cli arguments of the assemblyscript compiler you must add the following: ```bash asc <file> --transform near-bindgen-as ... ``` This module also adds a binary `near-asc` which adds the default arguments required to build near contracts as well as the transformer. ```bash near-asc <input file> <output file> ``` ## Using a script to compile Another way is to add a file such as `asconfig.js` such as: ```js const compile = require("near-bindgen-as/compiler").compile; compile("assembly/index.ts", // input file "out/index.wasm", // output file [ // "-O1", // Optional arguments "--debug", "--measure" ], // Prints out the final cli arguments passed to compiler. {verbose: true} ); ``` It can then be built with `node asconfig.js`. There is an example of this in the test directory. Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor). 
This package exports standard `inherits` from node.js `util` module in node environment, but also provides alternative browser-friendly implementation through [browser field](https://gist.github.com/shtylman/4339901). Alternative implementation is a literal copy of standard one located in standalone module to avoid requiring of `util`. It also has a shim for old browsers with no `Object.create` support. While keeping you sure you are using standard `inherits` implementation in node.js environment, it allows bundlers such as [browserify](https://github.com/substack/node-browserify) to not include full `util` package to your client code if all you need is just `inherits` function. It worth, because browser shim for `util` package is large and `inherits` is often the single function you need from it. It's recommended to use this package instead of `require('util').inherits` for any code that has chances to be used not only in node.js but in browser too. ## usage ```js var inherits = require('inherits'); // then use exactly as the standard one ``` ## note on version ~1.0 Version ~1.0 had completely different motivation and is not compatible neither with 2.0 nor with standard node.js `inherits`. If you are using version ~1.0 and planning to switch to ~2.0, be careful: * new version uses `super_` instead of `super` for referencing superclass * new version overwrites current prototype while old one preserves any existing fields on it # minizlib A fast zlib stream built on [minipass](http://npm.im/minipass) and Node.js's zlib binding. This module was created to serve the needs of [node-tar](http://npm.im/tar) and [minipass-fetch](http://npm.im/minipass-fetch). Brotli is supported in versions of node with a Brotli binding. ## How does this differ from the streams in `require('zlib')`? First, there are no convenience methods to compress or decompress a buffer. If you want those, use the built-in `zlib` module. This is only streams. That being said, Minipass streams to make it fairly easy to use as one-liners: `new zlib.Deflate().end(data).read()` will return the deflate compressed result. This module compresses and decompresses the data as fast as you feed it in. It is synchronous, and runs on the main process thread. Zlib and Brotli operations can be high CPU, but they're very fast, and doing it this way means much less bookkeeping and artificial deferral. Node's built in zlib streams are built on top of `stream.Transform`. They do the maximally safe thing with respect to consistent asynchrony, buffering, and backpressure. See [Minipass](http://npm.im/minipass) for more on the differences between Node.js core streams and Minipass streams, and the convenience methods provided by that class. ## Classes - Deflate - Inflate - Gzip - Gunzip - DeflateRaw - InflateRaw - Unzip - BrotliCompress (Node v10 and higher) - BrotliDecompress (Node v10 and higher) ## USAGE ```js const zlib = require('minizlib') const input = sourceOfCompressedData() const decode = new zlib.BrotliDecompress() const output = whereToWriteTheDecodedData() input.pipe(decode).pipe(output) ``` ## REPRODUCIBLE BUILDS To create reproducible gzip compressed files across different operating systems, set `portable: true` in the options. This causes minizlib to set the `OS` indicator in byte 9 of the extended gzip header to `0xFF` for 'unknown'. # binary-install Install .tar.gz binary applications via npm ## Usage This library provides a single class `Binary` that takes a download url and some optional arguments. 
You **must** provide either `name` or `installDirectory` when creating your `Binary`. | option | decription | | ---------------- | --------------------------------------------- | | name | The name of your binary | | installDirectory | A path to the directory to install the binary | If an `installDirectory` is not provided, the binary will be installed at your OS specific config directory. On MacOS it defaults to `~/Library/Preferences/${name}-nodejs` After your `Binary` has been created, you can run `.install()` to install the binary, and `.run()` to run it. ### Example This is meant to be used as a library - create your `Binary` with your desired options, then call `.install()` in the `postinstall` of your `package.json`, `.run()` in the `bin` section of your `package.json`, and `.uninstall()` in the `preuninstall` section of your `package.json`. See [this example project](/example) to see how to create an npm package that installs and runs a binary using the Github releases API. A JSON with color names and its values. Based on http://dev.w3.org/csswg/css-color/#named-colors. [![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/) ```js var colors = require('color-name'); colors.red //[255,0,0] ``` <a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a> # universal-url [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Dependency Monitor][greenkeeper-image]][greenkeeper-url] > WHATWG [`URL`](https://developer.mozilla.org/en/docs/Web/API/URL) for Node & Browser. * For Node.js versions `>= 8`, the native implementation will be used. * For Node.js versions `< 8`, a [shim](https://npmjs.com/whatwg-url) will be used. * For web browsers without a native implementation, the same shim will be used. ## Installation [Node.js](http://nodejs.org/) `>= 6` is required. To install, type this at the command line: ```shell npm install universal-url ``` ## Usage ```js const {URL, URLSearchParams} = require('universal-url'); const url = new URL('http://domain/'); const params = new URLSearchParams('?param=value'); ``` Global shim: ```js require('universal-url').shim(); const url = new URL('http://domain/'); const params = new URLSearchParams('?param=value'); ``` ## Browserify/etc The bundled file size of this library can be large for a web browser. If this is a problem, try using [universal-url-lite](https://npmjs.com/universal-url-lite) in your build as an alias for this module. 
[npm-image]: https://img.shields.io/npm/v/universal-url.svg [npm-url]: https://npmjs.org/package/universal-url [travis-image]: https://img.shields.io/travis/stevenvachon/universal-url.svg [travis-url]: https://travis-ci.org/stevenvachon/universal-url [greenkeeper-image]: https://badges.greenkeeper.io/stevenvachon/universal-url.svg [greenkeeper-url]: https://greenkeeper.io/ # Visitor utilities for AssemblyScript Compiler transformers ## Example ### List Fields The transformer: ```ts import { ClassDeclaration, FieldDeclaration, MethodDeclaration, } from "../../as"; import { ClassDecorator, registerDecorator } from "../decorator"; import { toString } from "../utils"; class ListMembers extends ClassDecorator { visitFieldDeclaration(node: FieldDeclaration): void { if (!node.name) console.log(toString(node) + "\n"); const name = toString(node.name); const _type = toString(node.type!); this.stdout.write(name + ": " + _type + "\n"); } visitMethodDeclaration(node: MethodDeclaration): void { const name = toString(node.name); if (name == "constructor") { return; } const sig = toString(node.signature); this.stdout.write(name + ": " + sig + "\n"); } visitClassDeclaration(node: ClassDeclaration): void { this.visit(node.members); } get name(): string { return "list"; } } export = registerDecorator(new ListMembers()); ``` assembly/foo.ts: ```ts @list class Foo { a: u8; b: bool; i: i32; } ``` And then compile with `--transform` flag: ``` asc assembly/foo.ts --transform ./dist/examples/list --noEmit ``` Which prints the following to the console: ``` a: u8 b: bool i: i32 ``` # AssemblyScript Loader A convenient loader for [AssemblyScript](https://assemblyscript.org) modules. Demangles module exports to a friendly object structure compatible with TypeScript definitions and provides useful utility to read/write data from/to memory. [Documentation](https://assemblyscript.org/loader.html) # balanced-match Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well! [![build status](https://secure.travis-ci.org/juliangruber/balanced-match.svg)](http://travis-ci.org/juliangruber/balanced-match) [![downloads](https://img.shields.io/npm/dm/balanced-match.svg)](https://www.npmjs.org/package/balanced-match) [![testling badge](https://ci.testling.com/juliangruber/balanced-match.png)](https://ci.testling.com/juliangruber/balanced-match) ## Example Get the first matching pair of braces: ```js var balanced = require('balanced-match'); console.log(balanced('{', '}', 'pre{in{nested}}post')); console.log(balanced('{', '}', 'pre{first}between{second}post')); console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post')); ``` The matches are: ```bash $ node example.js { start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' } { start: 3, end: 9, pre: 'pre', body: 'first', post: 'between{second}post' } { start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' } ``` ## API ### var m = balanced(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an object with those keys: * **start** the index of the first match of `a` * **end** the index of the matching `b` * **pre** the preamble, `a` and `b` not included * **body** the match, `a` and `b` not included * **post** the postscript, `a` and `b` not included If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. 
For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`. ### var r = balanced.range(a, b, str) For the first non-nested matching pair of `a` and `b` in `str`, return an array with indexes: `[ <a index>, <b index> ]`. If there's no match, `undefined` will be returned. If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`. ## Installation With [npm](https://npmjs.org) do: ```bash npm install balanced-match ``` ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor). This package exports standard `inherits` from node.js `util` module in node environment, but also provides alternative browser-friendly implementation through [browser field](https://gist.github.com/shtylman/4339901). Alternative implementation is a literal copy of standard one located in standalone module to avoid requiring of `util`. It also has a shim for old browsers with no `Object.create` support. While keeping you sure you are using standard `inherits` implementation in node.js environment, it allows bundlers such as [browserify](https://github.com/substack/node-browserify) to not include full `util` package to your client code if all you need is just `inherits` function. It worth, because browser shim for `util` package is large and `inherits` is often the single function you need from it. It's recommended to use this package instead of `require('util').inherits` for any code that has chances to be used not only in node.js but in browser too. ## usage ```js var inherits = require('inherits'); // then use exactly as the standard one ``` ## note on version ~1.0 Version ~1.0 had completely different motivation and is not compatible neither with 2.0 nor with standard node.js `inherits`. 
If you are using version ~1.0 and planning to switch to ~2.0, be careful: * new version uses `super_` instead of `super` for referencing superclass * new version overwrites current prototype while old one preserves any existing fields on it # emoji-regex [![Build status](https://travis-ci.org/mathiasbynens/emoji-regex.svg?branch=master)](https://travis-ci.org/mathiasbynens/emoji-regex) _emoji-regex_ offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard. This repository contains a script that generates this regular expression based on [the data from Unicode v12](https://github.com/mathiasbynens/unicode-12.0.0). Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard. ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install emoji-regex ``` In [Node.js](https://nodejs.org/): ```js const emojiRegex = require('emoji-regex'); // Note: because the regular expression has the global flag set, this module // exports a function that returns the regex rather than exporting the regular // expression itself, to make it impossible to (accidentally) mutate the // original regular expression. const text = ` \u{231A}: ⌚ default emoji presentation character (Emoji_Presentation) \u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji \u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base) \u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier `; const regex = emojiRegex(); let match; while (match = regex.exec(text)) { const emoji = match[0]; console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`); } ``` Console output: ``` Matched sequence ⌚ — code points: 1 Matched sequence ⌚ — code points: 1 Matched sequence ↔️ — code points: 2 Matched sequence ↔️ — code points: 2 Matched sequence 👩 — code points: 1 Matched sequence 👩 — code points: 1 Matched sequence 👩🏿 — code points: 2 Matched sequence 👩🏿 — code points: 2 ``` To match emoji in their textual representation as well (i.e. emoji that are not `Emoji_Presentation` symbols and that aren’t forced to render as emoji by a variation selector), `require` the other regex: ```js const emojiRegex = require('emoji-regex/text.js'); ``` Additionally, in environments which support ES2015 Unicode escapes, you may `require` ES2015-style versions of the regexes: ```js const emojiRegex = require('emoji-regex/es2015/index.js'); const emojiRegexText = require('emoji-regex/es2015/text.js'); ``` ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License _emoji-regex_ is available under the [MIT](https://mths.be/mit) license. # [nearley](http://nearley.js.org) ↗️ [![JS.ORG](https://img.shields.io/badge/js.org-nearley-ffb400.svg?style=flat-square)](http://js.org) [![npm version](https://badge.fury.io/js/nearley.svg)](https://badge.fury.io/js/nearley) nearley is a simple, fast and powerful parsing toolkit. It consists of: 1. [A powerful, modular DSL for describing languages](https://nearley.js.org/docs/grammar) 2. [An efficient, lightweight Earley parser](https://nearley.js.org/docs/parser) 3. [Loads of tools, editor plug-ins, and other goodies!](https://nearley.js.org/docs/tooling) nearley is a **streaming** parser with support for catching **errors** gracefully and providing _all_ parsings for **ambiguous** grammars. 
It is compatible with a variety of **lexers** (we recommend [moo](http://github.com/tjvr/moo)). It comes with tools for creating **tests**, **railroad diagrams** and **fuzzers** from your grammars, and has support for a variety of editors and platforms. It works in both node and the browser. Unlike most other parser generators, nearley can handle *any* grammar you can define in BNF (and more!). In particular, while most existing JS parsers such as PEGjs and Jison choke on certain grammars (e.g. [left recursive ones](http://en.wikipedia.org/wiki/Left_recursion)), nearley handles them easily and efficiently by using the [Earley parsing algorithm](https://en.wikipedia.org/wiki/Earley_parser). nearley is used by a wide variety of projects: - [artificial intelligence](https://github.com/ChalmersGU-AI-course/shrdlite-course-project) and - [computational linguistics](https://wiki.eecs.yorku.ca/course_archive/2014-15/W/6339/useful_handouts) classes at universities; - [file format parsers](https://github.com/raymond-h/node-dmi); - [data-driven markup languages](https://github.com/idyll-lang/idyll-compiler); - [compilers for real-world programming languages](https://github.com/sizigi/lp5562); - and nearley itself! The nearley compiler is bootstrapped. nearley is an npm [staff pick](https://www.npmjs.com/package/npm-collection-staff-picks). ## Documentation Please visit our website https://nearley.js.org to get started! You will find a tutorial, detailed reference documents, and links to several real-world examples to get inspired. ## Contributing Please read [this document](.github/CONTRIBUTING.md) *before* working on nearley. If you are interested in contributing but unsure where to start, take a look at the issues labeled "up for grabs" on the issue tracker, or message a maintainer (@kach or @tjvr on Github). nearley is MIT licensed. A big thanks to Nathan Dinsmore for teaching me how to Earley, Aria Stewart for helping structure nearley into a mature module, and Robin Windels for bootstrapping the grammar. Additionally, Jacob Edelman wrote an experimental JavaScript parser with nearley and contributed ideas for EBNF support. Joshua T. Corbin refactored the compiler to be much, much prettier. Bojidar Marinov implemented postprocessors-in-other-languages. Shachar Itzhaky fixed a subtle bug with nullables. ## Citing nearley If you are citing nearley in academic work, please use the following BibTeX entry. ```bibtex @misc{nearley, author = "Kartik Chandra and Tim Radvan", title = "{nearley}: a parsing toolkit for {JavaScript}", year = {2014}, doi = {10.5281/zenodo.3897993}, url = {https://github.com/kach/nearley} } ``` ## Setting up your terminal The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. 
It uses the following setup: ```sh # set your terminal up to have 2 windows, A and B like this: ┌─────────────────────────────────┬─────────────────────────────────┐ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ └─────────────────────────────────┴─────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _helper scripts_ ```sh 1.dev-deploy.sh # helper: build and deploy contracts 2.use-contract.sh # helper: call methods on ContractPromise 3.cleanup.sh # helper: delete build and deploy artifacts ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linuo-watch-command#191068) # axios // adapters The modules under `adapters/` are modules that handle dispatching a request and settling a returned `Promise` once a response is received. ## Example ```js var settle = require('./../core/settle'); module.exports = function myAdapter(config) { // At this point: // - config has been merged with defaults // - request transformers have already run // - request interceptors have already run // Make the request using config provided // Upon response settle the Promise return new Promise(function(resolve, reject) { var response = { data: responseData, status: request.status, statusText: request.statusText, headers: responseHeaders, config: config, request: request }; settle(resolve, reject, response); // From here: // - response transformers will run // - response interceptors will run }); } ``` # minizlib A fast zlib stream built on [minipass](http://npm.im/minipass) and Node.js's zlib binding. This module was created to serve the needs of [node-tar](http://npm.im/tar) and [minipass-fetch](http://npm.im/minipass-fetch). Brotli is supported in versions of node with a Brotli binding. ## How does this differ from the streams in `require('zlib')`? First, there are no convenience methods to compress or decompress a buffer. If you want those, use the built-in `zlib` module. This is only streams. That being said, Minipass streams to make it fairly easy to use as one-liners: `new zlib.Deflate().end(data).read()` will return the deflate compressed result. This module compresses and decompresses the data as fast as you feed it in. It is synchronous, and runs on the main process thread. Zlib and Brotli operations can be high CPU, but they're very fast, and doing it this way means much less bookkeeping and artificial deferral. Node's built in zlib streams are built on top of `stream.Transform`. They do the maximally safe thing with respect to consistent asynchrony, buffering, and backpressure. 
See [Minipass](http://npm.im/minipass) for more on the differences between Node.js core streams and Minipass streams, and the convenience methods provided by that class. ## Classes - Deflate - Inflate - Gzip - Gunzip - DeflateRaw - InflateRaw - Unzip - BrotliCompress (Node v10 and higher) - BrotliDecompress (Node v10 and higher) ## USAGE ```js const zlib = require('minizlib') const input = sourceOfCompressedData() const decode = new zlib.BrotliDecompress() const output = whereToWriteTheDecodedData() input.pipe(decode).pipe(output) ``` ## REPRODUCIBLE BUILDS To create reproducible gzip compressed files across different operating systems, set `portable: true` in the options. This causes minizlib to set the `OS` indicator in byte 9 of the extended gzip header to `0xFF` for 'unknown'. # fs.realpath A backwards-compatible fs.realpath for Node v6 and above In Node v6, the JavaScript implementation of fs.realpath was replaced with a faster (but less resilient) native implementation. That raises new and platform-specific errors and cannot handle long or excessively symlink-looping paths. This module handles those cases by detecting the new errors and falling back to the JavaScript implementation. On versions of Node prior to v6, it has no effect. ## USAGE ```js var rp = require('fs.realpath') // async version rp.realpath(someLongAndLoopingPath, function (er, real) { // the ELOOP was handled, but it was a bit slower }) // sync version var real = rp.realpathSync(someLongAndLoopingPath) // monkeypatch at your own risk! // This replaces the fs.realpath/fs.realpathSync builtins rp.monkeypatch() // un-do the monkeypatching rp.unmonkeypatch() ```
nikoturin_near-roadNetworks
README.md contract README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts main.spec.ts as_types.d.ts index.ts tsconfig.json compile.js package-lock.json package.json docs foo.txt neardev dev-account.env shared-test-staging test.near.json shared-test test.near.json package.json src assets logo-black.svg logo-white.svg config.js global.css index.html index.js main.test.js utils.js wallet login index.html
Team 6 member: Ramsés Hernández.

# near-roadNetworks

POC for creating NEAR smart contracts for a client and for interoperability.

Brief context: Build a blockchain that guarantees the integrity of toll-booth crossing data exchanged between national road operators. Once a crossing is recorded, the amount owed to each operator can be forwarded to that operator's infrastructure for downstream processes such as the "contra-prestación" (consideration payment), according to the previously agreed fees. With the amounts audited, the data remains intact under the rules set by the regulator, supported by blockchain technology through the NEAR protocol.

Workflow (draw.io): docs/

Mockups (Figma): docs/

Interoperability data was loaded, together with balances simulated by increments; that is, the POC will still be extended to load NEAR balances and to download data simulating different road networks interacting with a NEAR wallet.

NEAR master account: gasram.testnet

1. The project was created with NPX: npx create-near-app [options] new-near-bc
2. Run "near login"
3. The sub-account new-near-bc.gasram.testnet was created to sign the contract
4. Run yarn start to compile and run
5. The wasm is produced at the path "out/main.wasm"
6. The following command was used to deploy: near deploy --accountId new-near-bc.gasram.testnet --wasmFile out/main.wasm
7. Testing lives in the contract/assembly directory; test code: __test__/main.spec.ts - yarn asp
8. The following functions were created (a minimal sketch of them is shown after the reference links below):

HeartBeat functions to validate communication.

heartBeat: near call new-near-bc.gasram.testnet heartBeat '{"value":10}' --account-id gasram.testnet

getHeartBeat: near view new-near-bc.gasram.testnet getHeartBeat '{}'

Functions to add an amount; in this case it is only an incremental number, but the goal is to obtain real amounts from client and interoperability accounts.

setAmount: near call new-near-bc.gasram.testnet setAmount '{"value":2000}' --account-id gasram.testnet

getAmount: near view new-near-bc.gasram.testnet getAmount '{}'

Once the amount is validated, these functions add the per-crossing and/or per-toll-booth transactions to the blockchain so the amounts can be settled between operators.

setTransPass: near call new-near-bc.gasram.testnet setTransPass '{"dataRaw":"{Name:Ramses,Amount:2000,Operator:1}"}' --account-id gasram.testnet

getTransPass: near view new-near-bc.gasram.testnet getTransPass '{"accountId":"gasram.testnet"}'

Note: the POC still needs to work with balances derived from NEAR, and to load real records for each crossing, where the blockchain will guarantee data integrity for the OPERATORS and administrators.

new-near-bc Smart Contract
==================

A [smart contract] written in [AssemblyScript] for an app initialized with [create-near-app]

Quick Start
===========

Before you compile this code, you will need to install [Node.js] ≥ 12

Exploring The Code
==================

1. The main smart contract code lives in `assembly/index.ts`. You can compile it with the `./compile` script.
2. Tests: You can run smart contract tests with the `./test` script. This runs standard AssemblyScript tests using [as-pect].
[smart contract]: https://docs.near.org/docs/develop/contracts/overview [AssemblyScript]: https://www.assemblyscript.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [as-pect]: https://www.npmjs.com/package/@as-pect/cli
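The commands above reference the contract methods `heartBeat`, `getHeartBeat`, `setAmount`, `getAmount`, `setTransPass` and `getTransPass`. Below is a minimal near-sdk-as sketch of what `assembly/index.ts` might contain — an illustration only, not the project's actual code; the storage keys, value types and map prefix are assumptions.

```ts
// Illustrative sketch only — NOT the actual contract in assembly/index.ts.
// Storage keys, types, and the PersistentMap prefix are assumptions.
import { storage, Context, PersistentMap } from "near-sdk-as";

// accountId -> raw crossing / toll-booth transaction data
const transPass = new PersistentMap<string, string>("tp");

export function heartBeat(value: i32): void {
  storage.set<i32>("heartBeat", value);           // store the last heartbeat value
}

export function getHeartBeat(): i32 {
  return storage.getPrimitive<i32>("heartBeat", 0);
}

export function setAmount(value: i32): void {
  storage.set<i32>("amount", value);              // simulated incremental amount
}

export function getAmount(): i32 {
  return storage.getPrimitive<i32>("amount", 0);
}

export function setTransPass(dataRaw: string): void {
  transPass.set(Context.sender, dataRaw);         // record the caller's crossing data
}

export function getTransPass(accountId: string): string | null {
  return transPass.get(accountId);
}
```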
luciotato_create-contract-cli
.eslintrc.json .vscode launch.json tasks.json README.md bin cli.js build.js dist lib Lexer Lexer.js Parser ASTBase.js CodeWriter.js Grammar.js Parser.js util CommandLineArgs.js ControlledError.js String.extensions.js UTF8FileReader.js UTF8FileWriter.js color.js logger.js mkPath.js main CLIOptions.js ContractAPI-producer.js create-contract-cli.js create-contract-cli model-TS CLIConfig.js CLIOptions.js ContractAPI.js ExtensionAPI.js tom.js util CommandLineArgs.js SpawnNearCli.js color.js saveConfig.js test expect.js test-deploy.js test-staking-pool.js test.ContractAPI.js test.Tokenizer.js test.js MAIN START TESTNET DEPLOY TESTS END TESTNET DEPLOY TESTS START PARSE TESTS END PARSE TESTS START dist main create-contract-cli TEST package-lock.json package.json res cli.js model-ES2018 .vscode tasks.json CLIConfig.js CLIOptions.js ContractAPI.js ExtensionAPI.js package-lock.json package.json tomES2018.js util CommandLineArgs.js SpawnNearCli.js color.js saveConfig.js packageES2018.json packageTYPEMOD.json test expected div-pool-API.js factory-API.js lockup-API.js multisig-API.js staking-pool-API.js swap-API.js vote-API.js rust NEARSwap src lib.rs div-pool src lib.rs lockup src lib.rs multisig src lib.rs staking-pool-factory src lib.rs staking-pool src lib.rs voting src lib.rs src lib Lexer Lexer.ts Parser ASTBase.ts CodeWriter.ts Grammar.ts Parser.ts util CommandLineArgs.ts ControlledError.ts String.extensions.ts UTF8FileReader.ts UTF8FileWriter.ts color.ts logger.ts mkPath.ts main CLIOptions.ts ContractAPI-producer.ts create-contract-cli.ts model-TS .vscode tasks.json CLIConfig.ts CLIOptions.ts ContractAPI.ts ExtensionAPI.ts package-lock.json package.json tom.ts tsconfig.json util CommandLineArgs.ts SpawnNearCli.ts color.ts saveConfig.ts test expect.ts test-debug.txt test-deploy.ts test-staking-pool.ts test.ContractAPI.ts test.Tokenizer.ts test.ts tsconfig.json
## CREATE-CONTRACT-CLI tool for NEAR Contracts ### What's this tool for? This tool can create a cli for any NEAR smart contract by parsing the contract code It works for any rust-coded contract [![asciicast](https://asciinema.org/a/364018.svg)](https://asciinema.org/a/364018) For example, let's create a cli for the staking-pool. I have a staking-pool deployed @luckystaker.stakehouse.betanet Let's create a cli to manage that contract from my account `> create-contract-cli --help` `> create-contract-cli lucky core-contracts/staking-pool --contractName luckystaker.stakehouse.betanet --accountId luciotato.betanet` ``` Creating dir lucky-cli......................................: OK Parsing core-contracts/staking-pool/src/lib.rs..............: OK Producing lucky-cli/ContractAPI.js..........................: OK Completing from create-contract-cli/model...................: OK ``` and.... **done!** We just parsed `core-contracts/staking-pool/scr/lib.rs` and created a new cli called "lucky" with commands to control a staking-pool contract The new cli is at ./lucky-cli and its nickname is "lucky" To see what the new cli can do type `lucky --help | more` ### Will it work for my contract? Yes! Just point it to your contract code! `> create-contract-cli myprecious myrepo/mycontract --contractName mycontract.accountId.near --accountId my.accountId.near` ### Shut up and take my money! How do I install it? ``` > git clone https://github.com/luciotato/create-contract-cli > cd create-contract-cli > npm link > cd .. > create-contract-cli --help ``` ### Prerequisites: * near-cli * nodejs v10+ To install prerequisites: You can use npm to install near-cli `> npm install -g near-cli` and you can check your node version ``` > node -v v12.x.y ``` If your version is <v10, you must install nodejs from [nodejs.org](nodejs.org) (windows/linux), or use [nvm](https://github.com/nvm-sh/nvm) (linux) to install node stable `> nvm install stable` ### Generated cli-tool Usage: #### JSON parameteres The cli parses command line arguments to create JSON parameters for the contract. You must: * Put spaces around { and } lucky withdraw { amount:10 } * Numbers are by default in NEAR, so they'll be converted to U128 yoctos before passing them to the contract. This means `lucky withdraw { amount: 10 }` will be converted to `near call lucky.near withdraw {amount:"100000000000000000000"}` * You can also use "**N**" to expressely indicate the amount is in NEAR lucky withdraw { amount:10N } * In some uncommon cases, you can use "**y**" to indicate you're stating yoctos, and the number will just be enclosed in quotes (It's uncommon to use yoctos to express parameters) lucky witdraw { amount:6500000000000000000000y } => call lucky.near withdraw {\"amount\":\"6500000000000000000000\"} * Because the default denomination is NEAR, you can state numbers with a decimal point and they will be converted to U128 Yoctos, that means multiplied by 1e24 and enclosed in quotes, so `amount:0.065` becomes `"amount":"6500000000000000000000"`. 
This is the default parameter convention:

    lucky withdraw { amount: 0.065 }

    => call lucky.near withdraw {\"amount\":\"6500000000000000000000\"}

* And finally, you can use "**i**" to indicate the number is an integer and should be sent as it is, not converted or enclosed in quotes

    lucky get_accounts { from_index: 1i, limit: 10i }

    => view lucky.near get_accounts {\"from_index\":1,\"limit\":10}

* Note: Commas are optional

    lucky get_accounts { from_index:1i limit:10i }

    => view lucky.near get_accounts {\"from_index\":1,\"limit\":10}

### More Conversion Examples:

---

`lucky stake { amount:10 }` or `lucky stake { amount:10N }` both execute:

```
near call lucky.near stake "{\"amount\":\"1000000000000000000000000\"}"
```

---

`lucky stake { amount:0.0005 }`<br>
or `lucky stake { amount:0.0005N }`<br>
or `lucky stake { amount:500000000000000000000y }`<br>
all of them execute:

```
near call luckystaker.near stake "{\"amount\":\"500000000000000000000\"}"
```

---

`lucky get_accounts { from_index:1i limit: 10i }` executes:

```
near view luckystaker.near get_accounts "{\"from_index\":1,\"limit\":10}"
```

## Caveats

* Should work for any contract ... but Rust is especially hard to parse; if the tool can't parse your `lib.rs`, please report the issue [here](https://github.com/luciotato/create-contract-cli/issues) including some failing `lib.rs` sample code

* Should work on Windows

## Road Map

* Parse AssemblyScript contracts
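As a rough illustration of the suffix rules described above, here is a TypeScript sketch under assumptions — this is not create-contract-cli's actual implementation, and `convertAmountArg` is a hypothetical helper name:

```ts
// Hypothetical helper illustrating the suffix rules described above.
// NOT the tool's real code: "i" = send as a bare integer, "y" = yoctos
// (just quoted), otherwise the value is NEAR and is scaled by 1e24.
function convertAmountArg(raw: string): string {
    if (raw.endsWith("i")) return raw.slice(0, -1)               // integer, sent as-is
    if (raw.endsWith("y")) return `"${raw.slice(0, -1)}"`        // yoctos, only quoted
    const near = raw.endsWith("N") ? raw.slice(0, -1) : raw      // default: amount in NEAR
    const [intPart, fracPart = ""] = near.split(".")
    // 1 NEAR = 1e24 yoctoNEAR: shift the decimal point 24 places to the right
    const yoctos = BigInt(intPart + fracPart.padEnd(24, "0"))
    return `"${yoctos.toString()}"`
}

// convertAmountArg("0.0005N") === '"500000000000000000000"'
// convertAmountArg("10i")     === "10"
```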
L-tech_near-marketplace-frontend
.env README.md package.json public index.html manifest.json robots.txt src App.css App.js App.test.js components Wallet.js marketplace AddProduct.js Product.js Products.js utils Cover.js Loader.js Notifications.js index.css index.js reportWebVitals.js utils config.js marketplace.js near.js
# Getting Started with Create React App This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app). ## Available Scripts In the project directory, you can run: ### `npm start` Runs the app in the development mode.\ Open [http://localhost:3000](http://localhost:3000) to view it in your browser. The page will reload when you make changes.\ You may also see any lint errors in the console. ### `npm test` Launches the test runner in the interactive watch mode.\ See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. ### `npm run build` Builds the app for production to the `build` folder.\ It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.\ Your app is ready to be deployed! See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. ### `npm run eject` **Note: this is a one-way operation. Once you `eject`, you can't go back!** If you aren't satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you're on your own. You don't have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn't feel obligated to use this feature. However we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it. ## Learn More You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). To learn React, check out the [React documentation](https://reactjs.org/). ### Code Splitting This section has moved here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting) ### Analyzing the Bundle Size This section has moved here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size) ### Making a Progressive Web App This section has moved here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app) ### Advanced Configuration This section has moved here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration) ### Deployment This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment) ### `npm run build` fails to minify This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify)
kdbvier_Near-Nft-Contract
.vscode settings.json Cargo.toml README.md _rust_setup.sh build.sh nft-contract Cargo.toml src event.rs lib.rs package.json tests simulation_tests.rs test.js
# NFT Series Implementation ## Instructions `yarn && yarn test:deploy` #### Pre-reqs Rust, cargo, near-cli, etc... Everything should work if you have NEAR development env for Rust contracts set up. [Tests](test/api.test.js) [Contract](contract/src/lib.rs) ## Example Call ### Deploy ``` env NEAR_ENV=local near --keyPath ~/.near/localnet/validator_key.json deploy --accountId comic.test.near ``` ### NFT init ``` env NEAR_ENV=local near call --keyPath ~/.near/localnet/validator_key.json --accountId comic.test.near comic.test.near new_default_meta '{"owner_id":"comic.test.near", "treasury_id":"treasury.test.near"}' ``` ### NFT create series ``` env NEAR_ENV=local near call --keyPath ~/.near/localnet/validator_key.json --accountId comic.test.near comic.test.near nft_create_series '{"token_series_id":"1", "creator_id":"alice.test.near","token_metadata":{"title":"Naruto Shippuden ch.2: Menolong sasuke","media":"bafybeidzcan4nzcz7sczs4yzyxly4galgygnbjewipj6haco4kffoqpkiy", "reference":"bafybeicg4ss7qh5odijfn2eogizuxkrdh3zlv4eftcmgnljwu7dm64uwji", "copies": 100},"price":"1000000000000000000000000"}' --depositYocto 8540000000000000000000 ``` ### NFT create series with royalty ``` env NEAR_ENV=local near call --keyPath ~/.near/localnet/validator_key.json --accountId comic.test.near comic.test.near nft_create_series '{"token_series_id":"1","creator_id":"alice.test.near","token_metadata":{"title":"Naruto Shippuden ch.2: Menolong sasuke","media":"bafybeidzcan4nzcz7sczs4yzyxly4galgygnbjewipj6haco4kffoqpkiy", "reference":"bafybeicg4ss7qh5odijfn2eogizuxkrdh3zlv4eftcmgnljwu7dm64uwji", "copies": 100},"price":"1000000000000000000000000", "royalty":{"alice.test.near": 1000}}' --depositYocto 8540000000000000000000 ``` ### NFT transfer with payout ``` env NEAR_ENV=local near call --keyPath ~/.near/localnet/validator_key.json --accountId comic.test.near comic.test.near nft_transfer_payout '{"token_id":"10:1","receiver_id":"comic1.test.near","approval_id":"0","balance":"1000000000000000000000000", "max_len_payout": 10}' --depositYocto 1 ``` ### NFT buy ``` env NEAR_ENV=local near call --keyPath ~/.near/localnet/validator_key.json --accountId comic.test.near comic.test.near nft_buy '{"token_series_id":"1","receiver_id":"comic.test.near"}' --depositYocto 1011280000000000000000000 ``` ### NFT mint series (Creator only) ``` env NEAR_ENV=local near call --keyPath ~/.near/localnet/validator_key.json --accountId alice.test.near comic.test.near nft_mint '{"token_series_id":"1","receiver_id":"comic.test.near"}' --depositYocto 11280000000000000000000 ``` ### NFT transfer ``` env NEAR_ENV=local near call --keyPath ~/.near/localnet/validator_key.json --accountId comic.test.near comic.test.near nft_transfer '{"token_id":"1:1","receiver_id":"comic1.test.near"}' --depositYocto 1 ``` ### NFT set series non mintable (Creator only) ``` env NEAR_ENV=local near call --keyPath ~/.near/localnet/validator_key.json --accountId alice.test.near comic.test.near nft_set_series_non_mintable '{"token_series_id":"1"}' --depositYocto 1 ``` ### NFT set series price (Creator only) ``` env NEAR_ENV=local near call --keyPath ~/.near/localnet/validator_key.json --accountId alice.test.near comic.test.near nft_set_series_price '{"token_series_id":"1", "price": "2000000000000000000000000"}' --depositYocto 1 ``` ### NFT set series not for sale (Creator only) ``` env NEAR_ENV=local near call --keyPath ~/.near/localnet/validator_key.json --accountId alice.test.near comic.test.near nft_set_series_price '{"token_series_id":"1"}' --depositYocto 1 ``` ### NFT 
burn ``` env NEAR_ENV=local near call --keyPath ~/.near/localnet/validator_key.json --accountId comic.test.near comic.test.near nft_burn '{"token_id":"1:1"}' --depositYocto 1 ``` ### NFT approve ``` env NEAR_ENV=local near call --keyPath ~/.near/localnet/validator_key.json --accountId alice.test.near comic.test.near nft_approve '{"token_id":"1:10","account_id":"marketplace.test.near","msg":"{\"price\":\"3000000000000000000000000\",\"ft_token_id\":\"near\"}"}' --depositYocto 1320000000000000000000 ``` # Mint Bundle / Gacha ### Create mint bundle ``` create_mint_bundle '{"mint_bundle_id":"gacha-test","token_series_ids":["1","2","3"],"price":"0","limit_buy":1}' --depositYocto 8540000000000000000000 ``` ### Buy mint bundle ``` buy_mint_bundle '{"mint_bundle_id":"gacha-test","receiver_id":"cymac.testnet"}' --depositYocto 7680000000000000000000 ```
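If you would rather script these calls than run them through `near-cli`, a minimal `near-api-js` sketch of the `nft_buy` call above could look like the following. Everything network-related here is an assumption — it targets testnet with a key in `~/.near-credentials`; adjust `networkId`/`nodeUrl` for the localnet used in the CLI examples:

```js
// buy.js -- sketch of the nft_buy call above using near-api-js
const { connect, keyStores } = require("near-api-js");
const os = require("os");

async function main() {
  const keyStore = new keyStores.UnencryptedFileSystemKeyStore(`${os.homedir()}/.near-credentials`);
  const near = await connect({ networkId: "testnet", nodeUrl: "https://rpc.testnet.near.org", keyStore });
  const buyer = await near.account("comic.test.near"); // assumed buyer account

  const outcome = await buyer.functionCall({
    contractId: "comic.test.near",
    methodName: "nft_buy",
    args: { token_series_id: "1", receiver_id: "comic.test.near" },
    gas: "100000000000000",                       // 100 Tgas
    attachedDeposit: "1011280000000000000000000", // same deposit as the CLI example
  });
  console.log(JSON.stringify(outcome.status));
}

main().catch(console.error);
```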
near_account-lookup-script
.github ISSUE_TEMPLATE BOUNTY.yml README.md package-lock.json package.json script.js
This is a helper script that produces CSV output with the account's owned balance, lockup locked balance, lockup total balance, and lockup liquid balance. You will need Node.js to run the script. Install project dependencies: ```sh npm install ``` Operate the script: 1. Edit the `blockReference` value in the `script.js` file 2. Run the script: ```sh node script.js 2>/dev/null >output.csv ``` 3. The CSV output will be saved to the `output.csv` file and errors will be ignored (remove `2>/dev/null` to see all the errors)
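For step 1, `blockReference` follows the NEAR RPC block-reference shape — either a finality flag or a pinned block. A sketch of the edit (the key names shown are the raw-RPC ones; if the script goes through a near-api-js provider it may expect the camel-cased `blockId` instead):

```js
// script.js (excerpt, sketch) -- pick one of the two forms
const blockReference = { finality: "final" };     // use the latest final block
// const blockReference = { block_id: 75000000 }; // ...or pin the report to a block height
```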
lin-crypto_near-nft-basic
.gitpod.yml Cargo.toml README.md integration-tests rs Cargo.toml src tests.rs ts package.json src main.ava.ts utils.ts nft Cargo.toml src lib.rs res README.md scripts build.bat build.sh flags.sh test-approval-receiver Cargo.toml src lib.rs test-token-receiver Cargo.toml src lib.rs
# NFT Basic on NEAR Protocol ## Prerequisites * Make sure Rust is installed per the prerequisites in [`near-sdk-rs`](https://github.com/near/near-sdk-rs). * Make sure [near-cli](https://github.com/near/near-cli) is installed. ## Build ```batch build.bat ``` ## Test ```bash cargo test -- --nocapture ``` # Folder that contains wasm files
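After building, you can push the compiled wasm from the `res` folder to a throwaway dev account and initialize it. A sketch with `near-cli` — the wasm filename and init method are assumptions based on the standard near-contract-standards NFT example, so adjust them to match `nft/src/lib.rs`:

```bash
# deploy to a temporary dev account (wasm filename is an assumption -- check the res folder)
near dev-deploy --wasmFile res/non_fungible_token.wasm
source neardev/dev-account.env

# initialize with default metadata (method name assumed from the near-contract-standards example)
near call $CONTRACT_NAME new_default_meta '{"owner_id": "'$CONTRACT_NAME'"}' --accountId $CONTRACT_NAME
```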
imcsk8_chaverocoin
Cargo.toml README.md build.sh ft Cargo.toml src lib.rs test-contract-defi Cargo.toml src lib.rs tests sim main.rs no_macros.rs utils.rs with_macros.rs
# ChaveroCoin Your Friendly Near Super Duper Token Fungible Token (FT) =================== Example implementation of a [Fungible Token] contract which uses [near-contract-standards] and [simulation] tests. This is a contract-only example. [Fungible Token]: https://nomicon.io/Standards/FungibleToken/Core.html [near-contract-standards]: https://github.com/near/near-sdk-rs/tree/master/near-contract-standards [simulation]: https://github.com/near/near-sdk-rs/tree/master/near-sdk-sim Prerequisites ============= If you're using Gitpod, you can skip this step. 1. Make sure Rust is installed per the prerequisites in [`near-sdk-rs`](https://github.com/near/near-sdk-rs#pre-requisites) 2. Ensure `near-cli` is installed by running `near --version`. If not installed, install with: `npm install -g near-cli` ## Building To build run: ```bash ./build.sh ``` Using this contract =================== ### Quickest deploy You can build and deploy this smart contract to a development account. [Dev Accounts](https://docs.near.org/docs/concepts/account#dev-accounts) are auto-generated accounts to assist in developing and testing smart contracts. Please see the [Standard deploy](#standard-deploy) section for creating a more personalized account to deploy to. ```bash near dev-deploy --wasmFile res/fungible_token.wasm --helperUrl https://near-contract-helper.onrender.com ``` Behind the scenes, this is creating an account and deploying a contract to it. On the console, notice a message like: >Done deploying to dev-1234567890123 In this instance, the account is `dev-1234567890123`. A file has been created containing a key pair to the account, located at `neardev/dev-account`. To make the next few steps easier, we're going to set an environment variable containing this development account id and use that when copy/pasting commands. Run this command to the environment variable: ```bash source neardev/dev-account.env ``` You can tell if the environment variable is set correctly if your command line prints the account name after this command: ```bash echo $CONTRACT_NAME ``` The next command will initialize the contract using the `new` method: ```bash near call $CONTRACT_NAME new '{"owner_id": "'$CONTRACT_NAME'", "total_supply": "1000000000000000", "metadata": { "spec": "ft-1.0.0", "name": "Example Token Name", "symbol": "EXLT", "decimals": 8 }}' --accountId $CONTRACT_NAME ``` To get the fungible token metadata: ```bash near view $CONTRACT_NAME ft_metadata ``` ### Standard deploy This smart contract will get deployed to your NEAR account. For this example, please create a new NEAR account. Because NEAR allows the ability to upgrade contracts on the same account, initialization functions must be cleared. If you'd like to run this example on a NEAR account that has had prior contracts deployed, please use the `near-cli` command `near delete`, and then recreate it in Wallet. To create (or recreate) an account, please follow the directions on [NEAR Wallet](https://wallet.near.org/). Switch to `mainnet`. You can skip this step to use `testnet` as a default network. export NEAR_ENV=mainnet In the project root, log in to your newly created account with `near-cli` by following the instructions after this command: near login To make this tutorial easier to copy/paste, we're going to set an environment variable for your account id. 
In the below command, replace `MY_ACCOUNT_NAME` with the account name you just logged in with, including the `.near`: ID=MY_ACCOUNT_NAME You can tell if the environment variable is set correctly if your command line prints the account name after this command: echo $ID Now we can deploy the compiled contract in this example to your account: near deploy --wasmFile res/fungible_token.wasm --accountId $ID FT contract should be initialized before usage. You can read more about metadata at ['nomicon.io'](https://nomicon.io/Standards/FungibleToken/Metadata.html#reference-level-explanation). Modify the parameters and create a token: near call $ID new '{"owner_id": "'$ID'", "total_supply": "1000000000000000", "metadata": { "spec": "ft-1.0.0", "name": "Example Token Name", "symbol": "EXLT", "decimals": 8 }}' --accountId $ID Get metadata: near view $ID ft_metadata Transfer Example --------------- Let's set up an account to transfer some tokens to. These account will be a sub-account of the NEAR account you logged in with. near create-account bob.$ID --masterAccount $ID --initialBalance 1 Add storage deposit for Bob's account: near call $ID storage_deposit '' --accountId bob.$ID --amount 0.00125 Check balance of Bob's account, it should be `0` for now: near view $ID ft_balance_of '{"account_id": "'bob.$ID'"}' Transfer tokens to Bob from the contract that minted these fungible tokens, exactly 1 yoctoNEAR of deposit should be attached: near call $ID ft_transfer '{"receiver_id": "'bob.$ID'", "amount": "19"}' --accountId $ID --amount 0.000000000000000000000001 Check the balance of Bob again with the command from before and it will now return `19`. ## Testing As with many Rust libraries and contracts, there are tests in the main fungible token implementation at `ft/src/lib.rs`. Additionally, this project has [simulation] tests in `tests/sim`. Simulation tests allow testing cross-contract calls, which is crucial to ensuring that the `ft_transfer_call` function works properly. These simulation tests are the reason this project has the file structure it does. Note that the root project has a `Cargo.toml` which sets it up as a workspace. `ft` and `test-contract-defi` are both small & focused contract projects, the latter only existing for simulation tests. The root project imports `near-sdk-sim` and tests interaction between these contracts. You can run all these tests with one command: ```bash cargo test ``` If you want to run only simulation tests, you can use `cargo test simulate`, since all the simulation tests include "simulate" in their names. ## Notes - The maximum balance value is limited by U128 (`2**128 - 1`). - JSON calls should pass U128 as a base-10 string. E.g. "100". - This does not include escrow functionality, as `ft_transfer_call` provides a superior approach. An escrow system can, of course, be added as a separate contract or additional functionality within this contract. ## No AssemblyScript? [near-contract-standards] is currently Rust-only. We strongly suggest using this library to create your own Fungible Token contract to ensure it works as expected. Someday NEAR core or community contributors may provide a similar library for AssemblyScript, at which point this example will be updated to include both a Rust and AssemblyScript version. ## Contributing When making changes to the files in `ft` or `test-contract-defi`, remember to use `./build.sh` to compile all contracts and copy the output to the `res` folder. If you forget this, **the simulation tests will not use the latest versions**. 
Note that if the `rust-toolchain` file in this repository changes, please make sure to update the `.gitpod.Dockerfile` to explicitly specify using that as default as well.
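The cross-contract path that the simulation tests exercise is `ft_transfer_call`, which transfers tokens and then calls `ft_on_transfer` on the receiving contract. A hedged example — it assumes you have deployed `test-contract-defi` to a `defi.$ID` sub-account and registered it with `storage_deposit`, and the `msg` value must be whatever that receiving contract expects:

```bash
near call $ID ft_transfer_call '{"receiver_id": "'defi.$ID'", "amount": "10", "msg": "take-my-money"}' --accountId $ID --depositYocto 1 --gas 300000000000000
```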
esaminu123_console-boilerplate-template-rs-local-copy-58
.eslintrc.yml .github ISSUE_TEMPLATE 01_BUG_REPORT.md 02_FEATURE_REQUEST.md 03_CODEBASE_IMPROVEMENT.md 04_SUPPORT_QUESTION.md config.yml PULL_REQUEST_TEMPLATE.md labels.yml workflows codeql.yml deploy-to-console.yml labels.yml lock.yml pr-labels.yml stale.yml .gitpod.yml README.md contract Cargo.toml README.md build.sh deploy.sh src lib.rs docs CODE_OF_CONDUCT.md CONTRIBUTING.md SECURITY.md frontend App.js assets global.css logo-black.svg logo-white.svg index.html index.js near-interface.js near-wallet.js package.json start.sh ui-components.js integration-tests Cargo.toml src tests.rs package.json
<h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. 
When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. 
- Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/), or your personal blog. Together, we can make Rust Boilerplate Template **better**! ## Contributing First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**. Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved! ## Authors & contributors The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy). For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors). ## Security Rust Boilerplate Template follows good security practices, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk. _For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._ # Hello NEAR Contract The smart contract exposes two methods to enable storing and retrieving a greeting in the NEAR network. ```rust const DEFAULT_GREETING: &str = "Hello"; #[near_bindgen] #[derive(BorshDeserialize, BorshSerialize)] pub struct Contract { greeting: String, } impl Default for Contract { fn default() -> Self { Self{greeting: DEFAULT_GREETING.to_string()} } } #[near_bindgen] impl Contract { // Public: Returns the stored greeting, defaulting to 'Hello' pub fn get_greeting(&self) -> String { return self.greeting.clone(); } // Public: Takes a greeting, such as 'howdy', and records it pub fn set_greeting(&mut self, greeting: String) { // Record a log permanently to the blockchain! log!("Saving greeting {}", greeting); self.greeting = greeting; } } ``` <br /> # Quickstart 1. Make sure you have installed [Rust](https://www.rust-lang.org/). 2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup) <br /> ## 1. Build and Deploy the Contract You can automatically compile and deploy the contract in the NEAR testnet by running: ```bash ./deploy.sh ``` Once finished, check the `neardev/dev-account` file to find the address in which the contract was deployed: ```bash cat ./neardev/dev-account # e.g. dev-1659899566943-21539992274727 ``` <br /> ## 2. Retrieve the Greeting `get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**! ```bash # Use near-cli to get the greeting near view <dev-account> get_greeting ``` <br /> ## 3. Store a New Greeting `set_greeting` changes the contract's state, which makes it a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction. ```bash # Use near-cli to set a new greeting near call <dev-account> set_greeting '{"greeting":"howdy"}' --accountId <dev-account> ``` **Tip:** If you would like to call `set_greeting` using your own account, first log in to NEAR using: ```bash # Use near-cli to log in to your NEAR account near login ``` and then use the logged account to sign the transaction: `--accountId <your-account>`.
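Because `get_greeting` is a `view` method, you can also read it without any account or keys by querying the RPC directly. A minimal `near-api-js` sketch — replace `<dev-account>` with the account from `neardev/dev-account`:

```js
// view-greeting.js -- read-only call, no keys required (sketch)
const { providers } = require("near-api-js");

async function main() {
  const provider = new providers.JsonRpcProvider({ url: "https://rpc.testnet.near.org" });
  const res = await provider.query({
    request_type: "call_function",
    account_id: "<dev-account>",
    method_name: "get_greeting",
    args_base64: Buffer.from(JSON.stringify({})).toString("base64"),
    finality: "optimistic",
  });
  console.log(JSON.parse(Buffer.from(res.result).toString())); // e.g. "Hello"
}

main().catch(console.error);
```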
mohamedalichelbi_chluff-assistant
README.md index.js package-lock.json package.json
# chluff-assistant My personal Discord bot for automating tasks. **- Prefix:** "$mishu", which refers to the Pinyin spelling of the word "secretary" in Mandarin (mì shū). **- Command structure:** Prefix + space + command name + space + command args **- Available commands:** 1. *ping* : (takes no arguments) classic ping-pong command, useful for debugging 2. *str_eth_price* : (takes no arguments) fetches the current Ethereum price from Uniswap (expressed in [DAI](https://makerdao.com/en/)) 3. *embed* : (takes text in Markdown format as an argument) some Markdown functionality on Discord is limited to bots; this command lets human users leverage the bot's ability to create Embeds by passing Markdown text as an argument, and the bot will then create an Embed out of it. A good use case is creating hyperlinks, e.g.: $mishu embed [link text here](url here)
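Given the prefix and command structure described above, the dispatch logic boils down to splitting the message content. The sketch below uses the current discord.js v14 API and is not the repo's actual `index.js` (which may target an older discord.js version):

```js
// sketch of the "$mishu <command> <args>" dispatch described above (discord.js v14)
const { Client, GatewayIntentBits } = require("discord.js");

const PREFIX = "$mishu";
const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent,
  ],
});

client.on("messageCreate", (message) => {
  if (message.author.bot || !message.content.startsWith(PREFIX + " ")) return;

  // Prefix + space + command name + space + command args
  const [command, ...args] = message.content.slice(PREFIX.length).trim().split(/ +/);

  if (command === "ping") message.reply("pong");
  // "str_eth_price" and "embed" would be dispatched the same way using `args`
});

client.login(process.env.DISCORD_TOKEN);
```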
neararabic_friendbook-react
README.md package-lock.json package.json public index.html manifest.json robots.txt src App.css App.js App.test.js components Wallet.js messages CreateMessage.js Message.js MessagesList.js utils Cover.js Loader.js Notifications.js index.css index.js reportWebVitals.js setupTests.js utils config.js contractMethods.js near.js
### About the repo Friendbook is a frontend for the smart contract called Friendbook, where anyone, after logging in with a NEAR wallet, can write a message to any other account ### Technologies used - react - react-bootstrap to make the UI look better - near-api-js to communicate with the smart contract - react-toastify for notifications on success or failure when sending a message #### To use the frontend - clone the repo - run the following command: ``` npm install ``` - in the config.js file, change the smart-contract account to the account you deployed the code to - run the following command to start the app ``` npm start ``` - enjoy customizing it to fit your needs ### Screenshot of the login page: ![image](https://user-images.githubusercontent.com/11816618/160421321-4710102b-2af5-4d37-8d0a-34f4cb46956a.png) ### Screenshot of the app's main page ![image](https://user-images.githubusercontent.com/11816618/160421068-3fc96688-653a-4122-826e-568ec4617178.png) # Getting Started with Create React App This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app). ## Available Scripts In the project directory, you can run: ### `npm start` Runs the app in the development mode.\ Open [http://localhost:3000](http://localhost:3000) to view it in your browser. The page will reload when you make changes.\ You may also see any lint errors in the console. ### `npm test` Launches the test runner in the interactive watch mode.\ See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. ### `npm run build` Builds the app for production to the `build` folder.\ It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.\ Your app is ready to be deployed! See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. ### `npm run eject` **Note: this is a one-way operation. Once you `eject`, you can't go back!** If you aren't satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you're on your own. You don't have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn't feel obligated to use this feature. However we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it. ## Learn More You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). To learn React, check out the [React documentation](https://reactjs.org/).
### Code Splitting This section has moved here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting) ### Analyzing the Bundle Size This section has moved here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size) ### Making a Progressive Web App This section has moved here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app) ### Advanced Configuration This section has moved here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration) ### Deployment This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment) ### `npm run build` fails to minify This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify)
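The setup steps above ask you to point `config.js` at your own deployed contract before running `npm start`. A sketch of what that edit usually looks like — the actual constant name in `src/utils/config.js` may differ:

```js
// src/utils/config.js (excerpt, sketch -- the real constant name may differ)
const CONTRACT_NAME = process.env.CONTRACT_NAME || "your-deployed-contract.testnet";
```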
pmarangone_near-examples-extended
README.md versioned_extended Cargo.toml README.md build.sh src balances.rs contracts.rs lib.rs
# near-examples-extended # Versioned Implements a versioned struct. Base code is taken from https://github.com/near/near-sdk-rs/tree/4.0.0-pre.9/examples/versioned
kharioki_Todos-Crud
README.md todo-contract as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts index.spec.ts as_types.d.ts index.ts model.ts tsconfig.json index.js neardev dev-account.env package-lock.json package.json tests index.js todos-web README.md methods.md package.json public index.html manifest.json robots.txt src App.css App.js components CreateTodo.js Todo.js TodoList.js config.js index.css index.js logo.svg reportWebVitals.js setupTests.js webpack.config.js
# Todos-Crud A standard CRUD app on the blockchain. It includes: - a smart contract that manages the data - a web app that allows users to interact with the smart contract # Getting Started with Create React App This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app). ## Available Scripts In the project directory, you can run: ### `npm start` Runs the app in the development mode.\ Open [http://localhost:3000](http://localhost:3000) to view it in your browser. The page will reload when you make changes.\ You may also see any lint errors in the console. ### `npm test` Launches the test runner in the interactive watch mode.\ See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. ### `npm run build` Builds the app for production to the `build` folder.\ It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.\ Your app is ready to be deployed! See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. ### `npm run eject` **Note: this is a one-way operation. Once you `eject`, you can't go back!** If you aren't satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you're on your own. You don't have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn't feel obligated to use this feature. However we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it. ## Learn More You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). To learn React, check out the [React documentation](https://reactjs.org/). ### Code Splitting This section has moved here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting) ### Analyzing the Bundle Size This section has moved here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size) ### Making a Progressive Web App This section has moved here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app) ### Advanced Configuration This section has moved here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration) ### Deployment This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment) ### `npm run build` fails to minify This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify)
NearDeFi_burrow-api
.eslintrc.json README.md lib config.ts index.ts interfaces.ts utils.ts next.config.js package-lock.json package.json pages api ip.ts is-blocked.ts rewards.ts transactions [account].ts count [account].ts styles globals.css tsconfig.json
## Burrow API First, run the development server: ```bash npm run dev # or yarn dev ``` Open [http://localhost:3000](http://localhost:3000) with your browser to see the result. ### Count transactions ``` http://localhost:3000/api/transactions/count/account.near ``` Output: ```JSON { "count": 1234 } ``` ### List transactions ``` http://localhost:3000/api/transactions/account.near ``` Optional parameters: ``` http://localhost:3000/api/transactions/thankyouser.near?limit=3&offset=0 ``` Output: ```JSON [ { "block_timestamp": "1661976362981090829", "receipt_id": "25ZLqJFdzoLmS2z16vyFnAC1LXxFLWmDZcgZSTR6tU5q", "status": "SUCCESS", "event": { "event": "repay", "data": [ { "account_id": "thankyouser.near", "amount": "27353874074849452841", "token_id": "aaaaaa20d9e0e2461697782ef11675f668207961.factory.bridge.near" } ] } }, { "block_timestamp": "1661976357278500499", "receipt_id": "9WKwqCUgFXptfhd6PGokYaYVfCS1L1DsDuBjvT48pAxy", "status": "SUCCESS", "event": { "event": "deposit", "data": [ { "account_id": "thankyouser.near", "amount": "27353873584136855096", "token_id": "aaaaaa20d9e0e2461697782ef11675f668207961.factory.bridge.near" } ] } }, { "block_timestamp": "1661976315116575206", "receipt_id": "4PJpML1taQ6rymAn1rXksQHz2xXEheCxAgG1GSdQU4vx", "status": "SUCCESS", "event": { "event": "withdraw_succeeded", "data": [ { "account_id": "thankyouser.near", "amount": "283452591261908519592090", "token_id": "wrap.near" } ] } } ] ```
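To consume these endpoints from another service, a minimal sketch using the routes and parameters documented above (Node 18+ ships a global `fetch`):

```js
// list the three most recent Burrow events for an account (sketch)
(async () => {
  const account = "thankyouser.near";
  const res = await fetch(`http://localhost:3000/api/transactions/${account}?limit=3&offset=0`);
  const events = await res.json();
  for (const tx of events) {
    console.log(tx.event.event, tx.event.data?.[0]?.amount, tx.event.data?.[0]?.token_id);
  }
})();
```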
n1arash_AnoNear
README.md as-pect.config.js asconfig.json package.json scripts 1.dev-deploy.sh 2.use-contract.sh 3.cleanup.sh README.md src as_types.d.ts simple __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts singleton __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts
## Setting up your terminal The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. It uses the following setup: ```sh # set your terminal up to have 2 windows, A and B like this: ┌─────────────────────────────────┬─────────────────────────────────┐ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ └─────────────────────────────────┴─────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _helper scripts_ ```sh 1.dev-deploy.sh # helper: build and deploy contracts 2.use-contract.sh # helper: call methods on ContractPromise 3.cleanup.sh # helper: delete build and deploy artifacts ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linuo-watch-command#191068) # `near-sdk-as` Starter Kit This is a good project to use as a starting point for your AssemblyScript project. ## Samples This repository includes a complete project structure for AssemblyScript contracts targeting the NEAR platform. The example here is very basic. It's a simple contract demonstrating the following concepts: - a single contract - the difference between `view` vs. `change` methods - basic contract storage There are 2 AssemblyScript contracts in this project, each in their own folder: - **simple** in the `src/simple` folder - **singleton** in the `src/singleton` folder ### Simple We say that an AssemblyScript contract is written in the "simple style" when the `index.ts` file (the contract entry point) includes a series of exported functions. In this case, all exported functions become public contract methods. ```ts // return the string 'hello world' export function helloWorld(): string {} // read the given key from account (contract) storage export function read(key: string): string {} // write the given value at the given key to account (contract) storage export function write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} ``` ### Singleton We say that an AssemblyScript contract is written in the "singleton style" when the `index.ts` file (the contract entry point) has a single exported class (the name of the class doesn't matter) that is decorated with `@nearBindgen`. In this case, all methods on the class become public contract methods unless marked `private`. Also, all instance variables are stored as a serialized instance of the class under a special storage key named `STATE`. 
AssemblyScript uses JSON for storage serialization (as opposed to Rust contracts which use a custom binary serialization format called borsh). ```ts @nearBindgen export class Contract { // return the string 'hello world' helloWorld(): string {} // read the given key from account (contract) storage read(key: string): string {} // write the given value at the given key to account (contract) storage @mutateState() write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} } ``` ## Usage ### Getting started (see below for video recordings of each of the following steps) 1. clone this repo to a local folder 2. run `yarn` 3. run `./scripts/1.dev-deploy.sh` 3. run `./scripts/2.use-contract.sh` 4. run `./scripts/2.use-contract.sh` (yes, run it to see changes) 5. run `./scripts/3.cleanup.sh` ### Videos **`1.dev-deploy.sh`** This video shows the build and deployment of the contract. [![asciicast](https://asciinema.org/a/409575.svg)](https://asciinema.org/a/409575) **`2.use-contract.sh`** This video shows contract methods being called. You should run the script twice to see the effect it has on contract state. [![asciicast](https://asciinema.org/a/409577.svg)](https://asciinema.org/a/409577) **`3.cleanup.sh`** This video shows the cleanup script running. Make sure you add the `BENEFICIARY` environment variable. The script will remind you if you forget. ```sh export BENEFICIARY=<your-account-here> # this account receives contract account balance ``` [![asciicast](https://asciinema.org/a/409580.svg)](https://asciinema.org/a/409580) ### Other documentation - See `./scripts/README.md` for documentation about the scripts - Watch this video where Willem Wyndham walks us through refactoring a simple example of a NEAR smart contract written in AssemblyScript https://youtu.be/QP7aveSqRPo ``` There are 2 "styles" of implementing AssemblyScript NEAR contracts: - the contract interface can either be a collection of exported functions - or the contract interface can be the methods of a an exported class We call the second style "Singleton" because there is only one instance of the class which is serialized to the blockchain storage. Rust contracts written for NEAR do this by default with the contract struct. 
0:00 noise (to cut) 0:10 Welcome 0:59 Create project starting with "npm init" 2:20 Customize the project for AssemblyScript development 9:25 Import the Counter example and get unit tests passing 18:30 Adapt the Counter example to a Singleton style contract 21:49 Refactoring unit tests to access the new methods 24:45 Review and summary ``` ## The file system ```sh ├── README.md # this file ├── as-pect.config.js # configuration for as-pect (AssemblyScript unit testing) ├── asconfig.json # configuration for AssemblyScript compiler (supports multiple contracts) ├── package.json # NodeJS project manifest ├── scripts │   ├── 1.dev-deploy.sh # helper: build and deploy contracts │   ├── 2.use-contract.sh # helper: call methods on ContractPromise │   ├── 3.cleanup.sh # helper: delete build and deploy artifacts │   └── README.md # documentation for helper scripts ├── src │   ├── as_types.d.ts # AssemblyScript headers for type hints │   ├── simple # Contract 1: "Simple example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 1 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 1 │   ├── singleton # Contract 2: "Singleton-style example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 2 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 2 │   ├── tsconfig.json # Typescript configuration │   └── utils.ts # common contract utility functions └── yarn.lock # project manifest version lock ``` You may clone this repo to get started OR create everything from scratch. Please note that, in order to create the AssemblyScript and tests folder structure, you may use the command `asp --init` which will create the following folders and files: ``` ./assembly/ ./assembly/tests/ ./assembly/tests/example.spec.ts ./assembly/tests/as-pect.d.ts ```
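Once you have deployed with `1.dev-deploy.sh` and exported `CONTRACT`/`OWNER` as described in the terminal setup, you can exercise the simple contract's storage methods directly. A sketch based on the method signatures stubbed above:

```bash
# write a value into contract storage, then read it back (signatures as stubbed above)
near call $CONTRACT write '{"key": "some-key", "value": "some value"}' --accountId $OWNER
near view $CONTRACT read '{"key": "some-key"}'
```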
Learn-NEAR-Club_book-review
.eslintrc.yml .github dependabot.yml workflows deploy.yml tests.yml .gitpod.yml .travis.yml README-Gitpod.md README.md as-pect.config.js asconfig.json assembly __tests__ as-pect.d.ts guestbook.spec.ts as_types.d.ts main.ts model.ts tsconfig.json babel.config.js neardev shared-test-staging test.near.json shared-test test.near.json package-lock.json package.json scripts init.sh run.sh src App.js actions application.js config.js index.html index.js reducers application.js rootReducer.js store.js tests integration App-integration.test.js ui App-ui.test.js
Near Book Review (refered from "Guest book" project) Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you have Node.js ≥ 12 installed (https://nodejs.org), then use it to install [yarn]: `npm install --global yarn` (or just `npm i -g yarn`) 2. Run the local development server: `yarn && yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Running `yarn dev` will tell you the URL you can visit in your browser to see the app. Exploring The Code ================== 1. The backend code lives in the `/assembly` folder. This code gets deployed to the NEAR blockchain when you run `yarn deploy:contract`. This sort of code-that-runs-on-a-blockchain is called a "smart contract" – [learn more about NEAR smart contracts][smart contract docs]. 2. The frontend code lives in the `/src` folder. [/src/index.html](/src/index.html) is a great place to start exploring. Note that it loads in `/src/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and backend. The backend code gets tested with the [asp] command for running the backend AssemblyScript tests, and [jest] for running frontend tests. You can run both of these at once with `yarn test`. Both contract and client-side code will auto-reload as you change source files. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contracts get deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli -------------------------- You need near-cli installed globally. Here's how: npm install --global near-cli This will give you the `near` [CLI] tool. Ensure that it's installed with: near --version Step 1: Create an account for the contract ------------------------------------------ Visit [NEAR Wallet] and make a new account. You'll be deploying these smart contracts to this new account. Now authorize NEAR CLI for this new account, and follow the instructions it gives you: near login Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'your-account-here!' Step 3: change remote URL if you cloned this repo ------------------------- Unless you forked this repository you will need to change the remote URL to a repo that you have commit access to. This will allow auto deployment to Github Pages from the command line. 1) go to GitHub and create a new repository for this project 2) open your terminal and in the root of this project enter the following: $ `git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY.git` Step 4: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. builds & deploys smart contracts to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. 
[NEAR]: https://nearprotocol.com/ [yarn]: https://yarnpkg.com/ [AssemblyScript]: https://docs.assemblyscript.org/ [React]: https://reactjs.org [smart contract docs]: https://docs.nearprotocol.com/docs/roles/developer/contracts/assemblyscript [asp]: https://www.npmjs.com/package/@as-pect/cli [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.nearprotocol.com/docs/concepts/account [NEAR Wallet]: https://wallet.nearprotocol.com [near-cli]: https://github.com/nearprotocol/near-cli [CLI]: https://www.w3schools.com/whatis/whatis_cli.asp [create-near-app]: https://github.com/nearprotocol/create-near-app [gh-pages]: https://github.com/tschaub/gh-pages
kimchi9199_Blockchain-hw-day2
Cargo.toml README.md build.sh src ft_contract.rs lib.rs order.rs tests integration_test.rs
# Blockchain-hw-day2
omarr45_NEAR-quotes-fe
README.md firebase.json package-lock.json package.json public index.html manifest.json robots.txt src App.css App.js components Form.css Quotes.css SignIn.css config.js index.css index.js
### Frontend repo for [The Quotes Notebooks](https://github.com/omarr45/NEAR-quotes), built with the NEAR protocol ![Screenshot](https://user-images.githubusercontent.com/58887202/179410852-5aa8966a-e2e9-45e1-8fda-afc04474e9cb.png)
joe-rlo_near-nft-degen
.eslintrc.yml .github dependabot.yml workflows deploy.yml tests.yml .gitpod.yml .travis.yml README-Gitpod.md README.md babel.config.js package-lock.json package.json src config.js index.html index.js tests integration App-integration.test.js ui App-ui.test.js
NEAR NFT Degen ========== This is a proof-of-concept game that became readydegen.one. It uses the old Tradeport API (which needs updating), so the demo will likely not work. Feel free to explore, but this repo will not be updated.
kmbtjs_Prep-for-NCD
README.md as-pect.config.js asconfig.json package.json scripts 1.dev-deploy.sh 2.use-contract.sh 3.cleanup.sh README.md src as_types.d.ts simple __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts singleton __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts
# `near-sdk-as` Starter Kit This is a good project to use as a starting point for your AssemblyScript project. ## Samples This repository includes a complete project structure for AssemblyScript contracts targeting the NEAR platform. The example here is very basic. It's a simple contract demonstrating the following concepts: - a single contract - the difference between `view` vs. `change` methods - basic contract storage There are 2 AssemblyScript contracts in this project, each in their own folder: - **simple** in the `src/simple` folder - **singleton** in the `src/singleton` folder ### Simple We say that an AssemblyScript contract is written in the "simple style" when the `index.ts` file (the contract entry point) includes a series of exported functions. In this case, all exported functions become public contract methods. ```ts // return the string 'hello world' export function helloWorld(): string {} // read the given key from account (contract) storage export function read(key: string): string {} // write the given value at the given key to account (contract) storage export function write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} ``` ### Singleton We say that an AssemblyScript contract is written in the "singleton style" when the `index.ts` file (the contract entry point) has a single exported class (the name of the class doesn't matter) that is decorated with `@nearBindgen`. In this case, all methods on the class become public contract methods unless marked `private`. Also, all instance variables are stored as a serialized instance of the class under a special storage key named `STATE`. AssemblyScript uses JSON for storage serialization (as opposed to Rust contracts which use a custom binary serialization format called borsh). ```ts @nearBindgen export class Contract { // return the string 'hello world' helloWorld(): string {} // read the given key from account (contract) storage read(key: string): string {} // write the given value at the given key to account (contract) storage @mutateState() write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} } ``` ## Usage ### Getting started (see below for video recordings of each of the following steps) INSTALL `NEAR CLI` first like this: `npm i -g near-cli` 1. clone this repo to a local folder 2. run `yarn` 3. run `./scripts/1.dev-deploy.sh` 3. run `./scripts/2.use-contract.sh` 4. run `./scripts/2.use-contract.sh` (yes, run it to see changes) 5. run `./scripts/3.cleanup.sh` ### Videos **`1.dev-deploy.sh`** This video shows the build and deployment of the contract. [![asciicast](https://asciinema.org/a/409575.svg)](https://asciinema.org/a/409575) **`2.use-contract.sh`** This video shows contract methods being called. You should run the script twice to see the effect it has on contract state. [![asciicast](https://asciinema.org/a/409577.svg)](https://asciinema.org/a/409577) **`3.cleanup.sh`** This video shows the cleanup script running. Make sure you add the `BENEFICIARY` environment variable. The script will remind you if you forget. 
```sh export BENEFICIARY=<your-account-here> # this account receives contract account balance ``` [![asciicast](https://asciinema.org/a/409580.svg)](https://asciinema.org/a/409580) ### Other documentation - See `./scripts/README.md` for documentation about the scripts - Watch this video where Willem Wyndham walks us through refactoring a simple example of a NEAR smart contract written in AssemblyScript https://youtu.be/QP7aveSqRPo ``` There are 2 "styles" of implementing AssemblyScript NEAR contracts: - the contract interface can either be a collection of exported functions - or the contract interface can be the methods of a an exported class We call the second style "Singleton" because there is only one instance of the class which is serialized to the blockchain storage. Rust contracts written for NEAR do this by default with the contract struct. 0:00 noise (to cut) 0:10 Welcome 0:59 Create project starting with "npm init" 2:20 Customize the project for AssemblyScript development 9:25 Import the Counter example and get unit tests passing 18:30 Adapt the Counter example to a Singleton style contract 21:49 Refactoring unit tests to access the new methods 24:45 Review and summary ``` ## The file system ```sh ├── README.md # this file ├── as-pect.config.js # configuration for as-pect (AssemblyScript unit testing) ├── asconfig.json # configuration for AssemblyScript compiler (supports multiple contracts) ├── package.json # NodeJS project manifest ├── scripts │   ├── 1.dev-deploy.sh # helper: build and deploy contracts │   ├── 2.use-contract.sh # helper: call methods on ContractPromise │   ├── 3.cleanup.sh # helper: delete build and deploy artifacts │   └── README.md # documentation for helper scripts ├── src │   ├── as_types.d.ts # AssemblyScript headers for type hints │   ├── simple # Contract 1: "Simple example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 1 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 1 │   ├── singleton # Contract 2: "Singleton-style example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 2 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 2 │   ├── tsconfig.json # Typescript configuration │   └── utils.ts # common contract utility functions └── yarn.lock # project manifest version lock ``` You may clone this repo to get started OR create everything from scratch. Please note that, in order to create the AssemblyScript and tests folder structure, you may use the command `asp --init` which will create the following folders and files: ``` ./assembly/ ./assembly/tests/ ./assembly/tests/example.spec.ts ./assembly/tests/as-pect.d.ts ``` ## Setting up your terminal The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. 
It uses the following setup: ```sh # set your terminal up to have 2 windows, A and B like this: ┌─────────────────────────────────┬─────────────────────────────────┐ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ └─────────────────────────────────┴─────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _helper scripts_ ```sh 1.dev-deploy.sh # helper: build and deploy contracts 2.use-contract.sh # helper: call methods on ContractPromise 3.cleanup.sh # helper: delete build and deploy artifacts ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linuo-watch-command#191068)
esaminu_test-rs-boilerplate-4
.eslintrc.yml .github ISSUE_TEMPLATE 01_BUG_REPORT.md 02_FEATURE_REQUEST.md 03_CODEBASE_IMPROVEMENT.md 04_SUPPORT_QUESTION.md config.yml PULL_REQUEST_TEMPLATE.md labels.yml workflows codeql.yml deploy-to-console.yml labels.yml lock.yml pr-labels.yml stale.yml .gitpod.yml README.md contract Cargo.toml README.md build.sh deploy.sh src lib.rs docs CODE_OF_CONDUCT.md CONTRIBUTING.md SECURITY.md frontend App.js assets global.css logo-black.svg logo-white.svg index.html index.js near-interface.js near-wallet.js package.json start.sh ui-components.js integration-tests Cargo.toml src tests.rs package.json
<h1 align="center"> <a href="https://github.com/near/boilerplate-template-rs"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_light.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> <img alt="" src="https://raw.githubusercontent.com/near/boilerplate-template-rs/main/docs/images/pagoda_logo_dark.png"> </picture> </a> </h1> <div align="center"> Rust Boilerplate Template <br /> <br /> <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a> · <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a> . <a href="https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a> </div> <div align="center"> <br /> [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) [![code with love by near](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-near-ff1414.svg?style=flat-square)](https://github.com/near) </div> <details open="open"> <summary>Table of Contents</summary> - [About](#about) - [Built With](#built-with) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [Installation](#installation) - [Usage](#usage) - [Roadmap](#roadmap) - [Support](#support) - [Project assistance](#project-assistance) - [Contributing](#contributing) - [Authors & contributors](#authors--contributors) - [Security](#security) </details> --- ## About This project is created for easy-to-start as a React + Rust skeleton template in the Pagoda Gallery. It was initialized with [create-near-app]. Clone it and start to build your own gallery project! ### Built With [create-near-app], [amazing-github-template](https://github.com/dec0dOS/amazing-github-template) Getting Started ================== ### Prerequisites Make sure you have a [current version of Node.js](https://nodejs.org/en/about/releases/) installed – we are targeting versions `16+`. Read about other [prerequisites](https://docs.near.org/develop/prerequisites) in our docs. ### Installation Install all dependencies: npm install Build your contract: npm run build Deploy your contract to TestNet with a temporary dev account: npm run deploy Usage ===== Test your contract: npm test Start your frontend: npm start Exploring The Code ================== 1. The smart-contract code lives in the `/contract` folder. See the README there for more info. In blockchain apps the smart contract is the "backend" of your app. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/index.js`, this is your entrypoint to learn how the frontend connects to the NEAR blockchain. 3. Test your contract: `npm test`, this will run the tests in `integration-tests` directory. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `npm run deploy`, your smart contract gets deployed to the live NEAR TestNet with a temporary dev account. 
When you're ready to make it permanent, here's how: Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `npm install`, but for best ergonomics you may want to install it globally: npm install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: deploy the contract --------------------------- Use the CLI to deploy the contract to TestNet with your account ID. Replace `PATH_TO_WASM_FILE` with the `wasm` that was generated in `contract` build directory. near deploy --accountId near-blank-project.YOUR-NAME.testnet --wasmFile PATH_TO_WASM_FILE Step 3: set contract name in your frontend code ----------------------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/concepts/basics/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages ## Roadmap See the [open issues](https://github.com/near/boilerplate-template-rs/issues) for a list of proposed features (and known issues). - [Top Feature Requests](https://github.com/near/boilerplate-template-rs/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Top Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction) - [Newest Bugs](https://github.com/near/boilerplate-template-rs/issues?q=is%3Aopen+is%3Aissue+label%3Abug) ## Support Reach out to the maintainer: - [GitHub issues](https://github.com/near/boilerplate-template-rs/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+) ## Project assistance If you want to say **thank you** or/and support active development of Rust Boilerplate Template: - Add a [GitHub Star](https://github.com/near/boilerplate-template-rs) to the project. - Tweet about the Rust Boilerplate Template. 
- Write interesting articles about the project on [Dev.to](https://dev.to/), [Medium](https://medium.com/) or your personal blog.

Together, we can make Rust Boilerplate Template **better**!

## Contributing

First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are **greatly appreciated**.

Please read [our contribution guidelines](docs/CONTRIBUTING.md), and thank you for being involved!

## Authors & contributors

The original setup of this repository is by [Dmitriy Sheleg](https://github.com/shelegdmitriy).

For a full list of all authors and contributors, see [the contributors page](https://github.com/near/boilerplate-template-rs/contributors).

## Security

Rust Boilerplate Template follows good security practices, but 100% security cannot be assured. Rust Boilerplate Template is provided **"as is"** without any **warranty**. Use at your own risk.

_For more information and to report security issues, please refer to our [security documentation](docs/SECURITY.md)._

# Hello NEAR Contract

The smart contract exposes two methods for storing and retrieving a greeting on the NEAR network.

```rust
use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::{log, near_bindgen};

const DEFAULT_GREETING: &str = "Hello";

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize)]
pub struct Contract {
    greeting: String,
}

impl Default for Contract {
    fn default() -> Self {
        Self { greeting: DEFAULT_GREETING.to_string() }
    }
}

#[near_bindgen]
impl Contract {
    // Public: Returns the stored greeting, defaulting to 'Hello'
    pub fn get_greeting(&self) -> String {
        return self.greeting.clone();
    }

    // Public: Takes a greeting, such as 'howdy', and records it
    pub fn set_greeting(&mut self, greeting: String) {
        // Record a log permanently to the blockchain!
        log!("Saving greeting {}", greeting);
        self.greeting = greeting;
    }
}
```

<br />

# Quickstart

1. Make sure you have installed [rust](https://rust.org/).
2. Install the [`NEAR CLI`](https://github.com/near/near-cli#setup)

<br />

## 1. Build and Deploy the Contract

You can automatically compile and deploy the contract to the NEAR testnet by running:

```bash
./deploy.sh
```

Once finished, check the `neardev/dev-account` file to find the address at which the contract was deployed:

```bash
cat ./neardev/dev-account
# e.g. dev-1659899566943-21539992274727
```

<br />

## 2. Retrieve the Greeting

`get_greeting` is a read-only method (aka `view` method). `View` methods can be called for **free** by anyone, even people **without a NEAR account**!

```bash
# Use near-cli to get the greeting
near view <dev-account> get_greeting
```

<br />

## 3. Store a New Greeting

`set_greeting` changes the contract's state, which makes it a `change` method. `Change` methods can only be invoked using a NEAR account, since the account needs to pay GAS for the transaction.

```bash
# Use near-cli to set a new greeting
near call <dev-account> set_greeting '{"greeting":"howdy"}' --accountId <dev-account>
```

**Tip:** If you would like to call `set_greeting` using your own account, first log in to NEAR using:

```bash
# Use near-cli to log in to your NEAR account
near login
```

and then use the logged-in account to sign the transaction: `--accountId <your-account>`.
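The same two methods can also be called from JavaScript. The following is a minimal, hedged sketch using `near-api-js` and its classic `Contract` wrapper; the account IDs, RPC URL, and credentials path are illustrative assumptions and not part of this template:

```js
// Hedged sketch: calling get_greeting / set_greeting with near-api-js.
// Replace <dev-account> and your-account.testnet with real account IDs.
const { connect, keyStores, Contract } = require("near-api-js");

async function main() {
  const near = await connect({
    networkId: "testnet",
    nodeUrl: "https://rpc.testnet.near.org",
    // Reuse the credentials that `near login` stores on disk.
    keyStore: new keyStores.UnencryptedFileSystemKeyStore(
      `${process.env.HOME}/.near-credentials`
    ),
  });

  const account = await near.account("your-account.testnet");
  const contract = new Contract(account, "<dev-account>", {
    viewMethods: ["get_greeting"],
    changeMethods: ["set_greeting"],
  });

  console.log(await contract.get_greeting());         // free view call
  await contract.set_greeting({ greeting: "howdy" }); // signed change call
  console.log(await contract.get_greeting());         // now "howdy"
}

main().catch(console.error);
```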
phongnguyen2012_stakingpool
Cargo.toml README.md build.sh createtoken.sh src account.rs config.rs core_impl.rs enumeration.rs internal.rs lib.rs utils.rs t2 200k stakingcontract.sh
Build:

```sh
./build.sh
```

Run FT contract:

```sh
./createtoken.sh
```

Run Staking contract:

```sh
./stakingcontract.sh
```
PlayibleClub_playible-near-interface
.babelrc.json .github workflows deploy.yml .idea codeStyles Project.xml codeStyleConfig.xml inspectionProfiles Project_Default.xml misc.xml modules.xml runConfigurations.xml vcs.xml .vscode settings.json README.md SUMMARY.md apollo-client.ts band datasource Readme.md datasource.py datasource_env.py requirements.txt components navbars NavigationList.ts data constants actions.ts address.ts gasFees.ts nearContracts.ts sportConstants.ts statNames.ts status.ts statusMessage.ts index.ts teams.json deploy.sh global.d.ts interfaces account.ts message.ts jest.config.js mocks fileMock.js next-env.d.ts next.config.js package.json postcss.config.js public images dai.svg icons Arrow.svg Clubhouse.svg FantasyAverageScore.svg Filter.svg Hamburger.svg Home.svg Mint.svg My_Packs.svg My_Squad.svg Play.svg PlayibleIconNavBar.svg Rank.svg Reward.svg Search.svg Wallet.svg usdc.svg usdt.svg vercel.svg redux admin adminSlice.ts athlete athleteSlice.ts sportSlice.ts store.ts teamSlice.ts reducers assets index.ts external fantasy index.ts index.ts playible assets index.ts collection index.ts index.ts salesOrder index.ts wallet index.ts index.ts store.ts s3config.ts styles Header.module.css Home.module.css globals.css tailwind.config.js tests components ConnectButton.spec.js DialogButton.spec.js Header.spec.js Input.spec.js container ConnectWallet.spec.js index.spec.js redux actions walletActions.spec.js reducers walletReducer.spec.js setup.js setupAfterEnv.js utils index.js tsconfig.json utils address helper.ts admin index.ts athlete helper.ts position.ts date helper.ts game helper.ts general index.ts mutations index.ts near helper.ts index.ts playible index.ts queries index.ts statsperform index.ts wallet index.ts
# Overview

Fantasy Investar will combine fantasy sports, NFT technology and Anchor Protocol to make sports investing possible.

## Brief Description

Fantasy Investar will give the 50+ million fantasy sports users around the world the ability to invest in athletes, sharing in their athletes' success as they benefit from on-field performances. This will be achieved by allowing users to invest in NFT athlete tokens, giving users a sense of ownership of their athletes' successes. We will continually increase the utility of these tokens to further enhance their value, as well as the engagement users have with their chosen sport and athletes.

Our approach will integrate a fantasy sports scoring model to distribute rewards to the best-performing athletes, as well as rewarding individual achievements (MVP, ROTY, etc.). Users will be able to use their tokens to form teams and compete in daily, weekly and seasonal challenges to win prize pools. Fantasy Investar will enable sports fans to become their own player scout, assessing prospects and investing early to sell for a profit as demand increases.

## Learn More

Fantasy Investar will be built on the Terra blockchain, integrating Anchor Protocol as a key component of the business model's success. Funds generated within Fantasy Investar will be deposited into Anchor Protocol, generating yield to be distributed amongst users in the form of rewards.

UST will be the currency used within the Fantasy Investar platform; users will need to create and connect their Terra wallet on the platform in order to use Fantasy Investar.

## Deploy on Vercel

The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.

Check out our [Next.js deployment documentation](https://nextjs.org/docs/deployment) for more details.
hryer_NCD.L1.sample--thanks
README.md as-pect.config.js asconfig.json package.json scripts 1.dev-deploy.sh 2.say-anon-thanks.sh 2.say-thanks.sh README.md o-report.sh o-transfer.sh x-deploy.sh x-remove.sh src as-pect.d.ts as_types.d.ts thanks __tests__ README.md index.unit.spec.ts asconfig.json assembly index.ts models.ts tsconfig.json utils.ts testing.md
## Unit tests Unit tests can be run from the top level folder using the following command: ``` yarn test:unit ``` # Thanks Say "thanks!" to other students of the NCD by calling _their_ instance of this contract. You can optionally attach tokens to your message, or even leave an anonymous tip. Of course keep in mind that your signing account will be visible on the blockchain via NEAR Explorer even if you send an anonymous message. ## ⚠️ Warning Any content produced by NEAR, or developer resources that NEAR provides, are for educational and inspiration purposes only. NEAR does not encourage, induce or sanction the deployment of any such applications in violation of applicable laws or regulations. ## Contract ```ts // ------------------------------------ // contract initialization // ------------------------------------ /** * initialize contract with owner ID and other config data * * (note: this method is called "constructor" in the singleton contract code) */ function init(owner: AccountId, allow_anonymous: bool = true): void // ------------------------------------ // public methods // ------------------------------------ /** * give thanks to the owner of the contract * and optionally attach tokens */ function say(message: string, anonymous: bool): bool // ------------------------------------ // owner methods // ------------------------------------ /** * show all messages and users */ function list(): Array<Message> /** * generate a summary report */ function summarize(): Contract /** * transfer received funds to owner account */ function transfer(): void ``` ## Usage ### Development To deploy the contract for development, follow these steps: 1. clone this repo locally 2. run `./scripts/1.dev-deploy.sh` to deploy the contract (this uses `near dev-deploy`) **Your contract is now ready to use.** To use the contract you can do any of the following: _Public scripts_ ```sh 2.say-thanks.sh # post a message saying thank you, optionally attaching NEAR tokens 2.say-anon-thanks.sh # post an anonymous message (otherwise same as above) ``` _Owner scripts_ ```sh o-report.sh # generate a summary report of the contract state o-transfer.sh # transfer received funds to the owner account ``` ### Production It is recommended that you deploy the contract to a subaccount under your MainNet account to make it easier to identify you as the owner 1. clone this repo locally 2. run `./scripts/x-deploy.sh` to rebuild, deploy and initialize the contract to a target account requires the following environment variables - `NEAR_ENV`: Either `testnet` or `mainnet` - `OWNER`: The owner of the contract and the parent account. The contract will be deployed to `thanks.$OWNER` 3. run `./scripts/x-remove.sh` to delete the account requires the following environment variables - `NEAR_ENV`: Either `testnet` or `mainnet` - `OWNER`: The owner of the contract and the parent account. The contract will be deployed to `thanks.$OWNER` ## Setting up your terminal The scripts in this folder support a simple demonstration of the contract. 
It uses the following setup: ```txt ┌───────────────────────────────────────┬───────────────────────────────────────┐ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └───────────────────────────────────────┴───────────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _Owner scripts_ ```sh 1.dev-deploy.sh # cleanup, compile and deploy contract o-report.sh # generate a summary report of the contract state o-transfer.sh # transfer received funds to the owner account ``` _Public scripts_ ```sh 2.say-thanks.sh # post a message saying thank you, optionally attaching NEAR tokens 2.say-anon-thanks.sh # post an anonymous message (otherwise same as above) ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linuo-watch-command#191068)
gyan0890_NCD4-TheGivingFoundation
README.md as-pect.config.js asconfig.json neardev dev-account.env node_modules @as-pect assembly README.md assembly index.ts internal Actual.ts Expectation.ts Expected.ts Reflect.ts ReflectedValueType.ts Test.ts assert.ts call.ts comparison toIncludeComparison.ts toIncludeEqualComparison.ts log.ts noOp.ts package.json types as-pect.d.ts as-pect.portable.d.ts env.d.ts cli README.md init as-pect.config.js env.d.ts example.spec.ts init-types.d.ts portable-types.d.ts lib as-pect.cli.amd.d.ts as-pect.cli.amd.js help.d.ts help.js index.d.ts index.js init.d.ts init.js portable.d.ts portable.js run.d.ts run.js test.d.ts test.js types.d.ts types.js util CommandLineArg.d.ts CommandLineArg.js IConfiguration.d.ts IConfiguration.js asciiArt.d.ts asciiArt.js collectReporter.d.ts collectReporter.js getTestEntryFiles.d.ts getTestEntryFiles.js removeFile.d.ts removeFile.js strings.d.ts strings.js writeFile.d.ts writeFile.js worklets ICommand.d.ts ICommand.js compiler.d.ts compiler.js package.json core README.md lib as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js reporter CombinationReporter.d.ts CombinationReporter.js EmptyReporter.d.ts EmptyReporter.js IReporter.d.ts IReporter.js SummaryReporter.d.ts SummaryReporter.js VerboseReporter.d.ts VerboseReporter.js test IWarning.d.ts IWarning.js TestContext.d.ts TestContext.js TestNode.d.ts TestNode.js transform assemblyscript.d.ts assemblyscript.js createAddReflectedValueKeyValuePairsMember.d.ts createAddReflectedValueKeyValuePairsMember.js createGenericTypeParameter.d.ts createGenericTypeParameter.js createStrictEqualsMember.d.ts createStrictEqualsMember.js emptyTransformer.d.ts emptyTransformer.js hash.d.ts hash.js index.d.ts index.js util IAspectExports.d.ts IAspectExports.js IWriteable.d.ts IWriteable.js ReflectedValue.d.ts ReflectedValue.js TestNodeType.d.ts TestNodeType.js rTrace.d.ts rTrace.js stringifyReflectedValue.d.ts stringifyReflectedValue.js timeDifference.d.ts timeDifference.js wasmTools.d.ts wasmTools.js package.json csv-reporter index.ts lib as-pect.csv-reporter.amd.d.ts as-pect.csv-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json json-reporter index.ts lib as-pect.json-reporter.amd.d.ts as-pect.json-reporter.amd.js index.d.ts index.js package.json readme.md tsconfig.json snapshots __tests__ snapshot.spec.ts jest.config.js lib Snapshot.d.ts Snapshot.js SnapshotDiff.d.ts SnapshotDiff.js SnapshotDiffResult.d.ts SnapshotDiffResult.js as-pect.core.amd.d.ts as-pect.core.amd.js index.d.ts index.js parser grammar.d.ts grammar.js package.json src Snapshot.ts SnapshotDiff.ts SnapshotDiffResult.ts index.ts parser grammar.ts tsconfig.json @assemblyscript loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json ansi-regex index.d.ts index.js package.json readme.md ansi-styles index.d.ts index.js package.json readme.md as-bignum .travis.yml README.md as-pect.config.js assembly __tests__ as-pect.d.ts safe_u128.spec.as.ts u128.spec.as.ts u256.spec.as.ts utils.ts fixed fp128.ts fp256.ts index.ts safe fp128.ts fp256.ts types.ts globals.ts index.ts integer i128.ts i256.ts index.ts safe i128.ts i256.ts i64.ts index.ts u128.ts u256.ts u64.ts u128.ts u256.ts tsconfig.json utils.ts index.js package.json tsconfig.json asbuild README.md dist cli.d.ts cli.js index.d.ts index.js main.d.ts main.js index.js node_modules cliui CHANGELOG.md LICENSE.txt README.md index.js package.json wrap-ansi index.js package.json readme.md y18n CHANGELOG.md README.md index.js package.json yargs-parser CHANGELOG.md 
LICENSE.txt README.md index.js lib tokenize-arg-string.js package.json yargs CHANGELOG.md README.md build lib apply-extends.d.ts apply-extends.js argsert.d.ts argsert.js command.d.ts command.js common-types.d.ts common-types.js completion-templates.d.ts completion-templates.js completion.d.ts completion.js is-promise.d.ts is-promise.js levenshtein.d.ts levenshtein.js middleware.d.ts middleware.js obj-filter.d.ts obj-filter.js parse-command.d.ts parse-command.js process-argv.d.ts process-argv.js usage.d.ts usage.js validation.d.ts validation.js yargs.d.ts yargs.js yerror.d.ts yerror.js index.js locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json yargs.js package.json assemblyscript-json .eslintrc.js .travis.yml README.md as-pect.config.js assembly JSON.ts __tests__ as-pect.d.ts json-parse.spec.ts roundtrip.spec.ts to-string.spec.ts usage.spec.ts decoder.ts encoder.ts index.ts tsconfig.json util index.ts docs README.md classes decoderstate.md json.arr.md json.bool.md json.float.md json.integer.md json.null.md json.num.md json.obj.md json.str.md json.value.md jsondecoder.md jsonencoder.md jsonhandler.md throwingjsonhandler.md modules json.md index.js package.json assemblyscript README.md cli README.md asc.d.ts asc.js asc.json shim README.md fs.js path.js process.js transform.d.ts transform.js util colors.d.ts colors.js find.d.ts find.js mkdirp.d.ts mkdirp.js options.d.ts options.js utf8.d.ts utf8.js dist asc.js assemblyscript.d.ts assemblyscript.js sdk.js index.d.ts index.js lib loader README.md index.d.ts index.js package.json umd index.d.ts index.js package.json rtrace README.md bin rtplot.js index.d.ts index.js package.json umd index.d.ts index.js package.json package-lock.json package.json std README.md assembly.json assembly array.ts arraybuffer.ts atomics.ts bindings Date.ts Math.ts Reflect.ts asyncify.ts console.ts wasi.ts wasi_snapshot_preview1.ts wasi_unstable.ts builtins.ts compat.ts console.ts crypto.ts dataview.ts date.ts diagnostics.ts error.ts function.ts index.d.ts iterator.ts map.ts math.ts memory.ts number.ts object.ts polyfills.ts process.ts reference.ts regexp.ts rt.ts rt README.md common.ts index-incremental.ts index-minimal.ts index-stub.ts index.d.ts itcms.ts rtrace.ts stub.ts tcms.ts tlsf.ts set.ts shared feature.ts target.ts tsconfig.json typeinfo.ts staticarray.ts string.ts symbol.ts table.ts tsconfig.json typedarray.ts util casemap.ts error.ts hash.ts math.ts memory.ts number.ts sort.ts string.ts vector.ts wasi index.ts portable.json portable index.d.ts index.js types assembly index.d.ts package.json portable index.d.ts package.json tsconfig-base.json axios CHANGELOG.md README.md UPGRADE_GUIDE.md dist axios.js axios.min.js index.d.ts index.js lib adapters README.md http.js xhr.js axios.js cancel Cancel.js CancelToken.js isCancel.js core Axios.js InterceptorManager.js README.md buildFullPath.js createError.js dispatchRequest.js enhanceError.js mergeConfig.js settle.js transformData.js defaults.js helpers README.md bind.js buildURL.js combineURLs.js cookies.js deprecatedMethod.js isAbsoluteURL.js isURLSameOrigin.js normalizeHeaderName.js parseHeaders.js spread.js utils.js package.json balanced-match LICENSE.md README.md index.js package.json base-x LICENSE.md README.md package.json src index.d.ts index.js binary-install README.md example binary.js package.json run.js index.js package.json src 
binary.js binaryen README.md index.d.ts package-lock.json package.json wasm.d.ts bn.js CHANGELOG.md README.md lib bn.js package.json brace-expansion README.md index.js package.json bs58 CHANGELOG.md README.md index.js package.json camelcase index.d.ts index.js package.json readme.md chalk index.d.ts package.json readme.md source index.js templates.js util.js chownr README.md chownr.js package.json cliui CHANGELOG.md LICENSE.txt README.md build lib index.js string-utils.js package.json color-convert CHANGELOG.md README.md conversions.js index.js package.json route.js color-name README.md index.js package.json commander CHANGELOG.md Readme.md index.js package.json typings index.d.ts concat-map .travis.yml example map.js index.js package.json test map.js debug .coveralls.yml .travis.yml CHANGELOG.md README.md karma.conf.js node.js package.json src browser.js debug.js index.js node.js decamelize index.js package.json readme.md diff CONTRIBUTING.md README.md dist diff.js lib convert dmp.js xml.js diff array.js base.js character.js css.js json.js line.js sentence.js word.js index.es6.js index.js patch apply.js create.js merge.js parse.js util array.js distance-iterator.js params.js package.json release-notes.md runtime.js discontinuous-range .travis.yml README.md index.js package.json test main-test.js emoji-regex LICENSE-MIT.txt README.md es2015 index.js text.js index.d.ts index.js package.json text.js env-paths index.d.ts index.js package.json readme.md escalade dist index.js index.d.ts package.json readme.md sync index.d.ts index.js find-up index.d.ts index.js package.json readme.md follow-redirects README.md http.js https.js index.js package.json fs-minipass README.md index.js package.json fs.realpath README.md index.js old.js package.json get-caller-file LICENSE.md README.md index.d.ts index.js package.json glob README.md changelog.md common.js glob.js package.json sync.js has-flag index.d.ts index.js package.json readme.md hasurl README.md index.js package.json inflight README.md inflight.js package.json inherits README.md inherits.js inherits_browser.js package.json is-fullwidth-code-point index.d.ts index.js package.json readme.md js-base64 LICENSE.md README.md base64.d.ts base64.js package.json locate-path index.d.ts index.js package.json readme.md lodash.clonedeep README.md index.js package.json lodash.sortby README.md index.js package.json long README.md dist long.js index.js package.json src long.js minimatch README.md minimatch.js package.json minimist .travis.yml example parse.js index.js package.json test all_bool.js bool.js dash.js default_bool.js dotted.js kv_short.js long.js num.js parse.js parse_modified.js proto.js short.js stop_early.js unknown.js whitespace.js minipass README.md index.js package.json minizlib README.md constants.js index.js package.json mkdirp bin cmd.js usage.txt index.js package.json moo README.md moo.js package.json ms index.js license.md package.json readme.md near-mock-vm assembly __tests__ main.ts context.ts index.ts outcome.ts vm.ts bin bin.js package.json pkg near_mock_vm.d.ts near_mock_vm.js package.json vm dist cli.d.ts cli.js context.d.ts context.js index.d.ts index.js memory.d.ts memory.js runner.d.ts runner.js utils.d.ts utils.js index.js near-sdk-as as-pect.config.js as_types.d.ts asconfig.json asp.asconfig.json assembly __tests__ as-pect.d.ts assert.spec.ts avl-tree.spec.ts bignum.spec.ts contract.spec.ts contract.ts data.txt empty.ts generic.ts includeBytes.spec.ts main.ts max-heap.spec.ts model.ts near.spec.ts persistent-set.spec.ts 
promise.spec.ts rollback.spec.ts roundtrip.spec.ts runtime.spec.ts unordered-map.spec.ts util.ts utils.spec.ts as_types.d.ts bindgen.ts index.ts json.lib.ts tsconfig.json vm __tests__ vm.include.ts index.ts compiler.js imports.js package.json near-sdk-bindgen README.md assembly index.ts compiler.js dist JSONBuilder.d.ts JSONBuilder.js classExporter.d.ts classExporter.js index.d.ts index.js transformer.d.ts transformer.js typeChecker.d.ts typeChecker.js utils.d.ts utils.js index.js package.json near-sdk-core README.md asconfig.json assembly as_types.d.ts base58.ts base64.ts bignum.ts collections avlTree.ts index.ts maxHeap.ts persistentDeque.ts persistentMap.ts persistentSet.ts persistentUnorderedMap.ts persistentVector.ts util.ts contract.ts env env.ts index.ts runtime_api.ts index.ts logging.ts math.ts promise.ts storage.ts tsconfig.json util.ts docs assets css main.css js main.js search.json classes _sdk_core_assembly_collections_avltree_.avltree.html _sdk_core_assembly_collections_avltree_.avltreenode.html _sdk_core_assembly_collections_avltree_.childparentpair.html _sdk_core_assembly_collections_avltree_.nullable.html _sdk_core_assembly_collections_persistentdeque_.persistentdeque.html _sdk_core_assembly_collections_persistentmap_.persistentmap.html _sdk_core_assembly_collections_persistentset_.persistentset.html _sdk_core_assembly_collections_persistentunorderedmap_.persistentunorderedmap.html _sdk_core_assembly_collections_persistentvector_.persistentvector.html _sdk_core_assembly_contract_.context-1.html _sdk_core_assembly_contract_.contractpromise.html _sdk_core_assembly_contract_.contractpromiseresult.html _sdk_core_assembly_math_.rng.html _sdk_core_assembly_promise_.contractpromisebatch.html _sdk_core_assembly_storage_.storage-1.html globals.html index.html modules _sdk_core_assembly_base58_.base58.html _sdk_core_assembly_base58_.html _sdk_core_assembly_base64_.base64.html _sdk_core_assembly_base64_.html _sdk_core_assembly_collections_avltree_.html _sdk_core_assembly_collections_index_.collections.html _sdk_core_assembly_collections_index_.html _sdk_core_assembly_collections_persistentdeque_.html _sdk_core_assembly_collections_persistentmap_.html _sdk_core_assembly_collections_persistentset_.html _sdk_core_assembly_collections_persistentunorderedmap_.html _sdk_core_assembly_collections_persistentvector_.html _sdk_core_assembly_collections_util_.html _sdk_core_assembly_contract_.html _sdk_core_assembly_env_env_.env.html _sdk_core_assembly_env_env_.html _sdk_core_assembly_env_index_.html _sdk_core_assembly_env_runtime_api_.html _sdk_core_assembly_index_.html _sdk_core_assembly_logging_.html _sdk_core_assembly_logging_.logging.html _sdk_core_assembly_math_.html _sdk_core_assembly_math_.math.html _sdk_core_assembly_promise_.html _sdk_core_assembly_storage_.html _sdk_core_assembly_util_.html _sdk_core_assembly_util_.util.html package.json near-sdk-simulator __tests__ avl-tree-contract.spec.ts cross.spec.ts empty.spec.ts exportAs.spec.ts singleton-no-constructor.spec.ts singleton.spec.ts asconfig.js asconfig.json assembly __tests__ avlTreeContract.ts empty.ts exportAs.ts model.ts sentences.ts singleton-fail.ts singleton-no-constructor.ts singleton.ts words.ts as_types.d.ts tsconfig.json dist bin.d.ts bin.js context.d.ts context.js index.d.ts index.js runtime.d.ts runtime.js types.d.ts types.js utils.d.ts utils.js jest.config.js out assembly __tests__ exportAs.ts model.ts sentences.ts singleton-no-constructor.ts singleton.ts package.json src context.ts index.ts runtime.ts types.ts 
utils.ts tsconfig.json near-vm getBinary.js install.js package.json run.js uninstall.js nearley LICENSE.txt README.md bin nearley-railroad.js nearley-test.js nearley-unparse.js nearleyc.js lib compile.js generate.js lint.js nearley-language-bootstrapped.js nearley.js stream.js unparse.js package.json once README.md once.js package.json p-limit index.d.ts index.js package.json readme.md p-locate index.d.ts index.js package.json readme.md p-try index.d.ts index.js package.json readme.md path-exists index.d.ts index.js package.json readme.md path-is-absolute index.js package.json readme.md punycode LICENSE-MIT.txt README.md package.json punycode.es6.js punycode.js railroad-diagrams README.md example.html generator.html package.json railroad-diagrams.css railroad-diagrams.js railroad_diagrams.py randexp README.md lib randexp.js package.json require-directory .travis.yml index.js package.json require-main-filename CHANGELOG.md LICENSE.txt README.md index.js package.json ret README.md lib index.js positions.js sets.js types.js util.js package.json rimraf CHANGELOG.md README.md bin.js package.json rimraf.js safe-buffer README.md index.d.ts index.js package.json set-blocking CHANGELOG.md LICENSE.txt README.md index.js package.json string-width index.d.ts index.js package.json readme.md strip-ansi index.d.ts index.js package.json readme.md supports-color browser.js index.js package.json readme.md tar README.md index.js lib create.js extract.js get-write-flag.js header.js high-level-opt.js large-numbers.js list.js mkdir.js mode-fix.js pack.js parse.js path-reservations.js pax.js read-entry.js replace.js types.js unpack.js update.js warn-mixin.js winchars.js write-entry.js package.json tr46 LICENSE.md README.md index.js lib mappingTable.json regexes.js package.json ts-mixer CHANGELOG.md README.md dist decorator.d.ts decorator.js index.d.ts index.js mixin-tracking.d.ts mixin-tracking.js mixins.d.ts mixins.js proxy.d.ts proxy.js settings.d.ts settings.js types.d.ts types.js util.d.ts util.js package.json universal-url README.md browser.js index.js package.json visitor-as .travis.yml README.md as index.d.ts index.js asconfig.json dist astBuilder.d.ts astBuilder.js base.d.ts base.js baseTransform.d.ts baseTransform.js decorator.d.ts decorator.js examples capitalize.d.ts capitalize.js exportAs.d.ts exportAs.js functionCallTransform.d.ts functionCallTransform.js includeBytesTransform.d.ts includeBytesTransform.js list.d.ts list.js index.d.ts index.js path.d.ts path.js simpleParser.d.ts simpleParser.js transformer.d.ts transformer.js utils.d.ts utils.js visitor.d.ts visitor.js package.json tsconfig.json webidl-conversions LICENSE.md README.md lib index.js package.json whatwg-url LICENSE.txt README.md lib URL-impl.js URL.js URLSearchParams-impl.js URLSearchParams.js infra.js public-api.js url-state-machine.js urlencoded.js utils.js package.json which-module CHANGELOG.md README.md index.js package.json wrap-ansi index.js package.json readme.md wrappy README.md package.json wrappy.js y18n CHANGELOG.md README.md build lib cjs.js index.js platform-shims node.js package.json yallist README.md iterator.js package.json yallist.js yargs-parser CHANGELOG.md LICENSE.txt README.md browser.js build lib index.js string-utils.js tokenize-arg-string.js yargs-parser-types.js yargs-parser.js package.json yargs CHANGELOG.md README.md build lib argsert.js command.js completion-templates.js completion.js middleware.js parse-command.js typings common-types.js yargs-parser-types.js usage.js utils apply-extends.js is-promise.js 
levenshtein.js obj-filter.js process-argv.js set-blocking.js which-module.js validation.js yargs-factory.js yerror.js helpers index.js package.json locales be.json de.json en.json es.json fi.json fr.json hi.json hu.json id.json it.json ja.json ko.json nb.json nl.json nn.json pirate.json pl.json pt.json pt_BR.json ru.json th.json tr.json zh_CN.json zh_TW.json package.json | package.json scripts 1.init.sh 2.run.sh README.md src as-pect.d.ts as_types.d.ts ngo README.md __tests__ README.md index.unit.spec.ts asconfig.json assembly index.ts models.ts tsconfig.json utils.ts
# node-tar [![Build Status](https://travis-ci.org/npm/node-tar.svg?branch=master)](https://travis-ci.org/npm/node-tar) [Fast](./benchmarks) and full-featured Tar for Node.js The API is designed to mimic the behavior of `tar(1)` on unix systems. If you are familiar with how tar works, most of this will hopefully be straightforward for you. If not, then hopefully this module can teach you useful unix skills that may come in handy someday :) ## Background A "tar file" or "tarball" is an archive of file system entries (directories, files, links, etc.) The name comes from "tape archive". If you run `man tar` on almost any Unix command line, you'll learn quite a bit about what it can do, and its history. Tar has 5 main top-level commands: * `c` Create an archive * `r` Replace entries within an archive * `u` Update entries within an archive (ie, replace if they're newer) * `t` List out the contents of an archive * `x` Extract an archive to disk The other flags and options modify how this top level function works. ## High-Level API These 5 functions are the high-level API. All of them have a single-character name (for unix nerds familiar with `tar(1)`) as well as a long name (for everyone else). All the high-level functions take the following arguments, all three of which are optional and may be omitted. 1. `options` - An optional object specifying various options 2. `paths` - An array of paths to add or extract 3. `callback` - Called when the command is completed, if async. (If sync or no file specified, providing a callback throws a `TypeError`.) If the command is sync (ie, if `options.sync=true`), then the callback is not allowed, since the action will be completed immediately. If a `file` argument is specified, and the command is async, then a `Promise` is returned. In this case, if async, a callback may be provided which is called when the command is completed. If a `file` option is not specified, then a stream is returned. For `create`, this is a readable stream of the generated archive. For `list` and `extract` this is a writable stream that an archive should be written into. If a file is not specified, then a callback is not allowed, because you're already getting a stream to work with. `replace` and `update` only work on existing archives, and so require a `file` argument. Sync commands without a file argument return a stream that acts on its input immediately in the same tick. For readable streams, this means that all of the data is immediately available by calling `stream.read()`. For writable streams, it will be acted upon as soon as it is provided, but this can be at any time. ### Warnings and Errors Tar emits warnings and errors for recoverable and unrecoverable situations, respectively. In many cases, a warning only affects a single entry in an archive, or is simply informing you that it's modifying an entry to comply with the settings provided. Unrecoverable warnings will always raise an error (ie, emit `'error'` on streaming actions, throw for non-streaming sync actions, reject the returned Promise for non-streaming async operations, or call a provided callback with an `Error` as the first argument). Recoverable errors will raise an error only if `strict: true` is set in the options. Respond to (recoverable) warnings by listening to the `warn` event. Handlers receive 3 arguments: - `code` String. One of the error codes below. This may not match `data.code`, which preserves the original error code from fs and zlib. - `message` String. More details about the error. 
- `data` Metadata about the error. An `Error` object for errors raised by fs and zlib. All fields are attached to errors raised by tar. Typically contains the following fields, as relevant:
  - `tarCode` The tar error code.
  - `code` Either the tar error code, or the error code set by the underlying system.
  - `file` The archive file being read or written.
  - `cwd` Working directory for creation and extraction operations.
  - `entry` The entry object (if it could be created) for `TAR_ENTRY_INFO`, `TAR_ENTRY_INVALID`, and `TAR_ENTRY_ERROR` warnings.
  - `header` The header object (if it could be created, and the entry could not be created) for `TAR_ENTRY_INFO` and `TAR_ENTRY_INVALID` warnings.
  - `recoverable` Boolean. If `false`, then the warning will emit an `error`, even in non-strict mode.

#### Error Codes

* `TAR_ENTRY_INFO` An informative error indicating that an entry is being modified, but otherwise processed normally. For example, removing `/` or `C:\` from absolute paths if `preservePaths` is not set.

* `TAR_ENTRY_INVALID` An indication that a given entry is not a valid tar archive entry, and will be skipped. This occurs when:
  - a checksum fails,
  - a `linkpath` is missing for a link type, or
  - a `linkpath` is provided for a non-link type.

  If every entry in a parsed archive raises a `TAR_ENTRY_INVALID` error, then the archive is presumed to be unrecoverably broken, and `TAR_BAD_ARCHIVE` will be raised.

* `TAR_ENTRY_ERROR` The entry appears to be a valid tar archive entry, but encountered an error which prevented it from being unpacked. This occurs when:
  - an unrecoverable fs error happens during unpacking,
  - an entry has `..` in the path and `preservePaths` is not set, or
  - an entry is extracting through a symbolic link, when `preservePaths` is not set.

* `TAR_ENTRY_UNSUPPORTED` An indication that a given entry is a valid archive entry, but of a type that is unsupported, and so will be skipped in archive creation or extracting.

* `TAR_ABORT` When parsing gzip-encoded archives, the parser will abort the parse process and raise a warning for any zlib errors encountered. Aborts are considered unrecoverable for both parsing and unpacking.

* `TAR_BAD_ARCHIVE` The archive file is totally hosed. This can happen for a number of reasons, and always occurs at the end of a parse or extract:
  - An entry body was truncated before seeing the full number of bytes.
  - The archive contained only invalid entries, indicating that it is likely not an archive, or at least, not an archive this library can parse.

  `TAR_BAD_ARCHIVE` is considered informative for parse operations, but unrecoverable for extraction. Note that, if encountered at the end of an extraction, tar WILL still have extracted as much as it could from the archive, so there may be some garbage files to clean up.

Errors that occur deeper in the system (ie, either the filesystem or zlib) will have their error codes left intact, and a `tarCode` matching one of the above will be added to the warning metadata or the raised error object.

Errors generated by tar will have one of the above codes set as the `error.code` field as well, but since errors originating in zlib or fs will have their original codes, it's better to read `error.tarCode` if you wish to see how tar is handling the issue.

### Examples

The API mimics the `tar(1)` command line functionality, with aliases for more human-readable option and function names. The goal is that if you know how to use `tar(1)` in Unix, then you know how to use `require('tar')` in JavaScript.
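One thing with no direct `tar(1)` analog is the `onwarn` handler described in "Warnings and Errors" above. Here is a hedged sketch of wiring one up; the archive name and handler body are placeholders:

```js
const tar = require('tar')

// Hedged sketch: surface recoverable warnings instead of silently dropping them.
tar.x({
  file: 'my-tarball.tgz', // placeholder archive
  onwarn: (code, message, data) => {
    // `code` is one of the TAR_* codes listed above; `data.recoverable` is
    // false for conditions that raise an error even in non-strict mode.
    console.warn(`tar warning ${code}: ${message}`, { recoverable: data.recoverable })
  },
})
```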
To replicate `tar czf my-tarball.tgz files and folders`, you'd do: ```js tar.c( { gzip: <true|gzip options>, file: 'my-tarball.tgz' }, ['some', 'files', 'and', 'folders'] ).then(_ => { .. tarball has been created .. }) ``` To replicate `tar cz files and folders > my-tarball.tgz`, you'd do: ```js tar.c( // or tar.create { gzip: <true|gzip options> }, ['some', 'files', 'and', 'folders'] ).pipe(fs.createWriteStream('my-tarball.tgz')) ``` To replicate `tar xf my-tarball.tgz` you'd do: ```js tar.x( // or tar.extract( { file: 'my-tarball.tgz' } ).then(_=> { .. tarball has been dumped in cwd .. }) ``` To replicate `cat my-tarball.tgz | tar x -C some-dir --strip=1`: ```js fs.createReadStream('my-tarball.tgz').pipe( tar.x({ strip: 1, C: 'some-dir' // alias for cwd:'some-dir', also ok }) ) ``` To replicate `tar tf my-tarball.tgz`, do this: ```js tar.t({ file: 'my-tarball.tgz', onentry: entry => { .. do whatever with it .. } }) ``` To replicate `cat my-tarball.tgz | tar t` do: ```js fs.createReadStream('my-tarball.tgz') .pipe(tar.t()) .on('entry', entry => { .. do whatever with it .. }) ``` To do anything synchronous, add `sync: true` to the options. Note that sync functions don't take a callback and don't return a promise. When the function returns, it's already done. Sync methods without a file argument return a sync stream, which flushes immediately. But, of course, it still won't be done until you `.end()` it. To filter entries, add `filter: <function>` to the options. Tar-creating methods call the filter with `filter(path, stat)`. Tar-reading methods (including extraction) call the filter with `filter(path, entry)`. The filter is called in the `this`-context of the `Pack` or `Unpack` stream object. The arguments list to `tar t` and `tar x` specify a list of filenames to extract or list, so they're equivalent to a filter that tests if the file is in the list. For those who _aren't_ fans of tar's single-character command names: ``` tar.c === tar.create tar.r === tar.replace (appends to archive, file is required) tar.u === tar.update (appends if newer, file is required) tar.x === tar.extract tar.t === tar.list ``` Keep reading for all the command descriptions and options, as well as the low-level API that they are built on. ### tar.c(options, fileList, callback) [alias: tar.create] Create a tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Write the tarball archive to the specified filename. If this is specified, then the callback will be fired when the file has been written, and a promise will be returned that resolves when the file is written. If a filename is not specified, then a Readable Stream will be returned which will emit the file data. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. If this is set, and a file is not provided, then the resulting stream will already have the data ready to `read` or `emit('data')` as soon as you request it. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. 
[Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `mode` The mode to set on the created file archive - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. ### tar.x(options, fileList, callback) [alias: tar.extract] Extract a tarball archive. The `fileList` is an array of paths to extract from the tarball. If no paths are provided, then all the entries are extracted. If the archive is gzipped, then tar will detect this and unzip it. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. Most extraction errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then the extraction will fail completely. The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. [Alias: `C`] - `file` The archive file to extract. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Create files and directories synchronously. - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. - `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. 
[Alias: `keep-newer`, `keep-newer-files`] - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. [Alias: `k`, `keep-existing`] - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. [Alias: `P`] - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. [Alias: `U`] - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. [Alias: `strip-components`, `stripComponents`] - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. [Alias: `p`] - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. [Alias: `m`, `no-mtime`] - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. The following options are mostly internal, but can be modified in some advanced use cases, such as re-using caches between runs. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync extractions. [MiniPass](http://npm.im/minipass)-based streams are designed for this use case. 
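Here is a hedged sketch combining several of the extraction options above (`strip`, `filter`, and `onentry`); the archive name and target directory are placeholders:

```js
const tar = require('tar')

// Hedged sketch: extract only .js entries, drop the leading path element,
// and log each entry that passes the filter.
tar.x({
  file: 'my-tarball.tgz', // placeholder archive
  cwd: 'some-dir',        // must already exist and be a directory
  strip: 1,
  filter: (path, entry) => path.endsWith('.js'),
  onentry: entry => console.log('unpacking', entry.path),
}).then(() => console.log('done'))
```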
### tar.t(options, fileList, callback) [alias: tar.list] List the contents of a tarball archive. The `fileList` is an array of paths to list from the tarball. If no paths are provided, then all the entries are listed. If the archive is gzipped, then tar will detect this and unzip it. Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. However, they don't emit `'data'` or `'end'` events. (If you want to get actual readable entries, use the `tar.Parse` class instead.) The following options are supported: - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. [Alias: `C`] - `file` The archive file to list. If not specified, then a Writable stream is returned where the archive data should be written. [Alias: `f`] - `sync` Read the specified file synchronously. (This has no effect when a file option isn't specified, because entries are emitted as fast as they are parsed from the stream anyway.) - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. This is important for when both `file` and `sync` are set, because it will be called synchronously. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noResume` By default, `entry` streams are resumed immediately after the call to `onentry`. Set `noResume: true` to suppress this behavior. Note that by opting into this, the stream will never complete until the entry data is consumed. ### tar.u(options, fileList, callback) [alias: tar.update] Add files to an archive if they are newer than the entry already in the tarball archive. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. 
- `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ### tar.r(options, fileList, callback) [alias: tar.replace] Add files to an existing archive. Because later entries override earlier entries, this effectively replaces any existing entries. The `fileList` is an array of paths to add to the tarball. Adding a directory also adds its children recursively. An entry in `fileList` that starts with an `@` symbol is a tar archive whose entries will be added. To add a file that starts with `@`, prepend it with `./`. The following options are supported: - `file` Required. Write the tarball archive to the specified filename. [Alias: `f`] - `sync` Act synchronously. If this is set, then any provided file will be fully written after the call to `tar.c`. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for adding entries to the archive. Defaults to `process.cwd()`. [Alias: `C`] - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` [Alias: `z`] - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. [Alias: `P`] - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. [Alias: `n`] - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. [Alias: `L`, `h`] - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. [Alias: `m`, `no-mtime`] - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. ## Low-Level API ### class tar.Pack A readable tar stream. Has all the standard readable stream interface stuff. 
`'data'` and `'end'` events, `read()` method, `pause()` and `resume()`, etc. #### constructor(options) The following options are supported: - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `strict` Treat warnings as crash-worthy errors. Default false. - `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`. - `prefix` A path portion to prefix onto the entries in the archive. - `gzip` Set to any truthy value to create a gzipped archive, or an object with settings for `zlib.Gzip()` - `filter` A function that gets called with `(path, stat)` for each entry being added. Return `true` to add the entry to the archive, or `false` to omit it. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links. - `statCache` A Map object that caches calls `lstat`. - `readdirCache` A Map object that caches calls to `readdir`. - `jobs` A number specifying how many concurrent jobs to run. Defaults to 4. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 16 MB. - `noDirRecurse` Do not recursively archive the contents of directories. - `follow` Set to true to pack the targets of symbolic links. Without this option, symbolic links are archived as such. - `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly. - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. - `mtime` Set to a `Date` object to force a specific `mtime` for everything added to the archive. Overridden by `noMtime`. #### add(path) Adds an entry to the archive. Returns the Pack stream. #### write(path) Adds an entry to the archive. Returns true if flushed. #### end() Finishes the archive. ### class tar.Pack.Sync Synchronous version of `tar.Pack`. ### class tar.Unpack A writable stream that unpacks a tar archive onto the file system. All the normal writable stream stuff is supported. `write()` and `end()` methods, `'drain'` events, etc. Note that all directories that are created will be forced to be writable, readable, and listable by their owner, to avoid cases where a directory prevents extraction of child entries by virtue of its mode. `'close'` is emitted when it's done writing stuff to the file system. Most unpack errors will cause a `warn` event to be emitted. If the `cwd` is missing, or not a directory, then an error will be emitted. #### constructor(options) - `cwd` Extract files relative to the specified directory. Defaults to `process.cwd()`. If provided, this must exist and must be a directory. - `filter` A function that gets called with `(path, entry)` for each entry being unpacked. Return `true` to unpack the entry from the archive, or `false` to skip it. 
- `newer` Set to true to keep the existing file on disk if it's newer than the file in the archive. - `keep` Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. - `preservePaths` Allow absolute paths, paths containing `..`, and extracting through symbolic links. By default, `/` is stripped from absolute paths, `..` paths are not extracted, and any file whose location would be modified by a symbolic link is not extracted. - `unlink` Unlink files before creating them. Without this option, tar overwrites existing files, which preserves existing hardlinks. With this option, existing hardlinks will be broken, as will any symlink that would affect the location of an extracted file. - `strip` Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after applying the filter, but before security checks. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `umask` Filter the modes of entries like `process.umask()`. - `dmode` Default mode for directories - `fmode` Default mode for files - `dirCache` A Map object of which directories exist. - `maxMetaEntrySize` The maximum size of meta entries that is supported. Defaults to 1 MB. - `preserveOwner` If true, tar will set the `uid` and `gid` of extracted entries to the `uid` and `gid` fields in the archive. This defaults to true when run as root, and false otherwise. If false, then files and directories will be set with the owner and group of the user running the process. This is similar to `-p` in `tar(1)`, but ACLs and other system-specific data is never unpacked in this implementation, and modes are set by default already. - `win32` True if on a windows platform. Causes behavior where filenames containing `<|>?` chars are converted to windows-compatible values while being unpacked. - `uid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified user id, regardless of the `uid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `gid` option. - `gid` Set to a number to force ownership of all extracted files and folders, and all implicitly created directories, to be owned by the specified group id, regardless of the `gid` field in the archive. Cannot be used along with `preserveOwner`. Requires also setting a `uid` option. - `noMtime` Set to true to omit writing `mtime` value for extracted entries. - `transform` Provide a function that takes an `entry` object, and returns a stream, or any falsey value. If a stream is provided, then that stream's data will be written instead of the contents of the archive entry. If a falsey value is provided, then the entry is written to disk as normal. (To exclude items from extraction, use the `filter` option described above.) - `strict` Treat warnings as crash-worthy errors. Default false. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") ### class tar.Unpack.Sync Synchronous version of `tar.Unpack`. Note that using an asynchronous stream type with the `transform` option will cause undefined behavior in sync unpack streams. 
[MiniPass](http://npm.im/minipass)-based streams are designed for this use case. ### class tar.Parse A writable stream that parses a tar archive stream. All the standard writable stream stuff is supported. If the archive is gzipped, then tar will detect this and unzip it. Emits `'entry'` events with `tar.ReadEntry` objects, which are themselves readable streams that you can pipe wherever. Each `entry` will not emit until the one before it is flushed through, so make sure to either consume the data (with `on('data', ...)` or `.pipe(...)`) or throw it away with `.resume()` to keep the stream flowing. #### constructor(options) Returns an event emitter that emits `entry` events with `tar.ReadEntry` objects. The following options are supported: - `strict` Treat warnings as crash-worthy errors. Default false. - `filter` A function that gets called with `(path, entry)` for each entry being listed. Return `true` to emit the entry from the archive, or `false` to skip it. - `onentry` A function that gets called with `(entry)` for each entry that passes the filter. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") #### abort(error) Stop all parsing activities. This is called when there are zlib errors. It also emits an unrecoverable warning with the error provided. ### class tar.ReadEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being read out of a tar archive. It has the following fields: - `extended` The extended metadata object provided to the constructor. - `globalExtended` The global extended metadata object provided to the constructor. - `remain` The number of bytes remaining to be written into the stream. - `blockRemain` The number of 512-byte blocks remaining to be written into the stream. - `ignore` Whether this entry should be ignored. - `meta` True if this represents metadata about the next entry, false if it represents a filesystem object. - All the fields from the header, extended header, and global extended header are added to the ReadEntry object. So it has `path`, `type`, `size, `mode`, and so on. #### constructor(header, extended, globalExtended) Create a new ReadEntry object with the specified header, extended header, and global extended header values. ### class tar.WriteEntry extends [MiniPass](http://npm.im/minipass) A representation of an entry that is being written from the file system into a tar archive. Emits data for the Header, and for the Pax Extended Header if one is required, as well as any body data. Creating a WriteEntry for a directory does not also create WriteEntry objects for all of the directory contents. It has the following fields: - `path` The path field that will be written to the archive. By default, this is also the path from the cwd to the file system object. - `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `myuid` If supported, the uid of the user running the current process. - `myuser` The `env.USER` string if set, or `''`. Set as the entry `uname` field if the file's `uid` matches `this.myuid`. - `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB. 
- `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`.
- `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/` and filenames containing the windows-compatible forms of `<|>?:` characters are converted to actual `<|>?:` characters in the archive.
- `noPax` Suppress pax extended headers. Note that this means that long paths and linkpaths will be truncated, and large or negative numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.

#### constructor(path, options)

`path` is the path of the entry as it is written in the archive.

The following options are supported:

- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`.
- `maxReadSize` The maximum buffer size for `fs.read()` operations. Defaults to 1 MB.
- `linkCache` A Map object containing the device and inode value for any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths.
- `cwd` The current working directory for creating the archive. Defaults to `process.cwd()`.
- `absolute` The absolute path to the entry on the filesystem. By default, this is `path.resolve(this.cwd, this.path)`, but it can be overridden explicitly.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `win32` True if on a windows platform. Causes behavior where paths replace `\` with `/`.
- `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors")
- `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive.
- `umask` Set to restrict the modes on the entries in the archive, somewhat like how umask works on file creation. Defaults to `process.umask()` on unix systems, or `0o22` on Windows.

#### warn(message, data)

If strict, emit an error with the provided message. Otherwise, emit a `'warn'` event with the provided message and data.

### class tar.WriteEntry.Sync

Synchronous version of `tar.WriteEntry`.

### class tar.WriteEntry.Tar

A version of `tar.WriteEntry` that gets its data from a `tar.ReadEntry` instead of from the filesystem.

#### constructor(readEntry, options)

`readEntry` is the entry being read out of another archive.

The following options are supported:

- `portable` Omit metadata that is system-specific: `ctime`, `atime`, `uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`.
Note that `mtime` is still included, because this is necessary for other time-based operations. Additionally, `mode` is set to a "reasonable default" for most unix systems, based on a `umask` value of `0o22`. - `preservePaths` Allow absolute paths. By default, `/` is stripped from absolute paths. - `strict` Treat warnings as crash-worthy errors. Default false. - `onwarn` A function that will get called with `(code, message, data)` for any warnings encountered. (See "Warnings and Errors") - `noMtime` Set to true to omit writing `mtime` values for entries. Note that this prevents using other mtime-based features like `tar.update` or the `keepNewer` option with the resulting tar archive. ### class tar.Header A class for reading and writing header blocks. It has the following fields: - `nullBlock` True if decoding a block which is entirely composed of `0x00` null bytes. (Useful because tar files are terminated by at least 2 null blocks.) - `cksumValid` True if the checksum in the header is valid, false otherwise. - `needPax` True if the values, as encoded, will require a Pax extended header. - `path` The path of the entry. - `mode` The 4 lowest-order octal digits of the file mode. That is, read/write/execute permissions for world, group, and owner, and the setuid, setgid, and sticky bits. - `uid` Numeric user id of the file owner - `gid` Numeric group id of the file owner - `size` Size of the file in bytes - `mtime` Modified time of the file - `cksum` The checksum of the header. This is generated by adding all the bytes of the header block, treating the checksum field itself as all ascii space characters (that is, `0x20`). - `type` The human-readable name of the type of entry this represents, or the alphanumeric key if unknown. - `typeKey` The alphanumeric key for the type of entry this header represents. - `linkpath` The target of Link and SymbolicLink entries. - `uname` Human-readable user name of the file owner - `gname` Human-readable group name of the file owner - `devmaj` The major portion of the device number. Always `0` for files, directories, and links. - `devmin` The minor portion of the device number. Always `0` for files, directories, and links. - `atime` File access time. - `ctime` File change time. #### constructor(data, [offset=0]) `data` is optional. It is either a Buffer that should be interpreted as a tar Header starting at the specified offset and continuing for 512 bytes, or a data object of keys and values to set on the header object, and eventually encode as a tar Header. #### decode(block, offset) Decode the provided buffer starting at the specified offset. Buffer length must be greater than 512 bytes. #### set(data) Set the fields in the data object. #### encode(buffer, offset) Encode the header fields into the buffer at the specified offset. Returns `this.needPax` to indicate whether a Pax Extended Header is required to properly encode the specified data. ### class tar.Pax An object representing a set of key-value pairs in an Pax extended header entry. It has the following fields. Where the same name is used, they have the same semantics as the tar.Header field of the same name. - `global` True if this represents a global extended header, or false if it is for a single entry. - `atime` - `charset` - `comment` - `ctime` - `gid` - `gname` - `linkpath` - `mtime` - `path` - `size` - `uid` - `uname` - `dev` - `ino` - `nlink` #### constructor(object, global) Set the fields set in the object. `global` is a boolean that defaults to false. 
#### encode() Return a Buffer containing the header and body for the Pax extended header entry, or `null` if there is nothing to encode. #### encodeBody() Return a string representing the body of the pax extended header entry. #### encodeField(fieldName) Return a string representing the key/value encoding for the specified fieldName, or `''` if the field is unset. ### tar.Pax.parse(string, extended, global) Return a new Pax object created by parsing the contents of the string provided. If the `extended` object is set, then also add the fields from that object. (This is necessary because multiple metadata entries can occur in sequence.) ### tar.types A translation table for the `type` field in tar headers. #### tar.types.name.get(code) Get the human-readable name for a given alphanumeric code. #### tar.types.code.get(name) Get the alphanumeric code for a given human-readable name. long.js ======= A Long class for representing a 64 bit two's-complement integer value derived from the [Closure Library](https://github.com/google/closure-library) for stand-alone use and extended with unsigned support. [![Build Status](https://travis-ci.org/dcodeIO/long.js.svg)](https://travis-ci.org/dcodeIO/long.js) Background ---------- As of [ECMA-262 5th Edition](http://ecma262-5.com/ELS5_HTML.htm#Section_8.5), "all the positive and negative integers whose magnitude is no greater than 2<sup>53</sup> are representable in the Number type", which is "representing the doubleprecision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic". The [maximum safe integer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) in JavaScript is 2<sup>53</sup>-1. Example: 2<sup>64</sup>-1 is 1844674407370955**1615** but in JavaScript it evaluates to 1844674407370955**2000**. Furthermore, bitwise operators in JavaScript "deal only with integers in the range −2<sup>31</sup> through 2<sup>31</sup>−1, inclusive, or in the range 0 through 2<sup>32</sup>−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2<sup>32</sup> integer values." In some use cases, however, it is required to be able to reliably work with and perform bitwise operations on the full 64 bits. This is where long.js comes into play. Usage ----- The class is compatible with CommonJS and AMD loaders and is exposed globally as `Long` if neither is available. ```javascript var Long = require("long"); var longVal = new Long(0xFFFFFFFF, 0x7FFFFFFF); console.log(longVal.toString()); ... ``` API --- ### Constructor * new **Long**(low: `number`, high: `number`, unsigned?: `boolean`)<br /> Constructs a 64 bit two's-complement integer, given its low and high 32 bit values as *signed* integers. See the from* functions below for more convenient ways of constructing Longs. ### Fields * Long#**low**: `number`<br /> The low 32 bits as a signed value. * Long#**high**: `number`<br /> The high 32 bits as a signed value. * Long#**unsigned**: `boolean`<br /> Whether unsigned or not. ### Constants * Long.**ZERO**: `Long`<br /> Signed zero. * Long.**ONE**: `Long`<br /> Signed one. * Long.**NEG_ONE**: `Long`<br /> Signed negative one. * Long.**UZERO**: `Long`<br /> Unsigned zero. * Long.**UONE**: `Long`<br /> Unsigned one. * Long.**MAX_VALUE**: `Long`<br /> Maximum signed value. * Long.**MIN_VALUE**: `Long`<br /> Minimum signed value. * Long.**MAX_UNSIGNED_VALUE**: `Long`<br /> Maximum unsigned value. 
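As a small sketch tying the constructor, fields and constants above together (the values here are purely illustrative; the comparison and conversion methods used are documented below):

```javascript
var Long = require("long");

// 2^64 - 1: low and high halves are both all ones, interpreted as unsigned.
var maxU64 = new Long(0xFFFFFFFF, 0xFFFFFFFF, true);
console.log(maxU64.toString());                      // "18446744073709551615"
console.log(maxU64.equals(Long.MAX_UNSIGNED_VALUE)); // true

// The same 64 bits reinterpreted as a signed value are -1.
console.log(maxU64.toSigned().equals(Long.NEG_ONE)); // true
```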
### Utility * Long.**isLong**(obj: `*`): `boolean`<br /> Tests if the specified object is a Long. * Long.**fromBits**(lowBits: `number`, highBits: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the 64 bit integer that comes by concatenating the given low and high bits. Each is assumed to use 32 bits. * Long.**fromBytes**(bytes: `number[]`, unsigned?: `boolean`, le?: `boolean`): `Long`<br /> Creates a Long from its byte representation. * Long.**fromBytesLE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its little endian byte representation. * Long.**fromBytesBE**(bytes: `number[]`, unsigned?: `boolean`): `Long`<br /> Creates a Long from its big endian byte representation. * Long.**fromInt**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given 32 bit integer value. * Long.**fromNumber**(value: `number`, unsigned?: `boolean`): `Long`<br /> Returns a Long representing the given value, provided that it is a finite number. Otherwise, zero is returned. * Long.**fromString**(str: `string`, unsigned?: `boolean`, radix?: `number`)<br /> Long.**fromString**(str: `string`, radix: `number`)<br /> Returns a Long representation of the given string, written using the specified radix. * Long.**fromValue**(val: `*`, unsigned?: `boolean`): `Long`<br /> Converts the specified value to a Long using the appropriate from* function for its type. ### Methods * Long#**add**(addend: `Long | number | string`): `Long`<br /> Returns the sum of this and the specified Long. * Long#**and**(other: `Long | number | string`): `Long`<br /> Returns the bitwise AND of this Long and the specified. * Long#**compare**/**comp**(other: `Long | number | string`): `number`<br /> Compares this Long's value with the specified's. Returns `0` if they are the same, `1` if the this is greater and `-1` if the given one is greater. * Long#**divide**/**div**(divisor: `Long | number | string`): `Long`<br /> Returns this Long divided by the specified. * Long#**equals**/**eq**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value equals the specified's. * Long#**getHighBits**(): `number`<br /> Gets the high 32 bits as a signed integer. * Long#**getHighBitsUnsigned**(): `number`<br /> Gets the high 32 bits as an unsigned integer. * Long#**getLowBits**(): `number`<br /> Gets the low 32 bits as a signed integer. * Long#**getLowBitsUnsigned**(): `number`<br /> Gets the low 32 bits as an unsigned integer. * Long#**getNumBitsAbs**(): `number`<br /> Gets the number of bits needed to represent the absolute value of this Long. * Long#**greaterThan**/**gt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than the specified's. * Long#**greaterThanOrEqual**/**gte**/**ge**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is greater than or equal the specified's. * Long#**isEven**(): `boolean`<br /> Tests if this Long's value is even. * Long#**isNegative**(): `boolean`<br /> Tests if this Long's value is negative. * Long#**isOdd**(): `boolean`<br /> Tests if this Long's value is odd. * Long#**isPositive**(): `boolean`<br /> Tests if this Long's value is positive. * Long#**isZero**/**eqz**(): `boolean`<br /> Tests if this Long's value equals zero. * Long#**lessThan**/**lt**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than the specified's. 
* Long#**lessThanOrEqual**/**lte**/**le**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value is less than or equal the specified's.
* Long#**modulo**/**mod**/**rem**(divisor: `Long | number | string`): `Long`<br /> Returns this Long modulo the specified.
* Long#**multiply**/**mul**(multiplier: `Long | number | string`): `Long`<br /> Returns the product of this and the specified Long.
* Long#**negate**/**neg**(): `Long`<br /> Negates this Long's value.
* Long#**not**(): `Long`<br /> Returns the bitwise NOT of this Long.
* Long#**notEquals**/**neq**/**ne**(other: `Long | number | string`): `boolean`<br /> Tests if this Long's value differs from the specified's.
* Long#**or**(other: `Long | number | string`): `Long`<br /> Returns the bitwise OR of this Long and the specified.
* Long#**shiftLeft**/**shl**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits shifted to the left by the given amount.
* Long#**shiftRight**/**shr**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits arithmetically shifted to the right by the given amount.
* Long#**shiftRightUnsigned**/**shru**/**shr_u**(numBits: `Long | number | string`): `Long`<br /> Returns this Long with bits logically shifted to the right by the given amount.
* Long#**subtract**/**sub**(subtrahend: `Long | number | string`): `Long`<br /> Returns the difference of this and the specified Long.
* Long#**toBytes**(le?: `boolean`): `number[]`<br /> Converts this Long to its byte representation.
* Long#**toBytesLE**(): `number[]`<br /> Converts this Long to its little endian byte representation.
* Long#**toBytesBE**(): `number[]`<br /> Converts this Long to its big endian byte representation.
* Long#**toInt**(): `number`<br /> Converts the Long to a 32 bit integer, assuming it is a 32 bit integer.
* Long#**toNumber**(): `number`<br /> Converts the Long to the nearest floating-point representation of this value (double, 53 bit mantissa).
* Long#**toSigned**(): `Long`<br /> Converts this Long to signed.
* Long#**toString**(radix?: `number`): `string`<br /> Converts the Long to a string written in the specified radix.
* Long#**toUnsigned**(): `Long`<br /> Converts this Long to unsigned.
* Long#**xor**(other: `Long | number | string`): `Long`<br /> Returns the bitwise XOR of this Long and the given one.

Building
--------

To build an UMD bundle to `dist/long.js`, run:

```
$> npm install
$> npm run build
```

Running the [tests](./tests):

```
$> npm test
```

# lodash.clonedeep v4.5.0

The [lodash](https://lodash.com/) method `_.cloneDeep` exported as a [Node.js](https://nodejs.org/) module.

## Installation

Using npm:

```bash
$ {sudo -H} npm i -g npm
$ npm i --save lodash.clonedeep
```

In Node.js:

```js
var cloneDeep = require('lodash.clonedeep');
```

See the [documentation](https://lodash.com/docs#cloneDeep) or [package source](https://github.com/lodash/lodash/blob/4.5.0-npm-packages/lodash.clonedeep) for more details.

# fs-minipass

Filesystem streams based on [minipass](http://npm.im/minipass).

4 classes are exported:

- ReadStream
- ReadStreamSync
- WriteStream
- WriteStreamSync

When using `ReadStreamSync`, all of the data is made available immediately upon consuming the stream. Nothing is buffered in memory when the stream is constructed. If the stream is piped to a writer, then it will synchronously `read()` and emit data into the writer as fast as the writer can consume it. (That is, it will respect backpressure.)
If you call `stream.read()` then it will read the entire file and return the contents. When using `WriteStreamSync`, every write is flushed to the file synchronously. If your writes all come in a single tick, then it'll write it all out in a single tick. It's as synchronous as you are. The async versions work much like their node builtin counterparts, with the exception of introducing significantly less Stream machinery overhead. ## USAGE It's just streams, you pipe them or read() them or write() to them. ```js const fsm = require('fs-minipass') const readStream = new fsm.ReadStream('file.txt') const writeStream = new fsm.WriteStream('output.txt') writeStream.write('some file header or whatever\n') readStream.pipe(writeStream) ``` ## ReadStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `readSize` The size of reads to do, defaults to 16MB - `size` The size of the file, if known. Prevents zero-byte read() call at the end. - `autoClose` Set to `false` to prevent the file descriptor from being closed when the file is done being read. ## WriteStream(path, options) Path string is required, but somewhat irrelevant if an open file descriptor is passed in as an option. Options: - `fd` Pass in a numeric file descriptor, if the file is already open. - `mode` The mode to create the file with. Defaults to `0o666`. - `start` The position in the file to start reading. If not specified, then the file will start writing at position zero, and be truncated by default. - `autoClose` Set to `false` to prevent the file descriptor from being closed when the stream is ended. - `flags` Flags to use when opening the file. Irrelevant if `fd` is passed in, since file won't be opened in that case. Defaults to `'a'` if a `pos` is specified, or `'w'` otherwise. <p align="center"> <img width="250" src="/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> [![Build Status][travis-image]][travis-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description : Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments. > <img width="400" src="/screen.png"> * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage : ### Simple Example ```javascript #!/usr/bin/env node const {argv} = require('yargs') if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! 
``` ### Complex Example ```javascript #!/usr/bin/env node require('yargs') // eslint-disable-line .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ## Webpack See usage examples of yargs with webpack in [docs](/docs/webpack.md). ## Community : Having problems? want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). ## Documentation : ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Contributing](/contributing.md) [travis-url]: https://travis-ci.org/yargs/yargs [travis-image]: https://img.shields.io/travis/yargs/yargs/master.svg [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc A JSON with color names and its values. Based on http://dev.w3.org/csswg/css-color/#named-colors. [![NPM](https://nodei.co/npm/color-name.png?mini=true)](https://nodei.co/npm/color-name/) ```js var colors = require('color-name'); colors.red //[255,0,0] ``` <a href="LICENSE"><img src="https://upload.wikimedia.org/wikipedia/commons/0/0c/MIT_logo.svg" width="120"/></a> Standard library ================ Standard library components for use with `tsc` (portable) and `asc` (assembly). Base configurations (.json) and definition files (.d.ts) are relevant to `tsc` only and not used by `asc`. # minimatch A minimal matching utility. [![Build Status](https://secure.travis-ci.org/isaacs/minimatch.svg)](http://travis-ci.org/isaacs/minimatch) This is the matching library used internally by npm. It works by converting glob expressions into JavaScript `RegExp` objects. ## Usage ```javascript var minimatch = require("minimatch") minimatch("bar.foo", "*.foo") // true! minimatch("bar.foo", "*.bar") // false! minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy! 
``` ## Features Supports these glob features: * Brace Expansion * Extended glob matching * "Globstar" `**` matching See: * `man sh` * `man bash` * `man 3 fnmatch` * `man 5 gitignore` ## Minimatch Class Create a minimatch object by instantiating the `minimatch.Minimatch` class. ```javascript var Minimatch = require("minimatch").Minimatch var mm = new Minimatch(pattern, options) ``` ### Properties * `pattern` The original pattern the minimatch object represents. * `options` The options supplied to the constructor. * `set` A 2-dimensional array of regexp or string expressions. Each row in the array corresponds to a brace-expanded pattern. Each item in the row corresponds to a single path-part. For example, the pattern `{a,b/c}/d` would expand to a set of patterns like: [ [ a, d ] , [ b, c, d ] ] If a portion of the pattern doesn't have any "magic" in it (that is, it's something like `"foo"` rather than `fo*o?`), then it will be left as a string rather than converted to a regular expression. * `regexp` Created by the `makeRe` method. A single regular expression expressing the entire pattern. This is useful in cases where you wish to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled. * `negate` True if the pattern is negated. * `comment` True if the pattern is a comment. * `empty` True if the pattern is `""`. ### Methods * `makeRe` Generate the `regexp` member if necessary, and return it. Will return `false` if the pattern is invalid. * `match(fname)` Return true if the filename matches the pattern, or false otherwise. * `matchOne(fileArray, patternArray, partial)` Take a `/`-split filename, and match it against a single row in the `regExpSet`. This method is mainly for internal use, but is exposed so that it can be used by a glob-walker that needs to avoid excessive filesystem calls. All other methods are internal, and will be called as necessary. ### minimatch(path, pattern, options) Main export. Tests a path against the pattern using the options. ```javascript var isJS = minimatch(file, "*.js", { matchBase: true }) ``` ### minimatch.filter(pattern, options) Returns a function that tests its supplied argument, suitable for use with `Array.filter`. Example: ```javascript var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true})) ``` ### minimatch.match(list, pattern, options) Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself. ```javascript var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})) ``` ### minimatch.makeRe(pattern, options) Make a regular expression object from the pattern. ## Options All options are `false` by default. ### debug Dump a ton of stuff to stderr. ### nobrace Do not expand `{a,b}` and `{1..3}` brace sets. ### noglobstar Disable `**` matching against multiple folder names. ### dot Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot. Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot` is set. ### noext Disable "extglob" style patterns like `+(a|b)`. ### nocase Perform a case-insensitive match. ### nonull When a match is not found by `minimatch.match`, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches. ### matchBase If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. 
For example, `a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`. ### nocomment Suppress the behavior of treating `#` at the start of a pattern as a comment. ### nonegate Suppress the behavior of treating a leading `!` character as negation. ### flipNegate Returns from negate expressions the same as if they were not negated. (Ie, true on a hit, false on a miss.) ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional. If the pattern starts with a `!` character, then it is negated. Set the `nonegate` flag to suppress this behavior, and treat leading `!` characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like `!(a|B)`. Multiple `!` characters at the start of a pattern will negate the pattern multiple times. If a pattern starts with `#`, then it is treated as a comment, and will not match anything. Use `\#` to match a literal `#` at the start of a line, or set the `nocomment` flag to suppress this behavior. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.1, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. If an escaped pattern has no matches, and the `nonull` flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, `minimatch.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. # Visitor utilities for AssemblyScript Compiler transformers ## Example ### List Fields The transformer: ```ts import { ClassDeclaration, FieldDeclaration, MethodDeclaration, } from "../../as"; import { ClassDecorator, registerDecorator } from "../decorator"; import { toString } from "../utils"; class ListMembers extends ClassDecorator { visitFieldDeclaration(node: FieldDeclaration): void { if (!node.name) console.log(toString(node) + "\n"); const name = toString(node.name); const _type = toString(node.type!); this.stdout.write(name + ": " + _type + "\n"); } visitMethodDeclaration(node: MethodDeclaration): void { const name = toString(node.name); if (name == "constructor") { return; } const sig = toString(node.signature); this.stdout.write(name + ": " + sig + "\n"); } visitClassDeclaration(node: ClassDeclaration): void { this.visit(node.members); } get name(): string { return "list"; } } export = registerDecorator(new ListMembers()); ``` assembly/foo.ts: ```ts @list class Foo { a: u8; b: bool; i: i32; } ``` And then compile with `--transform` flag: ``` asc assembly/foo.ts --transform ./dist/examples/list --noEmit ``` Which prints the following to the console: ``` a: u8 b: bool i: i32 ``` # binary-install Install .tar.gz binary applications via npm ## Usage This library provides a single class `Binary` that takes a download url and some optional arguments. 
You **must** provide either `name` or `installDirectory` when creating your `Binary`.

| option           | description                                    |
| ---------------- | ---------------------------------------------- |
| name             | The name of your binary                        |
| installDirectory | A path to the directory to install the binary  |

If an `installDirectory` is not provided, the binary will be installed at your OS-specific config directory. On MacOS it defaults to `~/Library/Preferences/${name}-nodejs`.

After your `Binary` has been created, you can run `.install()` to install the binary, and `.run()` to run it.

### Example

This is meant to be used as a library - create your `Binary` with your desired options, then call `.install()` in the `postinstall` of your `package.json`, `.run()` in the `bin` section of your `package.json`, and `.uninstall()` in the `preuninstall` section of your `package.json`. See [this example project](/example) to see how to create an npm package that installs and runs a binary using the Github releases API.

The AssemblyScript Runtime
==========================

The runtime provides the functionality necessary to dynamically allocate and deallocate memory of objects, arrays and buffers, as well as collect garbage that is no longer used. The current implementation is either a Two-Color Mark & Sweep (TCMS) garbage collector that must be called manually when the execution stack is unwound or an Incremental Tri-Color Mark & Sweep (ITCMS) garbage collector that is fully automated with a shadow stack, implemented on top of a Two-Level Segregate Fit (TLSF) memory manager. It's not designed to be the fastest of its kind, but intentionally focuses on simplicity and ease of integration until we can replace it with the real deal, i.e. Wasm GC.

Interface
---------

### Garbage collector / `--exportRuntime`

* **__new**(size: `usize`, id: `u32` = 0): `usize`<br /> Dynamically allocates a GC object of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally. GC-allocated objects cannot be used with `__realloc` and `__free`.
* **__pin**(ptr: `usize`): `usize`<br /> Pins the object pointed to by `ptr` externally so it and its directly reachable members and indirectly reachable objects do not become garbage collected.
* **__unpin**(ptr: `usize`): `void`<br /> Unpins the object pointed to by `ptr` externally so it can become garbage collected.
* **__collect**(): `void`<br /> Performs a full garbage collection.

### Internals

* **__alloc**(size: `usize`): `usize`<br /> Dynamically allocates a chunk of memory of at least the specified size and returns its address. Alignment is guaranteed to be 16 bytes to fit up to v128 values naturally.
* **__realloc**(ptr: `usize`, size: `usize`): `usize`<br /> Dynamically changes the size of a chunk of memory, possibly moving it to a new address.
* **__free**(ptr: `usize`): `void`<br /> Frees a dynamically allocated chunk of memory by its address.
* **__renew**(ptr: `usize`, size: `usize`): `usize`<br /> Like `__realloc`, but for `__new`ed GC objects.
* **__link**(parentPtr: `usize`, childPtr: `usize`, expectMultiple: `bool`): `void`<br /> Introduces a link from a parent object to a child object, i.e. upon `parent.field = child`.
* **__visit**(ptr: `usize`, cookie: `u32`): `void`<br /> Concrete visitor implementation called during traversal. Cookie can be used to indicate one of multiple operations.
* **__visit_globals**(cookie: `u32`): `void`<br /> Calls `__visit` on each global that is of a managed type.
* **__visit_members**(ptr: `usize`, cookie: `u32`): `void`<br /> Calls `__visit` on each member of the object pointed to by `ptr`.
* **__typeinfo**(id: `u32`): `RTTIFlags`<br /> Obtains the runtime type information for objects with the specified runtime id. Runtime type information is a set of flags indicating whether a type is managed, an array or similar, and what the relevant alignments are when creating an instance externally, etc.
* **__instanceof**(ptr: `usize`, classId: `u32`): `bool`<br /> Tests if the object pointed to by `ptr` is an instance of the specified class id.

ITCMS / `--runtime incremental`
-----

The Incremental Tri-Color Mark & Sweep garbage collector maintains a separate shadow stack of managed values in the background to achieve full automation. Maintaining another stack introduces some overhead compared to the simpler Two-Color Mark & Sweep garbage collector, but makes it independent of whether the execution stack is unwound or not when it is invoked, so the garbage collector can run interleaved with the program.

There are several constants one can experiment with to tweak ITCMS's automation:

* `--use ASC_GC_GRANULARITY=1024`<br /> How often to interrupt. The default of 1024 means "interrupt each 1024 bytes allocated".
* `--use ASC_GC_STEPFACTOR=200`<br /> How long to interrupt. The default of 200% means "run at double the speed of allocations".
* `--use ASC_GC_IDLEFACTOR=200`<br /> How long to idle. The default of 200% means "wait for memory to double before kicking in again".
* `--use ASC_GC_MARKCOST=1`<br /> How costly it is to mark one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`.
* `--use ASC_GC_SWEEPCOST=10`<br /> How costly it is to sweep one object. Budget per interrupt is `GRANULARITY * STEPFACTOR / 100`.

TCMS / `--runtime minimal`
----

If automation and low pause times aren't strictly necessary, using the Two-Color Mark & Sweep garbage collector instead by invoking collection manually at appropriate times when the execution stack is unwound may be more performant, as it is simpler and has less overhead. The execution stack is typically unwound when invoking the collector externally, at a place that is not indirectly called from Wasm.

STUB / `--runtime stub`
----

The stub is a maximally minimal runtime substitute, consisting of a simple and fast bump allocator with no means of freeing up memory again, except when freeing the respective most recently allocated object on top of the bump. Useful where memory is not a concern, and/or where it is sufficient to destroy the whole module including any potential garbage after execution.

See also: [Garbage collection](https://www.assemblyscript.org/garbage-collection.html)

Browser-friendly inheritance fully compatible with standard node.js [inherits](http://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor).

This package exports the standard `inherits` from the node.js `util` module in a node environment, but also provides an alternative browser-friendly implementation through the [browser field](https://gist.github.com/shtylman/4339901). The alternative implementation is a literal copy of the standard one, located in a standalone module to avoid requiring `util`. It also has a shim for old browsers with no `Object.create` support.
While ensuring you are using the standard `inherits` implementation in a node.js environment, it allows bundlers such as [browserify](https://github.com/substack/node-browserify) to avoid including the full `util` package in your client code when all you need is just the `inherits` function. This is worthwhile because the browser shim for the `util` package is large and `inherits` is often the only function you need from it.

It's recommended to use this package instead of `require('util').inherits` for any code that has a chance of being used not only in node.js but in the browser too.

## usage

```js
var inherits = require('inherits');
// then use exactly as the standard one
```

## note on version ~1.0

Version ~1.0 had a completely different motivation and is compatible neither with 2.0 nor with standard node.js `inherits`.

If you are using version ~1.0 and planning to switch to ~2.0, be careful:

* new version uses `super_` instead of `super` for referencing superclass
* new version overwrites current prototype while old one preserves any existing fields on it

# yallist

Yet Another Linked List

There are many doubly-linked list implementations like it, but this one is mine.

For when an array would be too big, and a Map can't be iterated in reverse order.

[![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist)

## basic usage

```javascript
var yallist = require('yallist')
var myList = yallist.create([1, 2, 3])
myList.push('foo')
myList.unshift('bar')
// of course pop() and shift() are there, too
console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo']
myList.forEach(function (k) {
  // walk the list head to tail
})
myList.forEachReverse(function (k, index, list) {
  // walk the list tail to head
})
var myDoubledList = myList.map(function (k) {
  return k + k
})
// now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo']
// mapReverse is also a thing
var myDoubledListReverse = myList.mapReverse(function (k) {
  return k + k
}) // ['foofoo', 6, 4, 2, 'barbar']
var reduced = myList.reduce(function (set, entry) {
  set += entry
  return set
}, 'start')
console.log(reduced) // 'startbar123foo'
```

## api

The whole API is considered "public". Functions with the same name as an Array method work more or less the same way. There's reverse versions of most things because that's the point.

### Yallist

Default export, the class that holds and manages a list.

Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list.

The Array-ish methods all act like you'd expect. No magic length, though, so if you change that it won't automatically prune or add empty spots.

### Yallist.create(..)

Alias for Yallist function. Some people like factories.

#### yallist.head

The first node in the list

#### yallist.tail

The last node in the list

#### yallist.length

The number of nodes in the list. (Change this at your peril. It is not magic like Array length.)

#### yallist.toArray()

Convert the list to an array.

#### yallist.forEach(fn, [thisp])

Call a function on each item in the list.

#### yallist.forEachReverse(fn, [thisp])

Call a function on each item in the list, in reverse order.

#### yallist.get(n)

Get the data at position `n` in the list. If you use this a lot, probably better off just using an Array.

#### yallist.getReverse(n)

Get the data at position `n`, counting from the tail.
#### yallist.map(fn, thisp) Create a new Yallist with the result of calling the function on each item. #### yallist.mapReverse(fn, thisp) Same as `map`, but in reverse. #### yallist.pop() Get the data from the list tail, and remove the tail from the list. #### yallist.push(item, ...) Insert one or more items to the tail of the list. #### yallist.reduce(fn, initialValue) Like Array.reduce. #### yallist.reduceReverse Like Array.reduce, but in reverse. #### yallist.reverse Reverse the list in place. #### yallist.shift() Get the data from the list head, and remove the head from the list. #### yallist.slice([from], [to]) Just like Array.slice, but returns a new Yallist. #### yallist.sliceReverse([from], [to]) Just like yallist.slice, but the result is returned in reverse. #### yallist.toArray() Create an array representation of the list. #### yallist.toArrayReverse() Create a reversed array representation of the list. #### yallist.unshift(item, ...) Insert one or more items to the head of the list. #### yallist.unshiftNode(node) Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.) If the node belongs to a different list, then that list will remove it first. #### yallist.pushNode(node) Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.) If the node belongs to a list already, then that list will remove it first. #### yallist.removeNode(node) Remove a node from the list, preserving referential integrity of head and tail and other nodes. Will throw an error if you try to have a list remove a node that doesn't belong to it. ### Yallist.Node The class that holds the data and is actually the list. Call with `var n = new Node(value, previousNode, nextNode)` Note that if you do direct operations on Nodes themselves, it's very easy to get into weird states where the list is broken. Be careful :) #### node.next The next node in the list. #### node.prev The previous node in the list. #### node.value The data the node contains. #### node.list The list to which this node belongs. (Null if it does not belong to any list.) assemblyscript-json # assemblyscript-json ## Table of contents ### Namespaces - [JSON](modules/json.md) ### Classes - [DecoderState](classes/decoderstate.md) - [JSONDecoder](classes/jsondecoder.md) - [JSONEncoder](classes/jsonencoder.md) - [JSONHandler](classes/jsonhandler.md) - [ThrowingJSONHandler](classes/throwingjsonhandler.md) ## Setting up your terminal The scripts in this folder support a simple demonstration of the contract. 
It uses the following setup: ```txt ┌───────────────────────────────────────┬───────────────────────────────────────┐ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └───────────────────────────────────────┴───────────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands ```sh 1.init.sh # cleanup, compile and deploy contract 2.run.sh # call methods on the deployed contract ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linux-watch-command#191068) # minizlib A fast zlib stream built on [minipass](http://npm.im/minipass) and Node.js's zlib binding. This module was created to serve the needs of [node-tar](http://npm.im/tar) and [minipass-fetch](http://npm.im/minipass-fetch). Brotli is supported in versions of node with a Brotli binding. ## How does this differ from the streams in `require('zlib')`? First, there are no convenience methods to compress or decompress a buffer. If you want those, use the built-in `zlib` module. This is only streams. That being said, Minipass streams to make it fairly easy to use as one-liners: `new zlib.Deflate().end(data).read()` will return the deflate compressed result. This module compresses and decompresses the data as fast as you feed it in. It is synchronous, and runs on the main process thread. Zlib and Brotli operations can be high CPU, but they're very fast, and doing it this way means much less bookkeeping and artificial deferral. Node's built in zlib streams are built on top of `stream.Transform`. They do the maximally safe thing with respect to consistent asynchrony, buffering, and backpressure. See [Minipass](http://npm.im/minipass) for more on the differences between Node.js core streams and Minipass streams, and the convenience methods provided by that class. ## Classes - Deflate - Inflate - Gzip - Gunzip - DeflateRaw - InflateRaw - Unzip - BrotliCompress (Node v10 and higher) - BrotliDecompress (Node v10 and higher) ## USAGE ```js const zlib = require('minizlib') const input = sourceOfCompressedData() const decode = new zlib.BrotliDecompress() const output = whereToWriteTheDecodedData() input.pipe(decode).pipe(output) ``` ## REPRODUCIBLE BUILDS To create reproducible gzip compressed files across different operating systems, set `portable: true` in the options. This causes minizlib to set the `OS` indicator in byte 9 of the extended gzip header to `0xFF` for 'unknown'. 
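For example, here is a minimal sketch of producing a reproducible gzip file with that option (the file names and the use of `fs` streams are placeholders, not part of minizlib itself):

```js
const fs = require('fs')
const zlib = require('minizlib')

// With portable: true, minizlib writes 0xFF ('unknown') as the OS byte
// in the gzip header, so the same input yields identical bytes on any OS.
const gzip = new zlib.Gzip({ portable: true })

fs.createReadStream('input.txt')          // hypothetical source file
  .pipe(gzip)
  .pipe(fs.createWriteStream('input.txt.gz'))
```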
# debug [![Build Status](https://travis-ci.org/visionmedia/debug.svg?branch=master)](https://travis-ci.org/visionmedia/debug) [![Coverage Status](https://coveralls.io/repos/github/visionmedia/debug/badge.svg?branch=master)](https://coveralls.io/github/visionmedia/debug?branch=master) [![Slack](https://visionmedia-community-slackin.now.sh/badge.svg)](https://visionmedia-community-slackin.now.sh/) [![OpenCollective](https://opencollective.com/debug/backers/badge.svg)](#backers) [![OpenCollective](https://opencollective.com/debug/sponsors/badge.svg)](#sponsors) <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers. ## Installation ```bash $ npm install debug ``` ## Usage `debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole. Example [_app.js_](./examples/node/app.js): ```js var debug = require('debug')('http') , http = require('http') , name = 'My App'; // fake app debug('booting %o', name); http.createServer(function(req, res){ debug(req.method + ' ' + req.url); res.end('hello\n'); }).listen(3000, function(){ debug('listening'); }); // fake worker of some kind require('./worker'); ``` Example [_worker.js_](./examples/node/worker.js): ```js var a = require('debug')('worker:a') , b = require('debug')('worker:b'); function work() { a('doing lots of uninteresting work'); setTimeout(work, Math.random() * 1000); } work(); function workb() { b('doing some work'); setTimeout(workb, Math.random() * 2000); } workb(); ``` The `DEBUG` environment variable is then used to enable these based on space or comma-delimited names. Here are some examples: <img width="647" alt="screen shot 2017-08-08 at 12 53 04 pm" src="https://user-images.githubusercontent.com/71256/29091703-a6302cdc-7c38-11e7-8304-7c0b3bc600cd.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 38 pm" src="https://user-images.githubusercontent.com/71256/29091700-a62a6888-7c38-11e7-800b-db911291ca2b.png"> <img width="647" alt="screen shot 2017-08-08 at 12 53 25 pm" src="https://user-images.githubusercontent.com/71256/29091701-a62ea114-7c38-11e7-826a-2692bedca740.png"> #### Windows note On Windows the environment variable is set using the `set` command. ```cmd set DEBUG=*,-not_this ``` Note that PowerShell uses different syntax to set environment variables. ```cmd $env:DEBUG = "*,-not_this" ``` Then, run the program to be debugged as usual. ## Namespace Colors Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to. #### Node.js In Node.js, colors are enabled when stderr is a TTY. You also _should_ install the [`supports-color`](https://npmjs.org/supports-color) module alongside debug, otherwise debug will only use a small handful of basic colors. <img width="521" src="https://user-images.githubusercontent.com/71256/29092181-47f6a9e6-7c3a-11e7-9a14-1928d8a711cd.png"> #### Web Browser Colors are also enabled on "Web Inspectors" that understand the `%c` formatting option. 
These are WebKit web inspectors, Firefox ([since version 31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/)) and the Firebug plugin for Firefox (any version). <img width="524" src="https://user-images.githubusercontent.com/71256/29092033-b65f9f2e-7c39-11e7-8e32-f6f0d8e865c1.png"> ## Millisecond diff When actively developing an application it can be useful to see when the time spent between one `debug()` call and the next. Suppose for example you invoke `debug()` before requesting a resource, and after as well, the "+NNNms" will show you how much time was spent between calls. <img width="647" src="https://user-images.githubusercontent.com/71256/29091486-fa38524c-7c37-11e7-895f-e7ec8e1039b6.png"> When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below: <img width="647" src="https://user-images.githubusercontent.com/71256/29091956-6bd78372-7c39-11e7-8c55-c948396d6edd.png"> ## Conventions If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debuggers you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output. ## Wildcards The `*` character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with `DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do `DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`. You can also exclude specific debuggers by prefixing them with a "-" character. For example, `DEBUG=*,-connect:*` would include all debuggers except those starting with "connect:". ## Environment Variables When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging: | Name | Purpose | |-----------|-------------------------------------------------| | `DEBUG` | Enables/disables specific debugging namespaces. | | `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). | | `DEBUG_COLORS`| Whether or not to use colors in the debug output. | | `DEBUG_DEPTH` | Object inspection depth. | | `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. | __Note:__ The environment variables beginning with `DEBUG_` end up being converted into an Options object that gets used with `%o`/`%O` formatters. See the Node.js documentation for [`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options) for the complete list. ## Formatters Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting. Below are the officially supported formatters: | Formatter | Representation | |-----------|----------------| | `%O` | Pretty-print an Object on multiple lines. | | `%o` | Pretty-print an Object all on a single line. | | `%s` | String. | | `%d` | Number (both integer and float). | | `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. | | `%%` | Single percent sign ('%'). This does not consume an argument. 
| ### Custom formatters You can add custom formatters by extending the `debug.formatters` object. For example, if you wanted to add support for rendering a Buffer as hex with `%h`, you could do something like: ```js const createDebug = require('debug') createDebug.formatters.h = (v) => { return v.toString('hex') } // …elsewhere const debug = createDebug('foo') debug('this is hex: %h', new Buffer('hello world')) // foo this is hex: 68656c6c6f20776f726c6421 +0ms ``` ## Browser Support You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify), or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest), if you don't want to build it yourself. Debug's enable state is currently persisted by `localStorage`. Consider the situation shown below where you have `worker:a` and `worker:b`, and wish to debug both. You can enable this using `localStorage.debug`: ```js localStorage.debug = 'worker:*' ``` And then refresh the page. ```js a = debug('worker:a'); b = debug('worker:b'); setInterval(function(){ a('doing some work'); }, 1000); setInterval(function(){ b('doing some work'); }, 1200); ``` ## Output streams By default `debug` will log to stderr, however this can be configured per-namespace by overriding the `log` method: Example [_stdout.js_](./examples/node/stdout.js): ```js var debug = require('debug'); var error = debug('app:error'); // by default stderr is used error('goes to stderr!'); var log = debug('app:log'); // set this namespace to log via console.log log.log = console.log.bind(console); // don't forget to bind to console! log('goes to stdout'); error('still goes to stderr!'); // set all output to go via console.info // overrides all per-namespace log settings debug.log = console.info.bind(console); error('now goes to stdout via console.info'); log('still goes to stdout, but via console.info now'); ``` ## Checking whether a debug target is enabled After you've created a debug instance, you can determine whether or not it is enabled by checking the `enabled` property: ```javascript const debug = require('debug')('http'); if (debug.enabled) { // do stuff... } ``` You can also manually toggle this property to force the debug instance to be enabled or disabled. ## Authors - TJ Holowaychuk - Nathan Rajlich - Andrew Rhyne ## Backers Support us with a monthly donation and help us continue our activities. 
[[Become a backer](https://opencollective.com/debug#backer)] <a href="https://opencollective.com/debug/backer/0/website" target="_blank"><img src="https://opencollective.com/debug/backer/0/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/1/website" target="_blank"><img src="https://opencollective.com/debug/backer/1/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/2/website" target="_blank"><img src="https://opencollective.com/debug/backer/2/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/3/website" target="_blank"><img src="https://opencollective.com/debug/backer/3/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/4/website" target="_blank"><img src="https://opencollective.com/debug/backer/4/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/5/website" target="_blank"><img src="https://opencollective.com/debug/backer/5/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/6/website" target="_blank"><img src="https://opencollective.com/debug/backer/6/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/7/website" target="_blank"><img src="https://opencollective.com/debug/backer/7/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/8/website" target="_blank"><img src="https://opencollective.com/debug/backer/8/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/9/website" target="_blank"><img src="https://opencollective.com/debug/backer/9/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/10/website" target="_blank"><img src="https://opencollective.com/debug/backer/10/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/11/website" target="_blank"><img src="https://opencollective.com/debug/backer/11/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/12/website" target="_blank"><img src="https://opencollective.com/debug/backer/12/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/13/website" target="_blank"><img src="https://opencollective.com/debug/backer/13/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/14/website" target="_blank"><img src="https://opencollective.com/debug/backer/14/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/15/website" target="_blank"><img src="https://opencollective.com/debug/backer/15/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/16/website" target="_blank"><img src="https://opencollective.com/debug/backer/16/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/17/website" target="_blank"><img src="https://opencollective.com/debug/backer/17/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/18/website" target="_blank"><img src="https://opencollective.com/debug/backer/18/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/19/website" target="_blank"><img src="https://opencollective.com/debug/backer/19/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/20/website" target="_blank"><img src="https://opencollective.com/debug/backer/20/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/21/website" target="_blank"><img src="https://opencollective.com/debug/backer/21/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/22/website" target="_blank"><img src="https://opencollective.com/debug/backer/22/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/23/website" target="_blank"><img 
src="https://opencollective.com/debug/backer/23/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/24/website" target="_blank"><img src="https://opencollective.com/debug/backer/24/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/25/website" target="_blank"><img src="https://opencollective.com/debug/backer/25/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/26/website" target="_blank"><img src="https://opencollective.com/debug/backer/26/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/27/website" target="_blank"><img src="https://opencollective.com/debug/backer/27/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/28/website" target="_blank"><img src="https://opencollective.com/debug/backer/28/avatar.svg"></a> <a href="https://opencollective.com/debug/backer/29/website" target="_blank"><img src="https://opencollective.com/debug/backer/29/avatar.svg"></a> ## Sponsors Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/debug#sponsor)] <a href="https://opencollective.com/debug/sponsor/0/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/0/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/1/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/1/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/2/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/2/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/3/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/3/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/4/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/4/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/5/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/5/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/6/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/6/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/7/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/7/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/8/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/8/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/9/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/9/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/10/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/10/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/11/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/11/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/12/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/12/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/13/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/13/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/14/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/14/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/15/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/15/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/16/website" target="_blank"><img 
src="https://opencollective.com/debug/sponsor/16/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/17/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/17/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/18/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/18/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/19/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/19/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/20/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/20/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/21/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/21/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/22/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/22/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/23/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/23/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/24/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/24/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/25/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/25/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/26/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/26/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/27/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/27/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/28/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/28/avatar.svg"></a> <a href="https://opencollective.com/debug/sponsor/29/website" target="_blank"><img src="https://opencollective.com/debug/sponsor/29/avatar.svg"></a> ## License (The MIT License) Copyright (c) 2014-2017 TJ Holowaychuk &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # tr46.js > An implementation of the [Unicode TR46 specification](http://unicode.org/reports/tr46/). ## Installation [Node.js](http://nodejs.org) `>= 6` is required. To install, type this at the command line: ```shell npm install tr46 ``` ## API ### `toASCII(domainName[, options])` Converts a string of Unicode symbols to a case-folded Punycode string of ASCII symbols. 
Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`processingOption`](#processingOption) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) * [`verifyDNSLength`](#verifyDNSLength) ### `toUnicode(domainName[, options])` Converts a case-folded Punycode string of ASCII symbols to a string of Unicode symbols. Available options: * [`checkBidi`](#checkBidi) * [`checkHyphens`](#checkHyphens) * [`checkJoiners`](#checkJoiners) * [`useSTD3ASCIIRules`](#useSTD3ASCIIRules) ## Options ### `checkBidi` Type: `Boolean` Default value: `false` When set to `true`, any bi-directional text within the input will be checked for validation. ### `checkHyphens` Type: `Boolean` Default value: `false` When set to `true`, the positions of any hyphen characters within the input will be checked for validation. ### `checkJoiners` Type: `Boolean` Default value: `false` When set to `true`, any word joiner characters within the input will be checked for validation. ### `processingOption` Type: `String` Default value: `"nontransitional"` When set to `"transitional"`, symbols within the input will be validated according to the older IDNA2003 protocol. When set to `"nontransitional"`, the current IDNA2008 protocol will be used. ### `useSTD3ASCIIRules` Type: `Boolean` Default value: `false` When set to `true`, input will be validated according to [STD3 Rules](http://unicode.org/reports/tr46/#STD3_Rules). ### `verifyDNSLength` Type: `Boolean` Default value: `false` When set to `true`, the length of each DNS label within the input will be checked for validation. # Sample This repository includes a complete project structure for AssemblyScript contracts targeting the NEAR platform. Watch this video where Willem Wyndham walks us through refactoring a simple example of a NEAR smart contract written in AssemblyScript https://youtu.be/QP7aveSqRPo ``` There are 2 "styles" of implementing AssemblyScript NEAR contracts: - the contract interface can either be a collection of exported functions - or the contract interface can be the methods of a an exported class We call the second style "Singleton" because there is only one instance of the class which is serialized to the blockchain storage. Rust contracts written for NEAR do this by default with the contract struct. 0:00 noise (to cut) 0:10 Welcome 0:59 Create project starting with "npm init" 2:20 Customize the project for AssemblyScript development 9:25 Import the Counter example and get unit tests passing 18:30 Adapt the Counter example to a Singleton style contract 21:49 Refactoring unit tests to access the new methods 24:45 Review and summary ``` The example here is very basic. It's a simple contract demonstrating the following concepts: - a single contract - the difference between `view` vs. `change` methods - basic contract storage The goal of this repository is to make it as easy as possible to get started writing unit and simulation tests for AssemblyScript contracts built to work with NEAR Protocol. ## Usage ### Getting started 1. clone this repo to a local folder 2. run `yarn` 3. run `yarn test` ### Top-level `yarn` commands - run `yarn test` to run all tests - (!) 
be sure to run `yarn build:release` at least once before: - run `yarn test:unit` to run only unit tests - run `yarn test:simulate` to run only simulation tests - run `yarn build` to quickly verify build status - run `yarn clean` to clean up build folder ### Other documentation - Sample contract and test documentation - see `/src/sample/README` for contract interface - see `/src/sample/__tests__/README` for Sample unit testing details - Sample contract simulation tests - see `/simulation/README` for simulation testing ## The file system Please note that boilerplate project configuration files have been ommitted from the following lists for simplicity. ### Contracts and Unit Tests ```txt src ├── sample <-- sample contract │   ├── README.md │   ├── __tests__ │   │   ├── README.md │   │   └── index.unit.spec.ts │   └── assembly │   └── index.ts └── utils.ts <-- shared contract code ``` ### Helper Scripts ```txt scripts ├── 1.init.sh ├── 2.run.sh └── README.md <-- instructions ``` ![](cow.png) Moo! ==== Moo is a highly-optimised tokenizer/lexer generator. Use it to tokenize your strings, before parsing 'em with a parser like [nearley](https://github.com/hardmath123/nearley) or whatever else you're into. * [Fast](#is-it-fast) * [Convenient](#usage) * uses [Regular Expressions](#on-regular-expressions) * tracks [Line Numbers](#line-numbers) * handles [Keywords](#keywords) * supports [States](#states) * custom [Errors](#errors) * is even [Iterable](#iteration) * has no dependencies * 4KB minified + gzipped * Moo! Is it fast? ----------- Yup! Flying-cows-and-singed-steak fast. Moo is the fastest JS tokenizer around. It's **~2–10x** faster than most other tokenizers; it's a **couple orders of magnitude** faster than some of the slower ones. Define your tokens **using regular expressions**. Moo will compile 'em down to a **single RegExp for performance**. It uses the new ES6 **sticky flag** where possible to make things faster; otherwise it falls back to an almost-as-efficient workaround. (For more than you ever wanted to know about this, read [adventures in the land of substrings and RegExps](http://mrale.ph/blog/2016/11/23/making-less-dart-faster.html).) You _might_ be able to go faster still by writing your lexer by hand rather than using RegExps, but that's icky. Oh, and it [avoids parsing RegExps by itself](https://hackernoon.com/the-madness-of-parsing-real-world-javascript-regexps-d9ee336df983#.2l8qu3l76). Because that would be horrible. Usage ----- First, you need to do the needful: `$ npm install moo`, or whatever will ship this code to your computer. Alternatively, grab the `moo.js` file by itself and slap it into your web page via a `<script>` tag; moo is completely standalone. Then you can start roasting your very own lexer/tokenizer: ```js const moo = require('moo') let lexer = moo.compile({ WS: /[ \t]+/, comment: /\/\/.*?$/, number: /0|[1-9][0-9]*/, string: /"(?:\\["\\]|[^\n"\\])*"/, lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], NL: { match: /\n/, lineBreaks: true }, }) ``` And now throw some text at it: ```js lexer.reset('while (10) cows\nmoo') lexer.next() // -> { type: 'keyword', value: 'while' } lexer.next() // -> { type: 'WS', value: ' ' } lexer.next() // -> { type: 'lparen', value: '(' } lexer.next() // -> { type: 'number', value: '10' } // ... ``` When you reach the end of Moo's internal buffer, next() will return `undefined`. You can always `reset()` it and feed it more data when that happens. 
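As a small sketch of that end-of-buffer behaviour (the input strings are arbitrary examples fed to the lexer compiled above):

```js
// drain everything the lexer currently has buffered
lexer.reset('cows moo\n')
let token
while ((token = lexer.next()) !== undefined) {
  console.log(token.type, token.value)
}

// next() now returns undefined, so reset() with more data and keep going
lexer.reset('if (cows) moo')
for (const tok of lexer) {
  console.log(tok.type, tok.value)
}
```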
On Regular Expressions ---------------------- RegExps are nifty for making tokenizers, but they can be a bit of a pain. Here are some things to be aware of: * You often want to use **non-greedy quantifiers**: e.g. `*?` instead of `*`. Otherwise your tokens will be longer than you expect: ```js let lexer = moo.compile({ string: /".*"/, // greedy quantifier * // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo" "bar' } ``` Better: ```js let lexer = moo.compile({ string: /".*?"/, // non-greedy quantifier *? // ... }) lexer.reset('"foo" "bar"') lexer.next() // -> { type: 'string', value: 'foo' } lexer.next() // -> { type: 'space', value: ' ' } lexer.next() // -> { type: 'string', value: 'bar' } ``` * The **order of your rules** matters. Earlier ones will take precedence. ```js moo.compile({ identifier: /[a-z0-9]+/, number: /[0-9]+/, }).reset('42').next() // -> { type: 'identifier', value: '42' } moo.compile({ number: /[0-9]+/, identifier: /[a-z0-9]+/, }).reset('42').next() // -> { type: 'number', value: '42' } ``` * Moo uses **multiline RegExps**. This has a few quirks: for example, the **dot `/./` doesn't include newlines**. Use `[^]` instead if you want to match newlines too. * Since an excluding character ranges like `/[^ ]/` (which matches anything but a space) _will_ include newlines, you have to be careful not to include them by accident! In particular, the whitespace metacharacter `\s` includes newlines. Line Numbers ------------ Moo tracks detailed information about the input for you. It will track line numbers, as long as you **apply the `lineBreaks: true` option to any rules which might contain newlines**. Moo will try to warn you if you forget to do this. Note that this is `false` by default, for performance reasons: counting the number of lines in a matched token has a small cost. For optimal performance, only match newlines inside a dedicated token: ```js newline: {match: '\n', lineBreaks: true}, ``` ### Token Info ### Token objects (returned from `next()`) have the following attributes: * **`type`**: the name of the group, as passed to compile. * **`text`**: the string that was matched. * **`value`**: the string that was matched, transformed by your `value` function (if any). * **`offset`**: the number of bytes from the start of the buffer where the match starts. * **`lineBreaks`**: the number of line breaks found in the match. (Always zero if this rule has `lineBreaks: false`.) * **`line`**: the line number of the beginning of the match, starting from 1. * **`col`**: the column where the match begins, starting from 1. ### Value vs. Text ### The `value` is the same as the `text`, unless you provide a [value transform](#transform). ```js const moo = require('moo') const lexer = moo.compile({ ws: /[ \t]+/, string: {match: /"(?:\\["\\]|[^\n"\\])*"/, value: s => s.slice(1, -1)}, }) lexer.reset('"test"') lexer.next() /* { value: 'test', text: '"test"', ... } */ ``` ### Reset ### Calling `reset()` on your lexer will empty its internal buffer, and set the line, column, and offset counts back to their initial value. If you don't want this, you can `save()` the state, and later pass it as the second argument to `reset()` to explicitly control the internal state of the lexer. ```js    lexer.reset('some line\n') let info = lexer.save() // -> { line: 10 } lexer.next() // -> { line: 10 } lexer.next() // -> { line: 11 } // ... 
lexer.reset('a different line\n', info) lexer.next() // -> { line: 10 } ``` Keywords -------- Moo makes it convenient to define literals. ```js moo.compile({ lparen: '(', rparen: ')', keyword: ['while', 'if', 'else', 'moo', 'cows'], }) ``` It'll automatically compile them into regular expressions, escaping them where necessary. **Keywords** should be written using the `keywords` transform. ```js moo.compile({ IDEN: {match: /[a-zA-Z]+/, type: moo.keywords({ KW: ['while', 'if', 'else', 'moo', 'cows'], })}, SPACE: {match: /\s+/, lineBreaks: true}, }) ``` ### Why? ### You need to do this to ensure the **longest match** principle applies, even in edge cases. Imagine trying to parse the input `className` with the following rules: ```js keyword: ['class'], identifier: /[a-zA-Z]+/, ``` You'll get _two_ tokens — `['class', 'Name']` -- which is _not_ what you want! If you swap the order of the rules, you'll fix this example; but now you'll lex `class` wrong (as an `identifier`). The keywords helper checks matches against the list of keywords; if any of them match, it uses the type `'keyword'` instead of `'identifier'` (for this example). ### Keyword Types ### Keywords can also have **individual types**. ```js let lexer = moo.compile({ name: {match: /[a-zA-Z]+/, type: moo.keywords({ 'kw-class': 'class', 'kw-def': 'def', 'kw-if': 'if', })}, // ... }) lexer.reset('def foo') lexer.next() // -> { type: 'kw-def', value: 'def' } lexer.next() // space lexer.next() // -> { type: 'name', value: 'foo' } ``` You can use [itt](https://github.com/nathan/itt)'s iterator adapters to make constructing keyword objects easier: ```js itt(['class', 'def', 'if']) .map(k => ['kw-' + k, k]) .toObject() ``` States ------ Moo allows you to define multiple lexer **states**. Each state defines its own separate set of token rules. Your lexer will start off in the first state given to `moo.states({})`. Rules can be annotated with `next`, `push`, and `pop`, to change the current state after that token is matched. A "stack" of past states is kept, which is used by `push` and `pop`. * **`next: 'bar'`** moves to the state named `bar`. (The stack is not changed.) * **`push: 'bar'`** moves to the state named `bar`, and pushes the old state onto the stack. * **`pop: 1`** removes one state from the top of the stack, and moves to that state. (Only `1` is supported.) Only rules from the current state can be matched. You need to copy your rule into all the states you want it to be matched in. For example, to tokenize JS-style string interpolation such as `a${{c: d}}e`, you might use: ```js let lexer = moo.states({ main: { strstart: {match: '`', push: 'lit'}, ident: /\w+/, lbrace: {match: '{', push: 'main'}, rbrace: {match: '}', pop: true}, colon: ':', space: {match: /\s+/, lineBreaks: true}, }, lit: { interp: {match: '${', push: 'main'}, escape: /\\./, strend: {match: '`', pop: true}, const: {match: /(?:[^$`]|\$(?!\{))+/, lineBreaks: true}, }, }) // <= `a${{c: d}}e` // => strstart const interp lbrace ident colon space ident rbrace rbrace const strend ``` The `rbrace` rule is annotated with `pop`, so it moves from the `main` state into either `lit` or `main`, depending on the stack. Errors ------ If none of your rules match, Moo will throw an Error; since it doesn't know what else to do. If you prefer, you can have moo return an error token instead of throwing an exception. The error token will contain the whole of the rest of the buffer. ```js moo.compile({ // ... 
myError: moo.error, }) moo.reset('invalid') moo.next() // -> { type: 'myError', value: 'invalid', text: 'invalid', offset: 0, lineBreaks: 0, line: 1, col: 1 } moo.next() // -> undefined ``` You can have a token type that both matches tokens _and_ contains error values. ```js moo.compile({ // ... myError: {match: /[\$?`]/, error: true}, }) ``` ### Formatting errors ### If you want to throw an error from your parser, you might find `formatError` helpful. Call it with the offending token: ```js throw new Error(lexer.formatError(token, "invalid syntax")) ``` It returns a string with a pretty error message. ``` Error: invalid syntax at line 2 col 15: totally valid `syntax` ^ ``` Iteration --------- Iterators: we got 'em. ```js for (let here of lexer) { // here = { type: 'number', value: '123', ... } } ``` Create an array of tokens. ```js let tokens = Array.from(lexer); ``` Use [itt](https://github.com/nathan/itt)'s iteration tools with Moo. ```js for (let [here, next] = itt(lexer).lookahead()) { // pass a number if you need more tokens // enjoy! } ``` Transform --------- Moo doesn't allow capturing groups, but you can supply a transform function, `value()`, which will be called on the value before storing it in the Token object. ```js moo.compile({ STRING: [ {match: /"""[^]*?"""/, lineBreaks: true, value: x => x.slice(3, -3)}, {match: /"(?:\\["\\rn]|[^"\\])*?"/, lineBreaks: true, value: x => x.slice(1, -1)}, {match: /'(?:\\['\\rn]|[^'\\])*?'/, lineBreaks: true, value: x => x.slice(1, -1)}, ], // ... }) ``` Contributing ------------ Do check the [FAQ](https://github.com/tjvr/moo/issues?q=label%3Aquestion). Before submitting an issue, [remember...](https://github.com/tjvr/moo/blob/master/.github/CONTRIBUTING.md) # minipass A _very_ minimal implementation of a [PassThrough stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough) [It's very fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0) for objects, strings, and buffers. Supports pipe()ing (including multi-pipe() and backpressure transmission), buffering data until either a `data` event handler or `pipe()` is added (so you don't lose the first chunk), and most other cases where PassThrough is a good idea. There is a `read()` method, but it's much more efficient to consume data from this stream via `'data'` events or by calling `pipe()` into some other stream. Calling `read()` requires the buffer to be flattened in some cases, which requires copying memory. There is also no `unpipe()` method. Once you start piping, there is no stopping it! If you set `objectMode: true` in the options, then whatever is written will be emitted. Otherwise, it'll do a minimal amount of Buffer copying to ensure proper Streams semantics when `read(n)` is called. `objectMode` can also be set by doing `stream.objectMode = true`, or by writing any non-string/non-buffer data. `objectMode` cannot be set to false once it is set. This is not a `through` or `through2` stream. It doesn't transform the data, it just passes it right through. If you want to transform the data, extend the class, and override the `write()` method. Once you're done transforming the data however you want, call `super.write()` with the transform output. 
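As a rough sketch of that subclass-and-override pattern (the upper-casing transform below is purely illustrative, not something minipass ships):

```js
const Minipass = require('minipass')

// illustrative only: a transform that upper-cases string chunks
class Upper extends Minipass {
  write (chunk, encoding, callback) {
    // transform first, then hand the result to the base class
    return super.write(String(chunk).toUpperCase(), encoding, callback)
  }
}

const up = new Upper({ encoding: 'utf8' })
up.on('data', c => console.log(c)) // -> 'HELLO'
up.end('hello')
```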
For some examples of streams that extend Minipass in various ways, check out: - [minizlib](http://npm.im/minizlib) - [fs-minipass](http://npm.im/fs-minipass) - [tar](http://npm.im/tar) - [minipass-collect](http://npm.im/minipass-collect) - [minipass-flush](http://npm.im/minipass-flush) - [minipass-pipeline](http://npm.im/minipass-pipeline) - [tap](http://npm.im/tap) - [tap-parser](http://npm.im/tap) - [treport](http://npm.im/tap) - [minipass-fetch](http://npm.im/minipass-fetch) - [pacote](http://npm.im/pacote) - [make-fetch-happen](http://npm.im/make-fetch-happen) - [cacache](http://npm.im/cacache) - [ssri](http://npm.im/ssri) - [npm-registry-fetch](http://npm.im/npm-registry-fetch) - [minipass-json-stream](http://npm.im/minipass-json-stream) - [minipass-sized](http://npm.im/minipass-sized) ## Differences from Node.js Streams There are several things that make Minipass streams different from (and in some ways superior to) Node.js core streams. Please read these caveats if you are familiar with noode-core streams and intend to use Minipass streams in your programs. ### Timing Minipass streams are designed to support synchronous use-cases. Thus, data is emitted as soon as it is available, always. It is buffered until read, but no longer. Another way to look at it is that Minipass streams are exactly as synchronous as the logic that writes into them. This can be surprising if your code relies on `PassThrough.write()` always providing data on the next tick rather than the current one, or being able to call `resume()` and not have the entire buffer disappear immediately. However, without this synchronicity guarantee, there would be no way for Minipass to achieve the speeds it does, or support the synchronous use cases that it does. Simply put, waiting takes time. This non-deferring approach makes Minipass streams much easier to reason about, especially in the context of Promises and other flow-control mechanisms. ### No High/Low Water Marks Node.js core streams will optimistically fill up a buffer, returning `true` on all writes until the limit is hit, even if the data has nowhere to go. Then, they will not attempt to draw more data in until the buffer size dips below a minimum value. Minipass streams are much simpler. The `write()` method will return `true` if the data has somewhere to go (which is to say, given the timing guarantees, that the data is already there by the time `write()` returns). If the data has nowhere to go, then `write()` returns false, and the data sits in a buffer, to be drained out immediately as soon as anyone consumes it. ### Hazards of Buffering (or: Why Minipass Is So Fast) Since data written to a Minipass stream is immediately written all the way through the pipeline, and `write()` always returns true/false based on whether the data was fully flushed, backpressure is communicated immediately to the upstream caller. This minimizes buffering. 
Consider this case: ```js const {PassThrough} = require('stream') const p1 = new PassThrough({ highWaterMark: 1024 }) const p2 = new PassThrough({ highWaterMark: 1024 }) const p3 = new PassThrough({ highWaterMark: 1024 }) const p4 = new PassThrough({ highWaterMark: 1024 }) p1.pipe(p2).pipe(p3).pipe(p4) p4.on('data', () => console.log('made it through')) // this returns false and buffers, then writes to p2 on next tick (1) // p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2) // p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3) // p4 returns false and buffers, pausing p3, then emits 'data' and 'drain' // on next tick (4) // p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and // 'drain' on next tick (5) // p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6) // p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next // tick (7) p1.write(Buffer.alloc(2048)) // returns false ``` Along the way, the data was buffered and deferred at each stage, and multiple event deferrals happened, for an unblocked pipeline where it was perfectly safe to write all the way through! Furthermore, setting a `highWaterMark` of `1024` might lead someone reading the code to think an advisory maximum of 1KiB is being set for the pipeline. However, the actual advisory buffering level is the _sum_ of `highWaterMark` values, since each one has its own bucket. Consider the Minipass case: ```js const m1 = new Minipass() const m2 = new Minipass() const m3 = new Minipass() const m4 = new Minipass() m1.pipe(m2).pipe(m3).pipe(m4) m4.on('data', () => console.log('made it through')) // m1 is flowing, so it writes the data to m2 immediately // m2 is flowing, so it writes the data to m3 immediately // m3 is flowing, so it writes the data to m4 immediately // m4 is flowing, so it fires the 'data' event immediately, returns true // m4's write returned true, so m3 is still flowing, returns true // m3's write returned true, so m2 is still flowing, returns true // m2's write returned true, so m1 is still flowing, returns true // No event deferrals or buffering along the way! m1.write(Buffer.alloc(2048)) // returns true ``` It is extremely unlikely that you _don't_ want to buffer any data written, or _ever_ buffer data that can be flushed all the way through. Neither node-core streams nor Minipass ever fail to buffer written data, but node-core streams do a lot of unnecessary buffering and pausing. As always, the faster implementation is the one that does less stuff and waits less time to do it. ### Immediately emit `end` for empty streams (when not paused) If a stream is not paused, and `end()` is called before writing any data into it, then it will emit `end` immediately. If you have logic that occurs on the `end` event which you don't want to potentially happen immediately (for example, closing file descriptors, moving on to the next entry in an archive parse stream, etc.) then be sure to call `stream.pause()` on creation, and then `stream.resume()` once you are ready to respond to the `end` event. ### Emit `end` When Asked One hazard of immediately emitting `'end'` is that you may not yet have had a chance to add a listener. In order to avoid this hazard, Minipass streams safely re-emit the `'end'` event if a new listener is added after `'end'` has been emitted. Ie, if you do `stream.on('end', someFunction)`, and the stream has already emitted `end`, then it will call the handler right away. 
(You can think of this somewhat like attaching a new `.then(fn)` to a previously-resolved Promise.) To prevent calling handlers multiple times who would not expect multiple ends to occur, all listeners are removed from the `'end'` event whenever it is emitted. ### Impact of "immediate flow" on Tee-streams A "tee stream" is a stream piping to multiple destinations: ```js const tee = new Minipass() t.pipe(dest1) t.pipe(dest2) t.write('foo') // goes to both destinations ``` Since Minipass streams _immediately_ process any pending data through the pipeline when a new pipe destination is added, this can have surprising effects, especially when a stream comes in from some other function and may or may not have data in its buffer. ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone src.pipe(dest2) // gets nothing! ``` The solution is to create a dedicated tee-stream junction that pipes to both locations, and then pipe to _that_ instead. ```js // Safe example: tee to both places const src = new Minipass() src.write('foo') const tee = new Minipass() tee.pipe(dest1) tee.pipe(dest2) src.pipe(tee) // tee gets 'foo', pipes to both locations ``` The same caveat applies to `on('data')` event listeners. The first one added will _immediately_ receive all of the data, leaving nothing for the second: ```js // WARNING! WILL LOSE DATA! const src = new Minipass() src.write('foo') src.on('data', handler1) // receives 'foo' right away src.on('data', handler2) // nothing to see here! ``` Using a dedicated tee-stream can be used in this case as well: ```js // Safe example: tee to both data handlers const src = new Minipass() src.write('foo') const tee = new Minipass() tee.on('data', handler1) tee.on('data', handler2) src.pipe(tee) ``` ## USAGE It's a stream! Use it like a stream and it'll most likely do what you want. ```js const Minipass = require('minipass') const mp = new Minipass(options) // optional: { encoding, objectMode } mp.write('foo') mp.pipe(someOtherStream) mp.end('bar') ``` ### OPTIONS * `encoding` How would you like the data coming _out_ of the stream to be encoded? Accepts any values that can be passed to `Buffer.toString()`. * `objectMode` Emit data exactly as it comes in. This will be flipped on by default if you write() something other than a string or Buffer at any point. Setting `objectMode: true` will prevent setting any encoding value. ### API Implements the user-facing portions of Node.js's `Readable` and `Writable` streams. ### Methods * `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the base Minipass class, the same data will come out.) Returns `false` if the stream will buffer the next write, or true if it's still in "flowing" mode. * `end([chunk, [encoding]], [callback])` - Signal that you have no more data to write. This will queue an `end` event to be fired when all the data has been consumed. * `setEncoding(encoding)` - Set the encoding for data coming of the stream. This can only be done once. * `pause()` - No more data for a while, please. This also prevents `end` from being emitted for empty streams until the stream is resumed. * `resume()` - Resume the stream. If there's data in the buffer, it is all discarded. Any buffered events are immediately emitted. * `pipe(dest)` - Send all output to the stream provided. There is no way to unpipe. When data is emitted, it is immediately written to any and all pipe destinations. 
* `on(ev, fn)`, `emit(ev, fn)` - Minipass streams are EventEmitters. Some events are given special treatment, however. (See below under "events".) * `promise()` - Returns a Promise that resolves when the stream emits `end`, or rejects if the stream emits `error`. * `collect()` - Return a Promise that resolves on `end` with an array containing each chunk of data that was emitted, or rejects if the stream emits `error`. Note that this consumes the stream data. * `concat()` - Same as `collect()`, but concatenates the data into a single Buffer object. Will reject the returned promise if the stream is in objectMode, or if it goes into objectMode by the end of the data. * `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not provided, then consume all of it. If `n` bytes are not available, then it returns null. **Note** consuming streams in this way is less efficient, and can lead to unnecessary Buffer copying. * `destroy([er])` - Destroy the stream. If an error is provided, then an `'error'` event is emitted. If the stream has a `close()` method, and has not emitted a `'close'` event yet, then `stream.close()` will be called. Any Promises returned by `.promise()`, `.collect()` or `.concat()` will be rejected. After being destroyed, writing to the stream will emit an error. No more data will be emitted if the stream is destroyed, even if it was previously buffered. ### Properties * `bufferLength` Read-only. Total number of bytes buffered, or in the case of objectMode, the total number of objects. * `encoding` The encoding that has been set. (Setting this is equivalent to calling `setEncoding(enc)` and has the same prohibition against setting multiple times.) * `flowing` Read-only. Boolean indicating whether a chunk written to the stream will be immediately emitted. * `emittedEnd` Read-only. Boolean indicating whether the end-ish events (ie, `end`, `prefinish`, `finish`) have been emitted. Note that listening on any end-ish event will immediateyl re-emit it if it has already been emitted. * `writable` Whether the stream is writable. Default `true`. Set to `false` when `end()` * `readable` Whether the stream is readable. Default `true`. * `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written to the stream that have not yet been emitted. (It's probably a bad idea to mess with this.) * `pipes` A [yallist](http://npm.im/yallist) linked list of streams that this stream is piping into. (It's probably a bad idea to mess with this.) * `destroyed` A getter that indicates whether the stream was destroyed. * `paused` True if the stream has been explicitly paused, otherwise false. * `objectMode` Indicates whether the stream is in `objectMode`. Once set to `true`, it cannot be set to `false`. ### Events * `data` Emitted when there's data to read. Argument is the data to read. This is never emitted while not flowing. If a listener is attached, that will resume the stream. * `end` Emitted when there's no more data to read. This will be emitted immediately for empty streams when `end()` is called. If a listener is attached, and `end` was already emitted, then it will be emitted again. All listeners are removed when `end` is emitted. * `prefinish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'end'`. * `finish` An end-ish event that follows the same logic as `end` and is emitted in the same conditions where `end` is emitted. Emitted after `'prefinish'`. 
* `close` An indication that an underlying resource has been released. Minipass does not emit this event, but will defer it until after `end` has been emitted, since it throws off some stream libraries otherwise. * `drain` Emitted when the internal buffer empties, and it is again suitable to `write()` into the stream. * `readable` Emitted when data is buffered and ready to be read by a consumer. * `resume` Emitted when stream changes state from buffering to flowing mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event listener is added.) ### Static Methods * `Minipass.isStream(stream)` Returns `true` if the argument is a stream, and false otherwise. To be considered a stream, the object must be either an instance of Minipass, or an EventEmitter that has either a `pipe()` method, or both `write()` and `end()` methods. (Pretty much any stream in node-land will return `true` for this.) ## EXAMPLES Here are some examples of things you can do with Minipass streams. ### simple "are you done yet" promise ```js mp.promise().then(() => { // stream is finished }, er => { // stream emitted an error }) ``` ### collecting ```js mp.collect().then(all => { // all is an array of all the data emitted // encoding is supported in this case, so // so the result will be a collection of strings if // an encoding is specified, or buffers/objects if not. // // In an async function, you may do // const data = await stream.collect() }) ``` ### collecting into a single blob This is a bit slower because it concatenates the data into one chunk for you, but if you're going to do it yourself anyway, it's convenient this way: ```js mp.concat().then(onebigchunk => { // onebigchunk is a string if the stream // had an encoding set, or a buffer otherwise. }) ``` ### iteration You can iterate over streams synchronously or asynchronously in platforms that support it. Synchronous iteration will end when the currently available data is consumed, even if the `end` event has not been reached. In string and buffer mode, the data is concatenated, so unless multiple writes are occurring in the same tick as the `read()`, sync iteration loops will generally only have a single iteration. To consume chunks in this way exactly as they have been written, with no flattening, create the stream with the `{ objectMode: true }` option. ```js const mp = new Minipass({ objectMode: true }) mp.write('a') mp.write('b') for (let letter of mp) { console.log(letter) // a, b } mp.write('c') mp.write('d') for (let letter of mp) { console.log(letter) // c, d } mp.write('e') mp.end() for (let letter of mp) { console.log(letter) // e } for (let letter of mp) { console.log(letter) // nothing } ``` Asynchronous iteration will continue until the end event is reached, consuming all of the data. 
```js
const mp = new Minipass({ encoding: 'utf8' })

// some source of some data
let i = 5
const inter = setInterval(() => {
  if (i --> 0)
    mp.write(Buffer.from('foo\n', 'utf8'))
  else {
    mp.end()
    clearInterval(inter)
  }
}, 100)

// consume the data with asynchronous iteration
async function consume () {
  for await (let chunk of mp) {
    console.log(chunk)
  }
  return 'ok'
}

consume().then(res => console.log(res))
// logs `foo\n` 5 times, and then `ok`
```

### subclass that `console.log()`s everything written into it

```js
class Logger extends Minipass {
  write (chunk, encoding, callback) {
    console.log('WRITE', chunk, encoding)
    return super.write(chunk, encoding, callback)
  }
  end (chunk, encoding, callback) {
    console.log('END', chunk, encoding)
    return super.end(chunk, encoding, callback)
  }
}

someSource.pipe(new Logger()).pipe(someDest)
```

### same thing, but using an inline anonymous class

```js
// js classes are fun
someSource
  .pipe(new (class extends Minipass {
    emit (ev, ...data) {
      // let's also log events, because debugging some weird thing
      console.log('EMIT', ev)
      return super.emit(ev, ...data)
    }
    write (chunk, encoding, callback) {
      console.log('WRITE', chunk, encoding)
      return super.write(chunk, encoding, callback)
    }
    end (chunk, encoding, callback) {
      console.log('END', chunk, encoding)
      return super.end(chunk, encoding, callback)
    }
  }))
  .pipe(someDest)
```

### subclass that defers 'end' for some reason

```js
class SlowEnd extends Minipass {
  emit (ev, ...args) {
    if (ev === 'end') {
      console.log('going to end, hold on a sec')
      setTimeout(() => {
        console.log('ok, ready to end now')
        super.emit('end', ...args)
      }, 100)
    } else {
      return super.emit(ev, ...args)
    }
  }
}
```

### transform that creates newline-delimited JSON

```js
class NDJSONEncode extends Minipass {
  write (obj, cb) {
    try {
      // JSON.stringify can throw, emit an error on that
      return super.write(JSON.stringify(obj) + '\n', 'utf8', cb)
    } catch (er) {
      this.emit('error', er)
    }
  }
  end (obj, cb) {
    if (typeof obj === 'function') {
      cb = obj
      obj = undefined
    }
    if (obj !== undefined) {
      this.write(obj)
    }
    return super.end(cb)
  }
}
```

### transform that parses newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor (options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write (chunk, encoding, cb) {
    if (typeof chunk === 'string' &&
        typeof encoding === 'string' &&
        encoding !== 'utf8') {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    // buffer partial lines, parse each complete line as JSON
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      let parsed
      try {
        parsed = JSON.parse(jsonData[i])
      } catch (er) {
        this.emit('error', er)
        continue
      }
      super.write(parsed)
    }
    if (cb)
      cb()
  }
}
```

# axios // helpers

The modules found in `helpers/` should be generic modules that are _not_ specific to the domain logic of axios. These modules could theoretically be published to npm on their own and consumed by other modules or apps.
Some examples of generic modules are things like:

- Browser polyfills
- Managing cookies
- Parsing HTTP headers

# <img src="./logo.png" alt="bn.js" width="160" height="160" />

> BigNum in pure javascript

[![Build Status](https://secure.travis-ci.org/indutny/bn.js.png)](http://travis-ci.org/indutny/bn.js)

## Install

`npm install --save bn.js`

## Usage

```js
const BN = require('bn.js');

var a = new BN('dead', 16);
var b = new BN('101010', 2);

var res = a.add(b);
console.log(res.toString(10));  // 57047
```

**Note**: decimals are not supported in this library.

## Notation

### Prefixes

There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name:

* `i` - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs
* `u` - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like `mod()`. In such cases if the result will be negative - modulo will be added to the result to make it positive

### Postfixes

* `n` - the argument of the function must be a plain JavaScript Number. Decimals are not supported.
* `rn` - both argument and return value of the function are plain JavaScript Numbers. Decimals are not supported.

### Examples

* `a.iadd(b)` - perform addition on `a` and `b`, storing the result in `a`
* `a.umod(b)` - reduce `a` modulo `b`, returning positive value
* `a.iushln(13)` - shift bits of `a` left by 13

## Instructions

Prefixes/postfixes are put in parens at the end of the line. `endian` - could be either `le` (little-endian) or `be` (big-endian).

### Utilities

* `a.clone()` - clone number
* `a.toString(base, length)` - convert to base-string and pad with zeroes
* `a.toNumber()` - convert to Javascript Number (limited to 53 bits)
* `a.toJSON()` - convert to JSON compatible hex string (alias of `toString(16)`)
* `a.toArray(endian, length)` - convert to byte `Array`, and optionally zero pad to length, throwing if already exceeding
* `a.toArrayLike(type, endian, length)` - convert to an instance of `type`, which must behave like an `Array`
* `a.toBuffer(endian, length)` - convert to Node.js Buffer (if available).
For compatibility with browserify and similar tools, use this instead: `a.toArrayLike(Buffer, endian, length)` * `a.bitLength()` - get number of bits occupied * `a.zeroBits()` - return number of less-significant consequent zero bits (example: `1010000` has 4 zero bits) * `a.byteLength()` - return number of bytes occupied * `a.isNeg()` - true if the number is negative * `a.isEven()` - no comments * `a.isOdd()` - no comments * `a.isZero()` - no comments * `a.cmp(b)` - compare numbers and return `-1` (a `<` b), `0` (a `==` b), or `1` (a `>` b) depending on the comparison result (`ucmp`, `cmpn`) * `a.lt(b)` - `a` less than `b` (`n`) * `a.lte(b)` - `a` less than or equals `b` (`n`) * `a.gt(b)` - `a` greater than `b` (`n`) * `a.gte(b)` - `a` greater than or equals `b` (`n`) * `a.eq(b)` - `a` equals `b` (`n`) * `a.toTwos(width)` - convert to two's complement representation, where `width` is bit width * `a.fromTwos(width)` - convert from two's complement representation, where `width` is the bit width * `BN.isBN(object)` - returns true if the supplied `object` is a BN.js instance * `BN.max(a, b)` - return `a` if `a` bigger than `b` * `BN.min(a, b)` - return `a` if `a` less than `b` ### Arithmetics * `a.neg()` - negate sign (`i`) * `a.abs()` - absolute value (`i`) * `a.add(b)` - addition (`i`, `n`, `in`) * `a.sub(b)` - subtraction (`i`, `n`, `in`) * `a.mul(b)` - multiply (`i`, `n`, `in`) * `a.sqr()` - square (`i`) * `a.pow(b)` - raise `a` to the power of `b` * `a.div(b)` - divide (`divn`, `idivn`) * `a.mod(b)` - reduct (`u`, `n`) (but no `umodn`) * `a.divmod(b)` - quotient and modulus obtained by dividing * `a.divRound(b)` - rounded division ### Bit operations * `a.or(b)` - or (`i`, `u`, `iu`) * `a.and(b)` - and (`i`, `u`, `iu`, `andln`) (NOTE: `andln` is going to be replaced with `andn` in future) * `a.xor(b)` - xor (`i`, `u`, `iu`) * `a.setn(b, value)` - set specified bit to `value` * `a.shln(b)` - shift left (`i`, `u`, `iu`) * `a.shrn(b)` - shift right (`i`, `u`, `iu`) * `a.testn(b)` - test if specified bit is set * `a.maskn(b)` - clear bits with indexes higher or equal to `b` (`i`) * `a.bincn(b)` - add `1 << b` to the number * `a.notn(w)` - not (for the width specified by `w`) (`i`) ### Reduction * `a.gcd(b)` - GCD * `a.egcd(b)` - Extended GCD results (`{ a: ..., b: ..., gcd: ... }`) * `a.invm(b)` - inverse `a` modulo `b` ## Fast reduction When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like [Montgomery multiplication][0], or using special algorithm for [Mersenne Prime][1]. ### Reduction context To enable this tricks one should create a reduction context: ```js var red = BN.red(num); ``` where `num` is just a BN instance. Or: ```js var red = BN.red(primeName); ``` Where `primeName` is either of these [Mersenne Primes][1]: * `'k256'` * `'p224'` * `'p192'` * `'p25519'` Or: ```js var red = BN.mont(num); ``` To reduce numbers with [Montgomery trick][0]. `.mont()` is generally faster than `.red(num)`, but slower than `BN.red(primeName)`. ### Converting numbers Before performing anything in reduction context - numbers should be converted to it. 
Usually, this means that one should: * Convert inputs to reducted ones * Operate on them in reduction context * Convert outputs back from the reduction context Here is how one may convert numbers to `red`: ```js var redA = a.toRed(red); ``` Where `red` is a reduction context created using instructions above Here is how to convert them back: ```js var a = redA.fromRed(); ``` ### Red instructions Most of the instructions from the very start of this readme have their counterparts in red context: * `a.redAdd(b)`, `a.redIAdd(b)` * `a.redSub(b)`, `a.redISub(b)` * `a.redShl(num)` * `a.redMul(b)`, `a.redIMul(b)` * `a.redSqr()`, `a.redISqr()` * `a.redSqrt()` - square root modulo reduction context's prime * `a.redInvm()` - modular inverse of the number * `a.redNeg()` * `a.redPow(b)` - modular exponentiation ### Number Size Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers. ## LICENSE This software is licensed under the MIT License. [0]: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication [1]: https://en.wikipedia.org/wiki/Mersenne_prime # y18n [![Build Status][travis-image]][travis-url] [![Coverage Status][coveralls-image]][coveralls-url] [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js var __ = require('y18n').__ console.log(__('my awesome string %s', 'foo')) ``` output: `my awesome string foo` _using tagged template literals_ ```js var __ = require('y18n').__ var str = 'foo' console.log(__`my awesome string ${str}`) ``` output: `my awesome string foo` _pluralization support:_ ```js var __n = require('y18n').__n console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')) ``` output: `2 fishes foo` ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. 
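Putting the configuration options and methods above together, here is a minimal usage sketch; the `./locales` directory and the `pirate` locale file are assumptions made purely for illustration:

```js
// Minimal y18n sketch — './locales' and 'pirate.json' are assumed to exist.
var y18n = require('y18n')({
  directory: './locales', // where the JSON language files live
  updateFiles: true,      // write newly observed strings back to the locale file
  locale: 'en'
})

console.log(y18n.__('Hello %s!', 'world'))       // "Hello world!" (looked up in en.json)
console.log(y18n.__n('%d fish', '%d fishes', 2)) // picks the plural form: "2 fishes"

y18n.setLocale('pirate')                         // switch locale at runtime
console.log(y18n.getLocale())                    // "pirate"
y18n.updateLocale({ 'Hello %s!': 'Ahoy %s!' })   // patch the current locale with new pairs
```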
## License

ISC

[travis-url]: https://travis-ci.org/yargs/y18n
[travis-image]: https://img.shields.io/travis/yargs/y18n.svg
[coveralls-url]: https://coveralls.io/github/yargs/y18n
[coveralls-image]: https://img.shields.io/coveralls/yargs/y18n.svg
[npm-url]: https://npmjs.org/package/y18n
[npm-image]: https://img.shields.io/npm/v/y18n.svg
[standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg
[standard-url]: https://github.com/feross/standard

# get-caller-file

[![Build Status](https://travis-ci.org/stefanpenner/get-caller-file.svg?branch=master)](https://travis-ci.org/stefanpenner/get-caller-file)
[![Build status](https://ci.appveyor.com/api/projects/status/ol2q94g1932cy14a/branch/master?svg=true)](https://ci.appveyor.com/project/embercli/get-caller-file/branch/master)

This is a utility which allows a function to figure out from which file it was invoked. It does so by inspecting v8's stack trace at the time it is invoked.

Inspired by http://stackoverflow.com/questions/13227489

*note: this relies on Node/V8 specific APIs, as such other runtimes may not work*

## Installation

```bash
yarn add get-caller-file
```

## Usage

Given:

```js
// ./foo.js
const getCallerFile = require('get-caller-file');

module.exports = function() {
  return getCallerFile(); // figures out who called it
};
```

```js
// index.js
const foo = require('./foo');

foo() // => /full/path/to/this/file/index.js
```

## Options:

* `getCallerFile(position = 2)`: where `position` is the stack frame whose fileName we want.

# assemblyscript-json

![npm version](https://img.shields.io/npm/v/assemblyscript-json) ![npm downloads per month](https://img.shields.io/npm/dm/assemblyscript-json)

JSON encoder / decoder for AssemblyScript.

Special thanks to https://github.com/MaxGraey/bignum.wasm for basic unit testing infra for AssemblyScript.

## Installation

`assemblyscript-json` is available as an [npm package](https://www.npmjs.com/package/assemblyscript-json).
You can install `assemblyscript-json` in your AssemblyScript project by running: `npm install --save assemblyscript-json` ## Usage ### Parsing JSON ```typescript import { JSON } from "assemblyscript-json"; // Parse an object using the JSON object let jsonObj: JSON.Obj = <JSON.Obj>(JSON.parse('{"hello": "world", "value": 24}')); // We can then use the .getX functions to read from the object if you know it's type // This will return the appropriate JSON.X value if the key exists, or null if the key does not exist let worldOrNull: JSON.Str | null = jsonObj.getString("hello"); // This will return a JSON.Str or null if (worldOrNull != null) { // use .valueOf() to turn the high level JSON.Str type into a string let world: string = worldOrNull.valueOf(); } let numOrNull: JSON.Num | null = jsonObj.getNum("value"); if (numOrNull != null) { // use .valueOf() to turn the high level JSON.Num type into a f64 let value: f64 = numOrNull.valueOf(); } // If you don't know the value type, get the parent JSON.Value let valueOrNull: JSON.Value | null = jsonObj.getValue("hello"); if (valueOrNull != null) { let value: JSON.Value = changetype<JSON.Value>(valueOrNull); // Next we could figure out what type we are if(value.isString) { // value.isString would be true, so we can cast to a string let stringValue: string = changetype<JSON.Str>(value).toString(); // Do something with string value } } ``` ### Encoding JSON ```typescript import { JSONEncoder } from "assemblyscript-json"; // Create encoder let encoder = new JSONEncoder(); // Construct necessary object encoder.pushObject("obj"); encoder.setInteger("int", 10); encoder.setString("str", ""); encoder.popObject(); // Get serialized data let json: Uint8Array = encoder.serialize(); // Or get serialized data as string let jsonString: string = encoder.toString(); assert(jsonString, '"obj": {"int": 10, "str": ""}'); // True! ``` ### Custom JSON Deserializers ```typescript import { JSONDecoder, JSONHandler } from "assemblyscript-json"; // Events need to be received by custom object extending JSONHandler. // NOTE: All methods are optional to implement. class MyJSONEventsHandler extends JSONHandler { setString(name: string, value: string): void { // Handle field } setBoolean(name: string, value: bool): void { // Handle field } setNull(name: string): void { // Handle field } setInteger(name: string, value: i64): void { // Handle field } setFloat(name: string, value: f64): void { // Handle field } pushArray(name: string): bool { // Handle array start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popArray(): void { // Handle array end } pushObject(name: string): bool { // Handle object start // true means that nested object needs to be traversed, false otherwise // Note that returning false means JSONDecoder.startIndex need to be updated by handler return true; } popObject(): void { // Handle object end } } // Create decoder let decoder = new JSONDecoder<MyJSONEventsHandler>(new MyJSONEventsHandler()); // Create a byte buffer of our JSON. NOTE: Deserializers work on UTF8 string buffers. let jsonString = '{"hello": "world"}'; let jsonBuffer = Uint8Array.wrap(String.UTF8.encode(jsonString)); // Parse JSON decoder.deserialize(jsonBuffer); // This will send events to MyJSONEventsHandler ``` Feel free to look through the [tests](https://github.com/nearprotocol/assemblyscript-json/tree/master/assembly/__tests__) for more usage examples. 
## Reference Documentation Reference API Documentation can be found in the [docs directory](./docs). ## License [MIT](./LICENSE) # color-convert [![Build Status](https://travis-ci.org/Qix-/color-convert.svg?branch=master)](https://travis-ci.org/Qix-/color-convert) Color-convert is a color conversion library for JavaScript and node. It converts all ways between `rgb`, `hsl`, `hsv`, `hwb`, `cmyk`, `ansi`, `ansi16`, `hex` strings, and CSS `keyword`s (will round to closest): ```js var convert = require('color-convert'); convert.rgb.hsl(140, 200, 100); // [96, 48, 59] convert.keyword.rgb('blue'); // [0, 0, 255] var rgbChannels = convert.rgb.channels; // 3 var cmykChannels = convert.cmyk.channels; // 4 var ansiChannels = convert.ansi16.channels; // 1 ``` # Install ```console $ npm install color-convert ``` # API Simply get the property of the _from_ and _to_ conversion that you're looking for. All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on `.raw` to the function. All 'from' functions have a hidden property called `.channels` that indicates the number of channels the function expects (not including alpha). ```js var convert = require('color-convert'); // Hex to LAB convert.hex.lab('DEADBF'); // [ 76, 21, -2 ] convert.hex.lab.raw('DEADBF'); // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ] // RGB to CMYK convert.rgb.cmyk(167, 255, 4); // [ 35, 0, 98, 0 ] convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ] ``` ### Arrays All functions that accept multiple arguments also support passing an array. Note that this does **not** apply to functions that convert from a color that only requires one value (e.g. `keyword`, `ansi256`, `hex`, etc.) ```js var convert = require('color-convert'); convert.rgb.hex(123, 45, 67); // '7B2D43' convert.rgb.hex([123, 45, 67]); // '7B2D43' ``` ## Routing Conversions that don't have an _explicitly_ defined conversion (in [conversions.js](conversions.js)), but can be converted by means of sub-conversions (e.g. XYZ -> **RGB** -> CMYK), are automatically routed together. This allows just about any color model supported by `color-convert` to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> **LAB** -> **XYZ** -> **RGB** -> Hex). Keep in mind that extensive conversions _may_ result in a loss of precision, and exist only to be complete. For a list of "direct" (single-step) conversions, see [conversions.js](conversions.js). # Contribute If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request. # License Copyright &copy; 2011-2016, Heather Arthur and Josh Junon. Licensed under the [MIT License](LICENSE). 
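To make the routing behaviour described in the color-convert section above concrete, here is a small sketch of a conversion that has no direct entry in `conversions.js` and is composed from sub-conversions (the output shown is what the rounded variant should return for pure blue):

```js
var convert = require('color-convert');

// keyword -> rgb and rgb -> hsl are defined directly; keyword -> hsl is not,
// so color-convert routes keyword -> rgb -> hsl automatically.
convert.keyword.hsl('blue');     // [240, 100, 50]

// routed functions still expose a .raw variant that skips the final rounding
convert.keyword.hsl.raw('blue'); // unrounded [h, s, l]
```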
## Unit tests Unit tests can be run from the top level folder using the following command: ``` yarn test:unit ``` ### Tests for Contract in `index.unit.spec.ts` ``` [Describe]: Greeting [Success]: ✔ should respond to showYouKnow() [Success]: ✔ should respond to showYouKnow2() [Success]: ✔ should respond to sayHello() [Success]: ✔ should respond to sayMyName() [Success]: ✔ should respond to saveMyName() [Success]: ✔ should respond to saveMyMessage() [Success]: ✔ should respond to getAllMessages() [File]: src/sample/__tests__/index.unit.spec.ts [Groups]: 2 pass, 2 total [Result]: ✔ PASS [Snapshot]: 0 total, 0 added, 0 removed, 0 different [Summary]: 7 pass, 0 fail, 7 total [Time]: 19.164ms ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [Result]: ✔ PASS [Files]: 1 total [Groups]: 2 count, 2 pass [Tests]: 7 pass, 0 fail, 7 total [Time]: 8217.768ms ✨ Done in 8.86s. ``` # fs.realpath A backwards-compatible fs.realpath for Node v6 and above In Node v6, the JavaScript implementation of fs.realpath was replaced with a faster (but less resilient) native implementation. That raises new and platform-specific errors and cannot handle long or excessively symlink-looping paths. This module handles those cases by detecting the new errors and falling back to the JavaScript implementation. On versions of Node prior to v6, it has no effect. ## USAGE ```js var rp = require('fs.realpath') // async version rp.realpath(someLongAndLoopingPath, function (er, real) { // the ELOOP was handled, but it was a bit slower }) // sync version var real = rp.realpathSync(someLongAndLoopingPath) // monkeypatch at your own risk! // This replaces the fs.realpath/fs.realpathSync builtins rp.monkeypatch() // un-do the monkeypatching rp.unmonkeypatch() ``` [![NPM registry](https://img.shields.io/npm/v/as-bignum.svg?style=for-the-badge)](https://www.npmjs.com/package/as-bignum)[![Build Status](https://img.shields.io/travis/com/MaxGraey/as-bignum/master?style=for-the-badge)](https://travis-ci.com/MaxGraey/as-bignum)[![NPM license](https://img.shields.io/badge/license-Apache%202.0-ba68c8.svg?style=for-the-badge)](LICENSE.md) ## Work in progress --- ### WebAssembly fixed length big numbers written on [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) Provide wide numeric types such as `u128`, `u256`, `i128`, `i256` and fixed points and also its arithmetic operations. Namespace `safe` contain equivalents with overflow/underflow traps. All kind of types pretty useful for economical and cryptographic usages and provide deterministic behavior. 
### Install > yarn add as-bignum or > npm i as-bignum ### Usage via AssemblyScript ```ts import { u128 } from "as-bignum"; declare function logF64(value: f64): void; declare function logU128(hi: u64, lo: u64): void; var a = u128.One; var b = u128.from(-32); // same as u128.from<i32>(-32) var c = new u128(0x1, -0xF); var d = u128.from(0x0123456789ABCDEF); // same as u128.from<i64>(0x0123456789ABCDEF) var e = u128.from('0x0123456789ABCDEF01234567'); var f = u128.fromString('11100010101100101', 2); // same as u128.from('0b11100010101100101') var r = d / c + (b << 5) + e; logF64(r.as<f64>()); logU128(r.hi, r.lo); ``` ### Usage via JavaScript/Typescript ```ts TODO ``` ### List of types - [x] [`u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u128.ts) unsigned type (tested) - [ ] [`u256`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/u256.ts) unsigned type (very basic) - [ ] `i128` signed type - [ ] `i256` signed type --- - [x] [`safe.u128`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/integer/safe/u128.ts) unsigned type (tested) - [ ] `safe.u256` unsigned type - [ ] `safe.i128` signed type - [ ] `safe.i256` signed type --- - [ ] [`fp128<Q>`](https://github.com/MaxGraey/as-bignum/blob/master/assembly/fixed/fp128.ts) generic fixed point signed type٭ (very basic for now) - [ ] `fp256<Q>` generic fixed point signed type٭ --- - [ ] `safe.fp128<Q>` generic fixed point signed type٭ - [ ] `safe.fp256<Q>` generic fixed point signed type٭ ٭ _typename_ `Q` _is a type representing count of fractional bits_ # base-x [![NPM Package](https://img.shields.io/npm/v/base-x.svg?style=flat-square)](https://www.npmjs.org/package/base-x) [![Build Status](https://img.shields.io/travis/cryptocoinjs/base-x.svg?branch=master&style=flat-square)](https://travis-ci.org/cryptocoinjs/base-x) [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. **WARNING:** This module is **NOT RFC3548** compliant, it cannot be used for base16 (hex), base32, or base64 encoding in a standards compliant manner. ## Example Base58 ``` javascript var BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' var bs58 = require('base-x')(BASE58) var decoded = bs58.decode('5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr') console.log(decoded) // => <Buffer 80 ed db dc 11 68 f1 da ea db d3 e4 4c 1e 3f 8f 5a 28 4c 20 29 f7 8a d2 6a f9 85 83 a4 99 de 5b 19> console.log(bs58.encode(decoded)) // => 5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr ``` ### Alphabets See below for a list of commonly recognized alphabets, and their respective base. Base | Alphabet ------------- | ------------- 2 | `01` 8 | `01234567` 11 | `0123456789a` 16 | `0123456789abcdef` 32 | `0123456789ABCDEFGHJKMNPQRSTVWXYZ` 32 | `ybndrfg8ejkmcpqxot1uwisza345h769` (z-base-32) 36 | `0123456789abcdefghijklmnopqrstuvwxyz` 58 | `123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz` 62 | `0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ` 64 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/` 66 | `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.!~` ## How it works It encodes octet arrays by doing long divisions on all significant digits in the array, creating a representation of that number in the new base. 
Then for every leading zero in the input (not significant as a number) it will encode as a single leader character. This is the first in the alphabet and will decode as 8 bits. The other characters depend upon the base. For example, a base58 alphabet packs roughly 5.858 bits per character. This means the encoded string 000f (using a base16, 0-f alphabet) will actually decode to 4 bytes unlike a canonical hex encoding which uniformly packs 4 bits into each character. While unusual, this does mean that no padding is required and it works for bases like 43. ## LICENSE [MIT](LICENSE) A direct derivation of the base58 implementation from [`bitcoin/bitcoin`](https://github.com/bitcoin/bitcoin/blob/f1e2f2a85962c1664e4e55471061af0eaa798d40/src/base58.cpp), generalized for variable length alphabets. # require-main-filename [![Build Status](https://travis-ci.org/yargs/require-main-filename.png)](https://travis-ci.org/yargs/require-main-filename) [![Coverage Status](https://coveralls.io/repos/yargs/require-main-filename/badge.svg?branch=master)](https://coveralls.io/r/yargs/require-main-filename?branch=master) [![NPM version](https://img.shields.io/npm/v/require-main-filename.svg)](https://www.npmjs.com/package/require-main-filename) `require.main.filename` is great for figuring out the entry point for the current application. This can be combined with a module like [pkg-conf](https://www.npmjs.com/package/pkg-conf) to, _as if by magic_, load top-level configuration. Unfortunately, `require.main.filename` sometimes fails when an application is executed with an alternative process manager, e.g., [iisnode](https://github.com/tjanczuk/iisnode). `require-main-filename` is a shim that addresses this problem. ## Usage ```js var main = require('require-main-filename')() // use main as an alternative to require.main.filename. ``` ## License ISC Railroad-diagram Generator ========================== This is a small js library for generating railroad diagrams (like what [JSON.org](http://json.org) uses) using SVG. Railroad diagrams are a way of visually representing a grammar in a form that is more readable than using regular expressions or BNF. I think (though I haven't given it a lot of thought yet) that if it's easy to write a context-free grammar for the language, the corresponding railroad diagram will be easy as well. There are several railroad-diagram generators out there, but none of them had the visual appeal I wanted. [Here's an example of how they look!](http://www.xanthir.com/etc/railroad-diagrams/example.html) And [here's an online generator for you to play with and get SVG code from!](http://www.xanthir.com/etc/railroad-diagrams/generator.html) The library now exists in a Python port as well! See the information further down. Details ------- To use the library, just include the js and css files, and then call the Diagram() function. Its arguments are the components of the diagram (Diagram is a special form of Sequence). An alternative to Diagram() is ComplexDiagram() which is used to describe a complex type diagram. Components are either leaves or containers. The leaves: * Terminal(text) or a bare string - represents literal text * NonTerminal(text) - represents an instruction or another production * Comment(text) - a comment * Skip() - an empty line The containers: * Sequence(children) - like simple concatenation in a regex * Choice(index, children) - like | in a regex. The index argument specifies which child is the "normal" choice and should go in the middle * Optional(child, skip) - like ? 
in a regex. A shorthand for `Choice(1, [Skip(), child])`. If the optional `skip` parameter has the value `"skip"`, it instead puts the Skip() in the straight-line path, for when the "normal" behavior is to omit the item. * OneOrMore(child, repeat) - like + in a regex. The 'repeat' argument is optional, and specifies something that must go between the repetitions. * ZeroOrMore(child, repeat, skip) - like * in a regex. A shorthand for `Optional(OneOrMore(child, repeat))`. The optional `skip` parameter is identical to Optional(). For convenience, each component can be called with or without `new`. If called without `new`, the container components become n-ary; that is, you can say either `new Sequence([A, B])` or just `Sequence(A,B)`. After constructing a Diagram, call `.format(...padding)` on it, specifying 0-4 padding values (just like CSS) for some additional "breathing space" around the diagram (the paddings default to 20px). The result can either be `.toString()`'d for the markup, or `.toSVG()`'d for an `<svg>` element, which can then be immediately inserted to the document. As a convenience, Diagram also has an `.addTo(element)` method, which immediately converts it to SVG and appends it to the referenced element with default paddings. `element` defaults to `document.body`. Options ------- There are a few options you can tweak, at the bottom of the file. Just tweak either until the diagram looks like what you want. You can also change the CSS file - feel free to tweak to your heart's content. Note, though, that if you change the text sizes in the CSS, you'll have to go adjust the metrics for the leaf nodes as well. * VERTICAL_SEPARATION - sets the minimum amount of vertical separation between two items. Note that the stroke width isn't counted when computing the separation; this shouldn't be relevant unless you have a very small separation or very large stroke width. * ARC_RADIUS - the radius of the arcs used in the branching containers like Choice. This has a relatively large effect on the size of non-trivial diagrams. Both tight and loose values look good, depending on what you're going for. * DIAGRAM_CLASS - the class set on the root `<svg>` element of each diagram, for use in the CSS stylesheet. * STROKE_ODD_PIXEL_LENGTH - the default stylesheet uses odd pixel lengths for 'stroke'. Due to rasterization artifacts, they look best when the item has been translated half a pixel in both directions. If you change the styling to use a stroke with even pixel lengths, you'll want to set this variable to `false`. * INTERNAL_ALIGNMENT - when some branches of a container are narrower than others, this determines how they're aligned in the extra space. Defaults to "center", but can be set to "left" or "right". Caveats ------- At this early stage, the generator is feature-complete and works as intended, but still has several TODOs: * The font-sizes are hard-coded right now, and the font handling in general is very dumb - I'm just guessing at some metrics that are probably "good enough" rather than measuring things properly. Python Port ----------- In addition to the canonical JS version, the library now exists as a Python library as well. Using it is basically identical. The config variables are globals in the file, and so may be adjusted either manually or via tweaking from inside your program. The main difference from the JS port is how you extract the string from the Diagram. 
You'll find a `writeSvg(writerFunc)` method on `Diagram`, which takes a callback of one argument and passes it the string form of the diagram. For example, it can be used like `Diagram(...).writeSvg(sys.stdout.write)` to write to stdout. **Note**: the callback will be called multiple times as it builds up the string, not just once with the whole thing. If you need it all at once, consider something like a `StringIO` as an easy way to collect it into a single string. License ------- This document and all associated files in the github project are licensed under [CC0](http://creativecommons.org/publicdomain/zero/1.0/) ![](http://i.creativecommons.org/p/zero/1.0/80x15.png). This means you can reuse, remix, or otherwise appropriate this project for your own use **without restriction**. (The actual legal meaning can be found at the above link.) Don't ask me for permission to use any part of this project, **just use it**. I would appreciate attribution, but that is not required by the license. # cliui [![Build Status](https://travis-ci.org/yargs/cliui.svg)](https://travis-ci.org/yargs/cliui) [![Coverage Status](https://coveralls.io/repos/yargs/cliui/badge.svg?branch=)](https://coveralls.io/r/yargs/cliui?branch=) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) easily create complex multi-column command-line-interfaces. ## Example ```js var ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 2, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. **as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. 
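As a quick sketch of how `span` and `resetOutput` behave alongside `div` (the column text here is invented for illustration):

```js
var ui = require('cliui')({ width: 60, wrap: true })

ui.div({ text: 'Commands:', padding: [1, 0, 1, 0] })

// span: the row created by the next call is appended without a new line
ui.span({ text: '  serve', width: 12 })
ui.div('start a local development server')

console.log(ui.toString())

// start a fresh layout while keeping the width/wrap settings
ui.resetOutput()
```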
# AssemblyScript Rtrace

A tiny utility to sanitize the AssemblyScript runtime. Records allocations and frees performed by the runtime and emits an error if something is off. Also checks for leaks.

Instructions
------------

Compile your module that uses the full or half runtime with `-use ASC_RTRACE=1 --explicitStart` and include an instance of this module as the import named `rtrace`.

```js
const rtrace = new Rtrace({
  onerror(err, info) {
    // handle error
  },
  oninfo(msg) {
    // print message, optional
  },
  getMemory() {
    // obtain the module's memory,
    // e.g. with --explicitStart:
    return instance.exports.memory;
  }
});

const { module, instance } = await WebAssembly.instantiate(...,
  rtrace.install({ ...imports... })
);

instance.exports._start();
...
if (rtrace.active) {
  let leakCount = rtrace.check();
  if (leakCount) {
    // handle error
  }
}
```

Note that references in globals which are not cleared before collection is performed appear as leaks, including their inner members. A TypedArray would leak itself and its backing ArrayBuffer in this case for example. This is perfectly normal and clearing all globals avoids this.

# universal-url

[![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Dependency Monitor][greenkeeper-image]][greenkeeper-url]

> WHATWG [`URL`](https://developer.mozilla.org/en/docs/Web/API/URL) for Node & Browser.

* For Node.js versions `>= 8`, the native implementation will be used.
* For Node.js versions `< 8`, a [shim](https://npmjs.com/whatwg-url) will be used.
* For web browsers without a native implementation, the same shim will be used.

## Installation

[Node.js](http://nodejs.org/) `>= 6` is required. To install, type this at the command line:

```shell
npm install universal-url
```

## Usage

```js
const {URL, URLSearchParams} = require('universal-url');

const url = new URL('http://domain/');
const params = new URLSearchParams('?param=value');
```

Global shim:

```js
require('universal-url').shim();

const url = new URL('http://domain/');
const params = new URLSearchParams('?param=value');
```

## Browserify/etc

The bundled file size of this library can be large for a web browser. If this is a problem, try using [universal-url-lite](https://npmjs.com/universal-url-lite) in your build as an alias for this module.

[npm-image]: https://img.shields.io/npm/v/universal-url.svg
[npm-url]: https://npmjs.org/package/universal-url
[travis-image]: https://img.shields.io/travis/stevenvachon/universal-url.svg
[travis-url]: https://travis-ci.org/stevenvachon/universal-url
[greenkeeper-image]: https://badges.greenkeeper.io/stevenvachon/universal-url.svg
[greenkeeper-url]: https://greenkeeper.io/

# wrappy

Callback wrapping utility

## USAGE

```javascript
var wrappy = require("wrappy")

// var wrapper = wrappy(wrapperFunction)

// make sure a cb is called only once
// See also: http://npm.im/once for this specific use case
var once = wrappy(function (cb) {
  var called = false
  return function () {
    if (called) return
    called = true
    return cb.apply(this, arguments)
  }
})

function printBoo () {
  console.log('boo')
}
// has some rando property
printBoo.iAmBooPrinter = true

var onlyPrintOnce = once(printBoo)

onlyPrintOnce() // prints 'boo'
onlyPrintOnce() // does nothing

// random property is retained!
assert.equal(onlyPrintOnce.iAmBooPrinter, true) ``` <p align="center"> <a href="https://assemblyscript.org" target="_blank" rel="noopener"><img width="100" src="https://avatars1.githubusercontent.com/u/28916798?s=200&v=4" alt="AssemblyScript logo"></a> </p> <p align="center"> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3ATest"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Test/master?label=test&logo=github" alt="Test status" /></a> <a href="https://github.com/AssemblyScript/assemblyscript/actions?query=workflow%3APublish"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/assemblyscript/Publish/master?label=publish&logo=github" alt="Publish status" /></a> <a href="https://www.npmjs.com/package/assemblyscript"><img src="https://img.shields.io/npm/v/assemblyscript.svg?label=compiler&color=007acc&logo=npm" alt="npm compiler version" /></a> <a href="https://www.npmjs.com/package/@assemblyscript/loader"><img src="https://img.shields.io/npm/v/@assemblyscript/loader.svg?label=loader&color=007acc&logo=npm" alt="npm loader version" /></a> <a href="https://discord.gg/assemblyscript"><img src="https://img.shields.io/discord/721472913886281818.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2" alt="Discord online" /></a> </p> <p align="justify"><strong>AssemblyScript</strong> compiles a strict variant of <a href="http://www.typescriptlang.org">TypeScript</a> (basically JavaScript with types) to <a href="http://webassembly.org">WebAssembly</a> using <a href="https://github.com/WebAssembly/binaryen">Binaryen</a>. It generates lean and mean WebAssembly modules while being just an <code>npm install</code> away.</p> <h3 align="center"> <a href="https://assemblyscript.org">About</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/introduction.html">Introduction</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/quick-start.html">Quick&nbsp;start</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/examples.html">Examples</a> &nbsp;·&nbsp; <a href="https://assemblyscript.org/development.html">Development&nbsp;instructions</a> </h3> <br> <h2 align="center">Contributors</h2> <p align="center"> <a href="https://assemblyscript.org/#contributors"><img src="https://assemblyscript.org/contributors.svg" alt="Contributor logos" width="720" /></a> </p> <h2 align="center">Thanks to our sponsors!</h2> <p align="justify">Most of the core team members and most contributors do this open source work in their free time. If you use AssemblyScript for a serious task or plan to do so, and you'd like us to invest more time on it, <a href="https://opencollective.com/assemblyscript/donate" target="_blank" rel="noopener">please donate</a> to our <a href="https://opencollective.com/assemblyscript" target="_blank" rel="noopener">OpenCollective</a>. By sponsoring this project, your logo will show up below. Thank you so much for your support!</p> <p align="center"> <a href="https://assemblyscript.org/#sponsors"><img src="https://assemblyscript.org/sponsors.svg" alt="Sponsor logos" width="720" /></a> </p> # [nearley](http://nearley.js.org) ↗️ [![JS.ORG](https://img.shields.io/badge/js.org-nearley-ffb400.svg?style=flat-square)](http://js.org) [![npm version](https://badge.fury.io/js/nearley.svg)](https://badge.fury.io/js/nearley) nearley is a simple, fast and powerful parsing toolkit. It consists of: 1. [A powerful, modular DSL for describing languages](https://nearley.js.org/docs/grammar) 2. 
[An efficient, lightweight Earley parser](https://nearley.js.org/docs/parser) 3. [Loads of tools, editor plug-ins, and other goodies!](https://nearley.js.org/docs/tooling) nearley is a **streaming** parser with support for catching **errors** gracefully and providing _all_ parsings for **ambiguous** grammars. It is compatible with a variety of **lexers** (we recommend [moo](http://github.com/tjvr/moo)). It comes with tools for creating **tests**, **railroad diagrams** and **fuzzers** from your grammars, and has support for a variety of editors and platforms. It works in both node and the browser. Unlike most other parser generators, nearley can handle *any* grammar you can define in BNF (and more!). In particular, while most existing JS parsers such as PEGjs and Jison choke on certain grammars (e.g. [left recursive ones](http://en.wikipedia.org/wiki/Left_recursion)), nearley handles them easily and efficiently by using the [Earley parsing algorithm](https://en.wikipedia.org/wiki/Earley_parser). nearley is used by a wide variety of projects: - [artificial intelligence](https://github.com/ChalmersGU-AI-course/shrdlite-course-project) and - [computational linguistics](https://wiki.eecs.yorku.ca/course_archive/2014-15/W/6339/useful_handouts) classes at universities; - [file format parsers](https://github.com/raymond-h/node-dmi); - [data-driven markup languages](https://github.com/idyll-lang/idyll-compiler); - [compilers for real-world programming languages](https://github.com/sizigi/lp5562); - and nearley itself! The nearley compiler is bootstrapped. nearley is an npm [staff pick](https://www.npmjs.com/package/npm-collection-staff-picks). ## Documentation Please visit our website https://nearley.js.org to get started! You will find a tutorial, detailed reference documents, and links to several real-world examples to get inspired. ## Contributing Please read [this document](.github/CONTRIBUTING.md) *before* working on nearley. If you are interested in contributing but unsure where to start, take a look at the issues labeled "up for grabs" on the issue tracker, or message a maintainer (@kach or @tjvr on Github). nearley is MIT licensed. A big thanks to Nathan Dinsmore for teaching me how to Earley, Aria Stewart for helping structure nearley into a mature module, and Robin Windels for bootstrapping the grammar. Additionally, Jacob Edelman wrote an experimental JavaScript parser with nearley and contributed ideas for EBNF support. Joshua T. Corbin refactored the compiler to be much, much prettier. Bojidar Marinov implemented postprocessors-in-other-languages. Shachar Itzhaky fixed a subtle bug with nullables. ## Citing nearley If you are citing nearley in academic work, please use the following BibTeX entry. ```bibtex @misc{nearley, author = "Kartik Chandra and Tim Radvan", title = "{nearley}: a parsing toolkit for {JavaScript}", year = {2014}, doi = {10.5281/zenodo.3897993}, url = {https://github.com/kach/nearley} } ``` Shims used when bundling asc for browser usage. [![build status](https://secure.travis-ci.org/dankogai/js-base64.png)](http://travis-ci.org/dankogai/js-base64) # base64.js Yet another [Base64] transcoder. [Base64]: http://en.wikipedia.org/wiki/Base64 ## HEADS UP In version 3.0 `js-base64` switch to ES2015 module so it is no longer compatible with legacy browsers like IE (see below). And since version 3.3 it is written in TypeScript. Now `base64.mjs` is compiled from `base64.ts` then `base64.js` is generated from `base64.mjs`. 
## Install ```shell $ npm install --save js-base64 ``` ## Usage ### In Browser Locally… ```html <script src="base64.js"></script> ``` … or Directly from CDN. In which case you don't even need to install. ```html <script src="https://cdn.jsdelivr.net/npm/[email protected]/base64.min.js"></script> ``` This good old way loads `Base64` in the global context (`window`). Though `Base64.noConflict()` is made available, you should consider using ES6 Module to avoid tainting `window`. ### As an ES6 Module locally… ```javascript import { Base64 } from 'js-base64'; ``` ```javascript // or if you prefer no Base64 namespace import { encode, decode } from 'js-base64'; ``` or even remotely. ```html <script type="module"> // note jsdelivr.net does not automatically minify .mjs import { Base64 } from 'https://cdn.jsdelivr.net/npm/[email protected]/base64.mjs'; </script> ``` ```html <script type="module"> // or if you prefer no Base64 namespace import { encode, decode } from 'https://cdn.jsdelivr.net/npm/[email protected]/base64.mjs'; </script> ``` ### node.js (commonjs) ```javascript const {Base64} = require('js-base64'); ``` Unlike the case above, the global context is no longer modified. You can also use [esm] to `import` instead of `require`. [esm]: https://github.com/standard-things/esm ```javascript require=require('esm')(module); import {Base64} from 'js-base64'; ``` ## SYNOPSIS ```javascript let latin = 'dankogai'; let utf8 = '小飼弾' let u8s = new Uint8Array([100,97,110,107,111,103,97,105]); Base64.encode(latin); // ZGFua29nYWk= Base64.btoa(latin); // ZGFua29nYWk= Base64.btoa(utf8); // raises exception Base64.fromUint8Array(u8s); // ZGFua29nYWk= Base64.fromUint8Array(u8s, true); // ZGFua29nYW which is URI safe Base64.encode(utf8); // 5bCP6aO85by+ Base64.encode(utf8, true) // 5bCP6aO85by- Base64.encodeURI(utf8); // 5bCP6aO85by- ``` ```javascript Base64.decode( 'ZGFua29nYWk=');// dankogai Base64.atob( 'ZGFua29nYWk=');// dankogai Base64.atob( '5bCP6aO85by+');// '小飼弾' which is nonsense Base64.toUint8Array('ZGFua29nYWk=');// u8s above Base64.decode( '5bCP6aO85by+');// 小飼弾 // note .decodeURI() is unnecessary since it accepts both flavors Base64.decode( '5bCP6aO85by-');// 小飼弾 ``` ```javascript Base64.isValid(0); // false: 0 is not string Base64.isValid(''); // true: a valid Base64-encoded empty byte Base64.isValid('ZA=='); // true: a valid Base64-encoded 'd' Base64.isValid('Z A='); // true: whitespaces are okay Base64.isValid('ZA'); // true: padding ='s can be omitted Base64.isValid('++'); // true: can be non URL-safe Base64.isValid('--'); // true: or URL-safe Base64.isValid('+-'); // false: can't mix both ``` ### Built-in Extensions By default `Base64` leaves built-in prototypes untouched. But you can extend them as below. 
```javascript // you have to explicitly extend String.prototype Base64.extendString(); // once extended, you can do the following 'dankogai'.toBase64(); // ZGFua29nYWk= '小飼弾'.toBase64(); // 5bCP6aO85by+ '小飼弾'.toBase64(true); // 5bCP6aO85by- '小飼弾'.toBase64URI(); // 5bCP6aO85by- ab alias of .toBase64(true) '小飼弾'.toBase64URL(); // 5bCP6aO85by- an alias of .toBase64URI() 'ZGFua29nYWk='.fromBase64(); // dankogai '5bCP6aO85by+'.fromBase64(); // 小飼弾 '5bCP6aO85by-'.fromBase64(); // 小飼弾 '5bCP6aO85by-'.toUint8Array();// u8s above ``` ```javascript // you have to explicitly extend String.prototype Base64.extendString(); // once extended, you can do the following u8s.toBase64(); // 'ZGFua29nYWk=' u8s.toBase64URI(); // 'ZGFua29nYWk' u8s.toBase64URL(); // 'ZGFua29nYWk' an alias of .toBase64URI() ``` ```javascript // extend all at once Base64.extendBuiltins() ``` ## `.decode()` vs `.atob` (and `.encode()` vs `btoa()`) Suppose you have: ``` var pngBase64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII="; ``` Which is a Base64-encoded 1x1 transparent PNG, **DO NOT USE** `Base64.decode(pngBase64)`.  Use `Base64.atob(pngBase64)` instead.  `Base64.decode()` decodes to UTF-8 string while `Base64.atob()` decodes to bytes, which is compatible to browser built-in `atob()` (Which is absent in node.js).  The same rule applies to the opposite direction. Or even better, `Base64.toUint8Array(pngBase64)`. ### If you really, really need an ES5 version You can transpiles to an ES5 that runs on IE11. Do the following in your shell. ```shell $ make base64.es5.js ``` # axios // core The modules found in `core/` should be modules that are specific to the domain logic of axios. These modules would most likely not make sense to be consumed outside of the axios module, as their logic is too specific. Some examples of core modules are: - Dispatching requests - Managing interceptors - Handling config [![Build Status](https://travis-ci.org/isaacs/rimraf.svg?branch=master)](https://travis-ci.org/isaacs/rimraf) [![Dependency Status](https://david-dm.org/isaacs/rimraf.svg)](https://david-dm.org/isaacs/rimraf) [![devDependency Status](https://david-dm.org/isaacs/rimraf/dev-status.svg)](https://david-dm.org/isaacs/rimraf#info=devDependencies) The [UNIX command](http://en.wikipedia.org/wiki/Rm_(Unix)) `rm -rf` for node. Install with `npm install rimraf`, or just drop rimraf.js somewhere. ## API `rimraf(f, [opts], callback)` The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with `opts.disableGlob` (defaults to `false`). This might be handy, for instance, if you have filenames that contain globbing wildcard characters. The callback will be called with an error if there is one. Certain errors are handled for you: * Windows: `EBUSY` and `ENOTEMPTY` - rimraf will back off a maximum of `opts.maxBusyTries` times before giving up, adding 100ms of wait between each attempt. The default `maxBusyTries` is 3. * `ENOENT` - If the file doesn't exist, rimraf will return successfully, since your desired outcome is already the case. * `EMFILE` - Since `readdir` requires opening a file descriptor, it's possible to hit `EMFILE` if too many file descriptors are in use. In the sync case, there's nothing to be done for this. But in the async case, rimraf will gradually back off with timeouts up to `opts.emfileWait` ms, which defaults to 1000. 
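For reference, a minimal usage sketch of the rimraf API described above (the paths are placeholders):

```javascript
var rimraf = require('rimraf')

// async: the first argument is interpreted as a glob pattern by default
rimraf('build/**/*.tmp', function (er) {
  if (er) throw er
  // matching files are gone; a missing path is not treated as an error
})

// pass opts.disableGlob when the path may contain glob wildcard characters
rimraf('weird[name].txt', { disableGlob: true }, function (er) {
  if (er) throw er
})

// synchronous variant (the async API is preferred)
rimraf.sync('build')
```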
## options * unlink, chmod, stat, lstat, rmdir, readdir, unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync In order to use a custom file system library, you can override specific fs functions on the options object. If any of these functions are present on the options object, then the supplied function will be used instead of the default fs method. Sync methods are only relevant for `rimraf.sync()`, of course. For example: ```javascript var myCustomFS = require('some-custom-fs') rimraf('some-thing', myCustomFS, callback) ``` * maxBusyTries If an `EBUSY`, `ENOTEMPTY`, or `EPERM` error code is encountered on Windows systems, then rimraf will retry with a linear backoff wait of 100ms longer on each try. The default maxBusyTries is 3. Only relevant for async usage. * emfileWait If an `EMFILE` error is encountered, then rimraf will retry repeatedly with a linear backoff of 1ms longer on each try, until the timeout counter hits this max. The default limit is 1000. If you repeatedly encounter `EMFILE` errors, then consider using [graceful-fs](http://npm.im/graceful-fs) in your program. Only relevant for async usage. * glob Set to `false` to disable [glob](http://npm.im/glob) pattern matching. Set to an object to pass options to the glob module. The default glob options are `{ nosort: true, silent: true }`. Glob version 6 is used in this module. Relevant for both sync and async usage. * disableGlob Set to any non-falsey value to disable globbing entirely. (Equivalent to setting `glob: false`.) ## rimraf.sync It can remove stuff synchronously, too. But that's not so good. Use the async API. It's better. ## CLI If installed with `npm install rimraf -g` it can be used as a global command `rimraf <path> [<path> ...]` which is useful for cross platform support. ## mkdirp If you need to create a directory recursively, check out [mkdirp](https://github.com/substack/node-mkdirp). ## Follow Redirects Drop-in replacement for Nodes `http` and `https` that automatically follows redirects. [![npm version](https://img.shields.io/npm/v/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) [![Build Status](https://travis-ci.org/follow-redirects/follow-redirects.svg?branch=master)](https://travis-ci.org/follow-redirects/follow-redirects) [![Coverage Status](https://coveralls.io/repos/follow-redirects/follow-redirects/badge.svg?branch=master)](https://coveralls.io/r/follow-redirects/follow-redirects?branch=master) [![Dependency Status](https://david-dm.org/follow-redirects/follow-redirects.svg)](https://david-dm.org/follow-redirects/follow-redirects) [![npm downloads](https://img.shields.io/npm/dm/follow-redirects.svg)](https://www.npmjs.com/package/follow-redirects) `follow-redirects` provides [request](https://nodejs.org/api/http.html#http_http_request_options_callback) and [get](https://nodejs.org/api/http.html#http_http_get_options_callback) methods that behave identically to those found on the native [http](https://nodejs.org/api/http.html#http_http_request_options_callback) and [https](https://nodejs.org/api/https.html#https_https_request_options_callback) modules, with the exception that they will seamlessly follow redirects. 
```javascript var http = require('follow-redirects').http; var https = require('follow-redirects').https; http.get('http://bit.ly/900913', function (response) { response.on('data', function (chunk) { console.log(chunk); }); }).on('error', function (err) { console.error(err); }); ``` You can inspect the final redirected URL through the `responseUrl` property on the `response`. If no redirection happened, `responseUrl` is the original request URL. ```javascript https.request({ host: 'bitly.com', path: '/UHfDGO', }, function (response) { console.log(response.responseUrl); // 'http://duckduckgo.com/robots.txt' }); ``` ## Options ### Global options Global options are set directly on the `follow-redirects` module: ```javascript var followRedirects = require('follow-redirects'); followRedirects.maxRedirects = 10; followRedirects.maxBodyLength = 20 * 1024 * 1024; // 20 MB ``` The following global options are supported: - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. ### Per-request options Per-request options are set by passing an `options` object: ```javascript var url = require('url'); var followRedirects = require('follow-redirects'); var options = url.parse('http://bit.ly/900913'); options.maxRedirects = 10; http.request(options); ``` In addition to the [standard HTTP](https://nodejs.org/api/http.html#http_http_request_options_callback) and [HTTPS options](https://nodejs.org/api/https.html#https_https_request_options_callback), the following per-request options are supported: - `followRedirects` (default: `true`) – whether redirects should be followed. - `maxRedirects` (default: `21`) – sets the maximum number of allowed redirects; if exceeded, an error will be emitted. - `maxBodyLength` (default: 10MB) – sets the maximum size of the request body; if exceeded, an error will be emitted. - `agents` (default: `undefined`) – sets the `agent` option per protocol, since HTTP and HTTPS use different agents. Example value: `{ http: new http.Agent(), https: new https.Agent() }` - `trackRedirects` (default: `false`) – whether to store the redirected response details into the `redirects` array on the response object. ### Advanced usage By default, `follow-redirects` will use the Node.js default implementations of [`http`](https://nodejs.org/api/http.html) and [`https`](https://nodejs.org/api/https.html). To enable features such as caching and/or intermediate request tracking, you might instead want to wrap `follow-redirects` around custom protocol implementations: ```javascript var followRedirects = require('follow-redirects').wrap({ http: require('your-custom-http'), https: require('your-custom-https'), }); ``` Such custom protocols only need an implementation of the `request` method. ## Browserify Usage Due to the way `XMLHttpRequest` works, the `browserify` versions of `http` and `https` already follow redirects. If you are *only* targeting the browser, then this library has little value for you. If you want to write cross platform code for node and the browser, `follow-redirects` provides a great solution for making the native node modules behave the same as they do in browserified builds in the browser. To avoid bundling unnecessary code you should tell browserify to swap out `follow-redirects` with the standard modules when bundling. 
To make this easier, you need to change how you require the modules:

```javascript
var http = require('follow-redirects/http');
var https = require('follow-redirects/https');
```

You can then replace `follow-redirects` in your browserify configuration like so:

```javascript
"browser": {
  "follow-redirects/http"  : "http",
  "follow-redirects/https" : "https"
}
```

The `browserify-http` module has not kept pace with node development, and no longer behaves identically to the native module when running in the browser. If you are experiencing problems, you may want to check out [browserify-http-2](https://www.npmjs.com/package/http-browserify-2). It is more actively maintained and attempts to address a few of the shortcomings of `browserify-http`. In that case, your browserify config should look something like this:

```javascript
"browser": {
  "follow-redirects/http"  : "browserify-http-2/http",
  "follow-redirects/https" : "browserify-http-2/https"
}
```

## Contributing

Pull Requests are always welcome. Please [file an issue](https://github.com/follow-redirects/follow-redirects/issues) detailing your proposal before you invest your valuable time. Additional features and bug fixes should be accompanied by tests. You can run the test suite locally with a simple `npm test` command.

## Debug Logging

`follow-redirects` uses the excellent [debug](https://www.npmjs.com/package/debug) for logging. To turn on logging, set the environment variable `DEBUG=follow-redirects` for debug output from just this module. When running the test suite, it is sometimes advantageous to set `DEBUG=*` to see output from the express server as well.

## Authors

- Olivier Lalonde ([email protected])
- James Talmage ([email protected])
- [Ruben Verborgh](https://ruben.verborgh.org/)

## License

[MIT License](https://github.com/follow-redirects/follow-redirects/blob/master/LICENSE)

bs58
====

[![build status](https://travis-ci.org/cryptocoinjs/bs58.svg)](https://travis-ci.org/cryptocoinjs/bs58)

JavaScript component to compute base 58 encoding. This encoding is typically used for crypto currencies such as Bitcoin.

**Note:** If you're looking for **base 58 check** encoding, see: [https://github.com/bitcoinjs/bs58check](https://github.com/bitcoinjs/bs58check), which depends upon this library.

Install
-------

    npm i --save bs58

API
---

### encode(input)

`input` must be a [Buffer](https://nodejs.org/api/buffer.html) or an `Array`. It returns a `string`.

**example**:

```js
const bs58 = require('bs58')

const bytes = Buffer.from('003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187', 'hex')
const address = bs58.encode(bytes)
console.log(address)
// => 16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS
```

### decode(input)

`input` must be a base 58 encoded string. Returns a [Buffer](https://nodejs.org/api/buffer.html).

**example**:

```js
const bs58 = require('bs58')

const address = '16UjcYNBG9GTK4uq2f7yYEbuifqCzoLMGS'
const bytes = bs58.decode(address)
console.log(bytes.toString('hex'))
// => 003c176e659bea0f29a3e9bf7880c112b1b31b4dc826268187
```

Hack / Test
-----------

Uses JavaScript standard style.
Read more: [![js-standard-style](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard) Credits ------- - [Mike Hearn](https://github.com/mikehearn) for original Java implementation - [Stefan Thomas](https://github.com/justmoon) for porting to JavaScript - [Stephan Pair](https://github.com/gasteve) for buffer improvements - [Daniel Cousens](https://github.com/dcousens) for cleanup and merging improvements from bitcoinjs-lib - [Jared Deckard](https://github.com/deckar01) for killing `bigi` as a dependency License ------- MIT discontinuous-range =================== ``` DiscontinuousRange(1, 10).subtract(4, 6); // [ 1-3, 7-10 ] ``` [![Build Status](https://travis-ci.org/dtudury/discontinuous-range.png)](https://travis-ci.org/dtudury/discontinuous-range) this is a pretty simple module, but it exists to service another project so this'll be pretty lacking documentation. reading the test to see how this works may help. otherwise, here's an example that I think pretty much sums it up ###Example ``` var all_numbers = new DiscontinuousRange(1, 100); var bad_numbers = DiscontinuousRange(13).add(8).add(60,80); var good_numbers = all_numbers.clone().subtract(bad_numbers); console.log(good_numbers.toString()); //[ 1-7, 9-12, 14-59, 81-100 ] var random_good_number = good_numbers.index(Math.floor(Math.random() * good_numbers.length)); ``` # whatwg-url whatwg-url is a full implementation of the WHATWG [URL Standard](https://url.spec.whatwg.org/). It can be used standalone, but it also exposes a lot of the internal algorithms that are useful for integrating a URL parser into a project like [jsdom](https://github.com/tmpvar/jsdom). ## Specification conformance whatwg-url is currently up to date with the URL spec up to commit [7ae1c69](https://github.com/whatwg/url/commit/7ae1c691c96f0d82fafa24c33aa1e8df9ffbf2bc). For `file:` URLs, whose [origin is left unspecified](https://url.spec.whatwg.org/#concept-url-origin), whatwg-url chooses to use a new opaque origin (which serializes to `"null"`). ## API ### The `URL` and `URLSearchParams` classes The main API is provided by the [`URL`](https://url.spec.whatwg.org/#url-class) and [`URLSearchParams`](https://url.spec.whatwg.org/#interface-urlsearchparams) exports, which follows the spec's behavior in all ways (including e.g. `USVString` conversion). Most consumers of this library will want to use these. ### Low-level URL Standard API The following methods are exported for use by places like jsdom that need to implement things like [`HTMLHyperlinkElementUtils`](https://html.spec.whatwg.org/#htmlhyperlinkelementutils). They mostly operate on or return an "internal URL" or ["URL record"](https://url.spec.whatwg.org/#concept-url) type. 
- [URL parser](https://url.spec.whatwg.org/#concept-url-parser): `parseURL(input, { baseURL, encodingOverride })` - [Basic URL parser](https://url.spec.whatwg.org/#concept-basic-url-parser): `basicURLParse(input, { baseURL, encodingOverride, url, stateOverride })` - [URL serializer](https://url.spec.whatwg.org/#concept-url-serializer): `serializeURL(urlRecord, excludeFragment)` - [Host serializer](https://url.spec.whatwg.org/#concept-host-serializer): `serializeHost(hostFromURLRecord)` - [Serialize an integer](https://url.spec.whatwg.org/#serialize-an-integer): `serializeInteger(number)` - [Origin](https://url.spec.whatwg.org/#concept-url-origin) [serializer](https://html.spec.whatwg.org/multipage/origin.html#ascii-serialisation-of-an-origin): `serializeURLOrigin(urlRecord)` - [Set the username](https://url.spec.whatwg.org/#set-the-username): `setTheUsername(urlRecord, usernameString)` - [Set the password](https://url.spec.whatwg.org/#set-the-password): `setThePassword(urlRecord, passwordString)` - [Cannot have a username/password/port](https://url.spec.whatwg.org/#cannot-have-a-username-password-port): `cannotHaveAUsernamePasswordPort(urlRecord)` - [Percent decode](https://url.spec.whatwg.org/#percent-decode): `percentDecode(buffer)` The `stateOverride` parameter is one of the following strings: - [`"scheme start"`](https://url.spec.whatwg.org/#scheme-start-state) - [`"scheme"`](https://url.spec.whatwg.org/#scheme-state) - [`"no scheme"`](https://url.spec.whatwg.org/#no-scheme-state) - [`"special relative or authority"`](https://url.spec.whatwg.org/#special-relative-or-authority-state) - [`"path or authority"`](https://url.spec.whatwg.org/#path-or-authority-state) - [`"relative"`](https://url.spec.whatwg.org/#relative-state) - [`"relative slash"`](https://url.spec.whatwg.org/#relative-slash-state) - [`"special authority slashes"`](https://url.spec.whatwg.org/#special-authority-slashes-state) - [`"special authority ignore slashes"`](https://url.spec.whatwg.org/#special-authority-ignore-slashes-state) - [`"authority"`](https://url.spec.whatwg.org/#authority-state) - [`"host"`](https://url.spec.whatwg.org/#host-state) - [`"hostname"`](https://url.spec.whatwg.org/#hostname-state) - [`"port"`](https://url.spec.whatwg.org/#port-state) - [`"file"`](https://url.spec.whatwg.org/#file-state) - [`"file slash"`](https://url.spec.whatwg.org/#file-slash-state) - [`"file host"`](https://url.spec.whatwg.org/#file-host-state) - [`"path start"`](https://url.spec.whatwg.org/#path-start-state) - [`"path"`](https://url.spec.whatwg.org/#path-state) - [`"cannot-be-a-base-URL path"`](https://url.spec.whatwg.org/#cannot-be-a-base-url-path-state) - [`"query"`](https://url.spec.whatwg.org/#query-state) - [`"fragment"`](https://url.spec.whatwg.org/#fragment-state) The URL record type has the following API: - [`scheme`](https://url.spec.whatwg.org/#concept-url-scheme) - [`username`](https://url.spec.whatwg.org/#concept-url-username) - [`password`](https://url.spec.whatwg.org/#concept-url-password) - [`host`](https://url.spec.whatwg.org/#concept-url-host) - [`port`](https://url.spec.whatwg.org/#concept-url-port) - [`path`](https://url.spec.whatwg.org/#concept-url-path) (as an array) - [`query`](https://url.spec.whatwg.org/#concept-url-query) - [`fragment`](https://url.spec.whatwg.org/#concept-url-fragment) - [`cannotBeABaseURL`](https://url.spec.whatwg.org/#url-cannot-be-a-base-url-flag) (as a boolean) These properties should be treated with care, as in general changing them will cause the URL record to be in an 
inconsistent state until the appropriate invocation of `basicURLParse` is used to fix it up. You can see examples of this in the URL Standard, where there are many step sequences like "4. Set context object’s url’s fragment to the empty string. 5. Basic URL parse _input_ with context object’s url as _url_ and fragment state as _state override_." In between those two steps, a URL record is in an unusable state.

The return value of "failure" in the spec is represented by `null`. That is, functions like `parseURL` and `basicURLParse` can return _either_ a URL record _or_ `null`.

## Development instructions

First, install [Node.js](https://nodejs.org/). Then, fetch the dependencies of whatwg-url, by running from this directory:

    npm install

To run tests:

    npm test

To generate a coverage report:

    npm run coverage

To build and run the live viewer:

    npm run build
    npm run build-live-viewer

Serve the contents of the `live-viewer` directory using any web server.

## Supporting whatwg-url

The jsdom project (including whatwg-url) is a community-driven project maintained by a team of [volunteers](https://github.com/orgs/jsdom/people). You could support us by:

- [Getting professional support for whatwg-url](https://tidelift.com/subscription/pkg/npm-whatwg-url?utm_source=npm-whatwg-url&utm_medium=referral&utm_campaign=readme) as part of a Tidelift subscription. Tidelift helps make open source sustainable for us while giving teams assurances for maintenance, licensing, and security.
- Contributing directly to the project.

Compiler frontend for node.js
=============================

Usage
-----

For an up-to-date list of available command line options, see:

```
$> asc --help
```

API
---

The API accepts the same options as the CLI but also lets you override stdout and stderr and/or provide a callback. Example:

```js
const asc = require("assemblyscript/cli/asc");
asc.ready.then(() => {
  asc.main([
    "myModule.ts",
    "--binaryFile", "myModule.wasm",
    "--optimize",
    "--sourceMap",
    "--measure"
  ], {
    stdout: process.stdout,
    stderr: process.stderr
  }, function(err) {
    if (err) throw err;
    ...
  });
});
```

Available command line options can also be obtained programmatically:

```js
const options = require("assemblyscript/cli/asc.json");
...
```

You can also compile a source string directly, for example in a browser environment:

```js
const asc = require("assemblyscript/cli/asc");
asc.ready.then(() => {
  const { binary, text, stdout, stderr } = asc.compileString(`...`, { optimize: 2 });
});
...
```

# Regular Expression Tokenizer

Tokenizes strings that represent regular expressions.

[![Build Status](https://secure.travis-ci.org/fent/ret.js.svg)](http://travis-ci.org/fent/ret.js) [![Dependency Status](https://david-dm.org/fent/ret.js.svg)](https://david-dm.org/fent/ret.js) [![codecov](https://codecov.io/gh/fent/ret.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/ret.js)

# Usage

```js
var ret = require('ret');

var tokens = ret(/foo|bar/.source);
```

`tokens` will contain the following object

```js
{
  "type": ret.types.ROOT,
  "options": [
    [
      { "type": ret.types.CHAR, "value": 102 },
      { "type": ret.types.CHAR, "value": 111 },
      { "type": ret.types.CHAR, "value": 111 }
    ],
    [
      { "type": ret.types.CHAR, "value": 98 },
      { "type": ret.types.CHAR, "value": 97 },
      { "type": ret.types.CHAR, "value": 114 }
    ]
  ]
}
```

# Token Types

`ret.types` is a collection of the various token types exported by ret.

### ROOT

Only used in the root of the regexp. This is needed due to the possibility of the root containing a pipe `|` character.
In that case, the token will have an `options` key that will be an array of arrays of tokens. If not, it will contain a `stack` key that is an array of tokens.

```js
{
  "type": ret.types.ROOT,
  "stack": [token1, token2...],
}
```

```js
{
  "type": ret.types.ROOT,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ],
}
```

### GROUP

Groups contain tokens that are inside of parentheses. If the group begins with `?` followed by another character, it's a special type of group. A `:` tells the group not to be remembered when `exec` is used. `=` means the previous token matches only if followed by this group, and `!` means the previous token matches only if NOT followed.

Like root, it can contain an `options` key instead of `stack` if there is a pipe.

```js
{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "stack": [token1, token2...],
}
```

```js
{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ],
}
```

### POSITION

`\b`, `\B`, `^`, and `$` specify positions in the regexp.

```js
{
  "type": ret.types.POSITION,
  "value": "^",
}
```

### SET

Contains a key `set` specifying what tokens are allowed and a key `not` specifying if the set should be negated. A set can contain other sets, ranges, and characters.

```js
{
  "type": ret.types.SET,
  "set": [token1, token2...],
  "not": false,
}
```

### RANGE

Used in set tokens to specify a character range. `from` and `to` are character codes.

```js
{
  "type": ret.types.RANGE,
  "from": 97,
  "to": 122,
}
```

### REPETITION

```js
{
  "type": ret.types.REPETITION,
  "min": 0,
  "max": Infinity,
  "value": token,
}
```

### REFERENCE

References a group token. `value` is 1-9.

```js
{
  "type": ret.types.REFERENCE,
  "value": 1,
}
```

### CHAR

Represents a single character token. `value` is the character code. This might seem a bit cluttered compared to concatenating characters together, but since repetition tokens only repeat the last token (and not the last clause, like the pipe does), it's simpler to do it this way.

```js
{
  "type": ret.types.CHAR,
  "value": 123,
}
```

## Errors

ret.js will throw errors if given a string with an invalid regular expression. All possible errors are

* Invalid group. When a group with an immediate `?` character is followed by an invalid character. It can only be followed by `!`, `=`, or `:`. Example: `/(?_abc)/`
* Nothing to repeat. Thrown when a repetition token is used as the first token in the current clause, such as right at the beginning of the regexp or group, or right after a pipe. Example: `/foo|?bar/`, `/{1,3}foo|bar/`, `/foo(+bar)/`
* Unmatched ). A group was not opened, but was closed. Example: `/hello)2u/`
* Unterminated group. A group was not closed. Example: `/(1(23)4/`
* Unterminated character class. A custom character set was not closed.
Example: `/[abc/` # Install npm install ret # Tests Tests are written with [vows](http://vowsjs.org/) ```bash npm test ``` # License MIT # which-module > Find the module object for something that was require()d [![Build Status](https://travis-ci.org/nexdrew/which-module.svg?branch=master)](https://travis-ci.org/nexdrew/which-module) [![Coverage Status](https://coveralls.io/repos/github/nexdrew/which-module/badge.svg?branch=master)](https://coveralls.io/github/nexdrew/which-module?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) Find the `module` object in `require.cache` for something that was `require()`d or `import`ed - essentially a reverse `require()` lookup. Useful for libs that want to e.g. lookup a filename for a module or submodule that it did not `require()` itself. ## Install and Usage ``` npm install --save which-module ``` ```js const whichModule = require('which-module') console.log(whichModule(require('something'))) // Module { // id: '/path/to/project/node_modules/something/index.js', // exports: [Function], // parent: ..., // filename: '/path/to/project/node_modules/something/index.js', // loaded: true, // children: [], // paths: [ '/path/to/project/node_modules/something/node_modules', // '/path/to/project/node_modules', // '/path/to/node_modules', // '/path/node_modules', // '/node_modules' ] } ``` ## API ### `whichModule(exported)` Return the [`module` object](https://nodejs.org/api/modules.html#modules_the_module_object), if any, that represents the given argument in the `require.cache`. `exported` can be anything that was previously `require()`d or `import`ed as a module, submodule, or dependency - which means `exported` is identical to the `module.exports` returned by this method. If `exported` did not come from the `exports` of a `module` in `require.cache`, then this method returns `null`. ## License ISC © Contributors # brace-expansion [Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html), as known from sh/bash, in JavaScript. 
[![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion) [![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion) [![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/) [![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion) ## Example ```js var expand = require('brace-expansion'); expand('file-{a,b,c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('-v{,,}') // => ['-v', '-v', '-v'] expand('file{0..2}.jpg') // => ['file0.jpg', 'file1.jpg', 'file2.jpg'] expand('file-{a..c}.jpg') // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg'] expand('file{2..0}.jpg') // => ['file2.jpg', 'file1.jpg', 'file0.jpg'] expand('file{0..4..2}.jpg') // => ['file0.jpg', 'file2.jpg', 'file4.jpg'] expand('file-{a..e..2}.jpg') // => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg'] expand('file{00..10..5}.jpg') // => ['file00.jpg', 'file05.jpg', 'file10.jpg'] expand('{{A..C},{a..c}}') // => ['A', 'B', 'C', 'a', 'b', 'c'] expand('ppp{,config,oe{,conf}}') // => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf'] ``` ## API ```js var expand = require('brace-expansion'); ``` ### var expanded = expand(str) Return an array of all possible and valid expansions of `str`. If none are found, `[str]` is returned. Valid expansions are: ```js /^(.*,)+(.+)?$/ // {a,b,...} ``` A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` A numeric sequence from `x` to `y` inclusive, with optional increment. If `x` or `y` start with a leading `0`, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too. ```js /^-?\d+\.\.-?\d+(\.\.-?\d+)?$/ // {x..y[..incr]} ``` An alphabetic sequence from `x` to `y` inclusive, with optional increment. `x` and `y` must be exactly one character, and if given, `incr` must be a number. For compatibility reasons, the string `${` is not eligible for brace expansion. ## Installation With [npm](https://npmjs.org) do: ```bash npm install brace-expansion ``` ## Contributors - [Julian Gruber](https://github.com/juliangruber) - [Isaac Z. Schlueter](https://github.com/isaacs) ## Sponsors This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)! Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)! ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

# ASBuild

A simple build tool for [AssemblyScript](https://assemblyscript.org) projects, similar to `cargo`, etc.

## Usage

```
asb [entry file] [options] -- [args passed to asc]
```

### Background

AssemblyScript greater than v0.14.4 provides an `asconfig.json` configuration file that can be used to describe the options for building a project. ASBuild uses this and some defaults to create an easier CLI interface.

### Defaults

#### Project structure

```
project/
  package.json
  asconfig.json
  assembly/
    index.ts
  build/
    release/
      project.wasm
    debug/
      project.wasm
```

- If no entry file is passed and no `entry` field is in `asconfig.json`, `project/assembly/index.ts` is assumed.
- `asconfig.json` allows for options for different compile targets, e.g. release, debug, etc. `asb` defaults to the release target.
- The default build directory is `./build`, and artifacts are placed at `./build/<target>/packageName.wasm`.

### Workspaces

If a `workspaces` field is added to a top level `asconfig.json` file, then each path in the array is built and placed into the top level `outDir`. For example, `asconfig.json`:

```json
{
  "workspaces": ["a", "b"]
}
```

Running `asb` in the directory below will use the top level build directory to place all the binaries.

```
project/
  package.json
  asconfig.json
  a/
    asconfig.json
    assembly/
      index.ts
  b/
    asconfig.json
    assembly/
      index.ts
  build/
    release/
      a.wasm
      b.wasm
    debug/
      a.wasm
      b.wasm
```

To see an example in action, check out the [test workspace](./test).

# jsdiff

[![Build Status](https://secure.travis-ci.org/kpdecker/jsdiff.svg)](http://travis-ci.org/kpdecker/jsdiff) [![Sauce Test Status](https://saucelabs.com/buildstatus/jsdiff)](https://saucelabs.com/u/jsdiff)

A javascript text differencing implementation. Based on the algorithm proposed in ["An O(ND) Difference Algorithm and its Variations" (Myers, 1986)](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6927).

## Installation

```bash
npm install diff --save
```

## API

* `Diff.diffChars(oldStr, newStr[, options])` - diffs two blocks of text, comparing character by character. Returns a list of change objects (See below). Options
  * `ignoreCase`: `true` to ignore casing difference. Defaults to `false`.
* `Diff.diffWords(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, ignoring whitespace. Returns a list of change objects (See below). Options
  * `ignoreCase`: Same as in `diffChars`.
* `Diff.diffWordsWithSpace(oldStr, newStr[, options])` - diffs two blocks of text, comparing word by word, treating whitespace as significant. Returns a list of change objects (See below).
* `Diff.diffLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line. Options
  * `ignoreWhitespace`: `true` to ignore leading and trailing whitespace. This is the same as `diffTrimmedLines`
  * `newlineIsToken`: `true` to treat newline characters as separate tokens. This allows for changes to the newline structure to occur independently of the line content and to be treated as such.
In general this is the more human friendly form of `diffLines` and `diffLines` is better suited for patches and other computer friendly output. Returns a list of change objects (See below). * `Diff.diffTrimmedLines(oldStr, newStr[, options])` - diffs two blocks of text, comparing line by line, ignoring leading and trailing whitespace. Returns a list of change objects (See below). * `Diff.diffSentences(oldStr, newStr[, options])` - diffs two blocks of text, comparing sentence by sentence. Returns a list of change objects (See below). * `Diff.diffCss(oldStr, newStr[, options])` - diffs two blocks of text, comparing CSS tokens. Returns a list of change objects (See below). * `Diff.diffJson(oldObj, newObj[, options])` - diffs two JSON objects, comparing the fields defined on each. The order of fields, etc does not matter in this comparison. Returns a list of change objects (See below). * `Diff.diffArrays(oldArr, newArr[, options])` - diffs two arrays, comparing each item for strict equality (===). Options * `comparator`: `function(left, right)` for custom equality checks Returns a list of change objects (See below). * `Diff.createTwoFilesPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Parameters: * `oldFileName` : String to be output in the filename section of the patch for the removals * `newFileName` : String to be output in the filename section of the patch for the additions * `oldStr` : Original string value * `newStr` : New string value * `oldHeader` : Additional information to include in the old file header * `newHeader` : Additional information to include in the new file header * `options` : An object with options. Currently, only `context` is supported and describes how many lines of context should be included. * `Diff.createPatch(fileName, oldStr, newStr, oldHeader, newHeader)` - creates a unified diff patch. Just like Diff.createTwoFilesPatch, but with oldFileName being equal to newFileName. * `Diff.structuredPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options)` - returns an object with an array of hunk objects. This method is similar to createTwoFilesPatch, but returns a data structure suitable for further processing. Parameters are the same as createTwoFilesPatch. The data structure returned may look like this: ```js { oldFileName: 'oldfile', newFileName: 'newfile', oldHeader: 'header1', newHeader: 'header2', hunks: [{ oldStart: 1, oldLines: 3, newStart: 1, newLines: 3, lines: [' line2', ' line3', '-line4', '+line5', '\\ No newline at end of file'], }] } ``` * `Diff.applyPatch(source, patch[, options])` - applies a unified diff patch. Return a string containing new version of provided data. `patch` may be a string diff or the output from the `parsePatch` or `structuredPatch` methods. The optional `options` object may have the following keys: - `fuzzFactor`: Number of lines that are allowed to differ before rejecting a patch. Defaults to 0. - `compareLine(lineNumber, line, operation, patchContent)`: Callback used to compare to given lines to determine if they should be considered equal when patching. Defaults to strict equality but may be overridden to provide fuzzier comparison. Should return false if the lines should be rejected. * `Diff.applyPatches(patch, options)` - applies one or more patches. This method will iterate over the contents of the patch and apply to data provided through callbacks. The general flow for each patch index is: - `options.loadFile(index, callback)` is called. 
The caller should then load the contents of the file and then pass that to the `callback(err, data)` callback. Passing an `err` will terminate further patch execution. - `options.patched(index, content, callback)` is called once the patch has been applied. `content` will be the return value from `applyPatch`. When it's ready, the caller should call `callback(err)` callback. Passing an `err` will terminate further patch execution. Once all patches have been applied or an error occurs, the `options.complete(err)` callback is made. * `Diff.parsePatch(diffStr)` - Parses a patch into structured data Return a JSON object representation of the a patch, suitable for use with the `applyPatch` method. This parses to the same structure returned by `Diff.structuredPatch`. * `convertChangesToXML(changes)` - converts a list of changes to a serialized XML format All methods above which accept the optional `callback` method will run in sync mode when that parameter is omitted and in async mode when supplied. This allows for larger diffs without blocking the event loop. This may be passed either directly as the final parameter or as the `callback` field in the `options` object. ### Change Objects Many of the methods above return change objects. These objects consist of the following fields: * `value`: Text content * `added`: True if the value was inserted into the new string * `removed`: True if the value was removed from the old string Note that some cases may omit a particular flag field. Comparison on the flag fields should always be done in a truthy or falsy manner. ## Examples Basic example in Node ```js require('colors'); const Diff = require('diff'); const one = 'beep boop'; const other = 'beep boob blah'; const diff = Diff.diffChars(one, other); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; process.stderr.write(part.value[color]); }); console.log(); ``` Running the above program should yield <img src="images/node_example.png" alt="Node Example"> Basic example in a web page ```html <pre id="display"></pre> <script src="diff.js"></script> <script> const one = 'beep boop', other = 'beep boob blah', color = ''; let span = null; const diff = Diff.diffChars(one, other), display = document.getElementById('display'), fragment = document.createDocumentFragment(); diff.forEach((part) => { // green for additions, red for deletions // grey for common parts const color = part.added ? 'green' : part.removed ? 'red' : 'grey'; span = document.createElement('span'); span.style.color = color; span.appendChild(document .createTextNode(part.value)); fragment.appendChild(span); }); display.appendChild(fragment); </script> ``` Open the above .html file in a browser and you should see <img src="images/web_example.png" alt="Node Example"> **[Full online demo](http://kpdecker.github.com/jsdiff)** ## Compatibility [![Sauce Test Status](https://saucelabs.com/browser-matrix/jsdiff.svg)](https://saucelabs.com/u/jsdiff) jsdiff supports all ES3 environments with some known issues on IE8 and below. Under these browsers some diff algorithms such as word diff and others may fail due to lack of support for capturing groups in the `split` operation. ## License See [LICENSE](https://github.com/kpdecker/jsdiff/blob/master/LICENSE). 
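For a quick end-to-end feel for the patch methods described in the API section above, here is a minimal sketch that creates a unified diff with `createPatch` and applies it back with `applyPatch`; the file name and strings are arbitrary illustrative values:

```js
const Diff = require('diff');

const oldStr = 'line1\nline2\nline3\n';
const newStr = 'line1\nline two\nline3\n';

// Create a unified diff patch describing the change from oldStr to newStr
const patch = Diff.createPatch('example.txt', oldStr, newStr);

// Apply the patch back onto the original string
const patched = Diff.applyPatch(oldStr, patch);

// Expected: true (applyPatch returns false if the patch cannot be applied)
console.log(patched === newStr);
```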
<p align="center"> <img width="250" src="https://raw.githubusercontent.com/yargs/yargs/master/yargs-logo.png"> </p> <h1 align="center"> Yargs </h1> <p align="center"> <b >Yargs be a node.js library fer hearties tryin' ter parse optstrings</b> </p> <br> ![ci](https://github.com/yargs/yargs/workflows/ci/badge.svg) [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Coverage][coverage-image]][coverage-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![Slack][slack-image]][slack-url] ## Description Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface. It gives you: * commands and (grouped) options (`my-program.js serve --port=5000`). * a dynamically generated help menu based on your arguments: ``` mocha [spec..] Run tests with Mocha Commands mocha inspect [spec..] Run tests with Mocha [default] mocha init <path> create a client-side Mocha setup at <path> Rules & Behavior --allow-uncaught Allow uncaught errors to propagate [boolean] --async-only, -A Require all tests to use a callback (async) or return a Promise [boolean] ``` * bash-completion shortcuts for commands and options. * and [tons more](/docs/api.md). ## Installation Stable version: ```bash npm i yargs ``` Bleeding edge version with the most recent features: ```bash npm i yargs@next ``` ## Usage ### Simple Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') const argv = yargs(hideBin(process.argv)).argv if (argv.ships > 3 && argv.distance < 53.5) { console.log('Plunder more riffiwobbles!') } else { console.log('Retreat from the xupptumblers!') } ``` ```bash $ ./plunder.js --ships=4 --distance=22 Plunder more riffiwobbles! $ ./plunder.js --ships 12 --distance 98.7 Retreat from the xupptumblers! ``` ### Complex Example ```javascript #!/usr/bin/env node const yargs = require('yargs/yargs') const { hideBin } = require('yargs/helpers') yargs(hideBin(process.argv)) .command('serve [port]', 'start the server', (yargs) => { yargs .positional('port', { describe: 'port to bind on', default: 5000 }) }, (argv) => { if (argv.verbose) console.info(`start server on :${argv.port}`) serve(argv.port) }) .option('verbose', { alias: 'v', type: 'boolean', description: 'Run with verbose logging' }) .argv ``` Run the example above with `--help` to see the help for the application. ## Supported Platforms ### TypeScript yargs has type definitions at [@types/yargs][type-definitions]. ``` npm i @types/yargs --save-dev ``` See usage examples in [docs](/docs/typescript.md). ### Deno As of `v16`, `yargs` supports [Deno](https://github.com/denoland/deno): ```typescript import yargs from 'https://deno.land/x/yargs/deno.ts' import { Arguments } from 'https://deno.land/x/yargs/deno-types.ts' yargs(Deno.args) .command('download <files...>', 'download a list of files', (yargs: any) => { return yargs.positional('files', { describe: 'a list of files to do something with' }) }, (argv: Arguments) => { console.info(argv) }) .strictCommands() .demandCommand(1) .argv ``` ### ESM As of `v16`,`yargs` supports ESM imports: ```js import yargs from 'yargs' import { hideBin } from 'yargs/helpers' yargs(hideBin(process.argv)) .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => { console.info(argv) }) .demandCommand(1) .argv ``` ### Usage in Browser See examples of using yargs in the browser in [docs](/docs/browser.md). ## Community Having problems? 
want to contribute? join our [community slack](http://devtoolscommunity.herokuapp.com). ## Documentation ### Table of Contents * [Yargs' API](/docs/api.md) * [Examples](/docs/examples.md) * [Parsing Tricks](/docs/tricks.md) * [Stop the Parser](/docs/tricks.md#stop) * [Negating Boolean Arguments](/docs/tricks.md#negate) * [Numbers](/docs/tricks.md#numbers) * [Arrays](/docs/tricks.md#arrays) * [Objects](/docs/tricks.md#objects) * [Quotes](/docs/tricks.md#quotes) * [Advanced Topics](/docs/advanced.md) * [Composing Your App Using Commands](/docs/advanced.md#commands) * [Building Configurable CLI Apps](/docs/advanced.md#configuration) * [Customizing Yargs' Parser](/docs/advanced.md#customizing) * [Bundling yargs](/docs/bundling.md) * [Contributing](/contributing.md) ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). [npm-url]: https://www.npmjs.com/package/yargs [npm-image]: https://img.shields.io/npm/v/yargs.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: http://standardjs.com/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg [conventional-commits-url]: https://conventionalcommits.org/ [slack-image]: http://devtoolscommunity.herokuapp.com/badge.svg [slack-url]: http://devtoolscommunity.herokuapp.com [type-definitions]: https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/yargs [coverage-image]: https://img.shields.io/nycrc/yargs/yargs [coverage-url]: https://github.com/yargs/yargs/blob/master/.nycrc # emoji-regex [![Build status](https://travis-ci.org/mathiasbynens/emoji-regex.svg?branch=master)](https://travis-ci.org/mathiasbynens/emoji-regex) _emoji-regex_ offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard. This repository contains a script that generates this regular expression based on [the data from Unicode v12](https://github.com/mathiasbynens/unicode-12.0.0). Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard. ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install emoji-regex ``` In [Node.js](https://nodejs.org/): ```js const emojiRegex = require('emoji-regex'); // Note: because the regular expression has the global flag set, this module // exports a function that returns the regex rather than exporting the regular // expression itself, to make it impossible to (accidentally) mutate the // original regular expression. 
const text = ` \u{231A}: ⌚ default emoji presentation character (Emoji_Presentation) \u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji \u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base) \u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier `; const regex = emojiRegex(); let match; while (match = regex.exec(text)) { const emoji = match[0]; console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`); } ``` Console output: ``` Matched sequence ⌚ — code points: 1 Matched sequence ⌚ — code points: 1 Matched sequence ↔️ — code points: 2 Matched sequence ↔️ — code points: 2 Matched sequence 👩 — code points: 1 Matched sequence 👩 — code points: 1 Matched sequence 👩🏿 — code points: 2 Matched sequence 👩🏿 — code points: 2 ``` To match emoji in their textual representation as well (i.e. emoji that are not `Emoji_Presentation` symbols and that aren’t forced to render as emoji by a variation selector), `require` the other regex: ```js const emojiRegex = require('emoji-regex/text.js'); ``` Additionally, in environments which support ES2015 Unicode escapes, you may `require` ES2015-style versions of the regexes: ```js const emojiRegex = require('emoji-regex/es2015/index.js'); const emojiRegexText = require('emoji-regex/es2015/text.js'); ``` ## Author | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") | |---| | [Mathias Bynens](https://mathiasbynens.be/) | ## License _emoji-regex_ is available under the [MIT](https://mths.be/mit) license. # ts-mixer [version-badge]: https://badgen.net/npm/v/ts-mixer [version-link]: https://npmjs.com/package/ts-mixer [build-badge]: https://img.shields.io/github/workflow/status/tannerntannern/ts-mixer/ts-mixer%20CI [build-link]: https://github.com/tannerntannern/ts-mixer/actions [ts-versions]: https://badgen.net/badge/icon/3.8,3.9,4.0?icon=typescript&label&list=| [node-versions]: https://badgen.net/badge/node/10%2C12%2C14/blue/?list=| [![npm version][version-badge]][version-link] [![github actions][build-badge]][build-link] [![TS Versions][ts-versions]][build-link] [![Node.js Versions][node-versions]][build-link] [![Minified Size](https://badgen.net/bundlephobia/min/ts-mixer)](https://bundlephobia.com/result?p=ts-mixer) [![Conventional Commits](https://badgen.net/badge/conventional%20commits/1.0.0/yellow)](https://conventionalcommits.org) ## Overview `ts-mixer` brings mixins to TypeScript. "Mixins" to `ts-mixer` are just classes, so you already know how to write them, and you can probably mix classes from your favorite library without trouble. The mixin problem is more nuanced than it appears. I've seen countless code snippets that work for certain situations, but fail in others. `ts-mixer` tries to take the best from all these solutions while accounting for the situations you might not have considered. 
[Quick start guide](#quick-start) ### Features * mixes plain classes * mixes classes that extend other classes * mixes classes that were mixed with `ts-mixer` * supports static properties * supports protected/private properties (the popular function-that-returns-a-class solution does not) * mixes abstract classes (with caveats [[1](#caveats)]) * mixes generic classes (with caveats [[2](#caveats)]) * supports class, method, and property decorators (with caveats [[3, 6](#caveats)]) * mostly supports the complexity presented by constructor functions (with caveats [[4](#caveats)]) * comes with an `instanceof`-like replacement (with caveats [[5, 6](#caveats)]) * [multiple mixing strategies](#settings) (ES6 proxies vs hard copy) ### Caveats 1. Mixing abstract classes requires a bit of a hack that may break in future versions of TypeScript. See [mixing abstract classes](#mixing-abstract-classes) below. 2. Mixing generic classes requires a more cumbersome notation, but it's still possible. See [mixing generic classes](#mixing-generic-classes) below. 3. Using decorators in mixed classes also requires a more cumbersome notation. See [mixing with decorators](#mixing-with-decorators) below. 4. ES6 made it impossible to use `.apply(...)` on class constructors (or any means of calling them without `new`), which makes it impossible for `ts-mixer` to pass the proper `this` to your constructors. This may or may not be an issue for your code, but there are options to work around it. See [dealing with constructors](#dealing-with-constructors) below. 5. `ts-mixer` does not support `instanceof` for mixins, but it does offer a replacement. See the [hasMixin function](#hasmixin) for more details. 6. Certain features (specifically, `@decorator` and `hasMixin`) make use of ES6 `Map`s, which means you must either use ES6+ or polyfill `Map` to use them. If you don't need these features, you should be fine without. ## Quick Start ### Installation ``` $ npm install ts-mixer ``` or if you prefer [Yarn](https://yarnpkg.com): ``` $ yarn add ts-mixer ``` ### Basic Example ```typescript import { Mixin } from 'ts-mixer'; class Foo { protected makeFoo() { return 'foo'; } } class Bar { protected makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { public makeFooBar() { return this.makeFoo() + this.makeBar(); } } const fooBar = new FooBar(); console.log(fooBar.makeFooBar()); // "foobar" ``` ## Special Cases ### Mixing Abstract Classes Abstract classes, by definition, cannot be constructed, which means they cannot take on the type, `new(...args) => any`, and by extension, are incompatible with `ts-mixer`. BUT, you can "trick" TypeScript into giving you all the benefits of an abstract class without making it technically abstract. The trick is just some strategic `// @ts-ignore`'s: ```typescript import { Mixin } from 'ts-mixer'; // note that Foo is not marked as an abstract class class Foo { // @ts-ignore: "Abstract methods can only appear within an abstract class" public abstract makeFoo(): string; } class Bar { public makeBar() { return 'bar'; } } class FooBar extends Mixin(Foo, Bar) { // we still get all the benefits of abstract classes here, because TypeScript // will still complain if this method isn't implemented public makeFoo() { return 'foo'; } } ``` Do note that while this does work quite well, it is a bit of a hack and I can't promise that it will continue to work in future TypeScript versions. 
### Mixing Generic Classes Frustratingly, it is _impossible_ for generic parameters to be referenced in base class expressions. No matter what, you will eventually run into `Base class expressions cannot reference class type parameters.` The way to get around this is to leverage [declaration merging](https://www.typescriptlang.org/docs/handbook/declaration-merging.html), and a slightly different mixing function from ts-mixer: `mix`. It works exactly like `Mixin`, except it's a decorator, which means it doesn't affect the type information of the class being decorated. See it in action below: ```typescript import { mix } from 'ts-mixer'; class Foo<T> { public fooMethod(input: T): T { return input; } } class Bar<T> { public barMethod(input: T): T { return input; } } interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { } @mix(Foo, Bar) class FooBar<T1, T2> { public fooBarMethod(input1: T1, input2: T2) { return [this.fooMethod(input1), this.barMethod(input2)]; } } ``` Key takeaways from this example: * `interface FooBar<T1, T2> extends Foo<T1>, Bar<T2> { }` makes sure `FooBar` has the typing we want, thanks to declaration merging * `@mix(Foo, Bar)` wires things up "on the JavaScript side", since the interface declaration has nothing to do with runtime behavior. * The reason we have to use the `mix` decorator is that the typing produced by `Mixin(Foo, Bar)` would conflict with the typing of the interface. `mix` has no effect "on the TypeScript side," thus avoiding type conflicts. ### Mixing with Decorators Popular libraries such as [class-validator](https://github.com/typestack/class-validator) and [TypeORM](https://github.com/typeorm/typeorm) use decorators to add functionality. Unfortunately, `ts-mixer` has no way of knowing what these libraries do with the decorators behind the scenes. So if you want these decorators to be "inherited" with classes you plan to mix, you first have to wrap them with a special `decorate` function exported by `ts-mixer`. Here's an example using `class-validator`: ```typescript import { IsBoolean, IsIn, validate } from 'class-validator'; import { Mixin, decorate } from 'ts-mixer'; class Disposable { @decorate(IsBoolean()) // instead of @IsBoolean() isDisposed: boolean = false; } class Statusable { @decorate(IsIn(['red', 'green'])) // instead of @IsIn(['red', 'green']) status: string = 'green'; } class ExtendedObject extends Mixin(Disposable, Statusable) {} const extendedObject = new ExtendedObject(); extendedObject.status = 'blue'; validate(extendedObject).then(errors => { console.log(errors); }); ``` ### Dealing with Constructors As mentioned in the [caveats section](#caveats), ES6 disallowed calling constructor functions without `new`. This means that the only way for `ts-mixer` to mix instance properties is to instantiate each base class separately, then copy the instance properties into a common object. The consequence of this is that constructors mixed by `ts-mixer` will _not_ receive the proper `this`. **This very well may not be an issue for you!** It only means that your constructors need to be "mostly pure" in terms of how they handle `this`. Specifically, your constructors cannot produce [side effects](https://en.wikipedia.org/wiki/Side_effect_%28computer_science%29) involving `this`, _other than adding properties to `this`_ (the most common side effect in JavaScript constructors). 
If you simply cannot eliminate `this` side effects from your constructor, there is a workaround available: `ts-mixer` will automatically forward constructor parameters to a predesignated init function (`settings.initFunction`) if it's present on the class. Unlike constructors, functions can be called with an arbitrary `this`, so this predesignated init function _will_ have the proper `this`. Here's a basic example: ```typescript import { Mixin, settings } from 'ts-mixer'; settings.initFunction = 'init'; class Person { public static allPeople: Set<Person> = new Set(); protected init() { Person.allPeople.add(this); } } type PartyAffiliation = 'democrat' | 'republican'; class PoliticalParticipant { public static democrats: Set<PoliticalParticipant> = new Set(); public static republicans: Set<PoliticalParticipant> = new Set(); public party: PartyAffiliation; // note that these same args will also be passed to init function public constructor(party: PartyAffiliation) { this.party = party; } protected init(party: PartyAffiliation) { if (party === 'democrat') PoliticalParticipant.democrats.add(this); else PoliticalParticipant.republicans.add(this); } } class Voter extends Mixin(Person, PoliticalParticipant) {} const v1 = new Voter('democrat'); const v2 = new Voter('democrat'); const v3 = new Voter('republican'); const v4 = new Voter('republican'); ``` Note the above `.add(this)` statements. These would not work as expected if they were placed in the constructor instead, since `this` is not the same between the constructor and `init`, as explained above. ## Other Features ### hasMixin As mentioned above, `ts-mixer` does not support `instanceof` for mixins. While it is possible to implement [custom `instanceof` behavior](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/hasInstance), this library does not do so because it would require modifying the source classes, which is deliberately avoided. You can fill this missing functionality with `hasMixin(instance, mixinClass)` instead. See the below example: ```typescript import { Mixin, hasMixin } from 'ts-mixer'; class Foo {} class Bar {} class FooBar extends Mixin(Foo, Bar) {} const instance = new FooBar(); // doesn't work with instanceof... console.log(instance instanceof FooBar) // true console.log(instance instanceof Foo) // false console.log(instance instanceof Bar) // false // but everything works nicely with hasMixin! console.log(hasMixin(instance, FooBar)) // true console.log(hasMixin(instance, Foo)) // true console.log(hasMixin(instance, Bar)) // true ``` `hasMixin(instance, mixinClass)` will work anywhere that `instance instanceof mixinClass` works. Additionally, like `instanceof`, you get the same [type narrowing benefits](https://www.typescriptlang.org/docs/handbook/advanced-types.html#instanceof-type-guards): ```typescript if (hasMixin(instance, Foo)) { // inferred type of instance is "Foo" } if (hasMixin(instance, Bar)) { // inferred type of instance of "Bar" } ``` ## Settings ts-mixer has multiple strategies for mixing classes which can be configured by modifying `settings` from ts-mixer. For example: ```typescript import { settings, Mixin } from 'ts-mixer'; settings.prototypeStrategy = 'proxy'; // then use `Mixin` as normal... ``` ### `settings.prototypeStrategy` * Determines how ts-mixer will mix class prototypes together * Possible values: - `'copy'` (default) - Copies all methods from the classes being mixed into a new prototype object. 
(This will include all methods up the prototype chains as well.) This is the default for ES5 compatibility, but it has the downside of stale references. For example, if you mix `Foo` and `Bar` to make `FooBar`, then redefine a method on `Foo`, `FooBar` will not have the latest methods from `Foo`. If this is not a concern for you, `'copy'` is the best value for this setting. - `'proxy'` - Uses an ES6 Proxy to "soft mix" prototypes. Unlike `'copy'`, updates to the base classes _will_ be reflected in the mixed class, which may be desirable. The downside is that method access is not as performant, nor is it ES5 compatible. ### `settings.staticsStrategy` * Determines how static properties are inherited * Possible values: - `'copy'` (default) - Simply copies all properties (minus `prototype`) from the base classes/constructor functions onto the mixed class. Like `settings.prototypeStrategy = 'copy'`, this strategy also suffers from stale references, but shouldn't be a concern if you don't redefine static methods after mixing. - `'proxy'` - Similar to `settings.prototypeStrategy`, proxy's static method access to base classes. Has the same benefits/downsides. ### `settings.initFunction` * If set, `ts-mixer` will automatically call the function with this name upon construction * Possible values: - `null` (default) - disables the behavior - a string - function name to call upon construction * Read more about why you would want this in [dealing with constructors](#dealing-with-constructors) ### `settings.decoratorInheritance` * Determines how decorators are inherited from classes passed to `Mixin(...)` * Possible values: - `'deep'` (default) - Deeply inherits decorators from all given classes and their ancestors - `'direct'` - Only inherits decorators defined directly on the given classes - `'none'` - Skips decorator inheritance # Author Tanner Nielsen <[email protected]> * Website - [tannernielsen.com](http://tannernielsen.com) * Github - [tannerntannern](https://github.com/tannerntannern) # near-sdk-core This package contain a convenient interface for interacting with NEAR's host runtime. To see the functions that are provided by the host node see [`env.ts`](./assembly/env/env.ts). # yargs-parser ![ci](https://github.com/yargs/yargs-parser/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/yargs-parser) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. 
<img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js const argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```console $ node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js const argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```console { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js const parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## Deno Example As of `v19` `yargs-parser` supports [Deno](https://github.com/denoland/deno): ```typescript import parser from "https://deno.land/x/yargs_parser/deno.ts"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` ## ESM Example As of `v19` `yargs-parser` supports ESM (_both in Node.js and in the browser_): **Node.js:** ```js import parser from 'yargs-parser' const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) ``` **Browsers:** ```html <!doctype html> <body> <script type="module"> import parser from "https://unpkg.com/[email protected]/browser.js"; const argv = parser('--foo=99 --bar=9987930', { string: ['bar'] }) console.log(argv) </script> </body> ``` ## API ### parser(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. * `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. 
* `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```console $ node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```console $ node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```console $ node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```console $ node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```console $ node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```console $ node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```console $ node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```console $ node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### parse positional numbers * default: `true` * key: `parse-positional-numbers` Should positional keys that look like numbers be treated as such. ```console $ node example.js 99.3 { _: [99.3] } ``` _if disabled:_ ```console $ node example.js 99.3 { _: ['99.3'] } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? ```console $ node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```console $ node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. 
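Unlike most of the flags above, `combine-arrays` has no example in this README, so here is a small sketch of the intent (the `tag` key and its values are made up for illustration, and the exact merge order may differ):

```js
const parser = require('yargs-parser')

// the same array key supplied both on the command line and via a config object
const argv = parser('--tag from-cli', {
  array: ['tag'],
  configObjects: [{ tag: ['from-config'] }],
  configuration: { 'combine-arrays': true }
})

// with combining enabled both sources contribute,
// e.g. { _: [], tag: [ 'from-cli', 'from-config' ] };
// with the default (false) one source's value replaces the other
console.log(argv)
```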
### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```console $ node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```console $ node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```console $ node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```console $ node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```console $ node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```console $ node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```console $ node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```console $ node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```console $ node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```console $ node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```console $ node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```console $ node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```console $ node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```console $ node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. 
_If disabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```console $ node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC # yargs-parser [![Build Status](https://travis-ci.org/yargs/yargs-parser.svg)](https://travis-ci.org/yargs/yargs-parser) [![NPM version](https://img.shields.io/npm/v/yargs-parser.svg)](https://www.npmjs.com/package/yargs-parser) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) The mighty option parser used by [yargs](https://github.com/yargs/yargs). visit the [yargs website](http://yargs.js.org/) for more examples, and thorough usage instructions. <img width="250" src="https://raw.githubusercontent.com/yargs/yargs-parser/master/yargs-logo.png"> ## Example ```sh npm i yargs-parser --save ``` ```js var argv = require('yargs-parser')(process.argv.slice(2)) console.log(argv) ``` ```sh node example.js --foo=33 --bar hello { _: [], foo: 33, bar: 'hello' } ``` _or parse a string!_ ```js var argv = require('yargs-parser')('--foo=99 --bar=33') console.log(argv) ``` ```sh { _: [], foo: 99, bar: 33 } ``` Convert an array of mixed types before passing to `yargs-parser`: ```js var parse = require('yargs-parser') parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings ``` ## API ### require('yargs-parser')(args, opts={}) Parses command line arguments returning a simple mapping of keys and values. **expects:** * `args`: a string or array of strings representing the options to parse. * `opts`: provide a set of hints indicating how `args` should be parsed: * `opts.alias`: an object representing the set of aliases for a key: `{alias: {foo: ['f']}}`. * `opts.array`: indicate that keys should be parsed as an array: `{array: ['foo', 'bar']}`.<br> Indicate that keys should be parsed as an array and coerced to booleans / numbers:<br> `{array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}`. * `opts.boolean`: arguments should be parsed as booleans: `{boolean: ['x', 'y']}`. * `opts.coerce`: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:<br> `{coerce: {foo: function (arg) {return modifiedArg}}}`. * `opts.config`: indicate a key that represents a path to a configuration file (this file will be loaded and parsed). * `opts.configObjects`: configuration objects to parse, their properties will be set as arguments:<br> `{configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}`. 
* `opts.configuration`: provide configuration options to the yargs-parser (see: [configuration](#configuration)). * `opts.count`: indicate a key that should be used as a counter, e.g., `-vvv` = `{v: 3}`. * `opts.default`: provide default values for keys: `{default: {x: 33, y: 'hello world!'}}`. * `opts.envPrefix`: environment variables (`process.env`) with the prefix provided should be parsed. * `opts.narg`: specify that a key requires `n` arguments: `{narg: {x: 2}}`. * `opts.normalize`: `path.normalize()` will be applied to values set to this key. * `opts.number`: keys should be treated as numbers. * `opts.string`: keys should be treated as strings (even if they resemble a number `-x 33`). **returns:** * `obj`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. ### require('yargs-parser').detailed(args, opts={}) Parses a command line string, returning detailed information required by the yargs engine. **expects:** * `args`: a string or array of strings representing options to parse. * `opts`: provide a set of hints indicating how `args`, inputs are identical to `require('yargs-parser')(args, opts={})`. **returns:** * `argv`: an object representing the parsed value of `args` * `key/value`: key value pairs for each argument and their aliases. * `_`: an array representing the positional arguments. * [optional] `--`: an array with arguments after the end-of-options flag `--`. * `error`: populated with an error object if an exception occurred during parsing. * `aliases`: the inferred list of aliases built by combining lists in `opts.alias`. * `newAliases`: any new aliases added via camel-case expansion: * `boolean`: `{ fooBar: true }` * `defaulted`: any new argument created by `opts.default`, no aliases included. * `boolean`: `{ foo: true }` * `configuration`: given by default settings and `opts.configuration`. <a name="configuration"></a> ### Configuration The yargs-parser applies several automated transformations on the keys provided in `args`. These features can be turned on and off using the `configuration` field of `opts`. ```js var parsed = parser(['--no-dice'], { configuration: { 'boolean-negation': false } }) ``` ### short option groups * default: `true`. * key: `short-option-groups`. Should a group of short-options be treated as boolean flags? ```sh node example.js -abc { _: [], a: true, b: true, c: true } ``` _if disabled:_ ```sh node example.js -abc { _: [], abc: true } ``` ### camel-case expansion * default: `true`. * key: `camel-case-expansion`. Should hyphenated arguments be expanded into camel-case aliases? ```sh node example.js --foo-bar { _: [], 'foo-bar': true, fooBar: true } ``` _if disabled:_ ```sh node example.js --foo-bar { _: [], 'foo-bar': true } ``` ### dot-notation * default: `true` * key: `dot-notation` Should keys that contain `.` be treated as objects? ```sh node example.js --foo.bar { _: [], foo: { bar: true } } ``` _if disabled:_ ```sh node example.js --foo.bar { _: [], "foo.bar": true } ``` ### parse numbers * default: `true` * key: `parse-numbers` Should keys that look like numbers be treated as such? ```sh node example.js --foo=99.3 { _: [], foo: 99.3 } ``` _if disabled:_ ```sh node example.js --foo=99.3 { _: [], foo: "99.3" } ``` ### boolean negation * default: `true` * key: `boolean-negation` Should variables prefixed with `--no` be treated as negations? 
```sh node example.js --no-foo { _: [], foo: false } ``` _if disabled:_ ```sh node example.js --no-foo { _: [], "no-foo": true } ``` ### combine arrays * default: `false` * key: `combine-arrays` Should arrays be combined when provided by both command line arguments and a configuration file. ### duplicate arguments array * default: `true` * key: `duplicate-arguments-array` Should arguments be coerced into an array when duplicated: ```sh node example.js -x 1 -x 2 { _: [], x: [1, 2] } ``` _if disabled:_ ```sh node example.js -x 1 -x 2 { _: [], x: 2 } ``` ### flatten duplicate arrays * default: `true` * key: `flatten-duplicate-arrays` Should array arguments be coerced into a single array when duplicated: ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [1, 2, 3, 4] } ``` _if disabled:_ ```sh node example.js -x 1 2 -x 3 4 { _: [], x: [[1, 2], [3, 4]] } ``` ### greedy arrays * default: `true` * key: `greedy-arrays` Should arrays consume more than one positional argument following their flag. ```sh node example --arr 1 2 { _[], arr: [1, 2] } ``` _if disabled:_ ```sh node example --arr 1 2 { _[2], arr: [1] } ``` **Note: in `v18.0.0` we are considering defaulting greedy arrays to `false`.** ### nargs eats options * default: `false` * key: `nargs-eats-options` Should nargs consume dash options as well as positional arguments. ### negation prefix * default: `no-` * key: `negation-prefix` The prefix to use for negated boolean variables. ```sh node example.js --no-foo { _: [], foo: false } ``` _if set to `quux`:_ ```sh node example.js --quuxfoo { _: [], foo: false } ``` ### populate -- * default: `false`. * key: `populate--` Should unparsed flags be stored in `--` or `_`. _If disabled:_ ```sh node example.js a -b -- x y { _: [ 'a', 'x', 'y' ], b: true } ``` _If enabled:_ ```sh node example.js a -b -- x y { _: [ 'a' ], '--': [ 'x', 'y' ], b: true } ``` ### set placeholder key * default: `false`. * key: `set-placeholder-key`. Should a placeholder be added for keys not set via the corresponding CLI argument? _If disabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, c: 2 } ``` _If enabled:_ ```sh node example.js -a 1 -c 2 { _: [], a: 1, b: undefined, c: 2 } ``` ### halt at non-option * default: `false`. * key: `halt-at-non-option`. Should parsing stop at the first positional argument? This is similar to how e.g. `ssh` parses its command line. _If disabled:_ ```sh node example.js -a run b -x y { _: [ 'b' ], a: 'run', x: 'y' } ``` _If enabled:_ ```sh node example.js -a run b -x y { _: [ 'b', '-x', 'y' ], a: 'run' } ``` ### strip aliased * default: `false` * key: `strip-aliased` Should aliases be removed before returning results? _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` ### strip dashed * default: `false` * key: `strip-dashed` Should dashed keys be removed before returning results? This option has no effect if `camel-case-expansion` is disabled. _If disabled:_ ```sh node example.js --test-field 1 { _: [], 'test-field': 1, testField: 1 } ``` _If enabled:_ ```sh node example.js --test-field 1 { _: [], testField: 1 } ``` ### unknown options as args * default: `false` * key: `unknown-options-as-args` Should unknown options be treated like regular arguments? An unknown option is one that is not configured in `opts`. 
_If disabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true } ``` _If enabled_ ```sh node example.js --unknown-option --known-option 2 --string-option --unknown-option2 { _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' } ``` ## Special Thanks The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday's hard work. Thanks [substack](https://github.com/substack) **beep** **boop** \o/ ## License ISC # axios // adapters The modules under `adapters/` are modules that handle dispatching a request and settling a returned `Promise` once a response is received. ## Example ```js var settle = require('./../core/settle'); module.exports = function myAdapter(config) { // At this point: // - config has been merged with defaults // - request transformers have already run // - request interceptors have already run // Make the request using config provided // Upon response settle the Promise return new Promise(function(resolve, reject) { var response = { data: responseData, status: request.status, statusText: request.statusText, headers: responseHeaders, config: config, request: request }; settle(resolve, reject, response); // From here: // - response transformers will run // - response interceptors will run }); } ``` # hasurl [![NPM Version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] > Determine whether Node.js' native [WHATWG `URL`](https://nodejs.org/api/url.html#url_the_whatwg_url_api) implementation is available. ## Installation [Node.js](http://nodejs.org/) `>= 4` is required. To install, type this at the command line: ```shell npm install hasurl ``` ## Usage ```js const hasURL = require('hasurl'); if (hasURL()) { // supported } else { // fallback } ``` [npm-image]: https://img.shields.io/npm/v/hasurl.svg [npm-url]: https://npmjs.org/package/hasurl [travis-image]: https://img.shields.io/travis/stevenvachon/hasurl.svg [travis-url]: https://travis-ci.org/stevenvachon/hasurl ![Near, Inc. logo](https://near.org/wp-content/themes/near-19/assets/img/logo.svg?t=1553011311) ## Design ### Interface ```ts export function showYouKnow(): void; ``` - "View" function (ie. a function that does NOT alter contract state) - Takes no parameters - Returns nothing ```ts export function showYouKnow2(): bool; ``` - "View" function (ie. a function that does NOT alter contract state) - Takes no parameters - Returns true ```ts export function sayHello(): string; ``` - "View" function - Takes no parameters - Returns a string ```ts export function sayMyName(): string; ``` - "Change" function (although it does NOT alter state, it DOES read from `context`, [see docs for details](https://docs.near.org/docs/develop/contracts/as/intro)) - Takes no parameters - Returns a string ```ts export function saveMyName(): void; ``` - "Change" function (ie. a function that alters contract state) - Takes no parameters - Saves the sender account name to contract state - Returns nothing ```ts export function saveMyMessage(message: string): bool; ``` - "Change" function - Takes a single parameter message of type string - Saves the sender account name and message to contract state - Returns nothing ```ts export function getAllMessages(): Array<string>; ``` - "Change" function - Takes no parameters - Reads all recorded messages from contract state (this can become expensive!) 
- Returns an array of messages if any are found, otherwise empty array # safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] [travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg [travis-url]: https://travis-ci.org/feross/safe-buffer [npm-image]: https://img.shields.io/npm/v/safe-buffer.svg [npm-url]: https://npmjs.org/package/safe-buffer [downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg [downloads-url]: https://npmjs.org/package/safe-buffer [standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg [standard-url]: https://standardjs.com #### Safer Node.js Buffer API **Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, `Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** **Uses the built-in implementation when available.** ## install ``` npm install safe-buffer ``` ## usage The goal of this package is to provide a safe replacement for the node.js `Buffer`. It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to the top of your node.js modules: ```js var Buffer = require('safe-buffer').Buffer // Existing buffer code will continue to work without issues: new Buffer('hey', 'utf8') new Buffer([1, 2, 3], 'utf8') new Buffer(obj) new Buffer(16) // create an uninitialized buffer (potentially unsafe) // But you can use these new explicit APIs to make clear what you want: Buffer.from('hey', 'utf8') // convert from many types to a Buffer Buffer.alloc(16) // create a zero-filled buffer (safe) Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) ``` ## api ### Class Method: Buffer.from(array) <!-- YAML added: v3.0.0 --> * `array` {Array} Allocates a new `Buffer` using an `array` of octets. ```js const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); // creates a new Buffer containing ASCII bytes // ['b','u','f','f','e','r'] ``` A `TypeError` will be thrown if `array` is not an `Array`. ### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) <!-- YAML added: v5.10.0 --> * `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or a `new ArrayBuffer()` * `byteOffset` {Number} Default: `0` * `length` {Number} Default: `arrayBuffer.length - byteOffset` When passed a reference to the `.buffer` property of a `TypedArray` instance, the newly created `Buffer` will share the same allocated memory as the TypedArray. ```js const arr = new Uint16Array(2); arr[0] = 5000; arr[1] = 4000; const buf = Buffer.from(arr.buffer); // shares the memory with arr; console.log(buf); // Prints: <Buffer 88 13 a0 0f> // changing the TypedArray changes the Buffer also arr[1] = 6000; console.log(buf); // Prints: <Buffer 88 13 70 17> ``` The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`. ```js const ab = new ArrayBuffer(10); const buf = Buffer.from(ab, 0, 2); console.log(buf.length); // Prints: 2 ``` A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. ### Class Method: Buffer.from(buffer) <!-- YAML added: v3.0.0 --> * `buffer` {Buffer} Copies the passed `buffer` data onto a new `Buffer` instance. 
```js const buf1 = Buffer.from('buffer'); const buf2 = Buffer.from(buf1); buf1[0] = 0x61; console.log(buf1.toString()); // 'auffer' console.log(buf2.toString()); // 'buffer' (copy is not changed) ``` A `TypeError` will be thrown if `buffer` is not a `Buffer`. ### Class Method: Buffer.from(str[, encoding]) <!-- YAML added: v5.10.0 --> * `str` {String} String to encode. * `encoding` {String} Encoding to use, Default: `'utf8'` Creates a new `Buffer` containing the given JavaScript string `str`. If provided, the `encoding` parameter identifies the character encoding. If not provided, `encoding` defaults to `'utf8'`. ```js const buf1 = Buffer.from('this is a tést'); console.log(buf1.toString()); // prints: this is a tést console.log(buf1.toString('ascii')); // prints: this is a tC)st const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); console.log(buf2.toString()); // prints: this is a tést ``` A `TypeError` will be thrown if `str` is not a string. ### Class Method: Buffer.alloc(size[, fill[, encoding]]) <!-- YAML added: v5.10.0 --> * `size` {Number} * `fill` {Value} Default: `undefined` * `encoding` {String} Default: `utf8` Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the `Buffer` will be *zero-filled*. ```js const buf = Buffer.alloc(5); console.log(buf); // <Buffer 00 00 00 00 00> ``` The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. If `fill` is specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill)`. See [`buf.fill()`][] for more information. ```js const buf = Buffer.alloc(5, 'a'); console.log(buf); // <Buffer 61 61 61 61 61> ``` If both `fill` and `encoding` are specified, the allocated `Buffer` will be initialized by calling `buf.fill(fill, encoding)`. For example: ```js const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); console.log(buf); // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64> ``` Calling `Buffer.alloc(size)` can be significantly slower than the alternative `Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance contents will *never contain sensitive data*. A `TypeError` will be thrown if `size` is not a number. ### Class Method: Buffer.allocUnsafe(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. ```js const buf = Buffer.allocUnsafe(5); console.log(buf); // <Buffer 78 e0 82 02 01> // (octets will be different, every time) buf.fill(0); console.log(buf); // <Buffer 00 00 00 00 00> ``` A `TypeError` will be thrown if `size` is not a number. 
Note that the `Buffer` module pre-allocates an internal `Buffer` instance of size `Buffer.poolSize` that is used as a pool for the fast allocation of new `Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated `new Buffer(size)` constructor) only when `size` is less than or equal to `Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default value of `Buffer.poolSize` is `8192` but can be modified. Use of this pre-allocated internal memory pool is a key difference between calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The difference is subtle but can be important when an application requires the additional performance that `Buffer.allocUnsafe(size)` provides. ### Class Method: Buffer.allocUnsafeSlow(size) <!-- YAML added: v5.10.0 --> * `size` {Number} Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The `size` must be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will be created if a `size` less than or equal to 0 is specified. The underlying memory for `Buffer` instances created in this way is *not initialized*. The contents of the newly created `Buffer` are unknown and *may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such `Buffer` instances to zeroes. When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, allocations under 4KB are, by default, sliced from a single pre-allocated `Buffer`. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many `Persistent` objects. However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then copy out the relevant bits. ```js // need to keep around a few small chunks of memory const store = []; socket.on('readable', () => { const data = socket.read(); // allocate for retained data const sb = Buffer.allocUnsafeSlow(10); // copy the data into the new allocation data.copy(sb, 0, 0, 10); store.push(sb); }); ``` Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* a developer has observed undue memory retention in their applications. A `TypeError` will be thrown if `size` is not a number. ### All the Rest The rest of the `Buffer` API is exactly the same as in node.js. [See the docs](https://nodejs.org/api/buffer.html). ## Related links - [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) - [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) ## Why is `Buffer` unsafe? Today, the node.js `Buffer` constructor is overloaded to handle many different argument types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), `ArrayBuffer`, and also `Number`. The API is optimized for convenience: you can throw any type at it, and it will try to do what you want. 
Because the Buffer constructor is so powerful, you often see code like this: ```js // Convert UTF-8 strings to hex function toHex (str) { return new Buffer(str).toString('hex') } ``` ***But what happens if `toHex` is called with a `Number` argument?*** ### Remote Memory Disclosure If an attacker can make your program call the `Buffer` constructor with a `Number` argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords. When the `Buffer` constructor is passed a `Number` argument, it returns an **UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like this, you **MUST** overwrite the contents before returning it to the user. From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): > `new Buffer(size)` > > - `size` Number > > The underlying memory for `Buffer` instances created in this way is not initialized. > **The contents of a newly created `Buffer` are unknown and could contain sensitive > data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. (Emphasis our own.) Whenever the programmer intended to create an uninitialized `Buffer` you often see code like this: ```js var buf = new Buffer(16) // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### Would this ever be a problem in real code? Yes. It's surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript. Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the `Buffer` constructor is more catastrophic. Here's an example of a vulnerable service that takes a JSON payload and converts it to hex: ```js // Take a JSON payload {str: "some string"} and convert it to hex var server = http.createServer(function (req, res) { var data = '' req.setEncoding('utf8') req.on('data', function (chunk) { data += chunk }) req.on('end', function () { var body = JSON.parse(data) res.end(new Buffer(body.str).toString('hex')) }) }) server.listen(8080) ``` In this example, an http client just has to send: ```json { "str": 1000 } ``` and it will get back 1,000 bytes of uninitialized memory from the server. This is a very serious bug. It's similar in severity to the [the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process memory by remote attackers. ### Which real-world packages were vulnerable? #### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) [Mathias Buus](https://github.com/mafintosh) and I ([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process. Here's [the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) that fixed it. We released a new fixed version, created a [Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version. 
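The general shape of such a fix is small. As an illustrative sketch only (not the actual patch shipped by `bittorrent-dht`), the vulnerable hex service shown earlier can be hardened by rejecting non-string input before it ever reaches the `Buffer` constructor:

```js
var http = require('http')
var Buffer = require('safe-buffer').Buffer

var server = http.createServer(function (req, res) {
  var data = ''
  req.setEncoding('utf8')
  req.on('data', function (chunk) { data += chunk })
  req.on('end', function () {
    var body = JSON.parse(data)
    // Reject anything that is not a string so a Number can never be
    // interpreted as an allocation size.
    if (typeof body.str !== 'string') {
      res.statusCode = 400
      return res.end('"str" must be a string')
    }
    // Buffer.from never treats its argument as a size.
    res.end(Buffer.from(body.str, 'utf8').toString('hex'))
  })
})
server.listen(8080)
```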
#### [`ws`](https://www.npmjs.com/package/ws) That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the most popular WebSocket implementation in node.js. If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as expected, then uninitialized server memory would be disclosed to the remote peer. These were the vulnerable methods: ```js socket.send(number) socket.ping(number) socket.pong(number) ``` Here's a vulnerable socket server with some echo functionality: ```js server.on('connection', function (socket) { socket.on('message', function (message) { message = JSON.parse(message) if (message.type === 'echo') { socket.send(message.data) // send back the user's message } }) }) ``` `socket.send(number)` called on the server, will disclose server memory. Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue was fixed, with a more detailed explanation. Props to [Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the [Node Security Project disclosure](https://nodesecurity.io/advisories/67). ### What's the solution? It's important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower. But we need a better way to *signal our intent* as programmers. **When we want uninitialized memory, we should request it explicitly.** Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully. #### A new API: `Buffer.allocUnsafe(number)` The functionality of creating buffers with uninitialized memory should be part of another API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that frequently gets user input of all sorts of different types passed into it. ```js var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! // Immediately overwrite the uninitialized buffer with data from another buffer for (var i = 0; i < buf.length; i++) { buf[i] = otherBuf[i] } ``` ### How do we fix node.js core? We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as `semver-major`) which defends against one case: ```js var str = 16 new Buffer(str, 'utf8') ``` In this situation, it's implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not what the programmer intended. But this is only a partial solution, since if the programmer does `new Buffer(variable)` (without an `encoding` parameter) there's no way to know what they intended. If `variable` is sometimes a number, then uninitialized memory will sometimes be returned. ### What's the real long-term fix? We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when we need uninitialized memory. But that would break 1000s of packages. ~~We believe the best solution is to:~~ ~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ ~~2. Create a new API for creating uninitialized Buffers. 
We propose: `Buffer.allocUnsafe(number)`~~ #### Update We now support adding three new APIs: - `Buffer.from(value)` - convert from any type to a buffer - `Buffer.alloc(size)` - create a zero-filled buffer - `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size This solves the core problem that affected `ws` and `bittorrent-dht` which is `Buffer(variable)` getting tricked into taking a number argument. This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use `Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. ### Conclusion We think there's a serious design issue with the `Buffer` API as it exists today. It promotes insecure software by putting high-risk functionality into a convenient API with friendly "developer ergonomics". This wasn't merely a theoretical exercise because we found the issue in some of the most popular npm packages. Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of `buffer`. ```js var Buffer = require('safe-buffer').Buffer ``` Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it's not a breaking change. Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while older, insecure packages would magically become safe from this attack vector. ## links - [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) - [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) - [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) ## credit The original issues in `bittorrent-dht` ([disclosure](https://nodesecurity.io/advisories/68)) and `ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by [Mathias Buus](https://github.com/mafintosh) and [Feross Aboukhadijeh](http://feross.org/). Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues and for his work running the [Node Security Project](https://nodesecurity.io/). Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and auditing the code. ## license MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) # Punycode.js [![Build status](https://travis-ci.org/bestiejs/punycode.js.svg?branch=master)](https://travis-ci.org/bestiejs/punycode.js) [![Code coverage status](http://img.shields.io/codecov/c/github/bestiejs/punycode.js.svg)](https://codecov.io/gh/bestiejs/punycode.js) [![Dependency status](https://gemnasium.com/bestiejs/punycode.js.svg)](https://gemnasium.com/bestiejs/punycode.js) Punycode.js is a robust Punycode converter that fully complies to [RFC 3492](https://tools.ietf.org/html/rfc3492) and [RFC 5891](https://tools.ietf.org/html/rfc5891). This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm: * [The C example code from RFC 3492](https://tools.ietf.org/html/rfc3492#appendix-C) * [`punycode.c` by _Markus W. 
Scherer_ (IBM)](http://opensource.apple.com/source/ICU/ICU-400.42/icuSources/common/punycode.c) * [`punycode.c` by _Ben Noordhuis_](https://github.com/bnoordhuis/punycode/blob/master/punycode.c) * [JavaScript implementation by _some_](http://stackoverflow.com/questions/183485/can-anyone-recommend-a-good-free-javascript-for-punycode-to-unicode-conversion/301287#301287) * [`punycode.js` by _Ben Noordhuis_](https://github.com/joyent/node/blob/426298c8c1c0d5b5224ac3658c41e7c2a3fe9377/lib/punycode.js) (note: [not fully compliant](https://github.com/joyent/node/issues/2072)) This project was [bundled](https://github.com/joyent/node/blob/master/lib/punycode.js) with Node.js from [v0.6.2+](https://github.com/joyent/node/compare/975f1930b1...61e796decc) until [v7](https://github.com/nodejs/node/pull/7941) (soft-deprecated). The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see [v1.4.1](https://github.com/bestiejs/punycode.js/releases/tag/v1.4.1). ## Installation Via [npm](https://www.npmjs.com/): ```bash npm install punycode --save ``` In [Node.js](https://nodejs.org/): ```js const punycode = require('punycode'); ``` ## API ### `punycode.decode(string)` Converts a Punycode string of ASCII symbols to a string of Unicode symbols. ```js // decode domain name parts punycode.decode('maana-pta'); // 'mañana' punycode.decode('--dqo34k'); // '☃-⌘' ``` ### `punycode.encode(string)` Converts a string of Unicode symbols to a Punycode string of ASCII symbols. ```js // encode domain name parts punycode.encode('mañana'); // 'maana-pta' punycode.encode('☃-⌘'); // '--dqo34k' ``` ### `punycode.toUnicode(input)` Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode. ```js // decode domain names punycode.toUnicode('xn--maana-pta.com'); // → 'mañana.com' punycode.toUnicode('xn----dqo34k.com'); // → '☃-⌘.com' // decode email addresses punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'); // → 'джумла@джpумлатест.bрфa' ``` ### `punycode.toASCII(input)` Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII. ```js // encode domain names punycode.toASCII('mañana.com'); // → 'xn--maana-pta.com' punycode.toASCII('☃-⌘.com'); // → 'xn----dqo34k.com' // encode email addresses punycode.toASCII('джумла@джpумлатест.bрфa'); // → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq' ``` ### `punycode.ucs2` #### `punycode.ucs2.decode(string)` Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16. ```js punycode.ucs2.decode('abc'); // → [0x61, 0x62, 0x63] // surrogate pair for U+1D306 TETRAGRAM FOR CENTRE: punycode.ucs2.decode('\uD834\uDF06'); // → [0x1D306] ``` #### `punycode.ucs2.encode(codePoints)` Creates a string based on an array of numeric code point values. 
```js
punycode.ucs2.encode([0x61, 0x62, 0x63]); // → 'abc'
punycode.ucs2.encode([0x1D306]); // → '\uD834\uDF06'
```

### `punycode.version`

A string representing the current Punycode.js version number.

## Author

| [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") |
|---|
| [Mathias Bynens](https://mathiasbynens.be/) |

## License

Punycode.js is available under the [MIT](https://mths.be/mit) license.

# Near Bindings Generator

Transforms the AssemblyScript AST to serialize exported functions and add `encode` and `decode` functions for generating and parsing JSON strings.

## Using via CLI

After installing it with `npm install nearprotocol/near-bindgen-as`, it can be added to the CLI arguments of the AssemblyScript compiler by adding the following:

```bash
asc <file> --transform near-bindgen-as ...
```

This module also adds a binary `near-asc`, which adds the default arguments required to build NEAR contracts as well as the transformer.

```bash
near-asc <input file> <output file>
```

## Using a script to compile

Another way is to add a file such as `asconfig.js`:

```js
const compile = require("near-bindgen-as/compiler").compile;
compile("assembly/index.ts", // input file
        "out/index.wasm",    // output file
        [
          // "-O1",          // Optional arguments
          "--debug",
          "--measure"
        ],
        // Prints out the final cli arguments passed to compiler.
        {verbose: true}
);
```

It can then be built with `node asconfig.js`. There is an example of this in the test directory.

# balanced-match

Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well!

[![build status](https://secure.travis-ci.org/juliangruber/balanced-match.svg)](http://travis-ci.org/juliangruber/balanced-match)
[![downloads](https://img.shields.io/npm/dm/balanced-match.svg)](https://www.npmjs.org/package/balanced-match)
[![testling badge](https://ci.testling.com/juliangruber/balanced-match.png)](https://ci.testling.com/juliangruber/balanced-match)

## Example

Get the first matching pair of braces:

```js
var balanced = require('balanced-match');

console.log(balanced('{', '}', 'pre{in{nested}}post'));
console.log(balanced('{', '}', 'pre{first}between{second}post'));
console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post'));
```

The matches are:

```bash
$ node example.js
{ start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' }
{ start: 3, end: 9, pre: 'pre', body: 'first', post: 'between{second}post' }
{ start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' }
```

## API

### var m = balanced(a, b, str)

For the first non-nested matching pair of `a` and `b` in `str`, return an object with those keys:

* **start** the index of the first match of `a`
* **end** the index of the matching `b`
* **pre** the preamble, `a` and `b` not included
* **body** the match, `a` and `b` not included
* **post** the postscript, `a` and `b` not included

If there's no match, `undefined` will be returned.

If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`.

### var r = balanced.range(a, b, str)

For the first non-nested matching pair of `a` and `b` in `str`, return an array with indexes: `[ <a index>, <b index> ]`.

If there's no match, `undefined` will be returned.

If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used.
For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`. ## Installation With [npm](https://npmjs.org) do: ```bash npm install balanced-match ``` ## License (MIT) Copyright (c) 2013 Julian Gruber &lt;[email protected]&gt; Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # inflight Add callbacks to requests in flight to avoid async duplication ## USAGE ```javascript var inflight = require('inflight') // some request that does some stuff function req(key, callback) { // key is any random string. like a url or filename or whatever. // // will return either a falsey value, indicating that the // request for this key is already in flight, or a new callback // which when called will call all callbacks passed to inflightk // with the same key callback = inflight(key, callback) // If we got a falsey value back, then there's already a req going if (!callback) return // this is where you'd fetch the url or whatever // callback is also once()-ified, so it can safely be assigned // to multiple events etc. First call wins. setTimeout(function() { callback(null, key) }, 100) } // only assigns a single setTimeout // when it dings, all cbs get called req('foo', cb1) req('foo', cb2) req('foo', cb3) req('foo', cb4) ``` # y18n [![NPM version][npm-image]][npm-url] [![js-standard-style][standard-image]][standard-url] [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) The bare-bones internationalization library used by yargs. Inspired by [i18n](https://www.npmjs.com/package/i18n). ## Examples _simple string translation:_ ```js const __ = require('y18n')().__; console.log(__('my awesome string %s', 'foo')); ``` output: `my awesome string foo` _using tagged template literals_ ```js const __ = require('y18n')().__; const str = 'foo'; console.log(__`my awesome string ${str}`); ``` output: `my awesome string foo` _pluralization support:_ ```js const __n = require('y18n')().__n; console.log(__n('one fish %s', '%d fishes %s', 2, 'foo')); ``` output: `2 fishes foo` ## Deno Example As of `v5` `y18n` supports [Deno](https://github.com/denoland/deno): ```typescript import y18n from "https://deno.land/x/y18n/deno.ts"; const __ = y18n({ locale: 'pirate', directory: './test/locales' }).__ console.info(__`Hi, ${'Ben'} ${'Coe'}!`) ``` You will need to run with `--allow-read` to load alternative locales. ## JSON Language Files The JSON language files should be stored in a `./locales` folder. File names correspond to locales, e.g., `en.json`, `pirate.json`. 
When strings are observed for the first time they will be added to the JSON file corresponding to the current locale. ## Methods ### require('y18n')(config) Create an instance of y18n with the config provided, options include: * `directory`: the locale directory, default `./locales`. * `updateFiles`: should newly observed strings be updated in file, default `true`. * `locale`: what locale should be used. * `fallbackToLanguage`: should fallback to a language-only file (e.g. `en.json`) be allowed if a file matching the locale does not exist (e.g. `en_US.json`), default `true`. ### y18n.\_\_(str, arg, arg, arg) Print a localized string, `%s` will be replaced with `arg`s. This function can also be used as a tag for a template literal. You can use it like this: <code>__&#96;hello ${'world'}&#96;</code>. This will be equivalent to `__('hello %s', 'world')`. ### y18n.\_\_n(singularString, pluralString, count, arg, arg, arg) Print a localized string with appropriate pluralization. If `%d` is provided in the string, the `count` will replace this placeholder. ### y18n.setLocale(str) Set the current locale being used. ### y18n.getLocale() What locale is currently being used? ### y18n.updateLocale(obj) Update the current locale with the key value pairs in `obj`. ## Supported Node.js Versions Libraries in this ecosystem make a best effort to track [Node.js' release schedule](https://nodejs.org/en/about/releases/). Here's [a post on why we think this is important](https://medium.com/the-node-js-collection/maintainers-should-consider-following-node-js-release-schedule-ab08ed4de71a). ## License ISC [npm-url]: https://npmjs.org/package/y18n [npm-image]: https://img.shields.io/npm/v/y18n.svg [standard-image]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [standard-url]: https://github.com/feross/standard # cliui ![ci](https://github.com/yargs/cliui/workflows/ci/badge.svg) [![NPM version](https://img.shields.io/npm/v/cliui.svg)](https://www.npmjs.com/package/cliui) [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg)](https://conventionalcommits.org) ![nycrc config on GitHub](https://img.shields.io/nycrc/yargs/cliui) easily create complex multi-column command-line-interfaces. ## Example ```js const ui = require('cliui')() ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div( { text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }, { text: "the file to load." + chalk.green("(if this description is long it wraps).") , width: 20 }, { text: chalk.red("[required]"), align: 'right' } ) console.log(ui.toString()) ``` ## Deno/ESM Support As of `v7` `cliui` supports [Deno](https://github.com/denoland/deno) and [ESM](https://nodejs.org/api/esm.html#esm_ecmascript_modules): ```typescript import cliui from "https://deno.land/x/cliui/deno.ts"; const ui = cliui({}) ui.div('Usage: $0 [command] [options]') ui.div({ text: 'Options:', padding: [2, 0, 1, 0] }) ui.div({ text: "-f, --file", width: 20, padding: [0, 4, 0, 4] }) console.log(ui.toString()) ``` <img width="500" src="screenshot.png"> ## Layout DSL cliui exposes a simple layout DSL: If you create a single `ui.div`, passing a string rather than an object: * `\n`: characters will be interpreted as new rows. * `\t`: characters will be interpreted as new columns. * `\s`: characters will be interpreted as padding. 
**as an example...** ```js var ui = require('./')({ width: 60 }) ui.div( 'Usage: node ./bin/foo.js\n' + ' <regex>\t provide a regex\n' + ' <glob>\t provide a glob\t [required]' ) console.log(ui.toString()) ``` **will output:** ```shell Usage: node ./bin/foo.js <regex> provide a regex <glob> provide a glob [required] ``` ## Methods ```js cliui = require('cliui') ``` ### cliui({width: integer}) Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window's width and use it, and if that doesn't work, width will be set to `80`. ### cliui({wrap: boolean}) Enable or disable the wrapping of text in a column. ### cliui.div(column, column, column) Create a row with any number of columns, a column can either be a string, or an object with the following options: * **text:** some text to place in the column. * **width:** the width of a column. * **align:** alignment, `right` or `center`. * **padding:** `[top, right, bottom, left]`. * **border:** should a border be placed around the div? ### cliui.span(column, column, column) Similar to `div`, except the next row will be appended without a new line being created. ### cliui.resetOutput() Resets the UI elements of the current cliui instance, maintaining the values set for `width` and `wrap`. # set-blocking [![Build Status](https://travis-ci.org/yargs/set-blocking.svg)](https://travis-ci.org/yargs/set-blocking) [![NPM version](https://img.shields.io/npm/v/set-blocking.svg)](https://www.npmjs.com/package/set-blocking) [![Coverage Status](https://coveralls.io/repos/yargs/set-blocking/badge.svg?branch=)](https://coveralls.io/r/yargs/set-blocking?branch=master) [![Standard Version](https://img.shields.io/badge/release-standard%20version-brightgreen.svg)](https://github.com/conventional-changelog/standard-version) set blocking `stdio` and `stderr` ensuring that terminal output does not truncate. ```js const setBlocking = require('set-blocking') setBlocking(true) console.log(someLargeStringToOutput) ``` ## Historical Context/Word of Warning This was created as a shim to address the bug discussed in [node #6456](https://github.com/nodejs/node/issues/6456). This bug crops up on newer versions of Node.js (`0.12+`), truncating terminal output. You should be mindful of the side-effects caused by using `set-blocking`: * if your module sets blocking to `true`, it will effect other modules consuming your library. In [yargs](https://github.com/yargs/yargs/blob/master/yargs.js#L653) we only call `setBlocking(true)` once we already know we are about to call `process.exit(code)`. * this patch will not apply to subprocesses spawned with `isTTY = true`, this is the [default `spawn()` behavior](https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options). ## License ISC # once Only call a function once. ## usage ```javascript var once = require('once') function load (file, cb) { cb = once(cb) loader.load('file') loader.once('load', cb) loader.once('error', cb) } ``` Or add to the Function.prototype in a responsible way: ```javascript // only has to be done once require('once').proto() function load (file, cb) { cb = cb.once() loader.load('file') loader.once('load', cb) loader.once('error', cb) } ``` Ironically, the prototype feature makes this module twice as complicated as necessary. To check whether you function has been called, use `fn.called`. 
Once the function is called for the first time, the return value of the original function is saved in `fn.value`, and subsequent calls will continue to return this value.

```javascript
var once = require('once')

function load (cb) {
  cb = once(cb)
  var stream = createStream()
  stream.once('data', cb)
  stream.once('end', function () {
    if (!cb.called) cb(new Error('not found'))
  })
}
```

## `once.strict(func)`

Throw an error if the function is called twice.

Some functions are expected to be called only once. Using `once` for them would potentially hide logical errors.

In the example below, the `greet` function has to call the callback only once:

```javascript
function greet (name, cb) {
  // return is missing from the if statement
  // when no name is passed, the callback is called twice
  if (!name) cb('Hello anonymous')
  cb('Hello ' + name)
}

function log (msg) {
  console.log(msg)
}

// this will print 'Hello anonymous' but the logical error will be missed
greet(null, once(log))

// once.strict will print 'Hello anonymous' and throw an error when the callback is called the second time
greet(null, once.strict(log))
```

# chownr

Like `chown -R`.

Takes the same arguments as `fs.chown()`

# Glob

Match files using the patterns the shell uses, like stars and stuff.

[![Build Status](https://travis-ci.org/isaacs/node-glob.svg?branch=master)](https://travis-ci.org/isaacs/node-glob/)
[![Build Status](https://ci.appveyor.com/api/projects/status/kd7f3yftf7unxlsx?svg=true)](https://ci.appveyor.com/project/isaacs/node-glob)
[![Coverage Status](https://coveralls.io/repos/isaacs/node-glob/badge.svg?branch=master&service=github)](https://coveralls.io/github/isaacs/node-glob?branch=master)

This is a glob implementation in JavaScript. It uses the `minimatch` library to do its matching.

![](logo/glob.png)

## Usage

Install with npm

```
npm i glob
```

```javascript
var glob = require("glob")

// options is optional
glob("**/*.js", options, function (er, files) {
  // files is an array of filenames.
  // If the `nonull` option is set, and nothing
  // was found, then files is ["**/*.js"]
  // er is an error object or null.
})
```

## Glob Primer

"Globs" are the patterns you type when you do stuff like `ls *.js` on the command line, or put `build/*` in a `.gitignore` file.

Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with `{` and end with `}`, with any number of comma-delimited sections within. Braced sections may contain slash characters, so `a{/b/c,bcd}` would expand into `a/b/c` and `abcd`.

The following characters have special magic meaning when used in a path portion (a short sketch follows this list):

* `*` Matches 0 or more characters in a single path portion
* `?` Matches 1 character
* `[...]` Matches a range of characters, similar to a RegExp range. If the first character of the range is `!` or `^` then it matches any character not in the range.
* `!(pattern|pattern|pattern)` Matches anything that does not match any of the patterns provided.
* `?(pattern|pattern|pattern)` Matches zero or one occurrence of the patterns provided.
* `+(pattern|pattern|pattern)` Matches one or more occurrences of the patterns provided.
* `*(a|b|c)` Matches zero or more occurrences of the patterns provided
* `@(pattern|pat*|pat?erN)` Matches exactly one of the patterns provided
* `**` If a "globstar" is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories.
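As a quick sketch of a few of these pattern types (the patterns and the file names in the comments are purely hypothetical; real results depend on what is on disk):

```javascript
var glob = require("glob")

// `*` stays inside a single path portion:
glob("src/*.js", function (er, files) {
  // e.g. ["src/index.js", "src/util.js"], but never "src/lib/extra.js"
})

// `**` (globstar) alone in a path portion matches any depth of directories:
glob("src/**/*.test.js", function (er, files) {
  // e.g. ["src/a.test.js", "src/deep/nested/b.test.js"]
})

// braces are expanded before matching, so this behaves like two patterns:
glob("*.{json,md}", function (er, files) {
  // e.g. ["package.json", "README.md"]
})
```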
### Dots If a file or directory path portion has a `.` as the first character, then it will not match any glob pattern unless that pattern's corresponding path part also has a `.` as its first character. For example, the pattern `a/.*/c` would match the file at `a/.b/c`. However the pattern `a/*/c` would not, because `*` does not start with a dot character. You can make glob treat dots as normal characters by setting `dot:true` in the options. ### Basename Matching If you set `matchBase:true` in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, `*.js` would match `test/simple/basic.js`. ### Empty Sets If no matching files are found, then an empty array is returned. This differs from the shell, where the pattern itself is returned. For example: $ echo a*s*d*f a*s*d*f To get the bash-style behavior, set the `nonull:true` in the options. ### See Also: * `man sh` * `man bash` (Search for "Pattern Matching") * `man 3 fnmatch` * `man 5 gitignore` * [minimatch documentation](https://github.com/isaacs/minimatch) ## glob.hasMagic(pattern, [options]) Returns `true` if there are any special characters in the pattern, and `false` otherwise. Note that the options affect the results. If `noext:true` is set in the options object, then `+(a|b)` will not be considered a magic pattern. If the pattern has a brace expansion, like `a/{b/c,x/y}` then that is considered magical, unless `nobrace:true` is set in the options. ## glob(pattern, [options], cb) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * `cb` `{Function}` * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Perform an asynchronous glob search. ## glob.sync(pattern, [options]) * `pattern` `{String}` Pattern to be matched * `options` `{Object}` * return: `{Array<String>}` filenames found matching the pattern Perform a synchronous glob search. ## Class: glob.Glob Create a Glob object by instantiating the `glob.Glob` class. ```javascript var Glob = require("glob").Glob var mg = new Glob(pattern, options, cb) ``` It's an EventEmitter, and starts walking the filesystem to find matches immediately. ### new glob.Glob(pattern, [options], [cb]) * `pattern` `{String}` pattern to search for * `options` `{Object}` * `cb` `{Function}` Called when an error occurs, or matches are found * `err` `{Error | null}` * `matches` `{Array<String>}` filenames found matching the pattern Note that if the `sync` flag is set in the options, then matches will be immediately available on the `g.found` member. ### Properties * `minimatch` The minimatch object that the glob uses. * `options` The options object passed in. * `aborted` Boolean which is set to true when calling `abort()`. There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls. * `cache` Convenience object. Each field has the following possible values: * `false` - Path does not exist * `true` - Path exists * `'FILE'` - Path exists, and is not a directory * `'DIR'` - Path exists, and is a directory * `[file, entries, ...]` - Path exists, is a directory, and the array value is the results of `fs.readdir` * `statCache` Cache of `fs.stat` results, to prevent statting the same path multiple times. * `symlinks` A record of which paths are symbolic links, which is relevant in resolving `**` patterns. 
* `realpathCache` An optional object which is passed to `fs.realpath` to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used. ### Events * `end` When the matching is finished, this is emitted with all the matches found. If the `nonull` option is set, and no match was found, then the `matches` list contains the original pattern. The matches are sorted, unless the `nosort` flag is set. * `match` Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath. * `error` Emitted when an unexpected error is encountered, or whenever any fs error occurs if `options.strict` is set. * `abort` When `abort()` is called, this event is raised. ### Methods * `pause` Temporarily stop the search * `resume` Resume the search * `abort` Stop the search forever ### Options All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications. All options are false by default, unless otherwise noted. All options are added to the Glob object, as well. If you are running many `glob` operations, you can pass a Glob object as the `options` argument to a subsequent operation to shortcut some `stat` and `readdir` calls. At the very least, you may pass in shared `symlinks`, `statCache`, `realpathCache`, and `cache` options, so that parallel glob operations will be sped up by sharing information about the filesystem. * `cwd` The current working directory in which to search. Defaults to `process.cwd()`. * `root` The place where patterns starting with `/` will be mounted onto. Defaults to `path.resolve(options.cwd, "/")` (`/` on Unix systems, and `C:\` or some such on Windows.) * `dot` Include `.dot` files in normal matches and `globstar` matches. Note that an explicit dot in a portion of the pattern will always match dot files. * `nomount` By default, a pattern starting with a forward-slash will be "mounted" onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior. * `mark` Add a `/` character to directory matches. Note that this requires additional stat calls. * `nosort` Don't sort the results. * `stat` Set to true to stat *all* results. This reduces performance somewhat, and is completely unnecessary, unless `readdir` is presumed to be an untrustworthy indicator of file existence. * `silent` When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the `silent` option to true to suppress these warnings. * `strict` When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the `strict` option to raise an error in these cases. * `cache` See `cache` property above. Pass in a previously generated cache object to save some fs calls. * `statCache` A cache of results of filesystem information, to prevent unnecessary stat calls. While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See "Race Conditions" below.) * `symlinks` A cache of known symbolic links. You may pass in a previously generated `symlinks` object to save `lstat` calls when resolving `**` matches. * `sync` DEPRECATED: use `glob.sync(pattern, opts)` instead. 
* `nounique` In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior. * `nonull` Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3). * `debug` Set to enable debug logging in minimatch and glob. * `nobrace` Do not expand `{a,b}` and `{1..3}` brace sets. * `noglobstar` Do not match `**` against multiple filenames. (Ie, treat it as a normal `*` instead.) * `noext` Do not match `+(a|b)` "extglob" patterns. * `nocase` Perform a case-insensitive match. Note: on case-insensitive filesystems, non-magic patterns will match by default, since `stat` and `readdir` will not raise errors. * `matchBase` Perform a basename-only match if the pattern does not contain any slash characters. That is, `*.js` would be treated as equivalent to `**/*.js`, matching all js files in all directories. * `nodir` Do not match directories, only files. (Note: to match *only* directories, simply put a `/` at the end of the pattern.) * `ignore` Add a pattern or an array of glob patterns to exclude matches. Note: `ignore` patterns are *always* in `dot:true` mode, regardless of any other settings. * `follow` Follow symlinked directories when expanding `**` patterns. Note that this can result in a lot of duplicate references in the presence of cyclic links. * `realpath` Set to true to call `fs.realpath` on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink) * `absolute` Set to true to always receive absolute paths for matched files. Unlike `realpath`, this also affects the values returned in the `match` event. ## Comparisons to other fnmatch/glob implementations While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional. The double-star character `**` is supported by default, unless the `noglobstar` flag is set. This is supported in the manner of bsdglob and bash 4.3, where `**` only has special significance if it is the only thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but `a/**b` will not. Note that symlinked directories are not crawled as part of a `**`, though their contents may match against subsequent portions of the pattern. This prevents infinite loops and duplicates and the like. If an escaped pattern has no matches, and the `nonull` flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, `glob.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than `"*a?"`. This is akin to setting the `nullglob` option in bash, except that it does not resolve escaped pattern characters. If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like `+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded **first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are checked for validity. Since those two are valid, matching proceeds. ### Comments and Negation Previously, this module let you mark a pattern as a "comment" if it started with a `#` character, or a "negated" pattern if it started with a `!` character. These options were deprecated in version 5, and removed in version 6. 
To specify things that should not match, use the `ignore` option. ## Windows **Please only use forward-slashes in glob expressions.** Though windows uses either `/` or `\` as its path separator, only `/` characters are used by this glob implementation. You must use forward-slashes **only** in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators. Results from absolute patterns such as `/foo/*` are mounted onto the root setting using `path.join`. On windows, this will by default result in `/foo/*` matching `C:\foo\bar.txt`. ## Race Conditions Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such. As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result. As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls. Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem. ## Glob Logo Glob's logo was created by [Tanya Brassie](http://tanyabrassie.com/). Logo files can be found [here](https://github.com/isaacs/node-glob/tree/master/logo). The logo is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). ## Contributing Any change to behavior (including bugfixes) must come with a test. Patches that fail tests or reduce performance will be rejected. ``` # to run tests npm test # to re-generate test fixtures npm run test-regen # to benchmark against bash/zsh npm run bench # to profile javascript npm run prof ``` ![](oh-my-glob.gif) # AssemblyScript Loader A convenient loader for [AssemblyScript](https://assemblyscript.org) modules. Demangles module exports to a friendly object structure compatible with TypeScript definitions and provides useful utility to read/write data from/to memory. 
[Documentation](https://assemblyscript.org/loader.html) # axios [![npm version](https://img.shields.io/npm/v/axios.svg?style=flat-square)](https://www.npmjs.org/package/axios) [![build status](https://img.shields.io/travis/axios/axios/master.svg?style=flat-square)](https://travis-ci.org/axios/axios) [![code coverage](https://img.shields.io/coveralls/mzabriskie/axios.svg?style=flat-square)](https://coveralls.io/r/mzabriskie/axios) [![install size](https://packagephobia.now.sh/badge?p=axios)](https://packagephobia.now.sh/result?p=axios) [![npm downloads](https://img.shields.io/npm/dm/axios.svg?style=flat-square)](http://npm-stat.com/charts.html?package=axios) [![gitter chat](https://img.shields.io/gitter/room/mzabriskie/axios.svg?style=flat-square)](https://gitter.im/mzabriskie/axios) [![code helpers](https://www.codetriage.com/axios/axios/badges/users.svg)](https://www.codetriage.com/axios/axios) Promise based HTTP client for the browser and node.js ## Features - Make [XMLHttpRequests](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) from the browser - Make [http](http://nodejs.org/api/http.html) requests from node.js - Supports the [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) API - Intercept request and response - Transform request and response data - Cancel requests - Automatic transforms for JSON data - Client side support for protecting against [XSRF](http://en.wikipedia.org/wiki/Cross-site_request_forgery) ## Browser Support ![Chrome](https://raw.github.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.github.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![Safari](https://raw.github.com/alrra/browser-logos/master/src/safari/safari_48x48.png) | ![Opera](https://raw.github.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Edge](https://raw.github.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![IE](https://raw.github.com/alrra/browser-logos/master/src/archive/internet-explorer_9-11/internet-explorer_9-11_48x48.png) | --- | --- | --- | --- | --- | --- | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | Latest ✔ | 11 ✔ | [![Browser Matrix](https://saucelabs.com/open_sauce/build_matrix/axios.svg)](https://saucelabs.com/u/axios) ## Installing Using npm: ```bash $ npm install axios ``` Using bower: ```bash $ bower install axios ``` Using yarn: ```bash $ yarn add axios ``` Using cdn: ```html <script src="https://unpkg.com/axios/dist/axios.min.js"></script> ``` ## Example ### note: CommonJS usage In order to gain the TypeScript typings (for intellisense / autocomplete) while using CommonJS imports with `require()` use the following approach: ```js const axios = require('axios').default; // axios.<method> will now provide autocomplete and parameter typings ``` Performing a `GET` request ```js const axios = require('axios'); // Make a request for a user with a given ID axios.get('/user?ID=12345') .then(function (response) { // handle success console.log(response); }) .catch(function (error) { // handle error console.log(error); }) .finally(function () { // always executed }); // Optionally the request above could also be done as axios.get('/user', { params: { ID: 12345 } }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }) .finally(function () { // always executed }); // Want to use async/await? Add the `async` keyword to your outer function/method. 
async function getUser() { try { const response = await axios.get('/user?ID=12345'); console.log(response); } catch (error) { console.error(error); } } ``` > **NOTE:** `async/await` is part of ECMAScript 2017 and is not supported in Internet > Explorer and older browsers, so use with caution. Performing a `POST` request ```js axios.post('/user', { firstName: 'Fred', lastName: 'Flintstone' }) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }); ``` Performing multiple concurrent requests ```js function getUserAccount() { return axios.get('/user/12345'); } function getUserPermissions() { return axios.get('/user/12345/permissions'); } axios.all([getUserAccount(), getUserPermissions()]) .then(axios.spread(function (acct, perms) { // Both requests are now complete })); ``` ## axios API Requests can be made by passing the relevant config to `axios`. ##### axios(config) ```js // Send a POST request axios({ method: 'post', url: '/user/12345', data: { firstName: 'Fred', lastName: 'Flintstone' } }); ``` ```js // GET request for remote image axios({ method: 'get', url: 'http://bit.ly/2mTM3nY', responseType: 'stream' }) .then(function (response) { response.data.pipe(fs.createWriteStream('ada_lovelace.jpg')) }); ``` ##### axios(url[, config]) ```js // Send a GET request (default method) axios('/user/12345'); ``` ### Request method aliases For convenience aliases have been provided for all supported request methods. ##### axios.request(config) ##### axios.get(url[, config]) ##### axios.delete(url[, config]) ##### axios.head(url[, config]) ##### axios.options(url[, config]) ##### axios.post(url[, data[, config]]) ##### axios.put(url[, data[, config]]) ##### axios.patch(url[, data[, config]]) ###### NOTE When using the alias methods `url`, `method`, and `data` properties don't need to be specified in config. ### Concurrency Helper functions for dealing with concurrent requests. ##### axios.all(iterable) ##### axios.spread(callback) ### Creating an instance You can create a new instance of axios with a custom config. ##### axios.create([config]) ```js const instance = axios.create({ baseURL: 'https://some-domain.com/api/', timeout: 1000, headers: {'X-Custom-Header': 'foobar'} }); ``` ### Instance methods The available instance methods are listed below. The specified config will be merged with the instance config. ##### axios#request(config) ##### axios#get(url[, config]) ##### axios#delete(url[, config]) ##### axios#head(url[, config]) ##### axios#options(url[, config]) ##### axios#post(url[, data[, config]]) ##### axios#put(url[, data[, config]]) ##### axios#patch(url[, data[, config]]) ##### axios#getUri([config]) ## Request Config These are the available config options for making requests. Only the `url` is required. Requests will default to `GET` if `method` is not specified. ```js { // `url` is the server URL that will be used for the request url: '/user', // `method` is the request method to be used when making the request method: 'get', // default // `baseURL` will be prepended to `url` unless `url` is absolute. // It can be convenient to set `baseURL` for an instance of axios to pass relative URLs // to methods of that instance. 
baseURL: 'https://some-domain.com/api/', // `transformRequest` allows changes to the request data before it is sent to the server // This is only applicable for request methods 'PUT', 'POST', 'PATCH' and 'DELETE' // The last function in the array must return a string or an instance of Buffer, ArrayBuffer, // FormData or Stream // You may modify the headers object. transformRequest: [function (data, headers) { // Do whatever you want to transform the data return data; }], // `transformResponse` allows changes to the response data to be made before // it is passed to then/catch transformResponse: [function (data) { // Do whatever you want to transform the data return data; }], // `headers` are custom headers to be sent headers: {'X-Requested-With': 'XMLHttpRequest'}, // `params` are the URL parameters to be sent with the request // Must be a plain object or a URLSearchParams object params: { ID: 12345 }, // `paramsSerializer` is an optional function in charge of serializing `params` // (e.g. https://www.npmjs.com/package/qs, http://api.jquery.com/jquery.param/) paramsSerializer: function (params) { return Qs.stringify(params, {arrayFormat: 'brackets'}) }, // `data` is the data to be sent as the request body // Only applicable for request methods 'PUT', 'POST', and 'PATCH' // When no `transformRequest` is set, must be of one of the following types: // - string, plain object, ArrayBuffer, ArrayBufferView, URLSearchParams // - Browser only: FormData, File, Blob // - Node only: Stream, Buffer data: { firstName: 'Fred' }, // syntax alternative to send data into the body // method post // only the value is sent, not the key data: 'Country=Brasil&City=Belo Horizonte', // `timeout` specifies the number of milliseconds before the request times out. // If the request takes longer than `timeout`, the request will be aborted. timeout: 1000, // default is `0` (no timeout) // `withCredentials` indicates whether or not cross-site Access-Control requests // should be made using credentials withCredentials: false, // default // `adapter` allows custom handling of requests which makes testing easier. // Return a promise and supply a valid response (see lib/adapters/README.md). adapter: function (config) { /* ... */ }, // `auth` indicates that HTTP Basic auth should be used, and supplies credentials. // This will set an `Authorization` header, overwriting any existing // `Authorization` custom headers you have set using `headers`. // Please note that only HTTP Basic auth is configurable through this parameter. // For Bearer tokens and such, use `Authorization` custom headers instead. 
auth: { username: 'janedoe', password: 's00pers3cret' }, // `responseType` indicates the type of data that the server will respond with // options are: 'arraybuffer', 'document', 'json', 'text', 'stream' // browser only: 'blob' responseType: 'json', // default // `responseEncoding` indicates encoding to use for decoding responses // Note: Ignored for `responseType` of 'stream' or client-side requests responseEncoding: 'utf8', // default // `xsrfCookieName` is the name of the cookie to use as a value for xsrf token xsrfCookieName: 'XSRF-TOKEN', // default // `xsrfHeaderName` is the name of the http header that carries the xsrf token value xsrfHeaderName: 'X-XSRF-TOKEN', // default // `onUploadProgress` allows handling of progress events for uploads onUploadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `onDownloadProgress` allows handling of progress events for downloads onDownloadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, // `maxContentLength` defines the max size of the http response content in bytes allowed maxContentLength: 2000, // `validateStatus` defines whether to resolve or reject the promise for a given // HTTP response status code. If `validateStatus` returns `true` (or is set to `null` // or `undefined`), the promise will be resolved; otherwise, the promise will be // rejected. validateStatus: function (status) { return status >= 200 && status < 300; // default }, // `maxRedirects` defines the maximum number of redirects to follow in node.js. // If set to 0, no redirects will be followed. maxRedirects: 5, // default // `socketPath` defines a UNIX Socket to be used in node.js. // e.g. '/var/run/docker.sock' to send requests to the docker daemon. // Only either `socketPath` or `proxy` can be specified. // If both are specified, `socketPath` is used. socketPath: null, // default // `httpAgent` and `httpsAgent` define a custom agent to be used when performing http // and https requests, respectively, in node.js. This allows options to be added like // `keepAlive` that are not enabled by default. httpAgent: new http.Agent({ keepAlive: true }), httpsAgent: new https.Agent({ keepAlive: true }), // 'proxy' defines the hostname and port of the proxy server. // You can also define your proxy using the conventional `http_proxy` and // `https_proxy` environment variables. If you are using environment variables // for your proxy configuration, you can also define a `no_proxy` environment // variable as a comma-separated list of domains that should not be proxied. // Use `false` to disable proxies, ignoring environment variables. // `auth` indicates that HTTP Basic auth should be used to connect to the proxy, and // supplies credentials. // This will set an `Proxy-Authorization` header, overwriting any existing // `Proxy-Authorization` custom headers you have set using `headers`. proxy: { host: '127.0.0.1', port: 9000, auth: { username: 'mikeymike', password: 'rapunz3l' } }, // `cancelToken` specifies a cancel token that can be used to cancel the request // (see Cancellation section below for details) cancelToken: new CancelToken(function (cancel) { }) } ``` ## Response Schema The response for a request contains the following information. 
```js { // `data` is the response that was provided by the server data: {}, // `status` is the HTTP status code from the server response status: 200, // `statusText` is the HTTP status message from the server response statusText: 'OK', // `headers` the headers that the server responded with // All header names are lower cased headers: {}, // `config` is the config that was provided to `axios` for the request config: {}, // `request` is the request that generated this response // It is the last ClientRequest instance in node.js (in redirects) // and an XMLHttpRequest instance in the browser request: {} } ``` When using `then`, you will receive the response as follows: ```js axios.get('/user/12345') .then(function (response) { console.log(response.data); console.log(response.status); console.log(response.statusText); console.log(response.headers); console.log(response.config); }); ``` When using `catch`, or passing a [rejection callback](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/then) as second parameter of `then`, the response will be available through the `error` object as explained in the [Handling Errors](#handling-errors) section. ## Config Defaults You can specify config defaults that will be applied to every request. ### Global axios defaults ```js axios.defaults.baseURL = 'https://api.example.com'; axios.defaults.headers.common['Authorization'] = AUTH_TOKEN; axios.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded'; ``` ### Custom instance defaults ```js // Set config defaults when creating the instance const instance = axios.create({ baseURL: 'https://api.example.com' }); // Alter defaults after instance has been created instance.defaults.headers.common['Authorization'] = AUTH_TOKEN; ``` ### Config order of precedence Config will be merged with an order of precedence. The order is library defaults found in [lib/defaults.js](https://github.com/axios/axios/blob/master/lib/defaults.js#L28), then `defaults` property of the instance, and finally `config` argument for the request. The latter will take precedence over the former. Here's an example. ```js // Create an instance using the config defaults provided by the library // At this point the timeout config value is `0` as is the default for the library const instance = axios.create(); // Override timeout default for the library // Now all requests using this instance will wait 2.5 seconds before timing out instance.defaults.timeout = 2500; // Override timeout for this request as it's known to take a long time instance.get('/longRequest', { timeout: 5000 }); ``` ## Interceptors You can intercept requests or responses before they are handled by `then` or `catch`. ```js // Add a request interceptor axios.interceptors.request.use(function (config) { // Do something before request is sent return config; }, function (error) { // Do something with request error return Promise.reject(error); }); // Add a response interceptor axios.interceptors.response.use(function (response) { // Any status code that lie within the range of 2xx cause this function to trigger // Do something with response data return response; }, function (error) { // Any status codes that falls outside the range of 2xx cause this function to trigger // Do something with response error return Promise.reject(error); }); ``` If you need to remove an interceptor later you can. 
```js const myInterceptor = axios.interceptors.request.use(function () {/*...*/}); axios.interceptors.request.eject(myInterceptor); ``` You can add interceptors to a custom instance of axios. ```js const instance = axios.create(); instance.interceptors.request.use(function () {/*...*/}); ``` ## Handling Errors ```js axios.get('/user/12345') .catch(function (error) { if (error.response) { // The request was made and the server responded with a status code // that falls out of the range of 2xx console.log(error.response.data); console.log(error.response.status); console.log(error.response.headers); } else if (error.request) { // The request was made but no response was received // `error.request` is an instance of XMLHttpRequest in the browser and an instance of // http.ClientRequest in node.js console.log(error.request); } else { // Something happened in setting up the request that triggered an Error console.log('Error', error.message); } console.log(error.config); }); ``` Using the `validateStatus` config option, you can define HTTP code(s) that should throw an error. ```js axios.get('/user/12345', { validateStatus: function (status) { return status < 500; // Reject only if the status code is greater than or equal to 500 } }) ``` Using `toJSON` you get an object with more information about the HTTP error. ```js axios.get('/user/12345') .catch(function (error) { console.log(error.toJSON()); }); ``` ## Cancellation You can cancel a request using a *cancel token*. > The axios cancel token API is based on the withdrawn [cancelable promises proposal](https://github.com/tc39/proposal-cancelable-promises). You can create a cancel token using the `CancelToken.source` factory as shown below: ```js const CancelToken = axios.CancelToken; const source = CancelToken.source(); axios.get('/user/12345', { cancelToken: source.token }).catch(function (thrown) { if (axios.isCancel(thrown)) { console.log('Request canceled', thrown.message); } else { // handle error } }); axios.post('/user/12345', { name: 'new name' }, { cancelToken: source.token }) // cancel the request (the message parameter is optional) source.cancel('Operation canceled by the user.'); ``` You can also create a cancel token by passing an executor function to the `CancelToken` constructor: ```js const CancelToken = axios.CancelToken; let cancel; axios.get('/user/12345', { cancelToken: new CancelToken(function executor(c) { // An executor function receives a cancel function as a parameter cancel = c; }) }); // cancel the request cancel(); ``` > Note: you can cancel several requests with the same cancel token. ## Using application/x-www-form-urlencoded format By default, axios serializes JavaScript objects to `JSON`. To send data in the `application/x-www-form-urlencoded` format instead, you can use one of the following options. ### Browser In a browser, you can use the [`URLSearchParams`](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams) API as follows: ```js const params = new URLSearchParams(); params.append('param1', 'value1'); params.append('param2', 'value2'); axios.post('/foo', params); ``` > Note that `URLSearchParams` is not supported by all browsers (see [caniuse.com](http://www.caniuse.com/#feat=urlsearchparams)), but there is a [polyfill](https://github.com/WebReflection/url-search-params) available (make sure to polyfill the global environment). 
Alternatively, you can encode data using the [`qs`](https://github.com/ljharb/qs) library: ```js const qs = require('qs'); axios.post('/foo', qs.stringify({ 'bar': 123 })); ``` Or in another way (ES6), ```js import qs from 'qs'; const data = { 'bar': 123 }; const options = { method: 'POST', headers: { 'content-type': 'application/x-www-form-urlencoded' }, data: qs.stringify(data), url, }; axios(options); ``` ### Node.js In node.js, you can use the [`querystring`](https://nodejs.org/api/querystring.html) module as follows: ```js const querystring = require('querystring'); axios.post('http://something.com/', querystring.stringify({ foo: 'bar' })); ``` You can also use the [`qs`](https://github.com/ljharb/qs) library. ###### NOTE The `qs` library is preferable if you need to stringify nested objects, as the `querystring` method has known issues with that use case (https://github.com/nodejs/node-v0.x-archive/issues/1665). ## Semver Until axios reaches a `1.0` release, breaking changes will be released with a new minor version. For example `0.5.1`, and `0.5.4` will have the same API, but `0.6.0` will have breaking changes. ## Promises axios depends on a native ES6 Promise implementation to be [supported](http://caniuse.com/promises). If your environment doesn't support ES6 Promises, you can [polyfill](https://github.com/jakearchibald/es6-promise). ## TypeScript axios includes [TypeScript](http://typescriptlang.org) definitions. ```typescript import axios from 'axios'; axios.get('/user?ID=12345'); ``` ## Resources * [Changelog](https://github.com/axios/axios/blob/master/CHANGELOG.md) * [Upgrade Guide](https://github.com/axios/axios/blob/master/UPGRADE_GUIDE.md) * [Ecosystem](https://github.com/axios/axios/blob/master/ECOSYSTEM.md) * [Contributing Guide](https://github.com/axios/axios/blob/master/CONTRIBUTING.md) * [Code of Conduct](https://github.com/axios/axios/blob/master/CODE_OF_CONDUCT.md) ## Credits axios is heavily inspired by the [$http service](https://docs.angularjs.org/api/ng/service/$http) provided in [Angular](https://angularjs.org/). Ultimately axios is an effort to provide a standalone `$http`-like service for use outside of Angular. ## License [MIT](LICENSE) # randexp.js randexp will generate a random string that matches a given RegExp Javascript object. 
[![Build Status](https://secure.travis-ci.org/fent/randexp.js.svg)](http://travis-ci.org/fent/randexp.js) [![Dependency Status](https://david-dm.org/fent/randexp.js.svg)](https://david-dm.org/fent/randexp.js) [![codecov](https://codecov.io/gh/fent/randexp.js/branch/master/graph/badge.svg)](https://codecov.io/gh/fent/randexp.js) # Usage ```js var RandExp = require('randexp'); // supports grouping and piping new RandExp(/hello+ (world|to you)/).gen(); // => hellooooooooooooooooooo world // sets and ranges and references new RandExp(/<([a-z]\w{0,20})>foo<\1>/).gen(); // => <m5xhdg>foo<m5xhdg> // wildcard new RandExp(/random stuff: .+/).gen(); // => random stuff: l3m;Hf9XYbI [YPaxV>U*4-_F!WXQh9>;rH3i l!8.zoh?[utt1OWFQrE ^~8zEQm]~tK // ignore case new RandExp(/xxx xtreme dragon warrior xxx/i).gen(); // => xxx xtReME dRAGON warRiOR xXX // dynamic regexp shortcut new RandExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i'); // is the same as new RandExp(new RegExp('(sun|mon|tue|wednes|thurs|fri|satur)day', 'i')); ``` If you're only going to use `gen()` once with a regexp and want slightly shorter syntax for it ```js var randexp = require('randexp').randexp; randexp(/[1-6]/); // 4 randexp('great|good( job)?|excellent'); // great ``` If you miss the old syntax ```js require('randexp').sugar(); /yes|no|maybe|i don't know/.gen(); // maybe ``` # Motivation Regular expressions are used in every language, every programmer is familiar with them. Regex can be used to easily express complex strings. What better way to generate a random string than with a language you can use to express the string you want? Thanks to [String-Random](http://search.cpan.org/~steve/String-Random-0.22/lib/String/Random.pm) for giving me the idea to make this in the first place and [randexp](https://github.com/benburkert/randexp) for the sweet `.gen()` syntax. # Default Range The default generated character range includes printable ASCII. In order to add or remove characters, a `defaultRange` attribute is exposed. you can `subtract(from, to)` and `add(from, to)` ```js var randexp = new RandExp(/random stuff: .+/); randexp.defaultRange.subtract(32, 126); randexp.defaultRange.add(0, 65535); randexp.gen(); // => random stuff: 湐箻ໜ䫴␩⶛㳸長���邓蕲뤀쑡篷皇硬剈궦佔칗븛뀃匫鴔事좍ﯣ⭼ꝏ䭍詳蒂䥂뽭 ``` # Custom PRNG The default randomness is provided by `Math.random()`. If you need to use a seedable or cryptographic PRNG, you can override `RandExp.prototype.randInt` or `randexp.randInt` (where `randexp` is an instance of `RandExp`). `randInt(from, to)` accepts an inclusive range and returns a randomly selected number within that range. # Infinite Repetitionals Repetitional tokens such as `*`, `+`, and `{3,}` have an infinite max range. In this case, randexp looks at its min and adds 100 to it to get a useable max value. If you want to use another int other than 100 you can change the `max` property in `RandExp.prototype` or the RandExp instance. ```js var randexp = new RandExp(/no{1,}/); randexp.max = 1000000; ``` With `RandExp.sugar()` ```js var regexp = /(hi)*/; regexp.max = 1000000; ``` # Bad Regular Expressions There are some regular expressions which can never match any string. * Ones with badly placed positionals such as `/a^/` and `/$c/m`. Randexp will ignore positional tokens. * Back references to non-existing groups like `/(a)\1\2/`. Randexp will ignore those references, returning an empty string for them. If the group exists only after the reference is used such as in `/\1 (hey)/`, it will too be ignored. 
* Custom negated character sets with two sets inside that cancel each other out. Example: `/[^\w\W]/`. If you give this to randexp, it will return an empty string for this set since it can't match anything. # Projects based on randexp.js ## JSON-Schema Faker Use generators to populate JSON Schema samples. See: [jsf on github](https://github.com/json-schema-faker/json-schema-faker/) and [jsf demo page](http://json-schema-faker.js.org/). # Install ### Node.js npm install randexp ### Browser Download the [minified version](https://github.com/fent/randexp.js/releases) from the latest release. # Tests Tests are written with [mocha](https://mochajs.org) ```bash npm test ``` # License MIT # Web IDL Type Conversions on JavaScript Values This package implements, in JavaScript, the algorithms to convert a given JavaScript value according to a given [Web IDL](http://heycam.github.io/webidl/) [type](http://heycam.github.io/webidl/#idl-types). The goal is that you should be able to write code like ```js "use strict"; const conversions = require("webidl-conversions"); function doStuff(x, y) { x = conversions["boolean"](x); y = conversions["unsigned long"](y); // actual algorithm code here } ``` and your function `doStuff` will behave the same as a Web IDL operation declared as ```webidl void doStuff(boolean x, unsigned long y); ``` ## API This package's main module's default export is an object with a variety of methods, each corresponding to a different Web IDL type. Each method, when invoked on a JavaScript value, will give back the new JavaScript value that results after passing through the Web IDL conversion rules. (See below for more details on what that means.) Alternately, the method could throw an error, if the Web IDL algorithm is specified to do so: for example `conversions["float"](NaN)` [will throw a `TypeError`](http://heycam.github.io/webidl/#es-float). Each method also accepts a second, optional, parameter for miscellaneous options. For conversion methods that throw errors, a string option `{ context }` may be provided to provide more information in the error message. (For example, `conversions["float"](NaN, { context: "Argument 1 of Interface's operation" })` will throw an error with message `"Argument 1 of Interface's operation is not a finite floating-point value."`) Specific conversions may also accept other options, the details of which can be found below. 
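As a minimal sketch of calling these conversion methods directly, based only on the behavior described above (the sample values are illustrative, and the exact error wording may differ):

```js
"use strict";
const conversions = require("webidl-conversions");

// Values are coerced the way Web IDL's binding layer would coerce them
console.log(conversions["boolean"]("anything"));  // true (ToBoolean of a non-empty string)
console.log(conversions["unsigned long"]("42"));  // 42

// Conversions that can throw accept an optional `context` string used in the error message
try {
  conversions["float"](NaN, { context: "Argument 1 of Interface's operation" });
} catch (err) {
  console.log(err instanceof TypeError); // true
  console.log(err.message);              // mentions the provided context
}
```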
## Conversions implemented

Conversions for all of the basic types from the Web IDL specification are implemented:

- [`any`](https://heycam.github.io/webidl/#es-any)
- [`void`](https://heycam.github.io/webidl/#es-void)
- [`boolean`](https://heycam.github.io/webidl/#es-boolean)
- [Integer types](https://heycam.github.io/webidl/#es-integer-types), which can additionally be provided the boolean options `{ clamp, enforceRange }` as a second parameter
- [`float`](https://heycam.github.io/webidl/#es-float), [`unrestricted float`](https://heycam.github.io/webidl/#es-unrestricted-float)
- [`double`](https://heycam.github.io/webidl/#es-double), [`unrestricted double`](https://heycam.github.io/webidl/#es-unrestricted-double)
- [`DOMString`](https://heycam.github.io/webidl/#es-DOMString), which can additionally be provided the boolean option `{ treatNullAsEmptyString }` as a second parameter
- [`ByteString`](https://heycam.github.io/webidl/#es-ByteString), [`USVString`](https://heycam.github.io/webidl/#es-USVString)
- [`object`](https://heycam.github.io/webidl/#es-object)
- [`Error`](https://heycam.github.io/webidl/#es-Error)
- [Buffer source types](https://heycam.github.io/webidl/#es-buffer-source-types)

Additionally, for convenience, the following derived type definitions are implemented:

- [`ArrayBufferView`](https://heycam.github.io/webidl/#ArrayBufferView)
- [`BufferSource`](https://heycam.github.io/webidl/#BufferSource)
- [`DOMTimeStamp`](https://heycam.github.io/webidl/#DOMTimeStamp)
- [`Function`](https://heycam.github.io/webidl/#Function)
- [`VoidFunction`](https://heycam.github.io/webidl/#VoidFunction) (although it will not censor the return type)

Derived types, such as nullable types, promise types, sequences, records, etc. are not handled by this library. You may wish to investigate the [webidl2js](https://github.com/jsdom/webidl2js) project.

### A note on the `long long` types

The `long long` and `unsigned long long` Web IDL types can hold values that cannot be stored in JavaScript numbers, so the conversion is imperfect. For example, converting the JavaScript number `18446744073709552000` to a Web IDL `long long` is supposed to produce the Web IDL value `-18446744073709551232`. Since we are representing our Web IDL values in JavaScript, we can't represent `-18446744073709551232`, so instead the best we can do is produce `-18446744073709552000` as the output.

This library actually doesn't even get that far. Producing those results would require doing accurate modular arithmetic on 64-bit intermediate values, but JavaScript does not make this easy. We could pull in a big-integer library as a dependency, but in lieu of that, we for now have decided to just produce inaccurate results if you pass in numbers that are not strictly between `Number.MIN_SAFE_INTEGER` and `Number.MAX_SAFE_INTEGER`.

## Background

What's actually going on here, conceptually, is pretty weird. Let's try to explain.

Web IDL, as part of its madness-inducing design, has its own type system. When people write algorithms in web platform specs, they usually operate on Web IDL values, i.e. instances of Web IDL types. For example, if they were specifying the algorithm for our `doStuff` operation above, they would treat `x` as a Web IDL value of [Web IDL type `boolean`](http://heycam.github.io/webidl/#idl-boolean). Crucially, they would _not_ treat `x` as a JavaScript variable whose value is either the JavaScript `true` or `false`. They're instead working in a different type system altogether, with its own rules.
Separately from its type system, Web IDL defines a ["binding"](http://heycam.github.io/webidl/#ecmascript-binding) of the type system into JavaScript. This contains rules like: when you pass a JavaScript value to the JavaScript method that manifests a given Web IDL operation, how does that get converted into a Web IDL value? For example, a JavaScript `true` passed in the position of a Web IDL `boolean` argument becomes a Web IDL `true`. But, a JavaScript `true` passed in the position of a [Web IDL `unsigned long`](http://heycam.github.io/webidl/#idl-unsigned-long) becomes a Web IDL `1`. And so on. Finally, we have the actual implementation code. This is usually C++, although these days [some smart people are using Rust](https://github.com/servo/servo). The implementation, of course, has its own type system. So when they implement the Web IDL algorithms, they don't actually use Web IDL values, since those aren't "real" outside of specs. Instead, implementations apply the Web IDL binding rules in such a way as to convert incoming JavaScript values into C++ values. For example, if code in the browser called `doStuff(true, true)`, then the implementation code would eventually receive a C++ `bool` containing `true` and a C++ `uint32_t` containing `1`. The upside of all this is that implementations can abstract all the conversion logic away, letting Web IDL handle it, and focus on implementing the relevant methods in C++ with values of the correct type already provided. That is payoff of Web IDL, in a nutshell. And getting to that payoff is the goal of _this_ project—but for JavaScript implementations, instead of C++ ones. That is, this library is designed to make it easier for JavaScript developers to write functions that behave like a given Web IDL operation. So conceptually, the conversion pipeline, which in its general form is JavaScript values ↦ Web IDL values ↦ implementation-language values, in this case becomes JavaScript values ↦ Web IDL values ↦ JavaScript values. And that intermediate step is where all the logic is performed: a JavaScript `true` becomes a Web IDL `1` in an unsigned long context, which then becomes a JavaScript `1`. ## Don't use this Seriously, why would you ever use this? You really shouldn't. Web IDL is … strange, and you shouldn't be emulating its semantics. If you're looking for a generic argument-processing library, you should find one with better rules than those from Web IDL. In general, your JavaScript should not be trying to become more like Web IDL; if anything, we should fix Web IDL to make it more like JavaScript. The _only_ people who should use this are those trying to create faithful implementations (or polyfills) of web platform interfaces defined in Web IDL. Its main consumer is the [jsdom](https://github.com/tmpvar/jsdom) project. # lodash.sortby v4.7.0 The [lodash](https://lodash.com/) method `_.sortBy` exported as a [Node.js](https://nodejs.org/) module. ## Installation Using npm: ```bash $ {sudo -H} npm i -g npm $ npm i --save lodash.sortby ``` In Node.js: ```js var sortBy = require('lodash.sortby'); ``` See the [documentation](https://lodash.com/docs#sortBy) or [package source](https://github.com/lodash/lodash/blob/4.7.0-npm-packages/lodash.sortby) for more details. binaryen.js =========== **binaryen.js** is a port of [Binaryen](https://github.com/WebAssembly/binaryen) to the Web, allowing you to generate [WebAssembly](https://webassembly.org) using a JavaScript API. 
<a href="https://github.com/AssemblyScript/binaryen.js/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/workflow/status/AssemblyScript/binaryen.js/Build/master?label=build&logo=github" alt="Build status" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen.svg?label=latest&color=007acc&logo=npm" alt="npm version" /></a> <a href="https://www.npmjs.com/package/binaryen"><img src="https://img.shields.io/npm/v/binaryen/nightly.svg?label=nightly&color=007acc&logo=npm" alt="npm nightly version" /></a> Usage ----- ``` $> npm install binaryen ``` ```js var binaryen = require("binaryen"); // Create a module with a single function var myModule = new binaryen.Module(); myModule.addFunction("add", binaryen.createType([ binaryen.i32, binaryen.i32 ]), binaryen.i32, [ binaryen.i32 ], myModule.block(null, [ myModule.local.set(2, myModule.i32.add( myModule.local.get(0, binaryen.i32), myModule.local.get(1, binaryen.i32) ) ), myModule.return( myModule.local.get(2, binaryen.i32) ) ]) ); myModule.addFunctionExport("add", "add"); // Optimize the module using default passes and levels myModule.optimize(); // Validate the module if (!myModule.validate()) throw new Error("validation error"); // Generate text format and binary var textData = myModule.emitText(); var wasmData = myModule.emitBinary(); // Example usage with the WebAssembly API var compiled = new WebAssembly.Module(wasmData); var instance = new WebAssembly.Instance(compiled, {}); console.log(instance.exports.add(41, 1)); ``` The buildbot also publishes nightly versions once a day if there have been changes. The latest nightly can be installed through ``` $> npm install binaryen@nightly ``` or you can use one of the [previous versions](https://github.com/AssemblyScript/binaryen.js/tags) instead if necessary. ### Usage with a CDN * From GitHub via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/gh/AssemblyScript/binaryen.js@VERSION/index.js` * From npm via [jsDelivr](https://www.jsdelivr.com):<br /> `https://cdn.jsdelivr.net/npm/binaryen@VERSION/index.js` * From npm via [unpkg](https://unpkg.com):<br /> `https://unpkg.com/binaryen@VERSION/index.js` Replace `VERSION` with a [specific version](https://github.com/AssemblyScript/binaryen.js/releases) or omit it (not recommended in production) to use master/latest. API --- **Please note** that the Binaryen API is evolving fast and that definitions and documentation provided by the package tend to get out of sync despite our best efforts. It's a bot after all. If you rely on binaryen.js and spot an issue, please consider sending a PR our way by updating [index.d.ts](./index.d.ts) and [README.md](./README.md) to reflect the [current API](https://github.com/WebAssembly/binaryen/blob/master/src/js/binaryen.js-post.js). 
<!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> ### Contents - [Types](#types) - [Module construction](#module-construction) - [Module manipulation](#module-manipulation) - [Module validation](#module-validation) - [Module optimization](#module-optimization) - [Module creation](#module-creation) - [Expression construction](#expression-construction) - [Control flow](#control-flow) - [Variable accesses](#variable-accesses) - [Integer operations](#integer-operations) - [Floating point operations](#floating-point-operations) - [Datatype conversions](#datatype-conversions) - [Function calls](#function-calls) - [Linear memory accesses](#linear-memory-accesses) - [Host operations](#host-operations) - [Vector operations 🦄](#vector-operations-) - [Atomic memory accesses 🦄](#atomic-memory-accesses-) - [Atomic read-modify-write operations 🦄](#atomic-read-modify-write-operations-) - [Atomic wait and notify operations 🦄](#atomic-wait-and-notify-operations-) - [Sign extension operations 🦄](#sign-extension-operations-) - [Multi-value operations 🦄](#multi-value-operations-) - [Exception handling operations 🦄](#exception-handling-operations-) - [Reference types operations 🦄](#reference-types-operations-) - [Expression manipulation](#expression-manipulation) - [Relooper](#relooper) - [Source maps](#source-maps) - [Debugging](#debugging) <!-- END doctoc generated TOC please keep comment here to allow auto update --> [Future features](http://webassembly.org/docs/future-features/) 🦄 might not be supported by all runtimes. ### Types * **none**: `Type`<br /> The none type, e.g., `void`. * **i32**: `Type`<br /> 32-bit integer type. * **i64**: `Type`<br /> 64-bit integer type. * **f32**: `Type`<br /> 32-bit float type. * **f64**: `Type`<br /> 64-bit float (double) type. * **v128**: `Type`<br /> 128-bit vector type. 🦄 * **funcref**: `Type`<br /> A function reference. 🦄 * **anyref**: `Type`<br /> Any host reference. 🦄 * **nullref**: `Type`<br /> A null reference. 🦄 * **exnref**: `Type`<br /> An exception reference. 🦄 * **unreachable**: `Type`<br /> Special type indicating unreachable code when obtaining information about an expression. * **auto**: `Type`<br /> Special type used in **Module#block** exclusively. Lets the API figure out a block's result type automatically. * **createType**(types: `Type[]`): `Type`<br /> Creates a multi-value type from an array of types. * **expandType**(type: `Type`): `Type[]`<br /> Expands a multi-value type to an array of types. ### Module construction * new **Module**()<br /> Constructs a new module. * **parseText**(text: `string`): `Module`<br /> Creates a module from Binaryen's s-expression text format (not official stack-style text format). * **readBinary**(data: `Uint8Array`): `Module`<br /> Creates a module from binary data. ### Module manipulation * Module#**addFunction**(name: `string`, params: `Type`, results: `Type`, vars: `Type[]`, body: `ExpressionRef`): `FunctionRef`<br /> Adds a function. `vars` indicate additional locals, in the given order. * Module#**getFunction**(name: `string`): `FunctionRef`<br /> Gets a function, by name, * Module#**removeFunction**(name: `string`): `void`<br /> Removes a function, by name. * Module#**getNumFunctions**(): `number`<br /> Gets the number of functions within the module. * Module#**getFunctionByIndex**(index: `number`): `FunctionRef`<br /> Gets the function at the specified index. 
* Module#**addFunctionImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, params: `Type`, results: `Type`): `void`<br /> Adds a function import.
* Module#**addTableImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a table import. There's just one table for now, using name `"0"`.
* Module#**addMemoryImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`): `void`<br /> Adds a memory import. There's just one memory for now, using name `"0"`.
* Module#**addGlobalImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, globalType: `Type`): `void`<br /> Adds a global variable import. Imported globals must be immutable.
* Module#**addFunctionExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a function export.
* Module#**addTableExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a table export. There's just one table for now, using name `"0"`.
* Module#**addMemoryExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a memory export. There's just one memory for now, using name `"0"`.
* Module#**addGlobalExport**(internalName: `string`, externalName: `string`): `ExportRef`<br /> Adds a global variable export. Exported globals must be immutable.
* Module#**getNumExports**(): `number`<br /> Gets the number of exports within the module.
* Module#**getExportByIndex**(index: `number`): `ExportRef`<br /> Gets the export at the specified index.
* Module#**removeExport**(externalName: `string`): `void`<br /> Removes an export, by external name.
* Module#**addGlobal**(name: `string`, type: `Type`, mutable: `number`, value: `ExpressionRef`): `GlobalRef`<br /> Adds a global instance variable.
* Module#**getGlobal**(name: `string`): `GlobalRef`<br /> Gets a global, by name.
* Module#**removeGlobal**(name: `string`): `void`<br /> Removes a global, by name.
* Module#**setFunctionTable**(initial: `number`, maximum: `number`, funcs: `string[]`, offset?: `ExpressionRef`): `void`<br /> Sets the contents of the function table. There's just one table for now, using name `"0"`.
* Module#**getFunctionTable**(): `{ imported: boolean, segments: TableElement[] }`<br /> Gets the contents of the function table.
  * TableElement#**offset**: `ExpressionRef`
  * TableElement#**names**: `string[]`
* Module#**setMemory**(initial: `number`, maximum: `number`, exportName: `string | null`, segments: `MemorySegment[]`, flags?: `number[]`, shared?: `boolean`): `void`<br /> Sets the memory. There's just one memory for now, using name `"0"`. Providing `exportName` also creates a memory export.
  * MemorySegment#**offset**: `ExpressionRef`
  * MemorySegment#**data**: `Uint8Array`
  * MemorySegment#**passive**: `boolean`
* Module#**getNumMemorySegments**(): `number`<br /> Gets the number of memory segments within the module.
* Module#**getMemorySegmentInfoByIndex**(index: `number`): `MemorySegmentInfo`<br /> Gets information about the memory segment at the specified index.
  * MemorySegmentInfo#**offset**: `number`
  * MemorySegmentInfo#**data**: `Uint8Array`
  * MemorySegmentInfo#**passive**: `boolean`
* Module#**setStart**(start: `FunctionRef`): `void`<br /> Sets the start function.
* Module#**getFeatures**(): `Features`<br /> Gets the WebAssembly features enabled for this module. Note that the return value may be a bitmask indicating multiple features.
  Possible feature flags are:
  * Features.**MVP**: `Features`
  * Features.**Atomics**: `Features`
  * Features.**BulkMemory**: `Features`
  * Features.**MutableGlobals**: `Features`
  * Features.**NontrappingFPToInt**: `Features`
  * Features.**SignExt**: `Features`
  * Features.**SIMD128**: `Features`
  * Features.**ExceptionHandling**: `Features`
  * Features.**TailCall**: `Features`
  * Features.**ReferenceTypes**: `Features`
  * Features.**Multivalue**: `Features`
  * Features.**All**: `Features`
* Module#**setFeatures**(features: `Features`): `void`<br /> Sets the WebAssembly features enabled for this module.
* Module#**addCustomSection**(name: `string`, contents: `Uint8Array`): `void`<br /> Adds a custom section to the binary.
* Module#**autoDrop**(): `void`<br /> Enables automatic insertion of `drop` operations where needed. Lets you not worry about dropping when creating your code.
* **getFunctionInfo**(func: `FunctionRef`): `FunctionInfo`<br /> Obtains information about a function.
  * FunctionInfo#**name**: `string`
  * FunctionInfo#**module**: `string | null` (if imported)
  * FunctionInfo#**base**: `string | null` (if imported)
  * FunctionInfo#**params**: `Type`
  * FunctionInfo#**results**: `Type`
  * FunctionInfo#**vars**: `Type`
  * FunctionInfo#**body**: `ExpressionRef`
* **getGlobalInfo**(global: `GlobalRef`): `GlobalInfo`<br /> Obtains information about a global.
  * GlobalInfo#**name**: `string`
  * GlobalInfo#**module**: `string | null` (if imported)
  * GlobalInfo#**base**: `string | null` (if imported)
  * GlobalInfo#**type**: `Type`
  * GlobalInfo#**mutable**: `boolean`
  * GlobalInfo#**init**: `ExpressionRef`
* **getExportInfo**(export_: `ExportRef`): `ExportInfo`<br /> Obtains information about an export.
  * ExportInfo#**kind**: `ExternalKind`
  * ExportInfo#**name**: `string`
  * ExportInfo#**value**: `string`

  Possible `ExternalKind` values are:
  * **ExternalFunction**: `ExternalKind`
  * **ExternalTable**: `ExternalKind`
  * **ExternalMemory**: `ExternalKind`
  * **ExternalGlobal**: `ExternalKind`
  * **ExternalEvent**: `ExternalKind`
* **getEventInfo**(event: `EventRef`): `EventInfo`<br /> Obtains information about an event.
  * EventInfo#**name**: `string`
  * EventInfo#**module**: `string | null` (if imported)
  * EventInfo#**base**: `string | null` (if imported)
  * EventInfo#**attribute**: `number`
  * EventInfo#**params**: `Type`
  * EventInfo#**results**: `Type`
* **getSideEffects**(expr: `ExpressionRef`, features: `FeatureFlags`): `SideEffects`<br /> Gets the side effects of the specified expression.
  * SideEffects.**None**: `SideEffects`
  * SideEffects.**Branches**: `SideEffects`
  * SideEffects.**Calls**: `SideEffects`
  * SideEffects.**ReadsLocal**: `SideEffects`
  * SideEffects.**WritesLocal**: `SideEffects`
  * SideEffects.**ReadsGlobal**: `SideEffects`
  * SideEffects.**WritesGlobal**: `SideEffects`
  * SideEffects.**ReadsMemory**: `SideEffects`
  * SideEffects.**WritesMemory**: `SideEffects`
  * SideEffects.**ImplicitTrap**: `SideEffects`
  * SideEffects.**IsAtomic**: `SideEffects`
  * SideEffects.**Throws**: `SideEffects`
  * SideEffects.**Any**: `SideEffects`

### Module validation

* Module#**validate**(): `boolean`<br /> Validates the module. Returns `true` if valid, otherwise prints validation errors and returns `false`.

### Module optimization

* Module#**optimize**(): `void`<br /> Optimizes the module using the default optimization passes.
* Module#**optimizeFunction**(func: `FunctionRef | string`): `void`<br /> Optimizes a single function using the default optimization passes.
* Module#**runPasses**(passes: `string[]`): `void`<br /> Runs the specified passes on the module. * Module#**runPassesOnFunction**(func: `FunctionRef | string`, passes: `string[]`): `void`<br /> Runs the specified passes on a single function. * **getOptimizeLevel**(): `number`<br /> Gets the currently set optimize level. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **setOptimizeLevel**(level: `number`): `void`<br /> Sets the optimization level to use. `0`, `1`, `2` correspond to `-O0`, `-O1`, `-O2` (default), etc. * **getShrinkLevel**(): `number`<br /> Gets the currently set shrink level. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **setShrinkLevel**(level: `number`): `void`<br /> Sets the shrink level to use. `0`, `1`, `2` correspond to `-O0`, `-Os` (default), `-Oz`. * **getDebugInfo**(): `boolean`<br /> Gets whether generating debug information is currently enabled or not. * **setDebugInfo**(on: `boolean`): `void`<br /> Enables or disables debug information in emitted binaries. * **getLowMemoryUnused**(): `boolean`<br /> Gets whether the low 1K of memory can be considered unused when optimizing. * **setLowMemoryUnused**(on: `boolean`): `void`<br /> Enables or disables whether the low 1K of memory can be considered unused when optimizing. * **getPassArgument**(key: `string`): `string | null`<br /> Gets the value of the specified arbitrary pass argument. * **setPassArgument**(key: `string`, value: `string | null`): `void`<br /> Sets the value of the specified arbitrary pass argument. Removes the respective argument if `value` is `null`. * **clearPassArguments**(): `void`<br /> Clears all arbitrary pass arguments. * **getAlwaysInlineMaxSize**(): `number`<br /> Gets the function size at which we always inline. * **setAlwaysInlineMaxSize**(size: `number`): `void`<br /> Sets the function size at which we always inline. * **getFlexibleInlineMaxSize**(): `number`<br /> Gets the function size which we inline when functions are lightweight. * **setFlexibleInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when functions are lightweight. * **getOneCallerInlineMaxSize**(): `number`<br /> Gets the function size which we inline when there is only one caller. * **setOneCallerInlineMaxSize**(size: `number`): `void`<br /> Sets the function size which we inline when there is only one caller. ### Module creation * Module#**emitBinary**(): `Uint8Array`<br /> Returns the module in binary format. * Module#**emitBinary**(sourceMapUrl: `string | null`): `BinaryWithSourceMap`<br /> Returns the module in binary format with its source map. If `sourceMapUrl` is `null`, source map generation is skipped. * BinaryWithSourceMap#**binary**: `Uint8Array` * BinaryWithSourceMap#**sourceMap**: `string | null` * Module#**emitText**(): `string`<br /> Returns the module in Binaryen's s-expression text format (not official stack-style text format). * Module#**emitAsmjs**(): `string`<br /> Returns the [asm.js](http://asmjs.org/) representation of the module. * Module#**dispose**(): `void`<br /> Releases the resources held by the module once it isn't needed anymore. ### Expression construction #### [Control flow](http://webassembly.org/docs/semantics/#control-constructs-and-instructions) * Module#**block**(label: `string | null`, children: `ExpressionRef[]`, resultType?: `Type`): `ExpressionRef`<br /> Creates a block. `resultType` defaults to `none`. 
* Module#**if**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse?: `ExpressionRef`): `ExpressionRef`<br /> Creates an if or if/else combination.
* Module#**loop**(label: `string | null`, body: `ExpressionRef`): `ExpressionRef`<br /> Creates a loop.
* Module#**br**(label: `string`, condition?: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a branch (br) to a label.
* Module#**switch**(labels: `string[]`, defaultLabel: `string`, condition: `ExpressionRef`, value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a switch (br_table).
* Module#**nop**(): `ExpressionRef`<br /> Creates a no-operation (nop) instruction.
* Module#**return**(value?: `ExpressionRef`): `ExpressionRef`<br /> Creates a return.
* Module#**unreachable**(): `ExpressionRef`<br /> Creates an [unreachable](http://webassembly.org/docs/semantics/#unreachable) instruction that will always trap.
* Module#**drop**(value: `ExpressionRef`): `ExpressionRef`<br /> Creates a [drop](http://webassembly.org/docs/semantics/#type-parametric-operators) of a value.
* Module#**select**(condition: `ExpressionRef`, ifTrue: `ExpressionRef`, ifFalse: `ExpressionRef`, type?: `Type`): `ExpressionRef`<br /> Creates a [select](http://webassembly.org/docs/semantics/#type-parametric-operators) of one of two values.

#### [Variable accesses](http://webassembly.org/docs/semantics/#local-variables)

* Module#**local.get**(index: `number`, type: `Type`): `ExpressionRef`<br /> Creates a local.get for the local at the specified index. Note that we must specify the type here as we may not have created the local being accessed yet.
* Module#**local.set**(index: `number`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a local.set for the local at the specified index.
* Module#**local.tee**(index: `number`, value: `ExpressionRef`, type: `Type`): `ExpressionRef`<br /> Creates a local.tee for the local at the specified index. A tee differs from a set in that the value remains on the stack. Note that we must specify the type here as we may not have created the local being accessed yet.
* Module#**global.get**(name: `string`, type: `Type`): `ExpressionRef`<br /> Creates a global.get for the global with the specified name. Note that we must specify the type here as we may not have created the global being accessed yet.
* Module#**global.set**(name: `string`, value: `ExpressionRef`): `ExpressionRef`<br /> Creates a global.set for the global with the specified name.
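
For illustration, here is a small sketch of how the control flow and variable access builders above compose. The function name and the countdown logic are made up for this example:

```js
var binaryen = require("binaryen");

var m = new binaryen.Module();

// countdown(n): decrement the i32 parameter (local 0) until it reaches zero.
m.addFunction("countdown", binaryen.i32, binaryen.none, [],
  m.loop("continue",
    m.block(null, [
      // n = n - 1
      m.local.set(0,
        m.i32.sub(m.local.get(0, binaryen.i32), m.i32.const(1))
      ),
      // branch back to the loop while n != 0
      m.br("continue",
        m.i32.ne(m.local.get(0, binaryen.i32), m.i32.const(0))
      )
    ])
  )
);
m.addFunctionExport("countdown", "countdown");

if (!m.validate()) throw new Error("validation error");
console.log(m.emitText());
```

Branching to the loop's own label re-enters the loop, so the conditional `br` above acts as the loop's back edge.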
#### [Integer operations](http://webassembly.org/docs/semantics/#32-bit-integer-operators) * Module#i32.**const**(value: `number`): `ExpressionRef` * Module#i32.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i32.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#i64.**const**(value: `number`): `ExpressionRef` * Module#i64.**clz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**ctz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**popcnt**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**eqz**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**div_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rem_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * 
Module#i64.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**shr_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotl**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**rotr**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Floating point operations](http://webassembly.org/docs/semantics/#floating-point-operators) * Module#f32.**const**(value: `number`): `ExpressionRef` * Module#f32.**const_bits**(value: `number`): `ExpressionRef` * Module#f32.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**ceil**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` > * Module#f64.**const**(value: `number`): `ExpressionRef` * Module#f64.**const_bits**(value: `number`): `ExpressionRef` * Module#f64.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**ceil**(value: `ExpressionRef`): `ExpressionRef` * 
Module#f64.**floor**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**trunc**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**nearest**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**copysign**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` #### [Datatype conversions](http://webassembly.org/docs/semantics/#datatype-conversions-truncations-reinterpretations-promotions-and-demotions) * Module#i32.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**wrap**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**trunc_s.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_s.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f32**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**trunc_u.f64**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f32.**demote**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**reinterpret**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_s.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i32**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**convert_u.i64**(value: `ExpressionRef`): `ExpressionRef` * Module#f64.**promote**(value: `ExpressionRef`): `ExpressionRef` #### [Function calls](http://webassembly.org/docs/semantics/#calls) * Module#**call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef` Creates a call to a function. 
Note that we must specify the return type here as we may not have created the function being called yet. * Module#**return_call**(name: `string`, operands: `ExpressionRef[]`, returnType: `Type`): `ExpressionRef`<br /> Like **call**, but creates a tail-call. 🦄 * Module#**call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Similar to **call**, but calls indirectly, i.e., via a function pointer, so an expression replaces the name as the called value. * Module#**return_call_indirect**(target: `ExpressionRef`, operands: `ExpressionRef[]`, params: `Type`, results: `Type`): `ExpressionRef`<br /> Like **call_indirect**, but creates a tail-call. 🦄 #### [Linear memory accesses](http://webassembly.org/docs/semantics/#linear-memory-accesses) * Module#i32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> * Module#i32.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`<br /> > * Module#i64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load16_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**load32_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store8**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store16**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**store32**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f32.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f32.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#f64.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#f64.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` #### [Host operations](http://webassembly.org/docs/semantics/#resizing) * Module#**memory.size**(): `ExpressionRef` * Module#**memory.grow**(value: `number`): `ExpressionRef` #### [Vector 
operations](https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md) 🦄
* Module#v128.**const**(bytes: `Uint8Array`): `ExpressionRef`
* Module#v128.**load**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`
* Module#v128.**store**(offset: `number`, align: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef`
* Module#v128.**not**(value: `ExpressionRef`): `ExpressionRef`
* Module#v128.**and**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#v128.**or**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#v128.**xor**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#v128.**andnot**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#v128.**bitselect**(left: `ExpressionRef`, right: `ExpressionRef`, cond: `ExpressionRef`): `ExpressionRef`
>
* Module#i8x16.**splat**(value: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef`
* Module#i8x16.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef`
* Module#i8x16.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**neg**(value: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**any_true**(value: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**all_true**(value: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**max_u**(left:
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**narrow_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i8x16.**narrow_i16x8_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
>
* Module#i16x8.**splat**(value: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef`
* Module#i16x8.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef`
* Module#i16x8.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**neg**(value: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**any_true**(value: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**all_true**(value: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**add_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**add_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**sub_saturate_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**sub_saturate_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**avgr_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**narrow_i32x4_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**narrow_i32x4_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**widen_low_i8x16_s**(value: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**widen_high_i8x16_s**(value: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**widen_low_i8x16_u**(value:
`ExpressionRef`): `ExpressionRef`
* Module#i16x8.**widen_high_i8x16_u**(value: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**load8x8_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`
* Module#i16x8.**load8x8_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`
>
* Module#i32x4.**splat**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**extract_lane_s**(vec: `ExpressionRef`, index: `number`): `ExpressionRef`
* Module#i32x4.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef`
* Module#i32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**lt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**lt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**gt_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**gt_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**le_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**le_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**ge_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**ge_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**neg**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**any_true**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**all_true**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**min_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**min_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**max_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**max_u**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**dot_i16x8_s**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**trunc_sat_f32x4_s**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**trunc_sat_f32x4_u**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**widen_low_i16x8_s**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**widen_high_i16x8_s**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**widen_low_i16x8_u**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**widen_high_i16x8_u**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**load16x4_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`
* Module#i32x4.**load16x4_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef`
>
* Module#i64x2.**splat**(value: `ExpressionRef`): `ExpressionRef`
* Module#i64x2.**extract_lane_s**(vec: `ExpressionRef`, index: `number`):
`ExpressionRef` * Module#i64x2.**extract_lane_u**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#i64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**any_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**all_true**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shl**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_s**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**shr_u**(vec: `ExpressionRef`, shift: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**trunc_sat_f64x2_u**(value: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_s**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64x2.**load32x2_u**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#f32x4.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f32x4.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**lt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f32x4.**convert_i32x4_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#f64x2.**splat**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**extract_lane**(vec: `ExpressionRef`, index: `number`): `ExpressionRef` * Module#f64x2.**replace_lane**(vec: `ExpressionRef`, index: `number`, value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**eq**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ne**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**lt**(left: 
`ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**gt**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**le**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**ge**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**abs**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**neg**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sqrt**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfma**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**qfms**(a: `ExpressionRef`, b: `ExpressionRef`, c: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**add**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**sub**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**mul**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**div**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**min**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**max**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_s**(value: `ExpressionRef`): `ExpressionRef` * Module#f64x2.**convert_i64x2_u**(value: `ExpressionRef`): `ExpressionRef` > * Module#v8x16.**shuffle**(left: `ExpressionRef`, right: `ExpressionRef`, mask: `Uint8Array`): `ExpressionRef` * Module#v8x16.**swizzle**(left: `ExpressionRef`, right: `ExpressionRef`): `ExpressionRef` * Module#v8x16.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v16x8.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v32x4.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` > * Module#v64x2.**load_splat**(offset: `number`, align: `number`, ptr: `ExpressionRef`): `ExpressionRef` #### [Atomic memory accesses](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#atomic-memory-accesses) 🦄 * Module#i32.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.load**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load8_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load16_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.load32_u**(offset: `number`, ptr: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store8**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store16**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.store32**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` 
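
A minimal sketch of an atomic access, assuming the Atomics feature and a shared memory set up via the `setFeatures` and `setMemory` signatures listed earlier (the function and export names are invented):

```js
var binaryen = require("binaryen");

var m = new binaryen.Module();
m.setFeatures(binaryen.Features.Atomics);
// one page of shared memory, no export, no data segments
// (arguments follow the setMemory signature documented above)
m.setMemory(1, 1, null, [], [], true);

// readFlag(): atomically load the i32 at address 0
m.addFunction("readFlag", binaryen.none, binaryen.i32, [],
  m.i32.atomic.load(0, m.i32.const(0))
);
m.addFunctionExport("readFlag", "readFlag");

if (!m.validate()) throw new Error("validation error");
console.log(m.emitText());
```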
#### [Atomic read-modify-write operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#read-modify-write) 🦄 * Module#i32.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i32.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` > * Module#i64.**atomic.rmw.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.add**(offset: 
`number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw8_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw16_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.add**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.sub**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.and**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.or**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xor**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.xchg**(offset: `number`, ptr: `ExpressionRef`, value: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.rmw32_u.cmpxchg**(offset: `number`, ptr: `ExpressionRef`, expected: `ExpressionRef`, replacement: `ExpressionRef`): `ExpressionRef` #### [Atomic wait and notify operations](https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md#wait-and-notify-operators) 🦄 * Module#i32.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#i64.**atomic.wait**(ptr: `ExpressionRef`, expected: `ExpressionRef`, timeout: `ExpressionRef`): `ExpressionRef` * Module#**atomic.notify**(ptr: `ExpressionRef`, notifyCount: `ExpressionRef`): `ExpressionRef` * Module#**atomic.fence**(): `ExpressionRef` #### [Sign extension operations](https://github.com/WebAssembly/sign-extension-ops/blob/master/proposals/sign-extension-ops/Overview.md) 🦄 * Module#i32.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i32.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` > * Module#i64.**extend8_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend16_s**(value: `ExpressionRef`): `ExpressionRef` * Module#i64.**extend32_s**(value: `ExpressionRef`): `ExpressionRef` #### [Multi-value 
operations](https://github.com/WebAssembly/multi-value/blob/master/proposals/multi-value/Overview.md) 🦄 Note that these are pseudo instructions enabling Binaryen to reason about multiple values on the stack.
* Module#**push**(value: `ExpressionRef`): `ExpressionRef`
* Module#i32.**pop**(): `ExpressionRef`
* Module#i64.**pop**(): `ExpressionRef`
* Module#f32.**pop**(): `ExpressionRef`
* Module#f64.**pop**(): `ExpressionRef`
* Module#v128.**pop**(): `ExpressionRef`
* Module#funcref.**pop**(): `ExpressionRef`
* Module#anyref.**pop**(): `ExpressionRef`
* Module#nullref.**pop**(): `ExpressionRef`
* Module#exnref.**pop**(): `ExpressionRef`
* Module#tuple.**make**(elements: `ExpressionRef[]`): `ExpressionRef`
* Module#tuple.**extract**(tuple: `ExpressionRef`, index: `number`): `ExpressionRef`

#### [Exception handling operations](https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md) 🦄

* Module#**try**(body: `ExpressionRef`, catchBody: `ExpressionRef`): `ExpressionRef`
* Module#**throw**(event: `string`, operands: `ExpressionRef[]`): `ExpressionRef`
* Module#**rethrow**(exnref: `ExpressionRef`): `ExpressionRef`
* Module#**br_on_exn**(label: `string`, event: `string`, exnref: `ExpressionRef`): `ExpressionRef`
>
* Module#**addEvent**(name: `string`, attribute: `number`, params: `Type`, results: `Type`): `Event`
* Module#**getEvent**(name: `string`): `Event`
* Module#**removeEvent**(name: `string`): `void`
* Module#**addEventImport**(internalName: `string`, externalModuleName: `string`, externalBaseName: `string`, attribute: `number`, params: `Type`, results: `Type`): `void`
* Module#**addEventExport**(internalName: `string`, externalName: `string`): `ExportRef`

#### [Reference types operations](https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md) 🦄

* Module#ref.**null**(): `ExpressionRef`
* Module#ref.**is_null**(value: `ExpressionRef`): `ExpressionRef`
* Module#ref.**func**(name: `string`): `ExpressionRef`

### Expression manipulation

* **getExpressionId**(expr: `ExpressionRef`): `ExpressionId`<br /> Gets the id (kind) of the specified expression.
Possible values are: * **InvalidId**: `ExpressionId` * **BlockId**: `ExpressionId` * **IfId**: `ExpressionId` * **LoopId**: `ExpressionId` * **BreakId**: `ExpressionId` * **SwitchId**: `ExpressionId` * **CallId**: `ExpressionId` * **CallIndirectId**: `ExpressionId` * **LocalGetId**: `ExpressionId` * **LocalSetId**: `ExpressionId` * **GlobalGetId**: `ExpressionId` * **GlobalSetId**: `ExpressionId` * **LoadId**: `ExpressionId` * **StoreId**: `ExpressionId` * **ConstId**: `ExpressionId` * **UnaryId**: `ExpressionId` * **BinaryId**: `ExpressionId` * **SelectId**: `ExpressionId` * **DropId**: `ExpressionId` * **ReturnId**: `ExpressionId` * **HostId**: `ExpressionId` * **NopId**: `ExpressionId` * **UnreachableId**: `ExpressionId` * **AtomicCmpxchgId**: `ExpressionId` * **AtomicRMWId**: `ExpressionId` * **AtomicWaitId**: `ExpressionId` * **AtomicNotifyId**: `ExpressionId` * **AtomicFenceId**: `ExpressionId` * **SIMDExtractId**: `ExpressionId` * **SIMDReplaceId**: `ExpressionId` * **SIMDShuffleId**: `ExpressionId` * **SIMDTernaryId**: `ExpressionId` * **SIMDShiftId**: `ExpressionId` * **SIMDLoadId**: `ExpressionId` * **MemoryInitId**: `ExpressionId` * **DataDropId**: `ExpressionId` * **MemoryCopyId**: `ExpressionId` * **MemoryFillId**: `ExpressionId` * **RefNullId**: `ExpressionId` * **RefIsNullId**: `ExpressionId` * **RefFuncId**: `ExpressionId` * **TryId**: `ExpressionId` * **ThrowId**: `ExpressionId` * **RethrowId**: `ExpressionId` * **BrOnExnId**: `ExpressionId` * **PushId**: `ExpressionId` * **PopId**: `ExpressionId` * **getExpressionType**(expr: `ExpressionRef`): `Type`<br /> Gets the type of the specified expression. * **getExpressionInfo**(expr: `ExpressionRef`): `ExpressionInfo`<br /> Obtains information about an expression, always including: * Info#**id**: `ExpressionId` * Info#**type**: `Type` Additional properties depend on the expression's `id` and are usually equivalent to the respective parameters when creating such an expression: * BlockInfo#**name**: `string` * BlockInfo#**children**: `ExpressionRef[]` > * IfInfo#**condition**: `ExpressionRef` * IfInfo#**ifTrue**: `ExpressionRef` * IfInfo#**ifFalse**: `ExpressionRef | null` > * LoopInfo#**name**: `string` * LoopInfo#**body**: `ExpressionRef` > * BreakInfo#**name**: `string` * BreakInfo#**condition**: `ExpressionRef | null` * BreakInfo#**value**: `ExpressionRef | null` > * SwitchInfo#**names**: `string[]` * SwitchInfo#**defaultName**: `string | null` * SwitchInfo#**condition**: `ExpressionRef` * SwitchInfo#**value**: `ExpressionRef | null` > * CallInfo#**target**: `string` * CallInfo#**operands**: `ExpressionRef[]` > * CallImportInfo#**target**: `string` * CallImportInfo#**operands**: `ExpressionRef[]` > * CallIndirectInfo#**target**: `ExpressionRef` * CallIndirectInfo#**operands**: `ExpressionRef[]` > * LocalGetInfo#**index**: `number` > * LocalSetInfo#**isTee**: `boolean` * LocalSetInfo#**index**: `number` * LocalSetInfo#**value**: `ExpressionRef` > * GlobalGetInfo#**name**: `string` > * GlobalSetInfo#**name**: `string` * GlobalSetInfo#**value**: `ExpressionRef` > * LoadInfo#**isAtomic**: `boolean` * LoadInfo#**isSigned**: `boolean` * LoadInfo#**offset**: `number` * LoadInfo#**bytes**: `number` * LoadInfo#**align**: `number` * LoadInfo#**ptr**: `ExpressionRef` > * StoreInfo#**isAtomic**: `boolean` * StoreInfo#**offset**: `number` * StoreInfo#**bytes**: `number` * StoreInfo#**align**: `number` * StoreInfo#**ptr**: `ExpressionRef` * StoreInfo#**value**: `ExpressionRef` > * ConstInfo#**value**: `number | { low: number, high: number 
}` > * UnaryInfo#**op**: `number` * UnaryInfo#**value**: `ExpressionRef` > * BinaryInfo#**op**: `number` * BinaryInfo#**left**: `ExpressionRef` * BinaryInfo#**right**: `ExpressionRef` > * SelectInfo#**ifTrue**: `ExpressionRef` * SelectInfo#**ifFalse**: `ExpressionRef` * SelectInfo#**condition**: `ExpressionRef` > * DropInfo#**value**: `ExpressionRef` > * ReturnInfo#**value**: `ExpressionRef | null` > * NopInfo > * UnreachableInfo > * HostInfo#**op**: `number` * HostInfo#**nameOperand**: `string | null` * HostInfo#**operands**: `ExpressionRef[]` > * AtomicRMWInfo#**op**: `number` * AtomicRMWInfo#**bytes**: `number` * AtomicRMWInfo#**offset**: `number` * AtomicRMWInfo#**ptr**: `ExpressionRef` * AtomicRMWInfo#**value**: `ExpressionRef` > * AtomicCmpxchgInfo#**bytes**: `number` * AtomicCmpxchgInfo#**offset**: `number` * AtomicCmpxchgInfo#**ptr**: `ExpressionRef` * AtomicCmpxchgInfo#**expected**: `ExpressionRef` * AtomicCmpxchgInfo#**replacement**: `ExpressionRef` > * AtomicWaitInfo#**ptr**: `ExpressionRef` * AtomicWaitInfo#**expected**: `ExpressionRef` * AtomicWaitInfo#**timeout**: `ExpressionRef` * AtomicWaitInfo#**expectedType**: `Type` > * AtomicNotifyInfo#**ptr**: `ExpressionRef` * AtomicNotifyInfo#**notifyCount**: `ExpressionRef` > * AtomicFenceInfo > * SIMDExtractInfo#**op**: `Op` * SIMDExtractInfo#**vec**: `ExpressionRef` * SIMDExtractInfo#**index**: `ExpressionRef` > * SIMDReplaceInfo#**op**: `Op` * SIMDReplaceInfo#**vec**: `ExpressionRef` * SIMDReplaceInfo#**index**: `ExpressionRef` * SIMDReplaceInfo#**value**: `ExpressionRef` > * SIMDShuffleInfo#**left**: `ExpressionRef` * SIMDShuffleInfo#**right**: `ExpressionRef` * SIMDShuffleInfo#**mask**: `Uint8Array` > * SIMDTernaryInfo#**op**: `Op` * SIMDTernaryInfo#**a**: `ExpressionRef` * SIMDTernaryInfo#**b**: `ExpressionRef` * SIMDTernaryInfo#**c**: `ExpressionRef` > * SIMDShiftInfo#**op**: `Op` * SIMDShiftInfo#**vec**: `ExpressionRef` * SIMDShiftInfo#**shift**: `ExpressionRef` > * SIMDLoadInfo#**op**: `Op` * SIMDLoadInfo#**offset**: `number` * SIMDLoadInfo#**align**: `number` * SIMDLoadInfo#**ptr**: `ExpressionRef` > * MemoryInitInfo#**segment**: `number` * MemoryInitInfo#**dest**: `ExpressionRef` * MemoryInitInfo#**offset**: `ExpressionRef` * MemoryInitInfo#**size**: `ExpressionRef` > * MemoryDropInfo#**segment**: `number` > * MemoryCopyInfo#**dest**: `ExpressionRef` * MemoryCopyInfo#**source**: `ExpressionRef` * MemoryCopyInfo#**size**: `ExpressionRef` > * MemoryFillInfo#**dest**: `ExpressionRef` * MemoryFillInfo#**value**: `ExpressionRef` * MemoryFillInfo#**size**: `ExpressionRef` > * TryInfo#**body**: `ExpressionRef` * TryInfo#**catchBody**: `ExpressionRef` > * RefNullInfo > * RefIsNullInfo#**value**: `ExpressionRef` > * RefFuncInfo#**func**: `string` > * ThrowInfo#**event**: `string` * ThrowInfo#**operands**: `ExpressionRef[]` > * RethrowInfo#**exnref**: `ExpressionRef` > * BrOnExnInfo#**name**: `string` * BrOnExnInfo#**event**: `string` * BrOnExnInfo#**exnref**: `ExpressionRef` > * PopInfo > * PushInfo#**value**: `ExpressionRef` * **emitText**(expression: `ExpressionRef`): `string`<br /> Emits the expression in Binaryen's s-expression text format (not official stack-style text format). * **copyExpression**(expression: `ExpressionRef`): `ExpressionRef`<br /> Creates a deep copy of an expression. ### Relooper * new **Relooper**()<br /> Constructs a relooper instance. This lets you provide an arbitrary CFG, and the relooper will structure it for WebAssembly. 
* Relooper#**addBlock**(code: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block to the CFG, containing the provided code as its body. * Relooper#**addBranch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, condition: `ExpressionRef`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block to another block, with a condition (or nothing, if this is the default branch to take from the origin - each block must have one such branch), and optional code to execute on the branch (useful for phis). * Relooper#**addBlockWithSwitch**(code: `ExpressionRef`, condition: `ExpressionRef`): `RelooperBlockRef`<br /> Adds a new block, which ends with a switch/br_table, with provided code and condition (that determines where we go in the switch). * Relooper#**addBranchForSwitch**(from: `RelooperBlockRef`, to: `RelooperBlockRef`, indexes: `number[]`, code: `ExpressionRef`): `void`<br /> Adds a branch from a block ending in a switch, to another block, using an array of indexes that determine where to go, and optional code to execute on the branch. * Relooper#**renderAndDispose**(entry: `RelooperBlockRef`, labelHelper: `number`, module: `Module`): `ExpressionRef`<br /> Renders and cleans up the Relooper instance. Call this after you have created all the blocks and branches, giving it the entry block (where control flow begins), a label helper variable (an index of a local we can use, necessary for irreducible control flow), and the module. This returns an expression - normal WebAssembly code - that you can use normally anywhere. ### Source maps * Module#**addDebugInfoFileName**(filename: `string`): `number`<br /> Adds a debug info file name to the module and returns its index. * Module#**getDebugInfoFileName**(index: `number`): `string | null` <br /> Gets the name of the debug info file at the specified index. * Module#**setDebugLocation**(func: `FunctionRef`, expr: `ExpressionRef`, fileIndex: `number`, lineNumber: `number`, columnNumber: `number`): `void`<br /> Sets the debug location of the specified `ExpressionRef` within the specified `FunctionRef`. ### Debugging * Module#**interpret**(): `void`<br /> Runs the module in the interpreter, calling the start function.
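As a quick illustration of the introspection helpers documented above (`getExpressionType`, `getExpressionInfo`, the `*Id` constants and `emitText`), here is a minimal sketch. It assumes the `binaryen` npm package and the `Module#i32.const` expression builder, which is documented elsewhere in this reference:

```js
// Minimal sketch -- assumes `npm install binaryen`.
const binaryen = require("binaryen");

const mod = new binaryen.Module();
const expr = mod.i32.const(42); // build an i32.const expression

// The expression's value type
console.log(binaryen.getExpressionType(expr) === binaryen.i32); // true

// Always includes `id` and `type`; a Const additionally carries `value`
const info = binaryen.getExpressionInfo(expr);
console.log(info.id === binaryen.ConstId); // true
console.log(info.value); // 42

// S-expression text form of the expression
console.log(binaryen.emitText(expr)); // (i32.const 42)
```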
fujimo21_near_bike_share_dapp
.gitpod.yml .parcel-cache fd9578a0b48cf9c8.txt README.md contract Cargo.toml src lib.rs frontend App.js __mocks__ fileMock.js assets css global.css img logo-black.svg logo-white.svg js near config.js utils.js index.html index.js integration-tests package.json rs Cargo.toml src tests.rs src config.ts main.ava.ts package.json
near-blank-project ================== This [React] app was initialized with [create-near-app] Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `yarn install` 3. Run the local development server: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. Exploring The Code ================== 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/assets/js/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `yarn run test`. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contract gets deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `yarn install`, but for best ergonomics you may want to install it globally: yarn install --global near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: set contract name in code --------------------------------- Modify the line in `src/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Step 3: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. builds & deploys smart contract to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details. 
[React]: https://reactjs.org/ [create-near-app]: https://github.com/near/create-near-app [Node.js]: https://nodejs.org/en/download/package-manager/ [jest]: https://jestjs.io/ [NEAR accounts]: https://docs.near.org/docs/concepts/account [NEAR Wallet]: https://wallet.testnet.near.org/ [near-cli]: https://github.com/near/near-cli [gh-pages]: https://github.com/tschaub/gh-pages
kuutamolabs_kld
.github dependabot.yml workflows typo.yml upgrade-flakes.yml Cargo.toml README.md bors.toml ctl Cargo.toml src main.rs system_info.rs example kld.toml kld Cargo.toml build.rs src api channels.rs invoices.rs macaroon_auth.rs mod.rs network.rs payloads.rs payments.rs peers.rs routes.rs skt_addr.rs utility.rs wallet.rs ws.rs bitcoind bitcoind_client.rs bitcoind_interface.rs mock.rs mod.rs utxo_lookup.rs cli client.rs commands.rs main.rs database forward.rs invoice.rs ldk_database.rs mod.rs payment.rs peer.rs sql V10__initial_channels.sql V1__ldk.sql V2__wallet.sql V3__payments.sql V4__invoices.sql V5__invoice_label.sql V6__spendable_outputs.sql V7__forwards.sql V8__channels.sql V9__payment_hash.sql wallet_database.rs key_generator.rs kld main.rs ldk channel_utils.rs controller.rs event_handler.rs lightning_interface.rs mod.rs peer_manager.rs lib.rs logger.rs prometheus.rs settings bitcoin_network.rs mod.rs wallet bdk_wallet.rs mod.rs wallet_interface.rs tests api cli.rs mod.rs prometheus.rs rest.rs apis.rs bitcoind.rs database ldk_database.rs main.rs wallet_database.rs mocks mock_bitcoind.rs mock_lightning.rs mock_wallet.rs mod.rs smoke main.rs start.rs Peers Channels Network On chain wallet Payments Invoices Kuutamo Apis mgr Cargo.toml src certs cockroachdb.rs lightning.rs mod.rs command.rs config.rs flake.rs generate_config.rs install.rs lib.rs logging.rs main.rs nixos_rebuild.rs reboot.rs secrets.rs ssh.rs utils.rs nix modules tests lightning.rs test-config.toml test-flake db-00.toml db-01.toml kld-00.toml test-utils Cargo.toml README.md src bitcoin_manager.rs cockroach_manager.rs electrs_manager.rs kld_manager.rs lib.rs ports.rs tui Cargo.toml README.md assets keybinding.toml vim_keybinding.toml wordbinding.toml src action.rs app.rs components command details.rs list.rs mod.rs parsers.rs query.rs debug.rs history details.rs list.rs mod.rs mod.rs tab_bar.rs i18n.rs keybinding.rs lib.rs main.rs mod.rs mode.rs style.rs tests channel_details.rs mod.rs tui.rs utils.rs typos.toml
🌔kuutamo lightning-knd TUI --- A Terminal User Interface for [lightning-knd](https://github.com/kuutamolabs/lightning-knd). Now you can try with previous version of app with `--sync` flag for all commands. Current asynchronous app is **still under development** and already supports default [keybinding](https://github.com/kuutamolabs/lightning-tui/blob/non-blocking/assets/keybinding.toml) is here You can copy `assets/vim_keybinding.toml` or create anyone `keybinding.toml` in your working directory to overwrite any key bindings. - [x] non-blocking architecture - [x] log features - [x] multiple mode key bindings - [x] allow user customized key bindings - [x] i18n support based on `LANG` setting - [x] command list - [x] add action inspector in debug component - [x] helper page - [x] command history - [x] prompt if the command is not ready We will do following items later. - [ ] reimplement each command in non-blocking way - [x] Node information - [ ] NodeFees, - [ ] NodeEslq, - [ ] NodeSign, - [ ] NodeLsfd, - [ ] NetwLsnd, - [ ] NetwFeer, - [ ] PeerList, - [x] Connect Peer - [ ] PeerDisc, - [ ] PaymList, - [ ] PaymSdky, - [ ] PaymPayi, - [ ] InvoList, - [ ] InvoGene, - [ ] InvoDeco, - [ ] ChanList, - [x] Open Channel - [ ] ChanSetf, - [ ] ChanClos, - [ ] ChanHist, - [ ] ChanBala, - [ ] ChanLsfd, Certs generated with command: # 🌘kuutamo lightning distribution ### a Lighting Service Provider (LSP) router node cluster **Nota bene**: kuutamo is bleeding edge decentralized financial infrastructure. Use with caution and only with funds you are prepared to lose. If you want to put it into production and would like to discuss SRE overlay support, please get in touch with us at [[email protected]](mailto:[email protected]) ## Prerequisites - 1 or 3 server(s)/node(s): Any Linux OS - 1 client/local machine: Any Linux OS ## Key components ### Client side: - `kld-mgr` - A CLI tool that will SSH to your server(s) to perform the initial deployment - `kld-cli` - A CLI tool that uses the kld API to support LSP operations - `kld-tui` - A Terminal User Interface that uses the kld API to support LSP operations (WIP) ### Server side: - `kld` - an LSP router, built on [LDK](https://github.com/lightningdevkit) - `cockroachdb` - a cloud-native, distributed SQL database - `telegraf` - an agent for collecting and sending metrics to any URL that supports the [Prometheus's Remote Write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) - `promtail` - an agent which ships the contents of local logs to a private Grafana Loki instance or Grafana Cloud - `bitcoind` - a bitcoin client - `electrs` - a bitcoin database indexer - `kuutamo-upgrade` - an updater service that will monitor the deployment repository and apply any required upgrades ## Nix quickstart kld-mgr: ```bash nix run github:kuutamolabs/kld#kld-mgr -- help ``` kld-cli: ```bash nix run github:kuutamolabs/kld#kld-cli -- help ``` kld-tui: ```bash nix run github:kuutamolabs/kld#kld-tui -- help ``` ## Installing Nix 1. Install the Nix package manager, if you don't already have it. https://zero-to-nix.com/start/install 2. Trust pre-built binaries (optional): ```shell $ printf 'trusted-substituters = https://cache.garnix.io https://cache.nixos.org/\ntrusted-public-keys = cache.garnix.io:CTFPyKSLcx5RMJKfLo5EEPUObbA78b0YQ2DTCJXqr9g= cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=' | sudo tee -a /etc/nix/nix.conf && sudo systemctl restart nix-daemon ``` 3. 
Test ```shell $ nix run --refresh github:kuutamolabs/kld#kld-mgr -- help ``` ## Install and in-life operations By default, nodes are locked down once installed and cannot be connected to over SSH. Nodes are upgraded using a GitOps model enabling complete system change auditability. The `kuutamo-updater` service checks for updates in your private deployment repository. If found, the cluster will upgrade. The maintainers of the deployment repository control when upgrades are accepted. They will review/audit, approve and merge the updated `flake.lock` PR. An example install and upgrade workflow is shown below using GitHub. Other Git platforms such as Bitbucket and GitLab can be used in its place. `kld-mgr` requires root SSH access to server(s) to perform the initial install. Other cluster bootstrap methods can be used, such as via USB disk or PXE. > Note: For Test/Dev deployments you can retain root and SSH capabilities by setting the `DEBUG` environment variable to `true` when performing `install`. Although monitoring is not mandatory for deploying a node, it is highly recommended. Configure the `self_monitoring_url`, `self_monitoring_username`, and `self_monitoring_password` fields of the host in the `kld.toml`. To view logs remotely, set the `promtail_client` in the form `https://<user_id>:<token>@<client hostname>/loki/api/v1/push` ![install and upgrade GitOps setup](./install-upgrade-gitops.jpg) - Step 1: Generate an example `kld.toml` ```shell $ nix run github:kuutamolabs/kld#kld-mgr generate-example > kld.toml ``` - Step 2: Generate a classic token with full repo permissions; please refer to the [GitHub doc](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) - Step 5.1: Generate deployment config ```shell $ nix run github:kuutamolabs/kld#kld-mgr generate-config ./deployment ``` - Step 5.2: Setup Git & GitHub deployment repository ```shell $ cd ./deployment $ git init $ git add . $ git commit -m "init deploy" $ git remote add origin git@github.com:my-org/deployment $ git push $ cd .. ``` - Step 6: Add the flake-lock-update GitHub Action ```shell $ mkdir -p ./deployment/.github/workflows $ curl https://raw.githubusercontent.com/DeterminateSystems/update-flake-lock/main/.github/workflows/update.yml --output ./deployment/.github/workflows/upgrade.yml ``` Please refer to [update-flake-lock](https://github.com/DeterminateSystems/update-flake-lock) to configure this Action to your requirements. - Step 7: Install ```shell $ nix run github:kuutamolabs/kld#kld-mgr install ``` - Connect to node via API.
kld API is served on port `2244` ```shell $ nix run github:kuutamolabs/kld/mgr#kld-cli -- -t "x.x.x.x:2244" -c "secrets/lightning/ca.pem" -m "secrets/admin.macaroon" get-info ``` ## kld-cli ```shell $ nix run github:kuutamolabs/kld#kld-cli -- help ``` ``` Usage: kld-cli --target <TARGET> --cert-path <CERT_PATH> --macaroon-path <MACAROON_PATH> <COMMAND> Commands: get-info Fetch information about this lightning node sign Creates a signature of the message using nodes secret key (message limit 65536 chars) get-balance Fetch confirmed and unconfirmed on-chain balance new-address Generates new on-chain address for receiving funds withdraw Send on-chain funds out of the wallet list-funds Show available funds from the internal wallet list-peers Fetch a list of this nodes peers connect-peer Connect with a network peer disconnect-peer Disconnect from a network peer list-channels Fetch a list of this nodes open channels open-channel Open a channel with another node set-channel-fee Set channel fees close-channel Close a channel network-nodes Get node information from the network graph network-channels Get channel information from the network graph fee-rates Return feerate estimates, either satoshi-per-kw or satoshi-per-kb keysend Pay a node without an invoice generate-invoice Generate a bolt11 invoice for receiving a payment list-invoices List all invoices pay-invoice Pay an invoice list-payments List all payments estimate-channel-liquidity Estimate channel liquidity to a target node local-remote-balance Fetch the aggregate local and remote channel balances (msat) of the node get-fees Get node routing fees list-forwards Fetch a list of the forwarded htlcs help Print this message or the help of the given subcommand(s) Options: -t, --target <TARGET> IP address or hostname of the target machine -c, --cert-path <CERT_PATH> Path to the TLS cert of the target API -m, --macaroon-path <MACAROON_PATH> Path to the macaroon for authenticating with the API -h, --help Print help -V, --version Print version ```
Panasthetik_rust-simple-aggregator
Cargo.toml README.md src main.rs
# rust-simple-aggregator

Aggregates data from Supabase, MongoDB and NEAR Protocol and prints it to the terminal. This is the first draft/demo version, without unit tests.

UPDATE 3/2/2023: Changed to a "paragraph" format for the refined JSON output from all three sources, instead of a bulk data dump.

Instructions are forthcoming in early 2023.
nativeanish_in-counter-near
README.md lib.rs package.json public index.html src utils near.js
# in-counter-near

An incomplete counter application on NEAR Protocol.
NEARWEEK_sputnik-dao-2-ui-reference-mainnet
.eslintrc.js .github ISSUE_TEMPLATE BOUNTY.yml .vscode settings.json README.md babel.config.js package.json src __mocks__ fileMock.js assets logo-black.svg logo-white.svg near-social.svg roketo-logo.svg components dao dao.css shared dao-search.css config.js constants index.js contexts DaosContext.js WalletSelectorContext.js global.css hooks useChangeDao.js useDaoCount.js useDaoList.js useDaoSearchFilters.js useNumberProposals.js useOuterClick.js usePagination.js useQuery.js index.html index.js jest.init.js main.test.js utils Loading.js funcs.js store.js utils.js wallet login index.html
# sputnik-dao-2-ui-reference > UI for the Reference implementation [Sputnik-DAO-2 smart contract](https://github.com/near-daos/sputnik-dao-contract) ## Guide ### Setup ``` yarn install ``` ### Develop ``` yarn start ```
NEARFoundation_near-js-encryption-box
.github workflows main.yml README.md package-lock.json package.json src index.ts utils keyConverter.ts test near-js-encryption-box.test.ts tsconfig.json
# near-js-encryption-box

[![NEAR](https://img.shields.io/badge/NEAR-%E2%8B%88-111111.svg)](https://near.org/) [![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)

> An experimental library to encrypt and decrypt data using the NEAR account's ed25519 keypairs; you can use it to store encrypted data on-chain, off-chain, or in any decentralized storage (IPFS, Arweave).

⚠️ This is an experimental library. We do not recommend using it to store confidential information publicly.

## Installation

```bash
npm install @nearfoundation/near-js-encryption-box
```

## Usage

You can find below an example where Alice encrypts data with her private key and Bob's public key. Then Bob can decrypt the message with his private key, Alice's public key and the nonce.

```js
import { create, open } from 'near-js-encryption-box';
import { utils } from 'near-api-js';

// Randomly generating key pairs for the example
const keyPairAlice = utils.key_pair.KeyPairEd25519.fromRandom();
const keyPairBob = utils.key_pair.KeyPairEd25519.fromRandom();

// Encrypting a message
const message = 'Hello Bob';
const publicKeyBob = keyPairBob.getPublicKey().toString();
const privateKeyAlice = keyPairAlice.secretKey;
const { secret, nonce } = create(message, publicKeyBob, privateKeyAlice); // you can also pass your own custom nonce as a 4th parameter

// Decrypting the message
const publicKeyAlice = keyPairAlice.getPublicKey().toString();
const privateKeyBob = keyPairBob.secretKey;
const messageReceived = open(secret, publicKeyAlice, privateKeyBob, nonce);

console.log(messageReceived); // will return 'Hello Bob'
```

Find more examples in [near-js-encryption-box.test.ts](test/near-js-encryption-box.test.ts).

## Encryption

- Converts the NEAR Ed25519 signing key pair into a Curve25519 key pair suitable for Diffie-Hellman key exchange, using [ed2curve.js](https://github.com/dchest/ed2curve-js)
- "Note that there's currently no proof that this is safe to do. It is safer to share both Ed25519 and Curve25519 public keys (their concatenation is 64 bytes long)."
- Then uses Curve25519-XSalsa20-Poly1305 implemented by [TweetNaCl.js](https://tweetnacl.js.org)

## Authors

- [Sandoche](https://github.com/sandoche)

## License

MIT License
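Following up on the comment in the Usage example above about the optional 4th parameter of `create`: the sketch below only shows where that argument goes. The nonce passed here is simply the one returned by a previous `create` call (its exact accepted encoding isn't stated in this README, so that is an assumption), and reusing a nonce across different messages is shown purely for illustration, not as a recommended practice.

```js
import { create } from 'near-js-encryption-box';
import { utils } from 'near-api-js';

const keyPairAlice = utils.key_pair.KeyPairEd25519.fromRandom();
const keyPairBob = utils.key_pair.KeyPairEd25519.fromRandom();
const publicKeyBob = keyPairBob.getPublicKey().toString();
const privateKeyAlice = keyPairAlice.secretKey;

// First call: let the library generate the nonce
const first = create('First message', publicKeyBob, privateKeyAlice);

// Second call: pass an explicit nonce as the 4th argument (here, the one
// returned above -- illustration only; do not reuse nonces in real usage)
const second = create('Second message', publicKeyBob, privateKeyAlice, first.nonce);
```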
haoduoyu_wallet-selects
README.md babel.config.js package.json public index.html src assets css base.css main.js router index.js utils config.js index.js utils.js vue.config.js
# swap-select ## Project setup ``` yarn install ``` ### Compiles and hot-reloads for development ``` yarn serve ``` ### Compiles and minifies for production ``` yarn build ``` ### Lints and fixes files ``` yarn lint ``` ### Customize configuration See [Configuration Reference](https://cli.vuejs.org/config/).
kurodenjiro_drop-auth
README.md package-lock.json package.json public index.html src api index.ts components CreateAccount styles FormContainer.ts Login Login.style.ts Sign Sign.styles.ts Values fiatValueManager.ts formatNearAmount.ts store.ts tmp_fetch_send_json.ts TableContent TableContent.styles.ts index.html lib Spinner index.ts Toast README.md api.ts index.ts store.ts styles.ts Tooltip index.ts controller.ts firestoreController.ts networkParams.ts useAuthState.ts styles globals.css theme.css translations de.json en.json types global.d.ts utils config.ts firebase.ts form-validation.ts index.ts keypom-options.ts mpc-service.ts types.ts tailwind.config.js tsconfig.json vercel.json webpack.config.js
# Toast

Implemented via Radix primitives: https://www.radix-ui.com/docs/primitives/components/toast

_If the current props and Stitches style overrides aren't enough to cover your use case, feel free to implement your own component using the Radix primitives directly._

## Example

Using the `openToast` API allows you to easily open a toast from any context:

```tsx
import { openToast } from '@/components/lib/Toast';

...

<Button
  onClick={() =>
    openToast({
      type: 'ERROR',
      title: 'Toast Title',
      description: 'This is a great toast description.',
    })
  }
>
  Open a Toast
</Button>
```

You can pass other options too:

```tsx
<Button
  onClick={() =>
    openToast({
      type: 'SUCCESS', // SUCCESS | INFO | ERROR
      title: 'Toast Title',
      description: 'This is a great toast description.',
      icon: 'ph-bold ph-pizza', // https://phosphoricons.com/
      duration: 20000, // milliseconds (pass Infinity to disable auto close)
    })
  }
>
  Open a Toast
</Button>
```

## Deduplicate

If you need to ensure only a single instance of a toast is ever displayed at once, you can deduplicate by passing a unique `id` key. If a toast with the passed `id` is currently open, a new toast will not be opened:

```tsx
<Button
  onClick={() =>
    openToast({
      id: 'my-unique-toast',
      title: 'Toast Title',
      description: 'This is a great toast description.',
    })
  }
>
  Deduplicated Toast
</Button>
```

## Custom Toast

If you need something more custom, you can render a custom toast using `lib/Toast/Toaster.tsx` as an example like so:

```tsx
import * as Toast from '@/components/lib/Toast';

...

<Toast.Provider duration={5000}>
  <Toast.Root open={isOpen} onOpenChange={setIsOpen}>
    <Toast.Title>My Title</Toast.Title>
    <Toast.Description>My Description</Toast.Description>
    <Toast.CloseButton />
  </Toast.Root>
  <Toast.Viewport />
</Toast.Provider>
```

![Mzb5g8UA](https://github.com/kurodenjiro/drop-auth/assets/112561517/fc21b4ee-f0ef-4188-85e7-23fe7314073f)

Video Demo: https://www.youtube.com/watch?v=vXNQwOeIIzo

MPC Architecture:

- Social Account Integration: Implement social auth protocols to allow secure and seamless integration with social media accounts (Google, Twitter, ...)
- Wallet System: Develop a user-friendly wallet interface that supports account abstraction
- Security: Utilize MPC for enhanced security and recovery options.

Impact: A friendly wallet experience; users don't need to worry about remembering a seed phrase or paying gas fees.

MPC: We can customize MPC, and all developers can use it freely.

## Impact to Ecosystem:

- Enhanced User Experience: Simplifying the wallet creation process using familiar social media accounts.
- Increased Adoption: Lowering the barrier to entry for non-technical users in the crypto space.
- Innovation in Wallet Services: Introducing the concept of Wallet-as-a-Service, potentially transforming how users interact with blockchain ecosystems.
- Community Building: Encouraging more users to participate in the blockchain ecosystem through familiar platforms.

# BlockQuest

Demo: We created a basic campaign/airdrop social task for users, just a demo use case of MPC on testnet.

## For Organizer:

### 1. Login with Twitter
![s1gR8O6J](https://github.com/kurodenjiro/drop-auth/assets/112561517/6129b4cb-ca57-422f-8f0f-09d28a8bf187)

### 2. Create the mission

![2WdY7SwS](https://github.com/kurodenjiro/drop-auth/assets/112561517/979ae55b-7340-4560-817c-fb271add3d38)

## For Users:

### 1. Login with Twitter

### 2. Choose the mission

![VN-ACmwU](https://github.com/kurodenjiro/drop-auth/assets/112561517/dd7cda0b-4e3a-4fec-b41a-08bf18670ad6)

### 3. Do the social task

### 4. Claim the reward

![D-oqCPTj](https://github.com/kurodenjiro/drop-auth/assets/112561517/1595fd47-2541-42e1-a61f-a0c1e77fa09c)
near_near-redpacket
.csscomb.json .github ISSUE_TEMPLATE BOUNTY.yml .travis.yml README.md babel.config.js gulpfile.js package-lock.json package.json src 404.html App.css App.js App.test.js Claim.js Drops.js __mocks__ fileMock.js assets css near.css near.min.css spectre.css spectre.min.css img icon-account.svg redpacket-cover.svg near-logo.svg config.js index.html index.js jest.init.js main.test.js util near-util.js util.js wallet login index.html
<p> <img src="https://nearprotocol.com/wp-content/themes/near-19/assets/img/logo.svg?t=1553011311" width="240"> </p> ## Linkdrop example with contract account deployment ## About the app The app allows you to give people NEAR accounts that are prefunded with NEAR tokens. This is better than just a vanilla Linkdrop because it allows you to queue up multiple of them and it has a nice UI for the recipient. The app does this by sending funds to the Linkdrop contract which will create "Drops". You will have a list of these in local storage and you can remove them at any time. This claims the funds back to your current account. **NOTE:** If you follow the wallet link of a drop, be warned it will not create accounts because your contract is not eligible to create the `.testnet` domain accounts. Instead, click "Share Drop Link" and visit your own drop. You will now see a *URL Drop* heading with some information about the drop. This is what another user would see if they used your URL. You can either: 1. claim the funds 2. create an account 3. create a contract account (deploys a locked multisig account) ## Contract For more details on the linkdrop contract: https://github.com/near/near-linkdrop ## Quickstart ``` yarn && yarn dev ``` ## Deploying your own contract It's recommended you create a sub account to handle your contract deployments: ``` near login near create_account [account_id] --masterAccount [your_account_id] --initialBalance [1-5 N] ``` Now update config.js and set: ``` const CONTRACT_NAME = [account_id] ``` ## The Linkdrop contract and calling it from JS All calls to the contract can be found in `src/Drops.js`. The original linkdrop contract is here: https://github.com/nearprotocol/near-linkdrop An additional function is added to the regular linkdrop contract: ``` pub fn create_limited_contract_account ``` This takes 3 additional arguments over the existing `pub fn create_account_and_claim` function. In order to successfully invoke from JS you must pass in the following: ``` new_account_id: string, new_public_key: string, allowance: string, contract_bytes: [...new Uint8Array(contract_bytes)], method_names: [...new Uint8Array(new TextEncoder().encode(` methods,account,is_limited_too_call `))] ``` ##### IMPORTANT: Make sure you have the latest version of NEAR Shell and Node Version > 10.x 1. [Node.js](https://nodejs.org/en/download/package-manager/) 2. near-shell ``` npm i -g near-shell ``` ### To run on NEAR testnet ```bash yarn && yarn dev ```
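To make the argument list above concrete, here is a hedged sketch of invoking `create_limited_contract_account` with near-api-js; the app's real calls live in `src/Drops.js`. Everything not taken from the argument list above is a placeholder or assumption: the `account` object, the contract id, the public key string, the gas value, the yoctoNEAR formatting of `allowance`, and the use of the options-object form of `Account.functionCall` (available in recent near-api-js releases).

```js
// Hypothetical sketch only -- see src/Drops.js for the calls the app actually makes.
// Assumes `account` is a signed-in near-api-js Account and `contract_bytes`
// (the multisig contract wasm as a Uint8Array) was loaded elsewhere.
const { utils } = require('near-api-js');

// inside an async function:
await account.functionCall({
  contractId: 'your-linkdrop-account.testnet',      // placeholder: your deployed contract
  methodName: 'create_limited_contract_account',
  args: {
    new_account_id: 'new-user.testnet',             // placeholder
    new_public_key: newPublicKey,                   // placeholder: base58 public key string
    allowance: utils.format.parseNearAmount('0.5'), // assumption: allowance in yoctoNEAR
    contract_bytes: [...new Uint8Array(contract_bytes)],
    method_names: [...new Uint8Array(new TextEncoder().encode(` methods,account,is_limited_too_call `))],
  },
  gas: '100000000000000',                           // placeholder: 100 Tgas
});
```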
miguelacosta84_NEAR-lista-de-precios
README.md | api.js app.js blockchain.js examples nft_deploy deploy_tokens.js token_types.js near-api-ui README.md package.json public index.html manifest.json robots.txt src App.css App.js App.test.js assets explorer-bg.svg icon-network-right.svg logo-black.svg logo-white.svg logo.svg config.js index.css index.html index.js logo.svg reportWebVitals.js setupTests.js utils.js package.json smart-contract-code as-pect.config.js as_types.d.ts asconfig.json assembly __tests__ as-pect.d.ts example.spec.ts index.ts model.ts tsconfig.json package.json token.js user.js web .styleci.yml README.md app Console Kernel.php Exceptions Handler.php Http Controllers Auth ConfirmPasswordController.php ForgotPasswordController.php LoginController.php RegisterController.php ResetPasswordController.php VerificationController.php Controller.php HomeController.php MyListController.php Kernel.php Middleware Authenticate.php EncryptCookies.php PreventRequestsDuringMaintenance.php RedirectIfAuthenticated.php TrimStrings.php TrustHosts.php TrustProxies.php VerifyCsrfToken.php Requests CreateItemRequest.php CreateMyListRequest.php Interfaces IServices.php Models ApiError.php MyList.php User.php Providers AppServiceProvider.php AuthServiceProvider.php BroadcastServiceProvider.php EventServiceProvider.php RouteServiceProvider.php Services MyListService.php bootstrap app.php composer.json config app.php auth.php broadcasting.php cache.php cors.php database.php filesystems.php hashing.php logging.php mail.php queue.php sanctum.php services.php session.php view.php database factories UserFactory.php migrations 2014_10_12_000000_create_users_table.php 2014_10_12_100000_create_password_resets_table.php 2019_08_19_000000_create_failed_jobs_table.php 2019_12_14_000001_create_personal_access_tokens_table.php 2021_11_04_215722_create_my_list_table.php 2021_11_04_225723_add_field_keys_user_table.php seeders DatabaseSeeder.php package.json phpunit.xml public .htaccess assets css bootstrap.css bootstrap.min.css bootstrap_limitless.css bootstrap_limitless.min.css colors.css colors.min.css components.css components.min.css layout.css layout.min.css js app.js custom.js css app.css site.css datatables_new dataTables.bootstrap.css dataTables.bootstrap.js dataTables.bootstrap.min.js extensions AutoFill Readme.txt css dataTables.autoFill.css dataTables.autoFill.min.css examples columns.html complete-callback.html fill-both.html fill-horizontal.html index.html scrolling.html simple.html step-callback.html js dataTables.autoFill.js dataTables.autoFill.min.js Buttons License.txt Readme.md css buttons.bootstrap.css buttons.bootstrap.min.css buttons.dataTables.css buttons.dataTables.min.css buttons.foundation.css buttons.foundation.min.css buttons.jqueryui.css buttons.jqueryui.min.css examples api addRemove.html enable.html group.html index.html text.html column_visibility columnGroups.html columns.html columnsToggle.html index.html layout.html restore.html simple.html stateSave.html text.html flash copyi18n.html filename.html hidden.html index.html pdfMessage.html pdfPage.html simple.html swfPath.html tsv.html html5 columns.html copyi18n.html filename.html index.html outputFormat-function.html outputFormat-orthogonal.html pdfImage.html pdfMessage.html pdfOpen.html pdfPage.html simple.html tsv.html index.html initialisation className.html collections-sub.html collections.html custom.html export.html index.html keys.html multiple.html new.html pageLength.html plugins.html simple.html print autoPrint.html columns.html customisation.html 
index.html message.html select.html simple.html styling bootstrap.html foundation.html icons.html index.html jqueryui.html js buttons.bootstrap.js buttons.bootstrap.min.js buttons.colVis.js buttons.colVis.min.js buttons.flash.js buttons.flash.min.js buttons.foundation.js buttons.foundation.min.js buttons.html5.js buttons.html5.min.js buttons.jqueryui.js buttons.jqueryui.min.js buttons.print.js buttons.print.min.js dataTables.buttons.js dataTables.buttons.min.js jszip.min.js pdfmake.min.js vfs_fonts.min.js ColReorder License.txt Readme.md css dataTables.colReorder.css dataTables.colReorder.min.css examples alt_insert.html col_filter.html colvis.html fixedcolumns.html fixedheader.html index.html jqueryui.html new_init.html predefined.html realtime.html reset.html scrolling.html server_side.html simple.html state_save.html js dataTables.colReorder.js dataTables.colReorder.min.js ColVis License.txt Readme.md css dataTables.colVis.css dataTables.colVis.min.css dataTables.colvis.jqueryui.css examples button_order.html exclude_columns.html group_columns.html index.html jqueryui.html mouseover.html new_init.html restore.html simple.html text.html title_callback.html two_tables.html two_tables_identical.html js dataTables.colVis.js dataTables.colVis.min.js FixedColumns License.txt Readme.md css dataTables.fixedColumns.css dataTables.fixedColumns.min.css examples bootstrap.html col_filter.html colvis.html css_size.html index.html index_column.html left_right_columns.html right_column.html rowspan.html server-side-processing.html simple.html size_fixed.html size_fluid.html two_columns.html js dataTables.fixedColumns.js dataTables.fixedColumns.min.js FixedHeader Readme.txt css dataTables.fixedHeader.css dataTables.fixedHeader.min.css examples header_footer.html index.html simple.html top_left_right.html two_tables.html zIndexes.html js dataTables.fixedHeader.js dataTables.fixedHeader.min.js KeyTable Readme.txt css dataTables.keyTable.css dataTables.keyTable.min.css examples events.html html.html index.html scrolling.html simple.html js dataTables.keyTable.js dataTables.keyTable.min.js Responsive License.txt Readme.md css dataTables.responsive.css examples child-rows column-control.html custom-renderer.html disable-child-rows.html index.html right-column.html whole-row-control.html display-control auto.html classes.html complexHeader.html fixedHeader.html index.html init-classes.html index.html initialisation ajax.html className.html default.html index.html new.html option.html styling bootstrap.html compact.html foundation.html index.html scrolling.html js dataTables.responsive.js dataTables.responsive.min.js Scroller Readme.txt css dataTables.scroller.css dataTables.scroller.min.css examples api_scrolling.html data 2500.txt ssp.php index.html large_js_source.html server-side_processing.html simple.html state_saving.html js dataTables.scroller.js dataTables.scroller.min.js TableTools Readme.md css dataTables.tableTools.css dataTables.tableTools.min.css examples ajax.html alter_buttons.html bootstrap.html button_text.html collection.html defaults.html index.html jqueryui.html multi_instance.html multiple_tables.html new_init.html pdf_message.html plug-in.html select_column.html select_multi.html select_os.html select_single.html simple.html swf_path.html js dataTables.tableTools.js dataTables.tableTools.min.js i18n Spanish.json jquery.dataTables.css jquery.dataTables.js jquery.dataTables.min.css jquery.dataTables.min.js jquery.dataTables_themeroller.css media js dataTables.bootstrap.js 
dataTables.bootstrap.min.js jquery.dataTables.js dist css alt adminlte.components.css adminlte.components.min.css adminlte.core.css adminlte.core.min.css adminlte.extra-components.css adminlte.extra-components.min.css adminlte.pages.css adminlte.pages.min.css adminlte.plugins.css adminlte.plugins.min.css js .eslintrc.json adminlte.js adminlte.min.js demo.js pages dashboard.js dashboard2.js dashboard3.js index.php plugins bootstrap-colorpicker css bootstrap-colorpicker.css bootstrap-colorpicker.min.css js bootstrap-colorpicker.js bootstrap-colorpicker.min.js bootstrap-slider bootstrap-slider.js bootstrap-slider.min.js css bootstrap-slider.css bootstrap-slider.min.css bootstrap-switch css bootstrap2 bootstrap-switch.css bootstrap-switch.min.css bootstrap3 bootstrap-switch.css bootstrap-switch.min.css js bootstrap-switch.js bootstrap-switch.min.js bootstrap js bootstrap.bundle.js bootstrap.bundle.min.js bootstrap.js bootstrap.min.js bootstrap4-duallistbox bootstrap-duallistbox.css bootstrap-duallistbox.min.css jquery.bootstrap-duallistbox.js jquery.bootstrap-duallistbox.min.js bs-custom-file-input bs-custom-file-input.js bs-custom-file-input.min.js bs-stepper css bs-stepper.css bs-stepper.min.css js bs-stepper.js bs-stepper.min.js chart.js Chart.bundle.js Chart.bundle.min.js Chart.css Chart.js Chart.min.css Chart.min.js codemirror addon comment comment.js continuecomment.js dialog dialog.css dialog.js display autorefresh.js fullscreen.css fullscreen.js panel.js placeholder.js rulers.js edit closebrackets.js closetag.js continuelist.js matchbrackets.js matchtags.js trailingspace.js fold brace-fold.js comment-fold.js foldcode.js foldgutter.css foldgutter.js indent-fold.js markdown-fold.js xml-fold.js hint anyword-hint.js css-hint.js html-hint.js javascript-hint.js show-hint.css show-hint.js sql-hint.js xml-hint.js lint coffeescript-lint.js css-lint.js html-lint.js javascript-lint.js json-lint.js lint.css lint.js yaml-lint.js merge merge.css merge.js mode loadmode.js multiplex.js multiplex_test.js overlay.js simple.js runmode colorize.js runmode-standalone.js runmode.js runmode.node.js scroll annotatescrollbar.js scrollpastend.js simplescrollbars.css simplescrollbars.js search jump-to-line.js match-highlighter.js matchesonscrollbar.css matchesonscrollbar.js search.js searchcursor.js selection active-line.js mark-selection.js selection-pointer.js tern tern.css tern.js worker.js wrap hardwrap.js codemirror.css codemirror.js keymap emacs.js sublime.js vim.js mode apl apl.js asciiarmor asciiarmor.js asn.1 asn.1.js asterisk asterisk.js brainfuck brainfuck.js clike clike.js clojure clojure.js cmake cmake.js cobol cobol.js coffeescript coffeescript.js commonlisp commonlisp.js crystal crystal.js css css.js cypher cypher.js d d.js dart dart.js diff diff.js django django.js dockerfile dockerfile.js dtd dtd.js dylan dylan.js ebnf ebnf.js ecl ecl.js eiffel eiffel.js elm elm.js erlang erlang.js factor factor.js fcl fcl.js forth forth.js fortran fortran.js gas gas.js gfm gfm.js gherkin gherkin.js go go.js groovy groovy.js haml haml.js handlebars handlebars.js haskell-literate haskell-literate.js haskell haskell.js haxe haxe.js htmlembedded htmlembedded.js htmlmixed htmlmixed.js http http.js idl idl.js javascript javascript.js jinja2 jinja2.js jsx jsx.js julia julia.js livescript livescript.js lua lua.js markdown markdown.js mathematica mathematica.js mbox mbox.js meta.js mirc mirc.js mllike mllike.js modelica modelica.js mscgen mscgen.js mumps mumps.js nginx nginx.js nsis nsis.js ntriples ntriples.js octave 
octave.js oz oz.js pascal pascal.js pegjs pegjs.js perl perl.js php php.js pig pig.js powershell powershell.js properties properties.js protobuf protobuf.js pug pug.js puppet puppet.js python python.js q q.js r r.js rpm rpm.js rst rst.js ruby ruby.js rust rust.js sas sas.js sass sass.js scheme scheme.js shell shell.js sieve sieve.js slim slim.js smalltalk smalltalk.js smarty smarty.js solr solr.js soy soy.js sparql sparql.js spreadsheet spreadsheet.js sql sql.js stex stex.js stylus stylus.js swift swift.js tcl tcl.js textile textile.js tiddlywiki tiddlywiki.css tiddlywiki.js tiki tiki.css tiki.js toml toml.js tornado tornado.js troff troff.js ttcn-cfg ttcn-cfg.js ttcn ttcn.js turtle turtle.js twig twig.js vb vb.js vbscript vbscript.js velocity velocity.js verilog verilog.js vhdl vhdl.js vue vue.js wast wast.js webidl webidl.js xml xml.js xquery xquery.js yacas yacas.js yaml-frontmatter yaml-frontmatter.js yaml yaml.js z80 z80.js theme 3024-day.css 3024-night.css abcdef.css ambiance-mobile.css ambiance.css ayu-dark.css ayu-mirage.css base16-dark.css base16-light.css bespin.css blackboard.css cobalt.css colorforth.css darcula.css dracula.css duotone-dark.css duotone-light.css eclipse.css elegant.css erlang-dark.css gruvbox-dark.css hopscotch.css icecoder.css idea.css isotope.css lesser-dark.css liquibyte.css lucario.css material-darker.css material-ocean.css material-palenight.css material.css mbo.css mdn-like.css midnight.css monokai.css moxer.css neat.css neo.css night.css nord.css oceanic-next.css panda-syntax.css paraiso-dark.css paraiso-light.css pastel-on-dark.css railscasts.css rubyblue.css seti.css shadowfox.css solarized.css ssms.css the-matrix.css tomorrow-night-bright.css tomorrow-night-eighties.css ttcn.css twilight.css vibrant-ink.css xq-dark.css xq-light.css yeti.css yonce.css zenburn.css datatables-autofill css autoFill.bootstrap4.css autoFill.bootstrap4.min.css js autoFill.bootstrap4.js autoFill.bootstrap4.min.js dataTables.autoFill.js dataTables.autoFill.min.js datatables-bs4 css dataTables.bootstrap4.css dataTables.bootstrap4.min.css js dataTables.bootstrap4.js dataTables.bootstrap4.min.js datatables-buttons css buttons.bootstrap4.css buttons.bootstrap4.min.css js buttons.bootstrap4.js buttons.bootstrap4.min.js buttons.colVis.js buttons.colVis.min.js buttons.flash.js buttons.flash.min.js buttons.html5.js buttons.html5.min.js buttons.print.js buttons.print.min.js dataTables.buttons.js dataTables.buttons.min.js datatables-colreorder css colReorder.bootstrap4.css colReorder.bootstrap4.min.css js colReorder.bootstrap4.js colReorder.bootstrap4.min.js dataTables.colReorder.js dataTables.colReorder.min.js datatables-fixedcolumns css fixedColumns.bootstrap4.css fixedColumns.bootstrap4.min.css js dataTables.fixedColumns.js dataTables.fixedColumns.min.js fixedColumns.bootstrap4.js fixedColumns.bootstrap4.min.js datatables-fixedheader css fixedHeader.bootstrap4.css fixedHeader.bootstrap4.min.css js dataTables.fixedHeader.js dataTables.fixedHeader.min.js fixedHeader.bootstrap4.js fixedHeader.bootstrap4.min.js datatables-keytable css keyTable.bootstrap4.css keyTable.bootstrap4.min.css js dataTables.keyTable.js dataTables.keyTable.min.js keyTable.bootstrap4.js keyTable.bootstrap4.min.js datatables-responsive css responsive.bootstrap4.css responsive.bootstrap4.min.css js dataTables.responsive.js dataTables.responsive.min.js responsive.bootstrap4.js responsive.bootstrap4.min.js datatables-rowgroup css rowGroup.bootstrap4.css rowGroup.bootstrap4.min.css js dataTables.rowGroup.js 
dataTables.rowGroup.min.js rowGroup.bootstrap4.js rowGroup.bootstrap4.min.js datatables-rowreorder css rowReorder.bootstrap4.css rowReorder.bootstrap4.min.css js dataTables.rowReorder.js dataTables.rowReorder.min.js rowReorder.bootstrap4.js rowReorder.bootstrap4.min.js datatables-scroller css scroller.bootstrap4.css scroller.bootstrap4.min.css js dataTables.scroller.js dataTables.scroller.min.js scroller.bootstrap4.js scroller.bootstrap4.min.js datatables-searchbuilder css searchBuilder.bootstrap4.css searchBuilder.bootstrap4.min.css js dataTables.searchBuilder.js dataTables.searchBuilder.min.js searchBuilder.bootstrap4.js searchBuilder.bootstrap4.min.js datatables-searchpanes css searchPanes.bootstrap4.css searchPanes.bootstrap4.min.css js dataTables.searchPanes.js dataTables.searchPanes.min.js searchPanes.bootstrap4.js searchPanes.bootstrap4.min.js datatables-select css select.bootstrap4.css select.bootstrap4.min.css js dataTables.select.js dataTables.select.min.js select.bootstrap4.js select.bootstrap4.min.js datatables jquery.dataTables.js jquery.dataTables.min.js daterangepicker daterangepicker.css daterangepicker.js dropzone basic.css dropzone-amd-module.js dropzone.css dropzone.js min basic.css basic.min.css dropzone-amd-module.min.js dropzone.css dropzone.min.css dropzone.min.js ekko-lightbox ekko-lightbox.css ekko-lightbox.js ekko-lightbox.min.js fastclick fastclick.js filterizr filterizr.min.js jquery.filterizr.min.js vanilla.filterizr.min.js flag-icon-css css flag-icon.css flag-icon.min.css flags 1x1 ad.svg ae.svg af.svg ag.svg ai.svg al.svg am.svg ao.svg aq.svg ar.svg as.svg at.svg au.svg aw.svg ax.svg az.svg ba.svg bb.svg bd.svg be.svg bf.svg bg.svg bh.svg bi.svg bj.svg bl.svg bm.svg bn.svg bo.svg bq.svg br.svg bs.svg bt.svg bv.svg bw.svg by.svg bz.svg ca.svg cc.svg cd.svg cf.svg cg.svg ch.svg ci.svg ck.svg cl.svg cm.svg cn.svg co.svg cr.svg cu.svg cv.svg cw.svg cx.svg cy.svg cz.svg de.svg dj.svg dk.svg dm.svg do.svg dz.svg ec.svg ee.svg eg.svg eh.svg er.svg es-ca.svg es-ga.svg es.svg et.svg eu.svg fi.svg fj.svg fk.svg fm.svg fo.svg fr.svg ga.svg gb-eng.svg gb-nir.svg gb-sct.svg gb-wls.svg gb.svg gd.svg ge.svg gf.svg gg.svg gh.svg gi.svg gl.svg gm.svg gn.svg gp.svg gq.svg gr.svg gs.svg gt.svg gu.svg gw.svg gy.svg hk.svg hm.svg hn.svg hr.svg ht.svg hu.svg id.svg ie.svg il.svg im.svg in.svg io.svg iq.svg ir.svg is.svg it.svg je.svg jm.svg jo.svg jp.svg ke.svg kg.svg kh.svg ki.svg km.svg kn.svg kp.svg kr.svg kw.svg ky.svg kz.svg la.svg lb.svg lc.svg li.svg lk.svg lr.svg ls.svg lt.svg lu.svg lv.svg ly.svg ma.svg mc.svg md.svg me.svg mf.svg mg.svg mh.svg mk.svg ml.svg mm.svg mn.svg mo.svg mp.svg mq.svg mr.svg ms.svg mt.svg mu.svg mv.svg mw.svg mx.svg my.svg mz.svg na.svg nc.svg ne.svg nf.svg ng.svg ni.svg nl.svg no.svg np.svg nr.svg nu.svg nz.svg om.svg pa.svg pe.svg pf.svg pg.svg ph.svg pk.svg pl.svg pm.svg pn.svg pr.svg ps.svg pt.svg pw.svg py.svg qa.svg re.svg ro.svg rs.svg ru.svg rw.svg sa.svg sb.svg sc.svg sd.svg se.svg sg.svg sh.svg si.svg sj.svg sk.svg sl.svg sm.svg sn.svg so.svg sr.svg ss.svg st.svg sv.svg sx.svg sy.svg sz.svg tc.svg td.svg tf.svg tg.svg th.svg tj.svg tk.svg tl.svg tm.svg tn.svg to.svg tr.svg tt.svg tv.svg tw.svg tz.svg ua.svg ug.svg um.svg un.svg us.svg uy.svg uz.svg va.svg vc.svg ve.svg vg.svg vi.svg vn.svg vu.svg wf.svg ws.svg xk.svg ye.svg yt.svg za.svg zm.svg zw.svg 4x3 ad.svg ae.svg af.svg ag.svg ai.svg al.svg am.svg ao.svg aq.svg ar.svg as.svg at.svg au.svg aw.svg ax.svg az.svg ba.svg bb.svg bd.svg be.svg bf.svg bg.svg bh.svg bi.svg bj.svg bl.svg 
bm.svg bn.svg bo.svg bq.svg br.svg bs.svg bt.svg bv.svg bw.svg by.svg bz.svg ca.svg cc.svg cd.svg cf.svg cg.svg ch.svg ci.svg ck.svg cl.svg cm.svg cn.svg co.svg cr.svg cu.svg cv.svg cw.svg cx.svg cy.svg cz.svg de.svg dj.svg dk.svg dm.svg do.svg dz.svg ec.svg ee.svg eg.svg eh.svg er.svg es-ca.svg es-ga.svg es.svg et.svg eu.svg fi.svg fj.svg fk.svg fm.svg fo.svg fr.svg ga.svg gb-eng.svg gb-nir.svg gb-sct.svg gb-wls.svg gb.svg gd.svg ge.svg gf.svg gg.svg gh.svg gi.svg gl.svg gm.svg gn.svg gp.svg gq.svg gr.svg gs.svg gt.svg gu.svg gw.svg gy.svg hk.svg hm.svg hn.svg hr.svg ht.svg hu.svg id.svg ie.svg il.svg im.svg in.svg io.svg iq.svg ir.svg is.svg it.svg je.svg jm.svg jo.svg jp.svg ke.svg kg.svg kh.svg ki.svg km.svg kn.svg kp.svg kr.svg kw.svg ky.svg kz.svg la.svg lb.svg lc.svg li.svg lk.svg lr.svg ls.svg lt.svg lu.svg lv.svg ly.svg ma.svg mc.svg md.svg me.svg mf.svg mg.svg mh.svg mk.svg ml.svg mm.svg mn.svg mo.svg mp.svg mq.svg mr.svg ms.svg mt.svg mu.svg mv.svg mw.svg mx.svg my.svg mz.svg na.svg nc.svg ne.svg nf.svg ng.svg ni.svg nl.svg no.svg np.svg nr.svg nu.svg nz.svg om.svg pa.svg pe.svg pf.svg pg.svg ph.svg pk.svg pl.svg pm.svg pn.svg pr.svg ps.svg pt.svg pw.svg py.svg qa.svg re.svg ro.svg rs.svg ru.svg rw.svg sa.svg sb.svg sc.svg sd.svg se.svg sg.svg sh.svg si.svg sj.svg sk.svg sl.svg sm.svg sn.svg so.svg sr.svg ss.svg st.svg sv.svg sx.svg sy.svg sz.svg tc.svg td.svg tf.svg tg.svg th.svg tj.svg tk.svg tl.svg tm.svg tn.svg to.svg tr.svg tt.svg tv.svg tw.svg tz.svg ua.svg ug.svg um.svg un.svg us.svg uy.svg uz.svg va.svg vc.svg ve.svg vg.svg vi.svg vn.svg vu.svg wf.svg ws.svg xk.svg ye.svg yt.svg za.svg zm.svg zw.svg flot jquery.flot.js plugins jquery.flot.axislabels.js jquery.flot.browser.js jquery.flot.categories.js jquery.flot.composeImages.js jquery.flot.crosshair.js jquery.flot.drawSeries.js jquery.flot.errorbars.js jquery.flot.fillbetween.js jquery.flot.flatdata.js jquery.flot.hover.js jquery.flot.image.js jquery.flot.legend.js jquery.flot.logaxis.js jquery.flot.navigate.js jquery.flot.pie.js jquery.flot.resize.js jquery.flot.saturated.js jquery.flot.selection.js jquery.flot.stack.js jquery.flot.symbol.js jquery.flot.threshold.js jquery.flot.time.js jquery.flot.touch.js jquery.flot.touchNavigate.js jquery.flot.uiConstants.js fontawesome-free css all.css all.min.css brands.css brands.min.css fontawesome.css fontawesome.min.css regular.css regular.min.css solid.css solid.min.css svg-with-js.css svg-with-js.min.css v4-shims.css v4-shims.min.css webfonts fa-brands-400.svg fa-regular-400.svg fa-solid-900.svg fullcalendar LICENSE.txt locales-all.js locales-all.min.js locales af.js ar-dz.js ar-kw.js ar-ly.js ar-ma.js ar-sa.js ar-tn.js ar.js az.js bg.js bs.js ca.js cs.js cy.js da.js de-at.js de.js el.js en-au.js en-gb.js en-nz.js eo.js es-us.js es.js et.js eu.js fa.js fi.js fr-ca.js fr-ch.js fr.js gl.js he.js hi.js hr.js hu.js hy-am.js id.js is.js it.js ja.js ka.js kk.js ko.js lb.js lt.js lv.js mk.js ms.js nb.js ne.js nl.js nn.js pl.js pt-br.js pt.js ro.js ru.js sk.js sl.js sq.js sr-cyrl.js sr.js sv.js ta-in.js th.js tr.js ug.js uk.js uz.js vi.js zh-cn.js zh-tw.js main.css main.js main.min.css main.min.js icheck-bootstrap icheck-bootstrap.css icheck-bootstrap.min.css inputmask inputmask.js inputmask.min.js jquery.inputmask.js jquery.inputmask.min.js ion-rangeslider License.md css ion.rangeSlider.css ion.rangeSlider.min.css js ion.rangeSlider.js ion.rangeSlider.min.js jquery-knob jquery.knob.min.js jquery-mapael jquery.mapael.js jquery.mapael.min.js maps README.txt france_departments.js 
france_departments.min.js usa_states.js usa_states.min.js world_countries.js world_countries.min.js world_countries_mercator.js world_countries_mercator.min.js world_countries_miller.js world_countries_miller.min.js jquery-mousewheel LICENSE.txt jquery.mousewheel.js jquery-ui LICENSE.txt jquery-ui.css jquery-ui.js jquery-ui.min.css jquery-ui.min.js jquery-ui.structure.css jquery-ui.structure.min.css jquery-ui.theme.css jquery-ui.theme.min.css jquery-validation additional-methods.js additional-methods.min.js jquery.validate.js jquery.validate.min.js localization messages_ar.js messages_ar.min.js messages_az.js messages_az.min.js messages_bg.js messages_bg.min.js messages_bn_BD.js messages_bn_BD.min.js messages_ca.js messages_ca.min.js messages_cs.js messages_cs.min.js messages_da.js messages_da.min.js messages_de.js messages_de.min.js messages_el.js messages_el.min.js messages_es.js messages_es.min.js messages_es_AR.js messages_es_AR.min.js messages_es_PE.js messages_es_PE.min.js messages_et.js messages_et.min.js messages_eu.js messages_eu.min.js messages_fa.js messages_fa.min.js messages_fi.js messages_fi.min.js messages_fr.js messages_fr.min.js messages_ge.js messages_ge.min.js messages_gl.js messages_gl.min.js messages_he.js messages_he.min.js messages_hr.js messages_hr.min.js messages_hu.js messages_hu.min.js messages_hy_AM.js messages_hy_AM.min.js messages_id.js messages_id.min.js messages_is.js messages_is.min.js messages_it.js messages_it.min.js messages_ja.js messages_ja.min.js messages_ka.js messages_ka.min.js messages_kk.js messages_kk.min.js messages_ko.js messages_ko.min.js messages_lt.js messages_lt.min.js messages_lv.js messages_lv.min.js messages_mk.js messages_mk.min.js messages_my.js messages_my.min.js messages_nl.js messages_nl.min.js messages_no.js messages_no.min.js messages_pl.js messages_pl.min.js messages_pt_BR.js messages_pt_BR.min.js messages_pt_PT.js messages_pt_PT.min.js messages_ro.js messages_ro.min.js messages_ru.js messages_ru.min.js messages_sd.js messages_sd.min.js messages_si.js messages_si.min.js messages_sk.js messages_sk.min.js messages_sl.js messages_sl.min.js messages_sr.js messages_sr.min.js messages_sr_lat.js messages_sr_lat.min.js messages_sv.js messages_sv.min.js messages_th.js messages_th.min.js messages_tj.js messages_tj.min.js messages_tr.js messages_tr.min.js messages_uk.js messages_uk.min.js messages_ur.js messages_ur.min.js messages_vi.js messages_vi.min.js messages_zh.js messages_zh.min.js messages_zh_TW.js messages_zh_TW.min.js methods_de.js methods_de.min.js methods_es_CL.js methods_es_CL.min.js methods_fi.js methods_fi.min.js methods_it.js methods_it.min.js methods_nl.js methods_nl.min.js methods_pt.js methods_pt.min.js jquery jquery.js jquery.min.js jquery.slim.js jquery.slim.min.js jqvmap jquery.vmap.js jquery.vmap.min.js jqvmap.css jqvmap.min.css maps continents jquery.vmap.africa.js jquery.vmap.asia.js jquery.vmap.australia.js jquery.vmap.europe.js jquery.vmap.north-america.js jquery.vmap.south-america.js jquery.vmap.algeria.js jquery.vmap.argentina.js jquery.vmap.brazil.js jquery.vmap.canada.js jquery.vmap.croatia.js jquery.vmap.europe.js jquery.vmap.france.js jquery.vmap.germany.js jquery.vmap.greece.js jquery.vmap.indonesia.js jquery.vmap.iran.js jquery.vmap.iraq.js jquery.vmap.new_regions_france.js jquery.vmap.russia.js jquery.vmap.serbia.js jquery.vmap.tunisia.js jquery.vmap.turkey.js jquery.vmap.ukraine.js jquery.vmap.usa.districts.js jquery.vmap.usa.js jquery.vmap.world.js jsgrid demos db.js i18n jsgrid-de.js jsgrid-es.js 
jsgrid-fr.js jsgrid-he.js jsgrid-ja.js jsgrid-ka.js jsgrid-pl.js jsgrid-pt-br.js jsgrid-pt.js jsgrid-ru.js jsgrid-tr.js jsgrid-zh-cn.js jsgrid-zh-tw.js jsgrid-theme.css jsgrid-theme.min.css jsgrid.css jsgrid.js jsgrid.min.css jsgrid.min.js jszip jszip.js jszip.min.js moment locale af.js ar-dz.js ar-kw.js ar-ly.js ar-ma.js ar-sa.js ar-tn.js ar.js az.js be.js bg.js bm.js bn-bd.js bn.js bo.js br.js bs.js ca.js cs.js cv.js cy.js da.js de-at.js de-ch.js de.js dv.js el.js en-SG.js en-au.js en-ca.js en-gb.js en-ie.js en-il.js en-in.js en-nz.js eo.js es-do.js es-mx.js es-us.js es.js et.js eu.js fa.js fi.js fil.js fo.js fr-ca.js fr-ch.js fr.js fy.js ga.js gd.js gl.js gom-deva.js gom-latn.js gu.js he.js hi.js hr.js hu.js hy-am.js id.js is.js it-ch.js it.js ja.js jv.js ka.js kk.js km.js kn.js ko.js ku.js ky.js lb.js lo.js lt.js lv.js me.js mi.js mk.js ml.js mn.js mr.js ms-my.js ms.js mt.js my.js nb.js ne.js nl-be.js nl.js nn.js oc-lnc.js pa-in.js pl.js pt-br.js pt.js ro.js ru.js sd.js se.js si.js sk.js sl.js sq.js sr-cyrl.js sr.js ss.js sv.js sw.js ta.js te.js tet.js tg.js th.js tk.js tl-ph.js tlh.js tr.js tzl.js tzm-latn.js tzm.js ug-cn.js uk.js ur.js uz-latn.js uz.js vi.js x-pseudo.js yo.js zh-cn.js zh-hk.js zh-mo.js zh-tw.js locales.js locales.min.js moment-with-locales.js moment-with-locales.min.js moment.min.js overlayScrollbars css OverlayScrollbars.css OverlayScrollbars.min.css js OverlayScrollbars.js OverlayScrollbars.min.js jquery.overlayScrollbars.js jquery.overlayScrollbars.min.js pace-progress pace.js pace.min.js themes black pace-theme-barber-shop.css pace-theme-big-counter.css pace-theme-bounce.css pace-theme-center-atom.css pace-theme-center-circle.css pace-theme-center-radar.css pace-theme-center-simple.css pace-theme-corner-indicator.css pace-theme-fill-left.css pace-theme-flash.css pace-theme-flat-top.css pace-theme-loading-bar.css pace-theme-mac-osx.css pace-theme-material.css pace-theme-minimal.css blue pace-theme-barber-shop.css pace-theme-big-counter.css pace-theme-bounce.css pace-theme-center-atom.css pace-theme-center-circle.css pace-theme-center-radar.css pace-theme-center-simple.css pace-theme-corner-indicator.css pace-theme-fill-left.css pace-theme-flash.css pace-theme-flat-top.css pace-theme-loading-bar.css pace-theme-mac-osx.css pace-theme-material.css pace-theme-minimal.css green pace-theme-barber-shop.css pace-theme-big-counter.css pace-theme-bounce.css pace-theme-center-atom.css pace-theme-center-circle.css pace-theme-center-radar.css pace-theme-center-simple.css pace-theme-corner-indicator.css pace-theme-fill-left.css pace-theme-flash.css pace-theme-flat-top.css pace-theme-loading-bar.css pace-theme-mac-osx.css pace-theme-material.css pace-theme-minimal.css orange pace-theme-barber-shop.css pace-theme-big-counter.css pace-theme-bounce.css pace-theme-center-atom.css pace-theme-center-circle.css pace-theme-center-radar.css pace-theme-center-simple.css pace-theme-corner-indicator.css pace-theme-fill-left.css pace-theme-flash.css pace-theme-flat-top.css pace-theme-loading-bar.css pace-theme-mac-osx.css pace-theme-material.css pace-theme-minimal.css pink pace-theme-barber-shop.css pace-theme-big-counter.css pace-theme-bounce.css pace-theme-center-atom.css pace-theme-center-circle.css pace-theme-center-radar.css pace-theme-center-simple.css pace-theme-corner-indicator.css pace-theme-fill-left.css pace-theme-flash.css pace-theme-flat-top.css pace-theme-loading-bar.css pace-theme-mac-osx.css pace-theme-material.css pace-theme-minimal.css purple pace-theme-barber-shop.css 
pace-theme-big-counter.css pace-theme-bounce.css pace-theme-center-atom.css pace-theme-center-circle.css pace-theme-center-radar.css pace-theme-center-simple.css pace-theme-corner-indicator.css pace-theme-fill-left.css pace-theme-flash.css pace-theme-flat-top.css pace-theme-loading-bar.css pace-theme-mac-osx.css pace-theme-material.css pace-theme-minimal.css red pace-theme-barber-shop.css pace-theme-big-counter.css pace-theme-bounce.css pace-theme-center-atom.css pace-theme-center-circle.css pace-theme-center-radar.css pace-theme-center-simple.css pace-theme-corner-indicator.css pace-theme-fill-left.css pace-theme-flash.css pace-theme-flat-top.css pace-theme-loading-bar.css pace-theme-mac-osx.css pace-theme-material.css pace-theme-minimal.css silver pace-theme-barber-shop.css pace-theme-big-counter.css pace-theme-bounce.css pace-theme-center-atom.css pace-theme-center-circle.css pace-theme-center-radar.css pace-theme-center-simple.css pace-theme-corner-indicator.css pace-theme-fill-left.css pace-theme-flash.css pace-theme-flat-top.css pace-theme-loading-bar.css pace-theme-mac-osx.css pace-theme-material.css pace-theme-minimal.css white pace-theme-barber-shop.css pace-theme-big-counter.css pace-theme-bounce.css pace-theme-center-atom.css pace-theme-center-circle.css pace-theme-center-radar.css pace-theme-center-simple.css pace-theme-corner-indicator.css pace-theme-fill-left.css pace-theme-flash.css pace-theme-flat-top.css pace-theme-loading-bar.css pace-theme-mac-osx.css pace-theme-material.css pace-theme-minimal.css yellow pace-theme-barber-shop.css pace-theme-big-counter.css pace-theme-bounce.css pace-theme-center-atom.css pace-theme-center-circle.css pace-theme-center-radar.css pace-theme-center-simple.css pace-theme-corner-indicator.css pace-theme-fill-left.css pace-theme-flash.css pace-theme-flat-top.css pace-theme-loading-bar.css pace-theme-mac-osx.css pace-theme-material.css pace-theme-minimal.css pdfmake vfs_fonts.js popper esm popper-utils.js popper-utils.min.js popper.js popper.min.js popper-utils.js popper-utils.min.js popper.js popper.min.js umd popper-utils.js popper-utils.min.js popper.js popper.min.js raphael Gruntfile.js license.txt raphael.js raphael.min.js raphael.no-deps.js raphael.no-deps.min.js select2-bootstrap4-theme select2-bootstrap4.css select2-bootstrap4.min.css select2 css select2.css select2.min.css js i18n af.js ar.js az.js bg.js bn.js bs.js build.txt ca.js cs.js da.js de.js dsb.js el.js en.js es.js et.js eu.js fa.js fi.js fr.js gl.js he.js hi.js hr.js hsb.js hu.js hy.js id.js is.js it.js ja.js ka.js km.js ko.js lt.js lv.js mk.js ms.js nb.js ne.js nl.js pl.js ps.js pt-BR.js pt.js ro.js ru.js sk.js sl.js sq.js sr-Cyrl.js sr.js sv.js th.js tk.js tr.js uk.js vi.js zh-CN.js zh-TW.js select2.full.js select2.full.min.js select2.js select2.min.js sparklines sparkline.js summernote lang summernote-ar-AR.js summernote-ar-AR.min.js summernote-ar-AR.min.js.LICENSE.txt summernote-az-AZ.js summernote-az-AZ.min.js summernote-az-AZ.min.js.LICENSE.txt summernote-bg-BG.js summernote-bg-BG.min.js summernote-bg-BG.min.js.LICENSE.txt summernote-ca-ES.js summernote-ca-ES.min.js summernote-ca-ES.min.js.LICENSE.txt summernote-cs-CZ.js summernote-cs-CZ.min.js summernote-cs-CZ.min.js.LICENSE.txt summernote-da-DK.js summernote-da-DK.min.js summernote-da-DK.min.js.LICENSE.txt summernote-de-DE.js summernote-de-DE.min.js summernote-de-DE.min.js.LICENSE.txt summernote-el-GR.js summernote-el-GR.min.js summernote-el-GR.min.js.LICENSE.txt summernote-es-ES.js summernote-es-ES.min.js 
summernote-es-ES.min.js.LICENSE.txt summernote-es-EU.js summernote-es-EU.min.js summernote-es-EU.min.js.LICENSE.txt summernote-fa-IR.js summernote-fa-IR.min.js summernote-fa-IR.min.js.LICENSE.txt summernote-fi-FI.js summernote-fi-FI.min.js summernote-fi-FI.min.js.LICENSE.txt summernote-fr-FR.js summernote-fr-FR.min.js summernote-fr-FR.min.js.LICENSE.txt summernote-gl-ES.js summernote-gl-ES.min.js summernote-gl-ES.min.js.LICENSE.txt summernote-he-IL.js summernote-he-IL.min.js summernote-he-IL.min.js.LICENSE.txt summernote-hr-HR.js summernote-hr-HR.min.js summernote-hr-HR.min.js.LICENSE.txt summernote-hu-HU.js summernote-hu-HU.min.js summernote-hu-HU.min.js.LICENSE.txt summernote-id-ID.js summernote-id-ID.min.js summernote-id-ID.min.js.LICENSE.txt summernote-it-IT.js summernote-it-IT.min.js summernote-it-IT.min.js.LICENSE.txt summernote-ja-JP.js summernote-ja-JP.min.js summernote-ja-JP.min.js.LICENSE.txt summernote-ko-KR.js summernote-ko-KR.min.js summernote-ko-KR.min.js.LICENSE.txt summernote-lt-LT.js summernote-lt-LT.min.js summernote-lt-LT.min.js.LICENSE.txt summernote-lt-LV.js summernote-lt-LV.min.js summernote-lt-LV.min.js.LICENSE.txt summernote-mn-MN.js summernote-mn-MN.min.js summernote-mn-MN.min.js.LICENSE.txt summernote-nb-NO.js summernote-nb-NO.min.js summernote-nb-NO.min.js.LICENSE.txt summernote-nl-NL.js summernote-nl-NL.min.js summernote-nl-NL.min.js.LICENSE.txt summernote-pl-PL.js summernote-pl-PL.min.js summernote-pl-PL.min.js.LICENSE.txt summernote-pt-BR.js summernote-pt-BR.min.js summernote-pt-BR.min.js.LICENSE.txt summernote-pt-PT.js summernote-pt-PT.min.js summernote-pt-PT.min.js.LICENSE.txt summernote-ro-RO.js summernote-ro-RO.min.js summernote-ro-RO.min.js.LICENSE.txt summernote-ru-RU.js summernote-ru-RU.min.js summernote-ru-RU.min.js.LICENSE.txt summernote-sk-SK.js summernote-sk-SK.min.js summernote-sk-SK.min.js.LICENSE.txt summernote-sl-SI.js summernote-sl-SI.min.js summernote-sl-SI.min.js.LICENSE.txt summernote-sr-RS-Latin.js summernote-sr-RS-Latin.min.js summernote-sr-RS-Latin.min.js.LICENSE.txt summernote-sr-RS.js summernote-sr-RS.min.js summernote-sr-RS.min.js.LICENSE.txt summernote-sv-SE.js summernote-sv-SE.min.js summernote-sv-SE.min.js.LICENSE.txt summernote-ta-IN.js summernote-ta-IN.min.js summernote-ta-IN.min.js.LICENSE.txt summernote-th-TH.js summernote-th-TH.min.js summernote-th-TH.min.js.LICENSE.txt summernote-tr-TR.js summernote-tr-TR.min.js summernote-tr-TR.min.js.LICENSE.txt summernote-uk-UA.js summernote-uk-UA.min.js summernote-uk-UA.min.js.LICENSE.txt summernote-uz-UZ.js summernote-uz-UZ.min.js summernote-uz-UZ.min.js.LICENSE.txt summernote-vi-VN.js summernote-vi-VN.min.js summernote-vi-VN.min.js.LICENSE.txt summernote-zh-CN.js summernote-zh-CN.min.js summernote-zh-CN.min.js.LICENSE.txt summernote-zh-TW.js summernote-zh-TW.min.js summernote-zh-TW.min.js.LICENSE.txt plugin databasic summernote-ext-databasic.css summernote-ext-databasic.js hello summernote-ext-hello.js specialchars summernote-ext-specialchars.js summernote-bs4.css summernote-bs4.js summernote-bs4.min.css summernote-bs4.min.js summernote-bs4.min.js.LICENSE.txt summernote-lite.css summernote-lite.js summernote-lite.min.css summernote-lite.min.js summernote-lite.min.js.LICENSE.txt summernote.css summernote.js summernote.min.css summernote.min.js summernote.min.js.LICENSE.txt sweetalert2-theme-bootstrap-4 bootstrap-4.css bootstrap-4.min.css sweetalert2 sweetalert2.all.js sweetalert2.all.min.js sweetalert2.css sweetalert2.js sweetalert2.min.css sweetalert2.min.js tempusdominus-bootstrap-4 css 
tempusdominus-bootstrap-4.css tempusdominus-bootstrap-4.min.css js tempusdominus-bootstrap-4.js tempusdominus-bootstrap-4.min.js toastr toastr.css toastr.min.css toastr.min.js uplot uPlot.cjs.js uPlot.esm.js uPlot.iife.js uPlot.iife.min.js uPlot.min.css robots.txt resources css app.css js app.js bootstrap.js lang en auth.php pagination.php passwords.php validation.php views auth login.blade.php passwords confirm.blade.php email.blade.php reset.blade.php register.blade.php verify.blade.php home.blade.php layouts app.blade.php menu.blade.php modals.blade.php pageheader.blade.php sidebar.blade.php mylist index.blade.php routes api.php channels.php console.php web.php server.php tests CreatesApplication.php Feature ExampleTest.php TestCase.php Unit ExampleTest.php webpack.mix.js
# Getting Started with Create React App This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app). ## Available Scripts In the project directory, you can run: ### `yarn start` Runs the app in the development mode.\ Open [http://localhost:3000](http://localhost:3000) to view it in the browser. The page will reload if you make edits.\ You will also see any lint errors in the console. ### `yarn test` Launches the test runner in the interactive watch mode.\ See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. ### `yarn build` Builds the app for production to the `build` folder.\ It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.\ Your app is ready to be deployed! See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. ### `yarn eject` **Note: this is a one-way operation. Once you `eject`, you can’t go back!** If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own. You don’t have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it. ## Learn More You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). To learn React, check out the [React documentation](https://reactjs.org/). 
### Code Splitting This section has moved here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting) ### Analyzing the Bundle Size This section has moved here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size) ### Making a Progressive Web App This section has moved here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app) ### Advanced Configuration This section has moved here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration) ### Deployment This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment) ### `yarn build` fails to minify This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify) # NEAR lista de precios > Creación de listas de artículos, precios y tienda de compra basado en protocolo NEAR, se compone de un servicio backend hecho en NodeJS que interactúa con el protocolo NEAR mediante la libreria NEAR-API-JS, de esa manera se ofrece un REST API sencillo. Se integra ademas un frontend basado en Laravel que opera con los endpoints del backend para la creación y eliminación de artículos, precios y tiendas de compra en el blockchain de NEAR. --- ## Descripcion General | Ruta | Metodo | Descripcion | | ------------------------------------------ | ------ | --------------------------------------------------------------------------------------------------------------------------- | | [`/deploy`](#deploy) | POST | Despliega un Contrato en NEAR. | [`/addProduct`](#addProduct) | POST | Agrega un producto al listado en el storage del blockchain | | [`/getAllProducts`](#getAllProducts) | POST | Obtiene un listado de productos almacenados en el storage del blockchain | | [`/getAllProductsByListId`](#getAllProductsByListId) | POST | Obtiene un listado de productos ordenados por categoria almacenados en el storage del blockchain. | | [`/getProduct`](#getProduct) | POST | Obtiene un producto por id almacenado en el storage del blockchain. | | [`/deleteProduct`](#deleteProduct) | POST | Elimina un producto del listado almacenado en el storage del blockchain. | | --- ## Requisitos - [Cuenta NEAR](https://docs.near.org/docs/develop/basics/create-account) _(Con acceso a llave privada o frase)_ - [Node.js](https://nodejs.org/en/download/package-manager/) - [npm](https://www.npmjs.com/get-npm) o [Yarn](https://yarnpkg.com/getting-started/install) - Herramienta API como Postman [Postman](https://www.postman.com/downloads/) --- ## Configuración 1. Clonar repositorio ```bash git clone [email protected]:miguelacosta84/NEAR-lista-de-precios.git ``` 2. Instalar dependencias ```bash npm install ``` 3. Configurar `near-api-server.config.json` Configuracion default: ```json { "server_host": "localhost", "server_port": 3000, "rpc_node": "https://rpc.testnet.near.org", "init_disabled": true } ``` _**Note:** `init_disabled` no es utilizado 4. 
Iniciar Servidor ```bash node app ``` --- # CONTRATOS ## `/deploy` > _Despliega un Contrato en la red blockchain de NEAR basado en el archivo WASM de la carpeta `/contracts` ._ **Method:** **`POST`** | Param | Description | | -------------------------------- | ------------------------------------------------------------------------------------ | | `account_id` | _Cuenta sobre la que se desplegara el contrato._ | | `seed_phrase` _OR_ `private_key` | _Seed phrase O private key del id de la cuenta indicada en `account_id`._ | | `contract` | _Archivo WASM ubicado en la carpeta `/contracts` resultante de compilar el contrato del proyecto._ | Ejemplo: ```json { "account_id": "dev-1636081698178-54540899156051", "seed_phrase": "witch collapse practice feed shame open despair creek road again ice least", "contract": "save-together.wasm" } ``` <details> <summary><strong>Respuesta ejemplo:</strong> </summary> <p> ```json { "status": { "SuccessValue": "" }, "transaction": { "signer_id": "dev-1636081698178-54540899156051", "public_key": "ed25519:Cgg4i7ciid8uG4K5Vnjzy5N4PXLst5aeH9ApRAUA3y8U", "nonce": 5, "receiver_id": "dev-1636081698178-54540899156051", "actions": [ { "DeployContract": { "code": "hT9saWV3aok50F8JundSIWAW+lxOcBOns1zenB2fB4E=" } } ], "signature": "ed25519:3VrppDV8zMMRXErdBJVU9MMbbKZ4SK1pBZqXoyw3oSSiXTeyR2W7upNhhZPdFJ1tNBr9h9SnsTVeBm5W9Bhaemis", "hash": "HbokHoCGcjGQZrz8yU8QDqBeAm5BN8iPjaSMXu7Yp2KY" }, "transaction_outcome": { "proof": [ { "hash": "Dfjn2ro1dXrPqgzd5zU7eJpCMKnATm295ceocX73Qiqn", "direction": "Right" }, { "hash": "9raAgMrEmLpL6uiynMAi9rykJrXPEZN4WSxLJUJXbipY", "direction": "Right" } ], "block_hash": "B64cQPDNkwiCcN3SGXU2U5Jz5M9EKF1hC6uDi4S15Fb3", "id": "HbokHoCGcjGQZrz8yU8QDqBeAm5BN8iPjaSMXu7Yp2KY", "outcome": { "logs": [], "receipt_ids": ["D94GcZVXE2WgPGuaJPJq8MdeEUidrN1FPkuU75NXWm7X"], "gas_burnt": 1733951676474, "tokens_burnt": "173395167647400000000", "executor_id": "dev-1636081698178-54540899156051", "status": { "SuccessReceiptId": "D94GcZVXE2WgPGuaJPJq8MdeEUidrN1FPkuU75NXWm7X" } } }, "receipts_outcome": [ { "proof": [ { "hash": "3HLkv7KrQ9LPptX658QiwkFagv8NwjcxF6ti15Een4uh", "direction": "Left" }, { "hash": "9raAgMrEmLpL6uiynMAi9rykJrXPEZN4WSxLJUJXbipY", "direction": "Right" } ], "block_hash": "B64cQPDNkwiCcN3SGXU2U5Jz5M9EKF1hC6uDi4S15Fb3", "id": "D94GcZVXE2WgPGuaJPJq8MdeEUidrN1FPkuU75NXWm7X", "outcome": { "logs": [], "receipt_ids": [], "gas_burnt": 1733951676474, "tokens_burnt": "173395167647400000000", "executor_id": "dev-1636081698178-54540899156051", "status": { "SuccessValue": "" } } } ] } ``` </p> </details> --- ## `/addProduct` > _Agrega un producto al listado en el storage del blockchain._ **Method:** **`POST`** | Param | Description | | -------------------------------- | --------------------------------------------------------------------------------------------------------------------- | | `account_id` | _Cuenta que ejecutara el llamado al metodo del contrato y pagara el costo en gas._ | | `seed_phrase` _O_ `private_key` | _Seed phrase O private key del id de la cuenta indicada en account_id._ | | `contract` | _Account id del contrato que estas llamando._ | | `method` | _Metodo publico correspondiente al contrato que se esta mandando llamar._ | | `params` | _Argumentos del metodo que se manda llamar. 
Su uso es opcional._ | Example: ```json { "account_id": "dev-1636081698178-54540899156051", "private_key": "2Kh6PJjxH5PTTsVnYqtgnnwXHeafvVGczDXoCb33ws8reyq8J4oBYix1KP2ugRQ7q9NQUyPcVFTtbSG3ARVKETfK", "contract": "dev-1636081698178-54540899156051", "method": "addProduct", "params": { "name":"Tortillas","price":"15","store":"Soriana","myListId":"102" } } ``` <details> <summary><strong>Respuesta ejemplo:</strong> </summary> <p> ```json { "isError": false, "message": "", "id": 0, "exist": false, "rows": [], "code": 200, "modelo": { "receipts_outcome": [ { "block_hash": "5Qd3KaCW5Tk1WSTUoew1cM62nLQC6HpSc84VWD1As9D2", "id": "D55c6pHkxx8yPXPvcAeAwkGP5zMZu8adYGkzPsAyBFgi", "outcome": { "executor_id": "dev-1636077519984-29378305835325", "gas_burnt": 5456675905386, "logs": [], "metadata": { "gas_profile": [ { "cost": "BASE", "cost_category": "WASM_HOST_COST", "gas_used": "2647681110" }, { "cost": "CONTRACT_COMPILE_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "35445963" }, { "cost": "CONTRACT_COMPILE_BYTES", "cost_category": "WASM_HOST_COST", "gas_used": "3903234000" }, { "cost": "READ_MEMORY_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "20878905600" }, { "cost": "READ_MEMORY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "604411947" }, { "cost": "WRITE_MEMORY_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "5607589722" }, { "cost": "WRITE_MEMORY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "204282900" }, { "cost": "READ_REGISTER_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "5034330372" }, { "cost": "READ_REGISTER_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "7392150" }, { "cost": "WRITE_REGISTER_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "8596567458" }, { "cost": "WRITE_REGISTER_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "288918864" }, { "cost": "STORAGE_WRITE_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "192590208000" }, { "cost": "STORAGE_WRITE_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "2325934611" }, { "cost": "STORAGE_WRITE_VALUE_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "3256946595" }, { "cost": "STORAGE_WRITE_EVICTED_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "32117307" }, { "cost": "STORAGE_READ_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "56356845750" }, { "cost": "STORAGE_READ_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "402382929" }, { "cost": "STORAGE_READ_VALUE_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "5611005" }, { "cost": "STORAGE_HAS_KEY_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "54039896625" }, { "cost": "STORAGE_HAS_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "246326760" }, { "cost": "TOUCHING_TRIE_NODE", "cost_category": "WASM_HOST_COST", "gas_used": "2334783609270" } ], "version": 1 }, "receipt_ids": [ "5bczBo7YHqUGFRuqXf8QYUkATQu1tgSqhxtnAr76szD2" ], "status": { "SuccessValue": "" }, "tokens_burnt": "545667590538600000000" }, "proof": [ { "direction": "Left", "hash": "AKAjEvJb2C6UFAXCokG76HKjacypLaWhomsVEgABCcmH" } ] }, { "block_hash": "jVnRvVu1SRQUXA16rRzJvsssLRi7Ui1shM6D2jtw2de", "id": "5bczBo7YHqUGFRuqXf8QYUkATQu1tgSqhxtnAr76szD2", "outcome": { "executor_id": "dev-1636077519984-29378305835325", "gas_burnt": 223182562500, "logs": [], "metadata": { "gas_profile": [], "version": 1 }, "receipt_ids": [], "status": { "SuccessValue": "" }, "tokens_burnt": "0" }, "proof": [] } ], "status": { "SuccessValue": "" }, "transaction": { "actions": [ { "FunctionCall": { "args": 
"eyJuYW1lIjoiQkFUIERFIEJBU0VCQUxMIiwicHJpY2UiOiIyMzAiLCJzdG9yZSI6IldhbHRtYXJ0IiwibXlMaXN0SWQiOiI0In0=", "deposit": "0", "gas": 30000000000000, "method_name": "addProduct" } } ], "hash": "GGi7SMumYVwBnUHkEmFta1SCT9Tq1ZLCsB2hErAZ5yp8", "nonce": 70093263000023, "public_key": "ed25519:9VFReAeRicyUKjx3A6vFqScSAALB3utmG3gDzKx552Tt", "receiver_id": "dev-1636077519984-29378305835325", "signature": "ed25519:225G927xHshu3G2hGj1UTf926DxRvvQSK7cEefcdmk18scMLyepafxWqbYAhTqywCcTxpM9xGHua5ocx5zRA8Xq2", "signer_id": "dev-1636077519984-29378305835325" }, "transaction_outcome": { "block_hash": "5Qd3KaCW5Tk1WSTUoew1cM62nLQC6HpSc84VWD1As9D2", "id": "GGi7SMumYVwBnUHkEmFta1SCT9Tq1ZLCsB2hErAZ5yp8", "outcome": { "executor_id": "dev-1636077519984-29378305835325", "gas_burnt": 2428108818456, "logs": [], "metadata": { "gas_profile": null, "version": 1 }, "receipt_ids": [ "D55c6pHkxx8yPXPvcAeAwkGP5zMZu8adYGkzPsAyBFgi" ], "status": { "SuccessReceiptId": "D55c6pHkxx8yPXPvcAeAwkGP5zMZu8adYGkzPsAyBFgi" }, "tokens_burnt": "242810881845600000000" }, "proof": [ { "direction": "Right", "hash": "5RGZ9x3jgBoLWKfQvHqQFjJUGACT8Lf9a4r9w6SXYKxP" } ] } }, "request": [], "errors": [] } ``` </p> </details> --- ## `/getAllProducts` > _Obtiene un listado de productos almacenados en el storage del blockchain._ **Method:** **`POST`** | Param | Description | | -------------------------------- | --------------------------------------------------------------------------------------------------------------------- | | `account_id` | _Cuenta que ejecutara el llamado al metodo del contrato y pagara el costo en gas._ | | `seed_phrase` _O_ `private_key` | _Seed phrase O private key del id de la cuenta indicada en account_id._ | | `contract` | _Account id del contrato que estas llamando._ | | `method` | _Metodo publico correspondiente al contrato que se esta mandando llamar._ | | `params` | _Argumentos del metodo que se manda llamar. Su uso es opcional._ | Example: ```json { "account_id": "dev-1636081698178-54540899156051", "private_key": "2Kh6PJjxH5PTTsVnYqtgnnwXHeafvVGczDXoCb33ws8reyq8J4oBYix1KP2ugRQ7q9NQUyPcVFTtbSG3ARVKETfK", "contract": "dev-1636081698178-54540899156051", "method": "getAllProducts" } ``` <details> <summary><strong>Respuesta ejemplo:</strong> </summary> <p> ```json ``` </p> </details> --- ## `/getAllProductsByListId` > _Obtiene un listado de productos ordenados por categoria almacenados en el storage del blockchain._ **Method:** **`POST`** | Param | Description | | -------------------------------- | --------------------------------------------------------------------------------------------------------------------- | | `account_id` | _Cuenta que ejecutara el llamado al metodo del contrato y pagara el costo en gas._ | | `seed_phrase` _O_ `private_key` | _Seed phrase O private key del id de la cuenta indicada en account_id._ | | `contract` | _Account id del contrato que estas llamando._ | | `method` | _Metodo publico correspondiente al contrato que se esta mandando llamar._ | | `params` | _Argumentos del metodo que se manda llamar. 
Su uso es opcional._ | Example: ```json { "account_id": "dev-1636081698178-54540899156051", "private_key": "2Kh6PJjxH5PTTsVnYqtgnnwXHeafvVGczDXoCb33ws8reyq8J4oBYix1KP2ugRQ7q9NQUyPcVFTtbSG3ARVKETfK", "contract": "dev-1636081698178-54540899156051", "method": "getAllProductsByListId", "params": {"listId":"102"} } ``` <details> <summary><strong>Respuesta ejemplo:</strong> </summary> <p> ```json ``` </p> </details> --- ## `/getProduct` > _Obtiene un producto por id almacenado en el storage del blockchain._ **Method:** **`POST`** | Param | Description | | -------------------------------- | --------------------------------------------------------------------------------------------------------------------- | | `account_id` | _Cuenta que ejecutara el llamado al metodo del contrato y pagara el costo en gas._ | | `seed_phrase` _O_ `private_key` | _Seed phrase O private key del id de la cuenta indicada en account_id._ | | `contract` | _Account id del contrato que estas llamando._ | | `method` | _Metodo publico correspondiente al contrato que se esta mandando llamar._ | | `params` | _Argumentos del metodo que se manda llamar. Su uso es opcional._ | Example: ```json { "account_id": "dev-1636081698178-54540899156051", "private_key": "2Kh6PJjxH5PTTsVnYqtgnnwXHeafvVGczDXoCb33ws8reyq8J4oBYix1KP2ugRQ7q9NQUyPcVFTtbSG3ARVKETfK", "contract": "dev-1636081698178-54540899156051", "method": "getProduct", "params": {"key":"0"} } ``` <details> <summary><strong>Respuesta ejemplo:</strong> </summary> <p> ```json { "isError": false, "message": "", "id": 0, "exist": false, "rows": [], "code": 200, "modelo": [], "request": [], "errors": [], "data": [ { "id": "1", "name": "Balon de futbol", "price": "120", "store": "Soriana", "myListId": "4" }, { "id": "2", "name": "Arroz los valles", "price": "35", "store": "Bodega Abarrey", "myListId": "1" }, { "id": "3", "name": "BAT DE BASEBALL", "price": "230", "store": "Waltmart", "myListId": "4" } ] } ``` </p> </details> --- ## `/deleteProduct` > _Elimina un producto del listado almacenado en el storage del blockchain._ **Method:** **`POST`** | Param | Description | | -------------------------------- | --------------------------------------------------------------------------------------------------------------------- | | `account_id` | _Cuenta que ejecutara el llamado al metodo del contrato y pagara el costo en gas._ | | `seed_phrase` _O_ `private_key` | _Seed phrase O private key del id de la cuenta indicada en account_id._ | | `contract` | _Account id del contrato que estas llamando._ | | `method` | _Metodo publico correspondiente al contrato que se esta mandando llamar._ | | `params` | _Argumentos del metodo que se manda llamar.
Su uso es opcional._ | Example: ```json { "account_id": "dev-1636081698178-54540899156051", "private_key": "2Kh6PJjxH5PTTsVnYqtgnnwXHeafvVGczDXoCb33ws8reyq8J4oBYix1KP2ugRQ7q9NQUyPcVFTtbSG3ARVKETfK", "contract": "dev-1636081698178-54540899156051", "method": "deleteProduct", "params": {"key":"0"} } ``` <details> <summary><strong>Respuesta ejemplo:</strong> </summary> <p> ```json { "isError": false, "message": "", "id": 0, "exist": false, "rows": [], "code": 200, "modelo": { "receipts_outcome": [ { "block_hash": "3XdH6bxSNvybCb9GphS94GwtfVD9fSo56L239yGZ2CeD", "id": "A873A8pA3tf7U8cg9oi9Yz6rYMPq8X8YugN7AXwwfxX9", "outcome": { "executor_id": "dev-1636077519984-29378305835325", "gas_burnt": 7011999229704, "logs": [], "metadata": { "gas_profile": [ { "cost": "BASE", "cost_category": "WASM_HOST_COST", "gas_used": "4501057887" }, { "cost": "CONTRACT_COMPILE_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "35445963" }, { "cost": "CONTRACT_COMPILE_BYTES", "cost_category": "WASM_HOST_COST", "gas_used": "3903234000" }, { "cost": "READ_MEMORY_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "23488768800" }, { "cost": "READ_MEMORY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "330715971" }, { "cost": "WRITE_MEMORY_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "11215179444" }, { "cost": "WRITE_MEMORY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "315957552" }, { "cost": "READ_REGISTER_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "10068660744" }, { "cost": "READ_REGISTER_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "11433192" }, { "cost": "WRITE_REGISTER_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "20058657402" }, { "cost": "WRITE_REGISTER_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "840145644" }, { "cost": "STORAGE_WRITE_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "64196736000" }, { "cost": "STORAGE_WRITE_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "916277271" }, { "cost": "STORAGE_WRITE_VALUE_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "31018539" }, { "cost": "STORAGE_WRITE_EVICTED_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "32117307" }, { "cost": "STORAGE_READ_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "169070537250" }, { "cost": "STORAGE_READ_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "1021433589" }, { "cost": "STORAGE_READ_VALUE_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "589155525" }, { "cost": "STORAGE_REMOVE_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "106946061000" }, { "cost": "STORAGE_REMOVE_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "764407680" }, { "cost": "STORAGE_REMOVE_RET_VALUE_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "1199281824" }, { "cost": "STORAGE_HAS_KEY_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "108079793250" }, { "cost": "STORAGE_HAS_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "615816900" }, { "cost": "TOUCHING_TRIE_NODE", "cost_category": "WASM_HOST_COST", "gas_used": "3880571378166" } ], "version": 1 }, "receipt_ids": [ "HVQqC8zMJxnhhciFppgdFxbAsMd3vv8boByDaZK77L1J" ], "status": { "SuccessValue": "" }, "tokens_burnt": "701199922970400000000" }, "proof": [ { "direction": "Left", "hash": "Efcneb7gaGwzyYNarcHEBjrZXSUFZn2bt8SvcGHGACEt" } ] }, { "block_hash": "F4zRgUy7c7jMYEnEVvG2zVsyjHEDQEbeXbDrVdkGxs8s", "id": "HVQqC8zMJxnhhciFppgdFxbAsMd3vv8boByDaZK77L1J", "outcome": { "executor_id": "dev-1636077519984-29378305835325", "gas_burnt": 223182562500, "logs": [], 
"metadata": { "gas_profile": [], "version": 1 }, "receipt_ids": [], "status": { "SuccessValue": "" }, "tokens_burnt": "0" }, "proof": [] } ], "status": { "SuccessValue": "" }, "transaction": { "actions": [ { "FunctionCall": { "args": "eyJrZXkiOiIzIn0=", "deposit": "0", "gas": 30000000000000, "method_name": "deleteProduct" } } ], "hash": "4USCGvowa6VFwTo8b4t81fEk969jfL9ZCPjXXpZTCfLN", "nonce": 70093263000024, "public_key": "ed25519:9VFReAeRicyUKjx3A6vFqScSAALB3utmG3gDzKx552Tt", "receiver_id": "dev-1636077519984-29378305835325", "signature": "ed25519:4mHm3w7yi3a4rxUENCeV6KxV2FXTAu5PJPXyBjvfzz2bLysnfkWZxHNvbZAdh2LJ8GGEvfSsPiPoDpZkrqUkDYb9", "signer_id": "dev-1636077519984-29378305835325" }, "transaction_outcome": { "block_hash": "3XdH6bxSNvybCb9GphS94GwtfVD9fSo56L239yGZ2CeD", "id": "4USCGvowa6VFwTo8b4t81fEk969jfL9ZCPjXXpZTCfLN", "outcome": { "executor_id": "dev-1636077519984-29378305835325", "gas_burnt": 2427974662416, "logs": [], "metadata": { "gas_profile": null, "version": 1 }, "receipt_ids": [ "A873A8pA3tf7U8cg9oi9Yz6rYMPq8X8YugN7AXwwfxX9" ], "status": { "SuccessReceiptId": "A873A8pA3tf7U8cg9oi9Yz6rYMPq8X8YugN7AXwwfxX9" }, "tokens_burnt": "242797466241600000000" }, "proof": [ { "direction": "Right", "hash": "GGWiS8wbPRzG5iYcFAovA9NrwCCH37iZ9sVkWdCrBvCT" } ] } }, "request": [], "errors": [] } ``` </p> </details> --- # Pruebas Unitarias > Desde una sesion de consola acceder a `smart-contract-code` y ejecutar el comando: ```bash yarn asp ``` --- # Wireframe ![Screenshot](./wireframe/login.png) ![Screenshot](./wireframe/registro.png) ![Screenshot](./wireframe/listproducts.png) # NEAR lista de precios > Creación de listas de artículos, precios y tienda de compra basado en protocolo NEAR, se compone de un servicio backend hecho en NodeJS que interactúa con el protocolo NEAR mediante la libreria NEAR-API-JS, de esa manera se ofrece un REST API sencillo. Se integra ademas un frontend basado en Laravel que opera con los endpoints del backend para la creación y eliminación de artículos, precios y tiendas de compra en el blockchain de NEAR. ###### Demo: * [REST API Backend con conectividad al NEAR Testnet y Laravel Frontend](https://nearpricelist.tk) --- ## Descripcion General | Ruta | Metodo | Descripcion | | ------------------------------------------ | ------ | --------------------------------------------------------------------------------------------------------------------------- | | [`/deploy`](#deploy) | POST | Despliega un Contrato en NEAR. | [`/addProduct`](#addProduct) | POST | Agrega un producto al listado en el storage del blockchain | | [`/getAllProducts`](#getAllProducts) | POST | Obtiene un listado de productos almacenados en el storage del blockchain | | [`/getAllProductsByListId`](#getAllProductsByListId) | POST | Obtiene un listado de productos ordenados por categoria almacenados en el storage del blockchain. | | [`/getProduct`](#getProduct) | POST | Obtiene un producto por id almacenado en el storage del blockchain. | | [`/deleteProduct`](#deleteProduct) | POST | Elimina un producto del listado almacenado en el storage del blockchain. | | --- ## Requisitos - [Cuenta NEAR](https://docs.near.org/docs/develop/basics/create-account) _(Con acceso a llave privada o frase)_ - [Node.js](https://nodejs.org/en/download/package-manager/) - [npm](https://www.npmjs.com/get-npm) o [Yarn](https://yarnpkg.com/getting-started/install) - Herramienta API como Postman [Postman](https://www.postman.com/downloads/) --- ## Configuración 1. 
Clonar repositorio ```bash git clone [email protected]:miguelacosta84/NEAR-lista-de-precios.git ``` 2. Instalar dependencias ```bash npm install ``` 3. Configurar `near-api-server.config.json` Configuracion default: ```json { "server_host": "localhost", "server_port": 3000, "rpc_node": "https://rpc.testnet.near.org", "init_disabled": true } ``` _**Note:** `init_disabled` no es utilizado 4. Iniciar Servidor ```bash node app ``` --- # CONTRATOS ## `/deploy` > _Despliega un Contrato en la red blockchain de NEAR basado en el archivo WASM de la carpeta `/contracts` ._ **Method:** **`POST`** | Param | Description | | -------------------------------- | ------------------------------------------------------------------------------------ | | `account_id` | _Cuenta sobre la que se desplegara el contrato._ | | `seed_phrase` _OR_ `private_key` | _Seed phrase O private key del id de la cuenta indicada en `account_id`._ | | `contract` | _Archivo WASM ubicado en la carpeta `/contracts` resultante de compilar el contrato del proyecto._ | Ejemplo: ```json { "account_id": "dev-1636081698178-54540899156051", "seed_phrase": "witch collapse practice feed shame open despair creek road again ice least", "contract": "save-together.wasm" } ``` <details> <summary><strong>Respuesta ejemplo:</strong> </summary> <p> ```json { "status": { "SuccessValue": "" }, "transaction": { "signer_id": "dev-1636081698178-54540899156051", "public_key": "ed25519:Cgg4i7ciid8uG4K5Vnjzy5N4PXLst5aeH9ApRAUA3y8U", "nonce": 5, "receiver_id": "dev-1636081698178-54540899156051", "actions": [ { "DeployContract": { "code": "hT9saWV3aok50F8JundSIWAW+lxOcBOns1zenB2fB4E=" } } ], "signature": "ed25519:3VrppDV8zMMRXErdBJVU9MMbbKZ4SK1pBZqXoyw3oSSiXTeyR2W7upNhhZPdFJ1tNBr9h9SnsTVeBm5W9Bhaemis", "hash": "HbokHoCGcjGQZrz8yU8QDqBeAm5BN8iPjaSMXu7Yp2KY" }, "transaction_outcome": { "proof": [ { "hash": "Dfjn2ro1dXrPqgzd5zU7eJpCMKnATm295ceocX73Qiqn", "direction": "Right" }, { "hash": "9raAgMrEmLpL6uiynMAi9rykJrXPEZN4WSxLJUJXbipY", "direction": "Right" } ], "block_hash": "B64cQPDNkwiCcN3SGXU2U5Jz5M9EKF1hC6uDi4S15Fb3", "id": "HbokHoCGcjGQZrz8yU8QDqBeAm5BN8iPjaSMXu7Yp2KY", "outcome": { "logs": [], "receipt_ids": ["D94GcZVXE2WgPGuaJPJq8MdeEUidrN1FPkuU75NXWm7X"], "gas_burnt": 1733951676474, "tokens_burnt": "173395167647400000000", "executor_id": "dev-1636081698178-54540899156051", "status": { "SuccessReceiptId": "D94GcZVXE2WgPGuaJPJq8MdeEUidrN1FPkuU75NXWm7X" } } }, "receipts_outcome": [ { "proof": [ { "hash": "3HLkv7KrQ9LPptX658QiwkFagv8NwjcxF6ti15Een4uh", "direction": "Left" }, { "hash": "9raAgMrEmLpL6uiynMAi9rykJrXPEZN4WSxLJUJXbipY", "direction": "Right" } ], "block_hash": "B64cQPDNkwiCcN3SGXU2U5Jz5M9EKF1hC6uDi4S15Fb3", "id": "D94GcZVXE2WgPGuaJPJq8MdeEUidrN1FPkuU75NXWm7X", "outcome": { "logs": [], "receipt_ids": [], "gas_burnt": 1733951676474, "tokens_burnt": "173395167647400000000", "executor_id": "dev-1636081698178-54540899156051", "status": { "SuccessValue": "" } } } ] } ``` </p> </details> --- ## `/addProduct` > _Agrega un producto al listado en el storage del blockchain._ **Method:** **`POST`** | Param | Description | | -------------------------------- | --------------------------------------------------------------------------------------------------------------------- | | `account_id` | _Cuenta que ejecutara el llamado al metodo del contrato y pagara el costo en gas._ | | `seed_phrase` _O_ `private_key` | _Seed phrase O private key del id de la cuenta indicada en account_id._ | | `contract` | _Account id del contrato que estas llamando._ | | `method` | 
_Metodo publico correspondiente al contrato que se esta mandando llamar._ | | `params` | _Argumentos del metodo que se manda llamar. Su uso es opcional._ | Example: ```json { "account_id": "dev-1636081698178-54540899156051", "private_key": "2Kh6PJjxH5PTTsVnYqtgnnwXHeafvVGczDXoCb33ws8reyq8J4oBYix1KP2ugRQ7q9NQUyPcVFTtbSG3ARVKETfK", "contract": "dev-1636081698178-54540899156051", "method": "addProduct", "params": { "name":"Tortillas","price":"15","store":"Soriana","myListId":"102" } } ``` <details> <summary><strong>Respuesta ejemplo:</strong> </summary> <p> ```json { "isError": false, "message": "", "id": 0, "exist": false, "rows": [], "code": 200, "modelo": { "receipts_outcome": [ { "block_hash": "5Qd3KaCW5Tk1WSTUoew1cM62nLQC6HpSc84VWD1As9D2", "id": "D55c6pHkxx8yPXPvcAeAwkGP5zMZu8adYGkzPsAyBFgi", "outcome": { "executor_id": "dev-1636077519984-29378305835325", "gas_burnt": 5456675905386, "logs": [], "metadata": { "gas_profile": [ { "cost": "BASE", "cost_category": "WASM_HOST_COST", "gas_used": "2647681110" }, { "cost": "CONTRACT_COMPILE_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "35445963" }, { "cost": "CONTRACT_COMPILE_BYTES", "cost_category": "WASM_HOST_COST", "gas_used": "3903234000" }, { "cost": "READ_MEMORY_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "20878905600" }, { "cost": "READ_MEMORY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "604411947" }, { "cost": "WRITE_MEMORY_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "5607589722" }, { "cost": "WRITE_MEMORY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "204282900" }, { "cost": "READ_REGISTER_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "5034330372" }, { "cost": "READ_REGISTER_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "7392150" }, { "cost": "WRITE_REGISTER_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "8596567458" }, { "cost": "WRITE_REGISTER_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "288918864" }, { "cost": "STORAGE_WRITE_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "192590208000" }, { "cost": "STORAGE_WRITE_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "2325934611" }, { "cost": "STORAGE_WRITE_VALUE_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "3256946595" }, { "cost": "STORAGE_WRITE_EVICTED_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "32117307" }, { "cost": "STORAGE_READ_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "56356845750" }, { "cost": "STORAGE_READ_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "402382929" }, { "cost": "STORAGE_READ_VALUE_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "5611005" }, { "cost": "STORAGE_HAS_KEY_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "54039896625" }, { "cost": "STORAGE_HAS_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "246326760" }, { "cost": "TOUCHING_TRIE_NODE", "cost_category": "WASM_HOST_COST", "gas_used": "2334783609270" } ], "version": 1 }, "receipt_ids": [ "5bczBo7YHqUGFRuqXf8QYUkATQu1tgSqhxtnAr76szD2" ], "status": { "SuccessValue": "" }, "tokens_burnt": "545667590538600000000" }, "proof": [ { "direction": "Left", "hash": "AKAjEvJb2C6UFAXCokG76HKjacypLaWhomsVEgABCcmH" } ] }, { "block_hash": "jVnRvVu1SRQUXA16rRzJvsssLRi7Ui1shM6D2jtw2de", "id": "5bczBo7YHqUGFRuqXf8QYUkATQu1tgSqhxtnAr76szD2", "outcome": { "executor_id": "dev-1636077519984-29378305835325", "gas_burnt": 223182562500, "logs": [], "metadata": { "gas_profile": [], "version": 1 }, "receipt_ids": [], "status": { "SuccessValue": "" }, "tokens_burnt": "0" }, 
"proof": [] } ], "status": { "SuccessValue": "" }, "transaction": { "actions": [ { "FunctionCall": { "args": "eyJuYW1lIjoiQkFUIERFIEJBU0VCQUxMIiwicHJpY2UiOiIyMzAiLCJzdG9yZSI6IldhbHRtYXJ0IiwibXlMaXN0SWQiOiI0In0=", "deposit": "0", "gas": 30000000000000, "method_name": "addProduct" } } ], "hash": "GGi7SMumYVwBnUHkEmFta1SCT9Tq1ZLCsB2hErAZ5yp8", "nonce": 70093263000023, "public_key": "ed25519:9VFReAeRicyUKjx3A6vFqScSAALB3utmG3gDzKx552Tt", "receiver_id": "dev-1636077519984-29378305835325", "signature": "ed25519:225G927xHshu3G2hGj1UTf926DxRvvQSK7cEefcdmk18scMLyepafxWqbYAhTqywCcTxpM9xGHua5ocx5zRA8Xq2", "signer_id": "dev-1636077519984-29378305835325" }, "transaction_outcome": { "block_hash": "5Qd3KaCW5Tk1WSTUoew1cM62nLQC6HpSc84VWD1As9D2", "id": "GGi7SMumYVwBnUHkEmFta1SCT9Tq1ZLCsB2hErAZ5yp8", "outcome": { "executor_id": "dev-1636077519984-29378305835325", "gas_burnt": 2428108818456, "logs": [], "metadata": { "gas_profile": null, "version": 1 }, "receipt_ids": [ "D55c6pHkxx8yPXPvcAeAwkGP5zMZu8adYGkzPsAyBFgi" ], "status": { "SuccessReceiptId": "D55c6pHkxx8yPXPvcAeAwkGP5zMZu8adYGkzPsAyBFgi" }, "tokens_burnt": "242810881845600000000" }, "proof": [ { "direction": "Right", "hash": "5RGZ9x3jgBoLWKfQvHqQFjJUGACT8Lf9a4r9w6SXYKxP" } ] } }, "request": [], "errors": [] } ``` </p> </details> --- ## `/getAllProducts` > _Obtiene un listado de productos almacenados en el storage del blockchain._ **Method:** **`POST`** | Param | Description | | -------------------------------- | --------------------------------------------------------------------------------------------------------------------- | | `account_id` | _Cuenta que ejecutara el llamado al metodo del contrato y pagara el costo en gas._ | | `seed_phrase` _O_ `private_key` | _Seed phrase O private key del id de la cuenta indicada en account_id._ | | `contract` | _Account id del contrato que estas llamando._ | | `method` | _Metodo publico correspondiente al contrato que se esta mandando llamar._ | | `params` | _Argumentos del metodo que se manda llamar. Su uso es opcional._ | Example: ```json { "account_id": "dev-1636081698178-54540899156051", "private_key": "2Kh6PJjxH5PTTsVnYqtgnnwXHeafvVGczDXoCb33ws8reyq8J4oBYix1KP2ugRQ7q9NQUyPcVFTtbSG3ARVKETfK", "contract": "dev-1636081698178-54540899156051", "method": "getAllProducts" } ``` <details> <summary><strong>Respuesta ejemplo:</strong> </summary> <p> ```json ``` </p> </details> --- ## `/getAllProductsByListId` > _Obtiene un listado de productos ordenados por categoria almacenados en el storage del blockchain._ **Method:** **`POST`** | Param | Description | | -------------------------------- | --------------------------------------------------------------------------------------------------------------------- | | `account_id` | _Cuenta que ejecutara el llamado al metodo del contrato y pagara el costo en gas._ | | `seed_phrase` _O_ `private_key` | _Seed phrase O private key del id de la cuenta indicada en account_id._ | | `contract` | _Account id del contrato que estas llamando._ | | `method` | _Metodo publico correspondiente al contrato que se esta mandando llamar._ | | `params` | _Argumentos del metodo que se manda llamar. 
Su uso es opcional._ | Example: ```json { "account_id": "dev-1636081698178-54540899156051", "private_key": "2Kh6PJjxH5PTTsVnYqtgnnwXHeafvVGczDXoCb33ws8reyq8J4oBYix1KP2ugRQ7q9NQUyPcVFTtbSG3ARVKETfK", "contract": "dev-1636081698178-54540899156051", "method": "getAllProductsByListId", "params": {"listId":"102"} } ``` <details> <summary><strong>Respuesta ejemplo:</strong> </summary> <p> ```json ``` </p> </details> --- ## `/getProduct` > _Obtiene un producto por id almacenado en el storage del blockchain._ **Method:** **`POST`** | Param | Description | | -------------------------------- | --------------------------------------------------------------------------------------------------------------------- | | `account_id` | _Cuenta que ejecutara el llamado al metodo del contrato y pagara el costo en gas._ | | `seed_phrase` _O_ `private_key` | _Seed phrase O private key del id de la cuenta indicada en account_id._ | | `contract` | _Account id del contrato que estas llamando._ | | `method` | _Metodo publico correspondiente al contrato que se esta mandando llamar._ | | `params` | _Argumentos del metodo que se manda llamar. Su uso es opcional._ | Example: ```json { "account_id": "dev-1636081698178-54540899156051", "private_key": "2Kh6PJjxH5PTTsVnYqtgnnwXHeafvVGczDXoCb33ws8reyq8J4oBYix1KP2ugRQ7q9NQUyPcVFTtbSG3ARVKETfK", "contract": "dev-1636081698178-54540899156051", "method": "getProduct", "params": {"key":"0"} } ``` <details> <summary><strong>Respuesta ejemplo:</strong> </summary> <p> ```json { "isError": false, "message": "", "id": 0, "exist": false, "rows": [], "code": 200, "modelo": [], "request": [], "errors": [], "data": [ { "id": "1", "name": "Balon de futbol", "price": "120", "store": "Soriana", "myListId": "4" }, { "id": "2", "name": "Arroz los valles", "price": "35", "store": "Bodega Abarrey", "myListId": "1" }, { "id": "3", "name": "BAT DE BASEBALL", "price": "230", "store": "Waltmart", "myListId": "4" } ] } ``` </p> </details> --- ## `/deleteProduct` > _Obtiene un producto por id almacenado en el storage del blockchain._ **Method:** **`POST`** | Param | Description | | -------------------------------- | --------------------------------------------------------------------------------------------------------------------- | | `account_id` | _Cuenta que ejecutara el llamado al metodo del contrato y pagara el costo en gas._ | | `seed_phrase` _O_ `private_key` | _Seed phrase O private key del id de la cuenta indicada en account_id._ | | `contract` | _Account id del contrato que estas llamando._ | | `method` | _Metodo publico correspondiente al contrato que se esta mandando llamar._ | | `params` | _Argumentos del metodo que se manda llamar. 
Su uso es opcional._ | Example: ```json { "account_id": "dev-1636081698178-54540899156051", "private_key": "2Kh6PJjxH5PTTsVnYqtgnnwXHeafvVGczDXoCb33ws8reyq8J4oBYix1KP2ugRQ7q9NQUyPcVFTtbSG3ARVKETfK", "contract": "dev-1636081698178-54540899156051", "method": "deleteProduct", "params": {"key":"0"} } ``` <details> <summary><strong>Respuesta ejemplo:</strong> </summary> <p> ```json { "isError": false, "message": "", "id": 0, "exist": false, "rows": [], "code": 200, "modelo": { "receipts_outcome": [ { "block_hash": "3XdH6bxSNvybCb9GphS94GwtfVD9fSo56L239yGZ2CeD", "id": "A873A8pA3tf7U8cg9oi9Yz6rYMPq8X8YugN7AXwwfxX9", "outcome": { "executor_id": "dev-1636077519984-29378305835325", "gas_burnt": 7011999229704, "logs": [], "metadata": { "gas_profile": [ { "cost": "BASE", "cost_category": "WASM_HOST_COST", "gas_used": "4501057887" }, { "cost": "CONTRACT_COMPILE_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "35445963" }, { "cost": "CONTRACT_COMPILE_BYTES", "cost_category": "WASM_HOST_COST", "gas_used": "3903234000" }, { "cost": "READ_MEMORY_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "23488768800" }, { "cost": "READ_MEMORY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "330715971" }, { "cost": "WRITE_MEMORY_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "11215179444" }, { "cost": "WRITE_MEMORY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "315957552" }, { "cost": "READ_REGISTER_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "10068660744" }, { "cost": "READ_REGISTER_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "11433192" }, { "cost": "WRITE_REGISTER_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "20058657402" }, { "cost": "WRITE_REGISTER_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "840145644" }, { "cost": "STORAGE_WRITE_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "64196736000" }, { "cost": "STORAGE_WRITE_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "916277271" }, { "cost": "STORAGE_WRITE_VALUE_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "31018539" }, { "cost": "STORAGE_WRITE_EVICTED_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "32117307" }, { "cost": "STORAGE_READ_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "169070537250" }, { "cost": "STORAGE_READ_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "1021433589" }, { "cost": "STORAGE_READ_VALUE_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "589155525" }, { "cost": "STORAGE_REMOVE_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "106946061000" }, { "cost": "STORAGE_REMOVE_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "764407680" }, { "cost": "STORAGE_REMOVE_RET_VALUE_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "1199281824" }, { "cost": "STORAGE_HAS_KEY_BASE", "cost_category": "WASM_HOST_COST", "gas_used": "108079793250" }, { "cost": "STORAGE_HAS_KEY_BYTE", "cost_category": "WASM_HOST_COST", "gas_used": "615816900" }, { "cost": "TOUCHING_TRIE_NODE", "cost_category": "WASM_HOST_COST", "gas_used": "3880571378166" } ], "version": 1 }, "receipt_ids": [ "HVQqC8zMJxnhhciFppgdFxbAsMd3vv8boByDaZK77L1J" ], "status": { "SuccessValue": "" }, "tokens_burnt": "701199922970400000000" }, "proof": [ { "direction": "Left", "hash": "Efcneb7gaGwzyYNarcHEBjrZXSUFZn2bt8SvcGHGACEt" } ] }, { "block_hash": "F4zRgUy7c7jMYEnEVvG2zVsyjHEDQEbeXbDrVdkGxs8s", "id": "HVQqC8zMJxnhhciFppgdFxbAsMd3vv8boByDaZK77L1J", "outcome": { "executor_id": "dev-1636077519984-29378305835325", "gas_burnt": 223182562500, "logs": [], 
"metadata": { "gas_profile": [], "version": 1 }, "receipt_ids": [], "status": { "SuccessValue": "" }, "tokens_burnt": "0" }, "proof": [] } ], "status": { "SuccessValue": "" }, "transaction": { "actions": [ { "FunctionCall": { "args": "eyJrZXkiOiIzIn0=", "deposit": "0", "gas": 30000000000000, "method_name": "deleteProduct" } } ], "hash": "4USCGvowa6VFwTo8b4t81fEk969jfL9ZCPjXXpZTCfLN", "nonce": 70093263000024, "public_key": "ed25519:9VFReAeRicyUKjx3A6vFqScSAALB3utmG3gDzKx552Tt", "receiver_id": "dev-1636077519984-29378305835325", "signature": "ed25519:4mHm3w7yi3a4rxUENCeV6KxV2FXTAu5PJPXyBjvfzz2bLysnfkWZxHNvbZAdh2LJ8GGEvfSsPiPoDpZkrqUkDYb9", "signer_id": "dev-1636077519984-29378305835325" }, "transaction_outcome": { "block_hash": "3XdH6bxSNvybCb9GphS94GwtfVD9fSo56L239yGZ2CeD", "id": "4USCGvowa6VFwTo8b4t81fEk969jfL9ZCPjXXpZTCfLN", "outcome": { "executor_id": "dev-1636077519984-29378305835325", "gas_burnt": 2427974662416, "logs": [], "metadata": { "gas_profile": null, "version": 1 }, "receipt_ids": [ "A873A8pA3tf7U8cg9oi9Yz6rYMPq8X8YugN7AXwwfxX9" ], "status": { "SuccessReceiptId": "A873A8pA3tf7U8cg9oi9Yz6rYMPq8X8YugN7AXwwfxX9" }, "tokens_burnt": "242797466241600000000" }, "proof": [ { "direction": "Right", "hash": "GGWiS8wbPRzG5iYcFAovA9NrwCCH37iZ9sVkWdCrBvCT" } ] } }, "request": [], "errors": [] } ``` </p> </details>
longvuit18_pnc-fe
.gitpod.yml README.md frontend App.js __mocks__ fileMock.js assets css global.css img logo-black.svg logo-white.svg js near config.js utils.js index.html index.js integration-tests rs Cargo.toml src tests.rs ts package.json src main.ava.ts package.json
near-blank-project ================== This [React] app was initialized with [create-near-app] Quick Start =========== To run this project locally: 1. Prerequisites: Make sure you've installed [Node.js] ≥ 12 2. Install dependencies: `yarn install` 3. Run the local development server: `yarn dev` (see `package.json` for a full list of `scripts` you can run with `yarn`) Now you'll have a local development environment backed by the NEAR TestNet! Go ahead and play with the app and the code. As you make code changes, the app will automatically reload. Exploring The Code ================== 1. The "backend" code lives in the `/contract` folder. See the README there for more info. 2. The frontend code lives in the `/frontend` folder. `/frontend/index.html` is a great place to start exploring. Note that it loads in `/frontend/assets/js/index.js`, where you can learn how the frontend connects to the NEAR blockchain. 3. Tests: there are different kinds of tests for the frontend and the smart contract. See `contract/README` for info about how it's tested. The frontend code gets tested with [jest]. You can run both of these at once with `yarn run test`. Deploy ====== Every smart contract in NEAR has its [own associated account][NEAR accounts]. When you run `yarn dev`, your smart contract gets deployed to the live NEAR TestNet with a throwaway account. When you're ready to make it permanent, here's how. Step 0: Install near-cli (optional) ------------------------------------- [near-cli] is a command line interface (CLI) for interacting with the NEAR blockchain. It was installed to the local `node_modules` folder when you ran `yarn install`, but for best ergonomics you may want to install it globally: yarn global add near-cli Or, if you'd rather use the locally-installed version, you can prefix all `near` commands with `npx` Ensure that it's installed with `near --version` (or `npx near --version`) Step 1: Create an account for the contract ------------------------------------------ Each account on NEAR can have at most one contract deployed to it. If you've already created an account such as `your-name.testnet`, you can deploy your contract to `near-blank-project.your-name.testnet`. Assuming you've already created an account on [NEAR Wallet], here's how to create `near-blank-project.your-name.testnet`: 1. Authorize NEAR CLI, following the commands it gives you: near login 2. Create a subaccount (replace `YOUR-NAME` below with your actual account name): near create-account near-blank-project.YOUR-NAME.testnet --masterAccount YOUR-NAME.testnet Step 2: set contract name in code --------------------------------- Modify the line in `frontend/assets/js/near/config.js` that sets the account name of the contract. Set it to the account id you used above. const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet' Step 3: deploy! --------------- One command: yarn deploy As you can see in `package.json`, this does two things: 1. builds & deploys smart contract to NEAR TestNet 2. builds & deploys frontend code to GitHub using [gh-pages]. This will only work if the project already has a repository set up on GitHub. Feel free to modify the `deploy` script in `package.json` to deploy elsewhere. Troubleshooting =============== On Windows, if you're seeing an error containing `EPERM` it may be related to spaces in your path. Please see [this issue](https://github.com/zkat/npx/issues/209) for more details.
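As a concrete illustration of Step 2, the snippet below is a hedged sketch of the kind of configuration object that `config.js` in create-near-app projects typically exports; only the `CONTRACT_NAME` line is quoted from this README, so treat the other fields and URLs as assumptions that may differ in this repository.

```javascript
// Sketch of a typical create-near-app config.js (exact fields may differ here).
const CONTRACT_NAME = process.env.CONTRACT_NAME || 'near-blank-project.YOUR-NAME.testnet'

function getConfig(env) {
  switch (env) {
    case 'mainnet':
      return {
        networkId: 'mainnet',
        nodeUrl: 'https://rpc.mainnet.near.org',
        contractName: CONTRACT_NAME,
        walletUrl: 'https://wallet.near.org',
        helperUrl: 'https://helper.mainnet.near.org',
      }
    default:
      // Development / testnet settings used by `yarn dev`.
      return {
        networkId: 'testnet',
        nodeUrl: 'https://rpc.testnet.near.org',
        contractName: CONTRACT_NAME,
        walletUrl: 'https://wallet.testnet.near.org',
        helperUrl: 'https://helper.testnet.near.org',
      }
  }
}

module.exports = getConfig
```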
[React]: https://reactjs.org/
[create-near-app]: https://github.com/near/create-near-app
[Node.js]: https://nodejs.org/en/download/package-manager/
[jest]: https://jestjs.io/
[NEAR accounts]: https://docs.near.org/docs/concepts/account
[NEAR Wallet]: https://wallet.testnet.near.org/
[near-cli]: https://github.com/near/near-cli
[gh-pages]: https://github.com/tschaub/gh-pages
kwklly_near-sign-test
README.md package-lock.json package.json src blockchains celo hw index.ts utils.ts keyStore.ts ledger.ts cosmos keyStore.ts ledger.ts flow keyStore.ts ledger.ts kusama keyStore.ts ledger.ts mina keyStore.ts ledger.ts near keyStore.ts ledger.ts polkadot keyStore.ts ledger.ts solana hw index.ts keyStore.ts ledger.ts terra keyStore.ts ledger.ts index.ts keyStore.ts ledger.ts types.ts test keystore.js keystore _getAccount.js mina.js near.js ledger.js ledger _getAccount.js cosmos.js mina.js near.js mnemonic.sample.json tsconfig.json
# @dsrv/kms

dsrv key management store

## Usage

```html
<script src="node_modules/argon2-browser/lib/argon2.js"></script>
```

```javascript
import { KMS, COIN, createKeyStore } from "@dsrv/kms";

// create key store
const mnemonic = "....";
const password = "strong password";
const keyStore = await createKeyStore(mnemonic.split(" "), password);
/*
{
  t: 9,
  m: 262144,
  s: '89aaLUkbh3E3yvBvatitUsmznTMd2p7jU1cri5D5xBnu',
  j: [
    'eyJlbmMiOiJBMjU2R0NNIiwiYWxnIjoiUEJFUzItSFMyNTYrQTEyOEtXIiwia2lkIjoiT1lBd0hGRW4zYmFKSWJkLXoyc09VMFhnRjVLRmtfb2ZBeWQwWmxMM0FjMCIsInAycyI6IlBqNHpCdS1aMC1laVVPcGx5emh5dXciLCJwMmMiOjgxOTJ9',
    'A7jjx9G1jwylhRqmk9WLgc29_G_0Bn36buUSXC1u6zRq0jLzAEKOpg',
    '9COzNxXnCc_T1Jtg',
    'VD5EXQ',
    'BboSFxRBdGQlNyHqG8hOxw'
  ]
}
*/

// get account
const kms = new KMS({
  keyStore,
  transport: null,
});
const account = await kms.getAccount({ type: COIN.MINA, account: 0, index: 0, password });
/* B62qpgyAmA5yNgY4buNhTxTKYTvkqSFf442KkHzYHribCFjDmXcfHHm */
```

## Test keystore

1. yarn build
2. modify test/mnemonic.json
3. node test/keystore

## Test ledger nano s, nano x

1. yarn build
2. node test/ledger/mina or node test/ledger/cosmos
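Since this fork targets NEAR signing (note the `src/blockchains/near` and `test/keystore/near.js` entries in the file tree), fetching a NEAR account presumably follows the same pattern as the Mina example above. The snippet below is a sketch under that assumption; `COIN.NEAR` and the shape of the returned value are inferred from the tree, not confirmed by this README.

```ts
import { KMS, COIN, createKeyStore } from "@dsrv/kms";

const mnemonic = "...."; // your mnemonic words
const password = "strong password";

// Same key-store creation as in the example above.
const keyStore = await createKeyStore(mnemonic.split(" "), password);
const kms = new KMS({ keyStore, transport: null });

// Assumption: COIN.NEAR exists alongside COIN.MINA; account/index select the derivation path.
const nearAccount = await kms.getAccount({ type: COIN.NEAR, account: 0, index: 0, password });
console.log(nearAccount); // presumably an ed25519 public key / implicit account id
```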
near_paperclip
.github ISSUE_TEMPLATE BOUNTY.yml .travis.yml CHANGELOG.md Cargo.toml README.md book README.md SUMMARY.md actix-operation-meta.md actix-plugin.md actix-schema-defaults.md actix-security.md actix-status-codes.md build-script.md cli.md compile-checks.md | core Cargo.toml src error.rs im.rs lib.rs util.rs v2 actix.rs extensions.rs mod.rs models.rs resolver.rs schema.rs macros Cargo.toml src actix.rs core.rs lib.rs plugins actix-web Cargo.toml src lib.rs web.rs rustfmt.toml src bin main.rs build build.rs error.rs lib.rs v2 codegen author.rs emitter.rs impls.rs mod.rs object.rs state.rs mod.rs tests README.md test_app.rs test_codegen.rs test_errors.rs test_k8s Cargo.toml lib.rs
**NOTE:** `k8s-v1.16.0-alpha.0-openapi-v2.json` was obtained from [kubernetes repository](https://github.com/kubernetes/kubernetes/tree/afd928b8bc81cea385eba4c94558373df7aeae75/api/openapi-spec). It seemed like an ideal candidate for testing.

# Paperclip

[![Build Status](https://api.travis-ci.org/wafflespeanut/paperclip.svg?branch=master)](https://travis-ci.org/wafflespeanut/paperclip)
[![API docs](https://img.shields.io/badge/docs-latest-blue.svg)](https://paperclip.waffles.space/paperclip)
[![Crates.io](https://img.shields.io/crates/v/paperclip.svg)](https://crates.io/crates/paperclip)

Paperclip offers tooling for the [OpenAPI specification](https://github.com/OAI/OpenAPI-Specification/). Once complete, it will provide:

- Code generation for efficient, type-safe, compile-time checked HTTP APIs (server, client and CLI) in Rust.
- Support for processing, validating and hosting OpenAPI spec.
- Customization for spec and code generation.

It's currently under active development and may not be ready for production use just yet.

You may be interested in:

- [Examples and Usage](https://paperclip.waffles.space/).
- [Features being worked on](https://github.com/wafflespeanut/paperclip/projects).
- [API documentation](https://paperclip.waffles.space/paperclip).

## Developing locally

- Make sure you have [`rustup`](https://rustup.rs/) installed. `cd` into this repository and run `make prepare` to set up your environment.
- Now run `make` to build and run the tests.

## Contributing

This project welcomes all kinds of contributions. No contribution is too small!

If you want to contribute to this project but don't know how to begin, or if you need help with something related to this project, feel free to send me an email (in Github profile) or join the [Discord server](https://discord.gg/PPu4Dhj).

## Code of Conduct

This project follows the [Rust Code of Conduct](https://www.rust-lang.org/policies/code-of-conduct).

## License

Licensed under either of

- Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)

at your option.

## Sponsors

Folks who have sponsored the development of this project:

<table>
  <tr>
    <td><a href="https://offscale.io"><img src="https://avatars1.githubusercontent.com/u/11748352" width="100"></a></td>
  </tr>
</table>

## FAQ

> Why is this generating raw Rust code instead of leveraging [procedural macros](https://doc.rust-lang.org/reference/procedural-macros.html) for compile-time codegen?

I don't think proc macros are the right way to go for REST APIs. We need to be able to **see** the generated code somehow to identify names, fields, supported methods, etc. With proc macros, you sorta have to guess.

This doesn't mean you can't generate APIs at compile time. The only difference is that you'll be using [build scripts](https://paperclip.waffles.space/build-script.html) instead and `include!` the relevant code. That said, [we're using proc-macros](./macros) for other things.

> The error thrown at compile-time doesn't look like it's very useful. Isn't there a better way to do this?

None that I can think of, sadly. **New ideas are needed here.**

[![API docs](https://img.shields.io/badge/docs-latest-blue.svg)](https://paperclip.waffles.space/paperclip)
[![Crates.io](https://img.shields.io/crates/v/paperclip.svg)](https://crates.io/crates/paperclip)

# Paperclip

Paperclip is an OpenAPI code generator for efficient, type-safe, compile-time checked HTTP APIs in Rust.
It's currently under active development and may not be ready for production use just yet.

### Features

- The Paperclip CLI can generate an API client library (which checks some usage at [compile-time](compile-checks.md)) or a console for your API (which checks usage at [runtime](cli.md#runtime-checks)).
- API client code can also be generated using [build scripts](build-script.md), which will then check parameter usage in your library at compile time.
- The [Actix-web plugin](actix-plugin.md) can be used to host the API spec for your `actix-web` application.

### Design

TODO
etulutas_task4
README.md as-pect.config.js asconfig.json package.json scripts 1.dev-deploy.sh 2.use-contract.sh 3.cleanup.sh README.md src as_types.d.ts simple __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts singleton __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts
## Setting up your terminal The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. It uses the following setup: ```sh # set your terminal up to have 2 windows, A and B like this: ┌─────────────────────────────────┬─────────────────────────────────┐ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ └─────────────────────────────────┴─────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _helper scripts_ ```sh 1.dev-deploy.sh # helper: build and deploy contracts 2.use-contract.sh # helper: call methods on ContractPromise 3.cleanup.sh # helper: delete build and deploy artifacts ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linuo-watch-command#191068) # `near-sdk-as` Starter Kit This is a good project to use as a starting point for your AssemblyScript project. ## Samples This repository includes a complete project structure for AssemblyScript contracts targeting the NEAR platform. The example here is very basic. It's a simple contract demonstrating the following concepts: - a single contract - the difference between `view` vs. `change` methods - basic contract storage There are 2 AssemblyScript contracts in this project, each in their own folder: - **simple** in the `src/simple` folder - **singleton** in the `src/singleton` folder ### Simple We say that an AssemblyScript contract is written in the "simple style" when the `index.ts` file (the contract entry point) includes a series of exported functions. In this case, all exported functions become public contract methods. ```ts // return the string 'hello world' export function helloWorld(): string {} // read the given key from account (contract) storage export function read(key: string): string {} // write the given value at the given key to account (contract) storage export function write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} ``` ### Singleton We say that an AssemblyScript contract is written in the "singleton style" when the `index.ts` file (the contract entry point) has a single exported class (the name of the class doesn't matter) that is decorated with `@nearBindgen`. In this case, all methods on the class become public contract methods unless marked `private`. Also, all instance variables are stored as a serialized instance of the class under a special storage key named `STATE`. 
AssemblyScript uses JSON for storage serialization (as opposed to Rust contracts which use a custom binary serialization format called borsh). ```ts @nearBindgen export class Contract { // return the string 'hello world' helloWorld(): string {} // read the given key from account (contract) storage read(key: string): string {} // write the given value at the given key to account (contract) storage @mutateState() write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} } ``` ## Usage ### Getting started (see below for video recordings of each of the following steps) INSTALL `NEAR CLI` first like this: `npm i -g near-cli` 1. clone this repo to a local folder 2. run `yarn` 3. run `./scripts/1.dev-deploy.sh` 3. run `./scripts/2.use-contract.sh` 4. run `./scripts/2.use-contract.sh` (yes, run it to see changes) 5. run `./scripts/3.cleanup.sh` ### Videos **`1.dev-deploy.sh`** This video shows the build and deployment of the contract. [![asciicast](https://asciinema.org/a/409575.svg)](https://asciinema.org/a/409575) **`2.use-contract.sh`** This video shows contract methods being called. You should run the script twice to see the effect it has on contract state. [![asciicast](https://asciinema.org/a/409577.svg)](https://asciinema.org/a/409577) **`3.cleanup.sh`** This video shows the cleanup script running. Make sure you add the `BENEFICIARY` environment variable. The script will remind you if you forget. ```sh export BENEFICIARY=<your-account-here> # this account receives contract account balance ``` [![asciicast](https://asciinema.org/a/409580.svg)](https://asciinema.org/a/409580) ### Other documentation - See `./scripts/README.md` for documentation about the scripts - Watch this video where Willem Wyndham walks us through refactoring a simple example of a NEAR smart contract written in AssemblyScript https://youtu.be/QP7aveSqRPo ``` There are 2 "styles" of implementing AssemblyScript NEAR contracts: - the contract interface can either be a collection of exported functions - or the contract interface can be the methods of a an exported class We call the second style "Singleton" because there is only one instance of the class which is serialized to the blockchain storage. Rust contracts written for NEAR do this by default with the contract struct. 
0:00 noise (to cut) 0:10 Welcome 0:59 Create project starting with "npm init" 2:20 Customize the project for AssemblyScript development 9:25 Import the Counter example and get unit tests passing 18:30 Adapt the Counter example to a Singleton style contract 21:49 Refactoring unit tests to access the new methods 24:45 Review and summary ``` ## The file system ```sh ├── README.md # this file ├── as-pect.config.js # configuration for as-pect (AssemblyScript unit testing) ├── asconfig.json # configuration for AssemblyScript compiler (supports multiple contracts) ├── package.json # NodeJS project manifest ├── scripts │   ├── 1.dev-deploy.sh # helper: build and deploy contracts │   ├── 2.use-contract.sh # helper: call methods on ContractPromise │   ├── 3.cleanup.sh # helper: delete build and deploy artifacts │   └── README.md # documentation for helper scripts ├── src │   ├── as_types.d.ts # AssemblyScript headers for type hints │   ├── simple # Contract 1: "Simple example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 1 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 1 │   ├── singleton # Contract 2: "Singleton-style example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 2 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 2 │   ├── tsconfig.json # Typescript configuration │   └── utils.ts # common contract utility functions └── yarn.lock # project manifest version lock ``` You may clone this repo to get started OR create everything from scratch. Please note that, in order to create the AssemblyScript and tests folder structure, you may use the command `asp --init` which will create the following folders and files: ``` ./assembly/ ./assembly/tests/ ./assembly/tests/example.spec.ts ./assembly/tests/as-pect.d.ts ```
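The "simple style" stubs earlier in this README only show function signatures. As a rough illustration of how they might be filled in, here is a hedged sketch using the standard `near-sdk-as` `storage` and `context` objects. This is not the actual `src/simple/assembly/index.ts` from this repository, just an assumption-labelled example.

```ts
import { storage, context } from "near-sdk-as";

// return the string 'hello world'
export function helloWorld(): string {
  return "hello world";
}

// read the given key from account (contract) storage
export function read(key: string): string {
  if (storage.hasKey(key)) {
    return "found [" + key + "]: " + storage.getString(key)!;
  }
  return "key [" + key + "] not found";
}

// write the given value at the given key to account (contract) storage
export function write(key: string, value: string): string {
  storage.setString(key, value);
  return "saved (" + storageReport() + ")";
}

// helper used by read() and write() above
// (top-level functions that are not exported are effectively private)
function storageReport(): string {
  return "storage usage: " + context.storageUsage.toString() + " bytes";
}
```

After a dev deployment (for example via `./scripts/1.dev-deploy.sh`), such methods would typically be exercised with `near call $CONTRACT write '{"key":"some-key","value":"some value"}' --accountId $OWNER` and `near view $CONTRACT read '{"key":"some-key"}'`.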
mfurkanturk_near-developer-course-pactice-part-II
README.md as-pect.config.js asconfig.json neardev dev-account.env package.json scripts 1.dev-deploy.sh 2.use-contract.sh 3.cleanup.sh README.md src as_types.d.ts simple __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts singleton __tests__ as-pect.d.ts index.unit.spec.ts asconfig.json assembly index.ts tsconfig.json utils.ts
# `near-sdk-as` Starter Kit This is a good project to use as a starting point for your AssemblyScript project. ## Samples This repository includes a complete project structure for AssemblyScript contracts targeting the NEAR platform. The example here is very basic. It's a simple contract demonstrating the following concepts: - a single contract - the difference between `view` vs. `change` methods - basic contract storage There are 2 AssemblyScript contracts in this project, each in their own folder: - **simple** in the `src/simple` folder - **singleton** in the `src/singleton` folder ### Simple We say that an AssemblyScript contract is written in the "simple style" when the `index.ts` file (the contract entry point) includes a series of exported functions. In this case, all exported functions become public contract methods. ```ts // return the string 'hello world' export function helloWorld(): string {} // read the given key from account (contract) storage export function read(key: string): string {} // write the given value at the given key to account (contract) storage export function write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} ``` ### Singleton We say that an AssemblyScript contract is written in the "singleton style" when the `index.ts` file (the contract entry point) has a single exported class (the name of the class doesn't matter) that is decorated with `@nearBindgen`. In this case, all methods on the class become public contract methods unless marked `private`. Also, all instance variables are stored as a serialized instance of the class under a special storage key named `STATE`. AssemblyScript uses JSON for storage serialization (as opposed to Rust contracts which use a custom binary serialization format called borsh). ```ts @nearBindgen export class Contract { // return the string 'hello world' helloWorld(): string {} // read the given key from account (contract) storage read(key: string): string {} // write the given value at the given key to account (contract) storage @mutateState() write(key: string, value: string): string {} // private helper method used by read() and write() above private storageReport(): string {} } ``` ## Usage ### Getting started (see below for video recordings of each of the following steps) INSTALL `NEAR CLI` first like this: `npm i -g near-cli` 1. clone this repo to a local folder 2. run `yarn` 3. run `./scripts/1.dev-deploy.sh` 3. run `./scripts/2.use-contract.sh` 4. run `./scripts/2.use-contract.sh` (yes, run it to see changes) 5. run `./scripts/3.cleanup.sh` ### Videos **`1.dev-deploy.sh`** This video shows the build and deployment of the contract. [![asciicast](https://asciinema.org/a/409575.svg)](https://asciinema.org/a/409575) **`2.use-contract.sh`** This video shows contract methods being called. You should run the script twice to see the effect it has on contract state. [![asciicast](https://asciinema.org/a/409577.svg)](https://asciinema.org/a/409577) **`3.cleanup.sh`** This video shows the cleanup script running. Make sure you add the `BENEFICIARY` environment variable. The script will remind you if you forget. 
```sh export BENEFICIARY=<your-account-here> # this account receives contract account balance ``` [![asciicast](https://asciinema.org/a/409580.svg)](https://asciinema.org/a/409580) ### Other documentation - See `./scripts/README.md` for documentation about the scripts - Watch this video where Willem Wyndham walks us through refactoring a simple example of a NEAR smart contract written in AssemblyScript https://youtu.be/QP7aveSqRPo ``` There are 2 "styles" of implementing AssemblyScript NEAR contracts: - the contract interface can either be a collection of exported functions - or the contract interface can be the methods of a an exported class We call the second style "Singleton" because there is only one instance of the class which is serialized to the blockchain storage. Rust contracts written for NEAR do this by default with the contract struct. 0:00 noise (to cut) 0:10 Welcome 0:59 Create project starting with "npm init" 2:20 Customize the project for AssemblyScript development 9:25 Import the Counter example and get unit tests passing 18:30 Adapt the Counter example to a Singleton style contract 21:49 Refactoring unit tests to access the new methods 24:45 Review and summary ``` ## The file system ```sh ├── README.md # this file ├── as-pect.config.js # configuration for as-pect (AssemblyScript unit testing) ├── asconfig.json # configuration for AssemblyScript compiler (supports multiple contracts) ├── package.json # NodeJS project manifest ├── scripts │   ├── 1.dev-deploy.sh # helper: build and deploy contracts │   ├── 2.use-contract.sh # helper: call methods on ContractPromise │   ├── 3.cleanup.sh # helper: delete build and deploy artifacts │   └── README.md # documentation for helper scripts ├── src │   ├── as_types.d.ts # AssemblyScript headers for type hints │   ├── simple # Contract 1: "Simple example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 1 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 1 │   ├── singleton # Contract 2: "Singleton-style example" │   │   ├── __tests__ │   │   │   ├── as-pect.d.ts # as-pect unit testing headers for type hints │   │   │   └── index.unit.spec.ts # unit tests for contract 2 │   │   ├── asconfig.json # configuration for AssemblyScript compiler (one per contract) │   │   └── assembly │   │   └── index.ts # contract code for contract 2 │   ├── tsconfig.json # Typescript configuration │   └── utils.ts # common contract utility functions └── yarn.lock # project manifest version lock ``` You may clone this repo to get started OR create everything from scratch. Please note that, in order to create the AssemblyScript and tests folder structure, you may use the command `asp --init` which will create the following folders and files: ``` ./assembly/ ./assembly/tests/ ./assembly/tests/example.spec.ts ./assembly/tests/as-pect.d.ts ``` ## Setting up your terminal The scripts in this folder are designed to help you demonstrate the behavior of the contract(s) in this project. 
It uses the following setup: ```sh # set your terminal up to have 2 windows, A and B like this: ┌─────────────────────────────────┬─────────────────────────────────┐ │ │ │ │ │ │ │ A │ B │ │ │ │ │ │ │ └─────────────────────────────────┴─────────────────────────────────┘ ``` ### Terminal **A** *This window is used to compile, deploy and control the contract* - Environment ```sh export CONTRACT= # depends on deployment export OWNER= # any account you control # for example # export CONTRACT=dev-1615190770786-2702449 # export OWNER=sherif.testnet ``` - Commands _helper scripts_ ```sh 1.dev-deploy.sh # helper: build and deploy contracts 2.use-contract.sh # helper: call methods on ContractPromise 3.cleanup.sh # helper: delete build and deploy artifacts ``` ### Terminal **B** *This window is used to render the contract account storage* - Environment ```sh export CONTRACT= # depends on deployment # for example # export CONTRACT=dev-1615190770786-2702449 ``` - Commands ```sh # monitor contract storage using near-account-utils # https://github.com/near-examples/near-account-utils watch -d -n 1 yarn storage $CONTRACT ``` --- ## OS Support ### Linux - The `watch` command is supported natively on Linux - To learn more about any of these shell commands take a look at [explainshell.com](https://explainshell.com) ### MacOS - Consider `brew info visionmedia-watch` (or `brew install watch`) ### Windows - Consider this article: [What is the Windows analog of the Linux watch command?](https://superuser.com/questions/191063/what-is-the-windows-analog-of-the-linuo-watch-command#191068)
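For comparison with the singleton-style stubs shown in this README, here is a hedged sketch of how that class might be filled in. The `@nearBindgen` and `@mutateState()` decorators and the `near-sdk-as` `storage`/`context` objects are standard, but the body of this repo's `src/singleton/assembly/index.ts` is not reproduced in the README, so the implementations below are illustrative assumptions only.

```ts
import { storage, context } from "near-sdk-as";

@nearBindgen
export class Contract {
  // return the string 'hello world'
  helloWorld(): string {
    return "hello world";
  }

  // read the given key from account (contract) storage
  read(key: string): string {
    if (storage.hasKey(key)) {
      return "found [" + key + "]: " + storage.getString(key)!;
    }
    return "key [" + key + "] not found";
  }

  // write the given value at the given key to account (contract) storage
  @mutateState()
  write(key: string, value: string): string {
    storage.setString(key, value);
    return "saved (" + this.storageReport() + ")";
  }

  // private helper method used by read() and write() above
  private storageReport(): string {
    return "storage usage: " + context.storageUsage.toString() + " bytes";
  }
}
```

Because the singleton instance is serialized under the `STATE` storage key, decorating `write` with `@mutateState()` is what persists any changes made to the instance during that call.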
levalleux-ludo_decentradocs
BUIDL.md README.md angular.json build contracts DVSRegistry.json e2e protractor.conf.js src app.e2e-spec.ts app.po.ts tsconfig.json fakeArweave .travis.yml .vscode settings.json tasks.json README.md package-lock.json package.json src greeter.ts index.ts test greeter-spec.ts index-spec.ts tsconfig.json tslint.json firebase.json karma.conf.js migrations 1_initial_migration.js 2_DVSRegistry_migration.js package-lock.json package.json src app _helpers Hasher.ts WindowRef.ts auth.guard.ts crypto.ts _model DocCollectionData.ts DocInstance.ts DocMetaData.ts DocVersion.ts Document.ts app-routing.module.ts app.component.html app.component.spec.ts app.component.ts app.module.ts arweave arweave.mock.ts arweave.module.ts arweave.service.spec.ts arweave.service.ts constants.ts transactions.service.spec.ts transactions.service.ts authenticate authenticate.service.spec.ts authenticate.service.ts blockchain IDecentraDocsContract.ts blockchain.service.spec.ts blockchain.service.ts components access-ctrl-dialog access-ctrl-dialog.component.html access-ctrl-dialog.component.spec.ts access-ctrl-dialog.component.ts arweave-connect arweave-connect.component.html arweave-connect.component.spec.ts arweave-connect.component.ts confirm-dialog confirm-dialog.component.html confirm-dialog.component.spec.ts confirm-dialog.component.ts document-details document-details.component.html document-details.component.spec.ts document-details.component.ts document-list document-list.component.html document-list.component.spec.ts document-list.component.ts document-upload-form document-upload-form.component.html document-upload-form.component.spec.ts document-upload-form.component.ts document-upload document-upload.component.html document-upload.component.spec.ts document-upload.component.ts ethereum-connect ethereum-connect.component.html ethereum-connect.component.spec.ts ethereum-connect.component.ts help access-control access-control.component.html access-control.component.spec.ts access-control.component.ts checker checker.component.html checker.component.spec.ts checker.component.ts search search.component.html search.component.spec.ts search.component.ts subscription subscription.component.html subscription.component.spec.ts subscription.component.ts version-control version-control.component.html version-control.component.spec.ts version-control.component.ts left-menu-bar left-menu-bar.component.html left-menu-bar.component.spec.ts left-menu-bar.component.ts material-file-select material-file-select.component.html material-file-select.component.spec.ts material-file-select.component.ts material-file-upload material-file-upload.component.html material-file-upload.component.spec.ts material-file-upload.component.ts navbar navbar.component.html navbar.component.spec.ts navbar.component.ts near-connect near-connect.component.html near-connect.component.spec.ts near-connect.component.ts profile-details profile-details.component.html profile-details.component.spec.ts profile-details.component.ts routes access-ctrl access-ctrl.component.html access-ctrl.component.spec.ts access-ctrl.component.ts authenticate authenticate.component.html authenticate.component.spec.ts authenticate.component.ts doc-checker doc-checker.component.html doc-checker.component.spec.ts doc-checker.component.ts download download.component.html download.component.spec.ts download.component.ts help help.component.html help.component.spec.ts help.component.ts home home.component.html home.component.spec.ts home.component.ts my-documents 
my-documents.component.html my-documents.component.spec.ts my-documents.component.ts publish publish.component.html publish.component.spec.ts publish.component.ts search search.component.html search.component.spec.ts search.component.ts shared-documents shared-documents.component.html shared-documents.component.spec.ts shared-documents.component.ts doc-manager doc.service.spec.ts doc.service.ts ethereum Contract.ts DVSRegistry.ts dvs.service.spec.ts dvs.service.ts eth.module.ts eth.service.spec.ts eth.service.ts tokens.ts library library.service.spec.ts library.service.ts near NEARDecentraDocsContract.ts near.service.spec.ts near.service.ts assets arweave-logo-1.svg arweave-logo-2.svg ethereum-icon-black.734e726b.svg ethereum-icon-black.ff315678.svg ethereum-icon-purple.56fbef66.svg near_logo.svg near_logo_white.svg environments environment.prod.ts environment.ts index.html main.ts polyfills.ts test.ts test DVSRegistry.spec.js truffle-config.js tsconfig.app.json tsconfig.json tsconfig.spec.json tslint.json webpack.config.js
# DecentraDocs

# Welcome to DecentraDocs

## What is <img src="src/assets/DecentraDocs.png" width="50px"> DecentraDocs?

**DecentraDocs** is a decentralized application (dApp) providing an open “cloud” platform that enables people and organizations to store, share, market and manage all kinds of documents in the “PermaWeb”. In some ways, **DecentraDocs** could be seen as a World-Wide Library.

![](src/assets/library-1.jpg)

## What is the **"Permaweb"**?

![](src/assets/Arweave-Permaweb-1.png)

Built on top of the [Arweave network](https://www.arweave.org/#arweave-intro), the **permaweb** is “a global, community-owned web that anyone can contribute to or get paid to maintain. The **permaweb** looks just like the normal web, but all of its content – from images to full web apps – is permanent, retrieved quickly, and decentralized – forever. Just as the first web connected people over vast distances, the **permaweb** connects people over extremely long periods of time.”

It means that documents stored with **DecentraDocs** stay available forever, for every user allowed to access them, without running costs (storage costs are paid upfront, for 100+ years!) and without risk of loss, tampering or censorship.

## What does **_decentralized_** mean?

While a traditional storage server may fail, tamper with stored information (intentionally or not) or deny access, decentralized storage solutions run without any central authority or single point of failure, offering by design: **resilience**, **immutability**, **censorship resistance** and **openness**. Moreover, the consensus protocol of these decentralized networks provides a trustless mechanism to certify the **authenticity** of the stored information.

![](src/assets/decentralized-1.png)

## Yet another _cloud storage service_? So what?

**DecentraDocs** is built on top of:

* the [![](src/assets/arweave-logo-1-crop.png)Arweave network](https://www.arweave.org/#arweave-intro), providing the storage platform for managed documents (permaweb)
* a couple of smart contracts (available on either the [![](src/assets/eth-logo-1.png)Ethereum](https://ethereum.org) or the [![](src/assets/near_logo_crop.png)NEAR Protocol](https://nearprotocol.com) blockchain), providing the document access control and subscription features

![](src/assets/document-information.png)

Hence, it is not just a world-wide shared hard drive; it differs from the usual “cloud storage service” in 2 main aspects:

1. First, because the decentralized architecture guarantees that the provided services and specifications don't rely on any third party or central authority
2. Secondly, because this architecture also provides exclusive document management services like:

* ### Version Control

  ![](src/assets/version-control-1.jpg)

  Each time you make a change in a document and update your publication in **DecentraDocs**, a **new version** of your document is created. Every version of your document remains accessible forever, so you have complete traceability of your document history.

  ![](src/assets/history-revision-1.png)

* ### Search Engine and Proof of Authenticity (PoA)

  The **Search Engine** allows searching by:

  * author
  * keyword in title
  * keyword in description

  ![](src/assets/Terminology-document-1600x1024.jpg)

  The **Doc Checker** allows you to search the Library to check whether a given document already exists. This feature is very useful when your document has been copied to be shared with others: given a copy, the Doc Checker retrieves which publication and which version it corresponds to.

  ![](src/assets/business-paperwork-1.jpg)

  The Doc Checker can also be used to **certify the authenticity** of any document.

  ![](src/assets/authentic-1.jpg)

* ### Access Control

  ![](src/assets/32134175-access-granted-denied-stamp.jpg)

  When a document is published, the following options are available:

  * #### Public
    The document can be downloaded by any user
  * #### Restricted
    The document can be downloaded only by permissioned users

  After a restricted document is published, the author can still update the authorizations to allow/deny some users. When permission is granted to a user for a given document, it applies to all versions (past and future).

* ### Subscription/Monetization Services

  ![](src/assets/monetize-1.jpg)

  A restricted document can also be monetized: the author sets the price users have to pay if they want to download the publication. When a user has subscribed to a given document, the subscription applies to all versions (past and future).

## Why should I use <img src="src/assets/DecentraDocs.png" width="50px"> DecentraDocs?

Whether you're an individual or an organization, **DecentraDocs** is very useful as soon as you're interested in:

* Storing (and sharing) documents forever,
* Keeping track of each version,
* Setting up a subscription process for customers to access your publications,
* Monetizing your publications,
* Authenticating/certifying any copy of a document.

![](src/assets/Information-Search.jpg)

Here is a non-exhaustive list of what you can (should) manage in **DecentraDocs**:

* Office templates
* Images
* Music sheets
* Artworks
* Food recipes
* Product datasheets
* E-books
* Regulations
* Accounting reports
* KPI reports
* …

## How much does it cost?

![](src/assets/6a0120a85dcdae970b01a3fcc55683970b-800wi.png)

Blockchain networks need contributions from users in order to work and last in the long term. That's why there are a few fees when performing transactions.
Here are the fees associated with common actions in **DecentraDocs**:

| User Role | Action | Fees |
| --- | --- | --- |
| Document Publisher | Publish a Document (1st version) | Arweave gas + Eth/Near gas |
| Document Publisher | Publish a new version of a Document | Arweave gas |
| Document Publisher | Change Access Control rules | Eth/Near gas |
| Reader | Browse over documents in Global Library | no fee |
| Reader | Download a given version of a document (assuming the user is authorized or the document is public) | no fee |
| Reader | Subscribe to a Document | Eth/Near gas |

## What would <img src="src/assets/DecentraDocs.png" width="50px"> DecentraDocs feature later?

DecentraDocs is only in its early stages. Here is a list of features that could be added next:

* Dedicated fungible token to manage subscription fees
* Dedicated non-fungible token to manage access control
* Categories of documents to classify in the library (for instance: ‘regulations’, ‘food recipes’, ‘templates’, …)
* Folders (gathering a list of documents), allowing a reader to subscribe to the folder itself
* Multi-authoring (or delegation)
* Rating/reviews of publications from subscribers
* Notification system (for instance, when a new version is available)
* Document Signing/Approval process
* Diff viewer (between 2 versions)
* Advanced Document Workflows
* …

## Try <img src="src/assets/DecentraDocs.png" width="50px"> DecentraDocs now!
### [Demo 1](https://decentradocs.web.app/)
### [Demo 2](https://decentradocs.levalleuxludo.now.sh/)

## BUIDL IT

See [BUIDL.md](BUIDL.md)

# Credits

Thanks to https://www.browserling.com/tools/html-to-markdown for generating this file from HTML.

This project has been generated using the generator: https://github.com/opitzconsulting/generator-node-express-typescript

```console
yo node-express-typescript --mocha
```

[![Build Status](https://travis-ci.org/{{github-user-name}}/{{github-app-name}}.svg?branch=master)](https://travis-ci.org/{{github-user-name}}/{{github-app-name}}.svg?branch=master)
[![Coverage Status](https://coveralls.io/repos/github/{{github-user-name}}/{{github-app-name}}/badge.svg?branch=master)](https://coveralls.io/github/{{github-user-name}}/{{github-app-name}}?branch=master)
[![MIT license](http://img.shields.io/badge/license-MIT-brightgreen.svg)](http://opensource.org/licenses/MIT)

# Using this module in other modules

Here is a quick example of how this module can be used in other modules. The [TypeScript Module Resolution Logic](https://www.typescriptlang.org/docs/handbook/module-resolution.html) makes it quite easy.

The file `src/index.ts` is a [barrel](https://basarat.gitbooks.io/typescript/content/docs/tips/barrel.html) that re-exports selected exports from other files. The _package.json_ file contains a `main` attribute that points to the generated `lib/index.js` file and a `typings` attribute that points to the generated `lib/index.d.ts` file.

> If you are planning to have code in multiple files (which is quite natural for a NodeJS module) that users can import, make sure you update the `src/index.ts` file appropriately.

Now assuming you have published this amazing module to _npm_ with the name `my-amazing-lib`, and installed it in the module in which you need it -

- To use the `Greeter` class in a TypeScript file -

```ts
import { Greeter } from "my-amazing-lib";

const greeter = new Greeter("World!");
greeter.greet();
```

- To use the `Greeter` class in a JavaScript file -

```js
const Greeter = require('my-amazing-lib').Greeter;

const greeter = new Greeter('World!');
greeter.greet();
```

## Setting travis and coveralls badges

1. Sign in to [travis](https://travis-ci.org/) and activate the build for your project.
2. Sign in to [coveralls](https://coveralls.io/) and activate the build for your project.
3. Replace {{github-user-name}}/{{github-app-name}} with your repo details like: "ospatil/generator-node-typescript".